
Human attention model for semantic scene analysis in movies

Anan Liu, Yongdong Zhang, Yan Song, Dongming Zhang, Jintao Li, Zhaoxuan Yang
2008 2008 IEEE International Conference on Multimedia and Expo  
In this paper, we specifically propose the Weber-Fechner Law-based human attention model for semantic scene analysis in movies.  ...  Large-scale experiments demonstrate the effectiveness and generality of the proposed human attention model for movie analysis.  ...  Semantic scene detection: In our system, we realize the detection of action, war, dialogue, sex and music scenes, which are useful for movie editing and viewers' navigation.  ... 
doi:10.1109/icme.2008.4607724 dblp:conf/icmcs/LiuZSZLY08 fatcat:qno32ggjmfhi7krswtc5pameoq
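The snippet above refers to a Weber-Fechner Law-based attention model. As a generic reminder only, and not the paper's actual model, the Weber-Fechner relation says perceived magnitude grows logarithmically with stimulus intensity; the constants `k` and `s0` below are illustrative assumptions.

```python
import math

def weber_fechner(stimulus: float, s0: float = 1.0, k: float = 1.0) -> float:
    """Perceived magnitude p = k * ln(S / S0) for a stimulus S above the threshold S0."""
    if stimulus <= s0:
        return 0.0  # below the detection threshold nothing is perceived
    return k * math.log(stimulus / s0)

# Doubling the stimulus adds a constant increment to the perceived magnitude.
print(weber_fechner(2.0), weber_fechner(4.0), weber_fechner(8.0))
```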

Computational media aesthetics: finding meaning beautiful

F. Nack, C. Dorai, S. Venkatesh
2001 IEEE Multimedia  
To create tools for automatically understanding video, we need to be able to interpret the data with its maker's eye. As an example, consider our software model of an expressive element in movies: tempo.  ...  With our software model, we define and derive tempo plots for full-length movies.  ...  Acknowledgments: We thank Brett Adams for his help in shaping our ideas and realizing them in concrete algorithms and a system implementation.  ... 
doi:10.1109/93.959093 fatcat:yhkvfl4jsfcfhnrqckpwnsdspy

A hierarchical framework for movie content analysis: Let computers watch films like humans

Anan Liu, Sheng Tang, Yongdong Zhang, Yan Song, Jintao Li, Zhaoxuan Yang
2008 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops  
In this paper, we specifically propose a hierarchical framework for movie content analysis.  ...  The promising results of users' subjective assessment indicate that the proposed framework is applicable for automatic analysis of movie content by computers.  ...  Besides, in [27], by integrating film-making and psychology rules, Hari Sundaram and Shih-Fu Chang presented an innovative method for film segmentation, which is the basis for semantic analysis of movies  ... 
doi:10.1109/cvprw.2008.4563040 dblp:conf/cvpr/LiuTZSLY08 fatcat:c324julaljdndnxjfziue6ujki

Non-sequential decomposition, composition and presentation of multimedia content

Manfred del Fabro
2011 ACM SIGMultimedia Records  
This thesis was submitted in printed and electronic form. I confirm that the content of the digital version is identical to that of the printed version.  ...  It consists of a content part that models videos in terms of scenes and shots and an event model that defines different events and how they may be related to each other.  ...  Fades and dark areas are detected, as these two punctuations often indicate a scene change. In addition, a tempo analysis is performed.  ... 
doi:10.1145/2132503.2132506 fatcat:auugpu2pznbq7jskuuo627treu
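The excerpt above notes that fades and dark areas are detected as scene-change cues. A minimal sketch of such a detector, assuming OpenCV is available and using a hypothetical luminance threshold not taken from the thesis:

```python
import cv2
import numpy as np

def dark_frame_indices(video_path: str, threshold: float = 20.0):
    """Yield indices of frames whose mean grayscale intensity falls below `threshold`,
    a crude indicator of fades and dark areas that often accompany scene changes."""
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if float(np.mean(gray)) < threshold:
            yield index
        index += 1
    cap.release()
```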

Automatic Generation of Movie Trailers using Ontologies

The SVP Group, Mediarep.Org
2021 Image  
With the advances in digital audio and video analysis, automatic movie summarization has become an important field of research.  ...  The extraction of features from movies using state-of-the-art image and audio processing techniques builds the foundation for the selection of meaningful and usable material, which is re-assembled according  ...  In section 5 we present the application of our system to generate trailers for current Hollywood action movies along with an evaluation of the corresponding output in section 6.  ... 
doi:10.25969/mediarep/16750 fatcat:gceesj2dt5eynh24evr65s3gou

Indexing of Fictional Video Content for Event Detection and Summarisation

Bart Lehane, Noel E. O'Connor, Hyowon Lee, Alan F. Smeaton
2007 EURASIP Journal on Image and Video Processing  
This paper presents an approach to movie video indexing that utilises audiovisual analysis to detect important and meaningful temporal video segments, that we term events.  ...  A user experiment designed to evaluate the usefulness of an event-based structure for both searching and browsing movie archives is also described and the results indicate the usefulness of the proposed  ...  ACKNOWLEDGMENT The research leading to this paper was partly supported by Enterprise Ireland and by Science Foundation Ireland under Grant no. 03/IN.3/I361.  ... 
doi:10.1155/2007/14615 fatcat:q6q2gvgkgjeqnki3ve4tpfhvwu

Indexing of Fictional Video Content for Event Detection and Summarisation

Bart Lehane, Noel E. O'Connor, Hyowon Lee, Alan F. Smeaton
2007 EURASIP Journal on Image and Video Processing  
This paper presents an approach to movie video indexing that utilises audiovisual analysis to detect important and meaningful temporal video segments, that we term events.  ...  A user experiment designed to evaluate the usefulness of an event-based structure for both searching and browsing movie archives is also described and the results indicate the usefulness of the proposed  ...  ACKNOWLEDGMENT The research leading to this paper was partly supported by Enterprise Ireland and by Science Foundation Ireland under Grant no. 03/IN.3/I361.  ... 
doi:10.1186/1687-5281-2007-014615 fatcat:7cei3qkv7jdcjiezwvw4zyykte

Cross-Modal Analysis of Audio-Visual Film Montage

Matthias Zeppelzauer, Dalibor Mitrovic, Christian Breiteneder
2011 2011 Proceedings of 20th International Conference on Computer Communications and Networks (ICCCN)  
Synchronous montage helps to increase tension and tempo in a scene and highlights important events in the story.  ...  This property is currently not exploited in automated indexing, annotation, and summarization of movies.  ...  Synchronous audiovisual montage enables the director to accentuate important events and actions and to increase tension and tempo (e.g. in action scenes and dialogue sequences) [1] .  ... 
doi:10.1109/icccn.2011.6005782 dblp:conf/icccn/ZeppelzauerMB11 fatcat:bijgniljsfew5gtuorbezjdaxe

Spott

Florian Vandecasteele, Karel Vandenbroucke, Dimitri Schuurman, Steven Verstockt
2017 ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP)  
Spott is an innovative second screen mobile multimedia application which offers viewers relevant information on objects (e.g., clothing, furniture, food) they see and like on their television screens.  ...  In line with the current views on innovation management, the technological excellence of the Spott application is coupled with iterative user involvement throughout the entire development process.  ...  ACKNOWLEDGMENTS The research activities as described in this paper were funded by Ghent University, iMinds, the Institute for the Promotion of Innovation by Science and Technology in Flanders (IWT) and  ... 
doi:10.1145/3092834 fatcat:gdbcmzof6zedzbomfsgh4iauy4

Study of User Reuse Intention for Gamified Interactive Movies upon Flow Experience

Han Zhe, Lee Hyun-Seok
2020 Journal of multimedia information system  
In this case, users are allowed to become actively involved in the scene as "players" and to manage the tempo of the story to some extent, which makes users pleased to watch interactive movies repeatedly to try  ...  an empirical analysis model for users' reuse intention covering cognition, design, attitude, and emotional experience, and conducts an empirical analysis on 425 valid samples using SPSS 22 and Amos 23  ...  Acknowledgement: This work was supported by Dongseo University, "Dongseo Cluster Project" Research Fund of 2020 (DSU20200010).  ... 
doi:10.33851/jmis.2020.7.4.281 fatcat:sqczwbrhnncqnoyusf3kjy7rty

ADVISOR

Rajiv Ratn Shah, Yi Yu, Roger Zimmermann
2014 Proceedings of the ACM International Conference on Multimedia - MM '14  
Second, we perform heuristic rankings to fuse the predicted confidence scores of multiple models, and third we customize the video soundtrack recommendation functionality to make it compatible with mobile  ...  A series of extensive experiments confirm that our approach performs well and recommends appealing soundtracks for UGVs to enhance the viewing experience.  ...  the Centre of Social Media Innovations for Communities (COSMIC).  ... 
doi:10.1145/2647868.2654919 dblp:conf/mm/ShahYZ14 fatcat:w6miwqxozbfidj5slk2zvdeb74
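The ADVISOR snippet mentions fusing the predicted confidence scores of multiple models by heuristic ranking. The sketch below is a generic weighted late fusion, not the paper's method; the model names, weights, and track identifiers are hypothetical.

```python
def fuse_scores(model_scores: dict[str, dict[str, float]],
                weights: dict[str, float]) -> list[tuple[str, float]]:
    """Combine per-candidate confidence scores from several models with a weighted
    sum and return the candidates ranked by the fused score."""
    fused: dict[str, float] = {}
    for model, scores in model_scores.items():
        w = weights.get(model, 1.0)
        for candidate, score in scores.items():
            fused[candidate] = fused.get(candidate, 0.0) + w * score
    return sorted(fused.items(), key=lambda item: item[1], reverse=True)

# Hypothetical scores from two recommenders for three soundtrack candidates.
print(fuse_scores(
    {"scene_model": {"trackA": 0.7, "trackB": 0.5, "trackC": 0.2},
     "mood_model": {"trackA": 0.4, "trackB": 0.9, "trackC": 0.3}},
    {"scene_model": 0.6, "mood_model": 0.4},
))
```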

"You Tube and I Find"—Personalizing Multimedia Content Access

S. Venkatesh, B. Adams, Dinh Phung, C. Dorai, R.G. Farrell, L. Agnihotri, N. Dimitrova
2008 Proceedings of the IEEE  
Ravin, and W. Teiken at IBM Research for their collaboration and active participation in the MAGIC project.  ...  A good example of this is the description of tempo of a movie. Motion in a sequence of shots portrayed in a certain way indicates the pace of the movie, for example, sluggish, steady, or fast.  ...  that follows running") and video syntax rules (For example, "Shoot events that catalyze other actions in extreme close up then get a shot of the action triggered").  ... 
doi:10.1109/jproc.2008.916378 fatcat:3ilsibo5qjaudovr5euid56z3e

Associating characters with events in films

Andrew Salway, Bart Lehane, Noel E. O'Connor
2007 Proceedings of the 6th ACM international conference on Image and video retrieval - CIVR '07  
In an evaluation with 215 events from 11 films, the technique performed the character detection task with Precision = 93% and Recall = 71%.  ...  The technique fuses the results of event detection based on audiovisual features with the inferred on-screen presence of characters, based on an analysis of an audio description script.  ...  The support of the Irish Research Council for Science, Engineering and Technology is gratefully acknowledged.  ... 
doi:10.1145/1282280.1282354 dblp:conf/civr/SalwayLO07 fatcat:23stxbs4djcmrdplo7ij7fm4za
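For reference, the Precision = 93% and Recall = 71% figures quoted above follow the standard definitions; the counts in the example below are made up solely to show the arithmetic and are not taken from the paper's evaluation.

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical counts chosen to land near the reported 93% precision and 71% recall.
print(precision_recall(tp=142, fp=11, fn=58))  # -> (0.928..., 0.71)
```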

Themenheft zu Heft 5

Unknown, Mediarep.Org, Jörg Schirra
2021 Image  
for applications, stimulated discussions and helped to improve certain aspects of the above-mentioned tools.  ...  Research was partially funded by the Deutsche Forschungsgemeinschaft and by the German Ministry of Research and Technology. All support is gratefully acknowledged.  ...  In section 5 we present the application of our system to generate trailers for current Hollywood action movies along with an evaluation of the corresponding output in section 6.  ... 
doi:10.25969/mediarep/16743 fatcat:uh3wpotdmjgpjciazcj4cmbjie

Social signal processing: Survey of an emerging domain

Alessandro Vinciarelli, Maja Pantic, Hervé Bourlard
2009 Image and Vision Computing  
This paper surveys the past efforts in solving these problems by a computer, summarizes the relevant findings in social psychology, and proposes a set of recommendations for enabling the development  ...  Although each one of us understands the importance of social signals in everyday life situations, and in spite of recent advances in machine analysis of relevant behavioural cues like blinks, smiles, crossed  ...  The State of the Art: The problem of machine analysis of human social signals includes four subproblem areas (see Figure 5): (1) recording the scene, (2) detecting people in it,  ... 
doi:10.1016/j.imavis.2008.11.007 fatcat:rdgx4qgxjbdjlkuji4tyh3ugue
Showing results 1 — 15 out of 1,374 results