Using audio and video features to classify the most dominant person in a group meeting
2007
Proceedings of the 15th international conference on Multimedia (MULTIMEDIA '07)
The automated extraction of semantically meaningful information from multi-modal data is becoming increasingly necessary as the volume of data captured for archival grows. A novel area of multi-modal data labelling, which has received relatively little attention, is the automatic estimation of the most dominant person in a group meeting. In this paper, we provide a framework for detecting dominance in group meetings using different audio and video cues. We show that by using a simple model
doi:10.1145/1291233.1291423
dblp:conf/mm/HungJYFBORMG07
fatcat:5xeoo2f6l5hqppghts4pf6crsm