69 Hits in 6.6 sec

Use of a Hierarchical Oligonucleotide Primer Extension Approach for Multiplexed Relative Abundance Analysis of Methanogens in Anaerobic Digestion Systems

Jer-Horng Wu, Hui-Ping Chuang, Mao-Hsuan Hsu, Wei-Yu Chen
2013 Applied and Environmental Microbiology  
Among the samples, the methanogen populations detected with order-level primers accounted for >77.2% of the PCR-amplified 16S rRNA genes detected using an Archaea-specific primer.  ...  The method was based on the hierarchical oligonucleotide primer extension (HOPE) technique and combined with a set of 27 primers designed to target the total archaeal populations and methanogens from 22  ... 
doi:10.1128/aem.02450-13 pmid:24077716 pmcid:PMC3837811 fatcat:ihyhlzteqfeq3fonbooy5wbapu

Emotion recognition from multi-modal information

Chung-Hsien Wu, Jen-Chun Lin, Wen-Li Wei, Kuan-Chun Cheng
2013 2013 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference  
A variety of theoretical background and applications, ranging from salient emotional features and emotional-cognitive models to multi-modal data fusion strategies, is surveyed for emotion recognition on these  ...  Conclusions outline some of the existing emotion recognition challenges. I.  ...  Fusion: Feature/Decision/Model/Hybrid-level, FAPs: Facial Animation Parameters, MUs: Motion Units, FP: fusion of facial and prosodic features at feature level, 2H-SC-HMM: two-level hierarchical alignment-based  ... 
doi:10.1109/apsipa.2013.6694347 dblp:conf/apsipa/WuLWC13 fatcat:mnewdisdhfhq7b376ntz5bb5wm

Survey on audiovisual emotion recognition: databases, features, and data fusion strategies

Chung-Hsien Wu, Jen-Chun Lin, Wen-Li Wei
2014 APSIPA Transactions on Signal and Information Processing  
Facial and vocal features and audiovisual bimodal data fusion methods for emotion recognition are then surveyed and discussed.  ...  First, the currently available audiovisual emotion databases are described.  ...  SC-HMM: Semi-Coupled HMM, 2H-SC-HMM: Two-level Hierarchical alignment-based SC-HMM, EWSC-HMM: Error Weighted SC-HMM. Fusion Modality: Feature/Decision/Model/Hybrid-level.  ... 
doi:10.1017/atsip.2014.11 fatcat:6ujyy4sv55ezvdfbn3rt3leki4

A comprehensive study of visual event computing

WeiQi Yan, Declan F. Kieran, Setareh Rafatirad, Ramesh Jain
2010 Multimedia tools and applications  
We start by presenting events and their classifications, and continue with discussing the problem of capturing events in terms of photographs, videos, etc., as well as the methodologies for event storing  ...  This work was partially supported by QUB research project: Unusual event detection in audio-visual surveillance for public transport (NO.D6223EEC).  ...  Acknowledgements We appreciate the great help from colleagues at Queen's University Belfast (QUB): Prof. Danny Crookes, Dr. Weiru Liu, Dr. Paul Miller, and Dr. Xiwu Gu, etc.  ... 
doi:10.1007/s11042-010-0560-9 fatcat:ak6u3eefefgjhmbpr7asru3n7u

Learn2Dance: Learning Statistical Music-to-Dance Mappings for Choreography Synthesis

Ferda Ofli, Engin Erzin, Yücel Yemez, A. Murat Tekalp
2012 IEEE transactions on multimedia  
Based on the first three of these statistical mappings, we define a discrete HMM and synthesize alternative dance figure sequences by employing a modified Viterbi algorithm.  ...  Finally, the generated motion parameters are animated synchronously with the musical audio using a 3-D character model.  ...  into high-level manifold and HMM-based representations.  ... 
doi:10.1109/tmm.2011.2181492 fatcat:uavnh4bvq5bnxmvzcxvzgb3epm
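The Learn2Dance entry above describes decoding a dance figure sequence from a discrete HMM with a modified Viterbi algorithm. The following is a minimal sketch of plain Viterbi decoding over a toy discrete HMM; the state names, observation symbols, and probabilities are illustrative placeholders, not values from the paper, and the paper's actual modification of Viterbi is not reproduced here.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable state path for an observation sequence."""
    # V[t][s] = (best probability of reaching state s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = (prob, prev)
    # Backtrack from the best final state.
    path = [max(states, key=lambda s: V[-1][s][0])]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

# Hypothetical two-figure model: which dance figure best explains the music?
states = ("slow_figure", "fast_figure")
start_p = {"slow_figure": 0.6, "fast_figure": 0.4}
trans_p = {
    "slow_figure": {"slow_figure": 0.7, "fast_figure": 0.3},
    "fast_figure": {"slow_figure": 0.4, "fast_figure": 0.6},
}
emit_p = {  # P(musical feature | figure)
    "slow_figure": {"quiet": 0.8, "loud": 0.2},
    "fast_figure": {"quiet": 0.3, "loud": 0.7},
}

print(viterbi(("quiet", "loud", "loud"), states, start_p, trans_p, emit_p))
# → ['slow_figure', 'fast_figure', 'fast_figure']
```

In the paper's setting the observations would be quantized musical features and the states candidate dance figures; the modified Viterbi is used to generate alternative (not only the single best) figure sequences.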

Recognising realistic emotions and affect in speech: State of the art and lessons learnt from the first challenge

Björn Schuller, Anton Batliner, Stefan Steidl, Dino Seppi
2011 Speech Communication  
More than a decade has passed since research on automatic recognition of emotion from speech has become a new field of research in line with its 'big brothers' speech and speaker recognition.  ...  ' and to respond in an 'emotionally intelligent way'.  ...  On a different basis, hierarchical approaches [107] to feature selection try to optimise the feature set not globally for all emotion classes but for groups of them, mainly couples.  ... 
doi:10.1016/j.specom.2011.01.011 fatcat:x5jedtnwojdprkybbkxnah2dai

Automatic nonverbal analysis of social interaction in small groups: A review

Daniel Gatica-Perez
2009 Image and Vision Computing  
about 100 works addressing problems related to the computational modeling of interaction management, internal states, personality traits, and social relationships in small group conversations, along with  ...  scientific and technological value of the automatic understanding of face-to-face social interaction has motivated in the past few years a surge of interest in the devising of computational techniques for  ...  He also thanks Jean-Marc Odobez, Hayley Hung, and Sileye Ba (Idiap) for discussions about several of the topics presented here, and Dinesh Jayagopi (Idiap) for his comments and technical help with the  ... 
doi:10.1016/j.imavis.2009.01.004 fatcat:melyg25zhvcvbj2mthf5wmqcgq

Paralinguistics in speech and language—State-of-the-art and the challenge

Björn Schuller, Stefan Steidl, Anton Batliner, Felix Burkhardt, Laurence Devillers, Christian Müller, Shrikanth Narayanan
2013 Computer Speech and Language  
The responsibility lies with the authors. This work was supported by a fellowship within the postdoc program of the German Academic Exchange Service (DAAD).  ...  of temporal alignment and warping such as through Hidden Markov Models or general Dynamic Bayesian Networks.  ...  It further concerns the types of processing such as fully automatic chunking based on acoustic, phonetic, or linguistic criteria, dealing with ASR output (Metze et al., 2011) and not with forced alignment  ... 
doi:10.1016/j.csl.2012.02.005 fatcat:2izbs3usxbgj5drbehlyknfciq

Proceedings of eNTERFACE 2015 Workshop on Intelligent Interfaces [article]

Matei Mancas, Christian Frisson, Joëlle Tilmanne, Nicolas d'Alessandro, Petr Barborka, Furkan Bayansar, Francisco Bernard, Rebecca Fiebrink, Alexis Heloir, Edgar Hemery, Sohaib Laraba, Alexis Moinet (+58 others)
2018 arXiv   pre-print
nice schedule of social events. The authors would also like to thank Radhwan and Ambroise for their sympathy and for sharing good and bad moments with us during the workshop.  ...  The team would like to thank Metapraxis for supporting this project and lending us one of the tablets for the experiments.  ...  Audio For this work and in order to model the SCEs, 5-state left-to-right HMMs were used. HMM models were trained for each emotion and each level separately.  ... 
arXiv:1801.06349v1 fatcat:qauytivdq5axxis2xlknp3r2ne
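The snippet above mentions 5-state left-to-right HMMs trained per emotion. A left-to-right topology constrains each state to either stay put or advance, which suits sequences that progress through phases. The sketch below builds such a transition matrix; the state count and self-transition probability are illustrative assumptions, not values from the proceedings.

```python
import numpy as np

def left_to_right_transitions(n_states=5, stay=0.6):
    """Transition matrix of a left-to-right HMM: each state may
    remain (prob `stay`) or advance to the next state; the last
    state is absorbing."""
    A = np.zeros((n_states, n_states))
    for i in range(n_states - 1):
        A[i, i] = stay
        A[i, i + 1] = 1.0 - stay
    A[-1, -1] = 1.0  # final state absorbs
    return A

A = left_to_right_transitions()
print(A)
```

With one such model trained per emotion (e.g. via Baum-Welch on that emotion's recordings), classification reduces to scoring a new sequence under each model and picking the highest likelihood.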

MIRages: an account of music audio extractors, semantic description and context-awareness, in the three ages of MIR

Perfecto Herrera Boyer, Xavier Serra, Emilia Gómez
2018 Zenodo  
In the age of semantic descriptors work on describing music with high-level concepts, such as mood, instruments, similarities, cover versions or genres, usually inferred with machine learning from annotated  ...  In the age of context-aware systems we report on user models for recommendation and for avatar generation, in addition to factors that influence music listening decisions.  ...  Funollet for technical support, O. Meyers for technical support and proofreading, and all participants of the subjective evaluation.  ... 
doi:10.5281/zenodo.2278110 fatcat:uturvyw2gnfzdgtelvtxot3etq

MIRages: an account of music audio extractors, semantic description and context-awareness, in the three ages of MIR

Perfecto Herrera Boyer, Xavier Serra, Emilia Gómez
2018 Zenodo  
In the age of semantic descriptors work on describing music with high-level concepts, such as mood, instruments, similarities, cover versions or genres, usually inferred with machine learning from annotated  ...  In the age of context-aware systems we report on user models for recommendation and for avatar generation, in addition to factors that influence music listening decisions.  ...  Funollet for technical support, O. Meyers for technical support and proofreading, and all participants of the subjective evaluation.  ... 
doi:10.5281/zenodo.1882316 fatcat:6yhrlcyexrgyhhwayeau2gu7f4

Dynamic Music Representations for Real-Time Performance: From Sound to Symbol and Back

Grigore Burloiu
2016 Zenodo  
We extend this system with a robust tempo modelling engine that adaptively switches its source between the alignment path and a beat tracking module.  ...  We design and implement a real-time performance tracker using the audio-to-audio alignment of acoustic input with a pre-existing reference track, without any symbolic score information.  ...  Aside from verifying our observations with more robust measurements, the plan is to experiment with other types of algorithms for emotion recognition from real-time EEG analysis, such as mu waves in connection  ... 
doi:10.5281/zenodo.4280757 fatcat:pnxntdy6hzbchhfdwuplpft2nu

State of the Art of Audio- and Video-Based Solutions for AAL

Slavisa Aleksic, Michael Atanasov, Jean Calleja Agius, Kenneth Camilleri, Anto Čartolovni, Pau Climent-Pérez, Sara Colantonio, Stefania Cristina, Vladimir Despotovic, Hazım Kemal Ekenel, Ekrem Erakin, Francisco Florez-Revuelta (+27 others)
2022 Zenodo  
The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action.  ...  In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness.  ...  The work in [IV.188] presents a method to recognise emotions from facial movements and gestures, which uses a complex deep neural network able to process spatio-temporal hierarchical features.  ... 
doi:10.5281/zenodo.6390708 fatcat:6qfwqd2v2rhe5iuu5zgz77ay4i

Issues on Modeling the Singing Voice

Alex Loscos, Xavier Serra
2003 Zenodo  
This set of gathered publications is mainly focused on the field of singing voice processing; more precisely, on spectral processing techniques and voice modeling for singing voice analysis, transformation  ...  I have worked with. Acknowledgements We would like to acknowledge the contribution to this research of the other members of the Music Technology Group of the Audiovisual Institute.  ...  Acknowledgments We would like to acknowledge the support from Yamaha Corporation and the contribution to this research of the other members of the Music Technology Group of the Audiovisual Institute.  ... 
doi:10.5281/zenodo.3739254 fatcat:jczy57vmfbbbbg2rrnnswiykvu

Affective Brain-Computer Interfaces [chapter]

Rafael Calvo, Sidney D'Mello, Jonathan Gratch, Arvid Kappas, Christian Mühl, Dirk Heylen, Anton Nijholt
2015 The Oxford Handbook of Affective Computing  
On the other hand, basic research in neuroscience advances our understanding of the neural processes associated with emotions.  ...  Furthermore, similar advancements are being made for more cognitive mental states, for example, attention, fatigue, and work load, which strongly interact with affective states.  ...  Visitors can ask the Guide for information using spoken or typed language as input, in combination with mouse clicks on a map of the environment (see Figure 4 ).  ... 
doi:10.1093/oxfordhb/9780199942237.013.024 fatcat:oia3yvk7hbf7ta6magl6bzjx6u
Showing results 1 — 15 of 69