Multimodal learning for facial expression recognition
2015
Pattern Recognition
In this paper, multimodal learning for facial expression recognition (FER) is proposed. ...
With the proposed multimodal learning network, the joint representation learning from multimodal inputs will be more suitable for FER. ...
Algorithm 1. Multimodal learning for facial expression recognition. ...
doi:10.1016/j.patcog.2015.04.012
fatcat:rhvqsaqnkvau3d32uoilpybkry
Deep learning framework for real-time face recognition using multimodal facial expression with feature optimization and hybrid classification
2023
Tuijin jishu
Multimodal facial expression detection is used for real-time face recognition to solve the aforementioned problems. ...
The facial expression based face recognition framework is well suited to real-time scenarios such as video surveillance. ...
Conclusion In this paper, we have proposed real-time face recognition and multimodal facial expression detection using hybrid deep learning techniques. ...
doi:10.52783/tjjpt.v44.i3.2308
fatcat:zuynsmmxkbhthosuivrw6uqmgm
A Transfer Learning-based Approach for Multimodal Emotion Recognition
2020
Turkish Journal of Computer and Mathematics Education
The goal of this research is to develop more robust and accurate models for multimodal emotion recognition that can be applied across a variety of contexts and populations. ...
The topic of multimodal emotion recognition is one that is expanding at a rapid rate. ...
Facial expression characteristics using transfer learning; pre-trained models for facial expression recognition; pre-trained models for speech recognition; pre-trained models for speech and ...
doi:10.17762/turcomat.v11i3.13597
fatcat:uzcmc45ouvg4taqkl4k62uv2sm
Multimodal Emotion Recognition using Deep Learning
2021
Journal of Applied Science and Technology Trends
This paper presents a review of emotional recognition of multimodal signals using deep learning and comparing their applications based on current studies. ...
Human feelings can be recognized through multiple channels, including facial expressions and images, physiological signals, and neuroimaging strategies. ...
Facial expression recognition Facial gestures are important ways of expressing feelings in nonverbal contact. ...
doi:10.38094/jastt20291
fatcat:2ofkuynxebgb5glhsaii5zcq4u
Research on facial expression recognition based on Multimodal data fusion and neural network
[article]
2021
arXiv
pre-print
In this paper, a neural network algorithm of facial expression recognition based on multimodal data fusion is proposed. ...
Facial expression recognition is a challenging task when neural networks are applied to pattern recognition. ...
Acknowledgment The authors received no financial support for the research, authorship, and/or publication of this article. ...
arXiv:2109.12724v1
fatcat:e4fwldphsnerfohxwmx2deh4ma
Data Fusion for Real-time Multimodal Emotion Recognition through Webcams and Microphones in E-Learning
2016
International Journal of Human-Computer Interaction
FILTWAM aims to deploy a real-time multimodal emotion recognition method that provides more adequate feedback to learners during online communication skills training. ...
A hybrid method for multimodal fusion of our multimodal software shows accuracy between 96.1% and 98.6% for the best-chosen WEKA classifiers over predicted emotions. ...
We also thank the Netherlands Laboratory for Lifelong Learning (NELLL) of the Open University Netherlands that has sponsored this research. ...
doi:10.1080/10447318.2016.1159799
fatcat:joihp36runcwjf3vihksh6xlf4
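The hybrid fusion the entry above describes combines classifier outputs from the webcam and microphone streams. A minimal late-fusion sketch is weighted averaging of per-modality class probabilities; the emotion labels, weights, and scores below are illustrative placeholders, not values from the paper:

```python
# Minimal late-fusion sketch: combine per-modality emotion probabilities
# by weighted averaging, then pick the top-scoring emotion.
# Labels, weights, and scores are illustrative only.

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def fuse(face_probs, voice_probs, face_weight=0.6):
    """Weighted average of two probability dicts over the same labels."""
    voice_weight = 1.0 - face_weight
    fused = {
        e: face_weight * face_probs[e] + voice_weight * voice_probs[e]
        for e in EMOTIONS
    }
    total = sum(fused.values())          # renormalise so it sums to 1
    return {e: p / total for e, p in fused.items()}

def predict(face_probs, voice_probs, face_weight=0.6):
    fused = fuse(face_probs, voice_probs, face_weight)
    return max(fused, key=fused.get)

if __name__ == "__main__":
    face = {"happy": 0.7, "sad": 0.1, "angry": 0.1, "neutral": 0.1}
    voice = {"happy": 0.2, "sad": 0.5, "angry": 0.2, "neutral": 0.1}
    print(predict(face, voice))          # face-dominated fusion: "happy"
    print(predict(face, voice, 0.2))     # voice-dominated fusion: "sad"
```

Shifting `face_weight` shows why fusion helps: either modality can override the other when its evidence is strong, which a unimodal classifier cannot do.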
Face and Body Gesture Analysis for Multimodal HCI
[chapter]
2004
Lecture Notes in Computer Science
Multimodal interfaces allow humans to interact with machines through multiple modalities such as speech, facial expression, gesture, and gaze. ...
Accordingly, in this paper we present a vision-based framework that combines face and body gesture for multimodal HCI. ...
We propose a vision-based framework that uses computer vision and machine learning techniques to recognize face and body gesture for a multimodal HCI interface. ...
doi:10.1007/978-3-540-27795-8_59
fatcat:ib3d3ek6fjbyzceinv73k5baxi
Multimodal Emotion Recognition Model Based on a Deep Neural Network with Multiobjective Optimization
2021
Wireless Communications and Mobile Computing
The model combines voice information and facial information and can optimize the accuracy and uniformity of recognition at the same time. ...
This paper proposes a multimodal emotion recognition model based on a multiobjective optimization algorithm. ...
This paper constructs a deep learning algorithm based on depthwise separable convolution for facial expression. Szegedy et al. ...
doi:10.1155/2021/6971100
fatcat:6hlwc3brcbbmjkladym7bopjzq
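The depthwise separable convolution mentioned in the entry above factors a standard convolution into a per-channel spatial filter followed by a 1x1 channel-mixing step. A toy sketch with random placeholder weights (not trained values, and not the paper's actual network) looks like this:

```python
import numpy as np

# Sketch of a depthwise separable convolution: a depthwise k x k filter
# applied to each channel independently, then a pointwise 1x1 convolution
# that mixes channels. Weights and shapes are random placeholders.

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """x: (H, W, C_in); dw_kernels: (k, k, C_in); pw_weights: (C_in, C_out)."""
    H, W, C_in = x.shape
    k = dw_kernels.shape[0]
    out_h, out_w = H - k + 1, W - k + 1

    # Depthwise step: each input channel is filtered independently.
    dw = np.empty((out_h, out_w, C_in))
    for c in range(C_in):
        for i in range(out_h):
            for j in range(out_w):
                dw[i, j, c] = np.sum(x[i:i+k, j:j+k, c] * dw_kernels[:, :, c])

    # Pointwise step: a 1x1 convolution mixes channels at each location.
    return dw @ pw_weights  # shape (out_h, out_w, C_out)

rng = np.random.default_rng(0)
x = rng.standard_normal((48, 48, 3))          # e.g. a small face crop
y = depthwise_separable_conv(
    x, rng.standard_normal((3, 3, 3)), rng.standard_normal((3, 8)))
print(y.shape)  # (46, 46, 8)
```

The appeal of the factorization is cost: a full k x k convolution needs k*k*C_in*C_out multiplies per output pixel, while the separable form needs only k*k*C_in + C_in*C_out.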
Multimodal Data Fusion to Track Students' Distress during Educational Gameplay
2022
Journal of Learning Analytics
We conducted data wrangling with student gameplay data from multiple data sources, such as individual facial expression recordings and gameplay logs. ...
Also, this study proposes the benefits of optimizing several methodological means for multimodal data fusion in educational game research. ...
First, we gathered multi-channel data from students, including their facial data from two facial-expression detection toolkits (OpenFace and Facial Expression Recognition [FER-2013] ) as well as Zoombinis ...
doi:10.18608/jla.2022.7631
fatcat:oqh2enpdlrhjdo2dc4zds2kvny
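The data wrangling step the entry above describes has to align streams sampled at different rates (facial-expression recordings versus gameplay logs). One common alignment strategy is nearest-timestamp matching; the field names, labels, and timestamps below are invented for illustration and are not from the study:

```python
from bisect import bisect_left

# Hypothetical alignment step for multimodal fusion: attach to each
# gameplay-log event the facial-expression sample whose timestamp is
# closest. All labels and timestamps are invented for illustration.

def nearest_sample(face_samples, t):
    """face_samples: list of (timestamp, label), sorted by timestamp."""
    times = [ts for ts, _ in face_samples]
    i = bisect_left(times, t)
    candidates = face_samples[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda s: abs(s[0] - t))

def fuse_streams(face_samples, game_events):
    """Return (event_time, event_name, nearest_face_label) triples."""
    return [
        (t, name, nearest_sample(face_samples, t)[1])
        for t, name in game_events
    ]

faces = [(0.0, "neutral"), (1.0, "frustrated"), (2.5, "happy")]
events = [(0.9, "puzzle_failed"), (2.4, "puzzle_solved")]
print(fuse_streams(faces, events))
# [(0.9, 'puzzle_failed', 'frustrated'), (2.4, 'puzzle_solved', 'happy')]
```

Nearest-neighbour joins like this are a simple baseline; windowed aggregation (e.g. the modal expression within a few seconds of each event) is a common refinement when the streams are noisy.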
Improved Multimodal Emotion Recognition for Better Game-Based Learning
[chapter]
2015
Lecture Notes in Computer Science
This framework enables real-time multimodal emotion recognition of learners during game-based learning for triggering feedback towards improved learning. ...
The main goal of this study is to validate the integration of webcam and microphone data for a real-time and adequate interpretation of facial and vocal expressions into emotional states where the software ...
We also thank the Netherlands Laboratory for Lifelong Learning (NELLL) of the Open University Netherlands that sponsors this research. ...
doi:10.1007/978-3-319-22960-7_11
fatcat:czizft3l7jgbzeugn6irpjshai
MEmoBERT: Pre-training Model with Prompt-based Learning for Multimodal Emotion Recognition
[article]
2021
arXiv
pre-print
In this paper, we propose a pre-training model MEmoBERT for multimodal emotion recognition, which learns multimodal joint representations through self-supervised learning from large-scale unlabeled video ...
Multimodal emotion recognition study is hindered by the lack of labelled corpora in terms of scale and diversity, due to the high annotation cost and label ambiguity. ...
Prompt-based emotion classification (Sec. 3.3.1). ...
Pre-training dataset: learning a pre-trained model for multimodal emotion recognition requires large-scale multimodal ...
arXiv:2111.00865v1
fatcat:pzlft4ufwzgplb6gceehaz4sxm
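The prompt-based learning the MEmoBERT entry describes recasts classification as filling an emotion word into a natural-language template and letting the pre-trained model score the candidates. The sketch below shows only the pattern: the template, label words, and the stand-in scoring table are invented, and `toy_lm_score` substitutes for a real masked-language-model probability:

```python
# Sketch of prompt-based emotion classification: the task is recast as
# filling an emotion word into a prompt, and the label whose word scores
# highest under the language model wins. The scorer below is a stand-in
# lookup table, NOT a real pre-trained model.

PROMPT = "The person in the video feels [MASK]."
LABEL_WORDS = {"happy": "happy", "sad": "sad", "angry": "angry"}

def toy_lm_score(filled_prompt):
    """Stand-in for p(word | context) from a masked language model."""
    scores = {"happy": 0.1, "sad": 0.7, "angry": 0.2}
    for word, s in scores.items():
        if word in filled_prompt:
            return s
    return 0.0

def classify(prompt=PROMPT):
    filled = {lab: prompt.replace("[MASK]", w) for lab, w in LABEL_WORDS.items()}
    return max(filled, key=lambda lab: toy_lm_score(filled[lab]))

print(classify())  # "sad" under the toy scores above
```

The advantage claimed for this formulation is that it reuses the pre-trained model's masked-word head directly, so little or no task-specific classification layer has to be trained from scratch.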
Impact of multiple modalities on emotion recognition: investigation into 3d facial landmarks, action units, and physiological data
[article]
2020
arXiv
pre-print
Our analysis indicates that both 3D facial landmarks and physiological data are encouraging for expression/emotion recognition. ...
Considering this, we present an analysis of 3D facial data, action units, and physiological data as it relates to their impact on emotion recognition. ...
Acknowledgment This material is based on work that was supported in part by an Amazon Machine Learning Research Award. ...
arXiv:2005.08341v1
fatcat:g6aepnfzr5b57os3xi3jvo7vwu
Multimodal Affect Analysis for Product Feedback Assessment
[article]
2017
arXiv
pre-print
This research discusses a multimodal affect recognition system developed to classify whether a consumer likes or dislikes a product tested at a counter or kiosk, by analyzing the consumer's facial expression ...
The real-time performance, accuracy and feasibility for multimodal affect recognition in feedback assessment are evaluated. ...
and speed for affect recognition using supervised learning. ...
arXiv:1705.02694v1
fatcat:aijbeawcqzhkpnawekzivehu4i
FAF: A novel multimodal emotion recognition approach integrating face, body and text
[article]
2022
arXiv
pre-print
In this paper, we developed a large multimodal emotion dataset, named "HED" dataset, to facilitate the emotion recognition task, and accordingly propose a multimodal emotion recognition method. ...
Multimodal emotion analysis performs better at emotion recognition because it draws on more comprehensive emotional cues and a multimodal emotion dataset. ...
Results analysis: unimodal emotion recognition experiments were conducted for facial expressions, body gestures, and text data. ...
arXiv:2211.15425v1
fatcat:cahrjipbnnd4vb4atfbppfcc34
2018 Index IEEE Transactions on Affective Computing Vol. 9
2019
IEEE Transactions on Affective Computing
-March 2018 3-13 Cross-Domain Color Facial Expression Recognition Using Transductive Transfer Subspace Learning. Zheng, W., +, T-AFFC Jan. ...
-Dec. 2018 437-449 Cross-Domain Color Facial Expression Recognition Using Transductive Transfer Subspace Learning. Zheng, W., +, T-AFFC Jan. ...
doi:10.1109/taffc.2019.2905448
fatcat:4a5hvv4bkneq5d6eilv6tcn37u
Showing results 1 — 15 out of 13,894 results