Leveraging recent advances in deep learning for audio-visual emotion recognition - Université Paris-Est-Créteil-Val-de-Marne
Journal article in Pattern Recognition Letters, 2021


Abstract

Emotional expressions are the behaviors that communicate our emotional state or attitude to others. They are conveyed through verbal and non-verbal communication. Complex human behavior can be understood by studying physical cues from multiple modalities, mainly facial expressions, vocal expressions and physical gestures. Recently, spontaneous multi-modal emotion recognition has been extensively studied for human behavior analysis. In this paper, we propose a new deep learning-based approach for audio-visual emotion recognition. Our approach leverages recent advances in deep learning such as knowledge distillation and high-performing deep architectures. The deep feature representations of the audio and visual modalities are fused based on a model-level fusion strategy. A recurrent neural network is then used to capture the temporal dynamics. Our proposed approach substantially outperforms state-of-the-art approaches in predicting valence on the RECOLA dataset. Moreover, our proposed visual facial expression feature extraction network outperforms state-of-the-art results on the AffectNet and Google Facial Expression Comparison datasets.
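The pipeline described above, model-level fusion of per-frame audio and visual deep features followed by a recurrent network that regresses valence, can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the dimensions, the random stand-in features, and the vanilla tanh RNN are all assumptions made for clarity.

```python
import math
import random

# Toy dimensions -- purely illustrative; the paper's real feature sizes differ.
AUDIO_DIM, VISUAL_DIM, HIDDEN_DIM = 4, 6, 3
random.seed(0)

def rand_matrix(rows, cols):
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

def linear(x, w):
    """Multiply vector x (length d) by matrix w (d x h) -> vector of length h."""
    return [sum(x[i] * w[i][j] for i in range(len(x))) for j in range(len(w[0]))]

def fuse(audio_frame, visual_frame):
    """Model-level fusion: concatenate the per-frame deep feature vectors."""
    return audio_frame + visual_frame  # list concatenation

def rnn(frames, w_in, w_rec):
    """Vanilla tanh RNN over the fused sequence (a simple stand-in for the
    recurrent network that captures temporal dynamics)."""
    h = [0.0] * HIDDEN_DIM
    for x in frames:
        pre = [a + b for a, b in zip(linear(x, w_in), linear(h, w_rec))]
        h = [math.tanh(v) for v in pre]
    return h

# A 10-frame clip, with random vectors standing in for the deep embeddings
# that the audio and visual feature-extraction networks would produce.
T = 10
audio = [[random.gauss(0, 1) for _ in range(AUDIO_DIM)] for _ in range(T)]
visual = [[random.gauss(0, 1) for _ in range(VISUAL_DIM)] for _ in range(T)]

fused = [fuse(a, v) for a, v in zip(audio, visual)]
w_in = rand_matrix(AUDIO_DIM + VISUAL_DIM, HIDDEN_DIM)
w_rec = rand_matrix(HIDDEN_DIM, HIDDEN_DIM)
h_last = rnn(fused, w_in, w_rec)

# A final head regresses valence from the last hidden state; tanh keeps
# the prediction in (-1, 1), matching a typical valence scale.
w_out = [random.uniform(-0.5, 0.5) for _ in range(HIDDEN_DIM)]
valence = math.tanh(sum(h * w for h, w in zip(h_last, w_out)))
```

In this model-level strategy the modalities are combined at the feature level before the temporal model, so the recurrent network can learn cross-modal dynamics jointly rather than averaging two separate per-modality predictions.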
Main file: S0167865521000878.pdf (1.6 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04032955 , version 1 (24-04-2023)

Cite

Liam Schoneveld, Alice Othmani, Hazem Abdelkawy. Leveraging recent advances in deep learning for audio-visual emotion recognition. Pattern Recognition Letters, 2021, 146, pp. 1-7. ⟨10.1016/j.patrec.2021.03.007⟩. ⟨hal-04032955⟩

Collections

LISSI UPEC