LREC 2022 Workshop

Do Multimodal Emotion Recognition Models Tackle Ambiguity?

Proceedings of the 2nd Workshop on People in Vision, Language, and the Mind

DOI:10.63317/44woxq82mxi9

Abstract

Most databases used for emotion recognition assign a single emotion to each data sample. This does not match the complex nature of emotions: we can feel a wide range of emotions throughout our lives, with varying degrees of intensity, and we may even experience multiple emotions at once. Furthermore, each person expresses emotions physically in a different way, which makes emotion recognition even more challenging: we call this emotional ambiguity. This paper investigates the problem through a review of ambiguity in multimodal emotion recognition models. To lay the groundwork, the main representations of emotions are described, along with solutions for incorporating ambiguity, followed by a brief overview of ambiguity representation in multimodal databases. Thereafter, only models trained on a database that incorporates ambiguity are studied in this paper. We conclude that although databases provide annotations with ambiguity, most of these models do not fully exploit them, showing that there is still room for improvement in multimodal emotion recognition systems.

Details

Paper ID
lrec2022-ws-pvlam-2
Pages
pp. 6-11
BibKey
tran-etal-2022-multimodal
Editor
N/A
Publisher
European Language Resources Association (ELRA)
ISSN
N/A
ISBN
N/A
Workshop
Proceedings of the 2nd Workshop on People in Vision, Language, and the Mind
Location
Marseille, France
Date
20–25 June 2022

Authors

  • Hélène Tran

  • Issam Falih

  • Xavier Goblet

  • Engelbert Mephu Nguifo
