
Integrating Audio and Visual Information for Modelling Communicative Behaviours Perceived as Different

Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC 2008)

DOI:10.63317/4cmnmip85jki

Abstract

In human face-to-face interaction, participants can rely on a range of audio-visual cues for interpreting interlocutors’ communicative intentions, and this information contributes strongly to the success of communication. Modelling these typical human abilities is a main objective in human communication research, including technological applications such as human-machine interaction. In this pilot study we explore the possibility of using audio-visual parameters for describing and measuring the differences perceived in an interlocutor’s communicative behaviours. Preliminary results derived from the multimodal analysis of a single subject seem to indicate that measuring the distribution of some temporally co-occurring prosodic and hand gesture events contributes to accounting for such perceived differences. Moreover, as far as gesture events are concerned, it has been observed that the relevant information is not simply to be found in the occurrences of single gestures, but mainly in certain gesture modalities (for example, ‘single stroke’ vs ‘multiple stroke’ gestures, one-hand vs both-hands gestures, etc.). In this paper we also introduce and describe a software package, ViSuite, which we developed for multimodal processing and used for the work described in this paper.

Details

Paper ID
lrec2008-main-464
Pages
N/A
BibKey
savino-etal-2008-integrating
Editor
N/A
Publisher
European Language Resources Association (ELRA)
ISSN
2522-2686
ISBN
2-9517408-4-0
Conference
Sixth International Conference on Language Resources and Evaluation
Location
Marrakech, Morocco
Date
28–30 May 2008

Authors

  • Michelina Savino

  • Laura Scivetti

  • Mario Refice
