LREC 2022 Workshop

Face2Text revisited: Improved data set and baseline results

Proceedings of the 2nd Workshop on People in Vision, Language, and the Mind

DOI:10.63317/23puh4b5ozpm

Abstract

Current image description generation models do not transfer well to the task of describing human faces. To encourage the development of more human-focused descriptions, we developed a new data set of facial descriptions based on the CelebA image data set. We describe the properties of this data set and present results from a face description generator trained on it, exploring the feasibility of transfer learning from VGGFace/ResNet CNNs. Comparisons are drawn through both automated metrics and human evaluation by 76 English-speaking participants. The descriptions generated by the VGGFace-LSTM + Attention model are closest to the ground truth according to human evaluation, whilst the ResNet-LSTM + Attention model obtained the highest CIDEr and CIDEr-D results (1.252 and 0.686 respectively). Together, the new data set and these experimental results provide data and baselines for future work in this area.
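The CIDEr scores reported above measure consensus between a generated description and a set of reference descriptions via TF-IDF weighted n-gram similarity. As a rough illustration of the idea (not the paper's evaluation code), the sketch below implements a simplified single-order variant with no length penalty or multi-n averaging; all function names are illustrative.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Return a Counter of the n-grams (as tuples) in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def cider_like(candidate, references, corpus, n=2):
    """Simplified CIDEr-style score: TF-IDF weighted cosine similarity
    between candidate and reference n-gram vectors, averaged over the
    references. `corpus` is a list of reference sets (one per image),
    used only to compute document frequencies."""
    # Document frequency of each n-gram across the reference sets.
    df = Counter()
    for refs in corpus:
        seen = set()
        for r in refs:
            seen |= set(ngrams(r.split(), n))
        df.update(seen)
    num_docs = len(corpus)

    def tfidf(text):
        counts = ngrams(text.split(), n)
        total = sum(counts.values()) or 1
        return {g: (c / total) * math.log(num_docs / max(df[g], 1))
                for g, c in counts.items()}

    def cosine(a, b):
        dot = sum(a[g] * b.get(g, 0.0) for g in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    cand_vec = tfidf(candidate)
    return sum(cosine(cand_vec, tfidf(r)) for r in references) / len(references)
```

The IDF weighting down-weights n-grams that occur in descriptions of many different faces (e.g. "a person with"), so the score rewards the distinctive, human-focused phrases that motivate the data set.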

Details

Paper ID
lrec2022-ws-pvlam-6
Pages
pp. 41-47
BibKey
tanti-etal-2022-face2text
Editor
N/A
Publisher
European Language Resources Association (ELRA)
ISSN
N/A
ISBN
N/A
Workshop
Proceedings of the 2nd Workshop on People in Vision, Language, and the Mind
Location
N/A
Date
20-25 June 2022

Authors

  • Marc Tanti
  • Shaun Abdilla
  • Adrian Muscat
  • Claudia Borg
  • Reuben A. Farrugia
  • Albert Gatt
