LREC-COLING 2024 (Main Conference)

Pre-training Cross-Modal Retrieval by Expansive Lexicon-Patch Alignment

Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

DOI:10.63317/3o4kcn6c7haq

Abstract

Recent large-scale vision-language pre-training depends on image-text global alignment by contrastive learning and is further boosted by fine-grained alignment in a weakly contrastive manner for cross-modal retrieval. Nonetheless, besides the semantic matching learned by contrastive learning, cross-modal retrieval also largely relies on object matching between modalities. This necessitates fine-grained categorical discriminative learning, which however suffers from scarce data in fully-supervised scenarios and information asymmetry in weakly-supervised scenarios when applied to cross-modal retrieval. To address these issues, we propose expansive lexicon-patch alignment (ELA) to align image patches with a vocabulary rather than only the words explicitly in the text, for annotation-free alignment and information augmentation, thus enabling more effective fine-grained categorical discriminative learning for cross-modal retrieval. Experimental results show that ELA effectively learns representative fine-grained information and outperforms state-of-the-art methods on cross-modal retrieval.
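The core idea in the abstract can be illustrated with a minimal sketch: score each image patch against every entry of a vocabulary (not just the caption words) and train with a multi-label objective over that expanded lexicon. Everything below is a hypothetical illustration, not the authors' implementation; the array names (`patches`, `lexicon`, `targets`), the max-pooling choice, and the temperature value are all assumptions.

```python
import numpy as np

# Hypothetical sketch of lexicon-patch alignment (not the paper's code).
# Assumptions: `patches` are L2-normalized patch embeddings from an image
# encoder, `lexicon` are L2-normalized embeddings of every vocabulary entry,
# and `targets` is a multi-hot vector over the vocabulary marking not only
# caption words but also expanded, semantically related entries.

def lexicon_patch_scores(patches, lexicon, temperature=0.07):
    """Max-pool patch-to-lexicon cosine similarities into per-entry logits."""
    sim = patches @ lexicon.T              # (num_patches, vocab_size)
    return sim.max(axis=0) / temperature   # best-matching patch per entry

def multilabel_bce(logits, targets):
    """Binary cross-entropy over the whole vocabulary (annotation-free labels)."""
    probs = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-9
    return -np.mean(targets * np.log(probs + eps)
                    + (1.0 - targets) * np.log(1.0 - probs + eps))

rng = np.random.default_rng(0)
patches = rng.normal(size=(16, 64))
patches /= np.linalg.norm(patches, axis=1, keepdims=True)
lexicon = rng.normal(size=(100, 64))
lexicon /= np.linalg.norm(lexicon, axis=1, keepdims=True)
targets = np.zeros(100)
targets[[3, 17, 42]] = 1.0  # expanded lexicon labels for this image

loss = multilabel_bce(lexicon_patch_scores(patches, lexicon), targets)
print(float(loss))
```

Because the supervision is a multi-hot vector over the full vocabulary, no per-patch annotation is needed; the max-pooling lets each vocabulary entry attach to whichever patch matches it best.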

Details

Paper ID
lrec2024-main-1136
Pages
pp. 12977-12987
BibKey
yiyuan-etal-2024-pre
Editor
N/A
Publisher
European Language Resources Association (ELRA) and ICCL
ISSN
2522-2686
ISBN
979-10-95546-34-4
Conference
Joint International Conference on Computational Linguistics, Language Resources and Evaluation
Location
Turin, Italy
Date
20–25 May 2024

Authors

  • Yang Yiyuan
  • Guodong Long
  • Michael Blumenstein
  • Xiubo Geng
  • Chongyang Tao
  • Tao Shen
  • Daxin Jiang
