
Visual-Textual Entailment with Quantities Using Model Checking and Knowledge Injection

Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

DOI: 10.63317/28wgweyeuvzn

Abstract

In recent years, there has been great interest in multimodal inference. We concentrate on visual-textual entailment (VTE), a critical task in multimodal inference. VTE is the task of determining the entailment relation between an image and a sentence. Several deep learning-based approaches have been proposed for VTE, but current approaches struggle to handle quantities accurately. On the other hand, a promising approach based on logical inference, which can successfully deal with large quantities, has also been proposed. However, that approach uses automated theorem provers, increasing the computational cost for problems involving many entities. In addition, it cannot deal well with lexical differences between the semantic representations of images and sentences. In this paper, we present a logic-based VTE system that overcomes these drawbacks, using model checking for inference to increase efficiency and knowledge injection to perform more robust inference. We create a VTE dataset containing quantities and negation to assess how well VTE systems understand such phenomena. Using this dataset, we demonstrate that our system solves VTE tasks with quantities and negation more robustly than previous approaches.
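As a rough illustration of the two ideas named in the abstract, the sketch below checks quantified and negated sentences against a finite model of an image by exhaustive enumeration (model checking) after expanding detected labels with injected lexical knowledge. This is a hand-rolled example under loud assumptions, not the authors' implementation: the entity encoding, the hypernym table, and every function name are illustrative.

```python
# A minimal sketch, NOT the paper's implementation: the entity/model
# encoding, the hypernym table, and every function name below are
# illustrative assumptions.

from dataclasses import dataclass

# Knowledge injection (assumed form): a small hypernym table bridging
# lexical gaps between image labels and sentence words, e.g. "puppy" vs "dog".
HYPERNYMS = {
    "puppy": {"dog"},
    "dog": {"animal"},
    "cat": {"animal"},
}

def expand(label: str) -> frozenset:
    """Close a detected label under the (assumed) hypernym relation."""
    closed, frontier = {label}, {label}
    while frontier:
        frontier = {h for lab in frontier for h in HYPERNYMS.get(lab, set())} - closed
        closed |= frontier
    return frozenset(closed)

@dataclass(frozen=True)
class Entity:
    ident: int
    labels: frozenset  # labels after knowledge injection

# The image as a finite model: just the set of detected entities.
image_model = [
    Entity(0, expand("puppy")),
    Entity(1, expand("dog")),
    Entity(2, expand("cat")),
]

# Model checking: evaluate quantified (and negated) sentence meanings by
# exhaustive enumeration over the finite model, with no theorem prover.
def count(model, predicate: str) -> int:
    return sum(predicate in e.labels for e in model)

def at_least(model, n: int, predicate: str) -> bool:
    return count(model, predicate) >= n

def no(model, predicate: str) -> bool:  # negation checked as a zero count
    return count(model, predicate) == 0

print(at_least(image_model, 2, "dog"))  # "At least two dogs." -> True (the puppy counts)
print(no(image_model, "cat"))           # "There is no cat."  -> False
```

Because the model is finite, each check is a simple scan over the detected entities, which is why model checking avoids the cost a theorem prover incurs when many entities are involved.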

Details

Paper ID
lrec2024-main-1512
Pages
pp. 17397-17408
BibKey
iokawa-yanaka-2024-visual
Editor
N/A
Publisher
European Language Resources Association (ELRA) and ICCL
ISSN
2522-2686
ISBN
979-10-95546-34-4
Conference
Joint International Conference on Computational Linguistics, Language Resources and Evaluation
Location
Turin, Italy
Date
20–25 May 2024

Authors

  • Nobuyuki Iokawa

  • Hitomi Yanaka
