Challenges in Image-Caption Association in Portuguese: Evaluating the CLIP Model on the FM30K Dataset
Proceedings of the Fifteenth Language Resources and Evaluation Conference (LREC 2026)
Abstract
In recent years, multimodal models such as CLIP have achieved significant advances in associating images with text. However, most of these advances come from models trained almost exclusively on English data, which limits their effectiveness in other languages. This challenge is particularly relevant for Brazilian Portuguese, a language that still lacks dedicated multimodal resources and relies predominantly on automatic translations. This work investigates the performance of CLIP-based multimodal models on the task of associating images with descriptions written in Brazilian Portuguese. The analysis begins with a zero-shot scenario, in which different CLIP variants are evaluated directly on the FM30K dataset, composed of images and captions originally written in Portuguese. An additional experiment with automatic translations examines the impact of language on cross-modal retrieval. Subsequently, the text encoder of the ViT-B/32 model is fine-tuned while the visual encoder is kept frozen, with the goal of adapting the model to the target language. The results show that models originally trained in English perform worse in Portuguese, while linguistically adapted variants, whether multilingual or Portuguese-specific, achieve superior performance. The proposed fine-tuning reduces this performance gap, leading to notable improvements: in the image-to-text direction, the model achieves an absolute increase of 27.65 percentage points in Accuracy@1, a 209% relative gain over the original CLIP ViT-B/32; in the text-to-image direction, the gain is 15.47 percentage points, an even larger 385% relative improvement, contributing to a more balanced association between images and captions.
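To make the fine-tuning setup concrete, the sketch below freezes the visual encoder of a CLIP ViT-B/32 model and trains only the text side with the standard symmetric contrastive objective. This is a minimal illustration, not the authors' training code: it assumes the Hugging Face transformers implementation of CLIP, and the generated images and Portuguese captions are toy placeholders standing in for FM30K samples.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Freeze the visual tower (encoder and its projection); only text-side
# parameters and the logit scale receive gradients.
for module in (model.vision_model, model.visual_projection):
    for param in module.parameters():
        param.requires_grad = False

optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-5
)

# Toy batch standing in for (image, Portuguese caption) pairs; real
# training would iterate over the FM30K dataset instead.
images = [Image.new("RGB", (224, 224), color=c) for c in ("red", "blue")]
captions = ["um cachorro correndo na praia", "uma bicicleta vermelha na rua"]

model.train()
inputs = processor(
    text=captions, images=images, return_tensors="pt", padding=True
)
# return_loss=True makes the forward pass compute CLIP's symmetric
# image-text contrastive loss over the batch.
outputs = model(**inputs, return_loss=True)
optimizer.zero_grad()
outputs.loss.backward()
optimizer.step()
```

Because gradients flow only through the text encoder, this adaptation leaves the image embedding space of the original model intact, which is consistent with the goal of adapting CLIP to the target language rather than retraining it from scratch.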