Do Multimodal LLMs Understand Order? Measuring the Fragility of Multimodal Reasoning under Input Order Perturbations
Proceedings of the Fifteenth Language Resources and Evaluation Conference (LREC 2026)
Abstract
Multimodal reasoning has progressed rapidly with large vision-language models (LVLMs), yet their robustness to input variations remains underexplored. This study investigates positional bias in LVLMs on multimodal multiple-choice questions. Through a large-scale evaluation of fourteen representative models on MMMU, CVQA, and MMBench, we show that model predictions are sensitive to both the ordering of answer choices and the order in which modalities are presented. Further analysis examines how question properties, including difficulty, domain, and image type, affect robustness. We also assess whether mitigation strategies developed for text-only settings transfer to VQA, and perform ablation studies on self-consistency and reasoning complexity. Overall, our findings offer the first comprehensive account of positional bias from a vision-language perspective, highlighting key challenges in achieving stable multimodal reasoning.
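To make the choice-order perturbation concrete, the following is a minimal sketch of how sensitivity to answer ordering can be measured for a single multiple-choice item. It is an illustration, not the paper's implementation; `query_model` is a hypothetical stand-in for an LVLM call, and image input is omitted for brevity.

```python
from itertools import permutations

LETTERS = "ABCD"

def build_prompt(question: str, options: list[str]) -> str:
    """Format a multiple-choice prompt with lettered options."""
    lines = [question] + [f"{LETTERS[i]}. {opt}" for i, opt in enumerate(options)]
    lines.append("Answer with the letter of the correct option.")
    return "\n".join(lines)

def order_sensitivity(question: str, options: list[str], query_model) -> float:
    """Fraction of choice-order permutations on which the model's
    content-level answer differs from its canonical-order answer.

    `query_model(prompt) -> str` is assumed to return a single
    option letter ("A".."D"); a robust model scores near 0.0.
    """
    baseline_letter = query_model(build_prompt(question, options))
    baseline_answer = options[LETTERS.index(baseline_letter)]

    flips = 0
    perms = list(permutations(options))
    for perm in perms:
        letter = query_model(build_prompt(question, list(perm)))
        # Map the chosen letter back to option content before comparing,
        # so only genuine answer changes count as flips.
        if perm[LETTERS.index(letter)] != baseline_answer:
            flips += 1
    return flips / len(perms)
```

The same template extends to modality ordering by varying whether the image precedes or follows the question text in the prompt.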