Abstract
Large Vision-Language Models (LVLMs) have demonstrated remarkable capabilities in processing both visual and textual information. However, the critical challenge of alignment between visual and textual representations is
not fully understood. This survey presents a comprehensive examination of alignment and misalignment in LVLMs through an explainability lens. We first examine the fundamentals of alignment, exploring its representational and behavioral aspects, training methodologies, and theoretical foundations. We then analyze misalignment phenomena across three semantic levels: object, attribute, and relational misalignment. Our investigation reveals that misalignment emerges from challenges at multiple levels: the data level, the model level, and
the inference level. We provide a comprehensive review of existing mitigation strategies, categorizing them into parameter-frozen and parameter-tuning approaches. Finally, we outline promising future research directions, emphasizing the need for standardized evaluation protocols and in-depth explainability studies.
| Original language | English |
|---|---|
| Title of host publication | Findings of the Association for Computational Linguistics: EMNLP 2025 |
| Editors | Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng |
| Publisher | Association for Computational Linguistics |
| Pages | 1713-1735 |
| Number of pages | 23 |
| ISBN (Electronic) | 979-8-89176-332-6 |
| DOIs | |
| Publication status | Published - 4 Nov 2025 |
| Event | The 2025 Conference on Empirical Methods in Natural Language Processing - Suzhou, China Duration: 4 Nov 2025 → 9 Nov 2025 https://2025.emnlp.org/ |
Conference
| Conference | The 2025 Conference on Empirical Methods in Natural Language Processing |
|---|---|
| Abbreviated title | EMNLP 2025 |
| Country/Territory | China |
| City | Suzhou |
| Period | 4/11/25 → 9/11/25 |
| Internet address | https://2025.emnlp.org/ |
Research Groups and Themes
- Intelligent Systems Laboratory