Revisiting Multimodal Positional Encoding in Vision–Language Models

Published: 26 Jan 2026, Last Modified: 28 Feb 2026 · ICLR 2026 Poster · CC BY 4.0
Keywords: Vision-Language Models, Multimodal Position Encoding
TL;DR: We analyze multimodal RoPE, distill three guidelines, and introduce MHRoPE and MRoPE‑I—plug‑and‑play variants that consistently outperform prior methods.
Abstract: Multimodal position encoding is essential for vision-language models, yet it has received little systematic investigation. We conduct a comprehensive analysis of multimodal Rotary Positional Embedding (RoPE) by examining its two core components: position design and frequency allocation. Through extensive experiments, we identify three key guidelines: positional coherence, full frequency utilization, and preservation of textual priors, which together ensure unambiguous layout, rich representation, and faithful transfer from the pre-trained LLM. Based on these insights, we propose Multi-Head RoPE (MHRoPE) and MRoPE-Interleave (MRoPE-I), two simple, plug-and-play variants that require no architectural changes. Our methods consistently outperform existing approaches across diverse benchmarks, with significant improvements in both general and fine-grained multimodal understanding. Code is available at https://github.com/JJJYmmm/Multimodal-RoPEs.
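The abstract's "frequency allocation" axis can be made concrete with a small illustration. A minimal sketch, assuming a standard MRoPE-style baseline that splits rotary frequencies into contiguous chunks per axis (temporal/height/width) and an interleaved variant that cycles the axes across the spectrum; this is an illustrative guess at the allocation pattern, not the authors' implementation, and the function names are hypothetical:

```python
def chunked_allocation(num_freqs: int, dims=("t", "h", "w")):
    """Baseline-style allocation: each axis gets one contiguous
    band of rotary frequencies (e.g. t gets all the high ones)."""
    per = num_freqs // len(dims)
    out = []
    for d in dims:
        out.extend([d] * per)
    # any leftover frequencies go to the last axis
    out.extend([dims[-1]] * (num_freqs - len(out)))
    return out

def interleaved_allocation(num_freqs: int, dims=("t", "h", "w")):
    """Interleaved allocation: cycle the axes so every axis is
    assigned frequencies spanning the full spectrum, rather than
    a single contiguous band."""
    return [dims[i % len(dims)] for i in range(num_freqs)]

print(chunked_allocation(6))      # ['t', 't', 'h', 'h', 'w', 'w']
print(interleaved_allocation(6))  # ['t', 'h', 'w', 't', 'h', 'w']
```

Under this reading, interleaving is one way to satisfy the paper's "full frequency utilization" guideline: no spatial or temporal axis is confined to only the high- or low-frequency bands.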
Supplementary Material: pdf
Primary Area: foundation or frontier models, including LLMs
Submission Number: 10980