Abstract: Vision Transformers (ViTs) have achieved remarkable success across various vision tasks. However, ViTs inherently lack spatial inductive biases, necessitating explicit position embedding (PE) schemes. Recently, many studies have adopted non-fixed-length position embeddings (nFPEs) over traditional absolute or relative PEs. These nFPEs, typically implemented with inductive modules such as convolutional layers, offer advantages such as adaptability to varying token sequence lengths and the potential for translation equivariance. However, our analysis reveals that prevalent nFPE methods often yield positional information that is significantly skewed by feature content, an issue that has not been examined before. In this paper, we argue that nFPEs in prior works share two common limitations. First, nFPEs exhibit a significant semantic bias: they are strongly affected and distorted by the semantic content of the input feature maps, leading to indistinct positional information. Second, although the intrinsic token order remains constant throughout the network, nFPEs redundantly recompute positional information within each transformer block, leading to inefficiency and potentially inconsistent PE application. To overcome these drawbacks, we propose Centralized Position Embedding (CPE). The core idea of CPE is to replace the scattered PE modules in individual transformer blocks with a unified PE network per stage, whose output is broadcast to all transformer blocks within that stage. This centralized design affords the PE network a significantly larger receptive field at negligible computational overhead, facilitating the extraction of less biased and more consistent positional information and thus addressing the aforementioned limitations of nFPEs. By applying the proposed CPE to various ViTs on several vision tasks, we show that CPE yields more precise positional information, leading to consistent performance improvements over existing PE strategies and supporting our arguments.
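The stage-level broadcasting described above can be illustrated with a minimal PyTorch-style sketch; this is not the authors' implementation, and the class names (StagePE, CPEStage), the depthwise-convolution PE network, and the pre-block addition of the embedding are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's code): one positional network per
# stage whose output is computed once and reused by every transformer block in
# that stage, instead of a separate PE module inside each block.
import torch
import torch.nn as nn


class StagePE(nn.Module):
    """Single PE network per stage; a large-kernel depthwise conv is one plausible choice."""

    def __init__(self, dim: int, kernel_size: int = 7):
        super().__init__()
        self.proj = nn.Conv2d(dim, dim, kernel_size,
                              padding=kernel_size // 2, groups=dim)

    def forward(self, x):  # x: (B, C, H, W) feature map at the stage input
        return self.proj(x)


class CPEStage(nn.Module):
    """One stage: compute the position embedding once, broadcast it to all blocks."""

    def __init__(self, dim: int, depth: int, num_heads: int = 4):
        super().__init__()
        self.pe = StagePE(dim)
        self.blocks = nn.ModuleList(
            [nn.TransformerEncoderLayer(dim, num_heads, batch_first=True)
             for _ in range(depth)]
        )

    def forward(self, x):  # x: (B, C, H, W)
        B, C, H, W = x.shape
        pos = self.pe(x).flatten(2).transpose(1, 2)    # (B, N, C), computed once per stage
        tokens = x.flatten(2).transpose(1, 2)          # (B, N, C)
        for blk in self.blocks:                        # same PE reused by every block
            tokens = blk(tokens + pos)
        return tokens.transpose(1, 2).reshape(B, C, H, W)


# Example usage: a 4-block stage on a 14x14 feature map with 64 channels.
out = CPEStage(dim=64, depth=4)(torch.randn(2, 64, 14, 14))
```

In this sketch the per-stage PE network can use a much larger kernel than a per-block module of comparable total cost, which is one way to realize the larger receptive field the abstract refers to.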
DOI: 10.1109/access.2025.3629376