Archiving Submission: No (non-archival)
Previous Venue If Non Archival: ICML 2025 Main Track (https://icml.cc/virtual/2025/poster/46284)
Keywords: Pooling, Attention, Vector Quantization, AdaPool, Transformer, Robustness
TL;DR: Existing methods for aggregating transformer embeddings (MaxPool, AvgPool, ClsToken) are vulnerable to noisy inputs; we show that an attention-based pooling mechanism is provably robust across the spectrum of input signal-to-noise ratios.
Abstract: We investigate the design of pooling methods used to summarize the outputs of transformer embedding models, primarily motivated by reinforcement learning and vision applications. This work considers problems where a subset of the input vectors contains the information required for a downstream task (signal) while the rest are distractors (noise). By framing pooling as vector quantization with the goal of minimizing signal loss, we demonstrate that the standard methods used to aggregate transformer outputs (AvgPool, MaxPool, and ClsToken) are vulnerable to performance collapse as the signal-to-noise ratio (SNR) of the inputs fluctuates. We then show that an attention-based *adaptive pooling* method can approximate the signal-optimal vector quantizer within derived error bounds for any SNR. Our theoretical results are first validated by supervised experiments on a synthetic dataset designed to isolate the SNR problem, then generalized to standard relational reasoning, multi-agent reinforcement learning, and vision benchmarks with noisy observations, where transformers with adaptive pooling display superior robustness across tasks.
Submission Number: 10
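For context on the pooling variants named in the abstract, below is a minimal, illustrative PyTorch sketch contrasting AvgPool, MaxPool, and a simple attention-based pooling with a learned query. The `AttentionPool` module and its parameterization are assumptions made for illustration, not the paper's exact adaptive pooling mechanism.

```python
# Illustrative sketch only: AvgPool/MaxPool vs. a simple attention-based
# pooling with a learned query. Not the paper's exact adaptive pooling.
import torch
import torch.nn as nn


def avg_pool(x: torch.Tensor) -> torch.Tensor:
    # x: (batch, num_tokens, dim) -> (batch, dim), uniform weight per token
    return x.mean(dim=1)


def max_pool(x: torch.Tensor) -> torch.Tensor:
    # Element-wise max over the token dimension
    return x.max(dim=1).values


class AttentionPool(nn.Module):
    """Pool token embeddings with a learned query (hypothetical example)."""

    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Parameter(torch.randn(dim))
        self.key = nn.Linear(dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, dim)
        keys = self.key(x)                               # (B, N, D)
        scores = keys @ self.query / x.shape[-1] ** 0.5  # (B, N)
        weights = scores.softmax(dim=-1).unsqueeze(-1)   # (B, N, 1)
        return (weights * x).sum(dim=1)                  # (B, D)


if __name__ == "__main__":
    x = torch.randn(2, 16, 64)  # batch of 16-token embeddings
    pool = AttentionPool(dim=64)
    print(avg_pool(x).shape, max_pool(x).shape, pool(x).shape)
```

The learned attention weights let the pooled summary concentrate on signal tokens and down-weight distractors, which is the intuition behind the robustness claim in the abstract.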