Robust Noise Attenuation via Adaptive Pooling of Transformer Outputs
By: Greyson Brothers
Potential Business Impact:
Helps AI models focus on the important parts of their input while ignoring distractors.
We investigate the design of pooling methods used to summarize the outputs of transformer embedding models, primarily motivated by reinforcement learning and vision applications. This work considers problems where a subset of the input vectors contains requisite information for a downstream task (signal) while the rest are distractors (noise). By framing pooling as vector quantization with the goal of minimizing signal loss, we demonstrate that the standard methods used to aggregate transformer outputs, AvgPool, MaxPool, and ClsToken, are vulnerable to performance collapse as the signal-to-noise ratio (SNR) of inputs fluctuates. We then show that an attention-based adaptive pooling method can approximate the signal-optimal vector quantizer within derived error bounds for any SNR. Our theoretical results are first validated by supervised experiments on a synthetic dataset designed to isolate the SNR problem, then generalized to standard relational reasoning, multi-agent reinforcement learning, and vision benchmarks with noisy observations, where transformers with adaptive pooling display superior robustness across tasks.
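To make the contrast concrete, the sketch below compares AvgPool with a simple attention-based adaptive pooling layer of the kind the abstract describes: a learned query attends over the token embeddings, so a few signal tokens can dominate the summary even when most tokens are noise. This is a minimal illustration assuming PyTorch; the `AttentionPool` module and its single learned query are hypothetical simplifications, not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class AttentionPool(nn.Module):
    """Adaptive pooling: a learned query attends over the N token
    embeddings, producing a weighted summary in which informative
    (signal) tokens can outweigh distractor (noise) tokens."""

    def __init__(self, dim: int):
        super().__init__()
        # Learned query vector that selects which tokens to emphasize.
        self.query = nn.Parameter(torch.randn(1, 1, dim) / dim**0.5)
        self.attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, dim) -> pooled: (batch, dim)
        q = self.query.expand(x.size(0), -1, -1)
        pooled, _ = self.attn(q, x, x)
        return pooled.squeeze(1)


B, N, D = 2, 16, 32
tokens = torch.randn(B, N, D)          # stand-in for transformer outputs
avg_pooled = tokens.mean(dim=1)        # AvgPool: every token weighted equally
adaptive = AttentionPool(D)(tokens)    # adaptive: weights depend on content
```

AvgPool's uniform weights are exactly what makes it fragile when the signal-to-noise ratio drops: adding more distractor tokens dilutes the signal's contribution, whereas the attention weights can remain concentrated on the signal tokens.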