Learning Time-Scale Invariant Population-Level Neural Representations
By: Eshani Patel, Yisong Yue, Geeling Chau
Potential Business Impact:
Makes brain-reading tools work better with different data.
General-purpose foundation models for neural time series can help accelerate neuroscientific discoveries and enable applications such as brain-computer interfaces (BCIs). A key component in scaling these models is population-level representation learning, which leverages information across channels to capture spatial as well as temporal structure. Recent work has shown that such population-level representations can be learned efficiently on top of pretrained temporal encoders and yield features useful for decoding a variety of downstream tasks. However, these models remain sensitive to mismatches in preprocessing, particularly in time-scales, between pretraining and downstream settings. We systematically examine how time-scale mismatches affect generalization and find that existing representations lack invariance. To address this, we introduce Time-scale Augmented Pretraining (TSAP), which consistently improves robustness to different time-scales across decoding tasks and builds invariance in the representation space. These results highlight handling preprocessing diversity as a key step toward building generalizable neural foundation models.
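To make the idea of time-scale augmentation concrete, here is a minimal sketch of how one might randomly rescale neural time series during pretraining. The function name, the interpolation-based resampling, and the scale range are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def time_scale_augment(x: np.ndarray, scale_range=(0.5, 2.0), rng=None) -> np.ndarray:
    """Resample a (channels, time) recording to a random time-scale.

    The stretched or compressed signal is interpolated back onto the original
    number of samples, so the temporal encoder sees a fixed-length input whose
    effective time-scale has changed. (Illustrative sketch, not TSAP itself.)
    """
    rng = np.random.default_rng() if rng is None else rng
    n_channels, n_time = x.shape
    scale = rng.uniform(*scale_range)            # e.g. 0.5x to 2x the original time-scale
    new_len = max(2, int(round(n_time * scale)))

    # Stretch/compress each channel, then resample back to n_time points.
    src_t = np.linspace(0.0, 1.0, n_time)
    mid_t = np.linspace(0.0, 1.0, new_len)
    stretched = np.stack([np.interp(mid_t, src_t, ch) for ch in x])
    return np.stack([np.interp(src_t, mid_t, ch) for ch in stretched])

# Usage: apply the augmentation per trial inside the pretraining loop.
batch = np.random.randn(8, 64, 1000)             # (batch, channels, time)
augmented = np.stack([time_scale_augment(trial) for trial in batch])
print(augmented.shape)                           # (8, 64, 1000)
```

Training the population-level encoder on such rescaled copies is one way to encourage representations that do not depend on the exact preprocessing time-scale.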
Similar Papers
BaRISTA: Brain Scale Informed Spatiotemporal Representation of Human Intracranial Neural Activity
Machine Learning (CS)
Helps computers understand brain signals better.
Learning Scalable Temporal Representations in Spiking Neural Networks Without Labels
Emerging Technologies
Teaches computers to learn from pictures without labels.
On the Internal Semantics of Time-Series Foundation Models
Machine Learning (CS)
Shows how computers understand time patterns.