
Hi-WaveTST: A Hybrid High-Frequency Wavelet-Transformer for Time-Series Classification

Published: November 3, 2025 | arXiv ID: 2511.01254v1

By: Huseyin Goksu

Potential Business Impact:

Detects high-frequency patterns in time-series data that standard models miss, improving classification accuracy.

Business Areas:
DSP Hardware

Transformers have become state-of-the-art (SOTA) for time-series classification, with models like PatchTST demonstrating exceptional performance. These models rely on patching the time series and learning relationships between raw temporal data blocks. We argue that this approach is blind to critical, non-obvious high-frequency information that is complementary to the temporal dynamics. In this letter, we propose Hi-WaveTST, a novel hybrid architecture that augments the original temporal patch with a learnable, high-frequency wavelet feature stream. Our wavelet stream applies a deep Wavelet Packet Decomposition (WPD) to each patch and extracts features using a learnable Generalized Mean (GeM) pooling layer. On the UCI-HAR benchmark dataset, our hybrid model achieves a mean accuracy of 93.38% ± 0.0043, significantly outperforming the SOTA PatchTST baseline (92.59% ± 0.0039). A comprehensive ablation study shows that every component of our design (the hybrid architecture, the deep high-frequency wavelet decomposition, and the learnable GeM pooling) is essential for this state-of-the-art performance.
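The two ingredients of the wavelet stream can be sketched in a few lines. The snippet below is a minimal illustrative sketch, not the paper's implementation: it assumes a Haar filter pair for the packet decomposition (the paper does not specify the wavelet here) and shows GeM pooling with a fixed exponent `p`, which in the actual model would be a learnable parameter.

```python
import numpy as np

def haar_wpd(x, depth):
    """Full Wavelet Packet Decomposition of a 1-D patch using the Haar
    filter pair (an illustrative stand-in for the paper's wavelet).
    Unlike a plain DWT, WPD recursively splits BOTH the low-pass and
    high-pass branches, yielding 2**depth equal-width sub-bands."""
    nodes = [np.asarray(x, dtype=float)]
    for _ in range(depth):
        nxt = []
        for n in nodes:
            nxt.append((n[0::2] + n[1::2]) / np.sqrt(2))  # low-pass (approximation)
            nxt.append((n[0::2] - n[1::2]) / np.sqrt(2))  # high-pass (detail)
        nodes = nxt
    return nodes  # 2**depth leaf sub-bands

def gem_pool(band, p=3.0, eps=1e-6):
    """Generalized Mean (GeM) pooling of one sub-band.
    p = 1 recovers average pooling; p -> infinity approaches max pooling.
    In the model, p is a learnable parameter trained end-to-end."""
    return float(np.mean(np.clip(np.abs(band), eps, None) ** p) ** (1.0 / p))

# One synthetic 64-sample patch: a sinusoid plus noise (hypothetical data).
rng = np.random.default_rng(0)
patch = np.sin(np.linspace(0, 8 * np.pi, 64)) + 0.1 * rng.standard_normal(64)

bands = haar_wpd(patch, depth=3)                      # 8 sub-bands of length 8
features = np.array([gem_pool(b) for b in bands])     # one GeM feature per band
```

The resulting 2**depth-dimensional feature vector is what gets concatenated with (or fed alongside) the raw temporal patch embedding in the hybrid architecture.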

Country of Origin
🇹🇷 Turkey

Page Count
4 pages

Category
Electrical Engineering and Systems Science:
Signal Processing