Towards Efficient Post-Training via Fourier-Driven Adapter Architectures

Published: December 26, 2025 | arXiv ID: 2512.22378v1

By: Donggyun Bae, Jongil Park

Potential Business Impact:

Lets large language models be fine-tuned for new tasks with less compute and memory, lowering the cost of adapting AI systems.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We propose a novel framework, termed Fourier-Activated Adapter (FAA), for parameter-efficient fine-tuning of large pre-trained language models. By incorporating random Fourier features into lightweight adapter modules, FAA decomposes intermediate representations into complementary low- and high-frequency components, enabling frequency-aware modulation of semantic information. This design allows the model to selectively emphasize informative frequency bands during adaptation while preserving the representational capacity of the frozen backbone. Extensive experiments on GLUE, E2E NLG, and instruction-tuning benchmarks demonstrate that FAA consistently achieves competitive or superior performance compared to existing parameter-efficient fine-tuning methods, while maintaining low computational and memory overhead. Ablation studies further verify the effectiveness of frequency-aware activation and adaptive weighting mechanisms, highlighting FAA as a robust and efficient approach for post-training large language models.
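The abstract does not spell out the adapter's exact layout, so the following is a minimal PyTorch sketch of the general idea, assuming a standard down-/up-projection bottleneck. The class name `FourierActivatedAdapter`, the variance parameters `sigma_low`/`sigma_high`, and the softmax gating are illustrative assumptions standing in for the paper's frequency-aware activation and adaptive weighting mechanisms, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FourierActivatedAdapter(nn.Module):
    """Hypothetical sketch of a frequency-aware adapter.

    Maps hidden states through a low-rank bottleneck, applies frozen
    random Fourier features to obtain low- and high-frequency components,
    and mixes them with learned adaptive weights before projecting back.
    """

    def __init__(self, d_model: int, r: int = 8,
                 sigma_low: float = 0.1, sigma_high: float = 1.0):
        super().__init__()
        self.down = nn.Linear(d_model, r, bias=False)    # low-rank down-projection
        self.up = nn.Linear(2 * r, d_model, bias=False)  # projection back to d_model
        # Frozen random Fourier frequencies: small-variance draws yield
        # slowly varying (low-frequency) features, large-variance draws
        # yield rapidly oscillating (high-frequency) features.
        self.register_buffer("w_low", torch.randn(r, r) * sigma_low)
        self.register_buffer("w_high", torch.randn(r, r) * sigma_high)
        # Learned scalar gates for adaptive frequency weighting (assumed form).
        self.alpha = nn.Parameter(torch.ones(2))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        z = self.down(h)
        low = torch.cos(z @ self.w_low)     # smooth, low-frequency component
        high = torch.sin(z @ self.w_high)   # oscillatory, high-frequency component
        gates = torch.softmax(self.alpha, dim=0)
        mixed = torch.cat([gates[0] * low, gates[1] * high], dim=-1)
        return h + self.up(mixed)           # residual update to the frozen backbone
```

Under these assumptions, the Fourier frequencies are frozen buffers, so the trainable parameters reduce to the two low-rank projections and the two gate scalars, which is consistent with the low computational and memory overhead the abstract claims.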

Country of Origin
🇰🇷 Korea, Republic of

Page Count
17 pages

Category
Computer Science:
Computation and Language