Towards Efficient Post-Training via Fourier-Driven Adapter Architectures
By: Donggyun Bae, Jongil Park
Potential Business Impact:
Makes AI learn new things faster and better.
We propose a novel framework, termed Fourier-Activated Adapter (FAA), for parameter-efficient fine-tuning of large pre-trained language models. By incorporating random Fourier features into lightweight adapter modules, FAA decomposes intermediate representations into complementary low- and high-frequency components, enabling frequency-aware modulation of semantic information. This design allows the model to selectively emphasize informative frequency bands during adaptation while preserving the representational capacity of the frozen backbone. Extensive experiments on GLUE, E2E NLG, and instruction-tuning benchmarks demonstrate that FAA consistently achieves competitive or superior performance compared to existing parameter-efficient fine-tuning methods, while maintaining low computational and memory overhead. Ablation studies further verify the effectiveness of frequency-aware activation and adaptive weighting mechanisms, highlighting FAA as a robust and efficient approach for post-training large language models.
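The adapter described in the abstract can be illustrated with a minimal forward-pass sketch. The paper's exact architecture is not given here, so the shapes, the residual connection, and the per-band weight vector `alpha` below are all assumptions for illustration: a bottleneck down-projection, a random Fourier feature activation (cos/sin of random frequency projections, frozen after initialization), hypothetical adaptive weights over the frequency bands, and an up-projection added back to the frozen backbone's activation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the paper).
d_model, d_adapter, n_features = 16, 4, 8

# Stand-in for a frozen transformer hidden state.
x = rng.standard_normal(d_model)

# Adapter projections (trainable in a real fine-tuning setup).
W_down = rng.standard_normal((d_adapter, d_model)) * 0.1
W_up = rng.standard_normal((d_model, 2 * n_features)) * 0.1

# Random Fourier feature parameters, drawn once and then frozen.
# Rows of omega with small norm respond to slowly varying (low-frequency)
# structure; rows with large norm respond to high-frequency structure.
omega = rng.standard_normal((n_features, d_adapter))
b = rng.uniform(0.0, 2.0 * np.pi, n_features)

# Hypothetical adaptive weights over the cos/sin frequency bands
# (learned in the paper's adaptive-weighting mechanism; fixed here).
alpha = np.full(2 * n_features, 0.5)

def faa_forward(x):
    h = W_down @ x                                     # bottleneck projection
    proj = omega @ h + b                               # random frequency projection
    z = np.concatenate([np.cos(proj), np.sin(proj)])   # Fourier features
    return x + W_up @ (alpha * z)                      # weighted residual update

y = faa_forward(x)
print(y.shape)  # (16,)
```

Because the backbone is frozen, only `W_down`, `W_up`, and the band weights `alpha` would be trained, which is what keeps the method parameter-efficient.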
Similar Papers
Hyper Compressed Fine-Tuning of Large Foundation Models with Quantum Inspired Adapters
Machine Learning (CS)
Makes AI learn faster with less computer power.
Structure-Learnable Adapter Fine-Tuning for Parameter-Efficient Large Language Models
Computation and Language
AI learns new tasks without forgetting old ones.
Improvise, Adapt, Overcome -- Telescopic Adapters for Efficient Fine-tuning of Vision Language Models in Medical Imaging
CV and Pattern Recognition
Makes AI better at seeing medical pictures.