Score: 2

LAET: A Layer-wise Adaptive Ensemble Tuning Framework for Pretrained Language Models

Published: November 14, 2025 | arXiv ID: 2511.11315v2

By: Jawad Ibn Ahad, Muhammad Rafsan Kabir, Robin Krambroeckers, and more

Potential Business Impact:

Cuts the cost and compute needed to fine-tune financial language models.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Natural Language Processing (NLP) has transformed the financial industry, enabling advancements in areas such as textual analysis, risk management, and forecasting. Large language models (LLMs) like BloombergGPT and FinMA have set new benchmarks across various financial NLP tasks, including sentiment analysis, stock movement prediction, and credit risk assessment. Furthermore, FinMA-ES, a bilingual financial LLM, has also demonstrated strong performance on the FLARE and FLARE-ES benchmarks. However, the high computational demands of these models limit accessibility for many organizations. To address this, we propose Layer-wise Adaptive Ensemble Tuning (LAET), a novel strategy that selectively fine-tunes the most effective layers of pre-trained LLMs by analyzing hidden state representations while freezing less critical layers. LAET significantly reduces computational overhead while enhancing task-specific performance. Our approach shows strong results in financial NLP tasks, outperforming existing benchmarks and state-of-the-art LLMs such as GPT-4, even with smaller LLMs ($\sim$3B parameters). This work bridges cutting-edge financial NLP research and real-world deployment with efficient and scalable models for financial applications.
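The abstract describes the core idea of LAET: score each layer's hidden-state representations for task relevance, fine-tune only the highest-scoring layers, and freeze the rest. The paper does not give its exact scoring criterion here, so the sketch below uses a hypothetical linear-separability proxy (distance between class centroids, normalized by spread) purely to illustrate the layer-selection step; the function name and criterion are assumptions, not the authors' method.

```python
import numpy as np

def select_layers_to_tune(hidden_states, labels, k):
    """Pick the k layers whose hidden states best separate the task labels.

    hidden_states: list of (n_samples, dim) arrays, one per transformer layer.
    labels: binary array of shape (n_samples,).
    Returns sorted indices of the layers to keep trainable; all other
    layers would be frozen before fine-tuning.
    """
    scores = []
    for h in hidden_states:
        # Proxy relevance score: gap between class centroids,
        # normalized by the overall spread of the representations.
        mu0 = h[labels == 0].mean(axis=0)
        mu1 = h[labels == 1].mean(axis=0)
        spread = h.std() + 1e-8
        scores.append(np.linalg.norm(mu1 - mu0) / spread)
    # Indices of the k highest-scoring layers (these stay trainable).
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    return sorted(top)
```

In a real fine-tuning loop, the returned indices would drive `requires_grad`: parameters of selected layers stay trainable while all others are frozen, which is where the computational savings come from.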

Country of Origin
🇧🇩 Bangladesh

Repos / Data Links

Page Count
10 pages

Category
Computer Science:
Computation and Language