Score: 2

Multi-Attribute Multi-Grained Adaptation of Pre-Trained Language Models for Text Understanding from Bayesian Perspective

Published: March 8, 2025 | arXiv ID: 2503.06085v1

By: You Zhang, Jin Wang, Liang-Chih Yu, and more

Potential Business Impact:

Improves how language models understand text by accounting for group-specific patterns in the data (e.g., differences across users, products, or domains).

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Current neural networks often employ multi-domain-learning or attribute-injecting mechanisms to incorporate non-independent and identically distributed (non-IID) information for text understanding tasks, capturing both individual sample characteristics and the relationships among samples. However, the extent of the impact of non-IID information, and how these methods affect pre-trained language models (PLMs), remains unclear. This study revisits, from a Bayesian perspective, the assumption that non-IID information enhances PLM performance, unearthing and integrating both non-IID and IID features. Furthermore, we propose a multi-attribute multi-grained framework for PLM adaptation (M2A), which combines multi-attribute and multi-grained views to mitigate uncertainty in a lightweight manner. We evaluate M2A on widely used text-understanding datasets and demonstrate its superior performance, particularly when data are implicitly non-IID and as PLMs scale larger.
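The abstract describes M2A as lightweight adaptation that combines a shared (IID) view with attribute-specific (non-IID) views. Below is a minimal sketch of one plausible reading, assuming bottleneck adapters over a frozen PLM representation and a uniform, Bayesian-model-average-style mixing of the views; all module names, shapes, and the uniform mixture weights are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a shared (IID) adapter mixed with
# attribute-specific (non-IID) adapters over a frozen PLM hidden state.
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Lightweight down-project / up-project adapter with a residual connection."""
    def __init__(self, hidden: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.up(torch.relu(self.down(h)))

class MultiAttributeHead(nn.Module):
    """Hypothetical reading of M2A: a shared adapter captures what all samples
    have in common (IID), while each attribute adapter (e.g., user, product,
    domain) captures group-specific shifts (non-IID); averaging their outputs
    before the classifier acts as a simple Bayesian model average."""
    def __init__(self, n_attrs: int, hidden: int = 768, n_classes: int = 5):
        super().__init__()
        self.shared = BottleneckAdapter(hidden)
        self.per_attr = nn.ModuleList(BottleneckAdapter(hidden) for _ in range(n_attrs))
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: [batch, hidden] pooled PLM representation (the PLM itself stays frozen).
        views = [self.shared(h)] + [adapter(h) for adapter in self.per_attr]
        mixed = torch.stack(views, dim=0).mean(dim=0)  # uniform mixture weights
        return self.classifier(mixed)

# Toy usage with random features standing in for pooled PLM outputs.
head = MultiAttributeHead(n_attrs=2)  # e.g., user and product attributes
logits = head(torch.randn(4, 768))    # -> [4, 5] class logits
print(logits.shape)
```

Uniform averaging is the simplest possible mixture; learned or uncertainty-weighted mixing of the views would be a natural variation consistent with the paper's stated goal of mitigating uncertainty.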

Country of Origin
🇨🇳 🇹🇼 China, Taiwan, Province of China

Repos / Data Links

Page Count
12 pages

Category
Computer Science:
Computation and Language