A Lightweight Framework for Trigger-Guided LoRA-Based Self-Adaptation in LLMs
By: Jiacheng Wei, Faguo Wu, Xiao Zhang
Potential Business Impact:
Lets AI learn new things while solving problems.
Large language models cannot continuously adapt to and learn from new data during reasoning at inference time. To address this limitation, we propose decomposing complex reasoning tasks into atomic subtasks and introduce SAGE, a trigger-guided dynamic fine-tuning framework that enables adaptive parameter updates during inference-time reasoning. SAGE consists of three key components: (1) a Trigger module that detects reasoning failures in real time through multiple evaluation metrics; (2) a Trigger Buffer module that clusters anomalous samples using a streaming clustering process with HDBSCAN, followed by stability checks and similarity-based merging; and (3) a LoRA Store module that dynamically optimizes parameter updates via an adapter pool for knowledge retention. Evaluation results show that SAGE achieves strong accuracy, robustness, and stability on atomic reasoning subtasks through dynamic knowledge updating at test time.
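The trigger-and-buffer mechanism described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: all names (`TriggerBuffer`, `trigger`, `merge_threshold`) are invented here, and a simple cosine-similarity grouping with running-mean centroids stands in for the paper's HDBSCAN streaming clustering and stability checks.

```python
# Illustrative sketch of a trigger-guided buffer, assuming a failure trigger
# fires when any evaluation metric drops below its threshold, and anomalous
# samples are grouped by embedding similarity (stand-in for HDBSCAN).
import math


def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


def trigger(metrics, thresholds):
    """Fire when any evaluation metric falls below its threshold."""
    return any(metrics[k] < thresholds[k] for k in thresholds)


class TriggerBuffer:
    """Collects anomalous samples into clusters; near-duplicate clusters merge."""

    def __init__(self, merge_threshold=0.9):
        self.clusters = []  # list of (centroid, samples)
        self.merge_threshold = merge_threshold

    def add(self, embedding, sample):
        """Assign a sample to the closest cluster, or open a new one."""
        for i, (centroid, samples) in enumerate(self.clusters):
            if cosine(centroid, embedding) >= self.merge_threshold:
                samples.append(sample)
                n = len(samples)
                # update centroid as a running mean of member embeddings
                centroid = [(c * (n - 1) + e) / n for c, e in zip(centroid, embedding)]
                self.clusters[i] = (centroid, samples)
                return i
        self.clusters.append((list(embedding), [sample]))
        return len(self.clusters) - 1
```

In a full system, each stabilized cluster would then drive a LoRA fine-tuning pass, with the resulting adapter stored in a pool keyed by cluster identity so earlier knowledge is retained.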
Similar Papers
Self-Abstraction from Grounded Experience for Plan-Guided Policy Refinement
Artificial Intelligence
Teaches computers to fix their own code better.
$\texttt{SAGE}$: A Generic Framework for LLM Safety Evaluation
Cryptography and Security
Tests AI to find hidden dangers in long talks.
SAGE: A Realistic Benchmark for Semantic Understanding
Artificial Intelligence
Tests if AI truly understands words, not just patterns.