GRIT -- Geometry-Aware PEFT with K-FAC Preconditioning, Fisher-Guided Reprojection, and Dynamic Rank Adaptation
By: Pritish Saha, Chandrav Rajbangshi, Rudra Goyal, and more
Potential Business Impact:
Makes AI learn better with fewer changes.
Parameter-efficient fine-tuning (PEFT) is the default way to adapt LLMs, but the widely used LoRA and QLoRA are largely geometry-agnostic: they optimize in fixed, randomly oriented low-rank subspaces with first-order descent, mostly ignoring local loss curvature. This can inflate the effective update budget and amplify drift along weakly constrained directions. We introduce GRIT, a dynamic, curvature-aware LoRA procedure that preserves the LoRA parameterization but (1) preconditions gradients in rank space using K-FAC as a natural-gradient proxy; (2) periodically reprojects the low-rank basis onto dominant Fisher eigendirections to suppress drift; and (3) adapts the effective rank from the spectrum so capacity concentrates where signal resides. Across instruction-following, comprehension, and reasoning benchmarks on LLaMA backbones, GRIT matches or surpasses LoRA and QLoRA while reducing trainable parameters by 46% on average (25--80% across tasks), without practical quality loss across prompt styles and data mixes. To model forgetting, we fit a curvature-modulated power law. Empirically, GRIT yields lower drift and a better updates-vs-retention frontier than strong PEFT-optimizer baselines (Orthogonal-LoRA, IA3, DoRA, Eff-FT, Shampoo).
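For readers who want the mechanics, the sketch below walks a single LoRA adapter through the three steps named in the abstract: K-FAC-style preconditioning of the low-rank gradients, reprojection of the basis onto dominant Fisher eigendirections, and rank selection from the spectrum. It is a toy NumPy illustration under my own assumptions (random stand-in statistics, a damping of 1e-3, a 90% energy cutoff, names like A_factor and energy_keep), not the authors' implementation or reported settings.

```python
# Minimal sketch of the three GRIT ingredients on one adapter (assumptions noted above).
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 128, 16                  # LoRA: W + B @ A, B is (d_out x r), A is (r x d_in)
A = rng.normal(0.0, 0.02, (r, d_in))
B = np.zeros((d_out, r))

# Stand-in mini-batch statistics (layer inputs and output gradients).
acts = rng.normal(size=(256, d_in))
grads_out = rng.normal(size=(256, d_out))
grad_A = rng.normal(size=A.shape)             # placeholder gradient w.r.t. A
grad_B = rng.normal(size=B.shape)             # placeholder gradient w.r.t. B

# (1) K-FAC-style preconditioning: approximate the Fisher by a Kronecker
#     product of an input factor and an output-gradient factor, then apply
#     their damped inverses to the low-rank gradients.
A_factor = acts.T @ acts / len(acts)                  # (d_in x d_in) input factor
G_factor = grads_out.T @ grads_out / len(grads_out)   # (d_out x d_out) output factor
damping = 1e-3
A_inv = np.linalg.inv(A_factor + damping * np.eye(d_in))
G_inv = np.linalg.inv(G_factor + damping * np.eye(d_out))
lr = 1e-2
B -= lr * (G_inv @ grad_B)                    # output-side preconditioning
A -= lr * (grad_A @ A_inv)                    # input-side preconditioning

# (2) Fisher-guided reprojection: periodically rotate the adapter's input
#     basis onto the dominant eigendirections of the input Fisher factor.
eigvals, eigvecs = np.linalg.eigh(A_factor)   # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# (3) Dynamic rank adaptation: keep just enough directions to cover most of
#     the spectral energy, so capacity concentrates where the signal is.
energy_keep = 0.90
cum_energy = np.cumsum(eigvals) / eigvals.sum()
new_r = min(int(np.searchsorted(cum_energy, energy_keep)) + 1, r)

P = eigvecs[:, :new_r]                        # (d_in x new_r) projection basis
A = (A @ P) @ P.T                             # project A's rows onto that subspace
print(f"effective rank after this step: {new_r} / {r}")
```

In a real training loop the Fisher factors would be running estimates collected from actual activations and backpropagated gradients, and the reprojection and rank update would run only every few hundred steps; the one-shot version here is just to make the linear algebra of each ingredient concrete.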
Similar Papers
Towards Higher Effective Rank in Parameter-efficient Fine-tuning using Khatri--Rao Product
Machine Learning (CS)
Makes AI learn better without needing more power.
FRoD: Full-Rank Efficient Fine-Tuning with Rotational Degrees for Fast Convergence
Machine Learning (CS)
Makes AI learn tasks faster with less effort.
GEM-Style Constraints for PEFT with Dual Gradient Projection in LoRA
Machine Learning (CS)
Teaches AI new things without forgetting old ones.