COSMOS: Predictable and Cost-Effective Adaptation of LLMs
By: Jiayu Wang, Aws Albarghouthi, Frederic Sala
Potential Business Impact:
Finds the best AI settings without wasting computing power.
Large language models (LLMs) achieve remarkable performance across numerous tasks by using a diverse array of adaptation strategies. However, optimally selecting a model and adaptation strategy under resource constraints is challenging and often requires extensive experimentation. We investigate whether it is possible to accurately predict both performance and cost without expensive trials. We formalize the strategy selection problem for LLMs and introduce COSMOS, a unified prediction framework that efficiently estimates adaptation outcomes at minimal cost. We instantiate and study the capability of our framework via a pair of powerful predictors: embedding-augmented lightweight proxy models to predict fine-tuning performance, and low-sample scaling laws to forecast retrieval-augmented in-context learning. Extensive evaluation across eight representative benchmarks demonstrates that COSMOS achieves high prediction accuracy while reducing computational costs by 92.72% on average, and up to 98.71% in resource-intensive scenarios. Our results show that efficient prediction of adaptation outcomes is not only feasible but can substantially reduce the computational overhead of LLM deployment while maintaining performance standards.
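As a rough illustration of the two predictor families named in the abstract, the sketch below shows (1) a lightweight proxy regressor trained on embeddings to estimate fine-tuning outcomes and (2) a low-sample scaling-law fit extrapolated to a larger in-context budget. All data, feature choices, and the exact power-law form are assumptions for illustration only, not the authors' implementation.

```python
# Hypothetical sketch of the two COSMOS-style predictor families.
# Data, features, and functional forms are illustrative assumptions.

import numpy as np
from scipy.optimize import curve_fit
from sklearn.linear_model import Ridge

# --- Predictor 1: embedding-augmented lightweight proxy for fine-tuning ---
# Train a cheap regressor on task embeddings to estimate fine-tuned accuracy
# without running the full fine-tuning job.
rng = np.random.default_rng(0)
task_embeddings = rng.normal(size=(50, 16))       # stand-in task features
finetune_scores = rng.uniform(0.5, 0.9, size=50)  # toy observed accuracies

proxy = Ridge(alpha=1.0).fit(task_embeddings, finetune_scores)
predicted_ft = proxy.predict(task_embeddings[:1])  # forecast for a new task

# --- Predictor 2: low-sample scaling law for retrieval-augmented ICL ---
# Fit a saturating power law to accuracy measured at a few small shot counts,
# then extrapolate to a larger budget instead of evaluating it directly.
def scaling_law(n_shots, a, b, c):
    return a - b * np.power(n_shots, -c)  # assumed power-law form

shots = np.array([1, 2, 4, 8])
accuracy = np.array([0.55, 0.61, 0.66, 0.70])  # cheap low-sample measurements

params, _ = curve_fit(scaling_law, shots, accuracy, p0=[0.8, 0.3, 0.5],
                      maxfev=10000)
predicted_icl = scaling_law(64, *params)  # forecast at 64 retrieved examples

print(f"proxy fine-tune forecast: {predicted_ft[0]:.3f}")
print(f"ICL forecast at 64 shots: {predicted_icl:.3f}")
```

The key point is that both predictors consume only cheap signals (embeddings, a handful of small-scale evaluations), so the expensive adaptation runs themselves are avoided.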
Similar Papers
AdaptiveLLM: A Framework for Selecting Optimal Cost-Efficient LLM for Code-Generation Based on CoT Length
Software Engineering
Chooses the best AI for coding tasks.
From Large to Super-Tiny: End-to-End Optimization for Cost-Efficient LLMs
Computation and Language
Makes smart computer programs cheaper and faster.
Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training
Distributed, Parallel, and Cluster Computing
Helps train big computer brains faster and more cheaply.