Poodle: Seamlessly Scaling Down Large Language Models with Just-in-Time Model Replacement
By: Nils Strassenburg, Boris Glavic, Tilmann Rabl
Businesses increasingly rely on large language models (LLMs) to automate simple, repetitive tasks instead of developing custom machine learning models. LLMs require few, if any, training examples and can be used without expertise in model development. However, this comes at the cost of substantially higher resource and energy consumption compared to smaller models, which often achieve similar predictive performance for simple tasks. In this paper, we present our vision for just-in-time model replacement (JITR), where, upon identifying a recurring task in calls to an LLM, the model is transparently replaced with a cheaper alternative that performs well for this specific task. JITR retains the ease of use and low development effort of LLMs while saving significant cost and energy. We discuss the main challenges in realizing this vision: identifying recurring tasks and creating a custom replacement model. Specifically, we argue that model search and transfer learning will play a crucial role in JITR to efficiently identify and fine-tune models for a recurring task. Using our JITR prototype, Poodle, we achieve significant savings for example tasks.
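To make the replacement idea more concrete, the sketch below shows a minimal, hypothetical routing layer in the spirit of JITR. It is not Poodle's actual implementation: the names (`JITRouter`, `task_signature`, `build_small_model`), the crude prefix-based task signature, and the fixed recurrence threshold are all illustrative assumptions. The router answers prompts with the large model while counting how often a task signature recurs; once a task recurs often enough, it builds a cheaper task-specific model and transparently routes future calls for that task to it.

```python
# Hypothetical sketch of a JITR-style routing layer (not the authors' Poodle code).
# It watches prompts sent to a large model, detects recurring tasks via a crude
# task signature, and swaps in a cheaper task-specific model once a task recurs.

from collections import Counter, defaultdict
from typing import Callable, Dict, List

ModelFn = Callable[[str], str]  # a "model" here is simply: prompt in, completion out


def task_signature(prompt: str, prefix_words: int = 8) -> str:
    """Crude task identifier: the first few normalized words of the prompt.
    A real system would likely use prompt embeddings and clustering instead."""
    return " ".join(prompt.lower().split()[:prefix_words])


class JITRouter:
    def __init__(self, large_model: ModelFn,
                 build_small_model: Callable[[List[str]], ModelFn],
                 recurrence_threshold: int = 100):
        self.large_model = large_model
        # build_small_model stands in for model search plus transfer learning.
        self.build_small_model = build_small_model
        self.threshold = recurrence_threshold
        self.counts: Counter = Counter()
        self.examples: Dict[str, List[str]] = defaultdict(list)
        self.small_models: Dict[str, ModelFn] = {}

    def __call__(self, prompt: str) -> str:
        sig = task_signature(prompt)
        # If a cheap replacement already exists for this task, use it.
        if sig in self.small_models:
            return self.small_models[sig](prompt)
        # Otherwise answer with the large model and record the example.
        self.counts[sig] += 1
        self.examples[sig].append(prompt)
        answer = self.large_model(prompt)
        # Once the task has recurred often enough, create a cheaper model for it.
        if self.counts[sig] >= self.threshold:
            self.small_models[sig] = self.build_small_model(self.examples[sig])
        return answer
```

In a deployment, such a router would wrap existing LLM calls; the hard parts the paper highlights, robustly identifying recurring tasks and efficiently searching for and fine-tuning a suitable small model, sit behind the placeholder `task_signature` and `build_small_model` functions.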
Similar Papers
From Large to Super-Tiny: End-to-End Optimization for Cost-Efficient LLMs (Computation and Language). End-to-end optimization for shrinking LLMs so they run cheaper and faster.
Performance Trade-offs of Optimizing Small Language Models for E-Commerce (Artificial Intelligence). Examines the trade-offs of tailoring small language models to e-commerce tasks.
The Case for Instance-Optimized LLMs in OLAP Databases (Databases). Argues for instance-optimized LLMs to make LLM-backed OLAP queries faster and cheaper.