Uncovering Scaling Laws for Large Language Models via Inverse Problems
By: Arun Verma, Zhaoxuan Wu, Zijian Zhou, and more
Potential Business Impact:
Points to cheaper, more systematic ways to build language models that reach a target level of performance.
Large Language Models (LLMs) are large-scale pretrained models that have achieved remarkable success across diverse domains. These successes have been driven by unprecedented complexity and scale in both data and computation. However, because training such models is so costly, brute-force trial-and-error approaches to improving LLMs are not feasible. Inspired by the success of inverse problems in uncovering fundamental scientific laws, this position paper advocates that inverse problems can also efficiently uncover scaling laws that guide the building of LLMs to achieve desired performance with significantly better cost-effectiveness.
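To make the framing concrete, here is a minimal sketch of what "uncovering a scaling law as an inverse problem" can look like: recovering the coefficients of a Chinchilla-style parametric law L(N, D) = E + A/N^alpha + B/D^beta from a handful of observed training runs, then extrapolating to untried budgets. The functional form, the toy data, and the initial values are assumptions for illustration only, not the method or results of the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical illustration (not from the paper): treat scaling-law discovery
# as an inverse problem -- recover the coefficients of
#     L(N, D) = E + A / N**alpha + B / D**beta
# from a few observed training runs, then extrapolate to untried budgets.

def scaling_law(x, E, A, alpha, B, beta):
    N, D = x
    return E + A / N**alpha + B / D**beta

# Toy observations: model size N (parameters), data size D (tokens), final loss.
N = np.array([1e8, 4e8, 1e9, 4e9, 1e10])
D = np.array([2e9, 8e9, 2e10, 8e10, 2e11])
loss = np.array([3.9, 3.3, 2.9, 2.6, 2.4])

# Solve the inverse problem by nonlinear least squares over the coefficients.
popt, _ = curve_fit(
    scaling_law, (N, D), loss,
    p0=[1.5, 400.0, 0.3, 400.0, 0.3],
    bounds=([0.0] * 5, [10.0, 1e6, 1.0, 1e6, 1.0]),
    maxfev=10000,
)
E, A, alpha, B, beta = popt
print(f"Fitted law: L = {E:.2f} + {A:.1f}/N^{alpha:.2f} + {B:.1f}/D^{beta:.2f}")

# Use the recovered law to predict loss for a budget never actually trained,
# e.g. 70B parameters on 1.4T tokens.
print("Predicted loss:", scaling_law((7e10, 1.4e12), *popt))
```

The point of the sketch is the direction of inference: rather than training ever-larger models to see what happens (the forward, trial-and-error route), a small number of cheap runs constrains the law's parameters, and the fitted law then guides how to spend a large compute budget.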
Similar Papers
Scaling Law Phenomena Across Regression Paradigms: Multiple and Kernel Approaches
Machine Learning (CS)
Studies how scaling-law phenomena arise in multiple and kernel regression.
Generalizing Scaling Laws for Dense and Sparse Large Language Models
Machine Learning (CS)
Extends scaling laws to predict the behavior of both dense and sparse large language models.