Teaching LLMs How to Learn with Contextual Fine-Tuning
By: Younwoo Choi, Muhammad Adil Asif, Ziwen Han, and more
Potential Business Impact:
Teaches computers to learn new things faster.
Prompting Large Language Models (LLMs), i.e., providing context on the expected mode of operation, is an effective way to steer the outputs of such models to satisfy human desiderata after they have been trained. But in rapidly evolving domains, there is often a need to fine-tune LLMs to improve either the kind of knowledge in their memory or their ability to perform open-ended reasoning in new domains. When humans learn new concepts, we often do so by linking the new material we are studying to concepts we have already learned. To that end, we ask, "Can prompting help us teach LLMs how to learn?" In this work, we study a novel generalization of instruction tuning, called contextual fine-tuning, to fine-tune LLMs. Our method leverages instructional prompts designed to mimic human cognitive strategies in learning and problem-solving to guide the learning process during training, aiming to improve the model's interpretation and understanding of domain-specific knowledge. We empirically demonstrate that this simple yet effective modification improves the ability of LLMs to be fine-tuned rapidly on new datasets in both the medical and financial domains.
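The core idea of contextual fine-tuning, as the abstract describes it, is to prepend an instructional prompt to each training example during fine-tuning so that the gradient updates happen in the context of a learning strategy. A minimal sketch of that data construction is below; the prompt wording, the toy whitespace tokenizer, and the loss-masking convention (`-100`, as used by common training frameworks) are illustrative assumptions, not the paper's exact setup.

```python
# Contextual fine-tuning data construction (sketch): prepend an instructional
# prompt to each domain example, and mask the prompt tokens out of the loss so
# the model is conditioned on the learning-strategy prompt but only trained to
# predict the domain text itself.

IGNORE_INDEX = -100  # label value conventionally ignored by cross-entropy loss


def tokenize(text):
    # Stand-in whitespace tokenizer; a real setup would use the model's tokenizer.
    return text.split()


def build_example(context_prompt, domain_text):
    prompt_toks = tokenize(context_prompt)
    text_toks = tokenize(domain_text)
    input_toks = prompt_toks + text_toks
    # Loss is computed only on the domain text, not on the contextual prompt.
    labels = [IGNORE_INDEX] * len(prompt_toks) + text_toks
    return input_toks, labels


# Hypothetical contextual prompt mimicking a human learning strategy.
prompt = "Relate the following material to concepts you already know."
text = "Beta-blockers reduce heart rate by blocking adrenaline receptors."
inputs, labels = build_example(prompt, text)
```

At training time, each `(inputs, labels)` pair would be fed to a causal language model as usual; the masking ensures the prompt steers the learning signal without the model memorizing the prompt text.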
Similar Papers
Instruction Tuning and CoT Prompting for Contextual Medical QA with LLMs
Computation and Language
Helps computers answer medical questions better.
Beyond Correctness: Evaluating and Improving LLM Feedback in Statistical Education
Other Statistics
Helps teachers give better feedback to students.
Medical Knowledge Intervention Prompt Tuning for Medical Image Classification
CV and Pattern Recognition
Helps AI understand medical images better.