Data-Efficient Biomedical In-Context Learning: A Diversity-Enhanced Submodular Perspective
By: Jun Wang, Zaifu Zhan, Qixin Zhang, and more
Potential Business Impact:
Helps AI adapt to new medical tasks faster.
Recent progress in large language models (LLMs) has leveraged their in-context learning (ICL) abilities to enable quick adaptation to unseen biomedical NLP tasks. By incorporating only a few input-output examples into prompts, LLMs can rapidly perform these new tasks. While the impact of these demonstrations on LLM performance has been extensively studied, most existing approaches prioritize representativeness over diversity when selecting examples from large corpora. To address this gap, we propose Dual-Div, a diversity-enhanced, data-efficient framework for demonstration selection in biomedical ICL. Dual-Div employs a two-stage retrieval and ranking process: first, it identifies a limited set of candidate examples from a corpus by optimizing both representativeness and diversity (with optional annotation for unlabeled data); second, it ranks these candidates against test queries to select the most relevant and non-redundant demonstrations. Evaluated on three biomedical NLP tasks, named entity recognition (NER), relation extraction (RE), and text classification (TC), using LLaMA 3.1 and Qwen 2.5 for inference along with three retrievers (BGE-Large, BMRetriever, MedCPT), Dual-Div consistently outperforms baselines, achieving up to 5% higher macro-F1 scores, while demonstrating robustness to prompt permutations and class imbalance. Our findings establish that diversity in initial retrieval is more critical than ranking-stage optimization, and that limiting prompts to 3-5 demonstrations achieves the best balance of performance and efficiency.
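To make the retrieval-stage idea concrete, below is a minimal sketch (not the authors' implementation) of diversity-aware greedy demonstration selection over embedded candidate examples. It assumes a facility-location term for representativeness plus a redundancy penalty for diversity; the trade-off weight `lam`, the cosine-similarity embeddings, and the function names are illustrative assumptions, not details from the paper.

```python
# Sketch of greedy, diversity-aware demonstration selection under the
# assumptions stated above; submodular objective = facility-location
# coverage (representativeness) minus a redundancy penalty (diversity).
import numpy as np


def cosine_sim(a, b):
    """Cosine similarity between rows of a and rows of b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T


def select_demonstrations(pool_emb, k=5, lam=0.5):
    """Greedily pick k candidate examples balancing coverage and diversity.

    pool_emb: (n, d) array of candidate-example embeddings.
    lam: weight on the redundancy penalty (illustrative value).
    Returns indices of the selected demonstrations.
    """
    sim = cosine_sim(pool_emb, pool_emb)       # pairwise similarities
    n = sim.shape[0]
    selected = []
    covered = np.zeros(n)                      # best similarity of each pool item to chosen set
    for _ in range(min(k, n)):
        best_idx, best_gain = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            # Facility-location gain: how much candidate i improves coverage
            # of the whole pool (representativeness).
            coverage_gain = np.maximum(covered, sim[i]).sum() - covered.sum()
            # Redundancy: similarity to examples already chosen (diversity term).
            redundancy = max((sim[i, j] for j in selected), default=0.0)
            gain = coverage_gain - lam * redundancy
            if gain > best_gain:
                best_idx, best_gain = i, gain
        selected.append(best_idx)
        covered = np.maximum(covered, sim[best_idx])
    return selected


# Example usage with random embeddings standing in for retriever outputs
# (e.g., vectors from BGE-Large or MedCPT):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demos = select_demonstrations(rng.normal(size=(100, 64)), k=5)
    print("Selected demonstration indices:", demos)
```

In this sketch, the selected indices would then be ranked against each test query in a second stage; only the first, diversity-enhanced retrieval step is illustrated here.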
Similar Papers
Learn to Select: Exploring Label Distribution Divergence for In-Context Demonstration Selection in Text Classification
Computation and Language
Picks best examples to teach computers faster.
Enhancing Contrastive Demonstration Selection with Semantic Diversity for Robust In-Context Machine Translation
Computation and Language
Teaches computers to translate languages better.
Fairness in Multi-modal Medical Diagnosis with Demonstration Selection
CV and Pattern Recognition
Makes AI see medical images fairly for everyone.