Complementary Learning Approach for Text Classification using Large Language Models
By: Navid Asgari, Benjamin M. Cole
In this study, we propose a structured methodology that utilizes large language models (LLMs) in a cost-efficient and parsimonious manner, integrating the strengths of scholars and machines while offsetting their respective weaknesses. Our methodology, facilitated through chain-of-thought and few-shot prompting techniques from computer science, extends best practices for co-author teams in qualitative research to human-machine teams in quantitative research. This allows humans to use abductive reasoning and natural language to interrogate not just what the machine has done but also what the human has done. Our method highlights how scholars can manage inherent weaknesses of LLMs using careful, low-cost techniques. We demonstrate how to use the methodology to interrogate human-machine rating discrepancies for a sample of 1,934 press releases announcing pharmaceutical alliances (1990-2017).
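The two prompting techniques named in the abstract, and the discrepancy-interrogation step, can be sketched in code. The example below is a minimal illustration, not the authors' implementation: the alliance labels, example press releases, and function names are all hypothetical assumptions, and the LLM call itself is omitted (only the prompt construction and the human-machine disagreement check are shown).

```python
# Hypothetical sketch of the two prompting techniques named in the abstract:
# few-shot examples plus an explicit chain-of-thought "Reasoning:" step,
# applied to classifying alliance press releases. Labels and examples
# below are illustrative assumptions, not the paper's coding scheme.

FEW_SHOT_EXAMPLES = [
    {"text": "Firms A and B will co-develop a Phase II oncology candidate.",
     "reasoning": "The release describes joint development work, "
                  "so this is an R&D alliance.",
     "label": "R&D alliance"},
    {"text": "Firm C licenses Firm D's compound for sale in Europe.",
     "reasoning": "The release centers on licensing rights, "
                  "not joint research.",
     "label": "Licensing alliance"},
]

def build_prompt(press_release: str) -> str:
    """Assemble a few-shot, chain-of-thought classification prompt."""
    parts = ["Classify each press release and explain your "
             "reasoning step by step.\n"]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Press release: {ex['text']}\n"
                     f"Reasoning: {ex['reasoning']}\n"
                     f"Label: {ex['label']}\n")
    # The trailing "Reasoning:" cue asks the model to reason before labeling.
    parts.append(f"Press release: {press_release}\nReasoning:")
    return "\n".join(parts)

def flag_discrepancies(human_labels, machine_labels):
    """Return indices where human and machine ratings disagree,
    so those cases can be interrogated abductively."""
    return [i for i, (h, m) in enumerate(zip(human_labels, machine_labels))
            if h != m]
```

In this sketch, `flag_discrepancies` surfaces only the disagreement cases, which is what keeps the approach low-cost: human attention is spent where the two raters diverge rather than on the full sample.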