CoPE: A Small Language Model for Steerable and Scalable Content Labeling
By: Samidh Chakrabarti, David Willner, Kevin Klyman, et al.
This paper details the methodology behind CoPE, a policy-steerable small language model capable of fast and accurate content labeling. We present a novel training curriculum called Contradictory Example Training that enables the model to learn policy interpretation rather than mere policy memorization. We also present Binocular Labeling, a novel method for generating content policies that enables rapid construction of unambiguous training datasets. When evaluated across seven different harm areas, CoPE matches or exceeds the accuracy of frontier models while being only about 1% of their size. We openly release a 9 billion parameter version of the model that can be run on a single consumer-grade GPU. Models like CoPE represent a paradigm shift for classifier systems. By turning an ML task into a policy writing task, CoPE opens up new design possibilities for the governance of online platforms.
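To make the "policy writing task" framing concrete, below is a minimal sketch of how a policy-steerable labeler of this kind might be invoked with Hugging Face transformers. The checkpoint id, prompt format, and label vocabulary are placeholders rather than the paper's released artifacts, and 4-bit quantization is one assumption for fitting a 9B model on a single consumer-grade GPU.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Placeholder checkpoint id; substitute the actual released CoPE weights.
MODEL_ID = "cope-9b"

# 4-bit quantization is one way a 9B model fits in a single consumer GPU's memory.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=quant_config,
    device_map="auto",
)

# The "policy writing task": steering the labeler means editing this text,
# not retraining a classifier. Prompt format here is illustrative only.
policy = (
    "Content violates this policy if it contains threats of physical harm "
    "directed at an identifiable person or group."
)
content = "Example post text to be labeled."

prompt = (
    f"Policy:\n{policy}\n\n"
    f"Content:\n{content}\n\n"
    "Label (VIOLATING or NON_VIOLATING):"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=5, do_sample=False)

# Decode only the newly generated tokens, i.e. the label.
label = tokenizer.decode(
    output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(label.strip())

The design point this sketch illustrates is that changing labeling behavior amounts to rewriting the policy text passed to the model, rather than collecting new training data and retraining a bespoke classifier.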