Authors Should Annotate
By: Marcus Ma, Cole Johnson, Nolan Bridges, and more
Potential Business Impact:
Lets writers label their own words to train better AI.
The status quo for labeling text is third-party annotation, but in many cases information directly from the document's source is preferable to a third-person proxy, especially for egocentric features like sentiment and belief. We introduce author labeling, an annotation technique in which the document's own writer annotates the data at the moment of creation. We collaborate with a commercial chatbot serving over 10,000 users to deploy an author labeling annotation system for subjective features related to product recommendation. The system identifies task-relevant queries, generates on-the-fly labeling questions, and records authors' answers in real time. We train and deploy an online-learning model architecture for product recommendation that continuously improves from author labeling and find that it achieves a 534% increase in click-through rate over an industry advertising baseline running concurrently. We then compare the quality and practicality of author labeling against three traditional annotation approaches for sentiment analysis and find author labeling to be higher quality, faster to acquire, and cheaper. These findings reinforce existing literature showing that annotations, especially for egocentric and subjective beliefs, are significantly higher quality when labeled by the author rather than a third party. To facilitate broader scientific adoption, we release an author labeling service for the research community at academic.echollm.io.
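The pipeline the abstract describes (detect a task-relevant query, pose a labeling question to the author, and fold the answer into a continuously updating recommender) maps naturally onto an online learner. Below is a minimal sketch, assuming scikit-learn's SGDClassifier with partial_fit as the online model and a hashing featurizer; detect_intent and ask_author are hypothetical stand-ins, not the paper's actual components.

```python
# Minimal sketch of the author-labeling loop described above.
# detect_intent and ask_author are hypothetical stand-ins; the
# deployed system's components are not published here.
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**18)   # stateless text featurizer
model = SGDClassifier(loss="log_loss")             # linear model supporting partial_fit
CLASSES = np.array([0, 1])                         # 0 = no purchase intent, 1 = intent

def detect_intent(query: str) -> bool:
    """Crude stand-in for the system's task-relevant query detector."""
    return any(kw in query.lower() for kw in ("recommend", "buy", "best"))

def ask_author(query: str) -> int:
    """Stand-in for the on-the-fly labeling question; in deployment the
    document's author answers at the moment of creation."""
    reply = input(f'Quick question about "{query}": shopping for this? [y/n] ')
    return 1 if reply.strip().lower().startswith("y") else 0

def handle_query(query: str) -> None:
    """Record an author label and fold it into the online recommender."""
    if not detect_intent(query):
        return
    label = ask_author(query)                       # author labels their own text
    X = vectorizer.transform([query])
    model.partial_fit(X, [label], classes=CLASSES)  # one-example online update
```

The hashing vectorizer keeps featurization stateless, so each author label can update the model immediately without refitting a vocabulary, matching the continuous-improvement setup the abstract describes.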
Similar Papers
Evaluating Large Language Models as Expert Annotators
Computation and Language
Computers learn to label text like experts.
Can Third-parties Read Our Emotions?
Computation and Language
Computers get better at guessing feelings from writing.
Annotation and modeling of emotions in a textual corpus: an evaluative approach
Computation and Language
Computers understand feelings in writing.