Copyright Detection in Large Language Models: An Ethical Approach to Generative AI Development
By: David Szczecina, Senan Gaffori, Edmond Li
Potential Business Impact:
Lets creators check if AI used their work.
The widespread use of Large Language Models (LLMs) raises critical concerns regarding the unauthorized inclusion of copyrighted content in training data. Existing detection frameworks, such as DE-COP, are computationally intensive and largely inaccessible to independent creators. As legal scrutiny increases, there is a pressing need for a scalable, transparent, and user-friendly solution. This paper introduces an open-source copyright detection platform that enables content creators to verify whether their work was used in LLM training datasets. Our approach enhances existing methodologies by improving ease of use, strengthening similarity detection, optimizing dataset validation, and reducing computational overhead by 10-30% through efficient API calls. With an intuitive user interface and a scalable backend, this framework increases transparency and ethical compliance in AI development, laying the foundation for further research in responsible AI development and copyright enforcement.
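The similarity-detection step described above can be sketched in miniature. The snippet below is an illustrative example only, not the paper's actual pipeline: it compares a creator's original passage against a model's output and flags close matches that might suggest memorization. The function names (`similarity_score`, `flag_possible_training_inclusion`) and the 0.8 threshold are assumptions chosen for demonstration.

```python
from difflib import SequenceMatcher


def similarity_score(original: str, model_output: str) -> float:
    """Return a ratio in [0, 1] of matching subsequences between two texts."""
    return SequenceMatcher(None, original.lower(), model_output.lower()).ratio()


def flag_possible_training_inclusion(original: str, model_output: str,
                                     threshold: float = 0.8) -> bool:
    """Flag when a model's continuation closely mirrors the original passage.

    A high similarity ratio alone does not prove training inclusion; a real
    system (like DE-COP) uses stronger statistical tests across many probes.
    """
    return similarity_score(original, model_output) >= threshold
```

In practice, a platform would run many such probes against the model via API and aggregate the results, which is where the batching and caching that reduce API overhead become important.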
Similar Papers
As If We've Met Before: LLMs Exhibit Certainty in Recognizing Seen Files
Artificial Intelligence
Finds if AI used copyrighted text.
Copyright in AI Pre-Training Data Filtering: Regulatory Landscape and Mitigation Strategies
Computers and Society
Stops AI from using copyrighted art without permission.