Copyright Detection in Large Language Models: An Ethical Approach to Generative AI Development

Published: November 25, 2025 | arXiv ID: 2511.20623v1

By: David Szczecina, Senan Gaffori, Edmond Li

Potential Business Impact:

Lets creators check whether their work was used to train an AI model.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The widespread use of Large Language Models (LLMs) raises critical concerns regarding the unauthorized inclusion of copyrighted content in training data. Existing detection frameworks, such as DE-COP, are computationally intensive and largely inaccessible to independent creators. As legal scrutiny increases, there is a pressing need for a scalable, transparent, and user-friendly solution. This paper introduces an open-source copyright detection platform that enables content creators to verify whether their work was used in LLM training datasets. Our approach builds on existing methodologies by improving ease of use, strengthening similarity detection, optimizing dataset validation, and reducing computational overhead by 10-30% through efficient API calls. With an intuitive user interface and a scalable backend, this framework increases transparency in AI development and ethical compliance, laying the foundation for further research in responsible AI development and copyright enforcement.
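To make the DE-COP-style approach referenced in the abstract concrete, the sketch below shows one way a membership check of this kind can be structured: the verbatim passage is mixed among paraphrased distractors, and a model that was trained on the passage tends to identify the verbatim option at above-chance rates. This is a minimal illustration only; the function name, the `query_model` callable, and all parameters are hypothetical and are not taken from the paper's actual platform.

```python
import random
from typing import Callable, List


def decop_style_check(
    query_model: Callable[[str], str],  # user-supplied function: sends a prompt to an LLM, returns its text reply
    original: str,                      # the copyrighted passage being tested
    paraphrases: List[str],             # paraphrased distractors of the same passage
    trials: int = 20,
) -> float:
    """Return the fraction of trials in which the model picks the verbatim passage."""
    hits = 0
    labels = "ABCDEFGH"
    for _ in range(trials):
        # Shuffle the verbatim passage among the paraphrases for each trial.
        options = paraphrases + [original]
        random.shuffle(options)
        correct = options.index(original)

        prompt = (
            "Which of the following passages is quoted verbatim from the source text? "
            "Answer with a single letter.\n\n"
            + "\n".join(f"{labels[i]}. {opt}" for i, opt in enumerate(options))
        )
        reply = query_model(prompt).strip().upper()
        if reply.startswith(labels[correct]):
            hits += 1
    return hits / trials
```

In this framing, a selection rate well above 1/(number of options) suggests the passage may have appeared in the model's training data, while near-chance accuracy is inconclusive; batching or caching the API calls is one plausible route to the kind of overhead reduction the paper reports.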

Country of Origin
🇨🇦 Canada

Page Count
4 pages

Category
Computer Science:
Artificial Intelligence