LLM Harms: A Taxonomy and Discussion

Published: December 5, 2025 | arXiv ID: 2512.05929v1

By: Kevin Chen, Saleh Afroogh, Abhejay Murali and more

This study examines categories of harm surrounding Large Language Models (LLMs) in the field of artificial intelligence. It organizes harms that arise before, during, and after the development of AI applications into five categories, including pre-development harms, direct output harms, misuse and malicious application, and downstream application harms. The study underscores the need to define the risks of the current landscape in order to ensure accountability and transparency, and to navigate bias when adapting LLMs for practical applications. It proposes mitigation strategies and future directions for specific domains, along with a standardized proposal for a dynamic auditing system to guide the responsible development and integration of LLMs.

Category
Computer Science:
Computers and Society