Disclosing Generative AI Use in Digital Humanities Research
By: Rongqian Ma, Xuhan Zhang, Adrian Wisnicki
Potential Business Impact:
Helps researchers know when to disclose AI use.
Plain English Summary
Researchers found that while many academics agree it's important to be honest about using AI in their work, most don't actually say when they use it. This could make it hard to tell which parts of research were done by humans and which by AI. Clear rules about disclosing AI use will help everyone trust the research they read.
This survey study investigates how digital humanists perceive and approach generative AI (GenAI) disclosure in research. The results indicate that while digital humanities scholars acknowledge the importance of disclosing GenAI use, the actual rate of disclosure in research practice remains low. Respondents differ in their views on which activities most require disclosure and on the most appropriate methods for doing so. Most also believe that safeguards for GenAI disclosure should be established through institutional policies rather than left to individual decisions. The study's findings offer empirical guidance to scholars, institutional leaders, funders, and other stakeholders responsible for shaping effective disclosure policies.
Similar Papers
When Is Self-Disclosure Optimal? Incentives and Governance of AI-Generated Content
Computers and Society
Makes AI content labeling fair for creators.
Understanding Reader Perception Shifts upon Disclosure of AI Authorship
Human-Computer Interaction
Telling people AI wrote it makes them trust it less.
Generative Artificial Intelligence for Academic Research: Evidence from Guidance Issued for Researchers by Higher Education Institutions in the United States
Computers and Society
Helps schools guide students using AI responsibly.