Disclosing Generative AI Use in Digital Humanities Research
By: Rongqian Ma, Xuhan Zhang, Adrian Wisnicki
Potential Business Impact:
Helps researchers decide when and how to disclose AI use.
This survey study investigates how digital humanists perceive and approach generative AI (GenAI) disclosure in research. The results indicate that while digital humanities scholars acknowledge the importance of disclosing GenAI use, the actual rate of disclosure in research practice remains low. Respondents differ in their views on which activities most require disclosure and on the most appropriate methods of disclosure. Most also believe that safeguards for AI disclosure should be established through institutional policies rather than left to individual discretion. The study's findings offer empirical guidance to scholars, institutional leaders, funders, and other stakeholders responsible for shaping effective disclosure policies.
Similar Papers
Understanding Reader Perception Shifts upon Disclosure of AI Authorship
Human-Computer Interaction
Disclosing AI authorship makes readers trust the text less.
Generative Artificial Intelligence for Academic Research: Evidence from Guidance Issued for Researchers by Higher Education Institutions in the United States
Computers and Society
Helps schools guide students using AI responsibly.
Detecting the Use of Generative AI in Crowdsourced Surveys: Implications for Data Integrity
Human-Computer Interaction
Detects AI-generated responses in online surveys.