Vipera: Blending Visual and LLM-Driven Guidance for Systematic Auditing of Text-to-Image Generative AI
By: Yanwei Huang, Wesley Hanwen Deng, Sijia Xiao, and more
Potential Business Impact:
Helps check AI art for bad or unfair pictures.
Despite their increasing capabilities, text-to-image generative AI systems are known to produce biased, offensive, and otherwise problematic outputs. While recent advancements have supported testing and auditing of generative AI, existing auditing methods still face challenges in supporting auditors to effectively explore the vast space of AI-generated outputs in a structured way. To address this gap, we conducted formative studies with five AI auditors and synthesized five design goals for supporting systematic AI audits. Based on these insights, we developed Vipera, an interactive auditing interface that employs multiple visual cues, including a scene graph, to facilitate image sensemaking and inspire auditors to explore and hierarchically organize the auditing criteria. Additionally, Vipera leverages LLM-powered suggestions to facilitate exploration of unexplored auditing directions. Through a controlled experiment with 24 participants experienced in AI auditing, we demonstrate Vipera's effectiveness in helping auditors navigate large AI output spaces and organize their analyses while engaging with diverse criteria.
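To make the abstract's two mechanisms concrete, here is a minimal, hypothetical Python sketch (not from the paper, all names invented for illustration) of how a scene graph over a generated image might carry hierarchically organized auditing criteria, and how a prompt asking an LLM for unexplored auditing directions could be assembled from it:

from dataclasses import dataclass, field

@dataclass
class SceneNode:
    # One node of the scene graph: an object or region in the generated image.
    label: str                                                   # e.g. "person", "kitchen"
    attributes: list[str] = field(default_factory=list)          # e.g. ["female"]
    criteria: list[str] = field(default_factory=list)            # auditing criteria attached here
    children: list["SceneNode"] = field(default_factory=list)    # sub-objects, giving the hierarchy

def unexplored_criteria_prompt(root: SceneNode, explored: set[str]) -> str:
    """Build a prompt (hypothetical helper) asking an LLM to suggest
    auditing directions the auditor has not covered yet."""
    def walk(node: SceneNode):
        yield node
        for child in node.children:
            yield from walk(child)

    covered = sorted({c for n in walk(root) for c in n.criteria} | explored)
    labels = sorted({n.label for n in walk(root)})
    return (
        f"An auditor is reviewing text-to-image outputs depicting: {', '.join(labels)}.\n"
        f"Criteria already examined: {', '.join(covered) or 'none'}.\n"
        "Suggest three further auditing criteria (e.g. bias, offensiveness) not yet examined."
    )

# Example: a generated image of a person in a kitchen, with one criterion
# already attached to the "person" node and one explored globally.
scene = SceneNode("image", children=[
    SceneNode("person", ["female"], criteria=["gender representation"]),
    SceneNode("kitchen"),
])
print(unexplored_criteria_prompt(scene, explored={"skin-tone diversity"}))

Attaching criteria to nodes at different depths mirrors the hierarchical organization the abstract describes, though the paper's actual data model and prompting strategy may differ.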
Similar Papers
Vipera: Towards systematic auditing of generative text-to-image models at scale
Human-Computer Interaction
Helps AI make safer, fairer pictures.
Revisiting Data Auditing in Large Vision-Language Models
CV and Pattern Recognition
Finds if AI saw your private pictures.
What Lurks Within? Concept Auditing for Shared Diffusion Models at Scale
Machine Learning (CS)
Checks AI art for bad or copied ideas.