MIRAGE: Multi-model Interface for Reviewing and Auditing Generative Text-to-Image AI
By: Matheus Kunzler Maldaner, Wesley Hanwen Deng, Jason Hong, and others
Potential Business Impact:
Helps people spot harmful AI-generated images by comparing outputs from different models.
While generative AI systems have gained popularity in diverse applications, their potential to produce harmful outputs limits their trustworthiness and usability. Recent years have seen growing interest in engaging diverse AI users in auditing generative AI systems that might impact their lives. To this end, we propose MIRAGE, a web-based tool that lets AI users audit AI-generated images by comparing outputs from multiple text-to-image (T2I) models and report their findings in a structured way. We used MIRAGE to conduct a preliminary user study with five participants and found that, when reviewing multiple T2I models' outputs rather than only one, users could draw on their own lived experiences and identities to surface previously unnoticed details around harmful biases.
Similar Papers
Seeing Twice: How Side-by-Side T2I Comparison Changes Auditing Strategies
Human-Computer Interaction
Helps find bad AI pictures by comparing them.
MIRAGE: Agentic Framework for Multimodal Misinformation Detection with Web-Grounded Reasoning
Artificial Intelligence
Finds fake news in pictures and words.
MIRAGE: Towards AI-Generated Image Detection in the Wild
CV and Pattern Recognition
Finds fake pictures made by computers.