OnomaCompass: A Texture Exploration Interface that Shuttles between Words and Images
By: Miki Okamura, Shuhey Koyama, Li Jingjing, and more
Potential Business Impact:
Helps designers discover new material textures using sound-like words.
Humans can finely perceive material textures, yet articulating such somatic impressions in words is a cognitive bottleneck in design ideation. We present OnomaCompass, a web-based exploration system that links sound-symbolic onomatopoeia and visual texture representations to support early-stage material discovery. Instead of requiring users to craft precise prompts for generative AI, OnomaCompass provides two coordinated latent-space maps--one for texture images and one for onomatopoeic terms--built from an authored dataset of invented onomatopoeia and corresponding textures generated via Stable Diffusion. Users can navigate both spaces, trigger cross-modal highlighting, curate findings in a gallery, and preview textures applied to objects via an image-editing model. The system also supports video interpolation between selected textures and re-embedding of the extracted frames to form an emergent exploration loop. We conducted a within-subjects study with 11 participants comparing OnomaCompass to a prompt-based image-generation workflow using Gemini 2.5 Flash Image ("Nano Banana"). OnomaCompass significantly reduced workload (NASA-TLX overall, mental demand, effort, and frustration; p < .05) and increased hedonic user experience (UEQ), while usability (SUS) favored the baseline. Qualitative findings indicate that OnomaCompass helps users externalize vague sensory expectations and promotes serendipitous discovery, but also reveals interaction challenges in spatial navigation. Overall, leveraging sound symbolism as a lightweight cue offers a complementary approach to Kansei-driven material ideation beyond prompt-centric generation.
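The abstract does not specify how the coordinated word and texture maps are built, but the general pattern (embed both modalities into a shared space, project each to 2D, and highlight across modalities by similarity) can be sketched as below. This is a minimal illustration, not the authors' implementation: the use of CLIP, the PCA projection, the example onomatopoeia, and the file names are all assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): two coordinated 2D maps built
# from a shared CLIP embedding space, plus cross-modal nearest-neighbour highlighting.
import numpy as np
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor
from sklearn.decomposition import PCA  # stand-in for any 2D projection (UMAP, t-SNE, ...)

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical authored data: invented sound-symbolic words and generated texture images.
onomatopoeia = ["fuwa-fuwa", "zara-zara", "tsuru-tsuru"]
texture_paths = ["tex_fluffy.png", "tex_rough.png", "tex_slick.png"]
images = [Image.open(p).convert("RGB") for p in texture_paths]

with torch.no_grad():
    txt = processor(text=onomatopoeia, return_tensors="pt", padding=True)
    img = processor(images=images, return_tensors="pt")
    text_emb = model.get_text_features(**txt)
    image_emb = model.get_image_features(**img)

# L2-normalise so cosine similarity reduces to a dot product.
text_emb = torch.nn.functional.normalize(text_emb, dim=-1).numpy()
image_emb = torch.nn.functional.normalize(image_emb, dim=-1).numpy()

# One 2D "map" per modality, analogous to the two coordinated views described above.
word_map = PCA(n_components=2).fit_transform(text_emb)
texture_map = PCA(n_components=2).fit_transform(image_emb)

def highlight_textures(word_index: int, k: int = 2):
    """Cross-modal highlighting: the k textures closest to a selected onomatopoeic word."""
    sims = image_emb @ text_emb[word_index]
    return [(texture_paths[i], float(sims[i])) for i in np.argsort(-sims)[:k]]

print(highlight_textures(0))  # e.g. textures most associated with "fuwa-fuwa"
```

The same similarity lookup could run in the other direction (selecting a texture and highlighting nearby words), and newly generated frames from the video-interpolation step could be re-embedded with the same image encoder to join the texture map, which is one plausible reading of the "emergent exploration loop" described in the abstract.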
Similar Papers
Textured Word-As-Image illustration
Graphics
Creates cool, custom text pictures from your ideas.
GenComUI: Exploring Generative Visual Aids as Medium to Support Task-Oriented Human-Robot Communication
Human-Computer Interaction
Helps robots understand and follow spoken instructions.
PromptMap: Supporting Exploratory Text-to-Image Generation
Human-Computer Interaction
Helps artists find good ideas faster.