Explaining Why Things Go Where They Go: Interpretable Constructs of Human Organizational Preferences
By: Emmanuel Fashae, Michael Burke, Leimin Tian, and more
Robotic systems for household object rearrangement often rely on latent preference models inferred from human demonstrations. While effective at prediction, these models offer limited insight into the interpretable factors that guide human decisions. We introduce an explicit formulation of object arrangement preferences along four interpretable constructs: spatial practicality (putting items where they naturally fit best in the space), habitual convenience (making frequently used items easy to reach), semantic coherence (placing items together if they are used for the same task or are contextually related), and commonsense appropriateness (putting things where people would usually expect to find them). To capture these constructs, we designed and validated a self-report questionnaire through a 63-participant online study. Results confirm the psychological distinctiveness of these constructs and their explanatory power across two scenarios (kitchen and living room). We demonstrate the utility of these constructs by integrating them into a Monte Carlo Tree Search (MCTS) planner, showing that, when guided by participant-derived preferences, the planner generates reasonable arrangements that closely align with those produced by participants. This work contributes a compact, interpretable formulation of object arrangement preferences and a demonstration of how it can be operationalized for robot planning.
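As a rough illustration of how the four constructs could be operationalized for planning, the sketch below scores each candidate placement on the four constructs and combines them with participant-derived weights into a single utility that a search procedure such as MCTS could maximize. All names, scores, and weights here are hypothetical and not taken from the paper's implementation.

```python
# Toy sketch (not the authors' implementation): each candidate placement is
# scored on the paper's four constructs, and hypothetical questionnaire-derived
# weights turn those scores into one utility for a planner to maximize.

CONSTRUCTS = ("spatial_practicality", "habitual_convenience",
              "semantic_coherence", "commonsense_appropriateness")

def utility(scores, weights):
    """Weighted sum of per-construct scores, each assumed to lie in [0, 1]."""
    return sum(weights[c] * scores[c] for c in CONSTRUCTS)

# Hypothetical per-construct scores for placing a mug in two locations.
candidates = {
    "cupboard_shelf": {"spatial_practicality": 0.9, "habitual_convenience": 0.4,
                       "semantic_coherence": 0.8, "commonsense_appropriateness": 0.9},
    "countertop":     {"spatial_practicality": 0.5, "habitual_convenience": 0.9,
                       "semantic_coherence": 0.6, "commonsense_appropriateness": 0.5},
}

# A hypothetical participant who values convenience over convention.
weights = {"spatial_practicality": 0.2, "habitual_convenience": 0.5,
           "semantic_coherence": 0.2, "commonsense_appropriateness": 0.1}

best = max(candidates, key=lambda loc: utility(candidates[loc], weights))
print(best)  # for these weights, convenience wins: "countertop"
```

In a full planner, `utility` would play the role of the reward evaluated at leaf states of the search tree, so different questionnaire profiles steer the same search toward different arrangements.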