UniHOI: Unified Human-Object Interaction Understanding via Unified Token Space
By: Panqi Yang, Haodong Jing, Nanning Zheng, and more
Potential Business Impact:
Helps computers understand how people use things.
In the field of human-object interaction (HOI), detection and generation are two dual tasks that have traditionally been addressed separately, hindering the development of comprehensive interaction understanding. To address this, we propose UniHOI, which jointly models HOI detection and generation via a unified token space, thereby effectively promoting knowledge sharing and enhancing generalization. Specifically, we introduce a symmetric interaction-aware attention module and a unified semi-supervised learning paradigm, enabling effective bidirectional mapping between images and interaction semantics even under limited annotations. Extensive experiments demonstrate that UniHOI achieves state-of-the-art performance in both HOI detection and generation: it improves accuracy by 4.9% on long-tailed HOI detection and boosts interaction metrics by 42.0% on open-vocabulary generation tasks.
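The abstract does not detail the symmetric interaction-aware attention module, but the idea of a bidirectional mapping between image tokens and interaction tokens can be sketched as cross-attention applied in both directions with a single shared set of weights. The class name, shapes, and wiring below are illustrative assumptions, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class SymmetricCrossAttention(nn.Module):
    """Hypothetical sketch (not the paper's implementation): one
    attention block applied in both directions, so image tokens and
    interaction tokens update each other with shared projections."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def cross(self, q: torch.Tensor, kv: torch.Tensor) -> torch.Tensor:
        # queries from one stream, keys/values from the other
        out, _ = self.attn(q, kv, kv)
        return self.norm(q + out)

    def forward(self, img_tokens: torch.Tensor, hoi_tokens: torch.Tensor):
        # symmetric update: each stream attends to the other using the
        # same shared attention weights
        img_new = self.cross(img_tokens, hoi_tokens)
        hoi_new = self.cross(hoi_tokens, img_tokens)
        return img_new, hoi_new

# assumed shapes: batch of 2, 16 image tokens, 8 interaction tokens, dim 32
img = torch.randn(2, 16, 32)
hoi = torch.randn(2, 8, 32)
img_out, hoi_out = SymmetricCrossAttention(32)(img, hoi)
```

Sharing one attention module across both directions is one plausible reading of "symmetric"; it ties the image-to-semantics and semantics-to-image mappings to the same parameters, which is consistent with the unified-token-space framing.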
Similar Papers
Learning to Generate Human-Human-Object Interactions from Textual Descriptions
CV and Pattern Recognition
Teaches computers to show people interacting with objects.
Rethinking Human-Object Interaction Evaluation for both Vision-Language Models and HOI-Specific Methods
CV and Pattern Recognition
Helps computers understand what people are doing in pictures.
Spatial-Temporal Human-Object Interaction Detection
CV and Pattern Recognition
Helps computers understand what people do in videos.