HOI-R1: Exploring the Potential of Multimodal Large Language Models for Human-Object Interaction Detection
By: Junwen Chen, Peilin Xiong, Keiji Yanai
Potential Business Impact:
Lets computers understand actions between people and things.
Recent human-object interaction detection (HOID) methods rely heavily on prior knowledge from vision-language models (VLMs) to enhance interaction recognition. Designing training strategies and model architectures that connect this VLM knowledge to the HOI instance representations produced by an object detector is challenging, and the resulting frameworks are complex to extend or deploy. Meanwhile, the inherent reasoning abilities of multimodal large language models (MLLMs) for human-object interaction detection remain under-explored. Inspired by the recent success of training MLLMs with reinforcement learning (RL), we propose HOI-R1 and present the first exploration of an MLLM's potential on the HOID task without any additional detection modules. We introduce an HOI reasoning process and HOID reward functions that solve the HOID task purely through text. Results on the HICO-DET dataset show that HOI-R1 achieves twice the accuracy of the baseline, with strong generalization ability. The source code is available at https://github.com/cjw2021/HOI-R1.
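For intuition, below is a minimal sketch of what text-only HOID reward functions for RL fine-tuning might look like: a format reward for parseable output combined with a matching reward over predicted HOI triplets. The output format, parsing regex, matching rule, and reward weights are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical HOID-style reward for RL fine-tuning of an MLLM that emits
# HOI triplets as plain text. All names and thresholds are illustrative.
import re
from typing import Dict, List


def iou(a: List[float], b: List[float]) -> float:
    """IoU between two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0


def parse_hoi_triplets(text: str) -> List[Dict]:
    """Parse lines like 'person <x1,y1,x2,y2> ride bicycle <x1,y1,x2,y2>'
    (an assumed output format, not necessarily the one used by HOI-R1)."""
    pattern = r"person <([\d., ]+)> (\w+) (\w+) <([\d., ]+)>"
    triplets = []
    for h_box, verb, obj, o_box in re.findall(pattern, text):
        triplets.append({
            "h_box": [float(v) for v in h_box.split(",")],
            "verb": verb,
            "object": obj,
            "o_box": [float(v) for v in o_box.split(",")],
        })
    return triplets


def hoid_reward(prediction: str, ground_truth: List[Dict]) -> float:
    """Combine a format reward (output is parseable) with a matching reward
    (predicted triplets agree with ground truth on classes and box IoU)."""
    preds = parse_hoi_triplets(prediction)
    format_reward = 1.0 if preds else 0.0
    matched = 0
    for gt in ground_truth:
        for p in preds:
            if (p["verb"] == gt["verb"] and p["object"] == gt["object"]
                    and iou(p["h_box"], gt["h_box"]) > 0.5
                    and iou(p["o_box"], gt["o_box"]) > 0.5):
                matched += 1
                break
    match_reward = matched / max(len(ground_truth), 1)
    return 0.2 * format_reward + 0.8 * match_reward
```

A reward of this shape could be plugged into a policy-gradient style RL loop (e.g., GRPO-like training) as the scalar feedback for each generated response; the 0.2/0.8 weighting is an arbitrary choice for the sketch.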
Similar Papers
HOID-R1: Reinforcement Learning for Open-World Human-Object Interaction Detection Reasoning with Multimodal Large Language Model
CV and Pattern Recognition
Helps robots understand what people do with things.
Rethinking Human-Object Interaction Evaluation for both Vision-Language Models and HOI-Specific Methods
CV and Pattern Recognition
Helps computers understand what people are doing in pictures.
Incremental Human-Object Interaction Detection with Invariant Relation Representation Learning
CV and Pattern Recognition
Helps robots learn new object actions over time.