MCL-AD: Multimodal Collaboration Learning for Zero-Shot 3D Anomaly Detection

Published: September 12, 2025 | arXiv ID: 2509.10282v1

By: Gang Li, Tianjiao Chen, Mingle Zhou, and more

Potential Business Impact:

Detects hidden defects in 3D objects by combining geometric, visual, and textual cues.

Business Areas:
Image Recognition, Data and Analytics, Software

Zero-shot 3D (ZS-3D) anomaly detection aims to identify defects in 3D objects without relying on labeled training data, making it especially valuable in scenarios constrained by data scarcity, privacy, or high annotation cost. However, most existing methods focus exclusively on point clouds, neglecting the rich semantic cues available from complementary modalities such as RGB images and textual priors. This paper introduces MCL-AD, a novel framework that leverages multimodal collaboration learning across point clouds, RGB images, and text semantics to achieve superior zero-shot 3D anomaly detection. Specifically, we propose a Multimodal Prompt Learning Mechanism (MPLM) that enhances intra-modal representation capability and inter-modal collaborative learning by introducing an object-agnostic decoupled text prompt and a multimodal contrastive loss. In addition, a Collaborative Modulation Mechanism (CMM) is proposed to fully leverage the complementary representations of point clouds and RGB images by jointly modulating the RGB image-guided and point cloud-guided branches. Extensive experiments demonstrate that the proposed MCL-AD framework achieves state-of-the-art performance in ZS-3D anomaly detection.
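The abstract does not specify the exact form of the multimodal contrastive loss, but a common construction for aligning three modalities is a symmetric InfoNCE-style objective summed over the modality pairs (point cloud–RGB, point cloud–text, RGB–text). The sketch below is a minimal NumPy illustration of that idea; all function names and the choice of a CLIP-style diagonal-target cross-entropy are assumptions, not the paper's actual implementation.

```python
import numpy as np

def pairwise_contrastive_loss(a, b, temperature=0.07):
    """Symmetric InfoNCE-style loss between two batches of embeddings.

    Matched pairs (row i of `a` with row i of `b`) are treated as positives;
    all other rows in the batch serve as negatives.
    """
    # L2-normalize so the dot product is cosine similarity.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temperature  # (B, B) similarity matrix

    def xent_diagonal(l):
        # Cross-entropy with diagonal (matched-pair) targets.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # Average over both retrieval directions (a -> b and b -> a).
    return 0.5 * (xent_diagonal(logits) + xent_diagonal(logits.T))

def multimodal_contrastive_loss(pc_emb, rgb_emb, txt_emb):
    """Sum the pairwise losses over the three modality pairs."""
    return (pairwise_contrastive_loss(pc_emb, rgb_emb)
            + pairwise_contrastive_loss(pc_emb, txt_emb)
            + pairwise_contrastive_loss(rgb_emb, txt_emb))
```

In practice such a loss pulls the embeddings of the same object across modalities together while pushing apart embeddings of different objects in the batch, which is one plausible way the MPLM's inter-modal collaborative learning could be realized.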

Page Count
14 pages

Category
Computer Science:
CV and Pattern Recognition