LLM-Guided Material Inference for 3D Point Clouds
By: Nafiseh Izadyar, Teseo Schneider
Potential Business Impact:
Tells computers what objects are made of.
Most existing 3D shape datasets and models focus solely on geometry, overlooking the material properties that determine how objects appear. We introduce a two-stage large language model (LLM)-based method for inferring material composition directly from 3D point clouds with coarse segmentations. Our key insight is to decouple reasoning about what an object is from what it is made of. In the first stage, an LLM predicts the object's semantic category; in the second stage, it assigns plausible materials to each geometric segment, conditioned on the inferred semantics. Both stages operate in a zero-shot manner, without task-specific training. Because existing datasets lack reliable material annotations, we evaluate our method using an LLM-as-a-Judge implemented in DeepEval. Across 1,000 shapes from Fusion/ABS and ShapeNet, our method achieves high semantic and material plausibility. These results demonstrate that language models can serve as general-purpose priors for bridging geometric reasoning and material understanding in 3D data.
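The two-stage pipeline in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the `llm` function is a stand-in for any chat-completion call (stubbed here so the example runs offline), and the prompt wording and segment-description format are assumptions.

```python
# Hedged sketch of the two-stage zero-shot material-inference pipeline.
# Stage 1 infers WHAT the object is; stage 2 infers WHAT EACH SEGMENT IS
# MADE OF, conditioned on the stage-1 answer.

def llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns canned answers for the demo."""
    if "What kind of object" in prompt:
        return "chair"
    return "wood"

def infer_semantics(segment_descriptions: list[str]) -> str:
    # Stage 1: predict the object's semantic category from coarse segments.
    prompt = (
        "A 3D shape has these segments: "
        + "; ".join(segment_descriptions)
        + ". What kind of object is this? Answer with one word."
    )
    return llm(prompt).strip().lower()

def infer_materials(object_class: str, segment_descriptions: list[str]) -> dict[str, str]:
    # Stage 2: assign a plausible material to each segment, conditioned
    # on the inferred object class.
    materials = {}
    for seg in segment_descriptions:
        prompt = (
            f"The object is a {object_class}. "
            f"What material is its '{seg}' most plausibly made of? One word."
        )
        materials[seg] = llm(prompt).strip().lower()
    return materials

segments = ["flat horizontal slab", "four thin vertical cylinders"]
obj = infer_semantics(segments)
mats = infer_materials(obj, segments)
print(obj, mats)
```

Decoupling the two stages means the material prompt never has to reason about raw geometry: it only needs a commonsense prior over materials given an object class and a part, which is exactly where zero-shot LLMs are strong.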
Similar Papers
Point Linguist Model: Segment Any Object via Bridged Large 3D-Language Model
CV and Pattern Recognition
Helps computers understand 3D shapes from words.
Leveraging 2D-VLM for Label-Free 3D Segmentation in Large-Scale Outdoor Scene Understanding
CV and Pattern Recognition
Lets computers understand 3D shapes from pictures.
LLM-Guided Taxonomy and Hierarchical Uncertainty for 3D Point Cloud Active Learning
CV and Pattern Recognition
Teaches computers to understand 3D objects better.