LLM-Guided Material Inference for 3D Point Clouds
By: Nafiseh Izadyar, Teseo Schneider
Potential Business Impact:
Tells computers what objects are made of.
Most existing 3D shape datasets and models focus solely on geometry, overlooking the material properties that determine how objects appear. We introduce a two-stage large language model (LLM)-based method for inferring material composition directly from 3D point clouds with coarse segmentations. Our key insight is to decouple reasoning about what an object is from what it is made of. In the first stage, an LLM predicts the object's semantic category; in the second stage, it assigns plausible materials to each geometric segment, conditioned on the inferred semantics. Both stages operate in a zero-shot manner, without task-specific training. Because existing datasets lack reliable material annotations, we evaluate our method with an LLM-as-a-Judge implemented in DeepEval. Across 1,000 shapes from Fusion/ABS and ShapeNet, our method achieves high semantic and material plausibility. These results demonstrate that language models can serve as general-purpose priors for bridging geometric reasoning and material understanding in 3D data.
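The two-stage decoupling described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `query_llm` is a hypothetical stand-in for any chat-completion API (stubbed here so the sketch runs offline), and the segment descriptions are invented placeholders.

```python
# Hypothetical sketch of the two-stage, zero-shot prompting pipeline
# described in the abstract. All names and prompts are illustrative.

def query_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; a deterministic stub for this sketch."""
    if "What object" in prompt:
        return "chair"
    return "seat: fabric; legs: wood"

def describe_segments(segments):
    """Summarize coarse segments (name, point count) as text the LLM can read."""
    return "; ".join(f"segment '{name}' with {n} points" for name, n in segments)

def infer_materials(segments):
    # Stage 1: infer what the object is from its segmented geometry.
    semantic_prompt = (
        "What object is described by these point-cloud segments? "
        + describe_segments(segments)
    )
    object_label = query_llm(semantic_prompt)

    # Stage 2: assign a plausible material to each segment,
    # conditioned on the semantics inferred in stage 1.
    material_prompt = (
        f"The object is a {object_label}. "
        "Assign a plausible material to each segment: "
        + describe_segments(segments)
    )
    return object_label, query_llm(material_prompt)

label, materials = infer_materials([("seat", 2048), ("legs", 4096)])
```

Conditioning the second prompt on the first stage's output is the key design choice: material priors (e.g., chair legs are often wood or metal) only become available once the object category is known.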
Similar Papers
Point Linguist Model: Segment Any Object via Bridged Large 3D-Language Model
CV and Pattern Recognition
Helps computers understand 3D shapes from words.
MeshLLM: Empowering Large Language Models to Progressively Understand and Generate 3D Mesh
Graphics
Lets computers describe and build 3D shapes with words.
Can Large Language Models Identify Materials from Radar Signals?
Signal Processing
Robots use radar to guess what things are made of.