QA-VLM: Providing human-interpretable quality assessment for wire-feed laser additive manufacturing parts with Vision Language Models

Published: August 20, 2025 | arXiv ID: 2508.16661v1

By: Qiaojie Zheng, Jiucai Zhang, Joy Gockel, and more

Potential Business Impact:

Helps machines find and explain flaws in 3D-printed parts.

Business Areas:
Image Recognition, Data and Analytics, Software

Image-based quality assessment (QA) in additive manufacturing (AM) often relies heavily on the expertise and constant attention of skilled human operators. While machine learning and deep learning methods have been introduced to assist with this task, they typically produce black-box outputs without interpretable justifications, limiting trust and adoption in real-world settings. In this work, we introduce a novel QA-VLM framework that leverages the attention mechanisms and reasoning capabilities of vision-language models (VLMs), enriched with application-specific knowledge distilled from peer-reviewed journal articles, to generate human-interpretable quality assessments. Evaluated on 24 single-bead samples produced by laser-wire directed energy deposition (DED-LW), our framework demonstrates higher validity and consistency in explanation quality than off-the-shelf VLMs. These results highlight the potential of our approach to enable trustworthy, interpretable quality assessment in AM applications.
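The core idea of the abstract, injecting distilled domain knowledge into a VLM query so the model can justify its assessment in human-readable terms, can be sketched as simple prompt assembly. This is a minimal illustration, not the authors' implementation: the rule snippets, function name, and question wording below are hypothetical placeholders.

```python
# Hypothetical sketch of domain-grounded prompting for a VLM-based QA step.
# The distilled rules here are illustrative examples, not text from the paper.
DOMAIN_RULES = [
    "A single bead should have a smooth, continuous surface with uniform width.",
    "Periodic bulges (humping) along the bead can indicate excessive travel speed.",
    "Balling or discontinuities can suggest a laser power / wire-feed mismatch.",
]

def build_qa_prompt(rules, question="Assess the quality of this DED-LW bead and justify your answer."):
    """Combine distilled domain knowledge with an assessment question.

    The returned string would be sent to a VLM together with the bead image,
    steering the model toward interpretable, rule-referencing explanations.
    """
    knowledge = "\n".join(f"- {r}" for r in rules)
    return (
        "You are a quality inspector for wire-feed laser additive manufacturing.\n"
        "Use the following domain knowledge when judging the attached image:\n"
        f"{knowledge}\n\n"
        f"Question: {question}"
    )

print(build_qa_prompt(DOMAIN_RULES))
```

Because the knowledge is spelled out in the prompt, the model's answer can cite specific rules (e.g. humping, balling) rather than returning an opaque pass/fail label, which is the interpretability benefit the paper targets.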

Country of Origin
🇺🇸 United States

Page Count
28 pages

Category
Computer Science:
CV and Pattern Recognition