Text2VR: Automated Instruction Generation in Virtual Reality Using Large Language Models for Assembly Tasks
By: Subin Raj Peter
Potential Business Impact:
Automatically creates VR training lessons from text.
Virtual Reality (VR) has emerged as a powerful tool for workforce training, offering immersive, interactive, and risk-free environments that enhance skill acquisition, decision-making, and confidence. Despite these advantages, developing VR training applications remains a significant challenge due to the time, expertise, and resources required to create accurate and engaging instructional content. To address these limitations, this paper proposes a novel approach that leverages Large Language Models (LLMs) to automate the generation of virtual instructions from textual input. The system comprises two core components: an LLM module that extracts task-relevant information from the text, and an intelligent module that transforms this information into animated demonstrations and visual cues within a VR environment. The intelligent module interprets the information extracted by the LLM module and passes it to an instruction generator, which assembles training content from relevant data in a database, rendering each instruction by changing the color of virtual objects and creating animations that illustrate the task. This approach enhances training effectiveness and reduces development overhead, making VR-based training more scalable and adaptable to evolving industrial needs.
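To make the described pipeline concrete, here is a minimal Python sketch of the two-module flow, assuming an LLM that returns structured task steps as JSON. All names here (call_llm, extract_task_steps, InstructionGenerator, the step schema, and the in-memory object database) are illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch of the Text2VR pipeline: an LLM module extracts structured
# task steps from text, and an instruction generator maps them to color
# highlights and animations for objects in the VR scene. All names and the
# JSON schema are hypothetical.

import json
from dataclasses import dataclass


def call_llm(prompt: str) -> str:
    """Stand-in for the LLM module. A real system would call a hosted model;
    here a canned response lets the sketch run end to end."""
    return json.dumps([
        {"step": 1, "action": "pick", "object": "bolt_m6"},
        {"step": 2, "action": "insert", "object": "bolt_m6", "target": "bracket"},
    ])


def extract_task_steps(instruction_text: str) -> list[dict]:
    """LLM module: turn free-form assembly text into ordered, structured steps."""
    prompt = (
        "Extract ordered assembly steps as a JSON list of "
        f"{{step, action, object, target?}} objects from:\n{instruction_text}"
    )
    return json.loads(call_llm(prompt))


@dataclass
class VRInstruction:
    """One visual cue the VR scene can play back."""
    object_id: str
    highlight_color: str          # color change that draws the trainee's attention
    animation: str | None = None  # e.g. the name of a move-to animation clip


class InstructionGenerator:
    """Intelligent module: maps extracted steps to highlights and animations,
    resolving task object names to scene ids via a (here: in-memory) database."""

    def __init__(self, object_db: dict[str, str]):
        self.object_db = object_db  # task object name -> VR scene object id

    def generate(self, steps: list[dict]) -> list[VRInstruction]:
        cues = []
        for s in sorted(steps, key=lambda s: s["step"]):
            obj_id = self.object_db[s["object"]]
            anim = f"{s['action']}_{s.get('target', '')}".rstrip("_")
            cues.append(VRInstruction(obj_id, "yellow", anim))
        return cues


if __name__ == "__main__":
    db = {"bolt_m6": "scene/bolt_01", "bracket": "scene/bracket_01"}
    steps = extract_task_steps("Pick the M6 bolt and insert it into the bracket.")
    for cue in InstructionGenerator(db).generate(steps):
        print(cue)
```

In a real deployment, the generated VRInstruction objects would be handed to the VR engine, which applies the color change and plays the animation for each step in sequence.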
Similar Papers
Guided Reality: Generating Visually-Enriched AR Task Guidance with LLMs and Vision Models
Human-Computer Interaction
Shows you how to build things with AR.
Harnessing Large Language Model for Virtual Reality Exploration Testing: A Case Study
Software Engineering
Helps VR games check themselves for bugs.
How LLMs are Shaping the Future of Virtual Reality
Human-Computer Interaction
Makes game characters smarter and stories more exciting.