Code Digital Twin: Empowering LLMs with Tacit Knowledge for Complex Software Maintenance
By: Xin Peng, Chong Wang, Mingwei Liu, and more
Potential Business Impact:
Helps computers understand old code to fix it.
While large language models (LLMs) have demonstrated promise in software engineering tasks like code completion and generation, their support for the maintenance of complex software systems remains limited. These models often struggle with understanding the tacit knowledge embedded in systems, such as responsibility allocation and collaboration across different modules. To address this gap, we introduce the concept and framework of the Code Digital Twin, a conceptual representation of tacit knowledge that captures the concepts, functionalities, and design rationales behind code elements and co-evolves with the software. A code digital twin is constructed using a methodology that combines knowledge extraction from both structured and unstructured sources, such as source code, documentation, and change histories, leveraging LLMs, static analysis tools, and human expertise. This framework can empower LLMs in software maintenance tasks such as issue localization and repository-level code generation by providing tacit knowledge as context. Based on the proposed methodology, we explore the key challenges and opportunities involved in the continuous construction and refinement of the code digital twin.
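To make the idea concrete, the sketch below shows one way such tacit knowledge might be represented and handed to an LLM as context for issue localization. This is a minimal illustration, not the paper's framework: the class name TwinEntry, its fields, and the prompt wording are all assumptions made for the example.

```python
# Hypothetical sketch: a code digital twin entry holding tacit knowledge
# (concepts, responsibility, design rationale) for one code element, and a
# helper that serializes entries as prompt context for issue localization.
# All names and fields here are illustrative assumptions, not the paper's API.
from dataclasses import dataclass, field
from typing import List


@dataclass
class TwinEntry:
    """Tacit knowledge attached to one code element, co-evolving with the code."""
    code_element: str            # e.g. a module, class, or function identifier
    concepts: List[str]          # domain concepts the element realizes
    functionality: str           # the responsibility the element carries
    design_rationale: str        # why it is designed this way
    sources: List[str] = field(default_factory=list)  # docs, commits, issues it was mined from


def as_llm_context(entries: List[TwinEntry], issue_text: str) -> str:
    """Render twin entries as tacit-knowledge context preceding an issue report."""
    lines = ["Tacit knowledge from the code digital twin:"]
    for e in entries:
        lines.append(
            f"- {e.code_element}: concepts={', '.join(e.concepts)}; "
            f"responsibility: {e.functionality}; rationale: {e.design_rationale}"
        )
    lines.append("")
    lines.append(f"Issue report: {issue_text}")
    lines.append("Which code elements are most likely responsible? Explain briefly.")
    return "\n".join(lines)


if __name__ == "__main__":
    entry = TwinEntry(
        code_element="payment.gateway.RetryPolicy",
        concepts=["payment retry", "idempotency"],
        functionality="decides when a failed charge may be retried safely",
        design_rationale="retries are capped to avoid double-charging under network partitions",
        sources=["docs/payments.md", "change history"],
    )
    print(as_llm_context([entry], "Customers report duplicate charges after timeouts."))
```

In this reading, the twin supplies the "why" behind the code (responsibility allocation, rationale) that the raw source alone does not expose, and the LLM consumes it as ordinary prompt context.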
Similar Papers
Code Digital Twin: Empowering LLMs with Tacit Knowledge for Complex Software Development
Software Engineering
Helps computers understand old code better.
Leveraging Large Language Models for Enhanced Digital Twin Modeling: Trends, Methods, and Challenges
Emerging Technologies
Makes smart factories learn and fix themselves.
LSDTs: LLM-Augmented Semantic Digital Twins for Adaptive Knowledge-Intensive Infrastructure Planning
Emerging Technologies
Helps plan wind farms by understanding rules.