Evaluating LLMs and Prompting Strategies for Automated Hardware Diagnosis from Textual User-Reports
By: Carlos Caminha, Maria de Lourdes M. Silva, Iago C. Chaves, and more
Potential Business Impact:
Helps computers figure out which part of a device is broken from a written complaint.
Computer manufacturers offer platforms where users describe device faults in textual reports such as "My screen is flickering". Identifying the faulty component from such a report is essential for automating tests and improving the user experience. However, these reports are often ambiguous and lack detail, making the task challenging. Large Language Models (LLMs) have shown promise in addressing such issues. This study evaluates 27 open-source models (1B-72B parameters) and 2 proprietary LLMs using four prompting strategies: Zero-Shot, Few-Shot, Chain-of-Thought (CoT), and CoT+Few-Shot (CoT+FS). We conducted 98,948 inferences, processing over 51 million input tokens and generating 13 million output tokens, and achieve F1-scores of up to 0.76. Results show that three models offer the best balance between size and performance: mistral-small-24b-instruct, and two smaller models, llama-3.2-1b-instruct and gemma-2-2b-it, which deliver competitive performance with lower VRAM usage, enabling efficient inference on end-user devices such as modern laptops or smartphones with NPUs.
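To make the four prompting strategies concrete, below is a minimal sketch of how a fault-classification prompt might be assembled for each one. The component labels, example reports, and prompt wording are illustrative assumptions for this sketch, not the prompts or label set used in the paper.

```python
# Hypothetical label set and few-shot examples for hardware-fault classification.
LABELS = ["screen", "battery", "keyboard", "speaker", "storage"]

FEW_SHOT_EXAMPLES = [
    ("My laptop dies after ten minutes even when fully charged.", "battery"),
    ("Several keys stopped responding after a coffee spill.", "keyboard"),
]


def build_prompt(report: str, strategy: str) -> str:
    """Build a prompt for one of: 'zero_shot', 'few_shot', 'cot', 'cot_few_shot'."""
    parts = [
        "Classify the faulty hardware component in the user report below. "
        f"Answer with exactly one label from: {', '.join(LABELS)}.\n"
    ]
    # Few-Shot and CoT+FS prepend labeled example reports.
    if strategy in ("few_shot", "cot_few_shot"):
        for example_report, label in FEW_SHOT_EXAMPLES:
            parts.append(f"Report: {example_report}\nComponent: {label}\n")
    # CoT and CoT+FS ask the model to reason before committing to a label.
    if strategy in ("cot", "cot_few_shot"):
        parts.append(
            "Think step by step about which component the symptoms point to, "
            "then give the final answer on the last line as 'Component: <label>'.\n"
        )
    parts.append(f"Report: {report}\nComponent:")
    return "\n".join(parts)


if __name__ == "__main__":
    report = "My screen is flickering."
    for strategy in ("zero_shot", "few_shot", "cot", "cot_few_shot"):
        print(f"--- {strategy} ---")
        print(build_prompt(report, strategy))
        print()
```

The resulting prompt string can then be sent to any of the evaluated open-source or proprietary models through whatever inference API is in use; only the prompt construction differs between strategies.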
Similar Papers
Large Language Models for Fault Localization: An Empirical Study
Software Engineering
Finds bugs in computer code faster.
Instruction Tuning and CoT Prompting for Contextual Medical QA with LLMs
Computation and Language
Helps computers answer medical questions better.
The Future of MLLM Prompting is Adaptive: A Comprehensive Experimental Evaluation of Prompt Engineering Methods for Robust Multimodal Performance
Artificial Intelligence
Teaches AI to understand pictures and words better.