Responsible LLM Deployment for High-Stake Decisions by Decentralized Technologies and Human-AI Interactions
By: Swati Sachan, Theo Miller, Mai Phuong Nguyen
Potential Business Impact:
Makes AI safer for high-stakes financial decisions.
High-stakes decision domains are increasingly exploring the potential of Large Language Models (LLMs) for complex decision-making tasks. However, LLM deployment in real-world settings presents challenges in data security, in evaluating model capabilities outside controlled environments, and in attributing accountability in the event of adversarial decisions. This paper proposes a framework for responsible deployment of LLM-based decision-support systems through active human involvement. It integrates interactive collaboration between human experts and developers over multiple iterations at the pre-deployment stage to assess uncertain samples and judge the stability of the explanations produced by post-hoc XAI techniques. Local LLM deployment within organizations, combined with decentralized technologies such as Blockchain and IPFS, is proposed to create immutable records of LLM activities for automated auditing, enhancing security and enabling accountability to be traced. The framework was tested on BERT-large-uncased, Mistral, and LLaMA 2 and 3 models to assess their capability to support responsible financial decisions in business lending.
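The immutable audit trail described above rests on content addressing: a decision record is serialized canonically and hashed, and that digest is what would be pinned to IPFS or anchored on a blockchain, so later tampering is detectable. The sketch below is a minimal illustration of that hashing step only; the record fields, model name, and function name are hypothetical, and the actual IPFS/blockchain interaction is out of scope.

```python
import hashlib
import json

def audit_record_digest(record: dict) -> str:
    """Serialize a decision record canonically and return its SHA-256 digest.

    Canonical serialization (sorted keys, fixed separators) ensures the
    same record always hashes to the same value, so the digest can serve
    as a tamper-evident identifier, e.g. pinned to IPFS or anchored on-chain.
    """
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical decision record for a business-lending query.
record = {
    "model": "llama-3-8b",
    "prompt_id": "loan-app-0912",
    "decision": "refer_to_human",
    "confidence": 0.62,
    "reviewer": "analyst-17",
}

digest = audit_record_digest(record)
print(digest)  # 64-character hex string; any change to the record changes it
```

Because hashing is deterministic, an auditor holding the original record can recompute the digest and compare it with the on-chain value without trusting the organization's local storage.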
Similar Papers
LLMs in Cybersecurity: Friend or Foe in the Human Decision Loop?
Cryptography and Security
Examines when AI helps people make better security choices.
The Ethical Compass of the Machine: Evaluating Large Language Models for Decision Support in Construction Project Management
Artificial Intelligence
AI helps builders make safer, smarter choices.
What Would an LLM Do? Evaluating Policymaking Capabilities of Large Language Models
Artificial Intelligence
Helps computers suggest better plans to help homeless people.