Understanding and Mitigating Risks of Generative AI in Financial Services
By: Sebastian Gehrmann , Claire Huang , Xian Teng and more
Potential Business Impact:
Keeps AI from giving bad money advice.
To responsibly develop Generative AI (GenAI) products, it is critical to define the scope of acceptable inputs and outputs. What constitutes a "safe" response is an actively debated question. Academic work puts an outsized focus on evaluating models by themselves for general purpose aspects such as toxicity, bias, and fairness, especially in conversational applications being used by a broad audience. In contrast, less focus is put on considering sociotechnical systems in specialized domains. Yet, those specialized systems can be subject to extensive and well-understood legal and regulatory scrutiny. These product-specific considerations need to be set in industry-specific laws, regulations, and corporate governance requirements. In this paper, we aim to highlight AI content safety considerations specific to the financial services domain and outline an associated AI content risk taxonomy. We compare this taxonomy to existing work in this space and discuss implications of risk category violations on various stakeholders. We evaluate how existing open-source technical guardrail solutions cover this taxonomy by assessing them on data collected via red-teaming activities. Our results demonstrate that these guardrails fail to detect most of the content risks we discuss.
Similar Papers
Model Risk Management for Generative AI In Financial Institutions
Risk Management
Banks use AI safely by checking for mistakes.
Generative AI in Financial Institution: A Global Survey of Opportunities, Threats, and Regulation
Cryptography and Security
Makes money tasks faster, but watch out for scams.
A First-Principles Based Risk Assessment Framework and the IEEE P3396 Standard
Computers and Society
Helps AI make safer, more honest choices.