FinLoRA: Benchmarking LoRA Methods for Fine-Tuning LLMs on Financial Datasets
By: Dannong Wang, Jaisal Patel, Daochen Zha, et al.
Potential Business Impact:
Enables affordable, scalable fine-tuning of LLMs for professional financial tasks, such as analyzing SEC filings.
Low-rank adaptation (LoRA) methods show great potential for scaling pre-trained general-purpose Large Language Models (LLMs) to hundreds or thousands of use scenarios. However, their efficacy in high-stakes domains such as finance, e.g., passing CFA exams and analyzing SEC filings, is rarely explored. In this paper, we present the open-source FinLoRA project, which benchmarks LoRA methods on both general and highly professional financial tasks. First, we curated 19 datasets covering diverse financial applications; in particular, we created four novel XBRL analysis datasets based on 150 SEC filings. Second, we evaluated five LoRA methods and five base LLMs. Finally, we provide extensive experimental results in terms of accuracy, F1, and BERTScore, and report computational cost in terms of time and GPU memory during the fine-tuning and inference stages. We find that LoRA methods achieve average performance gains of 36% over base models. Our FinLoRA project provides an affordable and scalable approach to democratize financial intelligence for the general public. Datasets, LoRA adapters, code, and documentation are available at https://github.com/Open-Finance-Lab/FinLoRA
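The core idea the abstract refers to can be sketched briefly: instead of updating a full weight matrix during fine-tuning, LoRA learns a low-rank correction. The following NumPy snippet is a minimal, illustrative sketch of that idea; the layer size and rank are hypothetical and not taken from the paper, and real fine-tuning would use a deep-learning framework rather than raw NumPy.

```python
import numpy as np

# Minimal sketch of low-rank adaptation (LoRA): instead of updating a full
# d_out x d_in weight matrix W, learn two small factors B (d_out x r) and
# A (r x d_in), so the adapted weight is W + B @ A. The dimensions and rank
# below are illustrative assumptions, not values from the paper.

d_out, d_in, r = 768, 768, 8  # hypothetical layer size and LoRA rank

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, init to 0

def adapted_forward(x, scaling=1.0):
    """Forward pass with the LoRA update applied: (W + scaling * B @ A) @ x."""
    return W @ x + scaling * (B @ (A @ x))

# With B initialized to zero, the adapter leaves the base model unchanged.
x = rng.standard_normal(d_in)
assert np.allclose(adapted_forward(x), W @ x)

# Trainable-parameter savings: r * (d_out + d_in) versus d_out * d_in.
full_params = d_out * d_in
lora_params = r * (d_out + d_in)
print(f"trainable params: {lora_params} (LoRA) vs {full_params} (full)")
```

This parameter reduction (here roughly 48x fewer trainable weights) is what makes per-task adapters cheap enough to train and store for the many financial tasks the benchmark covers.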