Instruction Finetuning LLaMA-3-8B Model Using LoRA for Financial Named Entity Recognition
By: Zhiming Lian
Financial named-entity recognition (NER) is an important route for translating unstructured reports and news into structured knowledge graphs. However, freely available, off-the-shelf large language models (LLMs) often misclassify organisations as people, or miss monetary amounts entirely. This paper applies Meta's Llama 3 8B to financial NER by combining instruction fine-tuning with Low-Rank Adaptation (LoRA). Each annotated sentence is converted into an instruction-input-output triple, so the model learns the task description while only small low-rank matrices are trained instead of the full weight set. On a corpus of 1,693 sentences, our method achieves a micro-F1 score of 0.894, outperforming Qwen3-8B, Baichuan2-7B, T5, and BERT-Base. We report dataset statistics, describe training hyperparameters, and visualize entity density, learning curves, and evaluation metrics. The results show that instruction tuning combined with parameter-efficient fine-tuning yields state-of-the-art performance on domain-sensitive NER.
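To make the data-construction step concrete, the sketch below shows how an annotated sentence might be converted into an instruction-input-output triple of the kind the abstract describes. The instruction wording, field names, entity labels, and output format are illustrative assumptions, not the paper's exact schema.

```python
# Hypothetical sketch: turning an annotated sentence into an
# instruction-input-output triple for instruction fine-tuning.
# The instruction text and "text [TYPE]" output format are assumptions,
# not the exact schema used in the paper.

def build_triple(sentence: str, entities: list[dict]) -> dict:
    """Convert one annotated sentence into an instruction-tuning example."""
    instruction = (
        "Extract all financial named entities (ORG, PER, MONEY) "
        "from the input sentence and list them with their types."
    )
    # Serialize the gold entities as the target output string.
    output = "; ".join(f"{e['text']} [{e['type']}]" for e in entities)
    return {"instruction": instruction, "input": sentence, "output": output}

example = build_triple(
    "Goldman Sachs was fined $2.9 billion by the DOJ.",
    [
        {"text": "Goldman Sachs", "type": "ORG"},
        {"text": "$2.9 billion", "type": "MONEY"},
        {"text": "DOJ", "type": "ORG"},
    ],
)
print(example)
```

Similarly, a minimal LoRA setup with Hugging Face's peft library might look like the following. The rank, scaling factor, dropout, and target modules shown here are plausible defaults for Llama-style models, assumed for illustration rather than taken from the paper's reported hyperparameters.

```python
# Minimal LoRA sketch using Hugging Face transformers + peft.
# Hyperparameters (r, lora_alpha, lora_dropout, target_modules) are
# illustrative assumptions, not the paper's reported settings.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")

lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling factor for the LoRA updates
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the low-rank matrices are trainable
```

Because only the injected low-rank matrices receive gradients, the trainable-parameter count is a small fraction of the 8B base weights, which is what makes full-task adaptation feasible on modest hardware.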