Augmented Fine-Tuned LLMs for Enhanced Recruitment Automation
By: Mohamed T. Younes, Omar Walid, Khaled Shaban, and others
Potential Business Impact:
Finds the best job candidates faster and more accurately.
This paper presents a novel approach to recruitment automation in which Large Language Models (LLMs) are fine-tuned to improve accuracy and efficiency. Building on our previous work on the Multilayer Large Language Model-Based Robotic Process Automation Applicant Tracking (MLAR) system, this work introduces a methodology for training LLMs specifically tuned for recruitment tasks. The proposed framework addresses the limitations of generic LLMs by constructing a synthetic dataset in a standardized JSON format, which ensures consistency and scalability. To complement the synthetic data, real resumes were parsed with DeepSeek, a high-parameter LLM, into the same structured JSON format and added to the training set, improving data diversity and realism. Experiments demonstrate significant improvements over base models and other state-of-the-art LLMs on metrics including exact match, F1 score, BLEU score, ROUGE score, and overall similarity. In particular, the fine-tuned Phi-4 model achieved the highest F1 score of 90.62%, indicating strong precision and recall on recruitment tasks. These results highlight the potential of fine-tuned LLMs to transform recruitment workflows by providing more accurate candidate-job matching.
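To make the two ideas in the abstract concrete, here is a minimal sketch of what a standardized JSON resume record and a token-level F1 check might look like. The field names in the record are illustrative assumptions, not the paper's actual schema, and the F1 function is the common token-overlap formulation rather than the authors' exact evaluation code.

```python
import json
from collections import Counter

# Hypothetical standardized JSON resume record (field names are
# assumptions for illustration; the paper's schema is not shown here).
resume = {
    "name": "Jane Doe",
    "skills": ["python", "nlp", "sql"],
    "experience_years": 5,
    "education": "MSc Computer Science",
}
reference = json.dumps(resume, sort_keys=True)

def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1: harmonic mean of precision and recall
    over tokens shared between prediction and reference."""
    pred_tokens = prediction.split()
    ref_tokens = reference.split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# Exact match is simply equality of the serialized outputs.
def exact_match(prediction: str, reference: str) -> bool:
    return prediction == reference
```

Serializing every resume to one canonical JSON form (e.g. with sorted keys) is what makes string-level metrics such as exact match meaningful across the synthetic and DeepSeek-parsed portions of the training set.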
Similar Papers
Augmented Relevance Datasets with Fine-Tuned Small LLMs
Information Retrieval
Helps computers learn what search results are best.
Improving LLM-based Ontology Matching with fine-tuning on synthetic data
Computation and Language
Helps computers understand and connect different information.
Fine-Tuned Language Models for Domain-Specific Summarization and Tagging
Computation and Language
Helps computers summarize and tag specialized documents.