Domain Adaptation of LLMs for Process Data

Published: September 3, 2025 | arXiv ID: 2509.03161v1

By: Rafael Seidi Oyamada, Jari Peeperkorn, Jochen De Weerdt, and more

Potential Business Impact:

Helps computers predict what happens next in a running business process.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

In recent years, Large Language Models (LLMs) have emerged as a prominent area of interest across various research domains, including Process Mining (PM). Current applications in PM have predominantly centered on prompt engineering strategies or the transformation of event logs into narrative-style datasets, thereby exploiting the semantic capabilities of LLMs to address diverse tasks. In contrast, this study investigates the direct adaptation of pretrained LLMs to process data without natural language reformulation, motivated by the fact that these models excel at generating sequences of tokens, an objective closely aligned with prediction tasks in PM. More specifically, we focus on parameter-efficient fine-tuning techniques to mitigate the computational overhead typically associated with such models. Our experimental setup focuses on Predictive Process Monitoring (PPM) and considers both single- and multi-task predictions. The results demonstrate a potential improvement in predictive performance over state-of-the-art recurrent neural network (RNN) approaches and recent narrative-style solutions, particularly in the multi-task setting. Additionally, our fine-tuned models exhibit faster convergence and require significantly less hyperparameter optimization.
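
To make the core idea concrete: an event log is treated as sequences of activity tokens, and a pretrained causal LLM is adapted with parameter-efficient fine-tuning so that its next-token objective doubles as next-activity prediction. The abstract does not specify the authors' tooling, model, or hyperparameters; the sketch below is a hypothetical illustration using Hugging Face's transformers and peft libraries with LoRA, where the toy traces, the gpt2 base model, and all hyperparameters are assumptions.

```python
# Minimal sketch: LoRA fine-tuning of a pretrained causal LM on event-log
# activity sequences for next-activity prediction. Model choice, example
# traces, and hyperparameters are illustrative assumptions, not the
# paper's actual setup.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
from peft import LoraConfig, get_peft_model

# Hypothetical event log: each trace is a sequence of activity labels.
traces = [
    ["register", "check_credit", "approve", "notify"],
    ["register", "check_credit", "reject", "notify"],
]

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

model = GPT2LMHeadModel.from_pretrained("gpt2")

# Parameter-efficient fine-tuning: only the low-rank adapter weights are
# trained, keeping the vast majority of pretrained parameters frozen.
lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"],
                      lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # prints the small trainable fraction

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)
model.train()
for epoch in range(3):
    for trace in traces:
        # Encode the trace as one space-joined sequence; the causal-LM
        # objective (predict the next token) mirrors next-activity prediction.
        batch = tokenizer(" ".join(trace), return_tensors="pt",
                          padding="max_length", truncation=True, max_length=32)
        labels = batch["input_ids"].clone()
        labels[batch["attention_mask"] == 0] = -100  # ignore padding in the loss
        out = model(input_ids=batch["input_ids"],
                    attention_mask=batch["attention_mask"],
                    labels=labels)
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

At inference, generating the next token for a prefix of a trace yields the predicted next activity; a multi-task variant would additionally predict attributes such as remaining time, which this sketch omits.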

Page Count
12 pages

Category
Computer Science:
Computation and Language