On the Simplification of Neural Network Architectures for Predictive Process Monitoring

Published: September 21, 2025 | arXiv ID: 2509.17145v1

By: Amaan Ansari, Lukas Kirchdorfer, Raheleh Hadian

Potential Business Impact:

Enables process-prediction models that are far smaller and cheaper to run, making them practical to deploy at scale.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Predictive Process Monitoring (PPM) aims to forecast the future behavior of ongoing process instances using historical event data, enabling proactive decision-making. While recent advances rely heavily on deep learning models such as LSTMs and Transformers, their high computational cost hinders practical adoption. Prior work has explored data reduction techniques and alternative feature encodings, but the effect of simplifying model architectures themselves remains underexplored. In this paper, we analyze how reducing model complexity, both in terms of parameter count and architectural depth, impacts predictive performance, using two established PPM approaches. Across five diverse event logs, we show that shrinking the Transformer model by 85% results in only a 2-3% drop in performance across various PPM tasks, while the LSTM proves slightly more sensitive, particularly for waiting time prediction. Overall, our findings suggest that substantial model simplification can preserve predictive accuracy, paving the way for more efficient and scalable PPM solutions.
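To make the "shrinking by 85%" claim concrete, here is a minimal back-of-the-envelope sketch of how a Transformer encoder's parameter count scales with model width, depth, and feed-forward size. The configurations below are hypothetical illustrations, not the paper's actual hyperparameters:

```python
def transformer_params(d_model, n_layers, d_ff, vocab_size):
    """Approximate parameter count of a Transformer encoder
    (embedding + per-layer self-attention, feed-forward, layer norms)."""
    # token embedding table
    embed = vocab_size * d_model
    # self-attention: Q, K, V, and output projections (weights + biases)
    attn = 4 * (d_model * d_model + d_model)
    # position-wise feed-forward: two linear layers (weights + biases)
    ffn = (d_model * d_ff + d_ff) + (d_ff * d_model + d_model)
    # two layer norms per layer (scale + shift each)
    ln = 2 * 2 * d_model
    return embed + n_layers * (attn + ffn + ln)

# Hypothetical "full" vs. "reduced" configurations for a small event-log vocabulary
full = transformer_params(d_model=256, n_layers=4, d_ff=1024, vocab_size=50)
small = transformer_params(d_model=128, n_layers=2, d_ff=512, vocab_size=50)
reduction = 1 - small / full
print(f"full: {full:,}  small: {small:,}  reduction: {reduction:.0%}")
```

Halving the width and depth shrinks the model disproportionately (well over 80% here), because attention and feed-forward parameters grow roughly quadratically with `d_model`; this is the kind of architectural simplification the paper evaluates.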

Page Count
12 pages

Category
Computer Science:
Machine Learning (CS)