Towards a standardized methodology and dataset for evaluating LLM-based digital forensic timeline analysis

Published: May 6, 2025 | arXiv ID: 2505.03100v1

By: Hudan Studiawan, Frank Breitinger, Mark Scanlon

Potential Business Impact:

Provides a standardized way to test how well AI tools analyze timelines of digital evidence in forensic investigations.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) have seen widespread adoption in many domains including digital forensics. While prior research has largely centered on case studies and examples demonstrating how LLMs can assist forensic investigations, deeper explorations remain limited, i.e., a standardized approach for precise performance evaluations is lacking. Inspired by the NIST Computer Forensic Tool Testing Program, this paper proposes a standardized methodology to quantitatively evaluate the application of LLMs for digital forensic tasks, specifically in timeline analysis. The paper describes the components of the methodology, including the dataset, timeline generation, and ground truth development. Additionally, the paper recommends using BLEU and ROUGE metrics for the quantitative evaluation of LLMs through case studies or tasks involving timeline analysis. Experimental results using ChatGPT demonstrate that the proposed methodology can effectively evaluate LLM-based forensic timeline analysis. Finally, we discuss the limitations of applying LLMs to forensic timeline analysis.
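The BLEU and ROUGE metrics the paper recommends both score an LLM's output against a ground-truth reference by counting overlapping n-grams. As a minimal illustration, the sketch below implements unigram-level versions (BLEU-1 with a brevity penalty, and ROUGE-1 F1) in pure Python; the example timeline strings are invented here for illustration, and the paper's actual evaluation uses the full multi-n-gram metrics as implemented in standard libraries.

```python
from collections import Counter
import math

def bleu1(reference_tokens, candidate_tokens):
    # Modified unigram precision with a brevity penalty.
    # (Full BLEU averages n-gram precisions up to n=4; this is BLEU-1 only.)
    ref_counts = Counter(reference_tokens)
    cand_counts = Counter(candidate_tokens)
    # Clip each candidate token's count by its count in the reference.
    overlap = sum(min(c, ref_counts[t]) for t, c in cand_counts.items())
    precision = overlap / max(len(candidate_tokens), 1)
    # Penalize candidates shorter than the reference.
    if len(candidate_tokens) >= len(reference_tokens):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference_tokens) / max(len(candidate_tokens), 1))
    return bp * precision

def rouge1_f1(reference_tokens, candidate_tokens):
    # ROUGE-1: unigram overlap scored as recall against the reference,
    # combined with precision into an F1 score.
    ref_counts = Counter(reference_tokens)
    cand_counts = Counter(candidate_tokens)
    overlap = sum(min(c, ref_counts[t]) for t, c in cand_counts.items())
    recall = overlap / max(len(reference_tokens), 1)
    precision = overlap / max(len(candidate_tokens), 1)
    if recall + precision == 0:
        return 0.0
    return 2 * recall * precision / (recall + precision)

# Hypothetical ground-truth timeline entry vs. an LLM-generated summary.
ground_truth = "user account admin logged in at 09:14 then deleted log files".split()
llm_summary = "admin logged in at 09:14 and deleted log files".split()
print(round(bleu1(ground_truth, llm_summary), 3))     # → 0.712
print(round(rouge1_f1(ground_truth, llm_summary), 3)) # → 0.8
```

In practice one would use established implementations (e.g. NLTK's BLEU or the `rouge-score` package) rather than hand-rolled metrics, but the scoring principle is the same: higher overlap with the ground-truth timeline narrative yields a higher score.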

Country of Origin
🇮🇪 🇮🇩 Ireland, Indonesia

Page Count
12 pages

Category
Computer Science:
Cryptography and Security