BUSTED at AraGenEval Shared Task: A Comparative Study of Transformer-Based Models for Arabic AI-Generated Text Detection

Published: October 23, 2025 | arXiv ID: 2510.20610v1

By: Ali Zain, Sareem Farooqui, Muhammad Rafi

Potential Business Impact:

Detects AI-generated Arabic text by fine-tuning transformer-based language models.

Business Areas:
Text Analytics, Data and Analytics, Software

This paper details our submission to the AraGenEval Shared Task on Arabic AI-generated text detection, where our team, BUSTED, secured 5th place. We investigated the effectiveness of three pre-trained transformer models: AraELECTRA, CAMeLBERT, and XLM-RoBERTa. Our approach involved fine-tuning each model on the provided dataset for a binary classification task. Our findings revealed a surprising result: the multilingual XLM-RoBERTa model achieved the highest performance with an F1 score of 0.7701, outperforming the specialized Arabic models. This work underscores the complexities of AI-generated text detection and highlights the strong generalization capabilities of multilingual models.
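The abstract does not give implementation details, but the described approach (fine-tuning a pre-trained transformer for binary human-vs-AI classification) typically looks like the following minimal sketch. It assumes the Hugging Face transformers and datasets libraries, the public xlm-roberta-base checkpoint, a placeholder dataset of (text, label) pairs, and illustrative hyperparameters; none of these are confirmed by the paper.

```python
# Minimal sketch: fine-tuning XLM-RoBERTa for binary classification of
# human- (label 0) vs. AI-generated (label 1) Arabic text.
# Dataset contents and hyperparameters are illustrative assumptions,
# not the authors' actual configuration.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import Dataset

# Hypothetical training data; replace with the shared-task dataset.
train_data = Dataset.from_dict({
    "text": ["example Arabic sentence", "another example sentence"],
    "label": [0, 1],
})

model_name = "xlm-roberta-base"  # AraELECTRA or CAMeLBERT checkpoints would slot in here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def tokenize(batch):
    # Truncate/pad so every example fits a fixed input length.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)

train_data = train_data.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="busted-xlmr",          # checkpoint directory (arbitrary name)
    num_train_epochs=3,                # assumed value
    per_device_train_batch_size=16,    # assumed value
    learning_rate=2e-5,                # common default for transformer fine-tuning
)
trainer = Trainer(model=model, args=args, train_dataset=train_data)
trainer.train()
```

In practice the same script would be run once per model, swapping only the checkpoint name, with the reported F1 score computed on the shared task's evaluation split.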

Country of Origin
🇵🇰 Pakistan

Page Count
5 pages

Category
Computer Science:
Computation and Language