PoETa v2: Toward More Robust Evaluation of Large Language Models in Portuguese
By: Thales Sales Almeida, Rodrigo Nogueira, Hélio Pedrini
Potential Business Impact:
Benchmarks how well AI language models understand Portuguese.
Large Language Models (LLMs) exhibit significant variations in performance across linguistic and cultural contexts, underscoring the need for systematic evaluation in diverse languages. In this work, we present the most extensive evaluation of LLMs for the Portuguese language to date. Leveraging our newly introduced PoETa v2 benchmark -- a comprehensive suite of over 40 tasks in Portuguese -- we assess more than 20 models covering a broad spectrum of training scales and computational resources. Our study reveals how computational investment and language-specific adaptation impact performance in Portuguese, while also analyzing performance gaps in comparison to equivalent tasks in English. Through this benchmark and analysis, PoETa v2 lays the groundwork for future research on Portuguese language modeling and evaluation. The benchmark is available at https://github.com/PoETaV2/PoETaV2.
Similar Papers
CAMÕES: A Comprehensive Automatic Speech Recognition Benchmark for European Portuguese
Computation and Language
Helps computers understand European Portuguese speech better.
Tradutor: Building a Variety Specific Translation Model
Computation and Language
Builds a translation model for a specific variety of Portuguese.
Zero-shot Performance of Generative AI in Brazilian Portuguese Medical Exam
Computation and Language
Tests generative AI on Brazilian Portuguese medical exam questions.