Efficient Reinforcement Learning from Human Feedback via Bayesian Preference Inference

Published: November 6, 2025 | arXiv ID: 2511.04286v1

By: Matteo Cercola, Valeria Capretti, Simone Formentin

Potential Business Impact:

Teaches computers faster by asking people what they like.

Business Areas:
Personalization, Commerce and Shopping

Learning from human preferences is a cornerstone of aligning machine learning models with subjective human judgments. Yet collecting such preference data is often costly and time-consuming, motivating the need for more efficient learning paradigms. Two established approaches offer complementary advantages: reinforcement learning from human feedback (RLHF) scales effectively to high-dimensional tasks such as LLM fine-tuning, while preferential Bayesian optimization (PBO) achieves greater sample efficiency through active querying. We propose a hybrid framework that unifies RLHF's scalability with PBO's query efficiency by integrating an acquisition-driven module into the RLHF pipeline, thereby enabling active and sample-efficient preference gathering. We validate the proposed approach on two representative domains: (i) high-dimensional preference optimization and (ii) LLM fine-tuning. Experimental results demonstrate consistent improvements in both sample efficiency and overall performance across these tasks.
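To make the idea of an acquisition-driven query module concrete, here is a minimal sketch (not the authors' code, and not their exact acquisition rule): a Bradley-Terry reward model fit on preference pairs, with each new query pair chosen by a simple uncertainty-style acquisition (predicted preference closest to 0.5). The feature map, simulated "human" oracle, and hyperparameters are all illustrative assumptions.

```python
# Minimal sketch (assumptions noted above): active preference querying for
# reward-model learning, combining RLHF-style Bradley-Terry fitting with a
# PBO-style acquisition step that picks the most informative pair to label.
import numpy as np

rng = np.random.default_rng(0)

def features(x):
    # Hypothetical feature map for a candidate point/response x.
    return np.array([x, x**2, np.sin(3 * x)])

def pref_prob(w, xa, xb):
    # Bradley-Terry: P(a preferred over b) = sigmoid(r(a) - r(b)).
    return 1.0 / (1.0 + np.exp(-(features(xa) @ w - features(xb) @ w)))

def fit_reward(pairs, labels, steps=500, lr=0.1):
    # Logistic-regression fit of the linear reward weights w on observed labels.
    w = np.zeros(3)
    for _ in range(steps):
        grad = np.zeros_like(w)
        for (xa, xb), y in zip(pairs, labels):
            p = pref_prob(w, xa, xb)
            grad += (p - y) * (features(xa) - features(xb))
        w -= lr * grad / max(len(pairs), 1)
    return w

def acquire(w, candidates, n_pairs=200):
    # Acquisition step: sample random pairs and keep the one whose predicted
    # preference is most uncertain (closest to 0.5) -- a simple stand-in for
    # the acquisition-driven module described in the abstract.
    best, best_score = None, -1.0
    for _ in range(n_pairs):
        xa, xb = rng.choice(candidates, 2, replace=False)
        score = 1.0 - abs(pref_prob(w, xa, xb) - 0.5)
        if score > best_score:
            best, best_score = (xa, xb), score
    return best

def oracle(xa, xb):
    # Simulated human: prefers points with higher hidden utility u(x) = -(x-1)^2.
    return 1.0 if -(xa - 1) ** 2 > -(xb - 1) ** 2 else 0.0

candidates = np.linspace(-2, 2, 101)
pairs, labels, w = [], [], np.zeros(3)
for _ in range(30):                      # active preference-gathering loop
    xa, xb = acquire(w, candidates)      # choose the most informative query
    pairs.append((xa, xb)); labels.append(oracle(xa, xb))
    w = fit_reward(pairs, labels)        # refit the reward model on all feedback

best_x = max(candidates, key=lambda x: features(x) @ w)
print(f"learned optimum ~ {best_x:.2f} (true optimum 1.00)")
```

In an actual RLHF pipeline, the learned reward model would then drive policy optimization; the point of the sketch is only that each human query is selected by an acquisition score rather than drawn at random.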

Country of Origin
🇮🇹 Italy

Page Count
7 pages

Category
Computer Science:
Machine Learning (CS)