RLPO: Residual Listwise Preference Optimization for Long-Context Review Ranking
By: Hao Jiang, Zhi Yang, Annan Wang, and more
Review ranking is pivotal in e-commerce for prioritizing diagnostic and authentic feedback from the deluge of user-generated content. While large language models have improved semantic assessment, existing ranking paradigms face a persistent trade-off in long-context settings. Pointwise scoring is efficient but often fails to account for list-level interactions, leading to miscalibrated top-$k$ rankings. Listwise approaches can leverage global context, yet they are computationally expensive and become unstable as candidate lists grow. To address this, we propose Residual Listwise Preference Optimization (RLPO), which formulates ranking as a representation-level listwise residual correction over a strong pointwise LLM scorer. RLPO first produces calibrated pointwise scores and item representations, then applies a lightweight encoder over the representations to predict listwise score residuals, avoiding full token-level listwise processing. We also introduce a large-scale benchmark for long-context review ranking with human verification. Experiments show RLPO improves NDCG@$k$ over strong pointwise and listwise baselines and remains robust as list length increases.
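The abstract does not specify the residual head or the training objective, so the sketch below is illustrative rather than the paper's implementation. It assumes pooled per-review representations and calibrated scores already produced by a frozen pointwise LLM scorer, a small Transformer encoder (here called `ResidualListwiseHead`, a hypothetical name) that predicts per-item residuals from the whole candidate list, and a ListNet-style softmax loss as a stand-in listwise objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualListwiseHead(nn.Module):
    """Hypothetical lightweight encoder that reads frozen per-item representations
    from a pointwise LLM scorer and predicts per-item score residuals."""

    def __init__(self, d_model: int = 768, n_heads: int = 8, n_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=4 * d_model,
            batch_first=True, norm_first=True,
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.residual_proj = nn.Linear(d_model, 1)

    def forward(self, item_reprs: torch.Tensor, padding_mask: torch.Tensor) -> torch.Tensor:
        # item_reprs: (batch, list_len, d_model) pooled representation per review
        # padding_mask: (batch, list_len), True where the slot is padding
        ctx = self.encoder(item_reprs, src_key_padding_mask=padding_mask)
        return self.residual_proj(ctx).squeeze(-1)  # (batch, list_len) residuals


def rlpo_scores(pointwise_scores, item_reprs, padding_mask, head):
    """Final score = frozen pointwise score + listwise residual correction."""
    return pointwise_scores + head(item_reprs, padding_mask)


def listwise_loss(final_scores, relevance, padding_mask):
    """ListNet-style softmax cross-entropy over the candidate list (stand-in loss)."""
    neg = -1e9  # large negative value keeps padded slots out of the softmax yet finite
    final_scores = final_scores.masked_fill(padding_mask, neg)
    relevance = relevance.masked_fill(padding_mask, neg)
    log_p = F.log_softmax(final_scores, dim=-1)
    target = F.softmax(relevance, dim=-1)
    return -(target * log_p).sum(dim=-1).mean()


if __name__ == "__main__":
    batch, list_len, d_model = 2, 50, 768
    head = ResidualListwiseHead(d_model=d_model)

    # Stand-ins for the outputs of the pointwise LLM scorer (not trained here).
    pointwise_scores = torch.randn(batch, list_len)
    item_reprs = torch.randn(batch, list_len, d_model)
    padding_mask = torch.zeros(batch, list_len, dtype=torch.bool)
    relevance = torch.randint(0, 5, (batch, list_len)).float()

    scores = rlpo_scores(pointwise_scores, item_reprs, padding_mask, head)
    loss = listwise_loss(scores, relevance, padding_mask)
    loss.backward()  # gradients reach only the lightweight residual head
    print(scores.shape, loss.item())
```

Under these assumptions the listwise pass operates over one pooled vector per review rather than over the full token sequence, which is what keeps the correction cheap relative to a token-level listwise reranker.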
Similar Papers
In-context Ranking Preference Optimization
Machine Learning (CS)
Helps computers learn to rank answers better.
SoLoPO: Unlocking Long-Context Capabilities in LLMs via Short-to-Long Preference Optimization
Computation and Language
Helps AI understand long stories better.
CRPO: Confidence-Reward Driven Preference Optimization for Machine Translation
Computation and Language
Improves computer translation by picking harder examples.