Reinforced Interactive Continual Learning via Real-time Noisy Human Feedback

Published: May 15, 2025 | arXiv ID: 2505.09925v1

By: Yutao Yang, Jie Zhou, Junsong Li, and more

Potential Business Impact:

AI learns new skills from people in real time, even when their feedback contains mistakes.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

This paper introduces an interactive continual learning paradigm in which AI models dynamically learn new skills from real-time human feedback while retaining prior knowledge. The paradigm addresses two major limitations of traditional continual learning: (1) reliance on static datasets with fixed labels, addressed through dynamic model updates on streaming, real-time human-annotated data, and (2) the assumption of clean labels, addressed by explicitly handling the noisy feedback common in real-world interactions. To tackle these problems, we propose RiCL, a Reinforced Interactive Continual Learning framework that leverages Large Language Models (LLMs) to learn new skills effectively from dynamic feedback. RiCL incorporates three key components: a temporal consistency-aware purifier that automatically separates clean from noisy samples in data streams; an interaction-aware direct preference optimization strategy that aligns model behavior with human intent by reconciling AI-generated and human-provided feedback; and a noise-resistant contrastive learning module that captures robust representations by exploiting inherent data relationships, avoiding reliance on potentially unreliable labels. Extensive experiments on two benchmark datasets (FewRel and TACRED), contaminated with realistic noise patterns, show that RiCL substantially outperforms existing combinations of state-of-the-art online continual learning and noisy-label learning methods.
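The preference-alignment component builds on direct preference optimization (DPO). As a point of reference, below is a minimal sketch of the standard DPO loss that such a strategy extends; the paper's interaction-aware variant, which reconciles AI-generated and human feedback, is not specified in this summary, so the function name and arguments here are illustrative assumptions.

```python
# Minimal sketch of the standard DPO objective (assumption: RiCL's
# interaction-aware variant builds on this form; details are not in the
# abstract, and all names below are illustrative).
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Each tensor holds summed token log-probabilities of the preferred
    ("chosen") and dispreferred ("rejected") responses under the current
    policy and a frozen reference model."""
    # Implicit rewards: beta-scaled log-ratios against the reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between preferred and dispreferred responses.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

One property that makes this form a plausible fit for continual learning: the beta-scaled log-ratios implicitly regularize the policy toward the frozen reference model, which dovetails with the goal of retaining prior knowledge while learning from new feedback.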

Country of Origin
🇨🇳 China

Page Count
15 pages

Category
Computer Science:
Machine Learning (CS)