Enhanced-FQL($λ$): An Efficient and Interpretable RL Framework with Novel Fuzzy Eligibility Traces and Segmented Experience Replay
By: Mohsen Jalaeian-Farimani
Potential Business Impact:
Teaches robots to learn with fewer mistakes.
This paper introduces Enhanced-FQL($λ$), a fuzzy reinforcement learning framework that integrates novel Fuzzified Eligibility Traces (FET) and Segmented Experience Replay (SER) into fuzzy Q-learning through a Fuzzified Bellman Equation (FBE) for continuous control tasks. The approach employs an interpretable fuzzy rule base instead of complex neural architectures while maintaining competitive performance through two key innovations: a fuzzified Bellman equation with eligibility traces for stable multi-step credit assignment, and a memory-efficient segment-based experience replay mechanism for improved sample efficiency. Theoretical analysis proves convergence of the proposed method under standard assumptions. Extensive evaluations in continuous control domains demonstrate that Enhanced-FQL($λ$) achieves superior sample efficiency and reduced variance compared to n-step fuzzy TD and fuzzy SARSA($λ$) baselines, while maintaining substantially lower computational complexity than deep RL alternatives such as DDPG. The framework's inherent interpretability, combined with its computational efficiency and theoretical convergence guarantees, makes it particularly well suited to safety-critical applications where transparency and resource constraints are essential.
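Since this listing gives only the abstract, the sketch below illustrates the two ideas it names in a generic FQL($λ$)-style form. It assumes Gaussian rule memberships over a scalar state, a Watkins-style max target, and accumulating traces decayed by $γλ$; the names FuzzyQLambda and SegmentedReplay, and all hyperparameters, are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch of FQL(lambda)-style updates with fuzzy eligibility
# traces and a segmented replay buffer. Names, shapes, and hyperparameters
# are illustrative assumptions, not the paper's code.

def gaussian_memberships(state, centers, widths):
    """Normalized firing strength of each fuzzy rule for a scalar state."""
    phi = np.exp(-0.5 * ((state - centers) / widths) ** 2)
    return phi / phi.sum()

class FuzzyQLambda:
    def __init__(self, centers, widths, n_actions, alpha=0.1, gamma=0.99, lam=0.9):
        self.centers, self.widths = centers, widths
        self.q = np.zeros((len(centers), n_actions))  # per-rule action values
        self.e = np.zeros_like(self.q)                # fuzzy eligibility traces
        self.alpha, self.gamma, self.lam = alpha, gamma, lam

    def q_value(self, phi, action):
        # Global Q(s, a): firing-strength-weighted sum of rule consequents.
        return phi @ self.q[:, action]

    def update(self, s, a, r, s_next, done):
        phi = gaussian_memberships(s, self.centers, self.widths)
        phi_next = gaussian_memberships(s_next, self.centers, self.widths)
        target = r if done else r + self.gamma * np.max(phi_next @ self.q)
        delta = target - self.q_value(phi, a)
        self.e[:, a] += phi                    # traces grow where rules fired
        self.q += self.alpha * delta * self.e  # multi-step credit assignment
        self.e *= self.gamma * self.lam        # exponential trace decay

class SegmentedReplay:
    """Buffer split into fixed-size segments; sampling mixes segments so a
    batch blends old and recent experience with bounded memory per segment."""
    def __init__(self, n_segments=4, segment_size=250, seed=0):
        self.segments = [[] for _ in range(n_segments)]
        self.segment_size, self.cursor = segment_size, 0
        self.rng = np.random.default_rng(seed)

    def add(self, transition):
        seg = self.segments[self.cursor]
        seg.append(transition)
        if len(seg) >= self.segment_size:  # segment full: rotate, overwrite oldest
            self.cursor = (self.cursor + 1) % len(self.segments)
            self.segments[self.cursor] = []

    def sample(self, batch_size):
        filled = [s for s in self.segments if s]
        picks = self.rng.integers(len(filled), size=batch_size)
        return [filled[i][self.rng.integers(len(filled[i]))] for i in picks]
```

In this sketch the trace matrix plays the role the abstract assigns to FET: each TD error is propagated to every rule in proportion to how strongly, and how recently, it fired, while the segmented buffer caps memory per segment rather than per buffer.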
Similar Papers
QeRL: Beyond Efficiency -- Quantization-enhanced Reinforcement Learning for LLMs
Machine Learning (CS)
Makes smart computer programs learn faster, cheaper.
Expressive Temporal Specifications for Reward Monitoring
Machine Learning (CS)
Teaches robots to learn faster with better feedback.