LLM-Driven Composite Neural Architecture Search for Multi-Source RL State Encoding

Published: December 7, 2025 | arXiv ID: 2512.06982v1

By: Yu Yu, Qian Xie, Nairen Cao, and more

Potential Business Impact:

Helps RL systems (e.g., robots or autonomous traffic controllers) learn faster from multiple kinds of input.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Designing state encoders for reinforcement learning (RL) with multiple information sources -- such as sensor measurements, time-series signals, image observations, and textual instructions -- remains underexplored and often requires manual design. We formalize this challenge as a problem of composite neural architecture search (NAS), where multiple source-specific modules and a fusion module are jointly optimized. Existing NAS methods overlook useful side information from the intermediate outputs of these modules -- such as their representation quality -- limiting sample efficiency in multi-source RL settings. To address this, we propose an LLM-driven NAS pipeline that leverages language-model priors and intermediate-output signals to guide sample-efficient search for high-performing composite state encoders. On a mixed-autonomy traffic control task, our approach discovers higher-performing architectures with fewer candidate evaluations than traditional NAS baselines and the LLM-based GENIUS framework.
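The composite encoder described above can be sketched as a set of source-specific modules whose intermediate outputs feed a fusion module. The sketch below is a minimal illustration under assumed choices (linear maps with ReLU, concatenation fusion, placeholder dimensions); it is not the paper's actual search space, but it shows the structure being searched over and the intermediate outputs an LLM-driven search could inspect.

```python
import numpy as np

# Hypothetical sketch of a composite state encoder: one encoder per
# information source plus a fusion module. All module choices here
# (linear map + ReLU, concatenation fusion, dimensions) are illustrative
# assumptions, not the architectures the paper searches over.

rng = np.random.default_rng(0)

def linear_encoder(in_dim, out_dim):
    """Return a toy per-source encoder: a fixed random linear map + ReLU."""
    W = rng.standard_normal((in_dim, out_dim)) * 0.1
    def encode(x):
        return np.maximum(x @ W, 0.0)  # ReLU nonlinearity
    return encode

# Source-specific modules (input dims are placeholders):
# sensor readings (8-d), time-series summary (16-d), image features (32-d).
encoders = {
    "sensor": linear_encoder(8, 4),
    "series": linear_encoder(16, 4),
    "image": linear_encoder(32, 4),
}

def fuse(latents):
    """Fusion module: concatenate per-source latents into one state vector."""
    return np.concatenate(latents, axis=-1)

def encode_state(obs):
    """Encode a dict of raw observations into a single RL state vector.

    Also returns the intermediate per-source outputs -- the kind of side
    information (e.g., representation quality) that the proposed pipeline
    uses to guide the architecture search.
    """
    latents = [encoders[k](obs[k]) for k in sorted(encoders)]
    return fuse(latents), latents

obs = {
    "sensor": rng.standard_normal(8),
    "series": rng.standard_normal(16),
    "image": rng.standard_normal(32),
}
state, intermediates = encode_state(obs)
print(state.shape)  # (12,): three sources, each with a 4-d latent
```

In a NAS setting, the search would vary each per-source module and the fusion module jointly; here they are fixed only to keep the sketch self-contained.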

Country of Origin
🇨🇳 🇺🇸 China, United States

Page Count
14 pages

Category
Computer Science:
Machine Learning (CS)