Tiny Recursive Control: Iterative Reasoning for Efficient Optimal Control
By: Amit Jain, Richard Linares
Neural network controllers increasingly demand millions of parameters, and language model approaches push into the billions. For embedded aerospace systems with strict power and latency constraints, this scaling is prohibitive. We present Tiny Recursive Control (TRC), a neural architecture based on a counterintuitive principle: capacity can emerge from iteration depth rather than parameter count. TRC applies compact networks (approximately 1.5M parameters) repeatedly through a two-level hierarchical latent structure, refining a control sequence by simulating the resulting trajectory and correcting it based on the tracking error. Because the same weights process every refinement step, adding iterations increases computation without increasing memory. We evaluate TRC on nonlinear control problems including oscillator stabilization and powered descent with fuel constraints. Across these domains, TRC achieves near-optimal control costs while requiring only millisecond-scale inference on a GPU and under 10 MB of memory, two orders of magnitude less than language model baselines. These results demonstrate that recursive reasoning, previously confined to discrete tasks, transfers effectively to continuous control synthesis.
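The abstract does not give implementation details, so the sketch below is only one plausible reading of the described mechanism: a single compact network with shared weights, a low-level latent updated several times per outer step, a high-level latent updated once per outer step, and a control sequence corrected from the simulated tracking error. All names (`TinyRecursiveController`, `rollout`, `vdp_step`), layer sizes, latent dimensions, and iteration counts are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


def rollout(dynamics, x0, controls):
    """Simulate the system forward under a candidate control sequence."""
    xs, x = [], x0
    for u in controls.unbind(dim=1):               # controls: (batch, horizon, u_dim)
        x = dynamics(x, u)
        xs.append(x)
    return torch.stack(xs, dim=1)                  # (batch, horizon, x_dim)


class TinyRecursiveController(nn.Module):
    """Hypothetical TRC-style refiner: one compact network, applied repeatedly.

    A low-level latent is updated several times per outer step; a high-level
    latent is updated once per outer step and conditions the control update.
    All refinement steps share the same weights, so extra iterations add
    compute but no extra parameters.
    """

    def __init__(self, x_dim, u_dim, horizon, hidden=128, latent=64):
        super().__init__()
        self.horizon, self.u_dim = horizon, u_dim
        feat = x_dim + horizon * (x_dim + u_dim)          # state + tracking error + controls
        self.low_cell = nn.GRUCell(feat + latent, latent)  # low-level latent update
        self.high_cell = nn.GRUCell(latent, latent)        # high-level latent update
        self.head = nn.Sequential(                         # control-correction head
            nn.Linear(latent + latent, hidden), nn.Tanh(),
            nn.Linear(hidden, horizon * u_dim),
        )

    def forward(self, x0, x_ref, dynamics, n_outer=3, n_inner=4):
        b = x0.shape[0]
        controls = x0.new_zeros(b, self.horizon, self.u_dim)
        z_low = x0.new_zeros(b, self.low_cell.hidden_size)
        z_high = x0.new_zeros(b, self.high_cell.hidden_size)

        for _ in range(n_outer):                    # outer refinement steps
            traj = rollout(dynamics, x0, controls)
            err = x_ref - traj                      # tracking error drives the update
            feat = torch.cat([x0, err.flatten(1), controls.flatten(1)], dim=1)
            for _ in range(n_inner):                # low-level latent iterations
                z_low = self.low_cell(torch.cat([feat, z_high], dim=1), z_low)
            z_high = self.high_cell(z_low, z_high)  # one high-level update per outer step
            delta = self.head(torch.cat([z_low, z_high], dim=1))
            controls = controls + delta.view(b, self.horizon, self.u_dim)
        return controls


# Example: refine controls for a discretized Van der Pol oscillator (illustrative only).
def vdp_step(x, u, dt=0.05, mu=1.0):
    p, v = x[:, :1], x[:, 1:]
    return torch.cat([p + dt * v, v + dt * (mu * (1 - p**2) * v - p + u)], dim=1)

ctrl = TinyRecursiveController(x_dim=2, u_dim=1, horizon=20)
x0 = torch.randn(8, 2)
x_ref = torch.zeros(8, 20, 2)                      # regulate toward the origin
u_seq = ctrl(x0, x_ref, vdp_step)                  # (8, 20, 1) refined control sequence
```

Because the same `low_cell`, `high_cell`, and `head` weights are reused at every iteration, raising `n_outer` or `n_inner` increases computation without adding parameters, which mirrors the abstract's point that capacity comes from iteration depth rather than model size.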