ReLook: Vision-Grounded RL with a Multimodal LLM Critic for Agentic Web Coding
By: Yuhang Li, Chenchen Zhang, Ruilin Lv, and more
Potential Business Impact:
Helps computers build websites by looking at them.
While Large Language Models (LLMs) excel at algorithmic code generation, they struggle with front-end development, where correctness is judged on rendered pixels and interaction. We present ReLook, an agentic, vision-grounded reinforcement learning framework that empowers an agent to close a robust generate-diagnose-refine loop by invoking a multimodal LLM (MLLM) as a tool. During training, the agent uses the MLLM-in-the-loop both as a visual critic, scoring code from screenshots, and as a source of actionable, vision-grounded feedback; a strict zero-reward rule for invalid renders anchors renderability and prevents reward hacking. To prevent behavioral collapse, we introduce Forced Optimization, a strict acceptance rule that admits only improving revisions, yielding monotonically better trajectories. At inference, we decouple the critic and run a lightweight, critic-free self-edit cycle, keeping latency comparable to base decoding while retaining most of the gains. Across three widely used benchmarks, ReLook consistently outperforms strong baselines in vision-grounded front-end code generation, highlighting the benefits of agentic perception, visual rewards, and training-inference decoupling.
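To make the two training rules described above concrete, here is a minimal Python sketch of the zero-reward rule for invalid renders and the Forced Optimization acceptance loop. All function names (render_page, mllm_critic, revise_code), the round count, and the score scale are illustrative assumptions, not the authors' implementation.

```python
# Sketch of ReLook's visual reward and Forced Optimization acceptance loop.
# render_page, mllm_critic, and revise_code are hypothetical placeholders.

from typing import Optional, Tuple


def render_page(code: str) -> Optional[bytes]:
    """Render front-end code to a screenshot; return None if rendering fails."""
    raise NotImplementedError  # e.g., a headless-browser renderer


def mllm_critic(code: str, screenshot: bytes) -> Tuple[float, str]:
    """Score the rendered page with a multimodal LLM and return visual feedback."""
    raise NotImplementedError  # e.g., an MLLM API call


def revise_code(code: str, feedback: str) -> str:
    """Ask the policy model to revise the code given vision-grounded feedback."""
    raise NotImplementedError  # e.g., the policy LLM being trained


def visual_reward(code: str) -> Tuple[float, str]:
    """Strict zero-reward rule: an invalid render earns exactly 0,
    anchoring renderability and removing the incentive to hack the critic."""
    screenshot = render_page(code)
    if screenshot is None:
        return 0.0, "render failed"
    return mllm_critic(code, screenshot)


def forced_optimization(code: str, max_rounds: int = 3) -> str:
    """Accept a revision only if it strictly improves the critic score,
    so the sequence of kept revisions is monotonically better."""
    best_score, feedback = visual_reward(code)
    for _ in range(max_rounds):
        candidate = revise_code(code, feedback)
        score, new_feedback = visual_reward(candidate)
        if score > best_score:  # strict acceptance rule
            code, best_score, feedback = candidate, score, new_feedback
        # non-improving revisions are discarded; retry from the best code so far
    return code
```

At inference, the paper drops the MLLM critic entirely and runs a lightweight self-edit cycle, so the loop above applies only during training.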
Similar Papers
RECODE: Reasoning Through Code Generation for Visual Question Answering
CV and Pattern Recognition
Makes computers understand charts by turning them into code.
RECODE-H: A Benchmark for Research Code Development with Interactive Human Feedback
Computation and Language
Helps AI write better science code with feedback.
Grounding Multimodal LLMs to Embodied Agents that Ask for Help with Reinforcement Learning
Artificial Intelligence
Robots learn to ask questions to do jobs better.