Where to Search: Measure the Prior-Structured Search Space of LLM Agents
By: Zhuo-Yang Song
Potential Business Impact:
Helps AI agents search more effectively for scientific discoveries.
The generate-filter-refine (iterative) paradigm built on large language models (LLMs) has made progress in reasoning, programming, and program discovery in AI+Science. However, the effectiveness of search depends on where to search: how the domain prior is encoded into an operationally structured hypothesis space. To this end, this paper proposes a compact formal theory that describes and measures LLM-assisted iterative search guided by domain priors. We represent an agent as a fuzzy relation operator between inputs and outputs that captures feasible transitions; the agent is thereby constrained to a fixed safety envelope. To describe multi-step reasoning and search, we weight all reachable paths by a single continuation parameter and sum them into a coverage generating function, which induces a measure of reachability difficulty and yields a geometric interpretation of search on the graph induced by the safety envelope. We further derive the simplest testable predictions and validate them via a majority-vote instantiation. The theory offers a workable language and operational tools for measuring agents and their search spaces, giving a systematic formal description of iterative search constructed by LLMs.
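As a rough illustration of the coverage-generating-function idea, the sketch below computes a path-weighted sum on a small finite graph. The 4-state transition matrix, the truncation depth, and the shortest-feasible-path difficulty measure are all illustrative assumptions, not the paper's actual construction or data:

```python
import numpy as np

# Safety envelope as a finite graph: A[i, j] in [0, 1] is the fuzzy
# membership of the transition i -> j (1 = clearly feasible, 0 = forbidden).
# This 4-state matrix is an illustrative assumption, not the paper's data.
A = np.array([
    [0.0, 0.9, 0.0, 0.0],
    [0.0, 0.0, 0.8, 0.3],
    [0.0, 0.0, 0.0, 0.7],
    [0.0, 0.0, 0.0, 0.0],
])

def coverage_generating_function(A, z, max_len=32):
    """Sum all paths of length 1..max_len, each weighted by z**length.

    Entry (s, t) is sum_k z**k * (A**k)[s, t], a truncated version of
    z*A*(I - z*A)^(-1); the series converges for |z| < 1/spectral_radius(A).
    """
    G = np.zeros_like(A)
    Ak = np.eye(A.shape[0])
    for k in range(1, max_len + 1):
        Ak = Ak @ A                # A**k: total fuzzy weight of length-k paths
        G += (z ** k) * Ak
    return G

def reachability_difficulty(A, s, t, max_len=32):
    """Smallest path length with nonzero weight from s to t (inf if none):
    one simple way to read a difficulty measure off the generating function."""
    Ak = np.eye(A.shape[0])
    for k in range(1, max_len + 1):
        Ak = Ak @ A
        if Ak[s, t] > 0:
            return k
    return np.inf

G = coverage_generating_function(A, z=0.5)
print(G[0, 3])                            # 0.1305: paths 0->1->3 and 0->1->2->3
print(reachability_difficulty(A, 0, 3))   # 2: shortest feasible path length
```

The continuation parameter z plays the role described in the abstract: it discounts longer paths, so states reachable only through long chains inside the safety envelope contribute less coverage and register as harder to reach.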
Similar Papers
Grammar Search for Multi-Agent Systems
Artificial Intelligence
Builds smarter AI agents with simpler, cheaper code.