A Declarative Language for Building and Orchestrating LLM-Powered Agent Workflows
By: Ivan Daunis
Building deployment-ready LLM agents requires complex orchestration of tools, data sources, and control-flow logic, yet existing systems tightly couple agent logic to specific programming languages and deployment models. We present a declarative system that separates agent workflow specification from implementation, enabling the same pipeline definition to execute across multiple backend languages (Java, Python, Go) and deployment environments (cloud-native, on-premises). Our key insight is that most agent workflows consist of common patterns -- data serialization, filtering, RAG retrieval, API orchestration -- that can be expressed through a unified DSL rather than imperative code. This approach transforms agent development from application programming to configuration: adding new tools or fine-tuning agent behaviors requires only changes to the pipeline specification, not a code deployment. Our system natively supports A/B testing of agent strategies, allowing multiple pipeline variants to run on the same backend infrastructure with automatic metric collection and comparison. We evaluate our approach on real-world e-commerce workflows at PayPal, processing millions of daily interactions. Our results demonstrate a 60% reduction in development time and a 3x improvement in deployment velocity compared to imperative implementations. The language's declarative approach enables non-engineers to modify agent behaviors safely while maintaining sub-100ms orchestration overhead. We show that complex workflows involving product search, personalization, and cart management can be expressed in under 50 lines of DSL, compared to 500+ lines of imperative code.
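The abstract does not reproduce the DSL's concrete syntax, so the fragment below is a hypothetical sketch in Python (one of the paper's stated backend languages) of the core idea: a workflow expressed as declarative step data, interpreted by a generic orchestrator, with two variants sharing one backend for A/B comparison. All names here (`PIPELINES`, the `op` registry, `run_pipeline`, the step schema) are illustrative assumptions, not the paper's actual interface.

```python
# Hypothetical illustration only: the paper does not publish its DSL syntax.
# This sketch models its stated idea -- agent workflows as declarative data
# interpreted by a generic orchestrator -- using plain Python structures.

import random
from typing import Any, Callable

# A pipeline is a named list of steps; each step declares an operation and
# its parameters instead of containing imperative code.
PIPELINES = {
    "product_search_v1": [
        {"op": "retrieve", "source": "product_index", "top_k": 20},
        {"op": "filter", "field": "in_stock", "equals": True},
        {"op": "rank", "strategy": "personalized"},
    ],
    # A/B variant: same backend, different retrieval and ranking parameters.
    "product_search_v2": [
        {"op": "retrieve", "source": "product_index", "top_k": 50},
        {"op": "filter", "field": "in_stock", "equals": True},
        {"op": "rank", "strategy": "popularity"},
    ],
}

# Registry mapping declared operations to backend implementations; adding a
# new tool means registering a handler, not changing the pipeline engine.
OPS: dict[str, Callable[[dict[str, Any], list[dict]], list[dict]]] = {}

def op(name: str):
    def register(fn):
        OPS[name] = fn
        return fn
    return register

@op("retrieve")
def retrieve(step, items):
    # Stand-in for a RAG/index lookup; a real system would call a retriever.
    catalog = [{"sku": i, "in_stock": i % 2 == 0, "score": random.random()}
               for i in range(100)]
    return catalog[: step["top_k"]]

@op("filter")
def filter_step(step, items):
    return [x for x in items if x.get(step["field"]) == step["equals"]]

@op("rank")
def rank(step, items):
    # "strategy" would select a ranking model; both variants share this stub.
    return sorted(items, key=lambda x: x["score"], reverse=True)

def run_pipeline(name: str, items=None):
    """Interpret a declarative pipeline by dispatching each step's op."""
    for step in PIPELINES[name]:
        items = OPS[step["op"]](step, items or [])
    return items

def run_with_experiment(variants, weights):
    """A/B testing: route a request to a variant and record basic metrics."""
    name = random.choices(variants, weights=weights)[0]
    results = run_pipeline(name)
    return name, {"variant": name, "n_results": len(results)}

if __name__ == "__main__":
    variant, metrics = run_with_experiment(
        ["product_search_v1", "product_search_v2"], weights=[0.5, 0.5])
    print(variant, metrics)
```

Under this framing, changing agent behavior means editing a pipeline spec that references registered handlers; the spec change alone, with no code deployment, alters the workflow, which is what would let non-engineers modify agent behavior safely.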