AgentX: Automating AI Agent Creation with No-Code, Prompt-Driven Orchestration

Published: 22 Sept 2025, Last Modified: 22 Sept 2025 · WiML @ NeurIPS 2025 · CC BY 4.0
Keywords: AI Agent, No-Code, Prompt-Driven Orchestration, Model Context Protocol, Retrieval-Augmented Generation, Multi-Agent Systems, Workflow Automation
Abstract: Building autonomous agents typically demands significant technical expertise: programming skills, orchestration frameworks, and careful prompt engineering. This barrier limits participation in AI development to specialists, excluding a wide range of practitioners who could benefit from task-specific automation. We introduce AgentX, a no-code platform that automatically constructs and orchestrates AI agents directly from user prompts. By lowering the entry threshold for agent design, AgentX advances the democratization of AI while providing a foundation for studying scalable multi-agent collaboration.

At the core of AgentX is a prompt-to-agent paradigm: users specify a task in natural language, optionally adding context and desired output schemas, and the system automatically generates the corresponding agent or multi-agent workflow. Context can be enriched with external knowledge through Retrieval-Augmented Generation (RAG) or with connected tools via the Model Context Protocol (MCP), enabling agents to reason over both internal and external information sources. Simple tasks are executed by a single agent, while more complex instructions are decomposed into subtasks by a planner module that produces a task graph. These subtasks are automatically assigned to specialized agents through a capability registry and orchestrated to collaborate via a shared blackboard architecture. The system enforces schema-level validation of intermediate outputs to ensure correctness and composability.

Two technical innovations distinguish AgentX from prior frameworks. First, it supports automatic task decomposition and orchestration without user intervention, enabling scalable workflows in which multiple agents collaborate asynchronously yet coherently. Second, it introduces human-in-the-loop contextualization, allowing users to influence agent behavior while preserving automation. Compared to existing frameworks such as LangChain and AutoGen, AgentX eliminates coding requirements, enabling non-experts to build general-purpose agents while maintaining flexibility and robustness.

We further propose a methodology for continuous improvement of AgentX through reinforcement-style updates. By logging successful agent runs and scoring them with human preferences and automated heuristics, we iteratively fine-tune the underlying models and exemplar selection strategies. This hybrid approach, combining supervised learning, reward modeling, and few-shot retrieval, enables the platform to adapt dynamically to new tasks and improve over time.

We present preliminary experiments on benchmark tasks spanning information retrieval, text synthesis, and multi-step planning. Results indicate that AgentX achieves task success rates comparable or superior to monolithic LLM baselines, while offering significant improvements in usability and transparency. Ongoing work investigates safety, reward-hacking mitigation, and user-centered evaluations to ensure that the system remains reliable in real-world deployments.

AgentX opens a new research direction at the intersection of AI systems, human-computer interaction, and democratized automation. By enabling prompt-driven, no-code agent creation with automatic orchestration and contextual enrichment via RAG and MCP, we take a step toward making sophisticated AI workflows accessible to a broader community.
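To make the orchestration pipeline concrete, the following minimal Python sketch illustrates the flow described in the abstract under stated assumptions: a planner-produced task graph, a capability registry mapping subtasks to specialized agents, a shared blackboard through which agents exchange intermediate results, and schema-level validation of each output. Names such as Task, CapabilityRegistry, and run_workflow are illustrative placeholders, not AgentX's actual interfaces.

    # Hypothetical sketch of AgentX-style prompt-to-agent orchestration.
    # All names (Task, CapabilityRegistry, run_workflow) are illustrative
    # assumptions, not the platform's actual API.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class Task:
        """One node of the task graph produced by the planner."""
        name: str
        capability: str                 # looked up in the capability registry
        depends_on: List[str] = field(default_factory=list)
        output_schema: Dict[str, type] = field(default_factory=dict)

    # Capability registry: maps capability names to specialized agent callables.
    # Each agent reads its inputs from the shared blackboard and returns a dict.
    CapabilityRegistry = Dict[str, Callable[[dict], dict]]

    def validate(output: dict, schema: Dict[str, type]) -> dict:
        """Schema-level validation of intermediate outputs (keys and types)."""
        for key, expected in schema.items():
            if key not in output or not isinstance(output[key], expected):
                raise ValueError(f"output failed schema check on field '{key}'")
        return output

    def run_workflow(tasks: List[Task], registry: CapabilityRegistry) -> dict:
        """Execute the task graph in dependency order over a shared blackboard."""
        blackboard: dict = {}            # shared workspace all agents read and write
        done: set = set()
        pending = list(tasks)
        while pending:
            ready = [t for t in pending if set(t.depends_on) <= done]
            if not ready:
                raise RuntimeError("task graph has a cycle or unmet dependency")
            for task in ready:
                agent = registry[task.capability]           # assign a specialized agent
                result = validate(agent(blackboard), task.output_schema)
                blackboard[task.name] = result              # publish result to the blackboard
                done.add(task.name)
                pending.remove(task)
        return blackboard

    # Toy usage: a two-step workflow a planner might emit for a prompt like
    # "summarize recent papers on multi-agent systems".
    registry: CapabilityRegistry = {
        "retrieve": lambda bb: {"documents": ["paper A", "paper B"]},
        "summarize": lambda bb: {"summary": " / ".join(bb["retrieve"]["documents"])},
    }
    tasks = [
        Task("retrieve", "retrieve", output_schema={"documents": list}),
        Task("summarize", "summarize", depends_on=["retrieve"],
             output_schema={"summary": str}),
    ]
    print(run_workflow(tasks, registry)["summarize"]["summary"])

In a full implementation the planner would itself be an LLM call that emits the task graph from the user prompt, and the registered agents could be enriched with RAG retrieval or MCP tool access before execution; here they are stubbed as simple callables to keep the sketch self-contained.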
Submission Number: 215