RAAS: Relative Architecture Adaptive Search for Agentic Supernet Optimization

Published: 20 Nov 2025 · Last Modified: 09 Mar 2026 · AAAI 2026 TrustAgent Workshop Poster · CC BY 4.0
Keywords: LLM, Agent, Multi-Agent System, Agentic System
Abstract: Large Language Model (LLM) agentic systems solve complex tasks through coordinated workflows, but designing them is a fragile, labor-intensive process. The **Agentic Supernet** paradigm automates this by optimizing a probabilistic space of architectures. However, its reliance on *absolute* performance scores creates a critical flaw: the learning signal entangles an architecture's intrinsic merit with the extrinsic difficulty of the evaluation query. This entanglement leads to unstable search, where simple queries misleadingly inflate weak designs and difficult queries unfairly suppress strong ones. To resolve this, we introduce **RAAS** (Relative Architecture Adaptive Search), a framework that disentangles architectural quality from problem difficulty. Instead of relying on noisy absolute scores, RAAS evaluates a cohort of candidate architectures head-to-head on the *same query*. By learning from their **relative advantage**, it synthesizes a stable, context-fair learning signal that isolates true architectural superiority. This intra-group, relative assessment provides clear and consistent guidance for the search process. Extensive experiments across six benchmarks show that RAAS not only discovers significantly more performant architectures—improving HumanEval pass@1 from 92.23% to 96.31% and MATH accuracy from 52.08% to 60.87%—but also does so with greater sample efficiency and stability, demonstrating that disentangled, relative evaluation is key to robust agentic architecture search.
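Illustrative sketch: the abstract's core idea is to score a cohort of candidate architectures on the *same* query and learn from their relative standing rather than from absolute scores. The snippet below is a minimal, hypothetical Python sketch of such an intra-group, mean-centered advantage computation; the function names and the scalar `score_fn` interface are assumptions for illustration, not the paper's actual implementation.

```python
import statistics
from typing import Callable, Sequence


def relative_advantages(scores: Sequence[float]) -> list[float]:
    """Turn absolute per-architecture scores on one query into
    group-relative advantages (mean-centered, std-normalized)."""
    mean = statistics.fmean(scores)
    std = statistics.pstdev(scores)
    if std == 0.0:
        # All candidates tied on this query: no architectural signal.
        return [0.0 for _ in scores]
    return [(s - mean) / std for s in scores]


def evaluate_cohort(architectures, query, score_fn: Callable) -> list[float]:
    """Score every candidate architecture head-to-head on the same query,
    then return each one's relative advantage within the cohort."""
    scores = [score_fn(arch, query) for arch in architectures]
    return relative_advantages(scores)


# Example: four candidates on one query; two succeed, two fail.
# The advantage isolates which designs did better *on this query*,
# independent of how hard the query itself was.
print(relative_advantages([1.0, 0.0, 1.0, 0.0]))  # [1.0, -1.0, 1.0, -1.0]
```

Because the advantage is computed within a single query's cohort, an easy query (where every candidate scores high) or a hard one (where every candidate scores low) contributes little or no gradient, which is the disentanglement of architectural merit from query difficulty that the abstract describes.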
Submission Number: 41