AutoMix: Mixing Models with Few-shot Self and Meta Verification

Published: 01 Nov 2023, Last Modified: 12 Dec 2023 · R0-FoMo Poster
Keywords: Few-shot learning, Zero-shot learning, Self-Verification, Decision making, Prompting, LLMs
TL;DR: AutoMix robustly routes queries among language models of varying sizes using a context-driven self-verifier and a POMDP-based meta-verifier, efficiently balancing computational cost and solution accuracy.
Abstract: Large language models (LLMs) are now available in various sizes and configurations from cloud API providers. While this diversity offers a broad spectrum of choices, effectively leveraging the options to optimize computational cost and performance remains challenging. In this work, we present AutoMix, an approach that strategically routes queries to larger LMs based on the approximate correctness of outputs from a smaller LM. Central to AutoMix is a few-shot self-verification mechanism, which estimates the reliability of its own outputs without requiring training. Given that verifications can be noisy, we employ a meta-verifier in AutoMix to refine the accuracy of these assessments. Our experiments using LLAMA2-13B and LLAMA2-70B on five context-grounded reasoning datasets demonstrate that AutoMix surpasses established baselines, improving the incremental benefit per cost by up to 57%.
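The routing idea in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the model calls, the self-verification score, and the threshold-based meta-verifier below are all stand-in stubs (the paper's actual meta-verifier is POMDP-based, and verification is done few-shot with an LLM).

```python
# Sketch of AutoMix-style routing with hypothetical stubs.
# A small model answers first; a noisy self-verification score is passed to a
# meta-verifier that decides whether to escalate to the larger model.

def small_lm(query: str) -> str:
    # Stand-in for the smaller model (e.g., LLAMA2-13B in the paper).
    return f"small-answer({query})"

def large_lm(query: str) -> str:
    # Stand-in for the larger model (e.g., LLAMA2-70B in the paper).
    return f"large-answer({query})"

def self_verify(query: str, answer: str) -> float:
    # Stand-in for few-shot self-verification: an estimated probability
    # that `answer` is correct. Here, a toy heuristic for illustration.
    return 0.9 if "easy" in query else 0.3

def meta_verify(score: float, threshold: float = 0.5) -> bool:
    # Stand-in for the meta-verifier; the paper uses a POMDP, but here it
    # is reduced to a simple threshold on the noisy verification score.
    return score >= threshold

def automix(query: str) -> str:
    answer = small_lm(query)
    if meta_verify(self_verify(query, answer)):
        return answer       # accept the small model's output
    return large_lm(query)  # escalate to the larger model

print(automix("easy question"))  # accepted small-model answer
print(automix("hard question"))  # escalated to the large model
```

The key design point is that escalation cost is only paid when verification fails, which is where the reported gain in incremental benefit per cost comes from.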
Submission Number: 118