Keywords: Knowledge Mining, Large Language Models, Agents
Abstract: At the core of Deep Research is knowledge mining, the task of extracting structured information from massive unstructured text in response to user instructions.
Large language models (LLMs) excel at interpreting such instructions but are prohibitively expensive to deploy at scale, while traditional pipelines of classifiers and extractors remain efficient yet brittle and unable to generalize to new tasks.
We introduce Falconer, a collaborative framework that combines the agentic reasoning of LLMs with lightweight proxy models for scalable knowledge mining.
In Falconer, LLMs act _as planners_, decomposing user instructions into executable pipelines, and _as annotators_, generating supervision to train a compact proxy.
The framework unifies classification and extraction into two atomic operations, _get\_label_ and _get\_span_ (sketched below), enabling a single instruction-following model to replace multiple task-specific components.
To evaluate how closely the proxy models incubated by Falconer agree with annotations provided by humans and large models, we construct new benchmarks covering both planning and end-to-end execution.
Experiments show that Falconer closely matches state-of-the-art LLMs in instruction-following accuracy while reducing inference cost by up to 90% and accelerating large-scale knowledge mining by more than 20x, offering an efficient and scalable foundation for Deep Research.
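The two atomic operations admit a very small interface. Below is a minimal sketch in Python of what such an interface could look like; it assumes a generic `model.generate` text-completion call, and all names and signatures are hypothetical illustrations, not Falconer's published API:

```python
# Hypothetical sketch of the two atomic operations from the abstract.
# get_label frames classification as instruction following: pick one of a
# fixed set of candidate labels. get_span frames extraction the same way:
# return verbatim substrings of the input. A single instruction-following
# proxy model could serve both calls.

from typing import List


def get_label(model, instruction: str, text: str, labels: List[str]) -> str:
    """Classify `text` under `instruction` into one of `labels`."""
    prompt = (
        f"{instruction}\n\nText: {text}\n"
        f"Answer with exactly one of: {', '.join(labels)}."
    )
    answer = model.generate(prompt).strip()
    # Fall back to the first label if the model answers off-vocabulary.
    return answer if answer in labels else labels[0]


def get_span(model, instruction: str, text: str) -> List[str]:
    """Extract the spans of `text` that satisfy `instruction`."""
    prompt = (
        f"{instruction}\n\nText: {text}\n"
        f"List each matching span on its own line, copied verbatim."
    )
    candidates = model.generate(prompt).splitlines()
    # Keep only candidates that actually occur verbatim in the source text.
    return [s.strip() for s in candidates if s.strip() and s.strip() in text]
```

Under this framing, a planner would compile a user instruction into a pipeline of such calls, so one compact instruction-following model stands in for many task-specific classifiers and extractors.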
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 9738