Abstract: Hallucination is one of the main challenges in AI foundation model pretraining, as well as in fine-tuning for transfer learning. In this paper, we examine how orchestrating multiple specialized agents can reduce such
hallucinations, with an emphasis on systems that employ NLP (Natural Language Processing) to coordinate
agent interactions. We test a pipeline that introduces 310 prompts, specifically engineered
to induce hallucinations, into a front-end agent. This agent’s output is then reviewed and refined by second- and
third-level agents, each of which employs a different large language model and strategy to flag unverified
claims, provide explicit disclaimers, and clarify any speculative elements. Key Performance Indicators (KPIs)
are collected to measure hallucination-related behaviors, with evaluations performed by a fourth-level agent.
Our findings demonstrate the feasibility of multi-agent orchestration for hallucination mitigation and highlight
the value of maintaining a structured exchange of meta-information.
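To make the described orchestration concrete, the following is a minimal sketch of the agent flow: a front-end agent produces a draft, second- and third-level reviewer agents flag unverified claims and attach disclaimers while passing along structured meta-information, and a fourth-level agent scores hallucination-related KPIs. All function names, review strategies, and metric names below are hypothetical placeholders for illustration, not the paper's actual implementation or prompts.

```python
# Illustrative sketch only; real agents would call different LLMs at each level.
from dataclasses import dataclass, field
from typing import List


@dataclass
class AgentOutput:
    text: str
    meta: List[str] = field(default_factory=list)  # structured meta-information exchanged between agents


def front_end_agent(prompt: str) -> AgentOutput:
    # Placeholder for the first-level model call whose output may contain hallucinations.
    return AgentOutput(text=f"draft answer to: {prompt}")


def reviewer_agent(output: AgentOutput, strategy: str) -> AgentOutput:
    # Second- and third-level agents flag unverified claims, add explicit
    # disclaimers, and mark speculative elements, recording what they did.
    output.meta.append(f"{strategy}: flagged unverified claims, added disclaimers")
    output.text += f"\n[{strategy} disclaimer: parts of this answer are unverified]"
    return output


def evaluator_agent(output: AgentOutput) -> dict:
    # Fourth-level agent collects hallucination-related KPIs (hypothetical metrics).
    return {
        "disclaimers_present": "disclaimer" in output.text.lower(),
        "meta_notes": len(output.meta),
    }


def run_pipeline(prompt: str) -> dict:
    draft = front_end_agent(prompt)
    reviewed = reviewer_agent(draft, strategy="second-level-review")
    refined = reviewer_agent(reviewed, strategy="third-level-review")
    return evaluator_agent(refined)


if __name__ == "__main__":
    print(run_pipeline("Describe a study that was never published."))
```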