MAJI: A Multi-Agent Workflow for Augmenting Journalistic Interviews

ACL ARR 2025 July Submission 529 Authors

28 Jul 2025 (modified: 29 Aug 2025) · License: CC BY 4.0
Abstract: Journalistic interviews are creative, dynamic processes whose success hinges on insightful, real-time questioning. While Large Language Models (LLMs) can assist, their tendency to generate coherent but uninspired questions optimizes for probable, not insightful, continuations. This paper investigates whether a structured, multi-agent approach can overcome this limitation and act as a more effective creative partner for journalists. We introduce MAJI, a system designed for this purpose, which employs a divergent-convergent architecture: a committee of specialized agents generates a diverse set of questions, and a convergent agent selects the optimal one. We evaluated MAJI against a suite of strong LLM baselines. Our results demonstrate that our multi-agent framework produces questions that are more coherent, elaborate, and original (+36.9\% for our best model vs. a standard LLM baseline), exceeding strong LLM baselines on key measures of creative question quality. Most critically, in a blind survey, professional journalists preferred MAJI's selected questions over those from the baseline by a margin of more than two to one. We present the system's evolution, highlighting the architectural trade-offs that enable MAJI to augment, rather than simply automate, journalistic inquiry. We will release the code upon publication.
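To make the divergent-convergent architecture described in the abstract concrete, the following is a minimal sketch of such a pipeline. All names (QuestionAgent, divergent_stage, convergent_stage, the stand-in agents and scorer) are hypothetical illustrations, not the paper's released implementation; in MAJI the agents and selector would be LLM calls with specialized prompts.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical agent type: maps interview context to a candidate question.
QuestionAgent = Callable[[str], str]

@dataclass
class Candidate:
    agent_name: str
    question: str
    score: float = 0.0

def divergent_stage(context: str, agents: Dict[str, QuestionAgent]) -> List[Candidate]:
    """Divergent phase: each specialized agent proposes one candidate question."""
    return [Candidate(name, agent(context)) for name, agent in agents.items()]

def convergent_stage(candidates: List[Candidate],
                     scorer: Callable[[str], float]) -> Candidate:
    """Convergent phase: score every candidate and return the best one."""
    for c in candidates:
        c.score = scorer(c.question)
    return max(candidates, key=lambda c: c.score)

if __name__ == "__main__":
    # Stand-in agents; real agents would be LLMs prompted for distinct strategies.
    agents: Dict[str, QuestionAgent] = {
        "follow_up": lambda ctx: f"Earlier you mentioned {ctx}; what changed your view?",
        "challenge": lambda ctx: f"What is the strongest objection to {ctx}?",
        "personal": lambda ctx: f"How did {ctx} affect you personally?",
    }
    # Stand-in scorer; the paper's convergent agent would itself be an LLM judge.
    scorer = lambda q: len(set(q.split()))  # crude lexical-diversity proxy
    best = convergent_stage(divergent_stage("the new policy", agents), scorer)
    print(best.agent_name, "->", best.question)
```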
Paper Type: Long
Research Area: Dialogue and Interactive Systems
Research Area Keywords: task-oriented, grounded dialog, dialogue state tracking, conversational modeling
Contribution Types: NLP engineering experiment
Languages Studied: English, Chinese
Submission Number: 529