Keywords: Novelty Generation; Monte Carlo Tree Search; Scientific Idea Generation
Abstract: Large Language Models (LLMs) often struggle with generating truly innovative ideas, typically defaulting to high-probability, familiar
concepts within their training data's "gravity wells." While advanced search-based methods like Tree of Thoughts (ToT) attempt to
mitigate this, they are fundamentally limited by their reliance on unprincipled, inconsistent self-evaluation heuristics to guide
exploration. To address this gap, we introduce \textbf{Magellan}, a novel framework that reframes creative generation as a principled,
guided exploration of an LLM's latent conceptual space. At its core, Magellan employs Monte Carlo Tree Search (MCTS) governed by a
hierarchical guidance system. For long-range direction, a "semantic compass" vector, formulated via orthogonal projection, steers the
search towards relevant novelty. For local, step-by-step decisions, a landscape-aware value function replaces flawed self-evaluation with
an explicit reward structure that balances intrinsic coherence, extrinsic novelty, and narrative progress. Extensive experiments
demonstrate that Magellan significantly outperforms strong baselines, including ReAct and ToT, in generating scientific ideas with
superior plausibility and innovation. Our work shows that for creative discovery, a principled, guided search is more effective than
unconstrained agency, paving the way for LLMs to become more capable partners in innovation.
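The abstract does not give the exact formulas behind the hierarchical guidance, so the following is only a minimal sketch under stated assumptions: the "semantic compass" is read as the component of a goal embedding orthogonal to the subspace spanned by familiar-concept embeddings, and the landscape-aware value is read as a weighted blend of the three named signals. Function names, weights, and the embedding setup are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def semantic_compass(goal_vec: np.ndarray, anchor_vecs: np.ndarray) -> np.ndarray:
    """Hypothetical 'semantic compass': the component of the goal embedding
    orthogonal to the span of familiar anchor-concept embeddings, so the
    long-range direction points away from well-trodden regions."""
    # Orthonormal basis of the anchor subspace via economy-size QR.
    q, _ = np.linalg.qr(anchor_vecs.T)
    projection = q @ (q.T @ goal_vec)   # projection onto the anchor span
    compass = goal_vec - projection     # orthogonal complement
    norm = np.linalg.norm(compass)
    return compass / norm if norm > 0 else compass

def node_value(coherence: float, novelty: float, progress: float,
               weights: tuple[float, float, float] = (0.4, 0.4, 0.2)) -> float:
    """Hypothetical landscape-aware value for an MCTS node: an explicit
    weighted reward over intrinsic coherence, extrinsic novelty, and
    narrative progress, standing in for LLM self-evaluation."""
    w_c, w_n, w_p = weights
    return w_c * coherence + w_n * novelty + w_p * progress
```

In this reading, the compass vector sets the global search direction once per query, while the scalar node value guides local expansion and backpropagation inside the MCTS loop; the actual weighting and scoring functions would be those defined in the paper.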
Supplementary Material: zip
Submission Number: 315