PARALLELPROMPT: Extracting Parallelism from Large Language Model Queries

Published: 18 Sept 2025, Last Modified: 30 Oct 2025
Venue: NeurIPS 2025 Datasets and Benchmarks Track (poster)
License: CC BY 4.0
Keywords: intra-query parallelism, large language models, prompt decomposition, LLM serving, efficiency, benchmark, dataset, structured prompting
TL;DR: We introduce ParallelPrompt, the first benchmark and dataset for studying intra-query parallelism in real-world LLM queries, enabling reproducible evaluation of structured execution strategies.
Abstract: LLM serving systems typically treat user prompts as monolithic inputs, optimizing inference through decoding tricks or inter-query batching. However, many real-world prompts contain *latent semantic parallelism*—decomposable structures where subtasks can be executed independently to reduce latency while preserving meaning. We introduce PARALLELPROMPT, the first benchmark for measuring intra-query parallelism in natural user prompts. Our dataset comprises over 37,000 real-world prompts from public LLM chat logs, each annotated with a structured schema capturing task templates, shared context, and iteration inputs. These schemas are extracted using LLM-assisted prompting with rule-based multilingual validation. To evaluate the benefits of decomposition, we provide an execution suite that benchmarks serial vs. parallel strategies, measuring latency, structural adherence, and semantic fidelity. Our results show that intra-query parallelism can be successfully parsed in over 75% of curated datasets, unlocking up to *$5\times$ speedups* on tasks like translation, comprehension, and comparative analysis, with minimal quality degradation. By releasing this benchmark, curation pipeline, and evaluation suite, we provide the first standardized testbed for studying structure-aware execution in LLM serving pipelines.
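To make the schema and the serial-vs.-parallel comparison concrete, here is a minimal Python sketch. It is not the authors' actual format: the field names (`task_template`, `shared_context`, `iteration_inputs`) and the `call_llm` callable are assumptions inferred from the abstract.

```python
import asyncio
from dataclasses import dataclass

@dataclass
class ParallelSchema:
    # Hypothetical rendering of the structured schema from the abstract:
    # a task template with a placeholder, context shared by all subtasks,
    # and independent per-subtask inputs.
    task_template: str            # e.g. "Translate to French: {item}"
    shared_context: str           # prepended to every subtask prompt
    iteration_inputs: list[str]   # independent inputs that can run concurrently

    def subprompts(self) -> list[str]:
        return [
            f"{self.shared_context}\n{self.task_template.format(item=x)}"
            for x in self.iteration_inputs
        ]

async def run_serial(schema: ParallelSchema, call_llm) -> list[str]:
    # Baseline: execute subtasks one after another; latency is the sum
    # of all individual call latencies.
    return [await call_llm(p) for p in schema.subprompts()]

async def run_parallel(schema: ParallelSchema, call_llm) -> list[str]:
    # Issue one LLM call per subtask concurrently; latency approaches that
    # of the slowest single call, which is where speedups of the kind the
    # abstract reports (up to 5x) come from.
    return await asyncio.gather(*(call_llm(p) for p in schema.subprompts()))
```

`call_llm` stands in for any async client call (OpenAI, vLLM, etc.); the benchmark's own execution suite additionally scores structural adherence and semantic fidelity, which this sketch omits.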
Croissant File: JSON (dataset metadata)
Dataset URL: https://huggingface.co/datasets/forgelab/ParallelPrompt
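The dataset should be loadable directly with the Hugging Face `datasets` library; a minimal sketch, with the repository path taken from the URL above (split names and record fields are not confirmed by this page):

```python
from datasets import load_dataset

# Pull ParallelPrompt from the Hugging Face Hub.
ds = load_dataset("forgelab/ParallelPrompt")

print(ds)                    # available splits and their columns
first_split = next(iter(ds.values()))
print(first_split[0])        # one annotated prompt record
```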
Code URL: https://github.com/stevenkolawole/ParallelPrompt
Primary Area: Datasets & Benchmarks illustrating Different Deep learning Scenarios (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 1440