Keywords: structured deliberation, online moderation, collective intelligence, fine-tuned LLMs, argument mining, multi-turn dialogue, fairness and evaluation
Abstract: Online deliberation platforms promise scalable collective intelligence, yet their free-form threads are difficult to navigate, summarize, and moderate. We argue that progress requires treating structured deliberation as a formal natural language processing (NLP) problem with civic significance: reliably mapping raw discussions into a deliberation-native schema so that key barriers, solutions, metrics, and stances are visible at scale. We introduce CIVICPARSE, a two-stage pipeline that operationalizes this problem as extraction and classification over a domain-grounded schema. Stage 1 extracts distinct points from threads; Stage 2 assigns Barrier, Solution, or Metric types together with Pro/Con roles. Trained on 840 curated Deliberatorium examples, CIVICPARSE attains 88.5% accuracy with strong precision (91.1%) and recall (96.5%), substantially outperforming identical prompt-only baselines. Beyond the gains from fine-tuning, we contribute a reproducible extractor–classifier design, a curated dataset, and an evaluation protocol that together cast structured deliberation as a benchmarkable task for AI-assisted civic decision making.
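The two-stage design described in the abstract can be sketched in miniature. The following Python stand-in is an illustrative assumption, not the paper's implementation: the real Stage 1 and Stage 2 use fine-tuned LLMs, whereas this toy version substitutes sentence splitting and keyword rules (`extract_points`, `classify_point`, and the `Point` dataclass are hypothetical names) purely to show the schema's shape: extracted points labeled with a Barrier/Solution/Metric type and a Pro/Con role.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Schema from the abstract: each point gets a type and a stance role.
TYPES = ("Barrier", "Solution", "Metric")
ROLES = ("Pro", "Con")

@dataclass
class Point:
    text: str
    point_type: str  # one of TYPES
    role: str        # one of ROLES

def extract_points(thread: str) -> List[str]:
    """Stage 1 stand-in: split a raw thread into candidate points.
    (The paper's pipeline uses a fine-tuned LLM; this is a toy splitter.)"""
    return [s.strip() for s in thread.replace("!", ".").split(".") if s.strip()]

def classify_point(point: str) -> Tuple[str, str]:
    """Stage 2 stand-in: assign a type and stance with toy keyword rules."""
    lower = point.lower()
    if any(k in lower for k in ("measure", "metric", "rate")):
        ptype = "Metric"
    elif any(k in lower for k in ("propose", "could", "should")):
        ptype = "Solution"
    else:
        ptype = "Barrier"
    role = "Con" if any(k in lower for k in ("not", "fail", "against")) else "Pro"
    return ptype, role

def civicparse(thread: str) -> List[Point]:
    """Run the two stages end to end over one thread."""
    return [Point(p, *classify_point(p)) for p in extract_points(thread)]
```

Under this sketch, `civicparse("We should add topic maps. Turnout rate would measure success.")` yields one Solution point and one Metric point, each with a Pro role; the actual system replaces both stages with trained models evaluated against the protocol the paper contributes.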
Submission Number: 4