BiasLab: Toward Explainable Political Bias Detection with Dual-Axis Human Annotations and Rationale Indicators
Keywords: political bias detection, explainable AI, interpretable NLP, dataset annotation, dual-axis sentiment, rationale annotations, human annotation, large language models, perception, crowdsourcing
TL;DR: A dataset of political news articles annotated with dual-axis bias labels and rationale explanations to support human-AI alignment and explainable bias detection models.
Abstract: We present BiasLab, a dataset of 300 political news articles annotated for perceived ideological bias. The articles were selected from a curated 900-document pool covering diverse political events and outlet biases. Each article is labeled by crowdworkers along two independent scales assessing perceived sentiment toward the Democratic and Republican parties, and is enriched with rationale indicators adapted from media literacy guidelines. The annotation pipeline incorporates targeted worker qualification and was refined through pilot-phase analysis. We quantify inter-annotator agreement, analyze misalignment with source-level outlet bias, and organize the resulting labels into interpretable subsets. We complement the human annotations with schema-constrained large language model (LLM) labeling to compare human and machine interpretations and to support their alignment. The LLM annotations exhibit asymmetries that mirror those of the human annotators, most notably in misclassifying subtly right-leaning content.
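For illustration, one possible representation of a single annotated record is sketched below; the field names, value ranges, and rationale categories are assumptions made here for clarity, not the released BiasLab schema.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical record layout for a dual-axis BiasLab annotation.
# Field names and label ranges are illustrative; consult the released schema.
@dataclass
class BiasLabRecord:
    article_id: str
    outlet: str                 # source outlet from the curated document pool
    text: str
    dem_sentiment: int          # perceived sentiment toward the Democratic Party, e.g. -2..+2
    rep_sentiment: int          # perceived sentiment toward the Republican Party, e.g. -2..+2
    rationale_types: List[str] = field(default_factory=list)  # e.g. ["loaded_language"]
    annotator_id: str = ""

example = BiasLabRecord(
    article_id="a0001",
    outlet="example-news",
    text="Full article text ...",
    dem_sentiment=1,
    rep_sentiment=-1,
    rationale_types=["loaded_language"],
    annotator_id="w_042",
)
```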
We define two modeling tasks, perception drift prediction and rationale type classification, and report baseline performance to illustrate the challenge of explainable bias detection. Our analysis reveals notable disagreement patterns and perception drift, underscoring the subjectivity inherent in bias perception. We release the dataset, annotation schema, and baseline code to enable feedback-driven alignment research on political bias perception using structured human labels and rationales.
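The abstract does not specify the baseline models; as one plausible starting point for the rationale type classification task, a minimal scikit-learn sketch (toy data and category names are placeholders, not BiasLab content) might look like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report

# Toy stand-in examples; in practice these would be drawn from the BiasLab release.
train_texts = ["The senator's reckless plan drew fury ...", "Officials announced the budget on Tuesday ..."]
train_labels = ["loaded_language", "selective_framing"]   # hypothetical rationale categories
test_texts = ["Critics slammed the governor's scheme ..."]
test_labels = ["loaded_language"]

# TF-IDF features with logistic regression as a lightweight text-classification baseline.
baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
baseline.fit(train_texts, train_labels)
print(classification_report(test_labels, baseline.predict(test_texts), zero_division=0))
```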
Code & data: https://github.com/ksolaiman/PoliticalBiasCorpus
Submission Number: 15