Do LLMs Plan Like Human Writers? Comparing Journalist Coverage of Press Releases with LLMs

ACL ARR 2024 June Submission 3613 Authors

16 Jun 2024 (modified: 02 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Journalists engage in multiple steps of the news writing process that depend on human creativity, like exploring different "angles" (i.e., story directions). These steps can potentially be aided by large language models (LLMs). By affecting planning decisions, such interventions can have an outsized impact on creative output. We advocate a careful approach to evaluating these interventions, to ensure alignment with human values, by comparing LLM decisions to previous human decisions. In a case study of journalistic coverage of press releases, we assemble a large dataset of 250k press releases and 650k human-written articles covering them. We develop methods to identify news articles that challenge and contextualize press releases. Finally, we evaluate suggestions made by LLMs for these articles and compare them with decisions made by human journalists.
Paper Type: Long
Research Area: Human-Centered NLP
Research Area Keywords: quantitative analyses of news and/or social media
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Publicly available software and/or pre-trained models, Data resources, Data analysis
Languages Studied: English
Submission Number: 3613