SAI What?! Ping the Bots Before You Probe the People: Testing Large Language Models for Pre-Cognitive Interviewing in Survey Research
Keywords: Cognitive Interviewing, Large Language Models, Persona Prompting, Questionnaire Design, Survey Evaluation and Development
TL;DR: This study systematically investigates whether LLMs can surface the kinds of issues typically identified through cognitive interviewing before human participants are involved.
Submission Type: Non-Archival
Abstract: With the rapid evolution and availability of large language models (LLMs) such as GPT-4 (OpenAI, 2023), Claude, and Gemini, there is growing interest in integrating these tools into the survey research process. Emerging work has focused on tasks such as developing questionnaires on specific topics (Padgett et al., 2024), assessing the readability of survey items (Olson and Buskirk, 2025), revising existing scales (Yun et al., 2023), and classifying or summarizing open-ended responses (Gilardi, Gessler, & Kubli, 2023; Lerner et al., 2024). This work extends these applications to the earliest stage of survey instrument development: using LLMs to simulate cognitive interviews before engaging human respondents. In essence, LLMs would be used to conduct what we are calling a “pre-cognitive interview,” in which AI is used to evaluate how questions might be interpreted before human field testing. This AI-assisted step could offer early insight into the comprehension, recall, judgment, and response challenges that may arise among human participants (Willis, 2005), and could then inform the drafting of more targeted (human) cognitive interviewing scripts and protocols.
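To make the idea concrete, below is a minimal sketch of how such a pre-cognitive interview might be run, assuming the OpenAI chat completions API; the model name, persona, survey item, and probe wording are all illustrative assumptions, not the authors' protocol. The probes mirror the four response stages from Willis (2005): comprehension, recall, judgment, and response mapping.

# Hypothetical sketch of a persona-prompted "pre-cognitive interview"
# of a draft survey item. All specifics here are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A draft survey item and a simulated respondent persona (both assumed).
survey_item = (
    "In the past 12 months, how often did you use telehealth services "
    "for a routine checkup? (Never / Rarely / Sometimes / Often)"
)
persona = (
    "You are a 68-year-old retired farmer with limited internet access "
    "who rarely visits a doctor. Answer as this person would."
)

# Probes aligned with the cognitive response stages (Willis, 2005).
probes = [
    "In your own words, what is this question asking?",          # comprehension
    "How would you go about recalling the information needed?",  # recall
    "How did you decide which answer option fits you best?",     # judgment
    "Was any word or answer option confusing? If so, which?",    # response mapping
]

for probe in probes:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any chat-capable LLM could stand in
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": f"Survey question: {survey_item}\n\n{probe}"},
        ],
    )
    print(f"PROBE: {probe}\n{response.choices[0].message.content}\n")

A researcher might run this loop across several personas and inspect the transcripts for recurring misreadings before writing the human cognitive interviewing protocol.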
Submission Number: 27