THE UPSTREAM BLIND SPOT: PEGASUS AND SURVEILLANCE INFRASTRUCTURES AS DUAL-USE AI RISK

Published: 01 Mar 2026, Last Modified: 01 Mar 2026 · AI4Peace · CC BY 4.0
Track: tiny / short paper (up to 4 pages)
Keywords: AI dual-use risk, upstream surveillance infrastructure, Pegasus spyware, zero-click exploitation, commercial spyware, intelligence infrastructures, AI-enabled intelligence, data extraction systems, surveillance governance, privatized surveillance markets, mercenary spyware, algorithmic accountability, security and military AI, epistemic asymmetry, intelligence data pipelines
TL;DR: AI dual-use risk in security contexts begins upstream of models, as surveillance infrastructures like Pegasus automate covert data extraction that pre-conditions the feasibility, scale, and governance of downstream AI-mediated intelligence.
Abstract: Assessments of artificial intelligence (AI) dual-use risk in security and military contexts often focus on downstream algorithms while overlooking the infrastructural conditions that enable their deployment. This paper argues that upstream surveillance infrastructures constitute a critical but underexamined locus of AI dual-use risk. Through an analysis of zero-click surveillance and a case study of Pegasus spyware, we show how automated, covert data extraction systems enable persistent, large-scale intelligence collection and condition the data environments upon which downstream analytic processes depend. We further examine how privatized surveillance markets diffuse responsibility across public and private actors, constraining oversight and accountability. The paper advances an infrastructural perspective on AI dual-use that foregrounds surveillance architectures alongside algorithmic systems.
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 9