Keywords: Human rights, India, Digital governance, AI ethics, AI safety, Facial recognition technology
TL;DR: With India as a case study, we call for cohesive action, arguing that current AI regulatory frameworks cannot prevent governments from misusing AI to target their critics and minorities, especially in democracies with weak institutional safeguards.
Abstract: With greater data consolidation capabilities, artificial intelligence synergizes with extensive dataveillance to give malicious actors in power, including private and state institutions, better tools to achieve objectives harmful to the broader population. With an urban population of $\sim$500 million out of a total $\sim$1.45 billion, and an emerging base of around 750-800 million active internet users, India provides a compelling case study for analyzing AI misuse. As a non-western democracy, it also allows us to critique existing and proposed frameworks for regulating state use of surveillance and AI technology in the name of public safety.
We argue that the recent deterioration of human rights protections in India, together with state and military deployment of facial recognition in policing, points to a larger threat: the impending expansion of harmful uses of AI systems. We observe that frameworks proposed to risk-assess and regulate such systems fall short in this case. Combined with the weaponization of AI to sway public opinion in favor of the government, the situation sets the stage for dangerous developments in the near future. Based on our observations, we call for a re-evaluation of global regulatory frameworks and for extending this reasoning to other nations whose systems are vulnerable to AI-driven misuse by harmful state actors.
Submission Number: 17