TL;DR: Achieving meaningful scientific and societal advances with artificial intelligence (AI) requires a responsible, application-driven approach (RAD) to AI research.
Abstract: This position paper argues that achieving meaningful scientific and societal advances with artificial intelligence (AI) requires a responsible, application-driven approach (RAD) to AI research. As AI is increasingly integrated into society, AI researchers must engage with the specific contexts where AI is being applied. This includes being responsive to ethical and legal considerations, technical and societal constraints, and public discourse. We present the case for RAD-AI to drive research through a three-stage approach: (1) building transdisciplinary teams and people-centred studies; (2) addressing context-specific methods, ethical commitments, assumptions, and metrics; and (3) testing and sustaining efficacy through staged testbeds and a community of practice. We present a vision for the future of application-driven AI research to unlock new value through technically feasible methods that are adaptive to the contextual needs and values of the communities they ultimately serve.
Lay Summary: AI is increasingly used across society in sectors such as healthcare, agriculture, and public broadcasting—but too often, it’s developed in isolation from the real-world contexts it is ultimately used in. This disconnect can lead to ineffective or even harmful outcomes. We propose an approach called Responsible, Application-Driven AI (RAD-AI). It combines deep technical expertise with context-specific ethical, legal, and societal considerations from the start. RAD-AI follows a three-step process: build diverse, transdisciplinary teams and conduct people-centred studies; navigate complexity in context-specific methods, ethical commitments, assumptions, and metrics; and test and sustain efficacy in real-world environments.
RAD-AI helps AI researchers break out of the “black box” by designing systems that are not only technically sound but also socially responsible. This approach builds a civically engaged research community, enhances public confidence in AI, and ensures AI solutions are aligned with the values and needs of the people they serve. It’s a roadmap for making AI research more adaptive and responsive to contextual needs and societal values, and it’s totally RAD!
Primary Area: Research Priorities, Methodology, and Evaluation
Keywords: Artificial Intelligence, Human-AI Collaboration, Responsible AI, Application-driven Research, Social Impacts, AI Safety, Digital Trust
Submission Number: 424