Keywords: Embodied AI, policy, evaluations, benchmarks, liability, autonomous vehicles, robots, privacy, jailbreaking
TL;DR: Machine learning researchers must urgently work with policymakers to address growing risks from embodied AI by plugging gaps in existing frameworks.
Abstract: The field of embodied AI (EAI) is rapidly advancing. Unlike virtual AI, EAI systems can exist in, learn from, reason about, and act in the physical world. Driven by recent advances in AI research and hardware design, EAI systems are becoming increasingly capable across an expanding set of operational domains. While EAI systems can offer many benefits, they also pose significant short- and long-term risks, including physical harm, surveillance, and societal disruption. These risks require urgent attention from policymakers, as existing policies for industrial robots and autonomous vehicles are insufficient to manage the full range of concerns EAI systems present. To address this issue, this paper makes three contributions. First, we provide a taxonomy of the physical, informational, economic, and social risks EAI systems pose. Second, we analyze policies in the US, UK, and EU to assess how existing frameworks address these risks and to identify critical gaps. Third, we offer policy recommendations for the safe and beneficial deployment of EAI systems, such as mandatory testing and certification schemes, clarified liability frameworks, and strategies to manage EAI's potentially transformative economic and societal impacts.
Lay Summary: Robots and AI systems that can move and act in the real world, like delivery drones and self-driving cars, are becoming more advanced and widespread. We call these "embodied AI" systems because they have bodies in the physical world, unlike AI that only exists on computers. These embodied AI systems can physically interact with people and the environment. While they promise many benefits, like providing companionship to lonely elderly people or helping out around the house with time-consuming chores, they also create new risks: they could physically harm people, be used for surveillance, disrupt job markets, or cause other societal problems. Current laws for industrial robots and self-driving cars don't cover all these concerns. To address this gap, we first created a detailed list of the risks these systems pose, from physical dangers to economic disruption. With these categories in mind, we analyzed how existing policies in the United States, United Kingdom, and European Union address these risks and identified the areas that could most benefit from new laws or approaches. Based on our analysis, we recommend specific policy solutions: for example, requiring embodied AI systems to pass safety tests before deployment, clarifying who is legally responsible when something goes wrong, and developing strategies to manage the economic and social changes these systems will bring. These policies can help ensure embodied AI benefits society in the short and long term while minimizing harm.
Submission Number: 654