Embodied AI: Emerging Risks and Opportunities for Policy Action
By: Jared Perlo, Alexander Robey, Fazl Barez, and more
Potential Business Impact:
Helps robots act safely in the real world.
The field of embodied AI (EAI) is rapidly advancing. Unlike virtual AI, EAI systems can exist in, learn from, reason about, and act in the physical world. With recent advances in AI models and hardware, EAI systems are becoming increasingly capable across wider operational domains. While EAI systems can offer many benefits, they also pose significant risks, including physical harm from malicious use, mass surveillance, and economic and societal disruption. These risks require urgent attention from policymakers, as existing policies governing industrial robots and autonomous vehicles are insufficient to address the full range of concerns EAI systems present. To help address this issue, this paper makes three contributions. First, we provide a taxonomy of the physical, informational, economic, and social risks EAI systems pose. Second, we analyze policies in the US, EU, and UK to assess how existing frameworks address these risks and to identify critical gaps. Third, we offer policy recommendations for the safe and beneficial deployment of EAI systems, including mandatory testing and certification schemes, clarified liability frameworks, and strategies to manage EAI's potentially transformative economic and societal impacts.
Similar Papers
Embodied Intelligence: The Key to Unblocking Generalized Artificial Intelligence
Artificial Intelligence
Makes robots learn and act like humans.
Multi-agent Embodied AI: Advances and Future Directions
Artificial Intelligence
Robots learn to work together in the real world.
Expert Assessment: The Systemic Environmental Risks of Artificial Intelligence
Computers and Society
AI harms nature in hidden, big ways.