Ethical Issues for Robotics and Autonomous Systems

Robot ethics – tough questions in system design

By Chris Middleton

In recent years we’ve heard a great deal about the ethical challenges associated with AI – in particular, the risk of it automating systemic bias via flawed training data – but we’ve heard far less about the ethical challenges of robotics and autonomous systems (RAS) themselves, beyond their apparent threat to employment. This is something that the UK-RAS Network, the robotics research group of the Engineering and Physical Sciences Research Council (EPSRC), is seeking to put right with the publication of a new white paper.

One impetus behind the paper, Ethical Issues for Robotics and Autonomous Systems, is the principle that engineers should hold paramount the health and safety of others, draw attention to hazards, and ensure that their work is both lawful and justified. Launching the white paper at UK Robotics Week in London, co-author John McDermid, Professor of Software Engineering at the University of York, described it as “pragmatic guidance to help designers and operators”, not just informing them of the dangers, but also reminding them of principles they should already hold dear.

Wise words. To their credit, the paper’s authors acknowledge that many of the ethical questions surrounding robots have no definitive answer as yet. But one in particular demands our attention: whether robots should themselves be regarded as moral machines or moral agents, with responsibility delegated to them directly rather than to their human designers or minders.

This is a real ethical minefield, given that – despite UN declarations – there is no universal definition of good and bad behavior: rights for women, children, atheists, LGBTQ people, dissidents, journalists, immigrants, asylum seekers, and ethnic minorities are viewed very differently from one part of the world to another. Indeed, arguments still rage about them in our own societies.