Legal responsibility for a robot’s actions

On Tuesday night I attended the launch of the Strathclyde Centre for Internet Law and Policy. The launch of the centre is in tandem with Strathclyde University’s rebranding of its renowned LLM in Information Technology Law and Telecoms (which yours truly completed in 2003), which is now known as the LLM in Internet Law and Policy.

Marking the launch was a lecture on “Regulating Robots: Re-Writing Asimov’s Three Laws in the Real World?” by Professor Alan Winfield, Director of the University of the West of England Science Communication Unit and EPSRC Senior Media Fellow, and Professor Lilian Edwards, Professor of E-Governance at Strathclyde University.

The lecture sought to address legal responsibility for a robot’s actions, and whether, given the rapid advances in robotics, we need to legislate for Asimov’s Three Laws:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

I found the topic particularly interesting because I had just read an article called “Towards new recognition of liability in the digital world: should we be more creative?” in the International Journal of Law and Information Technology, which discussed the attribution of liability for “intelligent software”. I felt that the article raised a lot of interesting issues, but its conclusion – that we need some collective form of liability taking into account the role every party plays in producing the liability in question – was perhaps impractical.

I was therefore hoping that Professor Winfield and Professor Edwards might reach a different conclusion, and they didn’t disappoint.

It’s impossible to neatly summarise an hour-long lecture, but I think they were proposing that liability for robots should arguably mirror liability for software. This would mean that the party best placed to manage risk assumes it (and insures against it), and if a robot is subsequently hacked and causes damage, then the hacker should probably be liable for any damage caused.

As for Asimov’s three laws, the Professors acknowledged that the laws were instructive, but proposed that they should be replaced with a new five-part ethical code.

Alan Winfield was very effective at making everybody in the room think differently about “robots”. I appreciate that you have probably read to this point and found the confident way I’m talking about “robots” a bit silly. Well, it turns out that robots are already all around us! Alan pointed out, quite rightly, that nobody speaks about the “dishwasher robot” – it’s just the dishwasher! (Disappointingly, the montage of sci-fi robots in Alan’s introductory PowerPoint slide didn’t include Optimus Prime, but since Alan bears more than a passing resemblance to the stately Patrick Stewart, I lacked the courage to complain!) The serious point here is that as society increases its use of (and reliance upon) robots, liability for their actions is something that lawyers will increasingly need to consider.

Overall it was a very enjoyable and thought-provoking lecture, and I look forward to hearing more from these speakers on this subject in the future.

1 Response to “Legal responsibility for a robot’s actions”

  1. martinsloan, November 17, 2011 at 11:57 am

    Interesting stuff. I wonder if Nova Laboratories tried to disclaim liability for the chaos Johnny 5 created in Short Circuit?
