Claims Transformation | Georgina | 28 June 2019
Our latest white paper explores where liability lies for autonomous vehicles and insurance. As driverless cars edge closer to everyday reality, the humble ‘robot’ will need to become more human – but without making any errors whatsoever.
Read our white paper in full here.
The legal notion of the ‘reasonable person’ is of interest here. The reasonable person is a hypothetical member of the community who owes a duty to act in, well, a reasonable manner. In that way, the reasonable person is a composite of the community’s judgement of how an individual within it should behave, particularly when it comes to actions which might cause harm to others.
This is a fundamental basis of negligence under tort law: if a person acts unreasonably, they may be considered to have acted negligently. For example, it is reasonable for a driver to wait stationary at a red traffic light, because it is predictable that jumping the light could result in harm to others. By jumping the light, the driver can therefore be seen to be acting unreasonably and, in turn, negligently.
When it comes to case law, it’s all too easy to place agency upon an autonomous vehicle and to consider an incident through the same legal lens. Case law consists of previous rulings made by a court, which are treated as precedents. We would then ask: did the autonomous vehicle act in the same way that a reasonable person would, according to precedent?
For a Claims Handler, ‘Bingham & Berrymans’ Personal Injury and Motor Claims Cases’ is the bible of precedents, detailing thousands of examples of case law. These precedents guide the Claims Handler when assessing liability for a specific motor accident. A central question is whether precedents accrued over several decades are still relevant.
Is there such a thing as the reasonable robot?
Can we simply retain existing case law and replace the driver with the entity which wrote the self-driving software? After all, for an autonomous vehicle, the software is meant to make the same decisions a real driver would. The software decides whether to proceed at a red light or wait for the light to turn green. The software which allows an autonomous vehicle to function is inherently complex, possibly too complex for a court to determine whether the code is at fault, especially where the experts in that software work for the defendant, the vehicle manufacturer.
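To make this concrete, a decision such as waiting at a red light ultimately has to be expressed as logic in software – logic a court would need to scrutinise. The sketch below is purely illustrative; the names (`TrafficLight`, `choose_action`) and thresholds are invented for this example and do not represent any manufacturer’s actual code.

```python
from enum import Enum

class TrafficLight(Enum):
    RED = "red"
    AMBER = "amber"
    GREEN = "green"

def choose_action(light: TrafficLight,
                  distance_to_stop_line_m: float,
                  safe_stopping_distance_m: float) -> str:
    """Illustrative rule mirroring the 'reasonable driver' precedent at traffic lights."""
    if light is TrafficLight.GREEN:
        return "proceed"
    if light is TrafficLight.AMBER and distance_to_stop_line_m < safe_stopping_distance_m:
        # A reasonable driver may continue when it is no longer possible to stop safely.
        return "proceed"
    # Red light, or amber with room to stop: the reasonable driver waits.
    return "stop"

print(choose_action(TrafficLight.RED, 30.0, 25.0))    # stop
print(choose_action(TrafficLight.AMBER, 10.0, 25.0))  # proceed
```

Even this toy rule hides judgement calls (what counts as a safe stopping distance?) which, in a real system, are buried far deeper in the code.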
While discussion around autonomous vehicle artificial intelligence and machine learning is commonplace, little attention has been paid to the actual rules these vehicles will follow and whether those rules are being adopted consistently by all autonomous vehicle manufacturers and software providers.
Since 1942, when Isaac Asimov gave us his three laws of robotics, the topic of robots has been extensively explored in books, film and television.
Isaac Asimov’s three laws of robotics:
First Law:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law:
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law:
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
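For illustration only, the three laws form a strict priority hierarchy: each law gives way to the ones above it. A minimal sketch of that ordering, with invented names and deliberately simplified inputs, might look like this:

```python
# Illustrative only: Asimov's three laws as an ordered filter over candidate
# actions. The Action fields and evaluate() function are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool       # would this action injure a human?
    allows_harm: bool       # would it allow a human to come to harm through inaction?
    obeys_order: bool       # does it follow the current human order?
    preserves_robot: bool   # does it protect the robot's own existence?

def evaluate(candidates: list[Action]) -> Action | None:
    # First Law: discard anything that injures a human or allows harm through inaction.
    lawful = [a for a in candidates if not a.harms_human and not a.allows_harm]
    if not lawful:
        return None  # no candidate satisfies the First Law
    # Second Law: prefer obedient actions, but only among those the First Law permits.
    obedient = [a for a in lawful if a.obeys_order] or lawful
    # Third Law: finally, prefer self-preservation among what remains.
    preserving = [a for a in obedient if a.preserves_robot] or obedient
    return preserving[0]
```

The point is that the ordering, not the individual rules, carries the ethical weight: the Second and Third Laws only ever choose among actions the First Law already permits.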
As a pedestrian, knowing what laws an autonomous vehicle operates under might mean the difference between life and death. According to Immanuel Kant’s moral theory, a person acts morally when acting in a manner in which they would expect all people to act. Similar in effect to the legal fiction of the reasonable person, Kant expects that members of a community will have a moral regard for each other, only committing acts which are moral.
In many senses, Asimov’s three laws of robotics are an attempt to extend those morals into robotics, and by extension we expect the laws an autonomous vehicle follows to mirror the moral outlook of the community the vehicle operates within.
The trouble with any exploration of morality is the innate subjectivity and complexity of human moral decisions.
We can explore this with a Kantian morality test.
An autonomous vehicle is travelling along a road at 60mph. There are five passengers on board: a family of two adults and three children. A human-controlled vehicle in an adjacent lane suddenly and violently swerves into the lane in front of the autonomous vehicle carrying the family. The autonomous vehicle knows that it cannot stop in time to prevent a high-speed impact with the other vehicle, so it must brake and steer either left or right. A steer to the left would cause the autonomous vehicle to strike a 17-year-old pedestrian, likely killing them. A steer to the right would place the autonomous vehicle in the path of a third vehicle containing two elderly adults.
What is the right moral decision, and would the autonomous vehicle be able to make it? Under Asimov’s three laws, the autonomous vehicle must protect human life, but which lives? The family, the teenager, the elderly couple? If the autonomous vehicle decides to allow the teenager to die rather than the elderly couple, purely on a mathematical calculation, are we happy about that?
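To see why deciding ‘purely on a mathematical calculation’ sits so uncomfortably, consider what that calculation would have to look like. The sketch below is a hypothetical expected-harm score for the scenario above; every number in it is invented for illustration, and it describes no real system.

```python
# Hypothetical expected-harm calculation for the three options in the scenario.
# All probabilities and counts are invented for illustration only.

def expected_harm(people_at_risk: int, probability_of_fatality: float) -> float:
    """A crude utilitarian score: the expected number of fatalities."""
    return people_at_risk * probability_of_fatality

options = {
    "brake only (hit swerving car)": expected_harm(people_at_risk=5, probability_of_fatality=0.4),
    "steer left (hit pedestrian)":   expected_harm(people_at_risk=1, probability_of_fatality=0.9),
    "steer right (hit third car)":   expected_harm(people_at_risk=2, probability_of_fatality=0.5),
}

# The 'mathematical' choice is simply whichever option scores lowest.
choice = min(options, key=options.get)
print(choice, options)
```

Whichever option scores lowest ‘wins’, yet the score says nothing about age, consent or fairness – which is precisely the objection.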
In the future, will we need personal morality imperatives? A directive which allows automated vehicles to understand a person’s wishes in such an event. Could the elderly couple have a declared waiver that permits an autonomous vehicle to kill them rather than a teenager? The philosophical implications are tremendous, of course. What if I were to suggest that an autonomous vehicle should prioritise law-abiding citizens over convicted criminals? Or that those who can afford it should be able to buy a higher priority status in the event that an autonomous vehicle accident occurs?
“Even if we wanted to imbue an autonomous vehicle with an ethical engine, we don’t have the technical capability today to do so.” – Karl Iagnemma, Author