✍️ Original article by Zac Amos, the Features Editor of ReHack, where he writes about his favorite tech topics like AI, cybersecurity and more!
Self-driving cars are closer than ever to going mainstream. As autonomous vehicle technology advances, serious ethical concerns are surfacing.
Who is responsible when a self-driving car gets into an accident? How should engineers program autonomous vehicles for accident situations? The answers to these questions will shape the future of self-driving vehicles and AI in general.
Technical Reliability
One of the most commonly cited benefits of autonomous vehicles is that they are safer. By this, people generally mean that an AI-driven vehicle is more precise and consistent and therefore less likely to cause an accident. After all, an algorithm cannot fall asleep at the wheel, get distracted, or drive over the speed limit.
So, from an ethical standpoint, many see autonomous vehicles as safer than conventional vehicles. On a technical level, this may be true. The sensor technology certainly exists today for self-driving vehicles to be safe. It is simply a matter of getting the driving algorithms comprehensively trained and fine-tuned.
This is where the major ethical considerations surrounding self-driving cars originate. It is not a question of whether the sensors and hardware are safe or reliable. The algorithms are the issue – specifically, how they are trained and what they are trained to do in high-stakes maneuvers.
Accident Programming
When a human driver gets into an accident, they don't make an analytical, calculated response. They react instinctively and sometimes essentially at random. There is no way of changing or controlling this response, so humans have come to accept that reactions that lead to harm are simply unfortunate accidents.
This kind of stance is much more difficult to defend with a self-driving car. An algorithm can’t make an instinctive decision. Every decision an autonomous vehicle makes has to be intentionally programmed and trained into it.
The first priority, of course, is to train the algorithm to avoid dangerous situations altogether, but accidents can never be prevented entirely – especially when autonomous vehicles share the road with human drivers.
So, this brings up one of the most challenging ethical questions concerning self-driving cars. How should the car respond in an accident? Specifically, how should it respond in a no-win scenario?
For example, if an autonomous vehicle is in a crash situation where, no matter what, there is a significant chance someone will get hurt, how does the algorithm prioritize? Does it prioritize saving the driver and passengers first, pedestrians, or other drivers? What if the driver and passengers are safe but the car has to hit one of two pedestrians?
Who Gets to Decide?
This is a serious concern for autonomous vehicle manufacturers and developers. Who gets to make a decision that affects lives inside and outside the vehicle? Should government bodies determine how autonomous vehicles respond in a crash, or should it be the driver’s preference?
A study launched in 2016 attempted to answer this question. The project used an online game called the "Moral Machine" to collect input from people all over the world on what they would want a self-driving car to do in an accident. The results showed significant bias based on physical traits, though the specific traits varied between geographic regions.
For example, people in Western countries were more likely to spare a young person over an elderly one. Some themes, like sparing women over men, were largely consistent across cultures.
One of the most important takeaways from this study is that morals aren’t consistent around the world. In different cultures, people will have a bias toward saving one type of person in a car accident over another.
These subjective biases are not a fair way to train autonomous vehicles, though. The solution may be to train algorithms to save the most lives possible, regardless of any physical characteristics. At the very least, self-driving cars should be programmed to prioritize human life over property or animals, a requirement Germany already has in place.
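To make the idea concrete, here is a minimal, purely illustrative sketch of what such a rule could look like in code. The `Outcome` model, its fields, and the ranking function are hypothetical assumptions for the sake of the example, not any manufacturer's actual implementation; the point is simply that a policy like "human life first, then the greatest number of lives, with no regard for personal traits" has to be written down explicitly somewhere.

```python
from dataclasses import dataclass


@dataclass
class Outcome:
    """One possible result of an unavoidable-crash maneuver (hypothetical model)."""
    humans_harmed: int      # estimated number of people injured or killed
    animals_harmed: int     # estimated number of animals harmed
    property_damage: float  # estimated cost of property damage


def choose_maneuver(outcomes: list[Outcome]) -> Outcome:
    """Pick the least-harmful outcome under a 'most lives saved' policy.

    Priority order (illustrative, loosely echoing the German requirement
    mentioned above): human harm first, then animals, then property.
    Note that no demographic or physical traits appear anywhere in the model.
    """
    return min(
        outcomes,
        key=lambda o: (o.humans_harmed, o.animals_harmed, o.property_damage),
    )


# Example: braking straight would hit a pedestrian; swerving only damages property.
options = [
    Outcome(humans_harmed=1, animals_harmed=0, property_damage=0.0),
    Outcome(humans_harmed=0, animals_harmed=0, property_damage=20_000.0),
]
print(choose_maneuver(options))  # -> the option with zero human harm
```

Even in a toy example like this, every ranking decision is an explicit, auditable line of code, which is exactly why the question of who gets to write that line matters so much.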
Driver Licenses and Liability
As fully self-driving cars creep closer to becoming a reality, many are wondering whether driver's licenses will even be necessary anymore.
Along the same lines, this brings up the question of who, or what, is responsible on the road. Could an autonomous vehicle drive a child to school with no licensed driver in the vehicle? Who is liable for damages when accidents occur on the road?
Usually, this would be the person driving the car. However, one of the biggest misconceptions about AI is that these algorithms can think for themselves, which is not (yet) the case. The AI that is actually doing the driving is merely software carrying out its programming — the program itself cannot be held responsible for its actions. Does that mean the car manufacturer is responsible for any accidents? The software developers?
This is the type of ethical question that will likely fall to government bodies to sort out. Local road laws may require a licensed driver to be in the car and ready to take over if needed. Similarly, liability will likely be assigned based on the cause of each accident: only if a crash is determined to have been caused by a technical glitch or shortcoming would the manufacturer be responsible.
The Issue of Hacking
On an even deeper level, it may be worth asking whether it is safe to place human lives in the hands of AI at all. Putting a computer in control of something as potentially dangerous as a car may create more risks than it removes. For example, it opens the door to potential hacking incidents that could be fatal.
Unfortunately, this has already happened. Hackers have figured out how to remotely take control of vehicles, even ones that aren't self-driving. With autonomous vehicles, the danger is heightened, since the vehicle has to stay connected to the internet for things like software updates and navigation data. A hacker could carjack someone while they were driving, anonymously steal their car in the middle of the night, or remotely unlock their doors.
From an ethical standpoint, is this situation safer than the risks of human error on the road? Considering the rising rates of cybercrime, some may argue self-driving cars pose too great a security risk to go mainstream.
Morality, AI, and the Roads
Self-driving cars have been a staple of visions of the future for decades. However, the reality is that autonomous driving technology poses serious ethical concerns that get left out of those idealistic representations of tomorrow. Self-driving cars could very well bring incredible benefits to the world, such as mobility for people who cannot drive and time savings for everyone.
Before this can happen, though, engineers, leaders, psychologists, philosophers, and society at large must determine what moral code will be behind the computer code of our autonomous vehicles.