Why Self-Driving Cars Can’t Make An Ethical Judgement (& Why It Doesn’t Matter)


Soon, you may be able to safely text behind the wheel, something Tesla CEO Elon Musk guarantees will be possible in the near future. "I'm very confident about full self-driving functionality being complete by the end of this year," Musk asserted. While the promise of relaxing while commuting or during long car trips is alluring, self-driving cars also present many ethical questions. Although few people think about ethics each day as they pull out of their driveways, any driver must be prepared to make ethical judgements at a moment's notice.

Far from being premeditated, most ethical judgements made while driving come down to instinct and unconscious bias. When a driver slowing down at a red light sees that the car behind them is not, they have a split second to decide whether to allow the rear-end collision, possibly endangering themselves, or to run the red and risk hitting a distracted pedestrian. Few people could say with confidence what they would do in such a situation.

Related: How A ‘White Obama’ Image Highlights AI’s Racial Bias

The truth is that most people's decisions in such moments depend more on bias than anything else, as The Moral Machine Experiment study published in Nature concluded. Although this is deeply troubling, the problem doesn't have one definite root cause. With the switch from fallible humans to AI, however, the onus falls on those designing the algorithms today, who must decide beforehand how best to respond to the many variations of the problem. Many philosophers and ethicists see the prospect of AI being programmed to make ethical decisions about who lives and who dies as a struggle that cannot be overcome. If AI is programmed to always favor the motorist, they argue, being a pedestrian will become too hazardous. Conversely, who would buy a car that didn't favor protecting the lives of its own passengers? These dilemmas are tough to solve, and every self-driving car that ends up on the road will have to be programmed to make these decisions. But is this fundamentally different from drivers on the road today, who have to make these same moral decisions anyway?

When it comes to safety, AI that is carefully programmed to respond quickly, and that will never be distracted or fall asleep, is obviously far superior to the range of drivers currently on the road. Unlike inexperienced teens and frail elderly people, who are the most likely to err behind the wheel, self-driving cars will by design not make those kinds of mistakes. But the very predictability that makes self-driving cars safe, the fact that they never make unforeseen choices, is also what fills them with potential ethical quandaries.

Confronting the biases with which individuals usually make decisions, and programming ethical standards into self-driving cars, will be difficult and contentious. However, the decisions that must be made beforehand in the case of self-driving cars are ones that already get made daily on the road regardless. Ultimately, the arrival of self-driving cars doesn't present any new ethical dilemmas. Instead, it only forces those involved in developing these AI-driven cars to confront those biases and, hopefully, make better decisions. Coupled with the fact that self-driving cars would eliminate accidents caused by carelessness, it is hard to see a legitimate moral argument against them.

More: Tesla Continues To Defeat Aftermarket Hacks, But For How Long?

Source: Nature, Independent

Article Source and Credit: screenrant.com https://screenrant.com/self-driving-cars-ethics-decisions-drivers/
