Artificial Intelligence Ethics: The Case of Self-Driving Cars

A case study on Artificial Intelligence ethics – Waymo’s self-driving cars.

Artificial Intelligence, or simply AI, has always been a fascinating (and somewhat scary) concept. However, no matter how you look at it, there is no denying that AI plays a huge role in our day-to-day activities, even if we sometimes don’t realize it.

When we think about AI, we think of robots and self-driving cars – but it is present even when we do something as simple as looking up a term on Google. Yes, that’s right – AI is deeply integrated into many of Google’s products, including its famous search engine.

Today, more than 37% of all businesses employ AI in one way or another.

However, the development of AI-based products goes beyond just technology.

Companies also worry about Artificial Intelligence ethics – the concern for moral behavior as they design, build, use, and treat AI systems. There are many questions that often arise when developing AI-based products:

- Can AI systems make ethical decisions?
- What problems can Artificial Intelligence cause in terms of moral behavior?
- Is it possible to prevent unethical situations?
- What happens when an AI system makes a mistake?

There isn’t an easy answer to these questions. In fact, some of them are so complicated that they may not have definitive answers at all. To better understand these issues, today we will look at artificial intelligence ethics through the lens of Waymo’s self-driving cars.

So, let’s dive right into it:

Artificial Intelligence Ethics: the Self-Driving Cars Dilemma

Originating as the Google Self-Driving Car Project in 2009 and becoming a stand-alone subsidiary in December 2016, the technology development company Waymo launched its first commercial self-driving car service in December 2018.

By October 2018, Waymo’s autonomous cars had completed over 10 million miles of driving on public roads, and an astonishing 7 billion simulated miles in a virtual-world program called Carcraft.

However, despite stunning the world with revolutionary technology based on full autonomy – with sensors that provide 360-degree views and lasers able to detect objects up to 300 metres away – the company, valued at more than $100 billion, has yet to resolve some important moral challenges.

To explain these ethical challenges with a practical example, let’s take a look at the video The ethical dilemma of self-driving cars by Patrick Lin, and analyze it from the perspective of the Magna Carta for the Global AI Economy (more on this guide below):

Artificial Intelligence Ethics: The ethical dilemma of self-driving cars (watch video)

In this thought experiment, Patrick Lin presents a practical case in which a self-driving car, boxed in on all sides of the road, is threatened by a heavy falling object and needs to make an important decision – swerve left into an SUV, swerve right into a motorcycle, or continue straight and get hit by the object.

In this situation, Patrick Lin asks the following morally loaded question:

Should the car prioritize the passenger’s safety by hitting the motorcycle, minimize danger to others by not swerving (but risk the life of its passenger), or hit the SUV? What would be the most ethical decision in this case?

In this mental exercise, Patrick Lin states that if the decision were made by a person manually driving a regular vehicle, it could be interpreted as a panic-based, impulsive reaction rather than an actual decision.

However, in the case where a self-driving vehicle makes that decision based on pre-programmed situations and circumstances, would that be considered “premeditated homicide”?

Are the outcomes of possible accidents going to be determined months in advance by programmers? What factors should be taken into account beforehand in order to minimize harm?
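
To make these questions concrete, here is a minimal, purely illustrative Python sketch of what “deciding outcomes months in advance” could look like in code. Everything in it – the maneuver options, the risk numbers, and the occupant_weight parameter – is a hypothetical assumption for the sake of illustration, not Waymo’s actual decision logic.

```python
# A purely illustrative sketch of "pre-programmed" accident outcomes.
# All names, weights, and risk estimates here are hypothetical.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str                # e.g. "swerve_left_into_suv"
    occupant_risk: float     # estimated harm to the car's own passenger (0-1)
    third_party_risk: float  # estimated harm to others on the road (0-1)

def choose_maneuver(options: list[Maneuver], occupant_weight: float) -> Maneuver:
    """Pick the option with the lowest weighted expected harm.

    occupant_weight encodes the ethical stance chosen months in advance:
    1.0 treats everyone equally; values > 1.0 favor the car's own passenger.
    """
    def expected_harm(m: Maneuver) -> float:
        return occupant_weight * m.occupant_risk + m.third_party_risk

    return min(options, key=expected_harm)

# The boxed-in scenario from Patrick Lin's thought experiment:
options = [
    Maneuver("continue_straight_into_object", occupant_risk=0.9, third_party_risk=0.0),
    Maneuver("swerve_left_into_suv", occupant_risk=0.3, third_party_risk=0.4),
    Maneuver("swerve_right_into_motorcycle", occupant_risk=0.1, third_party_risk=0.9),
]

print(choose_maneuver(options, occupant_weight=1.0).name)  # utilitarian stance
print(choose_maneuver(options, occupant_weight=5.0).name)  # passenger-first stance
```

Notice how the entire ethical debate collapses into a single number, occupant_weight, chosen by a programmer long before any accident occurs – which is exactly what makes the “premeditated” framing so uncomfortable.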

The thought experiment conducted by Patrick Lin leaves a lot of room for debate about how artificial intelligence should be understood, analyzed, and implemented.

Considering possible future scenarios like this one, let’s take a look at what challenges Waymo will have to solve in order to succeed with its self-driving cars.

Balance of Power & Machine Decisions

Undoubtedly, one of the main challenges for Waymo and other companies developing self-driving technology lies in determining the balance of power between humans and machines – at what point should power switch from machines to humans, and from humans to machines?

Can we ever be fully and unconditionally dependent on them?

At this stage of a still-emerging technology, probably not. This becomes even clearer when looking at the recent Boeing 737 MAX crash in Ethiopia, where the anti-stall MCAS system automatically forced the nose of the plane down due to incorrect sensor readings, leaving the pilots practically unable to correct the machine’s error.

Was the system given too much power and priority over human intervention? While it’s true that artificial intelligence reduces human error to a huge extent, that doesn’t mean machine error won’t happen at some point in the process.
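
To see what a healthier balance of power could look like, here is a simplified, hypothetical sketch of a control-authority switch, inspired by the MCAS failure described above. The mode names, the confidence threshold, and the overall structure are invented for illustration; real avionics and self-driving stacks are far more complex.

```python
# A simplified, hypothetical sketch of the human/machine balance-of-power
# question. All names and thresholds are invented for illustration.

AUTO, MANUAL = "machine_in_control", "human_in_control"

def next_mode(mode: str, sensor_confidence: float, human_override: bool) -> str:
    """Decide who holds authority for the next control cycle.

    Two deliberate design choices, both of which MCAS arguably violated:
    1. An explicit human override always hands power back to the human.
    2. The machine keeps authority only while its sensors are trustworthy.
    """
    if human_override:
        return MANUAL
    if sensor_confidence < 0.8:    # e.g. disagreeing or failed sensors
        return MANUAL
    return mode                    # no reason to change hands

# Example: a faulty angle-of-attack-style sensor drops confidence to 0.2,
# so authority reverts to the human instead of forcing the nose down.
print(next_mode(AUTO, sensor_confidence=0.2, human_override=False))  # human_in_control
```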

Personal choice & Polarization

The next reflection on artificial intelligence ethics has to do with personal choice and polarization.

It is one of the biggest aspects of the Magna Carta – a guide for inclusivity and fairness in the Global AI Economy, presented by the experts Olaf Groth, Mark Nitzberg, and Mark Esposito.

This guide aims to support organizations in developing a successful AI strategy, with a focus on Artificial Intelligence ethics. It raises major questions about the degree of human choice and inclusion in AI development.

How are we going to govern this brave new world of machine meritocracy? Will machines eliminate personal choice?

While personal choice and polarization are some of the key aspects of the Magna Carta, self-driving technology might not necessarily have a strong negative impact on people and their day-to-day lives.

This type of technology is designed with the idea of making better, faster, and more environmentally friendly decisions that would end up benefiting practically all users of the service. It might reduce personal choice to a certain extent, but I don’t think it will eliminate it completely.

Judgements, discrimination and bias

As we discussed earlier, machines with artificial intelligence will make decisions about our safety that might compromise the well-being of others, if they have been pre-programmed to “react” in a certain way depending on the situation.

As we saw in the example of the car threatened by a heavy falling object, would the priority be minimizing overall harm, or saving the owner of the self-driving vehicle?

As Patrick Lin asks, would you choose a car that always saves as many lives as possible in an accident, or one that would save you at any cost? These are just some of the questions that arise when it comes to Artificial Intelligence ethics.

Moreover, what would happen if the car starts analyzing data shaped by a programmer’s personal history, predispositions, and unseen biases? Is there any guarantee that the decisions self-driving cars make will always be completely objective, and who would decide them?

Programmers, companies, maybe even governments? What is the possibility of machine discrimination arising from algorithms and pattern recognition? In this respect, I think that self-driving technology is not yet compliant with the Magna Carta’s standards of fairness and inclusivity.
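
One way to probe for this kind of machine discrimination is to audit decision logs for unequal outcomes across groups. Here is a minimal sketch of such a check; the ride-request data and the 80% rule-of-thumb threshold are illustrative assumptions, not an auditing standard actually mandated for self-driving systems.

```python
# A minimal sketch of auditing a decision log for disparate outcomes.
# The data below is hypothetical; the 0.8 threshold is a common
# rule-of-thumb red flag, not a self-driving regulation.

from collections import defaultdict

def favorable_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Fraction of favorable outcomes per group in (group, favorable) records."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        favorable[group] += ok
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact(decisions: list[tuple[str, bool]]) -> float:
    """Ratio of the lowest to the highest group rate; < 0.8 is a red flag."""
    rates = favorable_rates(decisions).values()
    return min(rates) / max(rates)

# Hypothetical log of ride requests the service accepted, by neighborhood:
log = [("neighborhood_a", True)] * 90 + [("neighborhood_a", False)] * 10 \
    + [("neighborhood_b", True)] * 60 + [("neighborhood_b", False)] * 40

print(disparate_impact(log))  # 0.667 -> below 0.8, flag for human review
```

A check like this cannot prove an algorithm is fair, but it can surface the pattern-recognition biases discussed above before they quietly become policy.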

Conclusion

In conclusion, I believe that the key to answering and resolving these ethical questions is balancing power between machines and humans, and deciding to what extent machines (in this case, self-driving cars) should be able to make life-and-death decisions.

I think that while the technology is still emerging, humans should have the power and priority to make moral decisions, as machines evolve and become capable of making objective decisions that minimize harm for everyone.

What do you think about artificial intelligence ethics? Should humans reign over machines, machines over humans, or should there be a well-calculated balance? Let me know in the comments below! If you liked this article, you might also like 12 Ways Machine Learning Can Improve Marketing.

