Artificial Ethics

Which way – up or down?

Virtually everyone agrees that machines are getting smarter all the time. What artificial intelligence (AI) researchers and computer scientists debate is whether that means we will meet machines as smart as humans anytime soon. Many predict human-level AI by 2030 or sooner, while others argue that we are still a long way from creating anything that resembles general human intelligence (often referred to as “strong AI”). But the vast majority of experts agree that, however long it takes, strong AI is all but inevitable.

The question that usually arises, once we accept that we will eventually share the planet with machine intelligence at least as smart as we are, is: will they be nice … or will they be lethal?  There’s nothing new about this discussion, but it has been growing rapidly in volume and urgency as accelerating technological advances make it apparent that strong AI is no longer merely the stuff of science fiction.  AI is an expanding part of everyday life, and enormous market pressures ensure that its rate of development will continue to increase for the foreseeable future.

These discussions of AI ethics often assume that the most important questions revolve around how to imbue machines with moral reasoning.  I think this assumption is badly mistaken.  While imbuing machines with moral reasoning is certainly an important topic, it is virtually irrelevant to the broader question of the risks posed by strong AI.

For thousands of years, humans have fought wars over religious differences.  At root, these differences usually embody fundamental conflicts over the proper foundations of ethics.  The ostensible cause of a religious war may be a difference of opinion about metaphysics (my god is better than your god, and so on), but the real reasons have typically included disputes about which kinds of activities are right and proper.

Clearly, this tendency continues unabated, as conflict rages throughout the Middle East.  These battles are seldom due to either side being amoral (despite vigorous assertions to the contrary, usually made by each side about the other).  Instead, both sides are passionately committed to a clear and powerful set of moral values.  They fight precisely because of their moral convictions.

The problem is that the two ethical systems are fundamentally incompatible.  Leaders on both sides cite moral responsibility as a reason to fight, but their moral differences are irreconcilable.

Consider, for example, the greatest lesson of the 20th century: that a technologically advanced and “civilized” democracy could give birth to the Holocaust.  The Nazis were not, by and large, amoral.  They were highly moral, but their moral values were radically different from ours.

Whether or not AI can be “moral” should not be our primary concern.  Instead, we need to recognize that humans cannot agree on the foundational principles of ethics.  It’s easy to imagine that any future “moral” AI will be moral in ways that we like.  But what if that highly moral AI is based on radical Islamic Jihadist principles?

