“To Be Or Not To Be” is Not the Question

Dark or Light? Should we predict storms or sunshine?

Anyone reading recent articles about possible futures for Artificial Intelligence (AI) would get the impression that:

  1. We are on the brink of a new age that will see most (if not all) of humanity’s greatest problems defeated by the capabilities of beneficial AI (often described as the singularity); or
  2. By the end of this century (and maybe a lot sooner than that), humanity will encounter a threat to its survival greater than any before, thanks to an exponential increase in the intelligence and power of AI.

Ray Kurzweil, one of the leading proponents of the “AI is beneficial” view, is among many thinkers who divide the field into optimists and pessimists.  Increasingly, AI experts and other technology futurists describe their own views in terms of that dichotomy.

There are several problems with this way of reducing the issue to manageable proportions; chief among them is the simple fact that it is a false dichotomy. Intelligent analysts recognize that nobody knows whether AI will ultimately prove to be a bane or a boon.  All that anyone can do at this time in human history is make predictions, based on vastly incomplete information.  Technology forecasters, like weather forecasters, are limited by the models they have.  Their predictions are correct only to the extent that the models are complete and accurate.

Instead of looking at the current situation through this either-or filter, we should be asking about relative risks.  Weather forecasters often predict weather events in terms of probabilities.  For example, they might tell us that we’re looking at a “70% chance of rain tomorrow.”  The more cynical among us might say they are simply protecting their reputations: whatever happens, they can claim they were right.  But in fact (assuming they are using the tools at their disposal), they are giving us the results of a statistical analysis of models built from current information.
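
To make that concrete, here is a minimal sketch, in Python, of how a statement like “70% chance of rain” can fall out of an ensemble of model runs: run the same simplified model many times with slightly perturbed starting conditions and report the fraction of runs that produce rain.  The model, the humidity figures, and the threshold are all invented for illustration; real forecasting systems are vastly more complex.

```python
import random

def one_model_run(rng: random.Random) -> bool:
    """One 'model run': perturb an observed humidity value and decide rain."""
    observed_humidity = 0.65            # hypothetical observation for today
    perturbation = rng.gauss(0, 0.10)   # uncertainty in the initial conditions
    return observed_humidity + perturbation > 0.60  # crude rain threshold

rng = random.Random(42)
runs = 10_000
rainy_runs = sum(one_model_run(rng) for _ in range(runs))
print(f"Chance of rain tomorrow: {rainy_runs / runs:.0%}")
```

The point is not the toy physics; it is that the “70%” summarizes many simulated futures rather than hedging a single guess.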

Similarly, we need to understand possible future outcomes regarding complex technologies in terms of probabilities. We don’t have enough information to be able to forecast the state of technology even 5 years down the road with anything approaching complete accuracy.  The farther out our predictions go, the less likely we are to be correct.  So we are arguing about chimeras when we debate whether or not AI will terminate humanity.

Risk always requires uncertainty regarding the future.  We don’t talk about a risk of night following day, but we might talk about a risk that going for a long hike late in the afternoon might leave us stuck in the forest after dark.  We decide on courses of action in the near future based upon probabilities regarding their longer-term outcomes.  The future of AI is highly uncertain.  Does that mean there are no risks? Obviously not.  Does it mean that we should ignore the risks?  Only if we don’t mind getting lost in the dark.

The most important AI questions to be addressed are all related to fully identifying and assessing the risks.  There are so many complex factors involved that nobody should pretend to have fully analyzed them.  Coming up with an approximately correct probabilistic assessment of how those risk factors will play out will require an enormous research and analysis effort, distributed globally and utilizing sophisticated computer modeling.
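
As a crude illustration of what “probabilistic assessment” means in practice, the sketch below runs a toy Monte Carlo simulation over a few made-up risk factors.  The factor names, the ranges, and the way they combine are hypothetical placeholders chosen only to show the shape of the exercise; they are not an actual assessment of AI risk.

```python
import random

def sample_one_future(rng: random.Random) -> bool:
    """Sample one possible future and report whether the bad outcome occurs."""
    # Each factor is itself uncertain, so we draw it from a wide range
    # to reflect how little we actually know.  All ranges are made up.
    p_capability_jump = rng.uniform(0.01, 0.50)  # rapid capability growth?
    p_misalignment = rng.uniform(0.01, 0.40)     # goals diverge from ours?
    p_no_safeguards = rng.uniform(0.05, 0.60)    # safeguards fail or absent?
    p_bad_outcome = p_capability_jump * p_misalignment * p_no_safeguards
    return rng.random() < p_bad_outcome

rng = random.Random(0)
trials = 100_000
bad = sum(sample_one_future(rng) for _ in range(trials))
print(f"Estimated chance of the bad outcome: {bad / trials:.2%}")
```

Even a toy like this makes one thing clear: the answer depends entirely on the ranges and relationships you feed in, which is exactly why the real research effort has to be so large.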

It’s probably a good job for AI.
