There’s No Cause for Alarm – Just Be Quiet and Drink Your Kool-Aid

I’m sure there’s no problem, because I don’t see one.

In a Feb. 6, 2015 article in The Washington Post titled “No, the robots are not going to rise up and kill you,” IBM researcher David W. Buchanan writes:

Science fiction is partly responsible for these fears. A common trope works as follows: Step 1: Humans create AI to perform some unpleasant or difficult task. Step 2: The AI becomes conscious. Step 3: The AI decides to kill us all. As science fiction, such stories can be great fun. As science fact, the narrative is suspect, especially around Step 2, which assumes that by synthesizing intelligence, we will somehow automatically, or accidentally, create consciousness. I call this the consciousness fallacy. It seems plausible at first, but the evidence doesn’t support it. And if it is false, it means we should look at AI very differently.  …

To someone who succumbs to the consciousness fallacy, this history must be deeply disappointing. Because consciousness has not magically appeared, we must be missing something deep about intelligence. But if we let go of the fallacy, we immediately benefit in two major ways. The first is in ethics: An intelligence that is not conscious is not a person, so any ethical problems related to the treatment of an AI system evaporate. The second is that AI may be much less dangerous than many people believe. Without a consciousness to drive it, there is no reason to expect such an intelligence to rise against us.

Buchanan bases his argument on an idea he calls “the consciousness fallacy.”  This seems to boil down to the view that consciousness is inherently a “harder” problem than the other challenges that confront neuroscientists.  Ironically, in the same article he gives us a reason to reject his own claim when he tells us that “While most of us would agree that we know consciousness when we see it, scientists can’t really agree on a rigorous definition, let alone a research program that would uncover its basic mechanisms.”

Buchanan’s basic argument could be summarized as follows:

  1. AI would require consciousness in order to do something really bad to us;
  2. So far, there is no scientific consensus regarding the physical basis of “consciousness,” so we don’t understand what consciousness really is;
  3. Machines certainly are not conscious now, and they probably won’t be anytime soon;
  4. Therefore, we don’t need to worry.

There are obvious logical errors in this argument.  The most important problem, I think, is the assumption that consciousness (or sentience) is a dividing line separating ordinary old software from some kind that is potentially malevolent.  Software is created precisely to give machines capabilities that mimic intelligence, in ways that range from calculation to memory to predictive analytics.  The notion that computers would have to be conscious in order to be dangerous is completely unfounded.  Programs are written to give machines a certain type of autonomous agency within clearly defined ranges of activity.  That’s why spreadsheets work: the machine executes its predefined instructions, and those instructions produce the intended consequences.

It’s true that (for now) computers by and large require human programmers to give them instructions, but that should not give us a great deal of comfort.  First, all programmers make mistakes and create bugs; second, not all programmers are well-intentioned.
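To make the point concrete, here is a minimal, hypothetical sketch in Python (the dosing scenario, the function compute_dose_ml, and the numbers are invented purely for illustration).  A trivial program does exactly what it was written to do; one unit mistake, faithfully executed, yields a harmful result, and no consciousness, malice, or robot uprising is anywhere in the loop:

    # Hypothetical example: an automated dosing routine.
    # The machine simply executes its predefined instructions.

    def compute_dose_ml(weight_kg, mg_per_kg, concentration_mg_per_ml):
        """Return the volume (in ml) to dispense for a given body weight."""
        total_mg = weight_kg * mg_per_kg
        return total_mg / concentration_mg_per_ml

    # Intended call: weight supplied in kilograms.
    print(compute_dose_ml(70, 5, 50))   # 7.0 ml -- the intended consequence

    # Buggy call: the weight is passed in pounds by mistake.
    # The program neither notices nor objects; it just follows its instructions.
    print(compute_dose_ml(154, 5, 50))  # 15.4 ml -- more than double the intended dose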

A second big mistake is to assume a clear relationship between the lack of scientific consensus regarding the physical basis of consciousness and the question of whether we can properly attribute consciousness to machines (or to anything else, for that matter).  The fact that scientists have yet to reach a general consensus about the mechanisms underlying consciousness does not entail that machines cannot have it, either now or in the future.  It tells us only that we would be unable to identify and measure such an event if it occurred.

There is a lot of disagreement about which non-human animals (if any) are conscious.  Most people would agree that worms probably are not, but what about whales?  How would we know?  We attribute consciousness to other people primarily because they are people: we take it for granted that a person who is awake and behaving normally is conscious.

Buchanan says, “The best definitions capture the idea that consciousness grounds our experiences and our awareness. Certainly consciousness is necessary to be ‘someone,’ rather than just ‘something.’”  His argument appears to rest on the speculative hypothesis that a machine cannot be conscious because it is not a person.  But, as decades of the abortion debate have taught us, personhood is a concept nearly as elusive as consciousness.  How can we be so sure that machines are not “persons”?  Can we really know that chimpanzees cannot be “persons”?  Where and how do we draw such a line?

Even if we could easily provide a clear set of specifications for personhood, it would not justify his bald claim that “Certainly consciousness is necessary to be ‘someone,’ rather than just ‘something.’”  Why should we assume that non-human entities certainly lack consciousness, merely because they don’t seem to meet some arbitrary criterion for being “‘someone,’ rather than just ‘something’”?  Buchanan seems to be appealing only to common intuitions to support this idea.  But common intuitions often turn out to be very wrong.  The earth is not flat, nor is it at the center of the universe.

Until scientists have generally agreed on some reliable empirical method for identifying and measuring consciousness, we only have non-scientific anecdotal evidence for its existence, along the lines of, “It doesn’t seem like it could be conscious, because it doesn’t look or act like a human being.”

Clearly, this is inadequate.  Without a solid science of consciousness, the only reasonable strategy is to set it aside entirely when assessing the risks potentially posed by AI.  Otherwise, the risk assessment will be based on whatever folk-psychological or culturally popular conceptions happen to seem intuitively plausible.

Buchanan also seems to ignore lessons that we should have learned from the history of nuclear technologies over the past 75 years.  He writes, “AI may be much less dangerous than many people believe. Without a consciousness to drive it, there is no reason to expect such an intelligence to rise against us.” Surely he doesn’t believe that consciousness is a necessary attribute of a potentially lethal technology. AI does not require consciousness in order to be used in very powerful weapons such as drones.

The biggest error that Buchanan makes (along with most others who deny the potential dangers of emerging AI technologies) is that he fails to account for the dynamic interplay among risk factors that accompanies all modern complex technologies, especially those with very broad areas of potential application, such as AI.

Consider electrical power.  It’s a feature of modern existence so common that we might not even think of it as technology.  But it became a common feature of modern life only in the early part of the twentieth century, and to its earliest users it must have been an astonishing new technology.  Suppose that, during the early adoption of electrical power, right at the beginnings of the modern electrical grid, scientists had seriously investigated its risk potential.  Would the investigators, in their wildest dreams, have been able to imagine that a century later the global economy could be devastated by the cumulative effects of millions of different electric-powered devices (varying from electric lights to computers), all with very different purposes and all required by the standard practices of financial institutions?  They probably would have been more concerned about the possibility that users of electrical devices would experience electric shocks.  AI, like electricity, will inevitably impact virtually every aspect of human life.  To assume that the only serious risks are from killer robots is like imagining that the only harmful use of electricity is the electric chair.

In my posts The Four Horsemen of the Datapocalypse and Malevolentware, I summarize reasons we should expect AI to be incorporated into malware.  Cyber security experts and senior defense officials warn that cyber attacks constitute an enormous and growing threat, even without the supercharging effect of AI.  We must start seriously and intensively examining the consequences of combining AI with other rapidly advancing technologies, ranging beyond computer technologies to drones, satellites, biotechnologies, nanotechnologies, pharmaceuticals, and others that have not even been named yet.

None of this means we can afford the luxury of fear-mongering.  These issues are not about facing down killer robots, nor are they black-and-white.  But to say that the risks are not real is not only wrong and misleading; it undermines the very potential for good that such denials proclaim to be assured.

 
