OK, logic isn’t the sexiest subject going. But when I taught logic to undergraduates, I learned something very cool about it. The essential value of logic is that it “preserves truth values.” And this is not merely a philosophical nicety: if you start with true assumptions and perform logically valid operations on those assumptions, your conclusions must be true. Not probably true, but necessarily true. In my opinion, that makes logic very cool. Start with truth, analyze it correctly, and you will end up with true conclusions.
So let’s apply that to the current state of our computational reality. First, computing continues to overcome obstacles. Aspects of our world that were once thought to be completely distinct from computers are now routinely recognized as essentially computational. An obvious example is the automobile. Not only are cars built around computer processing, but they are increasingly driven by computers: still in testing, self-driving cars seem destined to become a normal feature of daily life before long.
Computation is so much a part of our everyday world, from geopolitical to microscopic scales, that we no longer expect anything else. It is the norm. And barriers to computer technology continue to erode. Biology, once considered fundamentally distinct from computer science, now incorporates computation in many ways, as exemplified by the rapidly growing fields of computational biology and biological computation. Of course, these advances in computational reach and power benefit all of us in many ways. But their exponential growth in virtually all fields of human endeavor is one of the most significant events to hit humanity since our ancient ancestors developed language. It’s worth taking a few minutes to consider the implications.
We all know that the growth of computation has led to a deluge of data. We can barely keep track of the many different tools we must use simply to sort and analyze the sources of information that dominate everyday life. As data proliferates, our dependence on automated systems increases. We can no longer navigate the world effectively with the cognitive capabilities of our natural intelligence alone. We require digital assistants merely to get by.
All of which leads to the rapid growth of artificial intelligence (AI). This is not merely another facet of pervasive computing. AI is not just bigger, faster, or more powerful computing; it is a fundamentally different approach to the basic capabilities of computation. AI systems are not simply aids to calculation and data storage (which have been the major roles of computation over the past half-century). They are beginning to conquer cognitive realms that have, until now, been exclusively human. AI researchers sometimes refer to “artificial general intelligence” (AGI), the threshold at which AI becomes approximately equivalent to human-level intelligence. AI may still be a long way from achieving that goal; estimates differ significantly. But few experts doubt that it is on the horizon.
If this were all being wielded for the betterment of humankind, we could rest easy. Sadly, human beings do not always treat each other (or other species) with optimal benevolence. Crime, aggression, and warfare are (at least so far in human history) inescapable features of the human condition. As our powers are augmented by computation and AI, so too is our capacity for computerized violence and the militarization of cyberspace.
So long as humans continue to cultivate violence, the threat posed by AI will grow at the same rate as its benefits. We will not be able to harvest only the positive fruits of this technology, because the harm and the value are inseparable. Like all other technologies, AI is morally neutral. But because it is a powerful tool, it will certainly be used to cause great damage. And humans always use the most powerful tools at their disposal to advance their particular purposes, whether helpful or deadly.
Several AI researchers are pursuing a program often called “Friendly AI.” The basic idea is that the dangers posed by AGI and superintelligence can be addressed by designing systems to behave in ways that will not cause great damage to human interests. Clearly, sane people will prefer friendly AI to unfriendly AI, and such research efforts are unquestionably important and valuable. But it is equally clear (at least from my perspective) that no research program into Friendly (or Moral) AI can possibly ensure that all of the players will follow the rules.
A large number of very powerful entities will not be bound by any agreement, contract, or moral imperative beyond the dictates of raw power. Examples include the military-industrial apparatuses of the world’s major governments, several global corporations, most criminal syndicates, terrorist organizations such as ISIS and Al Qaeda, as well as various other well-heeled wackos. Once they suspect that their adversaries could be taking advantage of the capabilities of AI, they will certainly not honor any contract to eschew ultra-powerful weaponized AI.
Unlike the nonproliferation regimes that can at least somewhat restrict membership in the international nuclear weapons club, no enforcement regime for AI could even begin to stop all transgressors. And the very nature of artificial superintelligence is that it need only fail once to spell universal and permanent disaster for all of humanity.
This is logic that warrants a very close look.