Why Should We Care? (Part 2)

From a YouTube video of work done at UCSD’s Machine Perception Lab

(This is Part 2 of a 3-part article.)  Part 1 began by considering your possible wellbeing in ten years.  In Part 2, I look at some likely implications of converging trends and make some educated guesses about what we should expect, based on the assumption that these trends will continue to follow the growth curve of the past ten years.

As Yogi Berra said, “It’s tough to make predictions, especially about the future.”  He might well have added, “Expect the unexpected – just don’t expect anything you can expect.”

In Part 1, I began by considering a future ten years out.  I should emphasize that ten years is only an approximate time span, not an exact one.  The potentially worrisome outcomes could start to become obvious in as few as five years or as many as fifteen, but they will be apparent to most of us within our expected lifetimes.  Whether they become clearly evident in five years or not until fifteen, they are coming, and they will make the world very different from the one we live in today.

My answer to the question “why should we care?” – indeed, this entire blog – attempts to walk the fine line between warning about real, important dangers and scare-mongering.  There’s no lack of the latter in today’s popular culture, and I try to avoid that swamp.  But I see many clear indicators that we’re heading for trouble.  We live in a world awash with warnings (global climate change, deepening disasters in the Middle East, the demise of the middle class, etc.), and yet very few people are paying attention to what I believe are the most serious threats.

The general lack of attention to these risks sometimes causes me to doubt the validity of my own forecast.  At those times I am especially disturbed by the possibility that my warnings could veer into fear-mongering.  At other times, the evidence seems so clear, and seems to be mounting so quickly, that I feel I should step up the urgency of my warnings.  When I follow that path, it’s all too easy to ring the alarm bell too vigorously.  In short, I am often pulled in one direction or the other – and often in both directions at once.

My goal is to present a clear and level-headed case that will persuade intelligent readers that we should start thinking carefully about the logical consequences of the convergence of several major trends in technological development.  The individual trends are widely discussed, but the consequences of their convergence are receiving far too little attention.

All of us understand that causes seldom operate in isolation, especially in the real world outside of a controlled laboratory setting.  Invariably, the major catastrophes of history have resulted from multiple contributing factors.  As we hurtle toward our own future, we face the converging effects of many forces.  Many of those forces are the same as they have been for countless thousands of years.  But we live in an era unlike any other.

Today, the major forces shaping human existence include artificial technologies whose impact far outweighs that of natural causes.  In short, we have wrested the fate of humanity away from the rule of the natural world.  We now live in a time in which artificial reality dominates human experience.  The artificial causes that shape our lives are no less interdependent than the natural ones.  But, unlike natural causes, which we have studied for thousands of years, the artificial causes have complex and far-reaching effects that we have barely begun to imagine.

This technological daisy chain will increase the complexity of the systems and raise the risks of massive breakdowns, either through an inadvertent glitch or a malicious attack.

“The problem is humans can’t keep up with all the technology they have created,” said Avivah Litan, an analyst at Gartner. “It’s becoming unmanageable by the human brain. Our best hope may be that computers eventually will become smart enough to maintain themselves.”

Technology already is controlling critical systems such as airline routes, electricity grids, financial markets, military weapons, commuter trains, street traffic lights and our lines of communications.

Now, computers are taking over other aspects of our lives as we depend on smartphones to wake us up in the morning before an app turns on the coffee pot in the kitchen for a caffeine fix that can be enjoyed in the comfort of a home kept at an ideal temperature by an Internet-connected thermostat designed to learn the occupant’s preferences.

Deepening dependency on technology raises risk of breakdowns
by Michael Liedtke and Barbara Ortutay (AP, Jul. 9, 2015)

The consequences of converging technological trends are not necessarily alarming when considered in isolation from human society, psychology, ideology, and politics.  Despite their complexity, these technologies are ultimately tools and the ways in which they are used can be either harmful or beneficial, or both.  If we could be sure that artificial technologies would always and only be used to advance the most beneficial of possible outcomes (beneficial, that is, to the overall, long-term flourishing of humanity), we would have very little to fear.

The core of the problem is that human beings are governed by biological processes that evolved over millions of years (billions, if we include the underlying laws of physics and chemistry).  We are animals and our abilities and operating systems are shaped by the mandates of natural evolution.  Fortunately, we can learn, so we have developed societies and institutions that help us to limit the most destructive tendencies of our biology.  But we have a vast reservoir of evidence that should prove to us that our ability to learn is insufficient to eliminate hatred, violence, and greed.

Human beings are driven by impulses that are notoriously irrational.  If the excellent movie Ex Machina has any overriding theme, it is that artificial intelligence and human intelligence are motivated by very different fundamental goals.  We cannot fully fathom the ways that we are ruled by our emotions and our basic biological drives.  Famously, these are the very qualities that make us human.  Love, joy, anger, fear, lust, hunger, addiction … human irrationality is at the core of the “human heart.”

Despite thousands of years of effort, we have not succeeded in ensuring that all people are perfectly ethical.  Even attempting to achieve any such aim strikes me as quixotic to the point of absurdity.  But that effort is exactly what will be required if we are to realize the positive promise of our artificial prospects. 

Ironically, the basic reason we must attempt this absurd quest is that humans have achieved a state of unprecedented empowerment.  In their excellent book The Future of Violence: Robots and Germs, Hackers and Drones—Confronting A New Age of Threat, Benjamin Wittes and Gabriella Blum present extensive evidence supporting their central thesis that new technologies are characterized by a fundamental feature that they call “mass empowerment.” They focus on three major examples of what they call “technologies of mass empowerment”: networked computers, biotechnology, and robotics.  Clearly, all three are increasingly driven by advances in artificial intelligence.

The fact that technology can serve both useful and destructive purposes is as much a feature of fire, rocks, and spears as of any newfangled invention.  In our modern age, however, new technologies are able to generate and channel mass empowerment, allowing small groups and individuals to challenge states and other institutions of traditional authority in ways that used to be the province only of other states.  They are growing increasingly cheap and available.  They defy distance and other physical obstacles.  And, ultimately, they create the world of many-to-many threats, a world in which every individual, group, or state has to regard every other individual, group, or state as at least a potential security threat. [emphasis added] (The Future of Violence, p. 19)

Modern technologies of mass empowerment … put more power, potentially a lot more power, in the hands of more people, potentially a lot more people.  They thus push toward an extreme in which we have to fear ever more remote and ever more lethal attacks from an ever wider array of ever less accountable people.  (The Future of Violence, p. 22)

The power of technology is available to people throughout the world in ways that allow them to achieve their goals more easily, quickly, and effectively.  That’s great, as long as all of those goals are good for everyone, as well as for the entire planet and countless other species that live on it. 

Sadly, we can be sure that not all people’s goals will, within any foreseeable future, be so benign.  Some people will seek to employ their power to the detriment of others.  And the clear, unequivocal consequence of their acquiring power today is that they will soon try to use that power to harm you and me.

There is no way to stop the development and adoption of these technologies of mass empowerment.  The forces that drive them are simply too powerful and the momentum too great, even if the world could somehow agree to stop them.  Those forces include the competing military interests of the world’s most powerful nation-states, the economic incentives deriving from enormous commercial opportunities, and the profound ideological conflicts that rage throughout much of the world.

The advance of technologies of mass empowerment will continue unabated, and it will do so at an ever-increasing velocity.  One reason technological development accelerates is that the technologies themselves form a kind of feedback loop (or vicious circle): the more these technologies advance, the more quickly they are able to advance.

Perhaps the clearest and most obvious example of this phenomenon is Moore’s Law, but it is by no means the only instance of such a self-sustaining feedback loop.  The basic nature of scientific research ensures that new advances in scientific and technical knowledge are shared openly among interested researchers.  As Isaac Newton famously put it, scientific discoveries are made by “standing on the shoulders of giants.”  Today, the giants are growing at an exponential rate, and scientists worldwide are able to climb onto their shoulders more quickly than ever before, thanks to the internet.
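To make the arithmetic of that feedback loop concrete, here is a minimal sketch in Python.  It is an illustration, not a forecast: the two-year doubling period is the figure commonly cited for Moore’s Law, and the feedback coefficient that shortens each successive doubling is an assumption chosen purely for demonstration.

```python
# A minimal sketch (not a forecast) of the self-reinforcing growth loop
# described above. The roughly two-year doubling period is the figure
# commonly cited for Moore's Law; the "feedback" coefficient, which
# nudges the growth rate itself upward each year, is purely illustrative.

def project_capability(years: int, doubling_years: float = 2.0,
                       feedback: float = 0.02) -> list[float]:
    """Project relative capability year by year.

    Each year, capability grows by the current annual factor; the
    feedback term then increases that factor, modeling the claim that
    the more these technologies advance, the faster they can advance.
    """
    capability = 1.0                    # capability relative to year zero
    rate = 2 ** (1 / doubling_years)    # annual growth factor (~1.41)
    trajectory = [capability]
    for _ in range(years):
        capability *= rate
        rate *= 1 + feedback            # advances feed back into the rate of advance
        trajectory.append(capability)
    return trajectory

if __name__ == "__main__":
    for year, level in enumerate(project_capability(10)):
        print(f"year {year:2d}: {level:8.1f}x")
```

With feedback set to zero, the sketch reproduces plain Moore’s Law doubling (about 32x after ten years); any positive feedback term makes each doubling arrive sooner than the last, which is the essence of the vicious circle described above.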

We are all vulnerable now, and we will become increasingly vulnerable as technologies advance.  Certainly, one important feature of technological progress is ever more sophisticated defenses against the increasing threats.  This typically takes the form of an arms race, as is most obvious in the realm of cyber-hostilities.  But ever since 9/11, we have often been warned that the terrorists only have to succeed once, whereas the defenders have to succeed every time.

The balance between attacker and defender is unequal in the extreme, and the imbalance grows more pronounced as the technologies evolve.  Not only are the technologies becoming more powerful and sophisticated, but as a consequence they are also becoming more and more complex, which inevitably leads to new and undefended vulnerabilities.

As the attackers’ capabilities increase, the range and extent of the potential targets increase at an even greater rate, because virtually every increase in capability also increases the number of ways in which the attack can cause harm.  Defenders not only have to defend against old, known methods of attack, but also against new, previously unknown, often unexpected threats.  Not long ago, we didn’t worry about our phones getting hacked.  Today we do.  Soon, our self-driving cars will also be targets.
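There is simple arithmetic behind the “many-to-many threats” that Wittes and Blum describe, and it helps explain why the defenders’ burden grows faster than the attackers’ capabilities.  The sketch below is a back-of-the-envelope illustration, assuming only that each pair of actors represents one potential threat relationship; the actor counts are arbitrary.

```python
# Back-of-the-envelope arithmetic behind "many-to-many threats": if every
# actor must regard every other actor as a potential security threat, the
# number of threat relationships grows quadratically, not linearly, with
# the number of empowered actors. The actor counts below are arbitrary.

def threat_relationships(actors: int) -> int:
    """Distinct pairs among n actors: n * (n - 1) / 2."""
    return actors * (actors - 1) // 2

for n in (10, 100, 1_000, 10_000):
    print(f"{n:6d} empowered actors -> {threat_relationships(n):12,d} potential threat pairs")
```

Ten empowered actors yield 45 potential threat pairs; ten thousand yield nearly fifty million.  Empowering ten times as many actors multiplies the threat surface roughly a hundredfold, which is why defenders fall ever further behind.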

The world of 2025 will be significantly more dangerous for humans than the world of 2015.  We cannot know the exact shape or nature of the coming dangers, but we can be sure that environmental, geopolitical, economic, and technological factors will all combine to form increasingly complex systems with ever growing numbers of vulnerabilities. 

In the midst of unprecedented uncertainty concerning the future, one thing we can be sure of is that the cumulative “threat surface” will continue to grow.  And, as an inevitable consequence, that growth will also increase the overall number of normal risks that daily confront every man, woman, and child.  Largely because the ability to wield power for ill has been democratized, we cannot count on protection from any quarter.  Neither the state, nor the church, nor our own weaponry can protect us from the immense destructive capabilities that are now emerging as “technologies of mass empowerment.”

With each passing year, we live in a harsher, more frightening world.  That fact leads to a steady rise in mass insecurity and anxiety.  Mental health problems will continue to increase and incidents of isolated violence will almost certainly continue to multiply as a result.  Few of us feel entirely safe today, but we will be significantly less safe in 10 years.

The rapidly growing divide between our technological powers and our biologically constrained powers of rational thought suggests that the risks will only continue to grow, both in number and severity, for many years to come.  Eventually, human brains will probably be supplemented with artificial intelligence (AI) capabilities in ways that will shrink that gap.  Maybe someday normal human irrationality will no longer be a problem.

But such a merger will not occur overnight.  Making that union complete and successful will require a great deal of experimentation and trial and error.  During that lengthy interval of partial integration between AI and human brains, we can reasonably expect that some humans will be artificially enhanced so as to be much smarter than anyone today, yet still be partially driven by unconscious biological processes.  Until human intelligence has become fully integrated with AI and is universally and entirely rational, we will face increasing degrees of risk.  Only after we arrive at such a perfect and rational union might we finally have overcome human irrationality.

One shape that such an eventual seamless union might take was envisioned well by the writers of Star Trek: The Next Generation.  They called their vision “the Borg.”

Irrationality is an essential part of what we most value about humanity, so even if we overcome it, we might not recognize the result as truly human.

Such a future “technological singularity,” in which humans have, in one way or another, been entirely replaced (or subsumed) by technology, is not likely to occur within the next ten years.  Before that happens, artificial reality will continue to advance at an exponential rate, creating an increasingly complex world that is also increasingly “unmanageable by the human brain” (in Avivah Litan’s memorable words).

We cannot fully comprehend such a world.  It has never existed before.  But we are experiencing the beginnings of it today.  And within 10 years, it will be unavoidable.

In Part III of this series, I will turn to the possibilities for solutions.  The situation is grim, but avenues of response are still open.  We must start exploring them with all of our strength and ability, for with every passing day they become less accessible.
