It’s About Time (Pay Me Now, or Pay Me Later)


The argument gets louder and more heated each day, as a fast-growing movement starts getting real about the future of AI.

On one side, we have “optimists,” such as futurist Ray Kurzweil and Microsoft’s Director of Research Eric Horvitz, who see a very bright future for the relationship between humans and Artificial Intelligence (AI).  On the other side, some very smart people, such as Stephen Hawking, Elon Musk, and Bill Gates, are being cast as “pessimists,” due to their warnings about catastrophic consequences that could arise from this brave new technology.

As I’ve written in other posts, this is a false dichotomy.  Nobody can predict the future with certainty, so we must assess the current trends realistically and assign probabilities to possible outcomes.  This involves thinking about time, particularly about how to weigh relationships among:

  1. Magnitude (scale, scope, size) of potential benefits and/or hazards (how great is the potential damage or benefit? – often called severity when referring to damage)
  2. Probability that the potential benefits and/or hazards will occur within specified time-frames (how likely is it that the various possible outcomes will occur when it would really matter?)
  3. Time-scales of the processes involved, both with regard to what might happen and with regard to how long it will take to respond effectively (is it already too late to make any meaningful change?)
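These three factors can be folded into a rough expected-impact estimate. The sketch below is purely illustrative: the scoring scale, the response-margin idea, and the assumption that a timely response can offset up to half of an outcome's impact are all invented for the example, not part of any standard risk model.

```python
# Illustrative only: a toy calculation combining the three factors above.
# All scales, weights, and numbers here are invented assumptions.

def expected_impact(magnitude, probability, response_margin):
    """Estimate the expected impact of a possible outcome.

    magnitude:       size of the benefit (+) or harm (-), arbitrary units
    probability:     chance the outcome occurs within the time-frame (0-1)
    response_margin: fraction of the outcome's time-scale available for an
                     effective response (0 = too late to act, 1 = ample time)
    """
    # Assumption: with maximum warning we can offset at most half the impact.
    mitigable = 0.5 * response_margin
    return magnitude * probability * (1 - mitigable)

# The Katrina example: the same storm, with days of warning vs. seconds.
days_of_warning = expected_impact(magnitude=-100, probability=0.8,
                                  response_margin=0.6)
no_warning = expected_impact(magnitude=-100, probability=0.8,
                             response_margin=0.0)
print(days_of_warning, no_warning)  # the harm is worse with less warning
```

The point of the toy model is only that the third factor changes the answer: identical magnitude and probability yield different expected impacts once response time is accounted for.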

Whenever we must choose among various possible courses of action, we face questions about probable outcomes and, consciously or not, we perform some sort of cost-benefit analysis.  Most of the time we are unaware of the values, goals, and motivations that contribute to that analysis, but they constantly shape our choices.  The three factors listed above always come into play.

Although the first two of these three factors are usually described in standard discussions of risk-management, the third is frequently ignored.  For an example of how time-scales come into play when assessing potential risk, consider the forecasting of Hurricane Katrina.  If there had been months to prepare (after its eventual landfall in New Orleans was confirmed), Katrina probably would have been much less devastating.  On the other hand, if the amount of time between predicting and experiencing its landfall had been seconds instead of days, the damage would have been even greater than what actually occurred.  As another example of the importance of time-scales, imagine that you are the captain of an aircraft carrier.  Every time you have to make a navigation decision, you’d better remember that it takes your vessel a significant distance and length of time to change course.

Another way that time comes into play concerns balancing the effects of decisions over varying time-periods. Often we have to weigh short-term benefits against long-term costs (“Yes, I could probably pay for that nice new (fill in the blank) on credit right now, but what would that do to my monthly budget? And what would my spouse think about it? … “).  Also, to make matters more confusing, the possible outcomes are usually mixtures of cost and benefit.  Even when the overall benefits and costs are fairly obvious, there may be offsetting factors.  For example, the enjoyment of an evening with a group of friends may mean missing out on an intimate dinner with a loved one.  Most planning involves trade-offs.

The growing debate about the potential benefits and/or dangers of AI must take such temporal considerations into account, or it will be meaningless.  The degree of overall impact that emerging AI will have on everyday human life is certain to be enormous, even in the near future; it is virtually incalculable when considering the long-term prospect.  That is a relatively uncontroversial claim.  The arguments come into play with questions about whether the effects will be, on balance, overall positive or negative.  This is clearly a very important conversation, regardless of the conclusion(s) one reaches.

The conversation will be more valuable if we are clear about the various factors involved.  It seems to me that one of the most important considerations must be the exploration of potential costs, benefits, and risks – as they play out over time.  The following matrix provides a simple but useful heuristic for exploring the space of possibilities:

              Short-term      Long-term
  Positive    Opportunity     Vision
  Negative    Crisis          Risk

An important realization, once we start thinking about AI in this way, is that as soon as we begin listing possible outcomes, we quickly see that there are many possibilities in each of the four categories. For example, here’s one way of starting to fill in the boxes:

  Short-term positive (Opportunities): medicine, education, scientific research.

  Long-term positive (Vision): humans freed from drudgery; unlimited knowledge and education; virtual immortality; space-exploration; overcoming of most disease; super-intelligent governance and policy-making.

  Short-term negative (Crises): loss of privacy; cyber-attacks; de-humanization of ordinary life; job-loss; use in drones and weaponry.

  Long-term negative (Risks): AI vastly exceeds human intelligence and takes control; bio-implants create the “Borg”; degradation of democracy; unchecked cyber-warfare and AI weapons.

These possibilities are only the beginning and reflect one person’s limited perspective. If a great deal of intensive and collaborative research is put into analyzing the implications of current trends, we will be able to develop very rich and useful models.  And that will almost certainly affect the probabilistic calculus that we will employ during the continuous refinement process used to ensure that those models stay current and within the bounds of reason.  In any case, we need to be prepared, whatever it is that is hurtling down the road toward us.  Regardless of what sort of impact it turns out to actually have on human existence, the thing is definitely picking up speed.

Any intelligent, reasonably informed adult who engages with technology (i.e. most of us) cannot fail to be struck by the ever-increasing velocity of technological change.  Change is not only occurring extremely rapidly; its rate is continually accelerating. Advanced technology itself is playing an increasingly central role in the processes involved in its advancement, such as research, development, and manufacturing.  One obvious example of that trend is the growing incorporation of computerized automation in every aspect of ordinary existence.  Computers do many information-processing tasks much faster than humans.  Although the human brain is still unparalleled in its overall computational ability, humans are rapidly losing ground in the contest for sheer computational power and speed.  Computers continue to become faster and more powerful at the rate of Moore’s Law, and with memcomputing and quantum computing on the horizon, researchers see no obvious end to that acceleration.

The take-home message is that AI operates at time-scales that are much, much faster than the causal factors we typically deal with (microseconds, as opposed to hours, days, or years).  Ten years ago, none of us had an iPhone.  Now, Siri is a common companion for many in their ordinary lives.  But many areas of ordinary life still operate at much the same time-scales that were common a century ago.  Creating and passing comprehensive legislation is a major example.  As we have seen with the climate-change debate, it takes years even to agree that there is a problem, much less to take concerted political action to address it.  Climate change is occurring over decades, and we are still unable to keep pace with the speed at which its probable impact is becoming actual.  What can we learn from that recent history about the time needed to identify, understand, and address the potential effects of self-improving AI?

We need to get a better quantitative handle on the many interacting processes involved, so that objective, measurable evidence can support the quality of our probabilistic calculus.  We need to start using the enormous resources at the disposal of the global community to measure the highly complex, time-relative, and interdependent causal processes at play, in order to know what has to be done and how long we have before it is too late to change course, if that is what human flourishing requires.

It’s about time.

