What is The Singularity?

Vernor Vinge

The acceleration of technological progress has been the central feature of this century. I argue in this paper that we are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence. There are several means by which science may achieve this breakthrough (and this is another reason for having confidence that the event will occur):

  • Computers may be developed that are “awake” and superhumanly intelligent. (To date, there has been much controversy as to whether we can create human equivalence in a machine. But if the answer is “yes, we can”, then there is little doubt that beings more intelligent can be constructed shortly thereafter.)
  • Large computer networks (and their associated users) may “wake up” as a superhumanly intelligent entity.
  • Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
  • Biological science may provide means to improve natural human intellect.

The first three possibilities depend in large part on improvements in computer hardware. Progress in computer hardware has followed an amazingly steady curve in the last few decades. Based largely on this trend, I believe that the creation of greater than human intelligence will occur during the next thirty years. (Charles Platt has pointed out that AI enthusiasts have been making claims like this for the last thirty years. Just so I’m not guilty of a relative-time ambiguity, let me be more specific: I’ll be surprised if this event occurs before 2005 or after 2030.)

What are the consequences of this event? When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities—on a still-shorter time scale. The best analogy that I see is with the evolutionary past: Animals can adapt to problems and make inventions, but often no faster than natural selection can do its work—the world acts as its own simulator in the case of natural selection. We humans have the ability to internalize the world and conduct “what if’s” in our heads; we can solve many problems thousands of times faster than natural selection. Now, by creating the means to execute those simulations at much higher speeds, we are entering a regime as radically different from our human past as we humans are from the lower animals.
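The feedback argument above, in which intelligence creates still greater intelligence on ever-shorter time scales, can be illustrated with a toy growth model (this model is not from Vinge's text; the growth law, the symbols $I$, $c$, and $k$, and the exponent assumption are all illustrative):

```latex
% Toy model: capability I(t) improves itself at a rate that grows
% faster than linearly with current capability (c > 0, k > 1).
\frac{dI}{dt} = c\,I^{k}, \qquad k > 1
% Separating variables and integrating from I(0) = I_0 gives
I(t) = \left[\, I_0^{\,1-k} - c\,(k-1)\,t \,\right]^{\frac{1}{1-k}}
% which diverges at the finite time
t^{*} = \frac{I_0^{\,1-k}}{c\,(k-1)}
```

Under these assumptions, any superlinear self-improvement ($k > 1$) reaches a mathematical singularity in finite time, whereas ordinary exponential growth ($k = 1$) grows fast but never blows up. This is the kind of calculation explored in the "Singularity Math Trialogue" linked below.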

From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control. Developments that were previously thought might happen only in “a million years” (if ever) will likely happen in the next century. (Greg Bear paints a picture of the major changes happening in a matter of hours.)

I think it’s fair to call this event a singularity (“the Singularity” for the purposes of this paper). It is a point where our old models must be discarded and a new reality rules. As we move closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown. In the 1950s there were very few who saw it: Stan Ulam paraphrased John von Neumann as saying:

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.

Von Neumann even uses the term singularity, though it appears he is thinking of normal progress, not the creation of superhuman intellect. (For me, the superhumanity is the essence of the Singularity. Without that we would get a glut of technical riches, never properly absorbed.)

In the 1960s there was recognition of some of the implications of superhuman intelligence. I. J. Good wrote:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. … It is more probable than not that, within the twentieth century, an ultraintelligent machine will be built and that it will be the last invention that man need make.

More Links on The Singularity

From Here to There By New Scientist
In Ray Kurzweil’s The Singularity is Near, physicist Sir Roger Penrose is paraphrased as suggesting that it is impossible to perfectly replicate a set of quantum states, and therefore that perfect downloading (i.e., creating a digital or synthetic replica of the human brain based upon quantum states) is impossible. But how perfect does this copy need to be? This New Scientist article approaches the question of replicating quantum states from a similar perspective: that of quantum teleportation. And how is this complicated by the infinite possible universes that exist, or don’t, based on possible quantum states? (Added August 7th 2001)

Taming the Multiverse By New Scientist
This companion piece starts from the same objection paraphrased from Sir Roger Penrose in Ray Kurzweil’s The Singularity is Near: a set of quantum states cannot be perfectly replicated, and therefore perfect downloading (creating a digital or synthetic replica of the human brain based upon quantum states) is impossible. What would be required to make it possible? A solution to the problem of quantum teleportation, perhaps. But there is a further complication: the multiverse. Do we live in a world of schizophrenic tables? Does free will negate the possibility of perfect replication? (Added August 7th 2001)

The Singularity Is Near – Ray Kurzweil at Extro5 (Video) By Raymond Kurzweil
Ray Kurzweil presents his law of accelerating returns at EXTRO-5. (Added July 30th 2001)

Excerpts from The Spike: How Our Lives Are Being Transformed By Rapidly Advancing Technologies By Damien Broderick
Damien Broderick takes us to the edge of a technological Singularity, where the Internet reaches critical mass of interconnectivity and “wakes up,” and mountain ranges may mysteriously appear out of nowhere. Then again, is the rampant techno-optimism surrounding the imminent Singularity just exponential bogosity? (Added July 26th 2001)

The coming superintelligence: who will be in control? By Amara D. Angelica
At some point in the next several decades, as machines become smarter than people, they’ll take over the world. Or not. What if humans get augmented with smart biochips, wearables, and other enhancements, accessing massive knowledge bases ubiquitously and becoming supersmart cyborgs who stay in control by keeping machines specialized? Or what if people and machines converge into a mass-mind superintelligence? (Added July 25th 2001)

Is A Singularity Just Around The Corner? By Robin Hanson
Robin Hanson explores the economics of the Singularity. (Added June 4th 2001)

Surfing The Singularity: Damien Broderick By Amara D. Angelica
In The Spike (Forge, 2001), Damien Broderick takes us on a wild, hyperkinetic ride through some of the planet’s most imaginative ideas on the accelerating times ahead. (Added May 18th 2001)

Tearing Toward the Spike By Damien Broderick
We will live forever; or we will all perish most horribly; our minds will emigrate to cyberspace, and start the most ferocious overpopulation race ever seen on the planet; or our machines will transcend and take us with them, or leave us in some peaceful backwater where the meek shall inherit the Earth. Or something else, something far weirder and… unimaginable. (Added May 7th 2001)

What is Friendly AI? By Eliezer S. Yudkowsky
How will near-human and smarter-than-human AIs act toward humans? Why? Are their motivations dependent on our design? If so, which cognitive architectures, design features, and cognitive content should be implemented? At which stage of development? These are questions that must be addressed as we approach the Singularity. (Added May 3rd 2001)

Singularity Math Trialogue By Hans Moravec, Vernor Vinge, and Raymond Kurzweil
Hans Moravec, Vernor Vinge, and Ray Kurzweil discuss the mathematics of The Singularity, making various assumptions about growth of knowledge vs. computational power. (Added March 28th 2001)

The Law of Accelerating Returns By Raymond Kurzweil
Raymond Kurzweil’s essay on the confluence of exponential trends known as the Law of Accelerating Returns. (Added March 7th 2001)

What is the Singularity? By John Smart
This introduction to the Singularity includes a brief history of the idea and links to key Web resources. (Added February 27th 2001)

Thanks to: Kurzweil AI Net