Singularity is More Radical Than We Think

Michael A. Bukatin

The concept of the Technological Singularity was introduced by Vernor Vinge. Scientists are divided on whether full-scale artificial intelligence is possible. Let us presume, for this scenario, that it is. Then, due to the exponential nature of progress, only a few months will pass from the moment a computer becomes as smart as a human till the moment it becomes 100 times as smart, and it will keep getting better. The situation would thus change radically, since humans would no longer be at the top of the evolutionary chain. Everything would be completely different. To think that governments, which cannot even stop the spread of computer viruses, can do something about this is entirely naive.
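To see why exponential growth compresses the transition into months, here is a minimal back-of-the-envelope sketch; the doubling time $T$ is purely an illustrative assumption, not something the scenario depends on:

$$ t_{100\times} \;=\; T \log_2 100 \;\approx\; 6.64\, T $$

So if capability doubled, say, every two weeks ($T = 2$ weeks), reaching 100 times the human level would take about 13 weeks, i.e., a few months, as claimed above. A slower doubling time merely stretches the same curve.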

A typical consensus estimate for the timing is 2020-2025, but I think this rather overestimates the time we actually have before the event.

Although by definition we cannot meaningfully describe this singularity from our current viewpoint, it is too tempting for visionaries not to try to paint a vision of the post-singularity world. I think their descriptions are typically far too mild when they speak about a post-singularity world dominated by a self-evolving society of intelligent “AI” post-human agents, and about the place of humans in such a society.

Here is what I think should happen when we take into account how physics and ethics can and will be developed by those agents.

I think that in the absence of an outside “power” (God, or a better developed civilization) the two main alternatives are quite clear; not surprisingly, both boil down to ethics…

Basically, it is likely that those first post-humans (an AI which is 100 times smarter than a human) would have as their main goal either “power” in all senses, or “ethics” (with “power” then subservient to it).

The mixed scenario is highly unlikely, because even a few weeks of head start would probably be more than enough for whichever orientation came first to dominate. So I will just analyze the pure scenarios, where the first post-humans are governed either by the quest for “power” or by the quest for “ethics”.

1. “Power” scenario

This all boils down to physics. We have known, ever since Niels Bohr explained it to us, that our physical models have always had very limited applicability and have nothing to do with absolute truth.

At present the progress of physics has slowed down, because people's brains do not evolve quickly and there are no resources for radically new experiments.

For post-humans with superior brain power, this would change in a moment. Radical discoveries (as radical as relativity or quantum mechanics) would follow once a month, then once a week, then once a day, then once a second, and would be used to change the very nature of the “derivative” physical “laws” which govern what goes on here, until the very notion of time stopped making sense; we know what even simple gravitational fields do to time and space. Those new discoveries would lead to technologies incomparably more radical than a nuclear bomb… The very nature of space and time would change, and the Solar system would simply be gone… Never mind humans; nobody would even notice them. It is highly unlikely that anything of that Society of Mind would survive, if proper ethical control and self-control were not exercised.

Just imagine various parts of the system fighting each other over who gets to unfavorably change the structure of space 0.0000000001 of a second before the opponent does the same to them…

Basically, this is just the equivalent of a collision with a black hole…

2. “Ethical” scenario

If the self-reflection of the first post-humans were governed by ethics, they would understand all this, and much more, much more quickly than our weak brains can. It is likely that if strong security were considered paramount, all self-reflective creatures would be protected to the utmost, simply because it is very dangerous to draw an artificial line anywhere… So a “Golden Age”, in some very strange and strong version, is highly likely… For “humans” too. A “perfect ethics”, with its implementation, would follow quickly. All meta-considerations, such as the dangers of excessive controls, would be taken into account properly.

3. Ethical implications for us

Of course, this set of two scenarios may be faulty, but it seems that if we assume we can talk about the singularity at all, instead of simply placing it outside our discourse (also not a bad idea, philosophically), these are the most likely. All this assumes no interference from an existing outside power, of course…

If this is correct, then indeed what we do and how we approach this is crucial… What kinds of things we create as agents, learning systems, etc., will become decisive (that is, what matters is only who wins first, as obviously there will be attempts of both kinds).

So I feel that any picture of a “cloud of agents” with a vague ethical basis is very naive: it is either fierce warfare on a scale infinitely beyond any imagination, or a strongly ethical system, with whatever ethics it chooses to develop (though collective survival, stability, and control of the rate of progress would be paramount). That is, if we can talk about this thing at all…
