
Calculating existential risks

One way and another, the Cambridge Project for Existential Risks will have enough to keep itself busy

"The singularity" is a term popularized by science-fiction writer Vernor Vinge in a 1993 essay to describe the moment when human beings cease to be the most intelligent creatures on the planet. The threat, in his view, came not from very clever dolphins but from hyper-intelligent machines. But would they really be a threat?

We have a foundation for almost everything these days, and now we have one to worry about just that threat. It is the Cambridge Project for Existential Risks, set up by none other than Martin Rees, Britain's astronomer royal, and Huw Price, occupant of the Bertrand Russell Chair in Philosophy at Cambridge University. The money comes from Jaan Tallinn, co-founder of Skype, the internet telephone company now owned by Microsoft.

It is quite likely, of course, that we will one day create a machine - a robot, if you like - that can "think" faster than we do. Moore's Law, which observes that computing power doubles every two years, is still holding 47 years after it was first stated by Intel co-founder Gordon Moore. Since the data-processing power of the human brain, although hard to measure, is obviously not doubling every two years, this is a race we are bound to lose in the end.
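The scale of that claim is easy to check with back-of-the-envelope arithmetic: 47 years at one doubling every two years is 23.5 doublings, a growth factor of roughly twelve million. A minimal illustrative sketch (the figures are the article's, not precise semiconductor history):

```python
# Back-of-the-envelope Moore's Law arithmetic, using the article's
# figures: computing power doubling every two years for 47 years.
years = 47
doubling_period = 2  # years per doubling, per the article

doublings = years / doubling_period      # 23.5 doublings
growth_factor = 2 ** doublings           # roughly a 12-million-fold increase

print(f"{doublings} doublings -> about {growth_factor:,.0f}x the computing power")
```

Against growth like that, any roughly fixed quantity, such as the processing power of a human brain, is overtaken sooner or later, which is the whole force of the paragraph above.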

But that is only the start of the argument. Why should we believe that creating a machine that can process more data than we can is a bigger deal than building a machine that can move faster than we do, or lift more than we can? The "singularity" hypothesis implies (though it does not actually prove) that high data-processing capacity is synonymous with self-conscious intelligence.

It also usually assumes, with all the paranoia encoded in our genes by tens of millions of years of evolutionary competition for survival, that any other species or entity with the same abilities as our own will automatically be our rival, even our enemy. Like Skynet, the US defence computer in the "Terminator" series that triggered a nuclear war on the day it became self-aware, because it feared that human beings would turn it off if they knew it had become conscious.

The old biological rule of ruthless competition for survival must somehow be eliminated from the behavioural repertoire of machine intelligences, but can you really do that? Nobody knows, but you can, at least, split the question into bite-sized bits.

Does a very high data-processing capacity automatically lead to "emergent" self-awareness, so that computers become independent actors with their own motivations? In the biological sphere, it does seem to. But is it equally automatic in the electronic sphere? There is no useful evidence either way.

If self-conscious machine intelligence does emerge, will it inevitably see human beings as rivals and threats? Or is that kind of thinking just anthropomorphic? Again, not clear.

And if intelligent machines are a potential threat, is there some way of programming them that will, like Asimov's Laws, keep them subservient to human will? It would have to be something so fundamental in their design that they could never get at it and re-programme it, which would be a tall order.

And that's even before you start worrying about nanotechnology, anthropogenic climate change, big asteroid strikes, and all the other probable and possible hazards of existential proportions that we face. One way and another, the Cambridge Project for Existential Risks will have enough to keep itself busy.

- Gwynne Dyer is an independent journalist whose articles are published in 45 countries.