Will Sawin says that Robin Hanson's singularity scenario might doom mankind. (This Will Sawin?) His comment applies to any singularity, not just Hanson's.
Let's assume Hanson's model of history works, that future singularities are possible, and that Sawin is right. Can we escape an unpleasant fate?
We might compare the difference between today and a singularity to the difference between Australopithecus, or at least ancient Rome, and today. The analogous question would then be: could early hominids have stopped human activity from making chimpanzees an endangered species millions of years later? Or could Han Dynasty rulers at the time of Christ have prevented Western hegemony over China in the colonial era?
Well, no. They weren’t smart enough to foresee the future. Even if they had known, they might not have cared. Even if they had cared, they couldn’t have influenced other hominids or ancient Europe and stopped them from evolving or industrializing.
But today, if Hanson, Kurzweil, Vinge, and others are right, at least some people are smart enough to see the future. New singularities would also present a personal, mortal threat to everyone alive; I think we might care. And unlike in past eras, the world's leaders now routinely talk to each other at G20 meetings.
On the face of it, then, calling off the robot party may be possible.
Will this shindig actually be cancelled, though? On a positive note, concepts like a singularity and transhumanism are much better known than they were twenty years ago.
The world is more interconnected: international trade is up. Nuclear disarmament has actually happened, a little. For the first time, democracy is the most common system of government. All this promotes the flow of ideas, including the idea that a singularity is possible.
On the other hand, global income inequality (what Branko Milanovic calls "Concept 3" inequality) is now the highest in history according to standard measures. Surveying the five areas of cybersecurity, migration, pandemics, finance, and climate change, Professor Ian Goldin of Oxford argues that "global governance is failing."
Selling action on global warming has been hard enough, and that's a measurable physical change, not just an informed guess. It especially worries me that we might get no advance warning of artificial intelligence. Our only notice would be either weak clues like Hanson's model, or the actual effects of the technology. Things might seem fine, and then...
So maybe I'm being pessimistic, but if a singularity were possible I wouldn't bet on us averting it (although actually, I would, since if I lost I wouldn't have to pay).
Still, if we did manage to take united global action, there's reason to think it would work. I had previously been pessimistic about the efficacy of action, because events leading up to a singularity would be wild enough to defeat any precautionary measures. But I now think I was wrong, because those events never have to happen. Controlling a singularity is impossible, but preventing one may not be.
In other words, picture a boat nearing a waterfall. Yes, as the boat nears the falls it becomes harder to successfully row to shore, perpendicular to the current, before being dragged over. But that doesn’t mean reaching shore is impossible. Similarly, humanity could avoid a singularity by carefully limiting scientific research and development. The sooner we start rowing, the better our chances.
Stopping a singularity might be considered a speculative cause, or more bluntly, tilting at windmills. This all depends, however, on whether you feel Hanson’s model of history justifies predicting a singularity. If you do, it’s not speculative but similar to avoiding global warming or nuclear war.
The problem of "Pascal's mugging" arises when an event of astronomically great importance might happen with a vanishingly small probability. But if you accept Hanson's model, the high-impact event has a substantial probability. So this would not be a Pascal's mugging, or another strange case like the lottery with no odds that Peter Hurford mentions.
I think Hanson's model of history is impressive, but it has problems: a better theory might recognize two types of singularities, a "greater" type that has only happened once, when humans evolved, and a "lesser" type that includes the agricultural and industrial revolutions. Hanson's projected doubling times for the economy are so short that the next singularity would have to be "greater," so the chance of any new singularity depends entirely on how well the analogy between lesser and greater types holds. But I rate the analogy highly enough that, together with Moore's Law, it leads me to take a singularity in my lifetime seriously.
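To make the doubling-time point concrete, here is a minimal sketch of the extrapolation I have in mind. The specific figures are not from this post: they are approximate values commonly quoted from Hanson's paper "Long-Term Growth As A Sequence of Exponential Modes," so treat the numbers, and the whole calculation, as illustrative.

```python
# A rough sketch of the doubling-time extrapolation. The figures below
# are approximate values commonly quoted from Hanson's "Long-Term Growth
# As A Sequence of Exponential Modes"; treat them as illustrative.

doubling_times_years = {
    "hunting": 224_000,   # pre-farming human economy
    "farming": 909,       # after the agricultural revolution
    "industry": 15,       # after the industrial revolution
}

times = list(doubling_times_years.values())

# Each mode doubled the economy far faster than the last.
speedups = [earlier / later for earlier, later in zip(times, times[1:])]
print([round(s, 1) for s in speedups])  # roughly [246.4, 60.6]

# Projecting a comparable speed-up forward implies the next mode would
# double the economy in weeks to months.
for k in speedups:
    next_doubling_days = times[-1] / k * 365
    print(f"speed-up x{k:.0f}: next doubling ~ {next_doubling_days:.0f} days")
```

An economy doubling in weeks or months rather than years would be a jump of a different order than farming or industry, which is why, on this reading, the next singularity would have to be of the "greater" type.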
All this implies, for the moment at least, restoring causes like international cooperation and "singularity awareness" as my top altruistic goals. I'm acting on that by choosing to teach in China this year over other comparable alternatives, a kind of charity at the margin, at least if you agree that teaching English, the world's second language, helps build communication and global cooperation.