Wednesday, February 12, 2014

The End

This will be the last post on this blog for the foreseeable future. I've decided it's best that I concentrate on improving my own life rather than addressing politics.

The last couple of posts on this blog have been about whether we should take action to prevent a singularity (or new "growth mode" as defined by Robin Hanson).

I first said that internal details of a singularity would be unforeseeable, so there was no way to measure the marginal usefulness of actions taken to influence it. This implies doing nothing.

In a later post I decided, per contra, that you could prevent a singularity even if you couldn't control it. So preventive action seemed justified.

My feeling now is that useful action is probably impossible. The flaw in my last post is that stopping a singularity would have many side effects, and those effects would likely greatly limit humanity's potential. So you could prevent the singularity, but doing so resembles playing blackjack: it's not clear when to stop drawing cards, that is, when to stop discovering new technologies. You want to avoid losing human nature without stopping the draw so early that you stunt our growth.

I also think there are two distinct regimes of "complexity" growth in Earth's history--the era before human intelligence and the era after. The current regime hitting physical limits doesn't imply a new regime taking over. It could just as well mean a plateau or collapse.

There may be no endeavor now that "can reasonably be expected to be better than anything else available." Much may depend on your particular situation. At this point, if I were to pick political causes to campaign for, they might be (1) making sure that actual biotechnology development reflects public opinion and (2) furthering global cooperation. But then, I might also avoid taking action on any trend I lacked a model for. Overall, the field is open.

Friday, September 13, 2013

The Singularity Returns

Will Sawin says that Robin Hanson's singularity scenario might doom mankind. (This Will Sawin?) His comment applies to any singularity, not just Hanson's.

Let's assume Hanson's model of history works, that future singularities are possible, and that Sawin is right. Can we escape an unpleasant fate? 

We might compare the difference between today and a singularity with the difference between Australopithecus, or at least ancient Rome, and today. The analogous question would then be: could early hominids have stopped human activity from making chimpanzees an endangered species millions of years later? Or could Han Dynasty rulers at the time of Christ have prevented Western hegemony over China in the colonial era?

Well, no. They weren’t smart enough to foresee the future. Even if they had known, they might not have cared. Even if they had cared, they couldn’t have influenced other hominids or ancient Europe and stopped them from evolving or industrializing.

But today, if Hanson, Kurzweil, Vinge and others are right, at least some people are smart enough to see the future. New singularities would also present a personal, mortal threat to everyone alive; I think we might care. And unlike past eras, the Earth’s entire leadership now routinely talks at G20 meetings.  

On the face of it then, calling off the robot party may be possible. 

Will this shindig actually be cancelled, though? On a positive note, concepts like the singularity and transhumanism are much better known than they were twenty years ago.

The world is more interconnected: international trade is up. Nuclear disarmament has happened a little. And democracy is the most common system of government for the first time, which in turn promotes the flow of ideas--ideas like a singularity being possible.

On the other hand, global income inequality--what Branko Milanovic calls "Concept 3" inequality, measured across all the world's individuals--is now the highest in history according to standard measures. Based on the five areas of cybersecurity, migration, pandemics, finance, and climate change, Professor Ian Goldin of Oxford claims that "global governance is failing." 

Selling action on global warming has been hard enough, and that's a measurable physical change, not just an informed guess. It's especially worrying to me that we might not have advance warning of artificial intelligence: our only notice would be either weak clues like Hanson's model or the actual effects of the technology. Things might seem fine, and then...

So, maybe I'm being pessimistic, but if a singularity were possible I wouldn't bet on us averting it (although actually, I would, since if I lost I wouldn't have to pay). 

Still, if we did manage to take united global action, there’s reason to think it would work. I had previously been pessimistic on the efficacy of action, because events leading up to a singularity would be wild enough to defeat any precautionary measures. But I now think I was wrong, because those events never have to happen. Controlling a singularity is impossible, but preventing one may not be. 

In other words, picture a boat nearing a waterfall. Yes, as the boat nears the falls it becomes harder to successfully row to shore, perpendicular to the current, before being dragged over. But that doesn’t mean reaching shore is impossible. Similarly, humanity could avoid a singularity by carefully limiting scientific research and development. The sooner we start rowing, the better our chances.

Stopping a singularity might be considered a speculative cause, or more bluntly, tilting at windmills. This all depends, however, on whether you feel Hanson’s model of history justifies predicting a singularity. If you do, it’s not speculative but similar to avoiding global warming or nuclear war.

The problem of "Pascal’s mugging" occurs when an event of astronomically great importance might happen with a vanishingly small probability. But if you accept Hanson’s model, the high-impact event occurs with a large chance. So this would not be a Pascal’s mugging or another strange case, like the lottery with no odds that Peter Hurford mentions. 

I think Hanson’s model of history is impressive, but it has problems: a better theory might recognize two types of singularities, a "greater" type that has only happened once, when humans evolved, and a "lesser" type that includes the agricultural and industrial revolutions. Hanson's projected doubling times for the economy mean the next singularity would have to be "greater," so the chance of any new singularity depends entirely on how good the analogy is between the lesser and greater types. But I rate the analogy highly enough that, together with Moore's Law, it leads me to take a singularity in my lifetime seriously.
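The arithmetic behind the model is easy to sketch. The doubling times below are rough figures of the kind Hanson reports for past growth modes, and the 100x speed-up for a hypothetical next mode is purely an illustrative assumption of mine, not his exact claim:

```python
# Toy extrapolation of Robin Hanson's "growth modes" idea.
# Approximate economy doubling times per mode (rough, illustrative figures).
modes = [
    ("foraging", 224_000),  # years per doubling
    ("farming", 909),
    ("industry", 15),
]

# Each historical transition shortened the doubling time by a large factor.
ratios = [modes[i][1] / modes[i + 1][1] for i in range(len(modes) - 1)]
print("speed-up factors between modes:", [round(r) for r in ratios])

# If a next mode sped things up by a comparable factor (say ~100x),
# the economy would double in a couple of months.
next_doubling_years = modes[-1][1] / 100
print(f"hypothetical next-mode doubling time: "
      f"{next_doubling_years * 12:.1f} months")
```

The point of the exercise is just that the historical speed-up factors are so large that any "next mode," if the pattern held, would look like a discontinuity from inside the current one.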

All this implies, for the moment at least, restoring causes like international cooperation and "singularity awareness" to the top of my altruistic goals. I'm acting on that by choosing to teach in China this year over comparable alternatives—a kind of charity at the margin, at least if you agree that teaching English, the world's second language, helps build communication and global cooperation. 

Friday, August 16, 2013

Radically Uncertain

This was written in response to "Why I'm Skeptical About Unproven Causes (And You Should Be Too)" over at the Effective Altruism blog.

I now wonder if we can usefully predict the long-term future at all. The trouble is that deciding which long-term problem is most important at the margin depends on predicting not just problems but failures of solutions, and such failures depend on social changes and how many people work on problems. Especially in scenarios with a technological singularity, I find these daunting to guess. 

For example, I recently had a thought related to the first "speculative" cause that post mentions, cultural exchange programs with China: namely, that international cooperation would be very important in a singularity, because with awesome technological powers a world war might be terminal, and even more mundane competition might send civilization down a dark road.

So we should promote global cooperation, right? The problem is that if cooperation is vital, then in the long run people may realize this and adapt. Maybe rising global trade and cultural integration (perhaps enabled by machine translation) will inevitably produce a suitably coordinated outcome anyway.

Or why even assume there is any chance of such coordination or a decent future? Maybe we're doomed. How can we claim anything given such vast changes?

Robin Hanson says he thinks world government is unlikely because we live in "a very competitive world." Nick Bostrom's opinion is that a "singleton" is "more likely than not." Whom should I trust? In a singularity we have an event comparable to the rise of humans. Has political organization changed much in human history? Yes. A lot.

Any exceptions to long-term ignorance must include not only projecting future trends but a causal model for why solutions will fail. The model might be, say, underproduction of a global public good under our current world order with global warming, or rivalries leading to unexpected wars in the case of nuclear conflict. But social models only seem reliable over periods that lack big broad technological changes, because technology affects society. So long-term predictions of catastrophic risk only seem possible if technology stagnates.

When the risk is due to a hypothetical revolutionary future technology, prediction would require social changes to be somehow put on hold until after a threshold is crossed, like a waiting avalanche. This seems unlikely--as R. R. Nelson says, technological change is a "collective, cumulative, evolutionary process."

The most plausible case I happen to know for a quick, huge leap in technology with forecastable effects is Robin Hanson's ems scenario, which would take advantage of the complexity stored in the human brain over eons to leapfrog the usual techno-development process. Even this, though, seems pretty unlikely.

The real-world analogue of Hanson's theory is the Human Brain Project headed by Henry Markram, which is supposed to produce fully-fledged emulated human minds in ten years. I think Markram has less than a 1% chance of success--maybe no chance at all. Still, the fact that this project got a billion euros calls for analyzing what in the world EU planners were thinking; figuring that out is an untested cause I'd support.

Thursday, June 27, 2013

NPR Show on Genetic Selection

I just listened to a radio show on NPR about the BGI-Shenzhen Cognitive Genomics project to understand intelligence. It featured both Steve Hsu and Lee Silver, who differ on the feasibility of transhumanist technologies: Hsu is more bullish on the short- to medium-run technical feasibility of genetic enhancement.

Hsu says that he expects genetic selection for intelligence within ten years. I was hoping to get some clarification from Silver on why exactly he doubts Hsu's time frame. Instead, Silver pushed the strange argument that a selected embryo would not contain everything parents might want. (This is around 22:00 in the show, give or take a few minutes.) But that is beside the point. Hsu is absolutely correct to highlight the significance of allowing parents to choose among, say, 20 potential genetic codes for their children. Siblings born in the same family can vary widely in IQ, personality, and other traits. This capability wouldn't guarantee brilliance, but it would still have great effects.
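A quick Monte Carlo makes the point concrete. Suppose, purely for illustration, that a trait varies among an embryo cohort the way it does among siblings, roughly normally with a standard deviation of about 11 IQ points around the parental average (both the normality and the 11-point figure are assumptions of mine, and the sketch further assumes a perfect genetic predictor, which no one has):

```python
import random

random.seed(0)

SD = 11          # assumed sibling-like standard deviation, in IQ points
N_EMBRYOS = 20   # candidates the parents choose among
TRIALS = 100_000

# Average advantage of picking the best of N normal draws
# over a single random draw (i.e., over not selecting at all).
total = 0.0
for _ in range(TRIALS):
    cohort = [random.gauss(0, SD) for _ in range(N_EMBRYOS)]
    total += max(cohort)
avg_gain = total / TRIALS

print(f"average best-of-{N_EMBRYOS} gain: {avg_gain:.1f} IQ points")
```

The expected maximum of 20 standard normal draws is about 1.87 standard deviations, so under these assumptions selection yields a gain on the order of 20 points per generation. Real predictors capture only part of the variance, shrinking the gain proportionally, but the order of magnitude is why "best of 20" is not a trivial capability.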

However, in a reply in the comments of a blog post about two months after this show, Hsu describes genetic selection for IQ as "highly speculative" and mentions a time frame of 20 instead of 10 years. He also says he is entering a "quiet phase" because of "too much wacky coverage of our project." What prompted this change?

Anyway, the radio show was informative but one-sided. In fact, guest Nita Farahany and Silver teamed up in an earlier debate to argue in favor of legal human genetic modification! Oh where, oh where are the informed advocates for a human future?

Saturday, June 15, 2013

Realist Bioconservatism

I recently got into a discussion about human cloning at the Mary Meets Dolly blog run by Rebecca Taylor. She's doing a fine job over there discussing biotechnology from a Catholic pro-human perspective. As a Catholic myself it's certainly refreshing to see people discussing these issues from some framework other than utilitarianism. So it's sort of odd that, regarding how to go about banning or proactively stopping the development of various dangerous biotechnologies, I often find myself more aligned with left-leaning organizations like the Center for Genetics and Society than with religious folks like Mrs. Taylor or (no bio-) conservatives.

The question sparking this observation is whether we should advocate only a complete ban on both reproductive and therapeutic cloning, both of which Catholic teaching holds to be sinful, or be willing to settle for a ban on reproductive cloning alone (the kind that produces cloned infants). From a Catholic perspective, banning both is surely better. However, situations exist in which a complete ban is not politically possible, and one very important example was at the United Nations. When the UN tried to pass some kind of cloning ban in 2004 and 2005, a minority of countries tried to ram through a ban on both the reproductive and therapeutic types despite the objections of the majority, and the Bush administration led the charge. The result was a failure in which nothing at all was passed, and the issue has been dead ever since.

I think the fate of this attempt is no small matter; it may be the best effort yet made toward global regulation of dangerous human biotechnologies. There are more important technologies than cloning, but this actually managed to stir up enough interest for legislative action, even if that action ultimately flopped.

Any kind of successful effort to regulate technologies dangerous to humankind clearly has to be worldwide in the long run. And the world is, as a matter of fact, a very diverse place. Christianity may be the largest single religion but less than a third of the world's people are Christian. That means at least the other two thirds are not going to subscribe to policy prescriptions that make sense only from a Christian worldview. So there are only a few alternatives for bioconservative Christians. Either we work with people in the rest of the world to construct a unified opposition to transhumanism, or else concede that "resistance is futile." Or perhaps we decide that most of the world converting to Christianity is necessary for the pro-human cause's success. Count me out on that: this fight will be difficult enough as it is and I for one have no intention of tying it to a global religious crusade.

My main interest in the cloning issue is not for its own sake. Cloning is one of the least dangerous in the class of harmful biotechnologies that coming decades might bring; it would copy a person's genes, but wouldn't necessarily create superior beings who might oppress the rest of us. I should also be clear that the current church teaching that embryos are fully-fledged persons strikes me as puzzling, and I note that the church has not always maintained it. That said, even if you accept this view, or hold that therapeutic cloning is doubly immoral compared with reproductive cloning, a reproductive cloning ban could still be an important first step toward a comprehensive technology treaty addressing still worse dangers. In other words, you might see therapeutic cloning as doubly immoral, but you should consider the creation of superior breeds of humans ten times as immoral. Genetic selection for enhancement could plausibly start in the next ten years and would be easier to prevent if we had some precedent in place beforehand. In my opinion this is the strongest argument for restarting a drive to ban reproductive cloning at the UN.

[Edit on June 22: This post originally claimed that there was little chance of banning therapeutic cloning in the US at the federal level either. That was going too far, because the US House actually voted 241-155 for such a ban in 2003. Therefore I have edited the post to pertain only to the UN.]