When I was a kid, I read a book called Future Shock.

At first, it quite impressed me. All those interesting things that were going to happen in the future! Instead of raising their own kids, people were going to hire professional child-raisers and be more like uncles. We were going to have paper clothes that people wore once and then threw out. There would be cities underwater!

Then I noticed that, well, many of these predictions were said to happen by the year 2000 and… it was observably the year 2000 and there were not any underwater cities. No one was even making any movements towards underwater cities. As far as I could tell, no one wanted to live underwater at all.

I think this exercise left me with a lifelong suspicion of futurologists.

Predicting the future is really, really hard.

Most rationalists have probably heard of Tetlock’s studies. For the two people who followed my Tumblr for Disney movie liveblogging and are very confused: Tetlock is a researcher who, over a period of twenty years, asked a variety of experts in many fields, with opinions ranging from the far left to the far right, to make 28,000 predictions. His finding was that, as a whole, you could probably get results about as accurate by consulting a psychic or a dart-throwing monkey, both of which are substantially cheaper than Harvard professors.

This is a somewhat depressing result.

And, yes, it’s still true: look at how few people predicted Russia’s invasion of Ukraine.

Given Tetlock’s research, my prior about anyone who predicts the future is that I should change my behavior based on their predictions to the exact degree that I would change my behavior based on the predictions of a dart-throwing monkey, i.e. not at all. The only thing that will change my mind is a strong track record of accurate predictions, unambiguously better than chance. And the larger the change you demand, the stronger the track record of accurate predictions should be.

Now, when I get to this point in my “why I am a singularity agnostic” explanation, a rationalist immediately pops up to explain to me that it’s not a prediction, it’s an antiprediction. An antiprediction, they say, is not really making a prediction. For instance, it is an antiprediction to say that alien life would probably be far ahead of us or far behind us: there are lots of possible ways that a species could be ahead of us or behind us, and only a few ways that it could be at approximately our technological level, so we get a counterintuitive prediction just by refusing to privilege the dramatically interesting “approximately at our level of technology” hypothesis. Similarly, they argue, believing in the Singularity is an antiprediction: there are lots of ways that things can be way, way smarter than us, and only a few ways that they couldn’t be.
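
To make the counting intuition concrete, here is a toy sketch of my own (the numbers are invented purely for illustration): if two civilizations’ technological ages were scattered independently across a million-year span, the odds of them landing within a few centuries of each other would be tiny.

```python
# Toy sketch of the antiprediction's counting argument (illustrative numbers only).
# Assumption: each civilization's "technological age" is an independent uniform
# draw from a million-year span.
import random

TRIALS = 100_000
SPAN_YEARS = 1_000_000   # assumed spread of possible technological ages
WINDOW_YEARS = 500       # what counts as "approximately our level"

close_calls = sum(
    abs(random.uniform(0, SPAN_YEARS) - random.uniform(0, SPAN_YEARS)) < WINDOW_YEARS
    for _ in range(TRIALS)
)

print(f"Fraction at roughly the same level: {close_calls / TRIALS:.3%}")
# Expect roughly 2 * 500 / 1,000,000 ≈ 0.1%: "approximately our level" is a
# sliver of the possibilities, so "far ahead or far behind" wins by default.
```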

However, I would like to point out that antipredictions are, despite the name, not actually how you don’t make a prediction. How you don’t make a prediction is like this: “I am not predicting anything.” An antiprediction is a prediction. And thus the same rules apply to antipredictions that apply to other predictions: you probably suck at them and I require good evidence to believe that you don’t.

Usually, people then proceed to be like “look, here are a bunch of reasonable explanations about why the Singularity makes sense.”

I don’t care why the Singularity makes sense.

Marxists have a bunch of reasonable arguments about why capitalism will wither away any day now. Libertarians have a bunch of reasonable arguments about why the social safety net is going to collapse and only capitalism will remain. Alvin Toffler had a bunch of reasonable arguments about why we would have underwater cities. Everyone in the world has reasonable arguments. You think people become Harvard professors without being able to come up with explanations that sound hella plausible?

But they are still wrong.

I am in a position of epistemic learned helplessness here. Reasonable arguments do not allow me to distinguish between people who can accurately predict the future and those who can’t. Maybe, in the future, when the Good Judgment Project has outlined its recommendations, I will be able to go “ah, this is following best practices, it will probably be correct.” But until then, the only reliable signal is that the person has predicted well in the past.

Many Singulatarians want me to make big changes in my life: I donate a tenth of my income to charity, and I am somewhat concerned about what happens to a tenth of an income. However, MIRI does not have a track record of correct predictions. It also, as far as I know, doesn’t have a track record of incorrect predictions: it has one prediction (the intelligence superexplosion). I think keeping yourself to one prediction on a subject you know a lot about is probably a good method of increasing your accuracy; however, it does leave me with no evidence about whether they are good at predicting things or not, and I am left with my prior that they are about as good at predicting things as a dart-throwing monkey.

And this is where my post was when I made my fatal mistake of showing it to my friend Pedro, who pointed out an extremely obvious thing I had somehow missed: people make accurate predictions about what technology is going to happen all the time. Specifically, the people who fund technological development.

Of course, this doesn’t change my overall point. Technological development is mostly not funded by individuals the way MIRI is; it is funded by corporations and the government. The individual donations to research that do exist are mostly to do something incredibly vague like “kick cancer”; experts figure out how to actually disburse the money. There’s no reason to think artificial intelligence is any different. The layperson, unless they happen to find themself fascinated by a particular area, can more-or-less treat it as a black box from which iPhones come out.

Much of the government’s research budget is spent on the military, that is, on various more efficient methods of killing people. One hundred percent of a corporation’s research budget is spent on directly or indirectly maximizing profit. Fortunately, a lot of the time, both of these happen to go along with human flourishing: profit is often maximized by giving people things that they want; ARPANET improves the efficiency of killing both people and time.

Unfortunately, when this fails, it fails spectacularly. See: nukes.

Artificial general intelligence is, predictably, a place where both the profit motive and the mass death motive will fail spectacularly, probably more spectacularly than nukes. Nuclear bombs are an x-risk, but at least they are not agents scheming to leave their Nuke Boxes. If the human race is made extinct by nuclear war, we will be comforted in our last breaths by the knowledge that we brought it on ourselves.

Imagine, in the 1930s, a Nuclear Intelligence Research Institute. A wise person, foreseeing that the invention of the nuclear bomb would be an existential risk, chooses to research the bomb before it can fall into the wrong hands. But once NIRI invents the bomb, what are they going to do with it? They can blow up anyone else who makes nukes, but they could do that much more simply by lobbying the government to adopt a “blow up anyone else who makes nukes” policy. Admittedly, that leaves them vulnerable to the government being made of people less wise and public-spirited than they are, but NIRI has that problem too: even if the original founders are all trustworthy, at some point they’re going to die and leave their nukes to less trustworthy heirs.

Nukes and ARPANET and so on all basically do the same thing regardless of who makes them: nukes blow people up, ARPANET facilitates exchanges of important logistical information and cat pictures, etc. So independent attempts to develop nukes don’t work that well: there’s no way to design a nuclear bomb that isn’t an existential risk and thereby preempt the Manhattan Project from designing one that is. Artificial general intelligence, however, does massively different things based on who programs it: minds can be programmed with a wide variety of value sets, and only a relatively small number of value sets wind up preserving conditions humans find important, such as the continued existence of the human race.

AGI presents a perhaps unique situation: the chance for the public to fund the creation of a technology by correctly incentivized people that will completely eliminate the risk of the creation of a destructive technology by poorly incentivized people.

However, I am still about as good at predicting technological development as a dart-throwing monkey. I am not a professional Technology Funder Person, I have no idea how they do it, I don’t know anything about AI. I have no idea whether AGI is going to take five years or a hundred years or a thousand; I don’t know if it is even possible; I don’t know if the research MIRI is doing is going to lead to better AIs; I know nothing, Jon Snow!

And something between “ten percent of my income” and “literally trillions of lives” is relying on me figuring out how to know something, Jon Snow.

The one glimmer of hope here is that, however poorly aligned their incentives are, DARPA and Silicon Valley don’t want the entire world to be destroyed [citation needed]. So it seems possible the solution is not independent funding, but getting the entire AGI community on board with Friendliness as a project. At that point, I can assume that they will deal with it and I can return to thinking of technology funding as a black box from which iPhones and God-AIs come out. But I am unclear how likely this solution is to be effective. 

In short, I am still confused, but on a higher level and about more important things, which is pretty much where you want to be at the end of an essay.