Sociology degrees are not useful for much, but one of the things a good sociology education gives you is a healthy disrespect for sociology.

Bright-eyed and bushy-tailed, the young sociology student takes her first theory class. The first week, she reads Smith, who presents a plausible and insightful argument that the invisible hand of the market causes goods to be distributed in the way that best benefits everyone. The second week, she reads Marx, who presents a plausible and insightful argument that capitalism is a product of bourgeois ownership of the means of production, which alienates the proletariat from their labor. The third week, she reads Durkheim, who presents a plausible and insightful argument that industrialization leads to anomie, a condition in which society provides little moral guidance to individuals. The fourth week, she reads Weber, who presents a plausible and insightful argument that the capitalist spirit originates in a Calvinist urge to find signs of whether or not one is a member of the elect. The fifth week, she reads Mills, who presents a plausible and insightful argument that the ordinary citizen is a powerless tool in the hands of corporate, military, and political leaders who control society for their own ends.

At this point, if all goes well, she storms into her professor’s office and says “okay, I can kind of harmonize Smith and Weber, or Marx and Durkheim, but mostly these authors not only don’t agree with each other, they don’t even seem to be describing the same thing! They are at utter disjoint! None of them even agree about what categories we should be using! And yet when I read Weber, he makes sense, and when I read Marx, he makes sense, and when I read Smith, he makes sense! HOW CAN THIS BE? AAAAAAAAAAAAAA!”, and a sociologist is born.

Before I get into the meat of this post, I would like to make it clear that I am not criticizing Scott, Eliezer, or lesser writers of amateur sociology. Amateur sociology is fun. I’ve indulged in it myself quite a lot. And these are posts on their respective blogs, not peer-reviewed journal articles. Bullshitting about your grand theories of how society works is exactly what the essay form is designed for.

This post, in fact, is blaming everyone else.

Tossing out plausible hypotheses about how the world might work, with appropriate caveats, is perfectly reasonable behavior which advances the intellectual discourse. Deciding that these hypotheses are true because a smart charismatic member of your ingroup wrote them is not. And, frankly, if you identify as a rationalist, it is super-embarrassing.

This goes double if you’ve changed your behavior based on it.

I have often seen the post Evaporative Cooling of Group Beliefs discussed by rationalists when we are talking about how cults form. In many cases, this is discussed as if it is settled sociology: evaporative cooling of group beliefs is how cults form. Done.

Eliezer provides a cute analogy with the concept of Bose-Einstein condensates; while this is adorable, analogies to physics are not typically considered evidence. He references the classic “When Prophecy Fails”, which actually concludes that cults are a product of cognitive dissonance and not of evaporative cooling of group beliefs at all. While reinterpreting other people’s data is a perfectly reasonable exercise, you must have some reason to believe that your hypothesis is more likely than theirs; otherwise, we might as well go with the conclusions of the people who actually did the ethnography in the first place.

Eliezer also provides an uncited description of the dynamics of the Nathaniel Branden/Ayn Rand split. I consulted my friend Shea Levy, a dissident Objectivist with contacts among other dissident Objectivists, who claims this description is inaccurate: Objectivists were actually evenly split between Branden and Rand, mostly based on personal loyalties, not based on who believed Rand most fervently. While it is certainly possible that Shea is mistaken, Eliezer’s lack of a citation makes this less than credible.

Finally, Eliezer presents an anecdote about a mailing list he was on. Forgive me if I do not find this terribly compelling.

Similarly, I have also seen people cite the Slate Star Codex post The Toxoplasma of Rage. Some Tumblr users have even adopted the habit of tagging “toxoplasma cw”.

Once again, much of the text of the essay is devoted to a cute, vaguely scientific analogy, this time to the disease toxoplasmosis. However, unlike Eliezer, Scott does give several examples of situations in which his model holds true: an instance of PETA being obnoxious; the Michael Brown shooting; the University of Virginia rape case; Tumblr’s reblogging dynamics; and which Slate Star Codex posts get the most hits.

With the exception of the last item, all of these occurred within a few weeks of each other: indeed, these are, as best I can tell, the articles that happened to be popular on Scott Alexander’s Tumblr dash one week. Like… instead of this whole complicated theory, perhaps we should just adopt the null hypothesis that it was a really bad week for sympathetic people who were telling the truth to be the figureheads of social movements, and that Tumblr is a totally nonfunctioning blogging platform. I mean, the last one is practically an axiom at this point.

Consider how much effort Scott puts into his posts about SSRIs or Alcoholics Anonymous. Imagine how thoroughly he would rip apart a study with only three participants, deliberately selected because their response to a drug fit the narrative the author preferred to push. Now ask yourself why that is not sufficient evidence about whether a chemical influences people’s brains, and yet it is sufficient evidence for grand historical theories.
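To put a rough number on the selection problem (a toy calculation with invented rates, not anything from Scott’s posts): even if only a small fraction of news cycles genuinely fit a theory, a few weeks of news will almost always supply three confirming examples.

```python
# Toy illustration of cherry-picking: suppose only 10% of news cycles
# actually fit a given theory, and an essayist skims 50 of them.
# How likely is it that at least three confirming examples turn up anyway?
from scipy.stats import binom

n_stories, fit_rate = 50, 0.10
p_at_least_three = binom.sf(2, n_stories, fit_rate)  # P(X >= 3)
print(f"{p_at_least_three:.2f}")  # ~0.89: near-certain, even for a weak theory
```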

The next post I will address is Meditations on Moloch.

Meditations on Moloch refers to several thought experiments, mostly from economics and game theory: the prisoner’s dilemma; the tragedy of the commons; dollar auctions. Now, it is incontrovertible that these models accurately describe some situations which happen in the real world. But that is a long way from saying that they cause all the problems– or even a significant number of problems– in the world. For instance, humans may be very good at coming to utility-maximizing solutions in those situations. (The actual commons that gave the tragedy of the commons its name did not become overgrazed, but instead was well-managed for centuries.) Or the solution that is suboptimal for the participants may be optimal for everybody else. (The classic example, of course, is corporations fixing prices, which can be modeled as a prisoner’s dilemma/tragedy of the commons; see the sketch below.) Or the models might describe some problems very well, but not the most important, urgent, or common problems.
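To make the price-fixing example concrete, here is a minimal sketch (the payoff numbers are invented for illustration; nothing here is from Scott’s essay) of a prisoner’s dilemma whose “bad” equilibrium is good for everyone outside the game:

```python
# Price-fixing as a prisoner's dilemma: payoffs to (firm A, firm B,
# consumers) for each pair of strategies. Numbers are illustrative.
payoffs = {
    ("collude", "collude"):   (10, 10, -20),  # cartel prices: firms win, consumers lose
    ("collude", "undercut"):  (0, 15, -5),
    ("undercut", "collude"):  (15, 0, -5),
    ("undercut", "undercut"): (5, 5, 0),      # competitive prices
}

# Undercutting strictly dominates colluding for each firm...
for other in ("collude", "undercut"):
    assert payoffs[("undercut", other)][0] > payoffs[("collude", other)][0]

# ...so the equilibrium is (undercut, undercut): worse for the firms than
# mutual collusion, but the best of the four outcomes for consumers.
```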

Scott gives a lot more examples in Meditations on Moloch than he does in Toxoplasma of Rage, so I’m only going to examine one of them, although I believe my argument applies to all his examples. He says:

13. Government corruption. I don’t know of anyone who really thinks, in a principled way, that corporate welfare is a good idea. But the government still manages to spend somewhere around (depending on how you calculate it) $100 billion dollars a year on it – which for example is three times the amount they spend on health care for the needy. Everyone familiar with the problem has come up with the same easy solution: stop giving so much corporate welfare. Why doesn’t it happen?

Government [officials] are competing against one another to get elected or promoted. And suppose part of optimizing for electability is optimizing campaign donations from corporations – or maybe it isn’t, but officials think it is. Officials who try to mess with corporate welfare may lose the support of corporations and be outcompeted by officials who promise to keep it intact.

So although from a god’s-eye-view everyone knows that eliminating corporate welfare is the best solution, each individual official’s personal incentives push her to maintain it.

Scott doesn’t provide a source on the number, but the Cato Institute has a paper that comes to the conclusion that $100 billion is spent on corporate welfare, so I’m going to assume he is using their calculations. The Cato Institute’s list of corporate welfare includes subsidies for the development of alternate energy sources; applied R&D conducted by groups such as NASA, the NIH, the NSF, and the Defense Department; subsidies for farmers; and support for minority-owned businesses. It seems to me that there are quite a lot of people who think, in a principled way, that these programs are a good idea.

But let’s grant to Scott that there is a bunch of corporate welfare that no reasonable person would support. Are we certain his explanation is correct? Perhaps the real mechanism is that congresspeople don’t understand all the businesses they’re supposed to be regulating, and so rely on help from lobbyists. Perhaps it’s because lobbyists are nice, charming people, and people naturally want to do favors for – and believe – people they like. Perhaps it is some other mechanism. Scott puts no effort into discussing or disproving alternate hypotheses.

Furthermore, he doesn’t examine empirical data. Some countries are autocracies which don’t have elections: do they have lower rates of corporate welfare? Some countries have publicly funded elections: do they have lower rates of corporate welfare? Either way, are there alternate explanations?

Of course, I’m being kind of unreasonable here. Scott should not be expected to write a book about lobbying reform every time he wants to write an essay, particularly since he’d also have to write books about capitalism, the rise of agriculture, and the reform of both education and scientific research. The man has a day job.

But my point is that unless someone puts in that work, we can’t say that the hypothesis is true. Scott is saying, “I hypothesize that a lot of problems are caused by runaway optimization processes, and I hypothesize that one of these problems is corporate welfare.” That is importantly different from “one of the biggest problems in the world is runaway optimization processes, such as that which causes corporate welfare.”

Now, you might say to me “Ozy, I don’t believe these essays because of the evidence they cite! Those are just illustrative examples! I believe them because they explain observations I’ve made in my daily life.” That seems superficially reasonable. However, we’re running into the same problem the young sociology student did at the beginning of this essay: something sounding plausible doesn’t mean a damn thing. Imagine if instead of Evaporative Cooling of Group Beliefs, Eliezer concluded:

When something happens that disproves the cult’s beliefs, all the doubts of moderate members come to a head. Once, they could think ‘well, maybe the cult leader is wrong about aesthetics, but they’re right about everything else, so it’s okay’; now, it is starkly obvious to them that they must choose between staying and leaving. But their friends are in the cult; they may have been isolated from people who aren’t cult members, or been lonely and disconnected to begin with. Humans are social animals, and leaving your group is terrifying. For this reason, after they receive evidence against the cult, moderate members tend to drop their doubts– which now, it is clear, entail leaving– and become hardcore members.

Imagine if instead of Toxoplasma of Rage, Scott had written an essay that could be summarized like this:

Because of confirmation bias, people tend to signal-boost stories that fit in with their preconceptions. Reading anecdotes that fit your model of the world is comforting; reading something that might disprove it is scary and leads to cognitive dissonance. So the feminist reads endless stories of oppressive white men, while the MRA reads stories of feminists creating oppressive laws that screw over men. And since writers know that they have to cater to their audiences, stories that don’t fit a convenient model get buried. The Internet has only made this worse, because we can get into tiny filter-bubbles. In the old days, we watched the conservative TV news network, and the libertarians had to watch stories that fit the evangelicals’ model of the world. Nowadays, even the ancaps and the minarchists get their news from different websites.

Imagine if instead of Meditations on Moloch, there was this essay:

Human beings evolved to know fewer than two hundred people. We have scope insensitivity: we didn’t evolve to understand the difference between two hundred humans and two hundred thousand. We don’t help the global poor, because in the environment of evolutionary adaptedness we would have no way of helping people whose faces we couldn’t see. We feel scared and threatened by ultimately harmless online dogpiling, because we evolved to know that if you were hated by two hundred people you would die. Humans, empirically, are quite good at sorting out tragedies of the commons within small groups, via compassion, guilt, social isolation, violence, etc. It’s only when we need to coordinate millions of people that we have coordination failures. Hell, even the famous nervousness of shy male nerds is an instance of this problem: their emotions haven’t caught up to the fact that if they fuck up with one girl, she’s not going to tell Literally Every Girl In The Whole Entire World.

I flatter myself that all of these are prima facie as plausible as their respective opposites. But they make completely different predictions! Do events that disprove the cult leader’s beliefs draw moderate members closer into the fold, or drive them away? Do people seek out easy, clear-cut stories because of confirmation bias, or thorny, complicated stories because of controversy? Are runaway optimization processes a big deal, or merely a consequence of humans’ natural ways of handling these problems failing to scale?

You can’t tell, without more evidence. And evidence– good, strong evidence that leads one to believe something more complicated than “I don’t know”– is what those posts conspicuously fail to provide.

I picked three prominent posts, but there are a lot more examples: pretty much anything that classifies itself as “insight porn”, for instance, is an example of the amateur sociology which I am critiquing. I think there are two sources of this toxicity: the love of meta and the wide gap between the level of knowledge required to have an opinion on sociology and the level of knowledge required to know things about sociology.

(Yes, I am fully aware of the irony of doing speculative amateur sociology in my post about how you shouldn’t do speculative amateur sociology.)

First, consider physics. The average person does not have that much of an opinion on physics: they don’t know a baryon from a fermion, and the math is tremendously intimidating. However, many of the things that trained physicists know, they know for certain: discoveries are regularly held to a p-value threshold of about 3×10⁻⁷ (the “five sigma” standard) and have a strong theoretical grounding. Therefore, if someone tells you about physics, it’s very likely they’re telling the truth.
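(That threshold, for the record, is just the tail probability of a standard normal distribution at five standard deviations – the particle physicists’ “five sigma” convention. A one-line check, as a sketch:)

```python
# The "five sigma" discovery standard expressed as a p-value:
# the probability that a standard normal variable exceeds z = 5.
from scipy.stats import norm

p = norm.sf(5)      # survival function, P(Z > 5)
print(f"{p:.2e}")   # ~2.87e-07, i.e. roughly 3 x 10^-7
```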

Conversely, everyone lives in a society [citation needed], and thus everyone feels qualified to have an opinion on sociology. Things like “marginalization”, “signalling”, “cults”, “social class”, and so on are the stuff of everyday life. On the other hand, sociology is really hard. You can’t do controlled experiments where you give five hundred societies public funding of elections and five hundred societies private funding of elections and see which one has the higher rate of corporate welfare: all your data is observational. Your sample sizes are tiny: there simply aren’t enough different countries, religions, wars, or what have you. And as soon as you get a conclusion that you think really holds, someone comes along and invents the birth control pill and fucks everything up for you.

So what happens is that you have a bunch of people talking about one of the disciplines it’s most difficult to know anything about for certain, based on their dozen friends and the last four newspaper articles they read. No wonder we end up falling into such difficulties.

Second, a lot of rationalists love “meta”.

They don’t want to say things about boring, object-level politics, like what sort of environmental regulations are a good idea or whether Trump is going to win in Iowa. They want to say things about Politics! About Discourse! About How People Think! About Memes!

Let’s compare it with biology. The object level of biology contains many fascinating articles about snail sex. The meta level of biology is, of course, the theory of evolution. The meta meta level of biology is the concept of the scientific method.

So I’d like to draw your attention to two points here. First, there is a lot of evidence behind the theory of evolution. On the Origin of Species covers, in depth, topics ranging from fancy pigeon breeding to slave-making ants, and today the evidence for evolution is even more varied. When you try to speculate about the meta level of biology without this sort of grounding in evidence, you wind up with “there is a Form of each species, which individuals may exhibit more or less well; these Forms have existed since the beginning of time, and all species have been in more-or-less the same situation since the Creation” which is, not to put too fine a point on it, exactly backwards.

Second, talking about meta is entirely throwing the virtue of narrowness out the door. It is said: “What is true of one apple may not be true of another apple; thus more can be said about a single apple than about all the apples in the world.” Similarly, while you can have endless conversations about snail sex, there’s really not a lot that can be said about the scientific method. It’s a good idea and we should do more of it. And far more can be said about how people talk to each other on Tumblr, or among rationalists, or in Timbuktu, than can be said about the concept of Conversations In General.

Of course, that’s assuming you don’t want to completely make up nonsense. If you don’t confine yourself to having opinions that are technically speaking ‘true’, you can say as much as you like about the meta meta level of biology, because nonsensespace is much larger than sensespace.

Again, I’m not saying that it’s wrong to hang out in nonsensespace shooting the shit. I’m literally doing it right now. It’s fun! But I think we need to keep a firm wall between the part of our brain that does amateur sociology and the part of our brain that has real grown-up endorsed opinions. I like classifying all my friends in sortinghatchats, but when it comes time to have a discussion of EA PR, I don’t say “how can we reach out to Gryffindor Primaries?” Similarly, you can have fun talking about toxoplasma of rage as much as you like, but it does not have a place in serious discussions.