[I would like to thank Rob Nostalgebraist, dataandphilosophy, and Andrew Rettek for inspiring this post, which is primarily a synthesis of their various viewpoints, although of course not necessarily endorsed by any of the above.]

I think that the rationalist movement can be modeled as a spectrum between two different groups. Nostalgebraist calls them, perhaps unclearly, the Yudkowskians and the Tumblr Academic Freedom Club; because it amuses me, I’ll call them the Craft and the Community.

The basic belief of the Craft is that society is approaching a local maximum in a wide variety of fields; however, there are much better local maxima (or perhaps even a global maximum) over there. To reach one of those better maxima, you have to acquire a certain set of skills, so you don’t end up in the much worse trench between maxima. In this, they’re not dissimilar to (to pick a few random examples) mystics or leftists hungering for revolution. However, unlike those groups, the Craft believes that the appropriate skills to develop are those that fall broadly under the aegis of “rationality”, from Bayesianism to awareness of cognitive biases.

Conversely, the Community tends to be fairly skeptical about the Craft’s project. The Community is primarily interested in creating a social group with norms they find pleasant: statistical literacy; citing sources; civility; rewarding people for changing their minds; the not-geek not-autism thing; a high tolerance for really absurdly long web fiction. The most controversial such norm, of course, is refusing to shun people for having beliefs generally considered to be evil.

You could also talk about a middle position, where rationality is useful for, say, effective altruism and existential risk, but resoundingly useless in one’s day-to-day life. This seems to be the position outlined by Scott Alexander’s Extreme Rationality: It’s Not That Great. I would call this the Compromise, to keep up the C theme.

One important thing to note is that the Craft, the Community, and the Compromise all share a lot of similar beliefs. While of course getting rationalists to reach consensus is something like herding cats, typical rationalist philosophical positions include reductionism, materialism, moral non-realism, utilitarianism, anti-deathism and transhumanism. Rationalists across all three groups tend to have high opinions of the Sequences and Slate Star Codex and cite both in arguments; rationalist discourse norms were shaped by How To Actually Change Your Mind and 37 Ways Words Can Be Wrong, among others.

There are people who agree on few to no rationalist positions but still like going to our parties and reading our blog posts. I coined the term “rationalist-adjacent” for this group before I got the idea that the names of all subdivisions of the rationalist community should begin with the letter C.

Nostalgebraist wonders why the Craft and the Community both continue to identify as rationalists. I think this makes more sense than he gives it credit for.

The Craft gets two primary benefits from the existence of the Community. First, social interaction. Most people see the Craft as crackpots. But the Community doesn’t instantly respond to “I work at MIRI” with “you’re going to prevent Skynet?” or “are you sure this isn’t a cult?” The Craft is unlikely to recruit enough people to sustain an entire small-c community on its own. Second, new ideas and members. The Community is selected for people who like thinking about stuff, and it can come up with useful ideas for the Craft even if its members don’t believe the Craft is right. Furthermore, a lot of people will be converted to the Craft’s side of things if they hang around the Community for long enough, in the same way that you become a feminist if you spend a lot of time hanging around feminists, and convincing people who already subscribe to a lot of rationalist ideas is easier than convincing people who don’t.

In the Community’s case, we already have a norm against excluding people just because they believe things we think are stupid or evil, and that applies to the Craft as well as to neoreactionaries. The Craft follows the norms that the Community finds congenial, so there’s no reason to separate ourselves from them, especially since we agree with them on a lot of our fundamental assumptions.

However, I would like to highlight that the Craft and the Community’s incentives are not necessarily aligned. In particular, the Craft has an incentive to get good PR, both from high-status people (who can be convinced that AI risk is important) and from the general public (who can be convinced to adopt the Craft’s ideals of rationality). Furthermore, the Craft has a much stronger interest in intellectual diversity than the Community does.

To take a specific example: Less Wrong draws heavily on nerd culture, such as catgirls, anime, fanfiction, Harry Potter, and My Little Pony. From the Community’s perspective, this is awesome, because it means the community has shared cultural references. However, the vast majority of people, including intelligent and well-educated people, would either not understand those references or find them off-putting in a serious philosophical debate. It’s unwise to select strongly for a trait totally unrelated to any of the traits you actually want to optimize for, since you’d be ruling out smart, curious, hard-working people simply because they happen to prefer James Joyce to Takeshi Obata. In the worst-case scenario, nerds share particular common failure modes (the Geek Cognitive Fallacies?), and selecting for nerds increases the likelihood that the community falls into those pitfalls.

Similarly, having neoreactionaries around selects for people who will not punish you for contemplating any idea, no matter how evil. Again, that’s one of the primary traits the Community is selecting for, so it’s cool with the Community. But the Craft, while it considers open-mindedness important, needs to select for other virtues as well, from a burning curiosity to a passion for scholarship. It must carefully weigh the benefits of the insight this one fringe political group provides, and the selection pressure its presence exerts on the rationalist community’s membership, against saying “okay, we can settle for ‘open-minded enough to want to build a Friendly AI because they read about it on the Internet’ and instead exert more selective pressure for people who have some other desirable trait.”