If you’re reading this you probably already know what Effective Altruism (EA) is. But in case you don’t, the short version is: it’s both a social movement that encourages people to donate money to (or work for) particularly effective charities, and a general attitude towards how people should give. If you’re going to give £100 to charity, why not make it a really good charity? Don’t give money to a sanctuary for blind lemurs; give it to the Against Malaria Foundation, which saves roughly one life for every $5,000 donated.
I’ve been interested in EA for a few years, and thought it might be useful to write up a ‘my time in EA’ type piece, giving some thoughts on the current state of the movement.
Groupthink and pervasive memes
I think one of the worst things about EA is that Effective Altruists (and rationalists) have the awful combination of being extremely susceptible to memes and believing that they’re especially resistant to them. The most obvious one that I’ve encountered is the ‘education is mostly signalling’ stuff. Education may well be mostly signalling, but the EAs I bump into seem way more confident about this than the evidence justifies.
I think what’s happening here is that the arguments are interesting, Bryan Caplan is sort of a fringe EA-adjacent figure (lots of people read his stuff), and so you get this education meme that people accept without really looking at much of the research. Ask someone who strongly believes that education is mostly signalling what they’ve read on the topic other than Caplan’s book, and you’ll see what I mean.
The ‘we should forecast on everything’ meme is another one. Or the stronger version: ‘forecasting is extremely important and will make everything better, and we should also spend loads of money on forecasting’. Maybe? Or maybe not. How strong is the evidence that having access to forecasts actually improves decision-making? I like Michael Story’s point here:
Nearly all forecasters are paid more by their day jobs to do something other than forecasting. The market message is “don’t forecast”! Forecasting websites also don’t exactly get a huge amount of traffic, so it isn’t like huge numbers of people are relying on these forecasts to make important decisions yet. If this was immediately valuable to people, they would be looking at it all the time, and they’re not.
I’m not opposed to forecasting, I just think it’s entered the meme territory where people push for it everywhere without really knowing why. People suggest putting forecasts next to every newspaper article ever, seemingly not stopping to consider that there may actually be good reasons that no newspaper does this. Are we really expected to believe that it’s a coincidence that EAs are the exact sort of people who would love forecasting as a hobby and that they happen to think that forecasting orgs are among the very best ways that money can be spent? Hmm…
There was a telling shortform post on the EA Forum a few months ago called ‘bad things are bad: a short list of common views among EAs’, which included the following:
No, we should not sterilize people against their will.
No, we should not murder AI researchers. Murder is generally bad. Martyrs are generally effective. Executing complicated plans is generally more difficult than you think, particularly if failure means getting arrested and massive amounts of bad publicity.
Sex and power are very complicated. If you have a power relationship, consider if you should also have a sexual one. Consider very carefully if you have a power relationship: many forms of power relationship are invisible, or at least transparent, to the person with power. Common forms of power include age, money, social connections, professional connections, and almost anything that correlates with money (race, gender, etc). Some of these will be more important than others. If you're concerned about something, talk to a friend who's on the other side of that from you. If you don't have any, maybe just don't.
And yes, also, don't assault people.
Sometimes deregulation is harmful. "More capitalism" is not the solution to every problem.
Very few people in wild animal suffering think that we should go and deliberately destroy the biosphere today.
Racism continues to be an incredibly negative force in the world. Anti-black racism seems pretty clearly the most harmful form of racism for the minority of the world that lives outside Asia.
Much of the world is inadequate and in need of fixing. That EAs have not prioritized something does not mean that it is fine: it means we're busy.
What’s the subtext here? The guy who wrote this is obviously trying to insist that even though you may have heard that EAs have the opposite views to those expressed here, they actually definitely don’t. But why is he posting this on the EA forum? I think the truth is that a few of these actually are memes among a small subset of EAs, and my cynical interpretation is that the post is gently reminding EAs that these views might be harmful to the cause. You could easily imagine ‘education isn’t evil’ or ‘climate change is actually quite bad and important’ being on this list.
On systemic change
There’s a common criticism that EAs are bad at thinking about the root causes of problems. Is this criticism a good one? I think it’s better than EAs give it credit for, but it still misses a lot. Sam Bowman once pointed out that surprisingly few EAs made much fuss about the UK government’s decision to cut foreign aid spending below the 0.7% target, which is probably true. And in general, I sometimes think EAs spend much less time thinking about politics than they ought to. The ‘EAs love systemic change’ article put out by 80,000 Hours seems a bit like cope - if we really loved systemic change, maybe we wouldn’t have to keep saying so.
And much of the other thinking on systemic change seems, again, a bit meme-y. EAs support much higher levels of immigration from developing to developed countries (as do I). But is that because most EAs have thought about it a lot, or because Bryan Caplan advocates for it all the time? Probably a bit of both! YIMBYism is another meme-y one (and again, I’m a YIMBY too): it’s hard not to suspect that EAs, who mostly rent, are obsessed with it at least partly because it’s in their economic interest to be.
The Castle (or is it an Abbey?)
Effective Altruists (the Centre for Effective Altruism or CEA, specifically) bought a castle called Wytham Abbey for £15m. I’m not saying this was definitely a bad idea, but the justifications often seemed a bit sloppy. Take Geoffrey Miller’s comment on the EA Forum:
I've been to about 70 conferences in my academic career, and I've noticed that the aesthetics, antiquity, and uniqueness of the venue can have a significant effect on the seriousness with which people take ideas and conversations, and the creativity of their thinking. And, of course, it's much easier to attract great talent and busy thinkers to attractive venues. Moreover, I suspect that meeting in a building constructed in 1480 might help promote long-termism and multi-century thinking.
And perhaps he’s right. But then again, it does sound awfully convenient that the very best use of EA funds was to buy a lovely castle that lots of EAs would be able to spend a lot of time in. And are we really sure that staying in a lovely building is so good for productivity and attracting talent?
I’m coming around to the view that this was extremely bad for optics. Owen Cotton-Barratt wrote that he was ‘a little nervous about the optical effects’, but thought that ‘it’s better to let decisions be guided less by what we think looks good, and more by what we think is good’. And as I’ve written before, this seems like a really weird view. If something passes a cost-benefit analysis when you ignore optics, but fails once you include them, it isn’t good!
And as I’ve spoken to more people who are sort of on the edge of Effective Altruism, where they’re not quite EAs but also very sympathetic, I’ve realised they’re much more annoyed about the castle than most dyed-in-the-wool EAs seem to realise. The common sentiment is ‘I thought this was about helping some of the world’s poorest people, why the fuck did you buy yourselves a castle?!’.
What are EAs like as people?
EAs are usually very friendly, which is good. They’re very nerdy too, and almost certainly disproportionately autistic. They give much more money to charity than most people, although I’m starting to think that many give less than you might expect (the mean may be around 3% of salary, with a large cluster giving exactly 10% and many giving closer to 1%). Here’s a passage from an EA Forum post by the account ‘Concerned EAs’:
The EA community is notoriously homogenous, and the “average EA” is extremely easy to imagine: he is a white male in his twenties or thirties from an upper-middle class family in North America or Western Europe. He is ethically utilitarian and politically centrist; an atheist, but culturally protestant. He studied analytic philosophy, mathematics, computer science, or economics at an elite university in the US or UK. He is neurodivergent.
He thinks space is really cool. He highly values intelligence, and believes that his own is significantly above average. He hung around LessWrong for a while as a teenager, and now wears EA-branded shirts and hoodies, drinks Huel, and consumes a narrow range of blogs, podcasts, and vegan ready-meals. He moves in particular ways, talks in particular ways, and thinks in particular ways. Let us name him “Sam”, if only because there’s a solid chance he already is.
Most of these apply to me[1], and the stereotypes here seem basically correct. I don’t think most of these things are worrying, and some seem a bit silly to care about (EAs wearing EA-branded shirts is about as surprising as Arsenal fans wearing Arsenal shirts), but the lack of cognitive diversity is clearly a big weakness among EAs.
I would love it if more EAs had studied literature, art history, or the humanities generally, if more EAs had an eye for aesthetics, and so on. Not exactly for impact reasons, just because it would be nice to have a chat at EAG that involved someone recommending a novel, or basically anything other than the percentage chance that AI kills everyone.
EAs are often bad in social situations, and say things that can come across as hurtful despite their friendliness. Once, at a party, a very well-known EA said to my girlfriend (who is about to start a master’s degree in speech and language therapy) ‘Well… I guess it’s not the worst thing you could be doing’. We laughed it off.
[1] If you’re interested in which of these don’t apply to me: I’ve never been very interested in space, I did political science at universities a tier below elite (the University of Manchester and UCL), and I didn’t go on LessWrong as a teenager.
My "normal person, enjoys politics, not interested in murder" shirt has people asking a lot of questions already answered by my shirt.
Laughed out loud reading the passage from the EA Forum post about the typical EA.
Meanwhile, I wanted to respond to
>> "Nearly all forecasters are paid more by their day jobs to do something other than forecasting."
I don't think this is right. Although it might seem this way from talking to side-gig forecasters, I know some people who make a lot more than the average CFA [1] working for hedge funds and they follow principles from the book Superforecasting as part of their work. They don't have a lot of time for side-gig forecasting, but they are professional forecasters.
[1] https://finance.uworld.com/cfa/salary/