Stuff I found interesting in January
1) Some research from about a year ago on the personality traits that predict interest in EA. The finding isn’t very surprising, but the whole thing is worth a read:
We found two psychologically distinct moral factors that predict whether someone is a proto-EA. First, such a person is particularly willing to give away personal resources (time or money) to help others in need, even if they are far away. Second, such a person is focused on effectiveness when allocating their altruistic resources. We found that both expansive altruism and effectiveness-focus significantly predicted positive attitudes and interest towards effective altruism in non-EA samples. And we found that both scales predict stronger identification with effective altruism in pre-existing effective altruists. In short, both the E and the A are required to make someone a proto-EA, but only few people score highly on both.
I would love to see research on what else predicts those personality traits; it might make finding proto-EAs a lot easier. I’d also be interested to see research on the link between autism and EA, or between the classic Big Five and interest in EA.
2) Some concerns about the use of MTurk in research:
I stopped using MTurk in 2020. I was running a large study (N > 2000) in which I asked participants to describe episodes of their lives in which they experienced such and such emotions – which meant that participants’ answers to open-ended questions were central to the study. This is where I noted that a lot of participants (actually more than half of them) produced nonsensical or low-effort responses – or simply copy-pasted them from the Internet. And those participants were able to succeed at the obvious attention checks. This meant that I would have to sort them one by one – this was inefficient, tiresome and left too many degrees of freedom to the experimenters.
3) I got a Meta Quest headset, which I tentatively recommend. I’m significantly more impressed with it than I thought I would be, but I’m still not really bullish on the whole Metaverse thing. I hadn’t realised how good it is for fitness/weight loss (the game ‘Thrill of the Fight’ is great fun); I should’ve included it as one of the recommended lifestyle changes in this post.
I know nothing about picking stocks (and don’t do it), but I would probably rather buy Meta stock than short it at this point - the P/E looks pretty appealing and I think people are too down on the prospects of their core businesses. My hot take is that I think it’s kinda lame that Apple crippled Meta with all the privacy stuff.
4) No idea if anyone’s interested in this, but here’s the old music playlist I used to listen to when I was studying during my undergraduate degree. I think there’s some good underrated stuff in there.
5) These critiques of Effective Altruism from concerned EAs were interesting, this part was especially enjoyable:
The EA community is notoriously homogenous, and the “average EA” is extremely easy to imagine: he is a white male in his twenties or thirties from an upper-middle class family in North America or Western Europe. He is ethically utilitarian and politically centrist; an atheist, but culturally protestant. He studied analytic philosophy, mathematics, computer science, or economics at an elite university in the US or UK. He is neurodivergent. He thinks space is really cool. He highly values intelligence, and believes that his own is significantly above average. He hung around LessWrong for a while as a teenager, and now wears EA-branded shirts and hoodies, drinks Huel, and consumes a narrow range of blogs, podcasts, and vegan ready-meals. He moves in particular ways, talks in particular ways, and thinks in particular ways. Let us name him “Sam”, if only because there’s a solid chance he already is.
6) The Bostrom stuff was pretty crazy, but I guess you’ve already formed your view here. I was considering writing a piece about it, but didn’t really think I had much value to add. The original post was obviously pretty awful IMO, and the apology was bad too. The weirdest thing he did was update his website to allude to a ‘swarm of bloodthirsty mosquitos’:
He later updated the website again to remove this. Why did he add it at all? It definitely makes me think the apology wasn’t exactly sincere. Another hot take is that I’m generally quite happy that someone like Émile Torres exists and is laser-focused on pointing out problems with EA. Émile does sometimes surface things that are important, and people are too obsessed with the fact that Émile seems to only care about the bad parts of EA - the criticisms are still often important. The people who call them EA’s worst critic or whatever do them a disservice; they get real scoops!
7) New on Substack: SBF, apparently. No idea what the hell he’s doing, I don’t have a model of him that makes any sense, who knows what’s going on?
8) This FT series on Apple and China seems pretty interesting.
10) This Verge account of Elon Musk’s first few months in charge of Twitter was a great read, although you’ll probably know a lot of it already if you’ve been following the acquisition (or if you use Twitter a lot).
12) Good in-depth read on Larry Ray’s cult.
14) I am pretty surprised by how common concussions are in boxing (h/t SSC subreddit):
A concussion was recorded in 47/60 fights. The mean number of concussions per minute of fight time was 0.061 (0.047 for boxers and 0.085 for MMA). When stratifying by outcome of the bout, the mean number of concussions per minute for the winner was 0.010 compared to the loser at 0.111 concussions per minute. The fighter that sustained the first concussion ultimately lost 98% of the time. The physician and non-physician raters had high agreement regarding the number of concussions that occurred to each fighter per match. The physician raters judged that 24 of the 60 fights (11 boxing [37%]; 13 MMA [43%]) should have been stopped sooner than what occurred.
On the one hand, it’s not completely surprising that getting hit in the head over and over again is bad for you. But still, 47 out of 60! How about banning boxing (or at least making it much safer) as a new EA cause area?
17) Dominic Cummings on Steve Hsu’s podcast was worth listening to.
19) Friend of the blog Maxim Lott writes about Sweden’s Covid performance:
— Sweden did in fact do the best of any country in Europe (or, at least, tied with Norway) when all factors are considered and the most appropriate data and methods are used.
— Sweden’s success without lockdowns suggests that western lockdowns were really not effective beyond the very short run. We can now be even more sure of that than we were previously.
— It’s worth further investigation about exactly why Sweden did well compared to its neighbors despite no lockdowns. Could early theories, such as letting young “superspreaders” get natural immunity to slow spread later, have had validity?
The piece is useful and appears to make a genuine effort to assess Sweden’s performance on Covid, but I will gently suggest that Max has a slight anti-lockdown bias, so I still recommend taking the post with the necessary dose of salt. Still, a worthwhile read.
21) Did I already post this one?
Firms arguably price at 99-ending prices because of left-digit bias—the tendency of consumers to perceive a $4.99 as much lower than a $5.00. Analysis of retail scanner data on 3500 products sold by 25 US chains provides robust support for this explanation. I structurally estimate the magnitude of left-digit bias and find that consumers respond to a 1-cent increase from a 99-ending price as if it were more than a 20-cent increase.
If/when I go paid, I will have my prices end in .99. This might be annoying (I find it a bit irritating - who do I think I am? Walmart?), but at least now you know why.
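To make the “1 cent feels like 20+ cents” finding concrete, here’s a toy sketch of a standard left-digit bias model (this is my illustrative simplification, not the paper’s actual structural estimator, and the bias parameter value is assumed for illustration): a consumer partially ignores the cents, so crossing a dollar threshold produces an outsized jump in perceived price.

```python
import math

def perceived(price: float, theta: float) -> float:
    """Perceived price under left-digit bias of strength theta.

    The consumer discounts the cents portion by a factor theta,
    so $4.99 is perceived as closer to $4 than it really is.
    """
    return price - theta * (price - math.floor(price))

theta = 0.20  # assumed bias strength, chosen for illustration

# A 1-cent increase from $4.99 to $5.00 crosses the left-digit
# threshold, so the *perceived* increase is much larger than 1 cent.
jump = perceived(5.00, theta) - perceived(4.99, theta)
print(round(jump, 3))  # → 0.208: a 1-cent rise felt as ~21 cents
```

With theta around 0.2, the perceived jump at the dollar boundary is roughly 21 cents, which matches the “more than a 20-cent increase” result quoted above; a price rise that stays below the boundary (say $4.50 to $4.51) would feel like less than a cent.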
23) I enjoyed the book Neurotribes. I was planning on writing a longer post on autism, which may come out at some point in the future, but even if it doesn’t, I still think this history of autism and the differences between Leo Kanner’s work and Hans Asperger’s work is fascinating. I know the book has received some criticism, but I liked it regardless. You can probably skip some of the later chapters; if you want, just read the first half of the book (skipping the first chapter too).
24) A question for readers: I’m quite sceptical that AI safety research is likely to be particularly helpful, despite knowing very little about the technical details of how it all works. My reasoning is pretty basic (and probably wrong in various ways) - if I imagine an ‘internet safety’ researcher trying to figure out how to make the internet much less dangerous many years prior to the internet actually existing, I cannot imagine them having much success at all.
Given that I know very little about either AI or the history of the internet, does this analogy work? Were there people thinking about ‘internet safety’ back when the internet was first thought of as a possibility? If there weren’t, could someone have done anything useful if they had tried? Excuse my ignorance!
25) And while I’m asking questions of readers - has anyone done any online courses or MOOCs they’ve found particularly useful? I used DataCamp (in conjunction with my MSc) to learn R a few years ago, but other than that, I’ve never found anything that seemed as valuable. How about you guys?