Stuff I found interesting in September
1) The Reddit rumour mill on Ibram X. Kendi’s Center for Antiracist Research:
Apparently, they have laid off half of the 45 people working there. If you go on the BU website, the team pages are all offline. Anywhere you click "Meet the Team" it gives an error.
Some news sources are covering this but they are all... biased and critical of the center altogether... Boston Globe is best so far.
Folks are saying Kendi blew through his over $10 million in donations, he is being branded as "exploitative," and apparently they are moving to a "fellowship model"? Apparently Kendi has gone silent and is not responding to anything.
Another commenter writes:
i’ve been in there a couple of times, they were never actually doing anything worthwhile from what i could tell. usually had a few people sat contemplatively around a whiteboard listing “ways to promote social activism for underprivileged communities” with one measly bullet point that says “social media posts.” felt like he was just scamming the school out of money, shame there’s some very nice people there.
2) This study on how politicians learn about public opinion:
What’s the biggest surprise here? How would we expect social desirability bias to affect these results? I would’ve thought that politicians would think ‘I’ll look pretty thick if I say I just talk to random people rather than using actual representative polling’. Maybe politicians think it’s virtuous to get their views from people they actually meet, so are happy to say so? Not sure.
3) Trump on Citizen Kane:
4) A useful piece [FT, paywalled] for non-believers: ‘Repeat after me: building any new homes reduces housing costs for all’.
5) A new podcast by Tom Chivers and Stuart Ritchie is really quite good. They seem to have still been getting the hang of things in the first few episodes, so those aren’t quite as polished - why not start with one of the more recent ones? I thought the episode on cash transfers was rather good.
6) I am coming round to the view that this chap called Crémieux on Twitter (and here on Substack) is really pretty interesting. Here’s an extract from an interesting Tweet of his (annoyingly, I can’t embed Tweets on Substack anymore):
7) Friend of the blog Jonathan Mann has an upcoming series on how to identify talented forecasters.
8) I try not to link to ACX here because I guess the majority of you are subscribed already, but I did particularly enjoy this review of an old Elon Musk biography:
The main answer to the paradox of “how does he succeed while making so many bad decisions?” is that he’s the most focused person in the world. When he decides to do something, he comes up with an absurdly optimistic timeline for how quickly it can happen if everything goes as well as the laws of physics allow. He - I think the book provides ample evidence for this - genuinely believes this timeline, or at least half-believingly wills for it to be true. Then, when things go less quickly than that, it’s like red-hot knives stabbing his brain.
He gets obsessed, screams at everyone involved, puts in twenty-hour days for months on end trying to get the project “back on track”. He comes up with absurd shortcuts nobody else would ever consider, trying to win back a few days or weeks. If a specific person stands in his way, he fires that person (if they are an employee), unleashes nonstop verbal abuse on them (if they will listen) or sues them (if they’re anyone else). The end result never quite reaches the original goal, but still happens faster than anyone except Elon thought possible. A Tesla employee described his style as demanding a car go from LA to NYC on a single charge, which is impossible, but he puts in such a strong effort that the car makes it to New Mexico.
10) Nielsen on AI x-risk:
With all that said: practical alignment work is extremely accelerationist. If ChatGPT had behaved like Tay, AI would still be getting minor mentions on page 19 of The New York Times. These alignment techniques play a role in AI somewhat like the systems used to control when a nuclear bomb goes off. If such bombs just went off at random, no-one would build nuclear bombs, and there would be no nuclear threat to humanity. Practical alignment work makes today's AI systems far more attractive to customers, far more usable as a platform for building other systems, far more profitable as a target for investors, and far more palatable to governments. The net result is that practical alignment work is accelerationist.
This seems pretty silly, right? I flirted with far-left politics in my teens, and don’t really think I have much to apologise for other than being a little naive. I read Jerry Cohen and the rest of the Analytical Marxists. I read Kropotkin and Bakunin and Bookchin. It was interesting and I don’t think you have to be a total fool to find much of it convincing. Did I really do much wrong? Seems more akin to being a very hardcore libertarian to me…
13) Hasan Minhaj’s “Emotional Truths” - In his standup specials, the former “Patriot Act” host often recounts harrowing experiences he’s faced as an Asian American and Muslim American. Does it matter that much of it never happened to him?
14) How does your opinion of Exiting the Vampire Castle change given the Russell Brand accusations? How about ‘Dadfight’? How about the Morning Joe appearance? How about the Savile interview? How about the godawful Miliband interview? How about the wonderful piece he wrote after Thatcher died? How about the turn to conspiracy?
(Dadfight, I have to say, is one of the most compelling short ultra-low-budget documentaries I’ve ever seen.)
15) Berlin by Bea Setton is fun, especially for those of you who have lived or spent significant time in Berlin; review here.
16) If you haven’t already, consider following me on Twitter (X?).
17) Stephen Bush on the Bradley Cooper fake nose controversy [FT, paywalled]:
That lesson is this: given it is very rare indeed that you can make a decision which is guaranteed to please everybody, you should stand firm when the inevitable complaints come in. Cooper consulted Bernstein’s family and his friends, and then he made the film he wanted to make.
18) Niplav’s ‘pickup reports’. Perhaps satirical? Who knows!
I approached her during a Saturday daygame session with a wing. Probably in her late twenties, dressed kind of weirdly (dark grayish green coat and pants). When I started talking to her, I started stacking about her outfit, and suggested that she was probably off to mountaineering.
She took the hooks perfectly, and I was already internally fist-pumping, but then she started talking non-stop about her hiking trip to a holy mountain in Tibet, and how 10 people had died there when ascending the mountain against the better advice of the locals. I've never had any experience like this in set: she went on rambling for probably 3 minutes (those of you who have done daygame approaches know that this is super uncommon, and only happens if either the woman is super into you or crazy—possibly both).
19) I hadn’t noticed that Sam Kriss now apparently writes every so often in The Spectator.
20) On Goofymaxxing:
21) On vegan activism:
Being in that egg farm made me want to glue myself to the floor of a basketball stadium or chain myself to an assembly line. It made me want to confront people picking up their plastic-wrapped cuts at the grocery store, nourishing themselves with another creature’s misery while telling themselves they love animals, because in some contradictory way they really do. And it made me furious that whenever the animal-rights movement suggests that we as a society should stop doing this, it gets a barrage of criticism about its messaging and tactics and strategies.
That is true even though the critiques of radical vegans are well founded. Nothing I saw in my months of reporting persuaded me that DxE or any other animal-rights group has a plausible theory of success. And DxE’s efforts at mobilization seemed likelier to alienate potential supporters than to persuade them.
22) Sufjan is sick. Why not listen to Chicago?
23) Pomo generator.
24) Herzog on John Waters:
25) Hatchet job on EA in the New Statesman. I think it uses some slightly irritating sleight of hand to make EA seem worse: quoting Peter Singer using the word ‘retarded’ in a way that wouldn’t have been controversial in 1975 but seems awful now, failing to understand the use-mention distinction when talking about Bostrom’s email, and acting as though QALYs are some weird thing that EAs came up with. Wytham Abbey is of course mentioned.
As an aside, I think we aren’t really onto a winner with the ‘longtermism’ branding. Now that lots of people take AI risk seriously, why not just say we care about AI risk and preventing future pandemics? That covers most longtermist funding, right? LTFF also claim that they want to ‘promote long-term thinking’; I would need to look into the details of the grants that have been made to promote long-term thinking, but colour me sceptical.
26) Rishi on AI safety:
An AI safety summit at Bletchley Park in November is expected to focus almost entirely on existential risks and how to negate them.
Despite myriad political challenges, Sunak is understood to be deeply involved in the AI debate. “He’s zeroed in on it as his legacy moment. This is his climate change,” says one former government adviser.
27) Is this the first genuine group of satanic paedophiles? I always thought they were a myth resulting from moral panics; it turns out maybe they’re real (at least in this instance).