Discussion about this post

Peter McLaughlin

I think the most important consideration here is not the hypothetical risk that the industrial revolution will end up causing something really bad that produces a lot of suffering and death, but the fact that it is already causing a lot of suffering and death on industrial factory farms. When you look only at the human-level data, the industrial revolution looks pretty good on net, and it's just hypothetical tail risks that could have it turn out to be net-negative. But when you add the data point that, e.g., 50 billion broiler chickens are killed every year (which adds up to trillions since the advent of industrial farming), after lives that only lasted a few weeks and were basically pure suffering from the start, it's distinctly possible this has all been net-negative. To be clear, I'm not sure I believe that the industrial revolution has actually been negative on net, but I think this is the most relevant consideration.

Matthew Ritter

I'll admit I've thought about it much less than the people who make it their career, but taking advantage of the "Epistemic confidence: Very Low" context, I've always wondered whether these existential risk scenarios would truly wipe out people, or 'just' push civilization back to the dark ages. For example, some tiny fraction of people are likely to be naturally immune to a given supervirus. The 'Preppers' subreddit has about 290k members. Chuck Norris.

There would be a ton of suffering to get there, of course, but if the alternative were never having come out of the dark ages, it seems like a relevant distinction.

