9 Comments

I think the most important consideration here is not the hypothetical risk that the industrial revolution will end up causing something really bad that produces a lot of suffering and death, but the fact that it is already causing a lot of suffering and death on industrial factory farms. If you look only at the human-level data, the industrial revolution looks pretty good on net, and only hypothetical tail risks could make it turn out net-negative. But when you add the data point that, e.g., 50 billion broiler chickens are killed every year (which adds up to trillions since the advent of industrial farming), after lives that lasted only a few weeks and were basically pure suffering from the start, it's distinctly possible this has all been net-negative. To be clear, I'm not sure I believe the industrial revolution has actually been negative on net, but I think this is the most relevant consideration.


I'll admit I've thought about this much less than the people who make it their career, but taking advantage of the "Epistemic confidence: Very Low" context, I've always wondered whether these existential risk scenarios would truly wipe out humanity, or 'just' push civilization back to the dark ages. For example, some tiny fraction of people are likely to be naturally immune to a given supervirus. The 'Preppers' subreddit has about 290k members. Chuck Norris.

There would be a ton of suffering to get there, of course, but if the alternative were never having come out of the dark ages, it seems like a relevant distinction.


Strongly recommend Tyler Cowen's lecture "Is Economic Growth a Moral Imperative?" (https://www.youtube.com/watch?v=EO5jJFpbJvg), which continues extensively where you left off.


Clever!

Is this a problem of the industrial revolution, or a problem for utilitarianism? It sounds like an applied version of the repugnant conclusion. And it’s for this reason I’ve always been deeply skeptical of utilitarianism. I’ve never seen a good reply to the repugnant conclusion, and yet it is accepted in a “shut up and calculate” way quite broadly in the EA and rationalist communities. If you’re just adding up people, these are the (often counterintuitive) results you get.
