Discussion about this post

Jonathan Mann

This article's candid engagement with so-called "dumb questions" is a breath of fresh air, as addressing these inquiries can lead to a deeper understanding of the challenges with AI. It's intriguing how, when faced with a terrifying yet plausible scenario, many make the leap from comprehending how something "could" occur to believing it inevitably "will" happen.

Within the EA and rationalist communities, certain narratives of AI-driven devastation have been reiterated so often that they seem to have become the default perspective, with some individuals struggling to envision alternative outcomes. It's possible they possess insights that remain elusive to me and others; for me, though, the inconsistencies in the answers I've heard so far remind me of the confusion I experienced as a child when asking different church elders about the distinction between miracles and magic. While all agreed that miracles weren't magic, no two individuals could provide the same explanation, or even a consistent framework, for understanding the differences.

Also, thank you for saying AI and not AGI!

SkinShallow

I kinda have the same dumb question, which I'd formulate a slightly different way: not so much HOW a misaligned AI could kill everyone, but what I see as a totally fatalistic assumption that those advanced AIs will inevitably BE GIVEN ACCESS TO TOOLS to do it. So my dumb question is not "how, once deployed?" (it's obvious how, from drone operations to developing gain-of-function viruses, etc.) but "why do we assume free and immediate deployment, without serious guardrails, in the places where the tools are located (robotic bodies, virus labs, drones, warheads, chemical factories), as something totally inevitable?"

I think it's the tech industry mindset and not necessarily correct.
