6 Comments

One issue is that focusing on extreme honesty makes you liable to focus on things which can be quantified in that way. That said, I'm also not sure the big problem with the news as such is that they don't make empirically verifiable claims, so much as that they either lie outright or write to the converted with a clear bias.

Jun 7, 2022 · edited Jun 7, 2022 · Liked by Sam Atis

Strong agree. I'd go further and say most of my favourite writers might be actively less accurate, even in purely numerical terms, if they adopted 'extreme honesty'. A lot of people think through writing (including myself) and writing columns or posts about an issue is one of the best ways to figure out what you know about it and what you don't, and to build a detailed mental map of the issue. But there are many topics that (a) most people aren't confident enough to make quantifiable predictions about but which (b) feed into other topics, such that knowledge of the 'unquantifiable'* topic can lead to better quantifiable predictions on other topics. If you prevent people from writing on these 'unquantifiable' topics, they won't be able to develop good knowledge about them, and in turn will make worse predictions when dealing with related questions.

To take a pretty narrow example from my own thinking, my understanding of nationalist / unionist dynamics at Queen's University Belfast would definitely not be amenable to quantification: some people have tried stuff like 'how many unionists vs nationalists would respond in surveys that they feel uncomfortable expressing their political beliefs among peers', but there are just way too many confounding factors there. So if I were holding to 'extreme honesty', I'd avoid writing or speaking about that topic. But that would make it incredibly difficult for me to get clear about what I know, what I don't know, and what I believe. And since my thinking about this topic quite heavily informs my thinking about Northern Irish politics,** extreme honesty would make me actively less accurate when I come to make numerical predictions about Assembly elections or whatever.

I think this applies even more to actual good writers than it does to just some schmuck like me. The most interesting people almost always find certain things uniquely salient, even if they can't quantify what this salience means: think about how Tyler Cowen often asks follow-up questions in interviews that seem odd or even bizarre at first blush, only to lead to incredibly helpful responses. Suppressing this individual interest in things that people just happen to find salient, only because they can't quantify their thinking, would lead to less well-formed reasoning about topics further downstream.

* I don't mean to imply that these are topics where you simply CANNOT make quantifiable predictions, only that most people would not.

** QUB is a pretty representative sample of relatively young and relatively affluent people from NI, and there's good numerical reason for thinking this group will have outsized influence on election results going forward.


All true.

There are two separate issues: one is communicating with a general audience; the other is actually finding the truth, which in most cases EA/rationalists don't already know.

Your points apply to the former case, but, regarding the latter… I think we can at least say something like:

Instead of “every writer should make quantifiable predictions at all times” I think a more reasonable goal would be “IF you are going to make a *prediction*, make it quantifiable, if possible.” This forces the writer to think more carefully, and allows others who care about the truth to get some sense of track record.


I feel like that's a good norm, but probably too weak. Most writers, after all, don't make predictions: they insinuate or hint at them without outright stating them. You're much more likely to see "it is looking like Boris Johnson might be risking defeat" than explicitly "I think Johnson will lose the VONC". If we implemented a norm of "IF you make a prediction, ensure it's quantifiable", then I think all you'd do is turn the few remaining people who write things like the latter into people who write things like the former.

Comment deleted · Jun 7, 2022 · Liked by Sam Atis
author

Interesting, but I think the norm of Extreme Honesty among EAs/Rationalists emerged before many of them were on Substack.


Yeah, as a matter of *history* (why the norm emerged, as opposed to why it's kept going), I think the 'quantifiable forecasts' thing came from three interrelated sources: Eliezer Yudkowsky's insistence that we must "make beliefs pay rent", which required people to back up all of their claims with some "anticipated experience" that the claims mapped onto; Tetlock's work on good judgment, which seemed to suggest that the ability to anticipate and forecast required numerical precision; and Robin Hanson's idea of prediction markets, which offered a way to operationalise numerical precision about predictions. I think the influence of these figures on rationalism explains where the norm comes from. Ultimately, all three are grounded in the same basic impulse: Bayesian epistemology.
