In Praise of Hunches
I like forecasting, and I’m not bad at it; my scores on Good Judgment Open are pretty good. If you’ve read Phil Tetlock’s brilliant book Superforecasting (Scott Alexander review here), or some of the other content put out by good forecasters, you might have an impression of how a forecaster comes up with their view about how likely an outcome is. They start by calculating a base rate. For example, if you want to know whether the Conservatives are likely to win a by-election in the UK, you might start by figuring out what percentage of by-elections the government has won over the past 20 or 30 years, rather than delving into the specifics of whatever by-election you’re forecasting. Then you update your beliefs on the basis of other information: if you started with a base rate of 5% (or whatever the real base rate is for governments winning by-elections), you might update a lot if the Conservatives are polling really well. If you’re really smart, you might do something like figure out the average government lead in the polls during past by-elections, and then update based on whether they’re currently polling better or worse than that.
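The base-rate-then-update approach above can be sketched in a few lines. To be clear, every number here is made up for illustration, and the per-point adjustment weight is an arbitrary choice, not a real forecasting rule:

```python
# Toy sketch of "start from a base rate, then adjust on polling".
# All numbers are hypothetical, chosen only to illustrate the shape of the method.

past_by_elections = 200   # hypothetical: by-elections in the lookback window
government_wins = 10      # hypothetical: how many the government won
base_rate = government_wins / past_by_elections  # 0.05, i.e. 5%

# Adjustment: compare current polling to the historical average government
# lead during past by-elections, and nudge the forecast accordingly.
avg_historical_lead = -8.0  # hypothetical: governments usually trail by 8 points
current_lead = 2.0          # hypothetical: government currently leads by 2 points
points_better = current_lead - avg_historical_lead  # 10 points better than usual

# The 0.02-per-point weight below is invented for the sketch; a real
# forecaster would fit or eyeball something from the historical data.
forecast = min(max(base_rate + 0.02 * points_better, 0.0), 1.0)
print(f"base rate: {base_rate:.0%}, adjusted forecast: {forecast:.0%}")
```

The point of the structure is that the specifics of the current race only enter as an adjustment to an outside-view starting point, rather than driving the forecast from scratch.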
But you might also just have a hunch. Tetlock’s book gives the impression that really good forecasters don’t rely on their intuition very often. He also mentions that one thing forecasters are good at is reflecting on whether their intuitions are correct (they do particularly well on the Cognitive Reflection Test, for instance). The Cognitive Reflection Test asks questions like: ‘A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?’ The correct answer is 5 cents, but most people answer 10 cents - good forecasters are disproportionately likely to get the right answer. But I think intuition is generally underrated - you should always be ready to overrule your instincts, but they’re still an extremely useful signal.
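The bat-and-ball arithmetic is easy to check: if the ball costs x and the bat costs x + $1.00, then x + (x + 1.00) = 1.10, so x = 0.05. Working in cents keeps the check exact:

```python
# Bat-and-ball check, in cents to avoid floating-point noise.
# Ball costs x; bat costs x + 100; together they cost 110.
total = 110
bat_premium = 100  # the bat costs this much more than the ball

ball = (total - bat_premium) // 2  # 2x = total - premium, so x = 5 cents
bat = ball + bat_premium           # 105 cents
assert ball + bat == total         # 5 + 105 = 110: checks out

# The intuitive-but-wrong answer of 10 cents fails the same check:
wrong_ball = 10
assert wrong_ball + (wrong_ball + bat_premium) != total  # 10 + 110 = 120
print(f"ball = {ball} cents, bat = {bat} cents")
```

The trap is that 10 cents makes the *bat* cost $1.10 and the pair cost $1.20 - the check takes two seconds, which is exactly the reflection the test is measuring.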
If my intuition and my reasoning lead me to hugely different conclusions, I think my intuition is probably more reliable most of the time. This isn’t always the case - sometimes it’s clear there’s some relevant cognitive bias at play, and I can explain why my intuition is leading me astray, in which case I rely on my reasoning. But if I can’t come up with a reason for my intuition being flawed, I’ll often defer to the hunch rather than the reasoned conclusion. When there was a by-election in Hartlepool in 2021, my gut feeling was that the Tories would clinch it. It is very rare for the government to win by-elections, so I felt dumb saying that I thought the Tories would win. But they did. I haven’t ever tracked the record of my ‘hunch forecasts’ versus my ‘reasoned forecasts’, but I would guess that the hunch forecasts have a slightly better record - although there is a confounder, in that I’m more likely to have and rely on hunches on topics I already know a decent amount about.
I’m pretty good at forecasting, but Magnus Carlsen is probably a slightly better chess player than I am a forecaster. He has said roughly the same thing about intuition: most of the time when he plays chess, he knows pretty much immediately what the best move is. He’ll use his time to check whether that move really is the best one, and it basically always is. A Carlsen move made on a hunch is much better than a move a lesser player has thought about for minutes. I can’t find the exact place where Carlsen says this (although I’m sure he has), but if you Google ‘Magnus Carlsen + intuition’ you’ll find a ton of interviews where he talks about how important intuition is.
When I first started forecasting, people would sometimes make long comments justifying their forecasts that made them sound as though they knew what they were talking about. I quickly learnt that there isn’t a particularly strong correlation between long, detailed comments and forecasts that are actually good. If you see two comments on a forecasting question you know nothing about - a detailed, seemingly plausible justification for an 80% forecast from a forecaster with a mediocre record, and one that just says ‘this feels like a 30%’ from a forecaster with an excellent record - I would place more faith in the latter.
A lot of the time, ‘evidence’ presented in favour of an argument just shouldn’t cause you to update much, even if it feels like you’re being foolish when you don’t take it into account. If I read a political science paper saying that personal scandals like affairs or corruption accusations usually don’t do any long-term damage to a politician (even when they have short-term effects on polling), this wouldn’t cause me to update much on whether Partygate will hurt Boris’ prospects at the next election, because I have a hunch that Partygate has done a lot of damage to Boris, and one pol-sci paper isn’t strong enough evidence to shift that much. I could try and justify this by talking about priors and how I’m actually being a good Bayesian - but the truth is just that sometimes ~vibes~ are more reliable than a few social-science papers (which might not apply to the thing you’re thinking about anyway). New forecasters, and people thinking about academia, probably rely on their intuition too little rather than too much - if some social science paper sounds like it probably isn’t true, I think you should trust your gut. And if you actually start forecasting, you’ll learn pretty quickly when to trust your hunches and when to defer to other evidence. So, maybe go and start forecasting!