Discussion about this post

Astrid the Rationalist Witch:

My main takeaway from this is the career highlight of seeing one of my icebreakers mentioned in a Substack post! 😎

Am I right in thinking there's a scoring system that tries to prevent this by scoring individual forecasts based not only on ground truth, but on how much value they add to the aggregate? So if you always went with the crowd, you'd do poorly on that metric.
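For illustration, here is one way a contribution-based metric like that could work. This is a purely hypothetical sketch, not any particular platform's actual rule: score each forecaster by how much the crowd's Brier score worsens when their forecast is dropped from the aggregate, so a forecast that merely echoes the crowd adds roughly nothing.

```python
# Purely hypothetical sketch of a "value added to the aggregate" score,
# not any particular platform's actual rule: a forecaster's score is how
# much the crowd's Brier score worsens when their forecast is removed.

from statistics import mean

def brier(probs, winner_index):
    """Multi-option Brier score: sum over options of (forecast - outcome)^2."""
    return sum((p - (1.0 if i == winner_index else 0.0)) ** 2
               for i, p in enumerate(probs))

def aggregate(forecasts):
    """Crowd forecast: the per-option mean of the individual forecasts."""
    return [mean(option_probs) for option_probs in zip(*forecasts)]

def value_added(forecasts, idx, winner_index):
    """Positive if forecaster idx improved the aggregate, negative if they hurt it."""
    without = [f for j, f in enumerate(forecasts) if j != idx]
    return (brier(aggregate(without), winner_index)
            - brier(aggregate(forecasts), winner_index))

forecasts = [
    [0.7, 0.2, 0.1],  # confident (and, as it turns out, right)
    [0.4, 0.4, 0.2],  # hedged away from the winner
    [0.5, 0.3, 0.2],  # roughly just echoing the crowd
]
winner = 0  # option 0 resolves as the outcome
for i in range(len(forecasts)):
    print(i, round(value_added(forecasts, i, winner), 3))
# Forecaster 0 gets a clearly positive score; the crowd-echoer's is near zero.
```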

James Özden:

For the passage below, don't you mean the mean is 1.333? The median should be 2, given that the set of scores will be [0, 2, 2]:

"A simple answer is that if you give a probability of 1/3rd for each option, under standard Brier scoring rules for questions with multiple possible answers, you’ll do much better than the median forecaster. Let’s say that Curtis pulls off a huge upset, and wins the election. The Zohran fan gets a Brier score of 2 (the worst possible score), the Cuomo fan gets a Brier score of 2, the ‘I hate the other candidates’ guy gets a perfect score of 0. You get a score of 0.666, leaving you beating the median of 1.333 by a margin comfortable enough to make you a Superforecaster if you’re able to find more questions with bad forecasters1, without even knowing anything about the question you’re answering."
