Former SALT speaker Philip Tetlock spoke with Edge recently about his research into forecasting. In 02005, he published Expert Political Judgment: How Good Is It? How Can We Know?, for which he spent over a decade recording and assessing the predictions made by public policy experts. He found them to be not much better than coin-flipping, but was also able to specify that “Hedgehogs” (those holding a single grand theory and fitting events into its framework) did much worse than “Foxes” (skeptical, flexible thinkers).
In his conversation with Edge, he expands on what makes Foxes better predictors, using Nate Silver as a jumping-off point, and offers an update on his work since Expert Political Judgment:
Perhaps the most important consequence of publishing the book is that it encouraged some people within the US intelligence community to start thinking seriously about the challenge of creating accuracy metrics and monitoring how accurate analysts are–which has led to the major project that we’re involved in now, sponsored by the Intelligence Advanced Research Projects Activity (IARPA). It extends from 2011 to 2015, and involves thousands of forecasters making predictions on hundreds of questions over time and tracking their accuracy.
Exercises like this are really important for a democracy. The Nate Silver episode illustrates in a small way what I hope will happen over and over again over the next several decades, which is, there are ways of benchmarking the accuracy of pundits. If pundits feel that their accuracy is benchmarked they will be more careful about what they say, they’ll be more thoughtful about what they say, and it will elevate the quality of public debate.
By the way, the forecasting contest he mentions is accepting submissions.