Collecting a large amount of data can lead to overconfidence in analysts’ forecasts, an effect known as the illusion of knowledge: the analyst believes they know more than they actually do and, as a result, regards their forecasts as more accurate than the evidence warrants.
Self-calibration is the process of remembering previous forecasts more accurately, specifically in terms of how close each forecast came to the actual outcome. Prompt feedback through self-evaluation, colleagues, and superiors, combined with a structure that rewards accuracy, should lead to better self-calibration. Analysts’ forecasts should also be unambiguous and detailed, which helps reduce hindsight bias.
Analysts should seek at least one counterargument, supported by evidence, for why their forecast may be wrong. They should also consider sample size: basing forecasts on small samples can lead to unfounded confidence in unreliable models. Finally, Bayes’ formula is a useful tool for reducing behavioral biases when incorporating new information into a forecast, as sketched below.
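As a rough illustration (the scenario and numbers below are hypothetical, not from the text), Bayes’ formula, P(event | evidence) = P(evidence | event) × P(event) / P(evidence), forces the analyst to revise a forecast mechanically in light of new information rather than by intuition:

```python
# Hypothetical Bayesian update of a forecast probability.
# Assumed numbers: prior belief that earnings beat consensus is 40%;
# a positive pre-announcement occurs for 70% of beats and 20% of misses.

prior_beat = 0.40                 # P(beat) before the new information
p_signal_given_beat = 0.70        # P(positive pre-announcement | beat)
p_signal_given_miss = 0.20        # P(positive pre-announcement | miss)

# Total probability of observing the positive pre-announcement
p_signal = (p_signal_given_beat * prior_beat
            + p_signal_given_miss * (1 - prior_beat))

# Bayes' formula: posterior = likelihood * prior / evidence
posterior_beat = p_signal_given_beat * prior_beat / p_signal

print(f"Updated P(beat | positive pre-announcement) = {posterior_beat:.2f}")
# Prints 0.70: the probability rises from 0.40 to 0.70 based on the
# strength of the evidence, not on the analyst's confidence in it.
```

The point of the exercise is that the size of the revision is dictated by the likelihoods, which counteracts the tendency to either overweight vivid new information or anchor on the original forecast.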