The Signal and the Noise: Why So Many Predictions Fail--but Some Don't

  • The instinctual shortcut that we take when we have “too much information” is to engage with it selectively, picking out the parts we like and ignoring the remainder, making allies with those who have made the same choices and enemies of the rest.
  • We need to stop, and admit it: we have a prediction problem. We love to predict things—and we aren’t very good at it.
  • We must become more comfortable with probability and uncertainty. We must think more carefully about the assumptions and beliefs that we bring to a problem.
  • The most calamitous failures of prediction usually have a lot in common. We focus on those signals that tell a story about the world as we would like it to be, not how it really is. We ignore the risks that are hardest to measure, even when they pose the greatest threats to our well-being. We make approximations and assumptions about the world that are much cruder than we realize. We abhor uncertainty, even when it is an irreducible part of the problem we are trying to solve.
  • Risk, as first articulated by the economist Frank H. Knight in 1921, is something that you can put a price on.
  • Uncertainty, on the other hand, is risk that is hard to measure. Risk greases the wheels of a free-market economy; uncertainty grinds them to a halt.
  • Too many investors mistook these confident conclusions for accurate ones.
  • There is a common thread among these failures of prediction. In each case, as people evaluated the data, they ignored a key piece of context.
  • There is a technical term for this type of problem: the events these forecasters were considering were out of sample. When there is a major failure of prediction, this problem usually has its fingerprints all over the crime scene.
  • The housing collapse was an out-of-sample event, and their models were worthless for evaluating default risk under those conditions (see the out-of-sample sketch after this list).
  • We forget—or we willfully ignore—that our models are simplifications of the world.
  • Even if the amount of knowledge in the world is increasing, the gap between what we know and what we think we know may be widening.
  • Financial crises—and most other failures of prediction—stem from this false sense of confidence.
  • Big, bold, hedgehog-like predictions, in other words, are more likely to get you on television.
  • Harry Truman famously demanded a “one-handed economist.”
  • Ultimately, the right attitude is that you should make the best forecast possible today—regardless of what you said last week, last month, or last year.
  • There is wisdom in seeing the world from a different viewpoint.
  • But statheads can have their biases too. One of the most pernicious ones is to assume that if something cannot easily be quantified, it does not matter.
  • The key to making a good forecast, as we observed in an earlier chapter, is not in limiting yourself to quantitative information. Rather, it’s having a good process for weighing the information appropriately.
  • Collect as much information as possible, but then be as rigorous and disciplined as possible when analyzing it.
  • When we have trouble categorizing something, we’ll often overlook or misjudge it.
  • Perfect predictions are impossible if the universe itself is random.
  • What are your odds of being struck—and killed—by lightning? Actually, the odds are not a constant number; they depend on how likely you are to be outdoors when lightning hits and unable to seek shelter in time because you didn’t have a good forecast.
  • Most of you will have heard the maxim “correlation does not imply causation.” Just because two variables have a statistical relationship with each other does not mean that one is responsible for the other. For instance, ice cream sales and forest fires are correlated because both occur more often in the summer heat. But there is no causation; you don’t light a patch of the Montana brush on fire when you buy a pint of Häagen-Dazs. (See the correlation sketch after this list.)
  • Extrapolation is a very basic method of prediction—usually, much too basic. It simply involves the assumption that the current trend will continue indefinitely (see the extrapolation sketch after this list).
  • “A bubble is something that has a predictable ending. If you can’t tell you’re in a bubble, it’s not a bubble.”
  • The goal of any predictive model is to capture as much signal as possible and as little noise as possible. Striking the right balance is not always so easy, and our ability to do so will be dictated by the strength of the theory and the quality and quantity of the data. In economic forecasting, the data is very poor and the theory is weak, hence Armstrong’s argument that “the more complex you make the model the worse the forecast gets.” (A sketch of this complexity trade-off appears after this list.)
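
A minimal sketch of the out-of-sample problem from the housing bullets above, using a made-up "true" default curve and invented numbers (nothing here is from the book): a linear model is fitted only on years of rising home prices, then asked about a price decline it has never observed.

```python
"""Toy illustration: an out-of-sample event breaks a model fitted on one regime."""
import numpy as np

rng = np.random.default_rng(0)

def true_default_rate(price_change):
    # Hypothetical ground truth: defaults stay mild while prices rise,
    # but spike nonlinearly once prices start falling.
    return np.where(price_change >= 0,
                    0.02 - 0.05 * price_change,
                    0.02 + 1.5 * np.abs(price_change) ** 1.5)

# In-sample history: home prices only ever rose, by 0% to 15% a year.
in_sample_changes = rng.uniform(0.0, 0.15, size=200)
observed_defaults = true_default_rate(in_sample_changes) + rng.normal(0, 0.002, 200)

# Fit a simple linear model to that history.
slope, intercept = np.polyfit(in_sample_changes, observed_defaults, 1)

# Ask the model about a regime it has never seen: a 20% price decline.
shock = -0.20
predicted = intercept + slope * shock
actual = float(true_default_rate(np.array([shock]))[0])
print(f"model-predicted default rate at {shock:+.0%} prices: {predicted:.2%}")
print(f"assumed 'true' default rate at {shock:+.0%} prices:  {actual:.2%}")
# The model, trained only on rising prices, badly understates out-of-sample risk.
```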
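
For the correlation-versus-causation bullet, a small simulation with invented numbers: summer heat (a confounder) drives both ice cream sales and forest fires, so the two correlate even though neither causes the other.

```python
"""Toy confounding example: correlation without causation."""
import numpy as np

rng = np.random.default_rng(1)
n_months = 240

# Confounder: monthly temperature cycling through 20 years of seasons.
temperature = (15 + 12 * np.sin(np.linspace(0, 2 * np.pi * 20, n_months))
               + rng.normal(0, 2, n_months))

# Both variables respond to temperature plus their own independent noise;
# neither has any effect on the other.
ice_cream_sales = 100 + 8 * temperature + rng.normal(0, 30, n_months)
forest_fires = 2 + 0.4 * np.clip(temperature - 10, 0, None) + rng.normal(0, 1.5, n_months)

r_all = np.corrcoef(ice_cream_sales, forest_fires)[0, 1]
print(f"correlation over all months:         {r_all:.2f}")  # strongly positive

# Holding the confounder roughly fixed (looking only at warm months)
# weakens the apparent relationship considerably.
warm = temperature > 20
r_warm = np.corrcoef(ice_cream_sales[warm], forest_fires[warm])[0, 1]
print(f"correlation within warm months only: {r_warm:.2f}")
```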
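
For the extrapolation bullet, a deterministic sketch built on a made-up S-shaped adoption curve: a straight-line trend fitted to the steep middle years keeps climbing forever, while the assumed real process saturates.

```python
"""Toy illustration: naive trend extrapolation versus a saturating process."""
import numpy as np

# Hypothetical "true" process: logistic adoption that levels off at 100%.
years = np.arange(40)
true_adoption = 100 / (1 + np.exp(-0.25 * (years - 20)))

# A forecaster in year 25 fits a straight line to the last ten years of data...
fit_years = years[16:26]
slope, intercept = np.polyfit(fit_years, true_adoption[16:26], 1)

# ...and extends that trend indefinitely.
for horizon in (30, 35, 39):
    extrapolated = intercept + slope * horizon
    print(f"year {horizon}: extrapolated {extrapolated:6.1f}%   "
          f"actual {true_adoption[horizon]:5.1f}%")
# The straight line soon passes 100% adoption, which is impossible;
# the assumed real curve flattens out instead.
```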
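
For the final signal-and-noise bullet, a sketch of Armstrong's point about complexity, again with invented data: polynomials of increasing degree are fitted to a small, noisy sample drawn from a simple assumed relationship, and their in-sample fit keeps improving while their out-of-sample forecasts tend to get worse.

```python
"""Toy illustration: more complex models fit the sample better but can forecast worse."""
import numpy as np

rng = np.random.default_rng(42)

def make_data(n):
    # Simple assumed signal (0.5 * x) buried in noise.
    x = rng.uniform(-3, 3, n)
    return x, 0.5 * x + rng.normal(0, 1.0, n)

x_train, y_train = make_data(30)   # small, noisy sample (like much economic data)
x_test, y_test = make_data(1000)   # the "future" the forecaster actually cares about

print("degree   in-sample RMSE   out-of-sample RMSE")
for degree in (1, 3, 6, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    rmse_in = np.sqrt(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
    rmse_out = np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    print(f"{degree:6d}   {rmse_in:14.3f}   {rmse_out:18.3f}")
# In-sample error falls monotonically with complexity, while out-of-sample
# error typically rises once the model starts fitting noise instead of signal.
```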