Behavioral economics studies the gap between what classical theory assumes "homo economicus" would do and what people actually do, a gap created by human biases arising from psychological, cognitive, emotional and other effects. By understanding how these biases affect decision-making, people can train themselves to make better decisions under uncertainty.

One of the first theories in behavioral economics came not from economists but from psychologists Daniel Kahneman and Amos Tversky. Prospect theory is a descriptive theory of how human beings make decisions involving gains and losses, and it has two elements: a value function and a weighting function. The value function is concave for gains and convex for losses, and it is steeper on the loss side, so a loss hurts more than an equally sized gain pleases. The weighting function captures the fact that people overweight probabilities close to zero and underweight probabilities close to one.
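Both functions can be written down in a few lines. The sketch below uses the functional forms and the commonly cited median parameter estimates from Tversky and Kahneman's 1992 cumulative prospect theory paper (α = β = 0.88, λ = 2.25, γ = 0.61); treat it as an illustration rather than the only parameterization in the literature.

```python
# Prospect theory's value and probability-weighting functions,
# with the 1992 median parameter estimates (alpha = beta = 0.88,
# lambda = 2.25, gamma = 0.61) -- illustrative, not definitive.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Concave over gains, convex and steeper over losses."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

def weight(p, gamma=0.61):
    """Overweights small probabilities, underweights large ones."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Loss aversion: a $100 loss weighs more than a $100 gain.
print(value(100))    # ~57.5
print(value(-100))   # ~-129.5

# Probability distortion: 1% feels bigger than it is, 99% smaller.
print(weight(0.01))  # ~0.055 -- overweighted
print(weight(0.99))  # ~0.91  -- underweighted
```

Note that the asymmetry between `value(100)` and `value(-100)` comes from λ > 1, while the curvature parameters α and β produce diminishing sensitivity as gains or losses grow.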

Kahneman and Tversky also worked on other relevant biases: the representativeness heuristic, in which people judge by similarity with no regard for base rates and tend to see patterns in random noise; and anchoring, in which people lean on an initial piece of information even when it is unrelated to the problem at hand. To study anchoring, they devised an experiment in which participants spun a wheel of fortune numbered 0 to 100 that was rigged to stop at either 10 or 65, depending on the group being tested. After the spin, the researchers asked participants to estimate what percentage of U.N. member countries were African. The group whose wheel stopped at 10 guessed around 25%, while the group whose wheel stopped at 65 guessed approximately 45%. The exercise displays the human mind's propensity to fold its most recent observation into its decision-making regardless of relevance: those who spun a 10 were subconsciously anchored to a lower estimate, while those who spun a 65 were nudged toward a higher one.

By understanding the issues you face, you can devise a strategy to improve how you make decisions and forecast results. One such strategy was unveiled by a team led by researchers Philip Tetlock and Barbara Mellers as part of a contest run by the Intelligence Advanced Research Projects Activity, which conducts research to tackle the most difficult challenges facing intelligence professionals. Their strategy was to gather thousands of volunteer forecasters and use personality and cognitive tests to select those who exhibited the fewest biases in their decision-making. The team then grouped the most successful 2% of forecasters together, creating teams of "superforecasters" who proved that the "wisdom of the crowd" could outperform even strong individual forecasters. These teams were presented with questions on current events and given time to research and collaborate with their teammates. When the results came in, the superforecaster teams beat the control groups by more than 50% and the intelligence community's experts by 30%. This advance was the most significant improvement in judgmental forecasting accuracy observed in the literature.

Putting together the findings on biases and forecasting above, I believe that by following two simple rules, anyone can reduce the mistakes they make because of biases.

First, try to focus on finding the proper base rate so you can decrease the effects of anchoring and representativeness heuristics. For example, let’s say your best friend got married and asks what you think the chances are that it will end in divorce. You might be inclined to ask yourself how happy the couple is or how affectionate they are, and try to translate that feeling into a probability estimate. However, the correct base rate should relate to the question of how often couples in this demographic divorce. If 40% of marriages end in divorce, this is your starting point, even if you’d rather not tell your friend that.
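Base-rate reasoning can be made concrete with Bayes' rule: start from the base rate as your prior, then update only modestly for case-specific impressions. In the sketch below, the 40% divorce base rate comes from the example above, but the likelihoods attached to the "couple seems very happy" observation are invented purely for illustration.

```python
# A minimal sketch of base-rate reasoning via Bayes' rule. The 40%
# prior is the base rate from the text; the likelihoods are made-up
# numbers used only to show the mechanics of the update.

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """P(H | E) by Bayes' rule, for hypothesis H and evidence E."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

prior = 0.40  # base rate: share of marriages ending in divorce

# Hypothetical evidence: the couple seems very happy. Assume (invented
# numbers) this is observed in 30% of marriages that later end in
# divorce and 60% of those that do not.
p = posterior(prior, 0.30, 0.60)
print(round(p, 3))  # 0.25
```

Even strongly favorable evidence only moves the estimate from 40% down to 25%; the base rate keeps the answer from collapsing to the gut feeling of "they look happy, so zero."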

This strategy works the same for businesses. When news of a merger hits the wire, instead of asking yourself, "Will these businesses have synergies?", which is a challenging thing to estimate, the question should be, "How often do mergers fail to meet expectations?" In that case, research has shown that more than 60% of mergers destroy value for investors. If you are an investor, this knowledge can save you a few bucks.
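A back-of-the-envelope expected-value calculation shows why the base rate matters here. The 60% failure rate comes from the text; the payoff numbers below are hypothetical and chosen only to demonstrate the arithmetic.

```python
# Expected return of holding through a merger, anchored on the >60%
# failure base rate from the text. Payoffs are assumed for illustration.

p_fail = 0.60           # base rate: mergers that destroy value
loss_if_fail = -0.10    # assumed 10% loss when the merger disappoints
gain_if_success = 0.12  # assumed 12% gain when synergies materialize

expected_return = p_fail * loss_if_fail + (1 - p_fail) * gain_if_success
print(f"{expected_return:.3f}")  # -0.012
```

Under these assumed payoffs the bet is slightly negative in expectation, even though the upside scenario pays more than the downside costs, because the base rate tilts the odds against success.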

Second, overconfidence can lead to a calibration problem in your weighting function. Think in terms of 90% confidence bands around a forecast, widening the bands where you have less certainty. If business analysts were collectively well calibrated, their buy, hold and sell ratings would correlate with subsequent performance; in practice, analyst recommendations show little such correlation. Before committing to a forecast, ask yourself, "How much do I know about this subject?" "Am I sure?" and "What is the counterargument?"
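A quick simulation makes the calibration problem tangible: if your "90% confidence" intervals are too narrow, the truth lands inside them far less than 90% of the time. The outcome distribution and interval widths below are illustrative assumptions, not a model of any real forecasting task.

```python
import random

# Simulating interval calibration: how often does a symmetric band
# around an unbiased point forecast capture the actual outcome?
# Outcomes are drawn from a standard normal purely for illustration.

random.seed(0)

def coverage(half_width, trials=100_000):
    """Share of trials where the outcome falls inside +/- half_width."""
    hits = 0
    for _ in range(trials):
        outcome = random.gauss(0, 1)
        if -half_width <= outcome <= half_width:
            hits += 1
    return hits / trials

# A well-calibrated 90% band for a standard normal is about +/-1.645.
print(coverage(1.645))  # close to 0.90
# An overconfident forecaster might use +/-1.0 and still call it "90%".
print(coverage(1.0))    # only about 0.68
```

The overconfident band misses roughly a third of outcomes while claiming to miss only a tenth, which is exactly the pattern you should check for in your own forecasting track record.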

There are still many other behavioral pitfalls you can train yourself to watch for. By understanding the works of behavioral finance, anyone can improve their decision-making process, and if you are an investment manager, you can more efficiently fulfill your fiduciary duty.