Investing with Perfect Confidence
Perfectly Confident, by Don Moore, is an excellent resource for decision makers that reads like a sequel to Superforecasting, a BAM favorite first highlighted in a post titled, Perpetual Beta. Today, I’d like to supplement those lessons and share some insight into how this work has influenced our investment process at Broyhill.
“I think there’s a 70% chance you’re going to lose all your money, so don’t invest unless you can afford to lose it.”
– Jeff Bezos, Advice to Early Investors
Confidence is easier to assess than actual ability. As a result, we tend to trust those who appear most confident, often ignoring credible evidence to the contrary. Daniel Kahneman joked that experts who admit the full extent of their ignorance should expect to be replaced by more confident competitors. Fortunately, for Amazon shareholders, Jeff Bezos was an exception.
A few years back, we shared our thoughts on Wall Street’s overconfidence in a paper titled, Certainty vs Uncertainty. In contrast to the egotism often found in finance, we noted that Richard Feynman freely admitted that a scientist is never certain: “We know that all our statements are approximate statements with different degrees of certainty; that when a statement is made, the question is not whether it is true or false but rather how likely it is to be true or false.”
This is an important distinction. Yet, investors often struggle with uncertainty, one of the most fundamental attributes of financial markets. Complicating matters further, Moore warns that overconfidence is the “Mother of all Biases.” It’s a gateway bias which impacts almost every other psychological bias. So, it’s critical that we understand the risks of overconfidence and work toward a more balanced perspective.
Moore outlines three forms of confidence, but for our purposes we’ll focus on the one that represents the greatest risk to investors: overprecision.
“Overprecision occurs when you are excessively sure that you know the truth. It leads you to be too confident that your interpretation of the facts is the right one. It leads investors to be excessively sure they know what an investment is worth. It leads you to be too quick to disparage those who disagree with you as either evil or stupid.”
It feels good to be overly precise and optimistic. There is always a long line of people eager to hear comforting lies, while the provider of unpleasant truths sits alone and dejected. All of us love hearing flattering news. But eagerly accepting good news while skeptically ignoring bad news blinds us to risk. And comforting lies are not a great starting point for formulating an investment thesis.
“The human understanding when it has once adopted an opinion (either as being the received opinion or as being agreeable to itself) draws all things else to support and agree with it.”
– Sir Francis Bacon
To avoid this confirmation bias, we must work hard to consider alternate views. This is more difficult than it sounds, because it’s the opposite of what we do naturally. We are better at testing positive hypotheses because we naturally seek out information supporting our viewpoints.
Our default is to consider the data that allows us to say yes because it is easier to identify the presence of something than its absence. So when we start by asking, “Is this true?” we are more likely to generate affirmative answers and find evidence that supports our hypothesis. We may think we are being neutral, but we are far from it.
The way we pose the question can have a surprising influence on our answer. The way we test the hypothesis can lead us to be overconfident in our conclusion. A better approach is to ask the question, “Is this false?” Disconfirmation creates a different perspective, different thinking, and different conclusions.
“If we are uncritical, we shall always find what we want: we shall look for, and find, confirmations, and we shall look away from, and not see, whatever might be dangerous to our pet theories. In this way it is only too easy to obtain what appears to be overwhelming evidence in favor of a theory which, if approached critically, would have been refuted.”
– Karl Popper
Testing our beliefs requires that we ask, “What evidence is there that could disprove my hypothesis?” Testing the alternative hypothesis is the simplest strategy for removing bias from decision making. This is why we begin every investment discussion with a simple question: “How might we be wrong?” And once we have identified the key risks, we consider the probability of each scenario as well as the consequences of being wrong. More on this in a moment.
Reviewing past mistakes is a key part of learning, and post-mortems are the most effective means of doing so. Many investors pay lip service to this exercise. At Broyhill, post-mortems are a critical component of our investment process. Every time we exit a position, we review our investment thesis, assess our performance, and compare our results with our original expectations.
This takes time, but the lessons learned through these formal write-ups and discussions have proved invaluable. Even still, too much of a good thing can come with its own risks, particularly for those perfectionists among us who are constantly striving for improvement. Here, Moore offers an important warning:
“Regret is common when you take a risk that turns out badly. Critical self-reflection can be wise and helpful but don’t let it eat you up. Postmortem analyses can provide insight into what went wrong and are helpful when they show how to avoid those mistakes in the future. But it is common for postmortem analysis to drift into regretful rumination about what might have been. Regret takes the sharp edge of your prior misfortune, uses it to slice open an emotional wound, and can turn postmortem analysis into pathological self-recrimination.”
Bottom line: understanding the cause of death is useful if you want to avoid a similar fate, but don’t beat yourself up too badly. Keep it simple. Ask two questions: What happened? What can we learn from this?
Oscar Wilde was right. Experience is the name we give our mistakes. But that kind of experience can be very expensive. Better to avoid those mistakes altogether. Learning from the mistakes of others is a much better, and much cheaper, approach.
One way to do this is the pre-mortem. Daniel Kahneman recommends pre-mortems as an antidote to overconfidence. Imagining failure is useful because it runs against our natural bias to think positively. At Broyhill, this means identifying risks in advance and brainstorming potential reasons for failure from the outset. The easiest way to do this is to fast forward across your time horizon and imagine you’ve lost half of your investment. Then ask yourself a single question: “What happened?”
Wall Street loves price targets. But a discrete target is rife with problems. Ignoring for a moment analysts’ tendencies toward anchoring, herding, and short-termism, several issues arise from any single-point estimate. First, it is almost guaranteed to be wrong. Making matters worse, by focusing on a single (wrong) estimate, we magnify the overprecision in our judgment. And at the same time, we completely ignore the full range of possibilities.
A probability distribution is a far superior alternative, which is why we forecast multiple scenarios for every investment in the portfolio. Thinking in terms of distributions better calibrates our confidence, by forcing us to consider multiple outcomes, broadening our thinking, and explicitly estimating the probability that we are wrong.
Probability distributions illustrate the relationship between potential outcomes and the likelihood of each occurring. So rather than a single price target, which ignores the risk inherent in any investment, we forecast upside and downside scenarios, in addition to our base case for every position in the portfolio. The average of these scenarios results in our probability weighted expected values which drive our position sizing and portfolio construction.
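To make the mechanics concrete, here is a minimal sketch of a probability-weighted expected value for a single hypothetical position. The scenario prices and probabilities below are invented for illustration, not Broyhill’s actual figures:

```python
# Hypothetical scenario forecasts for one position (all numbers invented).
scenarios = {
    "downside": {"price": 60.0,  "probability": 0.25},
    "base":     {"price": 100.0, "probability": 0.50},
    "upside":   {"price": 150.0, "probability": 0.25},
}

# Probability-weighted expected value: sum of each scenario's
# price target weighted by its estimated likelihood.
expected_value = sum(s["price"] * s["probability"] for s in scenarios.values())

current_price = 85.0
expected_return = expected_value / current_price - 1

print(f"Expected value: {expected_value:.2f}")    # 102.50
print(f"Expected return: {expected_return:.1%}")  # 20.6%
```

Note that the weighted average (102.50) differs from the base case (100), because the upside scenario is further from the base case than the downside is; a single point target would hide that asymmetry.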
At Broyhill, we have built an internal database to track and rank current and prospective investments. For anyone interested in a similar system, available right out of the box, we’d highly recommend reaching out to our friend Cameron Hight at AlphaTheory whose work on this subject is second to none.
Expected value calculations are far from foolproof. The output, like that of any model, is only as good as its inputs. Yet even when the output is wrong, the process of considering the full range of outcomes is more valuable than a single point estimate based on a hunch. By explicitly laying out the assumptions behind each forecast, we can assess their accuracy and consider ways to improve them.
Of the two inputs, scenario values are usually the easier to nail down. Quantifying the probability of each scenario is much more difficult, because doing so is far more subjective. One area with meaningful potential for improvement is the middle ground.
In “The Probability Weighting Function” shown above, note that the distance between the lines is greatest at both ends. At the low end, we put too much weight on the likelihood of extremely rare events (e.g. the fear of flying or of terrorist attacks immediately after 9/11). At the other end of the distribution, we neglect highly likely events (e.g. the refusal to wear a mask because we lack 100% certainty of its effectiveness).
Despite these extremes, most of us have the greatest room for improvement right in the middle of the curve, where we ignore large differences in probabilities in favor of a simpler 50/50 guess. If it’s not an absolute certainty, we are too quick to call it a coin toss, when it rarely is one. The difference between pros and amateurs, according to Annie Duke, is that pros know the difference between a 60/40 bet and a 40/60 bet. Small differences in probabilities in the middle of the range make a big difference.
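The curve described above is commonly modeled with Prelec’s one-parameter probability weighting function, w(p) = exp(−(−ln p)^α); with α below one, it overweights rare events and underweights near-certainties, just as the text describes. A minimal sketch (the choice of α = 0.65 here is illustrative, not from the original chart):

```python
import math

def prelec_weight(p: float, alpha: float = 0.65) -> float:
    """Prelec's one-parameter probability weighting function.
    For alpha < 1, small probabilities are overweighted and
    large probabilities are underweighted."""
    return math.exp(-((-math.log(p)) ** alpha))

for p in (0.01, 0.50, 0.99):
    print(f"true p = {p:.2f} -> perceived weight = {prelec_weight(p):.3f}")
```

Running this shows a 1% event perceived as far more likely than 1%, a 99% event perceived as less than 99%, and the middle of the range compressed toward a coin flip.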
The Wisdom of the Crowd
When everyone in a crowd shares the same biases, it quickly becomes a mob. In other words, the wisdom of the crowd depends on its independence. The crowd is most wise when it is most diverse, as individual errors cancel each other out, resulting in less noise and a stronger signal. Composed correctly, the crowd is wiser than the sum of its parts.
For this reason, discussion can actually diminish the quality of our decisions. One way to maintain the diversity of opinion is to ask people to state their views at the outset. At Broyhill, this means that each member of our investment team inputs their expected value estimates individually, before we discuss the outputs. In addition, we leverage the wisdom of the crowd by going out of our way to understand the perspectives of others whose views differ from our own. In doing so, we reduce the risk of overconfidence.
Another technique we have begun to use is simply asking each other to quantify our conviction when making a statement or offering an opinion. We simply ask, “What is your confidence level in that statement?” This is akin to Annie Duke’s “Wanna bet?” Just extending an invitation to bet on a disagreement can uncover insights that would otherwise remain unsurfaced.
When someone is eager to take the other side of a bet, it’s worth pausing to consider why. In the same fashion, whenever you make an investment decision, it’s worth pausing to consider who is on the other side. You should ask, “Why might I be wrong?” It’s not possible for both parties to be right, so the fact that another rational person is willing to take the opposite view represents useful information that shouldn’t be ignored.
In his book, Principles, Ray Dalio recommends that you find people who disagree with you and try to understand their reasoning. The purpose is not to let anyone think for you, but to understand their perspective. Their conclusions are less important than the reasoning that led them to those conclusions. Doing so increases the probability of being right and reduces the probability of being wrong.
“I’ve missed more than 9,000 shots in my career. I’ve lost almost 300 games. Twenty-six times I’ve been trusted to take the game-winning shot and missed. I’ve failed over and over and over again in my life. And that’s why I succeed.”
– Michael Jordan
“The Lake Wobegon Effect” was named for a fictional little town where all the women are strong, all the men are good-looking, and all the children are above average. But we don’t live on Lake Wobegon and we aren’t all above average.
Calibrating our confidence is hard work. It takes humility to objectively learn from your mistakes. It’s important to recognize and remind yourself that imperfect estimates are better than no estimates. Keeping a record of our expected value calculations allows us to keep score and helps us improve our forecasts over time. The alternative makes it too easy to fall victim to Hindsight Bias.
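One standard way to keep score on probability forecasts, popularized by the forecasting literature behind Superforecasting rather than spelled out here, is the Brier score. A minimal sketch, using an invented track record:

```python
def brier_score(forecasts):
    """Mean squared error between stated probabilities and outcomes (0 or 1).
    0.0 is perfect calibration; a constant 50/50 guess scores 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical record: (probability we assigned, what actually happened)
record = [(0.70, 1), (0.60, 0), (0.90, 1), (0.40, 0)]
print(f"Brier score: {brier_score(record):.3f}")  # 0.155
```

Beating the 0.25 coin-toss benchmark over many forecasts is evidence that your stated probabilities carry real information, which is exactly what a written record of expected value estimates lets you verify.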
We’ll leave the final word on the subject to Don Moore.
“Well-calibrated confidence is exceptionally rare. It requires that you understand yourself and what you are capable of achieving. It requires that you know your limitations and what opportunities are not worth pursuing. It requires that you act confidently based on what you know, even if it means taking a stand, making a bet, or speaking up for a viewpoint that is unpopular. But it also requires the willingness to consider the possibility that you are wrong, to listen to evidence, and to change your mind. It requires an uncommon combination of courage and humility. It takes the perfect amount of confidence.”