“If you prefer Mr. Martin to every other person; if you think him the most agreeable man you have ever been in company with, why should you hesitate?”
This is the second installment in the Algorithms to Live By series. In this piece, we consider the problem of optimal stopping.
Christian and Griffiths introduce the problem with an amusing example: selecting a life partner. This type of problem involves a choice with far too many possibilities to do thorough due diligence on all of them; for example, vetting the entirety of single adults (or at least half of them) to find a companion! Moving from a romantic example to a more prosaic one, an investment research process shares many attributes of this problem type.
There are many more investment opportunities to evaluate than time allows. How can we be sure we’ve found the “best” ones upon which to capitalize? We can learn a great deal from how computer science approaches these types of problems.
When a problem seems unsolvable, often the best way of approaching it is to redefine it. Here, rather than agonizing over how to make the choice, we can instead control its timing. Specifically, the question to consider is how many options one should examine before committing.
One must make a trade-off between the time spent on the search and finding the absolute best selection. The balance is to spend enough time to ensure finding as optimal a choice as the problem requires, but no more than that.
The classic case for optimal stopping is called the “secretary problem.” The parameters are that one examines a pool of candidates sequentially; one cannot measure the absolute suitability of a choice against an independent metric, only a rank order; and one cannot recall a candidate once he or she has been passed over.
The goal of the algorithm is to maximize the chance of making the best overall selection. To approach the solution mathematically, consider a pool of two candidates: if you select the first one, you have a 50% chance of being right, and if you reject it and move to the second, your chances remain identical. So with two candidates, the optimal process wastes no time: just flip a coin!
If you have more options, though, the chance of the next one being the best steadily decreases. For example, if you have five choices, the chance of the fifth one being the best is only 20%, but by the time you reach it you have already irreversibly dismissed the other four. It turns out that you maximize your chances of making the optimal choice by passing over the first 37% (1/e) of the candidates and then committing to the next candidate who ranks above everyone considered thus far.
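This result is easy to verify empirically. The sketch below is a hypothetical Monte Carlo simulation (my own illustration, not code from the book): it shuffles a pool of candidates many times, passes over a given fraction of them, then commits to the first candidate who beats everyone seen so far. The win rate peaks when the "look" phase covers roughly 37% of the pool.

```python
import random

def simulate(n_candidates, look_fraction, trials=20000):
    """Estimate the probability of ending up with the single best candidate
    when we pass over the first `look_fraction` of the pool, then commit to
    the first candidate who beats everyone seen so far."""
    cutoff = int(n_candidates * look_fraction)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n_candidates))  # 0 = best candidate overall
        random.shuffle(ranks)
        # Best rank observed during the look phase (sentinel if cutoff is 0).
        best_seen = min(ranks[:cutoff]) if cutoff else n_candidates
        # Leap at the first candidate better than the look phase; if none
        # appears, we are stuck with the last candidate in the sequence.
        choice = next((r for r in ranks[cutoff:] if r < best_seen),
                      ranks[-1])
        wins += (choice == 0)
    return wins / trials

random.seed(1)
for frac in (0.10, 0.25, 0.37, 0.50, 0.75):
    print(f"look {frac:.0%}: win rate = {simulate(100, frac):.3f}")
```

With a pool of 100, the 37% cutoff lands near the theoretical 1/e ≈ 0.368 success rate, while stopping much earlier or much later does measurably worse. The forced fallback to the last candidate reflects the problem's no-recall rule: if the best candidate appeared during the look phase, no later candidate can trigger a leap, and the search fails.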
So how might we apply this to the investment selection process?
Consider the hypothesis that the market has written off all brick-and-mortar food sales to Amazon a bit prematurely, potentially rendering some grocery store stocks a bargain. Should we exhaustively research all publicly traded grocery store companies before making our selection, or just a subset? Note: this is a hypothetical idea, for the sake of discussion only.
Optimization problems have been solved for a number of different starting parameters; we just need to define the ones that apply to the current investing environment. Where our problem might differ from the classic case presented above is in our ability to recall earlier choices. If we research Kroger, then move on to Supervalu, but later decide we really want to buy some KR, the stock will not spurn us like a scorned lover.
Recall the parameters of the classic “secretary” problem: we had a 0% chance of recalling a candidate once we had rejected them in order to consider the next. For a stock purchase, we may not have the luxury of infinite time to research our short list of stocks before the market moves. However, our chance of returning to an earlier choice is not always 0% once we have completed our research and moved on to the next name on the list. So let's instead consider the case where there is a 50:50 chance of successfully recalling an earlier choice. It is not surprising that this raises the optimal fraction of candidates to consider before committing to the next best choice, from 37% (with no chance of recall) to 61% (with a 50:50 chance of recall). But for our problem, we're not really certain what our chances of recalling an earlier choice are. So how can we use this algorithm effectively given our problem's uncertainties?
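The 61% figure can be checked the same way. The sketch below is again a hypothetical simulation, under one specific model of recall (my assumption, not spelled out in the source): if no candidate after the look phase beats everyone seen so far, we try to go back to the best candidate we passed over, and that attempt succeeds only half the time.

```python
import random

def simulate_with_recall(n, look_fraction, recall_prob=0.5, trials=20000):
    """Pass over the first `look_fraction` of candidates, then leap at the
    first one who beats everyone seen so far.  If no such candidate ever
    appears, attempt to go back to the best candidate from the look phase;
    that attempt succeeds with probability `recall_prob`."""
    cutoff = int(n * look_fraction)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))  # 0 = best candidate overall
        random.shuffle(ranks)
        best_seen = min(ranks[:cutoff]) if cutoff else n
        choice = next((r for r in ranks[cutoff:] if r < best_seen), None)
        if choice is None and random.random() < recall_prob:
            # Nothing beat the look phase, so its best candidate was the
            # best overall; the recall attempt succeeds half the time.
            choice = best_seen
        wins += (choice == 0)
    return wins / trials

random.seed(2)
for frac in (0.37, 0.50, 0.61, 0.75, 0.90):
    print(f"look {frac:.0%}: win rate = {simulate_with_recall(100, frac):.3f}")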
The best way is to turn the recommendations of the algorithm around. Instead of an optimal stop, we can treat the 37% as a minimum percentage of the category to research before committing to an initial purchase. If the fundamentals change during the time required to do this research, then we can table the idea without a capital commitment, knowing that we were not able to complete adequate due diligence for an optimal stop. Moreover, once the 37% research effort is complete and an initial purchase made, we can continue the research on to another stopping point (say, the 61% mark) and either add to the initial position or start another. The attractiveness of the opportunity, combined with the continued availability of the prices for the desired choices, can guide how far along the optimization path the research should proceed.
The most important takeaway from this algorithm is that one should always stop at some point. The optimal is not necessarily the exhaustive.
We often suffer from completeness syndrome: we want to push through to the end of the list to feel that buzz of satisfaction. But for many problems, this is simply not feasible.
So, for us, it is helpful to have a mathematical imperative to call a halt, but not ’til you get enough!