“The number” vs. outcomes of varying likelihood: gaining perspective

How many times have we heard a manager or executive say “Just give me the number!”? This request smacks of frustration and conveys an unrealistic expectation.  While this is slowly changing, single-point estimates are still the goal when making managerial projections at many healthcare providers today. In this post, I want to discuss an alternative approach.

What is “the number”? It is a single value, obtained by calculation, most likely from a set of other numbers, each also assumed to have a single value. The assumption implicit in using a single value for anything is that the value is known with certainty, which is often not true. Worse, “the number” often tends to be an average, since many find this reassuring and assume this measure of central tendency is representative enough to characterize a variety of situations and to base decisions on. Rarely is consideration given to the spread of values around that average, whether expressed as a range, a standard deviation, or some other measure of dispersion. Sadly, people will at times even calculate something just because a tool allows it, regardless of whether it makes sense. Related to this, I wrote a post about tool complexity.

For example, one may need to forecast what profit one’s business is going to make by this time next year, given certain forecast revenues and expenses. Revenues depend on services provided, which are themselves calculated from the volume of customers times the unit price charged per service. Expenses are calculated from salaries and from the supplies used in delivering the service, as well as fixed costs (e.g., rent for the facility where services are provided). Salaries come from hourly rates times total hours, and the cost of supplies from unit cost times quantity. These are informal statements that, when put in mathematical form, constitute a financial model. Models can be relatively simple or very complex, as required.
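To make this concrete, here is a minimal sketch of such a model as a single deterministic calculation. The quantities and their values are illustrative assumptions of mine, not figures from any real business.

```python
# A minimal deterministic profit model: every input is a single "known" number.
# All names and values are illustrative assumptions.

def projected_profit(volume, unit_price, hourly_rate, hours,
                     unit_supply_cost, supply_qty, fixed_costs):
    revenue = volume * unit_price
    expenses = hourly_rate * hours + unit_supply_cost * supply_qty + fixed_costs
    return revenue - expenses

# Single-point estimate: one set of inputs, one answer.
print(projected_profit(volume=12_000, unit_price=85.0,
                       hourly_rate=40.0, hours=15_000,
                       unit_supply_cost=12.0, supply_qty=11_000,
                       fixed_costs=180_000))   # -> 108000.0
```

This is exactly “the number”: change any input and the answer changes, but nothing in the calculation says how likely any particular answer is.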

Now, how many of us can really claim that the numerical values we use are stable or known with 100% certainty, especially over a lengthy time period, as when one needs to make a projection a year or more out? What is the guarantee that an interest rate will remain constant, say? Or that customer volume will not change from what we originally envisioned it to be, based on linear extrapolations from past values? Surely one makes some assumptions at the beginning of a projection effort, but how long do these hold? The managerial focus on arriving at the security blanket of a single-point estimate some time into the future — a forecast that usually ends up being wrong — is misguided to begin with, as it does not allow for volatility other than by recalculating the single-point estimate, by which time it is usually too late to take corrective action. Is there an alternative?

 

Estimating outcomes of varying likelihood

There is. Simply put, the alternative is to understand that the factors involved in a projection are likely to change over time (volatility), and that it is reasonable and necessary to allow for this. In other words, one needs to build a model — a fancy term for a calculation or series of interrelated calculations that depict the simplified behavior of a system — and let input values and other parameters vary over a range. If one performs the calculation several times with these varying values, the result or outcome of the calculation is no longer one number but a range of values as well. In addition to having a good model, the goal then becomes one of understanding what ranges of values are reasonable for these input variations, so that the range of outcomes from the model has real predictive value. Clearly, domain knowledge is crucial in building credible models.
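A minimal sketch of that idea, reusing the illustrative profit calculation from the earlier example and letting two of its inputs wander over ranges I have assumed to be plausible, rather than sit at fixed values:

```python
import random

random.seed(1)  # reproducible runs
outcomes = []
for _ in range(10_000):
    # Two inputs vary over assumed plausible ranges; the rest stay fixed for simplicity.
    volume = random.uniform(10_500, 13_500)
    unit_price = random.uniform(80.0, 90.0)
    revenue = volume * unit_price
    expenses = 40.0 * 15_000 + 12.0 * 11_000 + 180_000
    outcomes.append(revenue - expenses)

print(f"worst: {min(outcomes):,.0f}   typical: {sum(outcomes)/len(outcomes):,.0f}   best: {max(outcomes):,.0f}")
```

Even this crude version already answers a question the single-point estimate cannot: how bad could it plausibly get?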

A model can be simulated — a way of running a series of calculations thousands of times in a few seconds on a computer — by associating likelihoods with ranges of input values. The range of outcome values will also have what is known as a distribution, with some values more likely to occur than others, and with various properties that can be analyzed. Probabilistic models allow for this, as opposed to deterministic ones, where all factors that go into a series of calculations are assumed to be known with certainty. In simulations, the user selects distributions of values — normal, uniform, Poisson, Erlang, and so on — to best reflect the real-life situation being modeled, be it a queue of customers at a bank teller, car traffic through tolls, airport traffic, items on an assembly line, or patients moving from service point to service point within a hospital; the samples themselves are produced by an underlying pseudo-random number generator (PRNG).
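As a sketch of how such distributions might be attached to the same illustrative inputs (the choice of distributions and their parameters here are my assumptions, not a recommendation), using NumPy's random number generator:

```python
import numpy as np

rng = np.random.default_rng(2024)   # the underlying PRNG
N = 100_000                         # number of simulated scenarios

# Each input gets a distribution chosen to reflect how it is believed to behave.
volume = rng.poisson(lam=12_000, size=N)                # customer counts
unit_price = rng.normal(loc=85.0, scale=3.0, size=N)    # price drifts around 85
hours = rng.uniform(low=14_000, high=16_000, size=N)    # staffing needs

revenue = volume * unit_price
expenses = 40.0 * hours + 12.0 * 11_000 + 180_000
profit = revenue - expenses

# The outcome is itself a distribution, not a single number.
p5, p50, p95 = np.percentile(profit, [5, 50, 95])
print(f"5th pct: {p5:,.0f}   median: {p50:,.0f}   95th pct: {p95:,.0f}")
```

The same few lines could just as easily report the standard deviation, the full histogram, or the probability of falling short of a target.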

What is important, then, is the change in mindset needed when making projections: from trying to arrive at a single-point estimate at all costs to focusing instead on a well-reasoned estimate of the range of outcomes and their associated likelihoods. A model that incorporates these capabilities can be quite powerful for understanding the odds of success, the trade-offs between factors and their impact on projections, and scenarios with more than one goal. In addition, awareness is raised of the possibility of negative outcomes — not reaching a goal, or losing money — because these scenarios and their odds of occurring can also be generated on the computer and taken into consideration. Indeed, the expected outcome is often worse than what might be foreseen by naively taking only average inputs into account.
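One hypothetical sketch of why this happens: whenever the calculation contains an asymmetry, such as a capacity limit, the average of the simulated outcomes can be markedly worse than the outcome computed from average inputs, and the simulation also puts a number on the chance of losing money. The capacity, margin, and demand figures below are assumptions for illustration only.

```python
import random

random.seed(7)
CAPACITY = 12_000        # assumed maximum volume that can be served
UNIT_MARGIN = 25.0       # assumed contribution margin per unit served
FIXED_COSTS = 290_000    # assumed fixed costs

def profit(demand):
    served = min(demand, CAPACITY)   # cannot serve more than capacity
    return served * UNIT_MARGIN - FIXED_COSTS

# Naive single-point estimate: plug in the average demand.
naive = profit(12_000)

# Probabilistic estimate: demand fluctuates around that same average.
runs = [profit(random.gauss(12_000, 1_500)) for _ in range(50_000)]
simulated_mean = sum(runs) / len(runs)
p_loss = sum(r < 0 for r in runs) / len(runs)

print(f"naive: {naive:,.0f}   simulated mean: {simulated_mean:,.0f}   P(loss): {p_loss:.0%}")
```

The naive calculation shows a small profit; the simulation, fed the very same average demand, shows an expected loss and a sizable probability of ending up in the red, because good months are capped while bad months are not.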

One need not make life harder than it needs to be and, in my view, being humble doesn’t hurt. As Charles Kettering said, “a problem well stated is a problem half solved.” Certainly, having the right expectations as to what can be calculated — given the inherent volatility of important factors — and knowing the limitations of the model used when researching an answer to a question can help focus effort and minimize waste. One can also avoid the disappointment of missed predictions by not setting oneself up to fail with unrealistic expectations as to their accuracy, as happens when a single value is taken as the desired outcome of a projection.

In closing, I want to point out that it is quite possible to get caught up in the glamour of ever more powerful computing tools and grow overconfident in results obtainable at the press of a button. A simulation is just that, a simulation, and any model is an incomplete representation of reality based on certain assumptions. Models and their outputs must always be checked against the real world and actual outcomes to confirm their validity, something that is not always done. Reality changes, and even the best model must change with it to retain its worth. Over-reliance on modeling tools at the expense of measuring and experimenting, when it is possible to do so, can also be a costly mistake.
