Forecasting models: fundamentals

Forecasting methods and models can be qualitative or quantitative.  In this post, I discuss some of their characteristics and differences.

Types of forecasting models

Qualitative models rely on human judgment — a domain expert, a consensus panel, Delphi-based interaction.  Quantitative models rely on data and are known as intrinsic if driven by the historical data of the quantity being forecast, or extrinsic if oriented to finding relationships among variables.

Intrinsic models depend on the future being similar to the past, a not insignificant assumption. Data points are decomposed into trends, cyclical variations, seasonal variations, and random effects (noise), at which point one can decide how best to act upon the data.  When curve-fitting, both linear and non-linear models are used, the latter being more complex to develop and more computationally intensive. To extrapolate from current and past data values to future ones, methods such as moving averages are employed, possibly discounting older data more heavily than recent data. With adaptive filtering, a predictive model can be updated on the fly so that new data is put to use immediately.
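To make the moving-average idea concrete, here is a minimal sketch in Python of an exponentially weighted moving average, one common way of discounting older data more heavily; the function name ewma_forecast and the smoothing factor alpha are illustrative choices, not something prescribed above.  The same update can be applied as each new observation arrives, which is the spirit of adaptive filtering.

    def ewma_forecast(history, alpha=0.3):
        """One-step-ahead forecast from past observations.

        alpha near 1 trusts recent data; alpha near 0 trusts the long run.
        """
        if not history:
            raise ValueError("need at least one observation")
        level = history[0]
        for value in history[1:]:
            # Each new observation nudges the running estimate; applying this
            # update as data arrives is adaptive filtering in miniature.
            level = alpha * value + (1 - alpha) * level
        return level

    # Example: a short series with a mild upward drift plus noise.
    demand = [102, 98, 105, 110, 107, 115, 118]
    print(ewma_forecast(demand, alpha=0.4))  # forecast for the next period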

Extrinsic models attempt to detect patterns and correlations among independent and dependent variables. A more complex model, incorporating more variables, is not necessarily better. One should favor the simplest approach that works until a more complex one that explains things better is available — not quite the same as saying that the simplest explanation is the correct one, a common fallacy!
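As a minimal sketch of what an extrinsic model can look like in code, the least-squares fit below relates one independent variable to one dependent variable; the variables (advertising spend and sales) and the figures are purely illustrative, not taken from any real data set.

    def fit_line(x, y):
        """Least-squares fit of y ~ slope * x + intercept."""
        n = len(x)
        mean_x = sum(x) / n
        mean_y = sum(y) / n
        cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
        var = sum((xi - mean_x) ** 2 for xi in x)
        slope = cov / var
        return slope, mean_y - slope * mean_x

    ad_spend = [10, 12, 15, 18, 20, 24]   # independent variable
    sales    = [41, 45, 52, 58, 61, 70]   # dependent variable
    slope, intercept = fit_line(ad_spend, sales)
    print(f"predicted sales at a spend of 22: {slope * 22 + intercept:.1f}")

Adding more variables to such a model only pays off if the extra complexity genuinely improves predictions on data the model has not yet seen.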

Now, a caveat about data use.  Consider a desired future state that requires a departure from traditional behavior, such as a hospital deciding to hold its staff to a higher standard of accountability.  An instance could be heavily modifying inpatient plans of care so as to minimize the harm of premature discharges, which can lead to avoidable readmissions.  In such a case, the predictive value of past data may be quite low.  This is hardly a situation to be glossed over, yet the impact of a changing context is often not weighed carefully when assessing the forecasting value of the reams of data collected.  One should also guard against the insidious thought that new tools and systems offer a way out of poorly understood situations; only a clear grasp of data and processes will eventually come to the rescue.

As we gain insight into data, we should also simplify models we first overbuilt. Models that are as comprehensive as needed yet minimal in complexity are worth aiming for. Many extrinsic models are built as “black boxes”, by pairing input and output data sets, which are used first for discovering relationships and later for prediction.  Attention must also be paid to conditioning the data before building the model, and to using carefully bounded data (e.g. sensible ranges of values) and derived data (e.g. ratios) in addition to, or in place of, raw measurements.  Considerable skill is required to avoid overfitting a model to the data, which leads to poor predictive ability when the model is used on new data.
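The overfitting risk can be seen in a few lines.  The sketch below assumes numpy is available (the post itself names no tools): it fits a straight line and a sixth-degree polynomial to the same noisy, roughly linear training data, then checks both on held-out points.  The flexible model hugs the training data yet tends to go badly astray on the points it has not seen.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.arange(20, dtype=float)
    y = 2.0 * x + 5.0 + rng.normal(scale=3.0, size=x.size)  # roughly linear data plus noise

    x_train, y_train = x[:15], y[:15]   # fit on the first 15 points
    x_test,  y_test  = x[15:], y[15:]   # hold out the last 5 points for checking

    for degree in (1, 6):
        coeffs = np.polyfit(x_train, y_train, degree)
        test_error = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree}: held-out mean squared error = {test_error:.1f}")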

Just as with the degree of granularity and fidelity of operations models used in simulation, one needs to develop a keen sense of when to stop developing a forecasting model.  Anything beyond what the original purpose requires — for example, is one trying to predict whether the return on an investment is likely to be positive or negative, or is one actually looking for a specific figure? — incurs unnecessary costs and is yet another form of waste masquerading as a more sophisticated solution.

 
