One of the most widely heard terms in healthcare circles today is ‘outcomes.’ This is both good and bad. It is certainly positive that outcomes appear to act as a rallying flag for improvement efforts. A potential downside exists, however, depending on how things are done once an outcome is deemed important enough for management to allocate improvement resources to it.
Think length of stay, mortality, wrong-site surgery, hospital-acquired infections, and falls on the clinical side, and revenue and surgical inventory turns on the financial and supply chain side, to name a few. These are all categories whose outcomes are often logged and reviewed periodically, both internally by staff tasked with monitoring patient safety, materials usage, and financial performance, and externally by auditing organizations.
In many instances, and under increasing pressure from regulatory entities, management may be unclear about what is truly important and about the interrelationships among the many factors and indicators of performance. After struggling for a while, juggling and dropping more than a few balls, and in an effort to look decisive, a few indicators are typically chosen from a myriad of possibilities, and management proceeds to intervene, often abruptly, in an attempt to rectify things. For example, a drop in revenue may well trigger an almost panicked, reflexive response: launch a marketing campaign targeting a cross-section of customers and spend funds that are already in short supply.
The belief that one can improve an outcome by acting directly on said outcome is illusory and can be costly. Another way of saying this is that aggregate data — often the only data provided to upper management — do not suffice to actually understand how best to improve matters.
An outcome is only the result of one or more processes. The need to focus on those processes does not go away because management has officially targeted a specific outcome for improvement, or because of corporate pressure to achieve results quickly. As one may suspect, having 1,000 workers dig a hole does not get it dug 1,000 times faster than one worker would; it over-complicates the work and dooms the effort to failure.
The seemingly simple idea of looking at the processes leading to an outcome, apparently obvious to many, typically does not get executed well in practice. The all too human tendency to avoid what is thought of as tedious, detailed work trumps common sense, and conclusions are reached ‘intuitively’ without the work needed to back them up with evidence, despite everyone’s stated best intentions. On top of this, there is usually managerial pressure from above to ‘get things done.’ Deadlines are imposed arbitrarily, often based on aggregate data, which caps how much understanding can be achieved of a situation that, until that moment, had received little attention and had never been successfully analyzed.
I believe there are two obstacles to achieving substantial progress in improving outcomes. The first, not addressed here, is simply being in denial: wanting to project an image of one’s performance that is much better than reality and utterly divorced from it. This may be due to a combination of bad politics, fear, and entrenched poor habits. Clearly, in this situation, improvement initiatives will be given lip service and not much more.
The other obstacle, to my mind, is that not being in denial, meaning well, and choosing a proper set of outcomes to focus on do not suffice if one does not then do the grunt work of data collection and analysis at the required level of detail. All too often, even when root causes are analyzed and processes are mapped out, the work is oriented more toward ‘checking the boxes’ than toward discovery and solving a recurring problem for good. True discovery, such as one may attempt during the development of an A3, precludes premature conclusions.

Unfortunately, many in operations management are uncomfortable with the idea of their staff engaging in what they think of as applied research. This, however, is exactly what is needed to achieve greater understanding of a problem or situation. Instead, time limits are imposed, and other constraints such as scarce resources cause the analysis to be short-changed. This is worse than doing no analysis at all, because the illusion persists that the proper work was done. Particularly when taking the first steps with a new process improvement (PI) methodology, one should be mindful that no tool or suite of tools suffices if it is not applied at the correct level of detail.
What, then, is the correct level of detail? In measurement, as in analysis and correction, the concept of ‘fitness for purpose’ rules. Knowing what one is trying to achieve will dictate, or at least limit, the type and amount of data to be collected and the depth of the analysis. Since processes are sequences of steps, each of whose outputs becomes an input to the steps that follow, one has to understand at least 1) how long each step takes (throughput), and 2) whether the step is being performed correctly (yield). These measures of time and correctness are independent of any PI methodology. So a ‘bounding’ answer to the question above is that, unless these two measures are well understood at a granular enough level, one cannot be said to have a sufficient grasp of the process. Often, of course, more is needed.
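To make the two measures concrete, here is a minimal sketch of computing per-step time and yield from raw (not aggregate) records. The step names, durations, and `StepRecord` structure are hypothetical, purely for illustration:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class StepRecord:
    step: str        # name of the process step (hypothetical)
    minutes: float   # time this instance of the step took
    correct: bool    # was the step performed correctly?

def step_metrics(records):
    """Group raw records by step; compute mean duration and yield per step."""
    by_step = {}
    for r in records:
        by_step.setdefault(r.step, []).append(r)
    return {
        step: {
            "mean_minutes": mean(r.minutes for r in rs),
            # yield = fraction of instances performed correctly
            "yield": sum(r.correct for r in rs) / len(rs),
        }
        for step, rs in by_step.items()
    }

# Hypothetical raw data for two steps of an admissions process
records = [
    StepRecord("register", 12.0, True),
    StepRecord("register", 18.0, False),
    StepRecord("triage", 7.0, True),
    StepRecord("triage", 9.0, True),
]
metrics = step_metrics(records)
# e.g. 'register': mean 15.0 minutes, yield 0.5
```

Nothing here is methodology-specific; the point is that both measures require step-level raw data, which an aggregate report cannot supply.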
It is unlikely that the requisite level of understanding will be achieved by developing a high-level process map or flowchart alone. Indeed, several iterations are required, each of increasing detail, each ‘blowing up’ the individual steps of the level above it into a series of sub-steps. Alas, I cannot say that I have often seen this done in healthcare, or that many of those involved in improving a process could adequately discuss the yield and throughput of each of its steps.
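The iterative ‘blow-up’ can be sketched in a few lines: each pass replaces a step with its known sub-steps, adding one level of detail at a time. The step names below are invented for illustration:

```python
# Top-level process map (hypothetical step names)
process = ["admit", "treat", "discharge"]

# Known decompositions of higher-level steps into sub-steps
detail = {
    "admit": ["register", "verify insurance", "assign bed"],
    "treat": ["assess", "order meds", "administer meds"],
}

def expand(steps, detail):
    """One iteration: replace each step with its sub-steps, if known."""
    out = []
    for step in steps:
        # steps without a known decomposition are kept as-is
        out.extend(detail.get(step, [step]))
    return out

level2 = expand(process, detail)
# ['register', 'verify insurance', 'assign bed',
#  'assess', 'order meds', 'administer meds', 'discharge']
```

Repeating `expand` with ever-finer `detail` mappings is the programmatic analogue of the iterations described above; the effort lies not in the mechanics but in doing the fieldwork to fill in each level.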
Returning to the revenue example mentioned previously, a proper mapping of all activities that take place at a hospital from the moment patients arrive to when they are discharged may reveal that not all charges are being captured, that unit conversions are misapplied across medication orders, or that data in one system are being overwritten by data from another. Addressing these potential faults by focusing on raw rather than aggregate data, once all process steps are identified, will likely save time and prevent much-needed funds from being misspent on initiatives that may not be evidence-based, such as an impromptu marketing campaign to increase revenue.
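A toy comparison shows why raw data matter here. With invented order IDs and amounts, an aggregate view only says revenue is down; a per-order reconciliation of services performed against charges posted pinpoints exactly which charge leaked:

```python
# Hypothetical raw data: services performed vs. charges actually posted
performed = {"order-1": 250.0, "order-2": 400.0, "order-3": 125.0}
charged   = {"order-1": 250.0, "order-3": 125.0}   # order-2 never billed

def missed_charges(performed, charged):
    """Return orders that were performed but never captured as charges."""
    return {oid: amt for oid, amt in performed.items() if oid not in charged}

missing = missed_charges(performed, charged)
# The aggregate view ('revenue is down 400.0') hides which charge leaked;
# the raw, per-order comparison identifies order-2 directly.
```

The fix suggested by the raw data (repair the charge-capture step) is cheaper and more targeted than the fix suggested by the aggregate (spend on marketing to raise revenue).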
It takes experience, not to mention intellectual curiosity and a ferret-like dedication to digging, as well as enlightened management, to really get to the bottom of things and find the proverbial answer to a problem. Understanding cannot be rushed, and a laser-like focus must be trained on raw (not aggregate) data at a satisfactory level of detail if sustainable progress is to be made. The alternative is to be surprised again and again by outcomes that fail to meet our expectations.