Optimizing distributed processes locally, and why it may not be a good idea

In healthcare, staff often work in silos, shaped by narrow specialization or by a physical area of focus: the ED, an OR, the Cath Lab, Labor and Delivery, and so on. Staff driven to reduce harm and waste thus embark on process improvement (PI), frequently thinking that fixing their own workflows and workspaces will suffice. This is often not the case, and it bears rethinking from a broader perspective.

A while back I was chatting with ER staff at a client site who were focused entirely on improving efficiency and throughput in their own area. They explained how registration upon patient arrival had been sped up, how patients in the waiting area were being engaged periodically and personally, how triage had been improved, and so on. The same could be said for the “back end” of the ER, the area where “emergency inpatients” are actually seen to: more beds had been put in, among other changes.

They also remarked that throughput was not necessarily better, or was so only sporadically. No clear explanation was forthcoming; rather, there were several competing attempts at one, none convincing enough to sway the others.

I spent some time thinking about the flow of patients through the ER, and how they are eventually discharged (home), transferred (to another facility), or admitted as inpatients to the main hospital.  Many issues came to the fore as I did this: inconsistencies in use of paper vs. the EMR, wait times that were unreasonable, disagreements as to when “time stamps” should be assigned to determine the start or end of a given sub-process, etc.  To wit, when is a patient truly discharged from the ER: does this occur when a physician orders it, when a nurse enters the order and it becomes part of the “official record”, or when a patient is physically moved out of the ER?

States, events, and transitions in model-building

I also thought about previous experiences in analyzing engineering processes, where it is rare that one can improve matters by paying attention only to one sub-process.  Therefore, I tried to understand the workflow beyond the ER as well, and came up with a model for simulation purposes, which is reflected in the illustration below:



The picture shows the triage and back end of the ER, as well as the client’s hospital. The abbreviations stand for, respectively, Leave Before Treatment (LBT), Waiting To Be Seen (WTBS), Treatment In Progress (TIP), Waiting To Be Admitted (WTBADM), and Admitted (ADM). They all refer to various patient states and to the flow of the patient within the facility.
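To make the state vocabulary concrete, here is a minimal Python sketch of those patient states and the forward transitions between them. The transition structure is my own reading of the flow described in this post, not the client's actual model, and it deliberately omits the discharge/transfer exits and the back-flow discussed next.

```python
from enum import Enum, auto

class PatientState(Enum):
    WTBS = auto()    # Waiting To Be Seen (triage)
    LBT = auto()     # Leave Before Treatment
    TIP = auto()     # Treatment In Progress (ER back end)
    WTBADM = auto()  # Waiting To Be Admitted
    ADM = auto()     # Admitted to the main hospital

# Allowed forward transitions (simplified; discharge/transfer not modeled)
TRANSITIONS = {
    PatientState.WTBS: {PatientState.LBT, PatientState.TIP},
    PatientState.TIP: {PatientState.WTBADM},
    PatientState.WTBADM: {PatientState.ADM},
    PatientState.LBT: set(),
    PatientState.ADM: set(),
}

def move(state, new_state):
    """Advance a patient, rejecting transitions the model does not allow."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state.name} -> {new_state.name}")
    return new_state
```

For example, `move(PatientState.WTBS, PatientState.TIP)` succeeds, while trying to move an admitted patient back to triage raises an error, which is exactly the sort of rule a simulation model needs to make explicit.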

Oddly, the flow is not only forward and away from the ER, as might be suspected if one only thinks from an ER-centric perspective and in terms of discharge, transfer, or admission.  Indeed, there is a back-flow. It is clear that sometimes the ER is used for holding patients because the hospital is not ready to receive them, although their status already shows “discharged from the ER” and “admitted.”  Regardless of what the records show, it seemed to me the patient may at times be in a sort of limbo.

The hospital may not be ready for a variety of reasons, including the mundane one of beds not being available. I researched how the bed-readying process occurs, and below is a simplified visualization:

[Diagrams: the bed-readying state cycle, with a refined version below it]


The diagrams above illustrate state-based modeling, where events trigger transitions from one discrete state to another. If the events repeat, the sequence becomes a cycle. The bottom diagram is a refinement of the one above it. At this level of granularity, one can focus on how bed cleaning actually occurs and try to improve the process and eventually speed it up. That, in itself, may get discharges from the ER accomplished sooner and more smoothly, and may even reduce costs if management’s focus has been on increasing bed capacity as opposed to reducing “bed downtime.”
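The event-triggered cycle can be written down as a small state machine. Since the diagram itself is not reproduced here, the state and event names below are my assumptions about a typical bed-turnover cycle, not the client's exact labels:

```python
# The bed-readying cycle as an event-driven state machine.
# State and event names are illustrative assumptions.
BED_CYCLE = {
    ("OCCUPIED", "patient_discharged"): "DIRTY",
    ("DIRTY", "cleaner_assigned"): "CLEANING",
    ("CLEANING", "cleaning_done"): "READY",
    ("READY", "patient_assigned"): "OCCUPIED",
}

def on_event(state, event):
    """Return the next bed state; unknown (state, event) pairs are errors."""
    try:
        return BED_CYCLE[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} not valid in state {state!r}")

# Repeating the four events brings the bed back to where it started: a cycle.
state = "READY"
for event in ("patient_assigned", "patient_discharged",
              "cleaner_assigned", "cleaning_done"):
    state = on_event(state, event)
assert state == "READY"
```

Shortening the time spent in the DIRTY and CLEANING states is precisely the “bed downtime” lever mentioned above.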

These states have been simulated as sub-processes of the bigger picture/model shown earlier.  To my mind, this is an example of how discrete-event simulation (DES) can help understand what is really going on, quiet the noise of uninformed opinion by showing the interconnection of multiple workflows and the impact events at one location can have on outcomes at another, and make way towards sustainable change.
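To show what a DES of this kind looks like in miniature, here is a self-contained Python sketch built on a `heapq` event queue rather than a simulation library. All rates and capacities are made-up illustrative values, not the client's data. The point it makes is the one argued above: ER boarding time is driven by downstream bed turnover, a workflow that sits outside the ER entirely.

```python
import heapq
import random

random.seed(42)

SIM_HOURS = 24 * 7        # one simulated week
MEAN_ARRIVAL = 1.5        # hours between ER patients needing admission (assumption)
MEAN_TREATMENT = 3.0      # hours in the ER back end (assumption)
MEAN_STAY = 12.0          # hours an admitted patient occupies a bed (assumption)
MEAN_CLEANING = 1.5       # hours to turn a bed around (assumption)
N_BEDS = 20               # inpatient beds (assumption)

events = []               # priority queue of (time, seq, kind)
seq = 0

def schedule(t, kind):
    global seq
    heapq.heappush(events, (t, seq, kind))
    seq += 1

# Pre-generate arrivals of ER patients who will need admission
t = 0.0
while t < SIM_HOURS:
    t += random.expovariate(1 / MEAN_ARRIVAL)
    schedule(t, "arrival")

clean_beds = N_BEDS
boarding = []             # times at which patients began waiting for a bed
waits = []                # how long each admitted patient boarded in the ER

while events:
    now, _, kind = heapq.heappop(events)
    if kind == "arrival":
        schedule(now + random.expovariate(1 / MEAN_TREATMENT), "ready_to_admit")
    elif kind == "ready_to_admit":
        boarding.append(now)
        schedule(now, "try_admit")
    elif kind == "bed_cleaned":
        clean_beds += 1
        schedule(now, "try_admit")
    elif kind == "try_admit":
        if clean_beds > 0 and boarding:
            started = boarding.pop(0)
            waits.append(now - started)
            clean_beds -= 1
            # occupancy and the subsequent cleaning collapsed into one event
            stay = random.expovariate(1 / MEAN_STAY)
            clean = random.expovariate(1 / MEAN_CLEANING)
            schedule(now + stay + clean, "bed_cleaned")

print(f"admitted: {len(waits)}, mean ER boarding time: "
      f"{sum(waits) / len(waits):.2f} h")
```

Rerunning with a longer `MEAN_CLEANING` or fewer beds inflates the boarding times without any change to the ER's own workflow, which is the interconnection effect the full model demonstrated.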

No distributed process can be fixed or improved as a whole if only local optimization is attempted. While not shown here, attempts at optimizing things under the narrow perspective afforded by operating in silos may well make matters worse and fixes more costly overall, as staff unknowingly pull in different, and even opposite, directions.









