Big data, fancy tools, one-click-itis: science or magic?

As someone with a lifelong involvement in problem-solving and IT, I see a trend that gives me pause. It concerns users who, overwhelmed by the increasing complexity of their work, employ ever more sophisticated computing tools and technology in haphazard fashion, often lacking the know-how to gauge the validity, usefulness, or limits of applicability of a given answer or result. Many are left with only one option: to simply believe what the tool tells them and accept it as fact. Their reluctance to ask questions stems both from a sheer inability to do so and from the sizable capital investment often already made.

An ounce of understanding vs. a pound of number crunching

Barely keeping their heads above water in an ocean of data made choppier by constantly changing regulatory constraints, healthcare workers reach for any tool that promises to digest and crunch “big data” and give them some relief in return. There is no shortage of vendors promising exactly that, with sales pitches that increasingly play on customers’ fear of being left out of the promised land and adopt the “do or die” delivery of certain TV evangelists addressing the masses. Rarely, however, do tool vendors have a clear grasp of, or concern for, their customers’ backgrounds and domain expertise.

With upper management relentlessly focused on presentation rather than substance, all agog over fashionable operational dashboards, sparklines, colorful bubble charts, and analytics to drive “evidence-based decision-making,” many find themselves relying daily on the output of these new, fancy tools while staying at arm’s length from the workings under the hood. Everybody talks the talk, and many pat themselves on the back for being on the “bleeding edge” of something or other, while seeking reassurance by speaking in jargon with colleagues who are clearly as uncomfortable as they are. Being able to view a pie chart, unfortunately, does not equate to understanding how the pie was baked, and tells one nothing about the ingredients used or the chef’s skills.

To me, the downside of these insights gained effortlessly through awe-inspiring tools is the understanding that is lost when you do not know how a result was arrived at. One may argue that this has always been the case with any automated IT tool. And one may decide, as many already have, that what works for them is to focus on a very narrow area of specialization and leave the data to the data geeks.

My question, then, is this: how will one know that the answers one is getting are accurate and fit for the purpose required? With the volume of data currently being fed into tools and systems that promise crunch-driven insight at the other end, with this alleged enlightenment essentially one or two mouse clicks away, and with the time pressures everyone is under, who is not tempted to just go along? I raised this very problem in earlier posts on simulation and on naive over-reliance on flight instrumentation: the ease of setting up an animation of an industrial workflow, or the reams of real-time flight data streaming in from several screens, can easily deceive end users into believing they are in control of a situation and understand its fundamentals. That this is in many instances simply not the case has been proven by a variety of incidents and poor outcomes, in healthcare as well as in industry, many causing great harm and some costing human lives.

Historically, what we did not understand was viewed as magic. Over time, science stepped in to correct many misconceptions, dispel fears, and provide explanations based on reason. In a sense, blind reliance on tools whose algorithmic complexity is beyond most of their intended users’ ability to fathom puts us squarely back in the realm of magic as far as understanding how an answer is arrived at. Traceability of the reasoning or of the calculation process is part of what is required, but even that does not obviate the need for a level of know-how that many users simply lack, since traceability only gives one passive visibility into a sequence of steps taken and choices made. How can one decide whether those steps and choices are correct or applicable in a given situation, and what limitations are inherent in a line of reasoning, without understanding one’s own knowledge domain well enough to begin with? The potential for great waste is clear.

Perhaps what is required is not the brute-force approach of crunching big data in the hope of gaining an ounce of insight, at least not by itself. Human judgment in general, and in particular the education and ability to narrow a specific problem down to the data that are relevant and then solve it, are not developed overnight or without effort. Those who think that these skills have somehow become less relevant, and who over-enthusiastically listen to the siren song that supports this view, may need to strap themselves to the mainmast or face a rude awakening. Without the confidence and self-reliance that come from developing our own understanding of specific knowledge domains through education and its diligent application to real-world situations, we can only put our faith in the know-how and good intentions of others, and in doing so abdicate important responsibilities.
