If you are reading this blog, three things are likely true: (1) you are somehow involved in healthcare IT, (2) you can be classified somewhere on the information-nerd spectrum, and (3) you are looking for answers to the issues that plague our industry. If you fit into those three categories (as I do), you almost certainly realize that if our industry had a “coin of the realm,” it would be information.
We collect it, receive it, store it, display it, send it, aggregate it and analyze it. So it is a safe bet to say that information is important to us. Unfortunately, it is also the bane of our very existence. Why? The answer is simple: sometimes it lies to us.
Just as “good data” is useful and illuminating, “bad data” is noisy and deceiving.
In healthcare IT, information can function in a number of roles. We have reference information that comes from elsewhere and is leveraged to define a portion of our application. We have master information, which is typically something we create and manage to define other, more local, portions of our application. We have instance information, which moves through and accumulates in our application. Regardless of which type of information you are considering, being able to identify which pieces of information are good and which are bad can dramatically affect the performance of your application.
While instance information is typically the most abundant form of information in an application, the impact of bad instance information is typically limited to the instance or the information it is directly related to. Reference information and master information, however, are typically what the instance information relies on to define its identity, classification and other critical meta-data that defines the path and patterns of the instance, and all other instances, through the application. As a result, bad reference or master information can have a significant impact on the performance of an application.
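To make the distinction concrete, here is a minimal sketch of the three roles. The class names, fields and codes are invented for illustration; they do not come from any particular system.

```python
from dataclasses import dataclass

# Reference information: defined elsewhere (e.g., a standard code system)
# and imported to define a portion of the application.
@dataclass(frozen=True)
class ReferenceCode:
    system: str   # e.g., "LOINC"
    code: str
    display: str

# Master information: created and managed locally to define other,
# more local, portions of the application.
@dataclass
class MasterLabTest:
    local_id: str
    name: str
    maps_to: ReferenceCode  # instance data relies on this mapping

# Instance information: moves through and accumulates in the application.
@dataclass
class LabResult:
    patient_id: str
    test: MasterLabTest
    value: float
```

A bad `LabResult` affects one record. A bad `MasterLabTest` (say, a wrong `maps_to`) silently corrupts every instance that references it, which is why bad master or reference information hits so much harder than bad instance information.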
As we as an industry have tried to leverage our information in bigger and better ways, it has created an awareness of the importance of data quality, data management and data governance. But before we can discuss any of these notions, let's first ask ourselves a question: “What is the definition of bad information?” The assumption is that if we can identify bad information and remove it, whatever remains will be good information.
For the purposes of this article we will consider the concept of bad information within a specific environment: a software application.
If you were to ask most people who is impacted by bad information in a healthcare application, they would likely respond with “the user”, typically a provider or someone who supports the care process. They might also respond with “the patient”. In this case the patient is a manifestation of the instance information, so that is also a valid response. Some, hopefully the engineers, would also respond with “the application”. This is true, and it is something people do not always consider.
I'm sorry Dave, I'm afraid I can't do that
In a modern application the system does not just hold and display information to the user; it is also a consumer of information. In fact, the application itself is the most susceptible to the impact of bad information. It does not have the ability to independently question whether the information is good or bad – it must believe that the information is good in order to function. This should not be a surprise: the annals of science fiction are littered with the rusted corpses of robot villains that, when presented with the notion that their data was incorrect, tragically self-destructed with smoke billowing from their cooling ports and shuddering cries of “does not compute!”
My point is, there is a significant difference between what a human considers bad information and what a software application considers bad information. (There is more to be said about this, but I am saving it for the next post.)
In software design we spend a great deal of time trying to document and understand the players involved in the use of our solutions. This same consideration is rarely extended to the software itself, which can be a fairly significant oversight. The imbalance can result in an application that streamlines data entry for our user population but is totally useless when the software is trying to assist the provider with decision support or research. In order to truly manage information, we need to understand and respect this dynamic. For lack of a better term, let's call this awareness digital empathy.
Digital Empathy: Understanding that a modern healthcare application is a legitimate consumer of all variants of information and must act on the available information in a literal and logical manner.
What are the major axioms of digital empathy? Let’s try to think like an application.
1. Words are meaningless
When I watched the Charlie Brown Thanksgiving special as a kid, I would always be annoyed when an adult spoke to one of the Peanuts kids. “Waa wa wah, wah wa wawa wah.” What the heck are they saying, and why can't I understand them? This is what it is like for the application whenever someone enters free-text information. Software relies on structured data sets and terminologies to process information. Unstructured free text is just “Wa wah wawa wah” that it can store and display later for another Peanuts parent to interpret (not Pigpen's parents – they were hauled off by social services).
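A minimal sketch of the difference, with invented function and field names: the application can act on a coded entry, while free text can only be stored and displayed.

```python
def can_act_on(entry: dict) -> bool:
    """An application can only reason over coded, structured fields.

    Free-text narrative can be stored and displayed, but without a code
    the software has nothing to match its logic against.
    """
    return entry.get("code") is not None and entry.get("system") is not None

# A structured entry (the SNOMED CT code here is just an example).
structured = {"system": "SNOMED CT", "code": "73211009", "text": "diabetes mellitus"}
# A free-text entry carrying the same clinical idea, but no code.
free_text = {"text": "pt c/o high sugars, likely diabetic"}

can_act_on(structured)  # True: logic can fire on the code
can_act_on(free_text)   # False: "waa wa wah" as far as the software is concerned
```

Both entries mean roughly the same thing to a human reader; only the first one means anything to the application.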
2. Every term is sacred
There is a part of the human brain called the reticular activating system, or RAS. The RAS is the mechanism that constantly monitors the world around you, tunes out the noise and alerts you when something needs your attention. Software applications do not have an RAS, so every piece of information is viewed as relevant unless there is specific logic or content that says otherwise. Part of making this leap in understanding is realizing that ALL the information we feed software is important to the software, regardless of where it comes from.
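One illustrative sketch of what a missing RAS looks like in practice (the records and codes are made up): a junk record that a human would instantly tune out is counted by the software unless explicit logic filters it.

```python
# Results as the application receives them. A human glances at the second
# record and ignores it; the software has no such instinct.
results = [
    {"code": "718-7", "value": 13.5},
    {"code": "TEST",  "value": 999.0},  # junk record a human would tune out
    {"code": "718-7", "value": 14.1},
]

# Naive average: every record is "sacred" unless logic says otherwise.
naive_avg = sum(r["value"] for r in results) / len(results)  # ~342.2

# Only explicit content (here, a known-code filter) restores the noise filter.
known = [r for r in results if r["code"] == "718-7"]
filtered_avg = sum(r["value"] for r in known) / len(known)   # ~13.8
```

The filter is the "specific logic or content" the axiom describes; without it, the junk value dominates the result.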
3. Terminology matters
Software only knows the code systems it knows. For an application to consume information, it can't just be a structured code; it must come from the code system the application is expecting. When I am orchestrating information across an enterprise, consistency in terminology is huge. The exact same application, operating in multiple locations with different local terminologies, is not the exact same application. Also, in most applications the ‘words are meaningless’ axiom applies to terminologies as well. The software pays attention to the code and where it believes it came from (the code system or dictionary). If you change the description on a code and, in doing so, change its meaning… guess what… “waa waah wawa wah wawa”…
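A sketch of this axiom as a decision-support rule (the rule, system names and code list are invented for illustration): the code string alone is never enough, because the rule only fires when the code arrives in the system it expects.

```python
# A decision-support rule keyed to a specific code system.
EXPECTED_SYSTEM = "ICD-10-CM"
ALERT_CODES = {"E11.9"}  # example: type 2 diabetes, unspecified

def should_alert(system: str, code: str) -> bool:
    # The code alone is not enough; it must come from the expected system.
    return system == EXPECTED_SYSTEM and code in ALERT_CODES

should_alert("ICD-10-CM", "E11.9")   # True
should_alert("LOCAL-DX", "E11.9")    # False: same string, wrong dictionary
should_alert("ICD-10-CM", "250.00")  # False: a code from another era
```

Deploy this rule at a site whose local dictionary codes the same diagnosis as `LOCAL-DX`/`E11.9` and the alert silently never fires, which is exactly why the same application with different local terminologies is not the same application.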
There are innovative technologies, like our Symedical® platform, that help applications cope with these and other limitations that are common in healthcare applications. But even with that kind of advantage, possessing digital empathy is an important tool when you are trying to understand and isolate bad information. In the next post in this series we will discuss the different types of bad information and where they come from. Please feel free to share your thoughts on this post, and let me know if you feel any important axioms for digital empathy were missed.