The healthcare industry is inundated with numbers: length of stay, cost per case, clinical outcomes, staffing, patient satisfaction, wait times, procedure times, turnaround times, testing volumes, net income, monthly expenses, and many others dominate monthly reports. This data can change greatly from one period to the next and, unfortunately, a reliable way to accurately analyze and interpret these changes has not been readily available. The traditional ways of examining these numbers have fatal drawbacks and, as a result, the customary responses to both “good” and “bad” numbers are usually misguided. Fortunately, I have found two types of graphs that, paired with a statistically based way of thinking about, presenting, and responding to data, can help healthcare providers develop a more accurate and more complete understanding of the numbers at their disposal.
Imagine the following scenario. The nurses in the birthing center are reading their monthly report. It is organized in a table format, with two columns highlighted. The first is the difference between patient satisfaction ratings in the current month and patient satisfaction ratings from the previous month. The number shows an increase of five percent over the previous month. Things are obviously getting better! The other number indicates the difference between patient satisfaction ratings in the current month and patient satisfaction ratings from the same month one year ago. This number, however, shows a decrease of eight percent. Wait just a minute, what’s happening here? Which number is correct? Are things getting better, or are they getting worse? This type of situation reminds many of the adage that there are three kinds of deception: “lies, damn lies, and statistics.” Sound familiar?
Consider this scenario. Because the director of patient services was concerned about the high cost of I.V. waste, he began monitoring the amount of I.V. waste from all units. The volume of I.V.s administered per week remained relatively constant across the units, so it was fairly easy to track and compare the overall percentage of I.V. waste from week to week. In the first week, the average amount of I.V. waste for all units was about 11.7 percent. Wanting to take immediate action, the director decided to send a memorandum to the head nurses of all units admonishing them for having so much I.V. waste and demanding that they improve immediately. In the following week, the overall percentage of I.V. waste dropped to 7.0 percent. The director of patient services concluded that his memo was effective and that he would have to send it out again if the percentage of I.V. waste rose too high.
These situations are not uncommon in healthcare because of the manner in which most people think about numbers and, as a result, interpret and respond to those numbers. Too often, decisions are made or actions are taken without fully understanding why such numbers are changing. We have been taught how to perform various mathematical functions (e.g., addition, multiplication, etc.), but few have learned how to interpret data within its context. To truly understand the meaning of data in Lean Healthcare, we must learn the importance of measuring performance, how to display data so that variation becomes visible, and how to analyze data to determine when and how to respond to that variation.
What Is Variation?

Variation refers to the way the performance of a process changes over time. There will be fluctuations in all processes over time (e.g., day-to-day, week-to-week, month-to-month, etc.). This variation occurs naturally in all processes and should be expected. It is due to a myriad of sources, such as equipment, materials, procedures, and electronic systems, that are always present in a process and that affect all elements of a process. The variation inherent in a process is referred to as common cause (or random) variation. Consider your home electric bill: it is probably different every month because your electricity use varies from month to month. But you probably have a range of cost that’s normal for your family. Within this range, we see common cause (random) variation because such fluctuations are normally present in your family’s electricity usage “process.”
Consider the first situation described earlier, where the nurses in the birthing center are confronted with two discrepant indicators for their line of business. A comparison of patient satisfaction from the present month with the previous month says things are getting better, while a comparison of the present month with the same month from last year suggests that things are getting worse. How can they begin to understand the variation present in patient ratings of satisfaction? Figure 1 contains a run chart showing the monthly patient satisfaction ratings for the previous two years. Notice that the average rating is different each month. Some months it goes up, in others it goes down. But despite these monthly differences, there appears to be a range of values that is “normal” for patient satisfaction ratings. Within this range, we are seeing the common cause (random) variation of monthly patient satisfaction ratings.
Consider the two situations presented at the beginning of the article. One basic comparison was made: the current value versus some previous value. Although the comparisons made in each situation are technically correct, they are not conclusive. A simple comparison between two values, no matter how easy it is to make or how intuitive it appears to be, cannot fully convey the behavior of any data collected over time, because both numbers are subject to the common cause variation that is inevitably present in all data. Since both the current value and the comparison value (e.g., previous month, year to date) are subject to this variation, it is nearly impossible to determine how much of the difference is due to common cause (random) variation and how much is due to true differences in the numbers. Furthermore, the way data are usually presented — in tables of numbers — does not help us to see how the numbers change (i.e., the variation in the data). In fact, tables of numbers often hide the information we really need in order to make the best decisions.
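The birthing-center dilemma can be reproduced with a few lines of arithmetic. The ratings below are made-up numbers chosen to mirror the scenario (roughly a five percent increase versus the previous month, roughly an eight percent decrease versus the same month last year); they are not data from the article.

```python
# Hypothetical patient-satisfaction ratings (illustrative, not real data).
last_year_same_month = 85.0  # rating from the same month one year ago
previous_month = 74.0        # rating from the previous month
current_month = 78.0         # rating from the current month

# The two customary comparisons point in opposite directions.
vs_prev = (current_month - previous_month) / previous_month * 100
vs_year = (current_month - last_year_same_month) / last_year_same_month * 100

print(f"vs previous month:       {vs_prev:+.1f}%")  # +5.4%
print(f"vs same month last year: {vs_year:+.1f}%")  # -8.2%
```

Both deltas are computed from the same series of monthly ratings; neither one tells us whether the differences exceed the common cause variation of the underlying process.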
The only way to see variation and get a picture of what’s happening in your organization is to graph data over time. In addition to being easily understood because of their visual nature, graphs provide a context for interpreting the current numbers because they include the relevant previous numbers. Graphs also remove extraneous details often embedded in tables of numbers.
Displaying Variation

Two basic graphs have proven their usefulness in displaying variation and in detecting the presence or absence of special causes — run charts and control charts. Run charts and control charts help people concentrate on the behavior of the underlying process rather than on individual data points. These charts help filter out the common cause (random) variation in a process that clouds comparisons between single values and obscures special causes.
Run charts (like the one pictured in figure 1 above) are graphs of data over time. The horizontal axis represents the sequence of data as it occurs over time. The vertical axis represents the values you are measuring, such as LOS, cost per case, laboratory volumes, etc. Changes in the measured values can be seen as one examines the chart from left to right. Run charts have a horizontal line through the data which represents the central tendency of the data. The central tendency is usually the arithmetic average (or mean) of the data, but may sometimes be the median value. The center line is a convenient numerical summary of the location of the data set and is used to make judgments about special cause variation that might be present.
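Computing the center line described above is straightforward. The following is a minimal sketch using Python's standard library; the values are illustrative monthly ratings, not data from the article's figures.

```python
import statistics

# Illustrative monthly patient-satisfaction ratings (percent), in time order.
values = [82, 85, 80, 84, 78, 74, 79, 77, 83, 81, 76, 80]

# The center line is usually the arithmetic average (mean) of the data,
# but the median is sometimes used instead.
center_mean = statistics.mean(values)
center_median = statistics.median(values)

print(center_mean)    # about 79.9
print(center_median)  # 80.0
```

Plotting these values in time order with a horizontal line at the chosen center produces the run chart; judgments about special causes are then made relative to that line.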
With data plotted on a run chart, it is fairly easy to detect the presence of special cause variation. In fact, the presence of any one of the following conditions on a run chart indicates a special cause:
- A shift (or run): seven or more consecutive values fall either all above or all below the center line (as in the chart below);
- A trend: seven or more consecutive values steadily increase or steadily decrease;
- A non-random pattern: the values repeatedly alternate up and down, or follow a recurring cycle.
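The shift rule is easy to automate. The sketch below is not from the article; it counts the longest run of consecutive points on one side of the center line. A point exactly on the line ends the run here, though conventions for handling such points vary.

```python
def longest_run_one_side(values, center):
    """Longest run of consecutive points strictly above or strictly
    below the center line; a point exactly on the line ends the run."""
    longest = current = 0
    side = 0  # +1 means above the line, -1 means below
    for v in values:
        s = (v > center) - (v < center)
        if s != 0 and s == side:
            current += 1
        else:
            current = 1 if s != 0 else 0
            side = s
        longest = max(longest, current)
    return longest

# Illustrative series: the last eight points all fall below the center line,
# so the shift rule (seven or more on one side) is triggered.
values = [82, 85, 80, 84, 78, 74, 79, 77, 76, 77, 78, 79]
if longest_run_one_side(values, center=80) >= 7:
    print("Shift detected: possible special cause")
```

A series that merely bounces from one side of the line to the other, as a stable process typically does, never accumulates a run long enough to trigger the rule.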
Control charts also present a dynamic representation of the behavior of a process over time. Like run charts, control charts display the values of some process or output variable over time and indicate the center line of the data. But the distinguishing characteristic of control charts is the presence of statistically determined upper and lower control limits. These limits, drawn above and below the average line, are computed from the data. The control limits represent the range of the variation expected in the measurements of a process. That is, they define what the process will deliver as long as it continues to operate in its current manner. The control limits also provide another method to detect the presence of special cause variation. In addition to the three conditions that can be applied to run charts, a point outside the control limits also indicates the presence of a special cause.
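The article does not specify how its control limits are computed, but a common choice for period-to-period data like this is an individuals (XmR) chart, where the limits are the mean plus or minus 2.66 times the average moving range (2.66 being the standard constant that converts the average moving range into three-sigma limits). A sketch with made-up weekly I.V. waste percentages:

```python
import statistics

# Illustrative weekly I.V. waste percentages (not data from the article).
values = [11.2, 9.8, 10.5, 12.1, 10.9, 11.7, 9.5, 10.8, 11.3, 10.2]

center = statistics.mean(values)

# Moving range: absolute difference between each pair of successive points.
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
mr_bar = statistics.mean(moving_ranges)

# Upper and lower control limits for an individuals (XmR) chart.
ucl = center + 2.66 * mr_bar
lcl = center - 2.66 * mr_bar

out_of_limits = [v for v in values if v > ucl or v < lcl]
print(f"center={center:.2f}, LCL={lcl:.2f}, UCL={ucl:.2f}")
print(out_of_limits)  # empty list: no points beyond the limits
```

In this illustrative series every point falls inside the limits, so the extra control-chart signal (a point outside the limits) is absent and only the run-chart rules would remain to check.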
If the process is in control (i.e., only common cause variation is present), reacting to changes from one data point to the next — regardless of how much they change — is inappropriate. You should not react to the inherent variation present in a stable process as if it were special and required adjustment. Processes that are in control are behaving consistently, and will require fundamental changes in the underlying system in order to change the output of the process. Setting goals, exhorting workers, or looking for alternative ways to examine the data will not permanently change a stable process.
The appropriate actions to take in response to the two kinds of variation can be summarized as follows:

Common cause variation (stable process): do not react to individual data points; if the level of performance is unacceptable, work to improve the underlying process itself.

Special cause variation: investigate to identify and address the specific cause before making any fundamental change to the process.
Consider the run chart illustrated in figure 1 or the control chart illustrated in figure 2. Although there is fluctuation in the month-to-month and week-to-week numbers, none of the “out of control” conditions exist in either set of data. This suggests that the processes are exhibiting a reasonable degree of control (stability) and that no action should be taken to “correct” the monthly fluctuations. Any action taken in response to this common cause variation would be referred to as tampering, and would probably increase the variation in the process, possibly leading to an out of control condition.
The stability of a process indicates its predictability. That is, a stable process is predictable within a given range of values. However, just because a process is stable does not necessarily mean that the performance is acceptable. If the process itself needs to be improved — because it is not meeting patient expectations — we must change the capability of the entire process. To do this requires more in-depth study of the process itself and significant changes in how the work is actually performed.
Conclusion

Unfortunately, situations like those portrayed at the beginning of this blog are all too common. We are constantly making decisions about data contained in reports or data we collect ourselves. Unfortunately, we often react to data without really being able to understand why the numbers are changing. In order to make better decisions, we need to understand the true nature of the changes in process performance. In Lean Healthcare, as we work to continually improve, it is necessary to learn new strategies which will give us a better ability to predict future performance and to minimize waste and related costs.
The value of run and control charts has been proven over and over in numerous industries, and the reasons are straightforward. First, run and control charts offer an effective way of synthesizing important information so it can be readily understood. Because these charts are pictorial displays of information, everyone concerned can have the same level of understanding of the situation, be it good or bad. Second, run and control charts will reveal opportunities for improvement by directing scrutiny to events that involve special causes of variation. In this sense, they make it clear when corrective action is necessary and, even more importantly, when no action is appropriate. Finally, once a key process is tuned to eliminate special cause variation, it is as well-suited as it can be for alterations aimed at reducing common cause variation or producing more desirable mean values of a process variable.
Aaron has twenty years of experience helping organizations align and improve their personnel and technical systems to accomplish strategic business objectives. He has consulted with leading healthcare organizations across the country and has proven success guiding organizations through strategically driven changes and enhancing business performance. Aaron also has significant experience in needs assessment, best practice analysis, performance measurement, process improvement, and behavioral change management.
Aaron holds a Ph.D. in Industrial/Organizational Psychology from the University of Tennessee with a minor in Industrial Engineering.
Source URL: http://www.leanhealthcareexchange.com/?p=3562