In last week’s post, I argued that observed-over-expected (O/E) analysis is commonly misused as a method for making “level playing field” performance comparisons, and I recommended against using it for that purpose.
But, is there some other good use for O/E analysis?
I can’t think of a good use for the O/E ratio itself — the metric derived by dividing observed performance by expected performance. But it turns out that the underlying idea of developing a model of performance that has been shown to be achievable is very useful for identifying and prioritizing opportunities for improvement. The idea is to apply such a model to a health care provider’s actual population, and then compare those “achievable” results with the provider’s actual performance to see how much room there is for improvement. I like to call this “opportunity analysis.”
There are two main variations on the “opportunity analysis” theme. The first approach considers the overall average performance achieved by all providers as the goal. The basic idea is to estimate how much the outcome would improve for each provider if they focused on remediating their performance in each risk cell where they have historically performed worse than average. The analysis calculates the magnitude of improvement they would achieve if they moved their performance up to the level of mediocrity in those risk cells, while maintaining their current level of performance in any risk cells where they have historically performed at or above average. A good name for this might be “mediocrity opportunity analysis,” to emphasize the uninspiring premise.
The second variation on this approach challenges providers to achieve excellence, rather than just mediocrity. I like to call this “benchmark opportunity analysis.” The idea is to create a model of the actual performance of the one or more providers that achieve the best overall performance, called the “benchmark providers.” Then this benchmark performance model is applied to the actual population of each provider to estimate the results that could be achieved, taking into account differences in the characteristics of the patient populations. These achievable benchmark results are compared to the actual performance observed. The difference is interpreted as the opportunity to improve outcomes by emulating the processes that produced the benchmark performance.
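Both variations share the same per-risk-cell arithmetic; only the reference rate differs (the overall average rate for the mediocrity variant, the benchmark providers’ rate for the benchmark variant). Here is a minimal sketch, with entirely hypothetical rates and populations:

```python
# A minimal sketch of the per-risk-cell opportunity arithmetic.
# Both variations use the same formula; only the reference rates
# differ. All rates and populations below are hypothetical.

def opportunity(provider_rates, reference_rates, populations):
    """Avoidable adverse events per risk cell if the provider matched
    the reference rate wherever it currently performs worse. Cells
    where the provider already meets the reference contribute zero."""
    return {
        cell: max(0.0, provider_rates[cell] - reference_rates[cell]) * populations[cell]
        for cell in provider_rates
    }

provider_rates  = {"adults": 0.064, "children": 0.012}  # hypothetical
reference_rates = {"adults": 0.050, "children": 0.015}  # hypothetical
populations     = {"adults": 1000,  "children": 500}    # hypothetical

print(opportunity(provider_rates, reference_rates, populations))
# adults contribute roughly 14 avoidable events; children contribute 0,
# because the provider already performs better than the reference there
```

Note that cells where the provider already meets or beats the reference rate contribute zero opportunity; the method never penalizes a provider for out-performing the reference in some cells.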
As shown in this illustrative graphic, a benchmark opportunity analysis compares different improvement opportunities for the same provider. In the example, Acme Care Partners could achieve the greatest savings by focusing their efforts on improving the appropriateness of high tech radiology services. In contrast, Acme Care Partners is already achieving benchmark performance in the appropriateness of low tech radiology services, and therefore has zero opportunity for savings from improving up to the benchmark level. That does not mean that they can’t improve. No analysis can predict the opportunity for true innovation. Benchmark opportunity analysis is just a tool for pointing out the largest opportunities for emulating peers that already perform well, taking into account the differences in the mix of patients between a provider organization and its high-performing peers.
This method is generally consistent with the “achievable benchmarks of care” (ABC) framework proposed more than 10 years ago by the Center for Outcomes and Effectiveness Research and Education at the University of Alabama at Birmingham. However, that group advises against using the method for financial performance measures, presumably out of fear that it could encourage inappropriate under-utilization. I consider that a valid concern. To reduce that risk, I advocate for a stronger test of “achievability” for cost and utilization performance measures. In the conventional ABC framework for quality measures, “achievability” is defined as the level of performance of the highest-performing set of providers that, together, deliver care to at least 10% of the overall population. Such a definition is preferable to simply setting the benchmark at the level of performance achieved by the single highest-performing provider because a single provider might have gotten lucky to achieve extremely favorable performance. When I apply the achievable benchmark concept to utilization or cost measures, I set the benchmark more conservatively than for quality measures. For such measures, I use 20% rather than 10% so as to avoid setting a standard that encourages extremely low utilization or cost that could represent inappropriate under-utilization.
Note that one provider may have benchmark care processes that would achieve the best outcomes in a more typical population, but that same provider may have an unusual mix of patients that includes a large portion of patients for whom they don’t perform well, creating a large opportunity for improvement. The key point is that opportunity analysis is the right method to compare and prioritize alternative improvement initiatives for the same provider. But the results of opportunity analyses should not be used to compare the performance of providers.
The following graphic summarizes the comparison of traditional risk adjustment, O/E analysis, and benchmark opportunity analysis.
Simple Example Calculations
For those readers interested in a little more detail, the following table uses the same raw data as the calculations in last week’s post to illustrate the approach.
As shown in this example, Provider A has worse performance (higher mortality rate) than Provider B in adults. So, Provider B is the benchmark performer in the adult risk cell. If Provider A improved from 6.41% mortality down to the 5.00% mortality level of Provider B, it could save the lives of 11 adults per year. Provider B has worse performance in children. If Provider B improved its performance in children up to the level achieved by Provider A, while still achieving its benchmark level of performance in adults, it could save 1 life per year.
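The adult-cell figure can be checked with a quick calculation. The raw counts below are assumptions chosen only to be consistent with the rates quoted above (the actual counts are in last week’s table): 50 deaths among 780 adults yields the 6.41% mortality rate.

```python
# Assumed raw counts, chosen only to be consistent with the rates
# quoted in the text (the actual counts are in last week's table):
# 50 deaths among 780 adults gives Provider A's 6.41% mortality rate.
provider_a_deaths = 50
provider_a_adults = 780
benchmark_rate = 0.05  # Provider B's 5.00% adult mortality

actual_rate = provider_a_deaths / provider_a_adults   # ~0.0641
lives_saved = (actual_rate - benchmark_rate) * provider_a_adults
print(round(lives_saved))  # → 11, matching the figure in the text
```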