EMR event log study shows 6 hrs of use per day, but implicitly belittles the clinician’s cognitive effort and the EMR’s support

In a recent paper in the Annals of Family Medicine, Brian Arndt and colleagues at the University of Wisconsin described the results of an analysis of the user event logs of their Epic EMR.  The authors determined that primary care clinicians used the system nearly 6 hours per day out of an 11.4 hour work day, and that 44% of that time was spent on tasks that the authors categorized as “clerical and administrative.”  It is an interesting paper, but I think it represents a lack of vision and insight on the part of the authors regarding the role that technology can and should play in supporting the cognitive effort of clinicians.
Most specifically, when a clinician was using EMR-based template charting and orders modules, the authors categorized that work as “clerical.”  Such a classification fails to acknowledge that the clinician is (or should be) using such modules to create a coherent, evidence-based plan of care:
  • using condition-specific templates that help remind her of the clinical observations and treatments to consider,
  • viewing reminders and order sets to help her to remember to include important evidence-based services in the plan of care,
  • receiving alerts to help her avoid ordering services that may cause an allergic response, conflict with other medications that the patient is taking, or that are dosed inappropriately considering the body mass of the patient,
  • viewing prompts for a needed referral for care management, or
  • receiving referral guidance to help direct specialty referrals to the specialists that have agreed to integrate care processes with the primary care practice within a clinically integrated network or accountable care organization.

The authors implicitly dismiss the cognitive work that clinicians do during “documentation” and “ordering,” and the support that the technology provides to that cognitive work.  They reduce it all to wasteful paperwork, and suggest that it be eliminated through voice dictation or by assigning such paperwork to other members of the care team, both of which would cut off the decision support for those important cognitive processes.

I share the authors’ implied frustration with the failure of the current generation of health information technology to live up to the long-standing vision of supporting clinicians’ decision-making and coordination of care across a multi-disciplinary care team.  I acknowledge that today’s EMR-based care planning and coordination functionality feels like clerical work.  I’ve spent time on that problem.  I have two patents on proposed solutions to that problem.  I hope some day to help solve that problem, which I consider to be one of the most important problems in the broad fields of health care improvement, population health management and medical informatics.
I strongly agree with the idea of using user event logs to study how users actually spend their time and how they use application functionality, and I applaud the authors’ efforts to validate the event log data with some direct observations.  But I don’t agree with the authors’ suggestion that others should use their “EHR task categories,” because those categories implicitly reject the vision of real technology-assisted care planning and coordination.  We shouldn’t give up on that vision.  We can and must do better to make it reality.

If repealing Obamacare is off the table, can we turn attention to improving it through cost-effective clinical protocols?

Paul Krugman recently wrote an article in the New York Times posing the question: if repealing Obamacare is off the table (for now), should people on the “progressive” end of the political spectrum push for a single-payer “Medicare for all” system or just advocate for incremental improvements to the privatized Obamacare model?  He says if we were starting with a blank slate, he would favor the single payer model.  But, he argues, the politics of moving to single payer are too difficult, and the evidence from other countries suggests that a privatized model can achieve comparable outcomes.  Therefore, he argues that progressive politicians should turn their attention to other social policy priorities like subsidized child care and pre-kindergarten education.

I generally agree with Krugman’s proposal to focus on incremental improvements to Obamacare, particularly if the objective is just to maintain and improve access to health insurance.  But that’s not our only objective.  We should also care about the quality and cost of health care.

I’ve long felt that policy to increase access to care should be linked to policy to assure the cost-effectiveness and value of care. Insurance is, by its nature, a collective, cooperative thing. In the long run, the people who are covered under the same insurance risk pool are sharing a finite resource. If they understood that, they would rationally desire protections against the pooled resource being squandered by other people for low-value purposes. In health insurance, such protections primarily take the form of benefit policies. Benefit policies may define which services are not covered because they are considered experimental or cosmetic. They may define limitations based on age, gender, or medical history. They may also set quantity limits on coverage, such as defining the number of physical therapy visits or inpatient psychiatric hospital days covered. They may set lifetime maximum dollar amounts. But such insurance benefit policies are very blunt instruments.

Insurance companies also protect against low-value uses of health care services through “utilization management” programs, including pre-authorization processes, where doctors are required to submit justifications for proposed services and insurance company employees judge whether the proposed services meet “appropriateness” criteria.  Such utilization management programs create conflict, and insurance companies generally establish criteria that are very loose to minimize the conflict.  As a result, such programs are also very blunt instruments.

Clinical protocols, in contrast, can be more precise instruments, taking into consideration the details of a patient’s clinical situation. Clinical protocols are typically developed by physicians, and are ideally supported by evidence from clinical research studies.  Clinical protocols can take many different forms, and go by different names including “guidelines,” “algorithms,” “care maps,” and “standards of care.”  Whenever we have tried to design clinical protocols, especially for complex and costly care processes such as for low back pain, congestive heart failure, cancer or the care of frail elderly patients, we discovered that different protocols can have very different costs and outcomes.  Thoughtful design, rigorous implementation and continual evaluation and improvement of clinical protocols can lead to large improvements of outcomes and large reductions in cost. But, cost effective protocols do deny some people some treatments that would have helped them a little (just not enough to be “worth” the cost). The whole premise of designing cost-effective protocols depends on the recognition and acceptance of the collective nature of insurance and the finite nature of the resources being shared. Furthermore, it is essential that the people for whom such protocols are applied trust the people and the process of creating and implementing the protocols. In for-profit, commercial insurance companies, there is a fundamental conflict of interest if the owners of the insurance company get to control the design and implementation of the protocols and if they get to keep the money saved from denying services that could have helped people — even a little.

In a single payer system, the entire country (or each state) is treated as a risk pool, and the government plays the role of protocol developer. Some people are OK with that, while others are loath to assign such authority to governments.  If we continue to have private insurance companies or accountable care organizations bearing the risk for populations of patients (as in the current Obamacare system), then such organizations can make decisions about clinical protocols.  In either case, we absolutely need to create structures and mechanisms to ensure that the people receiving the care trust the people and process used to create and implement cost-effective protocols.  Some private organizations, such as Group Health Cooperative of Puget Sound (now part of Kaiser Permanente), created structures and processes designed to build this trust back in the 1990s.

Although most other advanced countries already accept cost-effectiveness and pursue the development and implementation of cost-effective protocols, and although there would be a huge opportunity to reduce cost and improve outcomes by doing so in the U.S., making this policy shift in the U.S. will be very difficult. The U.S. population has been taught to be wary of “rationing” and “death panels” and U.S. doctors have been taught to reject “cookbook medicine.” Nevertheless, moving the health policy discussion in that direction may, in the long term, do some good.   Politicians asking “what’s next” after the apparent end of the quest to repeal Obamacare should consider turning attention to bringing cost-effectiveness to health care through clinical protocols.


Hospital Value-based Purchasing program 1% incentive is like homeopathic medicine — too diluted to actually work

In the June 15, 2017 issue of the New England Journal of Medicine, Andrew Ryan and colleagues from the University of Michigan published an evaluation of the Medicare Hospital Value-Based Purchasing Program (HVBP).

To summarize, if you offer a 1% incentive, and then dilute it by offering it only for the 40% of hospital patients covered by Medicare, and then dilute it further by spreading it across three domains (clinical process quality, patient experience and mortality), and then dilute each of these by basing them on multiple component metrics, and then dilute it more by choosing metrics that have already been reported for a number of years (and therefore the “low hanging fruit” improvement opportunities may already have been picked), and then further dilute it by offering the incentive mixed in with many other incentives for such things as meaningful use of EMRs…

Wait for it….
You don’t see impact, even after 4 years.
The thinking behind HVBP is like homeopathy, where the practitioners assert that the more they dilute the homeopathic remedy ingredient, the more powerful the remedy becomes.
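To make the dilution arithmetic concrete, here is a back-of-the-envelope sketch in Python. Only the 1% incentive and the roughly 40% Medicare share come from the discussion above; the number of component metrics per domain is purely an illustrative assumption.

```python
# Back-of-the-envelope: how the HVBP incentive gets diluted.
# The 1% incentive and ~40% Medicare share are from the post;
# metrics_per_domain is an illustrative assumption, not a program figure.
incentive = 0.01          # 1% of Medicare payments at risk
medicare_share = 0.40     # ~40% of hospital patients covered by Medicare
domains = 3               # clinical process, patient experience, mortality
metrics_per_domain = 4    # illustrative assumption

revenue_at_risk = incentive * medicare_share
per_domain = revenue_at_risk / domains
per_metric = per_domain / metrics_per_domain

print(f"Share of total revenue at risk:        {revenue_at_risk:.2%}")
print(f"...riding on any one domain:           {per_domain:.3%}")
print(f"...riding on any one component metric: {per_metric:.4%}")
```

Even before accounting for the competing incentive programs mixed in alongside it, the share of total revenue riding on any single metric is measured in hundredths of a percent.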
Imagine if a company hired a CEO and wanted to incentivize her to achieve growth and profitability. Would the board consider a 1% incentive to be meaningful (even without further dilution)?  No, the board would choose a number 50 to 75 times higher.
How about an equipment manufacturer choosing an incentive percentage for its sales team?  Does one percent sound like enough?
I’ve been exasperated for years that our value-based reimbursement designs – for both government and commercial payers – include an incentive that is far too small to motivate the types of changes they are intended to cause.  I fear we are just setting ourselves up for eventually someone saying “well, we tried incentives, and they don’t work.”

How to use and improve predictive models


Congressional Budget Office: 27 million will lose coverage and premiums will increase by 50% if repeal ACA without replacement

In my last post, I noted that Congress was eager to repeal Obamacare, but lacked consensus on a replacement.  Obamacare was designed to cover sick people and keep premiums affordable through the individual mandate and subsidies.  These two provisions can be reversed through legislative mechanisms that avoid filibusters.  But removing those provisions, while leaving in place Obamacare’s prohibition of denial for pre-existing conditions, creates an unstable insurance market that could lead to an increase in the number of uninsured people and an increase in the premiums paid by those who continue to be insured.

Today, the Congressional Budget Office published a report offering some non-partisan quantitative estimates of those outcomes: 27 million people losing coverage and premiums increasing by 50%.

The clear implication of these projections is that pursuing the repeal of just the portions of Obamacare that can be repealed unilaterally, while leaving in place the popular prohibition of denial of coverage for people with pre-existing conditions, would be a disaster.  Hopefully, these projections will help convince Congress to resist the urge to seek quick repeal to appease fervent anti-Obamacare constituents, and to instead take a breath and take the time to build bi-partisan consensus on a more comprehensive and coherent design for our health care insurance system.


Are high risk pools really a good replacement for Obamacare architecture?

Congress and the president-elect are enthusiastic about repealing Obamacare, but have not yet achieved any consensus about what to replace it with.  High risk pools figure prominently in various proposals, including Ryan’s “A Better Way” proposal.  But high risk pools are not a new idea.  Thirty-five states had them in the years before Obamacare, so we have some experience to draw upon.  In general, they performed poorly, mostly because they were substantially underfunded, leading to high premiums and shameful waiting lists that withheld coverage from the sickest people – those who needed coverage the most. The following explainer video was prepared last summer by the Kaiser Family Foundation, a health policy think tank.

Sounds Like A Good Idea? High-Risk Pools

High risk pools don’t reduce cost or risk. They just transfer it from private health plan premium payers to taxpayers — mostly state taxpayers. And, if the states fail to fund them properly (as has usually been the case), the wait lists associated with high risk pools create a particularly cruel mechanism for keeping the most desperate citizens from the lifesaving care they need.

How did Romneycare and Obamacare Avoid the Need for High Risk Pools?

High risk pools consisted of the sickest people in the population. Since sick people incur health care expenses that they can’t afford, the money has to come from somewhere.  It can come from healthier members of the same health plan, from state taxpayers, or from federal taxpayers.  If it is to come from healthier members of the plan, there must be enough of such healthy members to share the cost.  The healthy people can’t just wait until they are sick to buy insurance.  If too many healthy people opt out, the premiums for the people in the plan will be too high.  An insurance “death spiral” occurs when high premiums cause some of those healthy members to drop coverage, forcing premiums to go even higher for the remaining healthy members, ultimately leading to the failure of the plan.

So, to achieve coverage for sick people and affordability for all, the health insurance system must be designed to ensure that almost all the healthy people sign up for insurance, and that the healthy people don’t try to wait until they are sick to buy health insurance.   Romneycare and Obamacare attempted to accomplish this through two mechanisms:

  1. A subsidy for premiums for poor people to make them more affordable, and
  2. A “mandate” requiring that everyone have health insurance or pay a penalty.

However, as a political compromise to those that hated the idea of a mandate, the penalties were made quite small, making them only partially effective in getting enough healthy people to join the plan — ultimately causing the premium increases that people point to as evidence that Obamacare is “a disaster.”
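The death-spiral dynamic described above can be sketched as a toy simulation. This is not a model of any actual insurance market; the member costs, the opt-out rule, and every number in it are invented purely to illustrate the feedback loop.

```python
# Toy illustration of an insurance "death spiral" (all numbers invented).
# The premium is set to the pool's average expected cost; each round, the
# healthiest members (cost well below premium) drop coverage, which raises
# the average cost of the remaining pool and pushes the premium higher.
def simulate(costs, rounds=3, tolerance=0.6):
    pool = sorted(costs)
    premiums = []
    for _ in range(rounds):
        if not pool:
            break
        premium = sum(pool) / len(pool)
        premiums.append(round(premium))
        # Members whose expected cost is far below the premium opt out.
        pool = [c for c in pool if c >= premium * tolerance]
    return premiums

# A pool of mostly healthy members plus a few expensive ones.
costs = [500] * 6 + [2000] * 3 + [10000]
print(simulate(costs))  # → [1900, 4000, 10000]
```

In this toy pool, the premium starts at the average cost of all ten members; the healthy members exit first, and within a few rounds only the sickest member remains, facing a premium equal to his entire expected cost.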

Other than high risk pools, what is being proposed as an alternative to Obamacare?

I’ve seen nothing to suggest that there is any new innovation in health care finance that has been proposed as a superior solution.  So, what we’re likely to see is a return to health insurance designs that were used before Romneycare and Obamacare.  These mostly consist of the following:

  1. Reducing taxpayer expense by:
    • Kicking many poor people back out of government funded insurance plans (reversing federal subsidization of Medicaid expansion)
    • Reducing insurance benefits by covering fewer services and shifting more cost to patients for both government funded and private plans (reversing mandatory minimum benefits)
    • Allowing government funded insurance plans to negotiate with drug companies to demand volume discounts (against the wishes of the strong pharma lobby)
    • Increasing competition by reducing state-level insurance regulatory control to allow and facilitate building insurance plans that cut across state lines (against the states’ rights philosophy)
  2. Avoiding the insurance death spiral by:
    • Allowing insurance plans, once again, to reject sick people applying for private health insurance (reinstating pre-existing condition exclusions) and/or
    • High Risk Pools.

However, the president-elect and some members of Congress have claimed to be against some or all of these alternatives.  Except the last one — creating high risk pools. So, I think we’ll be hearing a lot more about that concept over the next few months.  Then, as people learn that high risk pools don’t do any magic and that they have a poor track record, I fear that framers of the “replacement” health insurance system will begin to acknowledge that replacement really means returning back to the other mechanisms listed above.


Three ways to keep it simple — one of which is bad

“Keep it simple, stupid.”   The “K.I.S.S.” principle.  Generally a good idea, but not always.

Types of Simplification

Consider three types of simplification:

  1. Leaning.  This is about getting rid of waste. When simplifying a design, leaning involves getting rid of unnecessary features.  When simplifying communications, leaning involves getting rid of information that is duplicative, unimportant or just decorative.  Edward Tufte, one of my heroes, is a statistician, artist and graphical designer and zen master of simplicity. He famously rails against “chart junk” and advocates for maximizing the “data – ink ratio.”
  2. Summarizing.  This is about dropping one or more layers of detail.  It is accomplished by grouping smaller details into categories or themes and dropping the details from the communication.  Summarization makes the information “blurry” but still tells the truth.  Summarization satisfies some readers.  To others, it serves as an introduction and an invitation to dive deeper.
  3. Glossing.  This requires making the information conform to a desired level of simplicity, even if it means fibbing.  For example, a system may have three components that interact with one another.  The interactions may be tedious to explain and may require additional arrows on a diagram.  Glossing involves escaping this annoying complexity by denying it.  Many companies create diagrams describing the components that make up their product or solution.  As described in Ian Gorton’s book on software architecture, such marketing diagrams are colloquially called “marketecture” diagrams.  Designers of such diagrams often take great liberties with their depiction of the solution, portraying it as being made up of components that conveniently correspond to the sources of value to the prospective customer, even when the actual technology components are organized in an entirely different way.  Glossing can sometimes be helpful to communicate some deeper truth, almost like a metaphor or parable.  But, oftentimes, glossing involves intentionally obfuscating the truth, making the solution appear to be better or simpler than it really is.

Einstein Simplification


First take on new CMS Comprehensive Primary Care Plus model

This morning, I read about the recently announced next generation version of the CMS Comprehensive Primary Care model, which will require multi-payer participation and will involve up to 5,000 practices in 20 regions.

Sounds interesting.  I need to study it in more detail.  But based on my initial assessment:
  • I agree with the idea of pursuing payment and delivery system changes on a multi-payer basis to make it more compelling to providers.
  • I also agree with the idea of prepaying some E&M and then paying reduced FFS for E&M to cover only marginal cost of E&M office visits to make providers incentive-neutral on encounter modes.
  • But I disagree with the move away from shared savings and the implicit abandonment of the idea of non-governmental primary care-based organized systems of care pursuing care process innovation, in favor of CMS taking over responsibility for defining a nationally-standardized multi-payer “care delivery model” and injecting it into individual primary care practices using a CMS-developed “learning system.”
  • I also disagree with the Track 2 idea of partnering with “CMS-convened” IT vendors and contractual commitment to specific IT capabilities.  That approach basically takes MU, which was a huge distraction from real improvement, to even more obnoxious levels of micro-management.
Overall, I share the federal government’s frustration with the limited impact of previous efforts to transform primary care payment and delivery models, but I think the solution should be to define incentives which are more timely, coherent and consequential, enabled by more meaningful transparency requirements, clearer care relationships and some empowerment of primary care delivery organizations to define their own referral networks.

Criticism of ProPublica’s Surgical Scorecard fails to consider the possibility of real, useful analytics.

From https://projects.propublica.org/surgeons/

Last week, ProPublica published a scorecard of surgical death and complication rates of more than 17,000 surgeons  for 8 elective procedures using Medicare data.  As with prior releases of health care performance metrics, the response against such “transparency” was swift and bitter.  Among those many responses is a thoughtful blog post entitled “After Transparency: Morbidity Hunter MD joins Cherry Picker MD” by Saurabh Jha, MD in The Health Care Blog.  Definitely worth your time to read.

But, although it is a clever bit of commentary, it implicitly presents a false choice between using data and not using data.

In my opinion, the decision should be conceptualized as:
  • Option 1: Not using data (and relying instead on subjective assessment or chance)
  • Option 2: Using reported metrics, interpreted by people who lack the talent and training to understand the limitations of the metrics and the methods that can address some of those limitations, and
  • Option 3: Using data, interpreted through analysis, conducted by and interpreted with the aid of people with such analytic talent and training.
By talent and training, I don’t mean technology mavens.  Keep your business intelligence professionals, data miners, “big data” experts, and most that claim the fashionable title of “data scientist.” I mean people that have training in epidemiology, biostatistics, health economics and other social science disciplines, and that have sufficient knowledge of health care.  People that can conceptualize theories of cause and effect. People that understand bias and variation.  People that can tell an interesting and actionable story supported by data, rather than just generate a “dashboard” or “score card.”  And, they must be people who have integrity and who are free of conflicts of interest that could prevent them from telling stories that are true.

Before anyone writes off option #3 as idealistic and infeasible, we should at least take the time to think through how we might make it work.


How government surveillance of internet data relates to health care privacy and big data

This week, I accompanied my daughter and her elementary school classmates on a field trip to see the musical “The Sound of Music.”  To my daughter, it was a story about good vs. bad parenting.  But the context is 1930s Austria, just as the Nazi regime is taking over.  The story illustrates how the battle for control, at least initially, takes place among business colleagues, neighbors, friends and even among family members.  And, it reminds us that, no matter how secure we seem to be in our personal lives and no matter how stable our society seems to be, our fortunes can quickly change.  We live in beautiful mountains, and those mountains have cliffs.

When I got home from the musical, I read the news reports regarding fresh evidence that the federal government may be engaged in electronic surveillance of the phone and internet communications of millions of people.  The news reports sparked a resurgence of the longstanding debate about whether such surveillance is a good thing, because it might detect terrorist plots in time to disrupt them, or a bad thing, because it infringes our privacy rights and erodes the foundations of our liberty.  That’s a great debate for us to have.  That debate is oddly refreshing because it does not fall cleanly along party lines.

In my opinion, it is a balancing act.  As amateur mountaineers, we hike along a treacherous ridgeline, with cliffs on both sides and slippery rocks underfoot.  On one side is the cliff of anarchy, where lawless invaders, terrorists and criminal gangs roam freely, pillaging our villages and murdering our children.  On the other side is the cliff of tyranny, where the checks and balances in our government break down, leaders get too powerful, levy heavy taxes, imprison or kill dissenters, and generally tell everyone what to do.  As I’ve noted before, most important things involve balancing between undesirable extremes. We must be vigilant to avoid falling over either side.

However, it is useful to note that the risk is not symmetrical.  The steepness of the cliff on the side of anarchy makes it look more frightening.  But the loose gravel on the side of tyranny makes that the more slippery slope.  Over the course of history, it seems as if oppressive leadership has been a far more common state than lawlessness.  So, my advice is to pay a little extra attention to avoiding missteps that erode checks, balances or rights.

How does this relate to health care?

Two ways come to mind.

First, our healthy debate about the privacy and accessibility of health care data is analogous.  On one side, we want to defend the right of patients to control who has access to their own confidential medical records and for which purposes they are used.  On the other side, we want everyone who cares for us to have access to the information they need to enable them to make good decisions about our care, without missing any needed services or subjecting us to the risks of duplicative services.  Furthermore, our ability to assure the safety and effectiveness of health care processes, to hold health care providers accountable, and to learn what works and what doesn’t work requires access to data for populations of patients. We can err in either direction.  But, as with the ridgeline hikers, the dangers are not symmetrical.  In my experience, far more people are harmed by medical errors caused by lack of access to information and by ineffective care processes than are harmed by inappropriate disclosures of their confidential health information.  So, we need to worry in both directions, but pay a little extra attention to avoiding missteps that erode our clinical decision support and our improvement efforts.

Second, our discussion of “big data” is analogous.  In both terrorist surveillance and health care, there is widespread faith in the value of large quantities of data, without regard to the quality and completeness of that data.  In an opinion piece published on June 7, 2013 on the CNN web site, Shane Harris, the author of “The Watchers: The Rise of America’s Surveillance State,” challenges the evidence that surveillance data mining reduces terrorist events and saves lives. He asserts:

“To date, there have been practically no examples of a terrorist plot being pre-emptively thwarted by data mining these huge electronic caches. (Rep. Mike Rogers, chairman of the House Intelligence Committee, has said that the metadatabase has helped thwart a terrorist attack “in the last few years,” but the details have not been disclosed.)

When I was writing my book, “The Watchers,” about the rise of these big surveillance systems, I met analyst after analyst who said that data mining tends to produce big, unwieldy masses of potential bad actors and threats, but rarely does it produce a solid lead on a terrorist plot.

Those leads tend to come from more pedestrian investigative techniques, such as interviews and interrogations of detainees, or follow-ups on lists of phone numbers or e-mail addresses found in terrorists’ laptops. That shoe-leather detective work is how the United States has tracked down so many terrorists. In fact, it’s exactly how we found Osama bin Laden.”

This quote reminded me of our debates about “big data.”  As I’ve noted before, vendors of big data technologies sometimes over-sell.  They assert that the technology can overcome inadequacies in the structure, quality and completeness of source data.  They imply that the value of data is primarily a function of the size of the database.  Big data sources and technologies are undeniably valuable for some purposes, and will be an important part of our future in health care analytics.  But, as with terrorist surveillance, the real value tends to come from more pedestrian “shoe-leather” investigative techniques.

  • Proactively collecting the data needed to definitively answer an important question.
  • Hiring and developing people with advanced training in analytic methods, such as epidemiology, biostatistics, actuary sciences, and health economics.
  • Following leads to get to the bottom of something to produce information that is actionable, rather than merely suggestive.