EMR event log study shows 6 hrs of use per day, but implicitly belittles the clinician’s cognitive effort and the EMR’s support

In a recent paper in the Annals of Family Medicine by Brian Arndt and colleagues at the University of Wisconsin, the authors described the results of an analysis of the user event logs of their Epic EMR.  The authors determined that primary care clinicians used the system nearly 6 hours per day out of an 11.4-hour work day, and that 44% of that time was spent on tasks that the authors categorized as “clerical and administrative.”  It is an interesting paper, but I think it represents a lack of vision and insight on the part of the authors regarding the role that technology can and should play in supporting the cognitive effort of clinicians.
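At its core, an event-log study of this kind is just a matter of summing time per task category from the EMR’s audit trail.  A minimal sketch of that aggregation, with entirely hypothetical records and category labels (these are not the task taxonomy Arndt and colleagues actually used):

```python
from collections import defaultdict

# Hypothetical event-log records: (clinician, task category, seconds).
# The category labels are illustrative only; they are not the
# taxonomy used by Arndt and colleagues.
events = [
    ("dr_a", "chart_review",  7200),
    ("dr_a", "documentation", 5400),
    ("dr_a", "inbox",         4500),
    ("dr_a", "order_entry",   3600),
]

# Sum the logged seconds for each task category.
time_by_category = defaultdict(int)
for clinician, category, seconds in events:
    time_by_category[category] += seconds

# Report each category's hours and share of total logged time.
total_seconds = sum(time_by_category.values())
for category, seconds in sorted(time_by_category.items(), key=lambda kv: -kv[1]):
    share = 100 * seconds / total_seconds
    print(f"{category:>13}: {seconds / 3600:.1f} h ({share:.0f}%)")
```

The entire debate, of course, is not about the arithmetic but about how the categories are labeled, which is exactly where I part ways with the authors.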
Most specifically, when a clinician was using EMR-based template charting and orders modules, the authors categorized that work as “clerical.”  Such a classification fails to acknowledge that the clinician is (or should be) using such modules to create a coherent, evidence-based plan of care:
  • using condition-specific templates that help remind her of the clinical observations and treatments to consider,
  • viewing reminders and order sets to help her to remember to include important evidence-based services in the plan of care,
  • receiving alerts to help her avoid ordering services that may cause an allergic response, conflict with other medications that the patient is taking, or that are dosed inappropriately considering the body mass of the patient,
  • viewing prompts for a needed referral for care management, or
  • receiving referral guidance to help direct specialty referrals to the specialists that have agreed to integrate care processes with the primary care practice within a clinically integrated network or accountable care organization.

The authors implicitly dismiss the cognitive work that clinicians do when “documenting” and “ordering,” and the support that the technology provides to that cognitive work.  They reduce it all to wasteful paperwork, and suggest that it be eliminated through voice dictation or assignment of such paperwork to other members of the care team, both of which would cut the clinician off from the decision support for those important cognitive processes.

I share the authors’ implied frustration with the failure of the current generation of health information technology to live up to the long-standing vision of supporting clinicians’ decision-making and coordination of care across a multi-disciplinary care team.  I acknowledge that today’s EMR-based care planning and coordination functionality feels like clerical work.  I’ve spent time on that problem.  I have two patents on proposed solutions to that problem.  I hope some day to help solve that problem, which I consider to be one of the most important problems in the broad fields of health care improvement, population health management and medical informatics.
I strongly agree with the idea of using user event logs to study how users actually spend their time and how they use application functionality, and I applaud the authors’ efforts to validate the event log data with some direct observations.  But, I don’t agree with the authors’ suggestion that others should use their “EHR task categories,” because those categories implicitly reject the vision of real technology-assisted care planning and coordination.  We shouldn’t give up on that vision.  We can and must do better to make it reality.

How government surveillance of internet data relates to health care privacy and big data

This week, I accompanied my daughter and her elementary school classmates on a field trip to see the musical “The Sound of Music.”  To my daughter, it was a story about good vs. bad parenting.  But the context is 1930’s Austria, just as the Nazi regime is taking over.  The story illustrates how the battle for control, at least initially, takes place among business colleagues, neighbors, friends and even among family members.  And, it reminds us that, no matter how secure we seem to be in our personal lives and no matter how stable our society seems to be, our fortunes can quickly change.  We live in beautiful mountains, and those mountains have cliffs.

When I got home from the musical, I read the news reports regarding fresh evidence that the federal government may be engaged in electronic surveillance of the phone and internet communications of millions of people.  The reports sparked a resurgence of the longstanding debate about whether such surveillance is a good thing, because it might detect terrorist plots in time to disrupt them, or a bad thing, because it infringes our privacy rights and erodes the foundations of our liberty.  That’s a great debate for us to have.  That debate is oddly refreshing because it does not fall cleanly along party lines.

In my opinion, it is a balancing act.  As amateur mountaineers, we hike along a treacherous ridgeline, with cliffs on both sides and slippery rocks underfoot.  On one side is the cliff of anarchy, where lawless invaders, terrorists and criminal gangs roam freely, pillaging our villages and murdering our children.  On the other side is the cliff of tyranny, where the checks and balances in our government break down, leaders get too powerful, levy heavy taxes, imprison or kill dissenters, and generally tell everyone what to do.  As I’ve noted before, most important things involve balancing between undesirable extremes. We must be vigilant to avoid falling over either side.

However, it is useful to note that the risk is not symmetrical.  The steepness of the cliff on the side of anarchy makes it look more frightening.  But the loose gravel on the side of tyranny makes that the more slippery slope.  Over the course of history, it seems as if oppressive leadership has been a far more common state than lawlessness.  So, my advice is to pay a little extra attention to avoiding missteps that erode checks, balances or rights.

How does this relate to health care?

Two ways come to mind.

First, our healthy debate about the privacy and accessibility of health care data is analogous.  On the one side, we want to defend the right of patients to control who has access to their own confidential medical records and the purposes for which they are used.  On the other side, we want everyone who cares for us to have access to the information they need to enable them to make good decisions about our care, without missing any needed services or subjecting us to the risks of duplicative services.  Furthermore, our ability to assure the safety and effectiveness of health care processes, to hold health care providers accountable, and to learn what works and what doesn’t work requires access to data for populations of patients. We can err in either direction.  But, as with the ridgeline hikers, the dangers are not symmetrical.  In my experience, there are far more people harmed by medical errors caused by lack of access to information and by ineffective care processes than are harmed by inappropriate disclosures of their confidential health information.  So, we need to worry in both directions, but pay a little extra attention to avoiding missteps that erode our clinical decision support and our improvement efforts.

Second, our discussion of “big data” is analogous.  In both terrorist surveillance and health care, there is widespread faith in the value of large quantities of data, without regard to the quality and completeness of that data.  In an opinion piece published on June 7, 2013 on the CNN web site, Shane Harris, the author of “The Watchers: The Rise of America’s Surveillance State,” challenges the evidence that surveillance data mining reduces terrorist events and saves lives. He asserts:

“To date, there have been practically no examples of a terrorist plot being pre-emptively thwarted by data mining these huge electronic caches. (Rep. Mike Rogers, chairman of the House Intelligence Committee, has said that the metadatabase has helped thwart a terrorist attack “in the last few years,” but the details have not been disclosed.)

When I was writing my book, “The Watchers,” about the rise of these big surveillance systems, I met analyst after analyst who said that data mining tends to produce big, unwieldy masses of potential bad actors and threats, but rarely does it produce a solid lead on a terrorist plot.

Those leads tend to come from more pedestrian investigative techniques, such as interviews and interrogations of detainees, or follow-ups on lists of phone numbers or e-mail addresses found in terrorists’ laptops. That shoe-leather detective work is how the United States has tracked down so many terrorists. In fact, it’s exactly how we found Osama bin Laden.”

This quote reminded me of our debates about “big data.”  As I’ve noted before, vendors of big data technologies sometimes over-sell.  They assert that the technology can overcome inadequacies in the structure, quality and completeness of source data.  They imply that the value of data is primarily a function of the size of the database.  Big data sources and technologies are undeniably valuable for some purposes, and will be an important part of our future in health care analytics.  But, as with terrorist surveillance, the real value tends to come from more pedestrian “shoe-leather” investigative techniques.

  • Proactively collecting the data needed to definitively answer an important question.
  • Hiring and developing people with advanced training in analytic methods, such as epidemiology, biostatistics, actuary sciences, and health economics.
  • Following leads to get to the bottom of something to produce information that is actionable, rather than merely suggestive.

In the dust-up about “rebooting” the EHR meaningful use incentive program, “the Emperor has no clothes” has been the most interesting response.

Last month, six Republican senators released a document entitled “Reboot: Reexamining the strategies needed to successfully adopt health IT.” The report asserts that the original goal of the $35 billion EHR meaningful use incentive program was to achieve “interoperability,” creating a “secure network in which hospitals and providers can share patient data nationwide.”  The report expresses concerns that the meaningful use incentives and federal HIT policy in general are not achieving this interoperability goal and are instead increasing costs, leading to waste and abuse, threatening patient privacy, and leading to unsustainable IT infrastructure.

Since then, a number of parties have offered written responses, including the American Hospital Association, the Texas Medical Association, a consortium of consumer groups, a group of EMR vendors and the “Healthcare Innovation Council.”  Both the senators’ document and virtually all of the responses agree with the premise that the health care system needs to be improved, and that improvements to health information technology are a necessary component of any effort to improve the health care system.  Almost all parties also agree that the $18.5 billion spent so far on stimulating EMR roll-out has not yet led to any great improvement in the health care system.  But, different parties express very different views about whether the incentive program as currently designed will eventually lead to such improvements.  And, among those arguing for “rebooting,” there are diverse opinions about what a rebooted program should look like.

On one level, the senators’ report fits into the frustratingly conventional narrative of political polarization, with senators from one pole arguing that the administration from the other pole is doing a bad job.  On another level, it fits into the deeper philosophical differences of opinion regarding the role of government spending and regulation vs. private enterprise in the health care system.  On a third level, the report represents the special interest views of one set of constituents, health care providers, who like receiving government funded incentives, but would prefer the bar to be lowered to earn those rewards.

Likewise, the responses defending the meaningful use program can be viewed as defending the administration’s record, promoting public involvement in health care, and representing the views of another set of constituents, the health care IT vendors, who really like the tsunami of revenue they are receiving as a result of HITECH and who fear that “rebooting” might end up more like “unplugging.”

The most interesting response

A group called the Healthcare Innovation Council released what I consider to be the most interesting of the responses to the senators’ “reboot” document.  This Council was assembled by Anthelio Health, a health care IT outsourcing company and consultancy that is not among the EMR vendor insiders that are reaping the greatest rewards from HITECH.  The Council includes a few leaders of health care provider organizations and leaders from other health IT and analytics companies, not including any major EMR vendors.  Their report is entitled “Let’s Admit the Emperor has No Clothes: It’s Time to Redesign EHRs to Improve Patient Care.” They assert that EHRs have not led to the envisioned improvement in the health care system, and offer their diagnoses:

EHR design issues

    • “EHRs, to date, have been fundamentally designed to create electronic versions of paper medical records.”
    • “EHRs focus on data collection mostly for regulatory compliance and financial reporting, not to assist physicians, nurses and other clinicians in providing higher quality more efficient patient care.”

EHR implementation issues

    •  “EHR implementations are often led as IT projects by teams that do not obtain robust, meaningful, future-focused input/involvement from nurses, physicians, pharmacy and other clinicians who provide patient care. The end result typically is that EHR implementations don’t make life better for EITHER the clinician or the patient.”
    • “CMS’ and healthcare providers’ focus has been to ‘just get EHRs up and running‘ in a way that meets CMS’ meaningful use requirements so that they can get meaningful use dollars, without regard to how that affects patient care.”

I see the problem the same way.  But, the tricky part is the remedy.

The Healthcare Innovation Council’s paper first advocates for increased involvement by clinicians (with an emphasis on nurses) in the redesign of EHR technology.  On the surface, this is not really a controversial point.  The Council’s paper reverently references Steve Jobs twice as an innovator and simplifier.  But, it is interesting to note that Jobs was famously against too much end-user involvement in the design process, arguing that users don’t have an easy time re-conceptualizing things.  People wouldn’t have asked for an iPod, an iPhone, or an iPad because they had never experienced them before and had established mental models of how to buy music, navigate, take movies, read books, etc.  The problem is on display within the Council’s document.  They write:

  • “EHRs are not designed to reflect or facilitate the way in which providers deliver patient care, and thus disrupt, rather than enhance, patient care”
  • “Improved focus on EHR design and implementation that starts by mirroring the way care is actually delivered by nurses, doctors and other clinicians.”

They seem to be asking for EMRs that “repave the cow paths,” a problem I’ve discussed in a prior post.  But, the Council at least seems aware of the difference between status quo and real disruptive improvement:

  • “This basic design then would move to new, information enhanced processes that not only help clinicians do their jobs easier, but measurably improve patient care safety and quality.”
  • “Rethinking, redesigning and re-engineering nurse, physician and clinician workflows to take full advantage of the capabilities of the new (and evolving) EHR tools to result in improved healthcare processes and care experiences.”

In addition to advocating for clinician-led EHR redesign, the Council’s remedy also includes having the federal government require providers to demonstrate “actual patient care improvement and better patient care process” to earn the incentives.  That’s a lovely thought, but imagine how many pages of regulation it would take for the federal government to define specific care processes that it considered to be “better” and methods to document that such care process changes were connected to the EHR technology.  Beware of what you ask for.

The last line of the Council’s paper is, perhaps, the most interesting.  The authors agree with the senators that it is time to “reboot” the meaningful use incentive program before all the money is spent.  Then, almost as a throw-away line, they add:

  • “Unless that is done, then we urge Congress to halt CMS’ “meaningful use” EHR program and spend the remainder of the “meaningful use” funds on providing financial incentives for hospitals and other providers that demonstrate “meaningful improvements in patient care” through whatever means they choose, and leave it to the healthcare providers, not our federal government, to choose the most effective means to improve patient care.”

Financial incentives for improvements in patient care are what value-based reimbursement is all about.  The Healthcare Innovation Council is basically saying that if you want effective, real improvement, rather than just superficial “compliance,” you need to pay for value.  Of course, the nation is transitioning to value-based reimbursement.  But that process is going slowly, and the percentage of reimbursement that is value-driven has tended to be small.  As I’ve argued before, incentives tied to compliance are the opposite of real improvement, no matter how hard you try to make compliance meaningful.  If so, maybe we really need to consider re-allocating the remaining meaningful use funds to speed up the transition to value-based reimbursement, rather than just rebooting meaningful use.


Compliance and Process Improvement are Opposites, no matter how hard you try to make compliance “meaningful”

I’m attending the HIMSS’13 conference in New Orleans this week, a huge assemblage of health care IT people and the bewilderingly diverse collection of technology vendors that sell to them.  The first meeting I attended was billed as a “town hall” meeting by the Federal Government’s “Office of the National Coordinator” (ONC).  The session was heavy on self-congratulatory enthusiasm, even including a demonstration of an ONC team cheer — and I mean that literally.  The session was very light on the type of challenging discussion connoted by the “town hall” concept.  No time for any questions from the microphone.  Just some time towards the end for some selected questions written on small cards, all of them softballs.

I don’t want to be too harsh on the ONC crew.   They are clearly talented, well educated and working hard for government salaries.  They clearly are inspired by the cause of improving health care.  And they are sincere in their belief that requiring or incentivizing compliance with a minimum standard set of technology capabilities is what we need.

In my experience, that is a very natural belief system for people in government agencies, accreditation organizations, philanthropic organizations, health plans, and many healthcare information technology vendors.  I admit to being guilty of going down that road myself.

But, I see it as an honest mistake.  In my experience, people within health care provider organizations are easily convinced that pursuing compliance, no matter how small the penalties and rewards involved, is the most urgent priority.  It’s a mandate, so it gets to win in any internal prioritization debate.  And, that compliance mandate travels up the food chain to all the product management executives of health information technology vendors.  I used to be one of those too, and admit to making such arguments.  As a result, the compliance features occupy all the available near-term slots in the product roadmap, and the innovations have to wait until a future phase that never arrives, because there is a fresh batch of mandates coming up quickly on the heels of the last batch.

In the ONC town hall session yesterday afternoon, the ONC leaders seemed to acknowledge that the results of earlier phases of “meaningful use” incentives had been somewhat superficial, not anything that led to transformative change yet.  They emphasized that this is a long term journey, and that we need to have reasonable expectations and be patient.  But, they failed to recognize that the superficial nature of the response by vendors and providers may be due to the very fact that it was a compliance response, rather than an internally-initiated agenda driven by the process innovations the technology is intended to support.  Rather, they just looked forward with eagerness and cheerful enthusiasm to the next round of meaningful use standards that will continue the journey.


New HIT ROI lit review is a methodologic tour de force, but misses the point

Recently, Jesdeep Bassi and Francis Lau of the University of Victoria (British Columbia) published in the Journal of the American Medical Informatics Association (JAMIA) another in a series of review articles that have been written in recent years to summarize the literature regarding the economic outcomes of investments in health information technology (HIT).  Such articles answer the questions:

  • “How much do various HIT technologies cost?”
  • “How much do they save?”
  • “Are they worth the investment?”

They reviewed 5,348 citations found through a mix of automated and manual search methods, and selected a set of 42 “high quality” studies to be summarized.  The studies were quite diverse, including a mix of types of systems evaluated, methods of evaluation, and measures included.  The studies included retrospective data analyses and some analyses based on simulation models.  The studies included 7 papers on primary care electronic health record (EHR) systems, 6 on computerized physician order entry (CPOE) systems, 5 on medication management systems, 5 on immunization information systems, 4 on institutional information systems, 3 on disease management systems, 2 on clinical documentation systems, and 1 on health information exchange (HIE) networks.

Lau HIT ROI results

Key results:

  • Overall, 70% of the studies showed positive economic results, 24% were inconclusive, and 6% were negative.
  • Of 15 papers on primary care EHR, medication management, and disease management systems, 87% showed net savings.
  • CPOE, immunization, and documentation systems showed mixed results.
  • The single paper on HIE suggested net savings, but the authors expressed doubts about the optimistic assumptions made in that analysis about a national roll-out in only ten years.
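The headline percentages are simple tallies over the selected studies.  A sketch of that arithmetic as a reusable helper, using toy outcome labels rather than the review’s actual per-study classifications (those are in the appendices Bassi and Lau published alongside the article):

```python
from collections import Counter

def outcome_breakdown(labels):
    """Whole-percent breakdown of a list of study outcome labels."""
    counts = Counter(labels)
    n = len(labels)
    return {label: round(100 * count / n) for label, count in counts.items()}

# Toy labels only -- not the review's actual per-study classifications.
toy_studies = ["positive"] * 14 + ["inconclusive"] * 5 + ["negative"] * 1
print(outcome_breakdown(toy_studies))
# → {'positive': 70, 'inconclusive': 25, 'negative': 5}
```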

My take:

Bassi and Lau have made an important contribution to the field by establishing and documenting a very good literature review methodology – including a useful list of economic measures, a nice taxonomy of types of HIT, and many other tools, which they graciously shared online for free in a series of appendices that accompany the article.  They also made a contribution by doing the tedious work of sorting through thousands of citations and classifying the HIT economics literature.

But, I think they missed the point.

Like many others, Bassi and Lau have implicitly accepted the mental model that health information technology is, itself, a thing that produces outcomes.  They evaluate it the way one would evaluate a drug or a cancer treatment protocol or a disease management protocol.  Such a conceptualization of HIT as an “intervention” is, unfortunately, aligned with the way many healthcare leaders conceptualize their investment decision as “should I buy this software?”  I admit to contributing to this conceptualization over the years, having published the results of retrospective studies and prospective analytic models of the outcomes resulting from investments in various types of health information technologies.

But, I think it would be far better for health care leaders to first focus on improvement to care processes — little things like how they can consistently track orders to completion to assure none fall through the cracks, bigger things like care transitions protocols to coordinate inpatient and ambulatory care team members to reduce the likelihood that the patient will end up being re-hospitalized shortly after a hospital discharge, and really big things like an overall “care model” that covers processes, organizational structures, incentives and other design features of a clinically integrated network.  Once health care leaders have a particular care process innovation clearly in sight, then they can turn their attention to the health information technology capabilities required to enable and support the target state care process.  If the technology is conceptualized as an enabler of a care process, then the evaluation studies are more naturally conceptualized as evaluations of the outcomes of that process.  The technology investment is just one of a number of investments needed to support the new care process.  The evaluation “camera” zooms out to include the bigger picture, not just the computers.
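The “little thing” of tracking orders to completion is easy to make concrete: at its core, the enabling technology is just a query that flags open orders older than some threshold.  A minimal sketch, with hypothetical field names and status values (a real implementation would read the EHR’s orders table):

```python
from datetime import date, timedelta

# Hypothetical order records: (order_id, date placed, status).
# Field names and statuses are illustrative, not any vendor's schema.
orders = [
    ("ord-101", date(2013, 5, 1),  "resulted"),
    ("ord-102", date(2013, 5, 3),  "in_progress"),
    ("ord-103", date(2013, 4, 20), "in_progress"),
]

def overdue_orders(orders, today, max_age_days=14):
    """Orders still open after max_age_days -- candidates for follow-up."""
    cutoff = today - timedelta(days=max_age_days)
    return [order_id for order_id, placed_on, status in orders
            if status != "resulted" and placed_on < cutoff]

print(overdue_orders(orders, today=date(2013, 5, 15)))  # → ['ord-103']
```

The point of the example is that the hard part is not the code; it is committing to the care process (someone must own the follow-up worklist) before buying the technology.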

I know this is stating the obvious.  But, if it is so obvious, why does it seem so rare?

This inappropriate conceptualization of HIT as an intervention is not limited to our field’s approach to economic evaluation studies.  It is also baked into our approach to HIT funding and incentives, such as the “Meaningful Use” incentives for investments in EHR technology, and the incentives created by HIT-related “points” in accreditation evaluations and designations for patient-centered medical home (PCMH), accountable care organizations (ACOs), organized systems of care (OSC), etc.  The designers of such point systems seem conscious of this issue.  The term “meaningful use” was intended to emphasize the process being supported, rather than the technology itself.  But, that intention runs only about one millimeter deep.  As soon as the point system designers put any level of detail on the specifications, as demanded by folks being evaluated, the emphasis on technology becomes instantly clear to all involved.  As a result, the intended focus on enabling care process improvement with technology slides back down to a requirement to buy and install software.  The people being evaluated and incentivized lament that they are being micromanaged and subject to big burdens.  But they nevertheless expend their energies to score the points by installing the software.

So, my plea to Bassi and Lau, and to future publishers of HIT evaluation studies, is to stop evaluating HIT.  Rather, evaluate care processes, and require that care process evaluations include details on the HIT capabilities (and associated one time and ongoing costs) that were required to support the care processes.


Big Data: A break-through, or just new terminology?

I don’t normally read the Harvard Business Review.  I find it a bit too cool.  Trying too hard to coin new words.  Over-simplifying complex issues.  Over-confidently setting expectations.

Recently, Harvard Business Review sponsored a web-seminar entitled “Big Data: Can You Seize the Opportunity.”  Classic HBR.  Don’t challenge the premise.  Just challenge the audience to sign up.

But, as covered by Health Data Management, two of the presenters challenged the “big data” framework — criticizing those who defined big data as “magic bullet” technology projects that can be expected to yield big short term returns, and essentially redefining the terminology to conform to their insights about how to gain long term value from analytics.  The heretics were Donald Marchand and Joe Peppard, two professors and analytics entrepreneurs from Switzerland and the U.K., respectively.  They argued that IT people have to stop thinking about building analytic applications (“design to build”) and, instead, think about developing data for use by non-IT people (“design to use”).   They emphasized the unfortunate reality that using data will dredge up problems with the data itself – problems that can’t be miraculously resolved by the big data technology (no matter what the vendor of the big data technology says…).

Marchand and Peppard acknowledged that the long-standing conflict between the “c-suite” and IT leaders can present obstacles to enterprise information management.  They pointed out that getting value from analytics requires engaging lots of non-IT people in a long-term process that involves cultural change.  People have to get used to the idea of using data to support decisions, rather than just to make the case for already-made decisions.  IT people can provide some tools, but the non-IT people need to develop new skills and competencies in order to harvest the value of big data projects.

I can’t argue with any of that.  All of that was true before the words “big data” were uttered.  Much of that is quite different than what the proponents of big data are talking about — break-through technology with magical properties that can overcome data problems and shower untrained decision-makers with insights and instant return-on-investment.  But, I really like the idea of capturing the undeniable market momentum of the term “big data” and redirecting that momentum back toward the less glamorous, but ultimately far more valuable journey toward developing useful data and learning how to use it to support evidence-based decision-making.


The race is on between triple-aimers and hospital defenders

This week, Julie Creswell and Reed Abelson published an interesting piece in the New York Times about how the war between hospital chains puts doctors in a bind.  I think the article includes many relevant facts about the consolidation of the health care market.  But, I conceptualize the conflict a little differently.

Here’s my take…

For a number of years, there has been an internal battle within the federal government.  CMS has been promoting the formation of ACOs to achieve clinical integration of physician and hospital services.  But, as I described in an earlier post, the FTC and the Justice Department have been concerned that hospitals would use ACO safe harbor provisions as cover for consolidation to increase market power and drive up prices.

CMS was confident that, if ACOs faced enough risk, the savings from clinical integration and care process transformation would outweigh cost increases from higher prices.  But, when CMS proposed ACO regulations that had the audacity to include a modest amount of downside risk for providers, they faced a huge provider backlash.  CMS buckled under the pressure, and approved final ACO regs that allowed providers to choose only upside rewards, with no downside risk.  As I’ve noted before, the vast majority of providers that ended up forming ACOs have very little skin in the game.

Within hospital organizations that formed ACOs, there is a mix of leaders.  Some leaders are enthusiastic promoters of the CMS vision of clinical integration, a resurgence of primary care, and achieving the “triple aim” of improving quality, increasing satisfaction and reducing per capita cost.  Other ACO leaders are more focused on the financial health of the hospital portion of the delivery system.  Such leaders are concerned that the per capita cost take-out will come out of the hides of hospitals, some of which will be forced to close when their revenue cannot cover their substantial fixed costs.  These two types of leaders have been unable to agree on the need to make substantial investments in clinical process transformation and the associated clinical information systems designed to achieve the cost take-out.  So, the federal government came to the rescue with HITECH funding for EMR technology.  The Feds were able to justify this funding based on the need to stimulate the economy, rather than the need to reduce health care costs.  As a result, the HITECH “meaningful use” regulations are focused more on quality than cost take-out.  Though well-meaning, the HITECH funding and the associated meaningful use regs actually created a huge distraction for IT professionals within provider organizations.  Many ACO leaders have directed their IT professionals to focus on meeting externally-defined meaningful use requirements — studying for the test — rather than doing the far more challenging work of actually figuring out how to support the triple aim through clinical process transformation.

The one thing that the two types of ACO leaders could agree upon was physician practice acquisition.  The triple-aim-oriented leaders were in favor of acquiring practices as a means to get sufficient control to achieve clinical integration.  The hospital-oriented leaders were in favor of acquiring practices as a means of controlling the hospital’s inbound referral pipeline to keep the beds filled and to increase market share to gain bargaining power against health plans.  So, practice acquisition has proceeded vigorously.  Now, the race is on to see which group of ACO leaders is going to dominate.  To this point, the triple-aim-oriented leaders remain hampered by longstanding weakness in physician leadership and governance structures (the problem of “herding cats”), the traditional dominance of specialty leaders over primary care leaders, the lack of available analytic talent, and lame clinical information systems.

Right now, the hospital-market-power-oriented leaders are out in front. There is a great risk that they will be too successful, making it too obvious that the FTC and the Justice Department were right about market consolidation. That outcome would lead the health care policy community and the public to conclude that “accountable care” is a failure, as happened with “managed care” in the 1990s, sending the idealistic triple-aimers back into exile for another decade.  But, that outcome is not yet certain. There remains a chance for some provider organizations to figure out ways to achieve the triple aim, including per capita cost take-out, and still accomplish profitability and growth, thereby disrupting the long-standing hospital-centric order. I would estimate the probability of that outcome to be less than 50%, but still likely enough to be worth fighting for.


Don’t pave the cow paths: The challenge of re-conceptualizing health care processes

There is a popular adage for information technology professionals: “Don’t pave the cow paths.”

I recently worked with a client from Texas, and they were fond of this adage. They said it with the most excellent drawl, giving it extra credibility, as if they may have actually learned it the hard way on their own ranches.

In the IT context, I interpret the adage to mean:

When designing an information technology solution for a business area, don’t just learn how they did the process manually and then create a software application to transfer that same process to computers. Rather, try to understand the underlying problem that the business process is trying to solve and the value that the business process is intended to create, and then take the opportunity to design a different process — one that is rendered feasible with enabling information technology and that delivers greater value.

When designing a new process to replace an old one, the starting point is re-conceptualization. The process designer must shed some of the terminology used to describe the old process when that terminology is too strongly tied to the details of the old process. Rather, the designer must dig down to the more essential concepts, and choose labels that are simpler and purer, seeking fresh metaphors to provide cleaner terminology. Then, the new process and the associated data structures can be re-constructed on top of a conceptual foundation that is easier to talk about, simpler to manage, and more stable.

Once we have a strong conceptual foundation, we can then flesh out the details of how the process can be made leaner and more effective, enabled by information technologies. Obviously, the proposed new process design influences the selection and configuration of enabling technologies. But, awareness of the capabilities of various technologies can also help generate ideas about candidate process designs that will be rendered feasible by the technologies. Therefore, this process is inherently iterative. The old-school philosophy of getting sign-off on detailed system requirements before considering the technology solution was a response to a valid concern that people will fall in love with a technology and then inappropriately constrain their process design accordingly. But, applying that philosophy too rigorously causes the opposite problem. If process designers don’t know what’s possible, they naturally stick with their old conceptualization, which also serves to inappropriately constrain their process design. As with most hard things, effectiveness requires finding the right balance between two undesirable extremes.

An example: the case of “registries.”  

A “registry” is a list of patients. The list includes the evidence-based services they need and whether or not they have received them. It is like a tickler file to help members of the clinical team remember what preventive services and chronic condition management services need to be done, so the team can improve their performance on quality of care metrics and provide better care to their patients.
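To make the tickler-file idea concrete, here is a minimal sketch of a registry in Python. The field names and services are illustrative placeholders of my own, not a real registry schema.

```python
from dataclasses import dataclass, field

# A registry entry as described above: a patient, the evidence-based
# services that patient needs, and whether each one has been received.
@dataclass
class RegistryEntry:
    patient_id: str
    condition: str                                        # e.g. "diabetes"
    needed_services: dict = field(default_factory=dict)   # service -> received?

def overdue(registry):
    """The 'tickler' function: (patient, service) pairs still outstanding."""
    return [(entry.patient_id, service)
            for entry in registry
            for service, received in entry.needed_services.items()
            if not received]
```

For example, `overdue([RegistryEntry("p1", "diabetes", {"HbA1c": True, "eye exam": False})])` would return `[("p1", "eye exam")]` — the list the clinical team works down to improve its quality metrics.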

But, if you dig down, the more fundamental purpose of the registry can be conceptualized as enabling care relationship management and care planning processes. Conceptually, health care providers need to know which patients they are taking care of. That’s care relationship management. It involves integrating different sources of information about care relationships, including derived care relationships based on claims data (also called “attribution”) and declared care relationships from health plans, patients and physicians. Part of the function of a registry is to clarify and make explicit those care relationships. This simple function can seem radical to clinicians who have become accustomed to an environment where such care relationships have been ambiguous and implicit.

If a physician has a care relationship with a patient, then, conceptually, he or she has a professional responsibility to make and execute a plan of care for that patient. Care planning is the process of determining which problems the patient has and what services are needed to address those problems. Conceptually, a good care planning process also includes provisions for multi-disciplinary input by members of the clinical team.  And, a good care planning process also includes decision support, including alerts for necessary things missing from the care plan, and critique of things that have been put on the care plan.  Such critique can be based on clinical grounds, such as drug-drug interactions, drug allergies and drug dosing appropriateness. Or, it can be based on evidence-based medicine or health economic grounds, such as in utilization management processes.
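The two decision-support functions described above — alerts for items missing from the care plan, and critique of items on it — can be sketched in a few lines. All rule content here (the allergy table, the expected-service table) is hypothetical, invented purely for illustration.

```python
# Hypothetical rule tables; a real system would draw these from curated
# clinical knowledge bases, not hard-coded dictionaries.
ALLERGY_CONFLICTS = {"penicillin": {"amoxicillin"}}    # allergy -> drugs to avoid
REQUIRED_FOR = {"diabetes": {"HbA1c test", "statin"}}  # problem -> expected services

def care_plan_feedback(problems, allergies, planned_services):
    alerts, critiques = [], []
    # Alert: evidence-based services the plan is missing for each problem.
    for problem in problems:
        for svc in REQUIRED_FOR.get(problem, set()) - set(planned_services):
            alerts.append(f"missing for {problem}: {svc}")
    # Critique: planned items that conflict with a recorded allergy.
    for item in planned_services:
        for allergy in allergies:
            if item in ALLERGY_CONFLICTS.get(allergy, set()):
                critiques.append(f"{item} conflicts with {allergy} allergy")
    return alerts, critiques
```

Calling `care_plan_feedback(["diabetes"], ["penicillin"], ["amoxicillin", "statin"])` would flag the missing HbA1c test and the amoxicillin/penicillin conflict — the same cognitive support, in miniature, that a full care planning module provides.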

The name “registry” is tied historically to the word “register” which is a type of paper book used for keeping lists of things. In the health care context, “registries” were used by public health officials to track births, outbreaks of infectious diseases and cancer tumors. So, when people think about chronic disease registries, their mental model of keeping a paper list is a barrier to their willingness to re-conceptualize the underlying function differently.  But, more fundamentally, a “registry” is just one type of tool to facilitate care relationship management and care planning — a tool designed to be used for a narrowly defined list of problems and services, rather than being designed for more general use.

Today, there is no single care plan for most patients.  The function of keeping track of the problems that need to be addressed is either not done or it is done in a haphazard way, peppered across various structured and unstructured encounter notes, scribbled on problem lists, auto-generated in clinical summaries based on diagnosis codes on billing records, checked off on health risk appraisals, etc.   The function of figuring out which services are necessary to address each problem is peppered across numerous clinical notes, requisition forms, e-prescribing systems, order entry systems, care management “action lists” and in the fields of registry systems.  The function of facilitating interdisciplinary input to a patient’s care occurs informally in hallway conversations, morning rounds, tumor board meetings, or, most commonly, not at all.  These are all care planning functions, but most clinicians have no familiarity with the concept of linking these diverse bits of data and process in a cleaner, simpler notion of managing a single care plan to be used and updated over time by the entire care team.  As far as they are concerned, such a notion is probably infeasible and unrealistic.  They’ve never seen a technology that can enable it to become reality.

Choosing the right leap distance.

Of course, when re-conceptualizing processes, it is possible to go too far.  Old habits, mental models, terminology, and processes die hard.  If your re-conceptualization is a great leap to a distant future state of elegantly conceptualized processes, it might end up being too difficult to convince people to take the leap with you.  Other adages come to mind:  “Don’t get in front of your headlights.” Then there is President Obama’s version: Don’t get “out over your skis.”  And my favorite, often quoted by Tom Durel, my old boss at Oceania, “Never confuse a clear vision for a short distance.”

The optimal “leap distance” is a function of motivation to change.  If people start to experience great pain in their current state and begin to fear the consequences of sticking to their old ways, change happens.  As we move forward to a world where providers are taking more economic risk and facing more severe consequences for failing to improve quality of care, we will be able to pursue bolder innovation and leap greater distances in our process and technology improvements.


Trinity Health and BCBSM sign contract to invest in infrastructure for clinical integration and population management

Trinity Health – Michigan and Blue Cross Blue Shield of Michigan (BCBSM) recently announced that they signed a three-and-a-half-year agreement under which BCBSM will provide funding to support a collaborative effort by Trinity Health and its affiliated physician organizations to improve infrastructure for clinical integration and population management.  The goal is to improve the quality of care, enhance patient experience and outcomes, and reduce health care costs.  BCBSM and Trinity Health will also collaborate to implement performance measures on patient satisfaction and quality.  These infrastructure improvements and measures will prepare Trinity Health and its affiliated physician organizations for a transition from fee-for-service reimbursement from BCBSM to a “performance-based reimbursement” or “gain sharing” arrangement, to be implemented sometime before 2016.

Trinity Health is the third health system to sign such an agreement with BCBSM, after St. John Providence Health System and Beaumont Health System.  By the end of 2012, BCBSM intends to reach agreements with additional hospitals so that 50% of its hospital spending will be subject to value-based reimbursement agreements.

More details are available in Crain’s Detroit Business.


Reports of the death of Cost-Effectiveness Analysis in the U.S. may have been exaggerated: The ongoing case of Mammography

Guidelines for the use of mammograms to screen for breast cancer have been the topic of one of the fiercest and longest-running debates in medicine.  Back in the early 1990s, I participated in that debate as the leader of a guideline development team at the Henry Ford Health System.  We developed one of the earliest cost-effectiveness analytic models for breast cancer screening to be used as an integral part of the guideline development process.  I described that process and model in an earlier blog post.  Over the intervening 20 years, however, our nation has fallen behind the rest of the world in the use of cost-effectiveness analysis to drive clinical policy-making.  As described in another recent blog post, other advanced nations use sophisticated analysis to determine which treatments to use, while Americans’ sense of entitlement and duty has turned us against such analysis — describing it as “rationing by death panels.”  Cost-effectiveness analysis and health economics are dead.

But, maybe reports of its death have been exaggerated.

A recent paper published on July 5, 2011 in the Annals of Internal Medicine described the results of an analysis of the cost-effectiveness of mammography in various types of women.  The study was conducted by John T. Schousboe, MD, PhD, Karla Kerlikowske, MD, MS, Andrew Loh, BA, and Steven R. Cummings, MD.  It was described in a recent article in the Los Angeles Times.  The authors used a computer model to estimate the lifetime costs and health outcomes associated with mammography.  They used a modeling technique called Markov microsimulation, basically tracking a hypothetical population of women through time as they transition among various health states such as being well and cancer-free, having undetected or detected cancer of various stages and, ultimately, death.

They ran the models for women with different sets of characteristics, including 4 age categories, 4 categories based on the density of the breast tissue (based on the so-called BI-RADS score), whether or not the women had a family history of breast cancer, and whether or not the women had a previous breast biopsy.  So, that’s 4 x 4 x 2 x 2 = 64 different types of women.  They ran the model for no screening, annual screening, and screening at 2, 3 or 4 year intervals.  For each screening interval, they estimated each of a number of health outcomes, and summarized all the health outcomes into a single summary measure called the Quality-Adjusted Life Year (QALY).  They also calculated the lifetime health care costs from the perspective of a health plan.  Then, they compared the QALYs and costs for each screening interval to the QALYs and costs associated with no screening to calculate the cost per QALY.  Finally, they compared the cost per QALY to arbitrary thresholds of $50K and $100K to determine whether screening at a particular interval for a particular type of woman would be considered by most policy-makers to be clearly cost-effective, reasonably cost-effective, or cost-ineffective.
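For readers unfamiliar with the technique, here is a heavily simplified sketch of a Markov microsimulation in Python. Every number in it (transition probabilities, costs, utilities) is a placeholder I invented for illustration — these are not the authors’ model inputs — and real models add discounting, age-specific risk, and stage-specific survival.

```python
import random

# Placeholder annual utility weights for each health state.
UTILITY = {"well": 1.0, "undetected_cancer": 0.95,
           "detected_cancer": 0.85, "dead": 0.0}

def simulate_woman(screen_interval=None, start_age=40, stop_age=100, seed=0):
    """Track one hypothetical woman through yearly state transitions,
    accumulating QALYs and health-plan costs until death or stop_age."""
    rng = random.Random(seed)
    state, age, qalys, cost = "well", start_age, 0.0, 0.0
    while state != "dead" and age < stop_age:
        # Annual disease transitions (made-up probabilities).
        if state == "well" and rng.random() < 0.005:
            state = "undetected_cancer"
        elif state == "undetected_cancer":
            r = rng.random()
            if r < 0.30:                 # symptomatic detection
                state = "detected_cancer"
            elif r < 0.40:               # death from occult, untreated cancer
                state = "dead"
        elif state == "detected_cancer" and rng.random() < 0.03:
            state = "dead"               # treated cancer: lower mortality
        # Periodic screening can detect occult cancer earlier.
        if (screen_interval and state in ("well", "undetected_cancer")
                and (age - start_age) % screen_interval == 0):
            cost += 150.0                # placeholder mammogram cost
            if state == "undetected_cancer" and rng.random() < 0.85:
                state = "detected_cancer"
        if state == "detected_cancer":
            cost += 5000.0               # placeholder annual treatment cost
        qalys += UTILITY[state]
        age += 1
    return qalys, cost

def cost_per_qaly(n=5000, interval=2):
    """Incremental cost per QALY gained: screening vs. no screening."""
    base = [simulate_woman(None, seed=i) for i in range(n)]
    scr = [simulate_woman(interval, seed=i) for i in range(n)]
    d_qaly = (sum(q for q, _ in scr) - sum(q for q, _ in base)) / n
    d_cost = (sum(c for _, c in scr) - sum(c for _, c in base)) / n
    return d_cost / d_qaly if d_qaly else float("inf")
```

The resulting cost-per-QALY figure is then compared against thresholds like $50K or $100K, exactly as the authors did for each of their 64 patient types. (Using the same seeds for both arms pairs the simulated women, a common variance-reduction trick.)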

The authors took all those cost-effectiveness numbers and tried to convert them into a simple guideline:

“Biennial mammography cost less than $100 000 per QALY gained for women aged 40 to 79 years with BI-RADS category 3 or 4 breast density or aged 50 to 69 years with category 2 density; women aged 60 to 79 years with category 1 density and either a family history of breast cancer or a previous breast biopsy; and all women aged 40 to 79 years with both a family history of breast cancer and a previous breast biopsy, regardless of breast density. Biennial mammography cost less than $50 000 per QALY gained for women aged 40 to 49 years with category 3 or 4 breast density and either a previous breast biopsy or a family history of breast cancer. Annual mammography was not cost-effective for any group, regardless of age or breast density.”

Not exactly something that rolls off the tongue.  But, with electronic patient registries and medical records systems that have rule-based decision-support, it should be feasible to implement such logic.  Doing so would represent a step forward in terms of tailoring mammography recommendations to specific characteristics that drive a woman’s breast cancer risk.  And, it would be a great example of how clinical trials and computer-based models work together, and a great example of how to balance the health outcomes experienced by individuals with the economic outcomes borne by the insured population.  It’s not evil.  It’s progress.
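As a rough illustration of that feasibility, the quoted guideline can be transcribed directly into rule-based logic. This is a sketch to show the logic is machine-implementable, not clinical software; `density` is the BI-RADS category (1–4) and `age` is in years.

```python
def biennial_under_100k_per_qaly(age, density, family_history, prior_biopsy):
    """Groups for which biennial screening cost < $100K/QALY, per the quote."""
    return ((40 <= age <= 79 and density in (3, 4)) or
            (50 <= age <= 69 and density == 2) or
            (60 <= age <= 79 and density == 1
                and (family_history or prior_biopsy)) or
            (40 <= age <= 79 and family_history and prior_biopsy))

def biennial_under_50k_per_qaly(age, density, family_history, prior_biopsy):
    """Groups for which biennial screening cost < $50K/QALY, per the quote."""
    return (40 <= age <= 49 and density in (3, 4)
            and (family_history or prior_biopsy))
```

A registry or EMR rule engine evaluating these predicates against each woman’s recorded age, breast density, family history and biopsy history could surface a tailored screening recommendation at the point of care.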

It will be interesting to see if breast cancer patient advocacy groups, mammographers and breast surgeons respond as negatively to the authors’ proposal as they did to the last set of guidelines approved by the U.S. Preventive Services Task Force, which called for a reduction in recommended breast cancer screening in some categories of women.

