Beautiful video from Cleveland Clinic advocating empathy

We focus on optimizing care processes, analyzing data, implementing technology, and the other technical and scientific aspects of our work to improve health care.  But, this video created by the Cleveland Clinic is a touching reminder of the human element, advocating for empathy in our work.

Read More

Michigan physicians are more focused on medical homes and accountable care organizations and more optimistic about careers in medicine. Coincidence?

A recent article in Crain’s Detroit Business reported the results of a national physician survey conducted by The Doctors’ Company, a large malpractice insurer. According to the survey, 10% of physicians nationally report plans to convert their practice to a “medical home” model. But, that number was 27% for Michigan. Nationally, only 14% of physicians are interested in joining accountable care organizations (ACOs). But, in Michigan this number was 25%.

Why are Michigan doctors twice as interested in medical homes and accountable care?

Steven Newman, M.D., president of the Michigan State Medical Society, attributed it to the fact that Michigan physician organizations have been working on this for a long time. I was proud to see that he called out Blue Cross Blue Shield of Michigan as being one of the drivers. BCBSM’s Patient Centered Medical Home designation program has been going on for 4 years.  Its Physician Group Incentive Program (PGIP) has been going on for 8 years.  Its Collaborative Quality Initiatives (CQIs) have been going on for more than a decade.

Unfortunately, these things take time.  Fortunately, in Michigan, we started a long time ago.

My former boss, Tom Simmer, MD, CMO of BCBSM, has consistently emphasized the importance of using time as a lever of change.  And, he insisted on staying positive and respectful, focusing on building energy that can “catalyze” change by physicians and other health care professionals.  It seems to have paid off in terms of physician interest in medical home and accountable care transformation.  Although it is difficult to measure “energy,” a physician’s willingness to recommend a career in health care is as good a metric as any.  According to the physician survey, only 11% of physicians nationally would recommend a career in health care.  In Michigan, that number is 53%.  That’s huge.

We still have a lot more work to do before we can say that this long slog has resulted in substantial, measurable improvements in the overall quality of care, and overall reductions in per capita health care spending.  But, at least we have solid indicators that hearts and minds are optimistic and energized among those who will drive those improvements.

Read More

To resolve conflicts, re-frame polar positions as optimization between undesirable extremes. But, sometimes there is no way to win.

In politics and professional life, achieving success requires the ability to resolve conflicts.  I’ve noticed that conflicts often become entrenched because the opposing parties both simplify the conflict in black and white terms. They conceptualize their own interest as moving in one direction. And, they conceptualize their opponent as wanting to move in the opposite, wrong direction. As a result, the argument between the parties generates no light, only heat.  Each side only acknowledges the advantages of their direction and the disadvantages of the opposing direction. Neither side seeks to really understand and learn from the arguments offered by the other side.

When I’ve had the opportunity to mediate such conflicts, I have almost always used the same strategies.

  • Step One.  I try to move both parties to some common ground, moving back to some basic statement that seemingly nobody could disagree with.  This generates a tiny bit of agreement momentum.
  • Step Two. Apply the momentum generated in step one to getting the parties to agree that, if you took each party’s position to an extreme, the result would be undesirable. The parties are inherently agreeing to re-conceptualize the disagreement from being a choice between polar opposite positions to an optimization problem. The idea is to choose an agreeable value on some spectrum between undesirable extremes.  If the parties make this leap, we are halfway to sustainable agreement.
  • Step Three.  Get the parties to agree to avoid talking about the answer, and focus on reaching consensus on the factors and assumptions that influence the selection of the optimal answer.  Sometimes, this can be done subjectively, simply listing the factors.  Other times, it is worthwhile to get quantitative, working together on assumptions and calculations to estimate the magnitude and uncertainty of the outcomes at various points along the spectrum of alternative answers.  This quantitative approach has been described as the “explicit method,” and an example of applying it to resolve fierce disagreements about mammography guidelines is described in an earlier post.
  • Step Four.  Finally, ask the parties to apply their values to propose and explain an optimum answer, from their point of view.  In this step, the important point is to insist that the parties acknowledge that they are no longer arguing about facts or assumptions, since consensus has already been achieved on those.  If not, then go back to step three. The objective is to untangle and separate factual, logical, scientific debates from the discussion of differences in values.  If those remain tangled up, the parties inevitably resort to talking at each other, rather than engaging in productive dialog.
  • Step Five.  Try to achieve a compromise answer.  In my experience, if you’ve really completed steps 1-3, this ends up being fairly easy.
  • Step Six.  Work to sustain the compromise.  Celebrating the agreement, praising the participants for the difficult work of compromise, documenting the process and assumptions, and appealing to people to not disown their participation in the process are all part of this ongoing work.   Passive aggressiveness is the standard operating model in many settings, part of the culture of many organizations.  And, it is a very difficult habit to break.

Of course, in the routine work of mediating conflicts, I don’t really explicitly go through these six steps. This conflict resolution approach is in the back of my mind. They are really more like habits than steps.

Sometimes this approach works. Sometimes, it does not.  It can break at any step.

Notice that breakdowns in most of the steps are basically people issues. People won’t change their conceptualization. They are unwilling to make their assumptions explicit. They are unwilling to acknowledge differences in values. They are unwilling to compromise.

But, sometimes, the process breaks because of the nature of the issue being debated. Sometimes, conceptualizing the debate as an optimization problem between two undesirable extremes fails because there are really not good choices along the spectrum.

For example, when debating the design of a program or policy, I have often encountered a no-win trade-off between keeping it simple vs. addressing each party’s unique circumstances.  If I keep it too simple, people complain that it is a “hammer,” failing to deal with their circumstances.  If I add complexity to deal with all the circumstances, people complain that it is a maze or a contraption.  If I select some middle level of complexity, the complaints are even worse because the pain of complexity kicks in before the value of complexity is achieved.

I’ve seen this no-way-to-win scenario in my own work, in the design of information systems, wellness and care management protocols, practice guidelines and protocols, analytic models, organizational structures, governance processes, contractual terms, and provider incentive programs.  And, I’ve seen this scenario in many public policy debates, such as debates about tax policy, tariffs, banking regulations, immigration, education, and health care reform.  In cases when the extremes are more desirable than the middle ground, the only approach I can think of is to bundle multiple issues together so that one party wins some and the other party wins others, to facilitate compromise.

Read More

Health Care Heroes: Don Berwick, MD – Adapting industrial quality improvement principles to the improvement of health care processes

Last week, Don Berwick, MD, announced his resignation as Administrator of the Centers for Medicare and Medicaid Services (CMS).  Now is a good time to explain why Dr. Berwick is one of my all time health care heroes.

Don Berwick as one of the Notre Dame Four Horsemen on Ward's coffee mug

Apparently, I talk about Dr. Berwick a lot. A few years ago, I received one of my most treasured gifts from colleagues at Blue Cross Blue Shield of Michigan (BCBSM).  It was a coffee mug featuring the famous photograph of the Four Horsemen of Notre Dame, a reference to my undergraduate alma mater.  My colleagues replaced the faces of three of the horsemen with the faces of three of my health care heroes, Drs. Paul Ellwood (the person who coined the terms “health maintenance organization” and “outcomes management”), David Eddy (the clearest thinker on the topics of clinical practice policies and the rational allocation of health care resources), and Don Berwick. The face of the fourth horseman they replaced with my own face.  I considered it a great honor to be associated with my heroes, at least on a coffee mug.

My team at BCBSM had heard me repeatedly explain Dr. Berwick’s important contribution to adapting the quality improvement principles that had been successfully used in manufacturing to the health care field.  Others had been involved in promoting “continuous quality improvement,” “statistical process control,” and “total quality management” in health care. Paul Batalden, Brent James, Eugene Nelson, and Jack Billi come to mind, to name but a few. But, in my opinion, it has always been Berwick who has been the most eloquent and persuasive. He connected the statistical tools emphasized by James with the front-line worker involvement emphasized by Batalden. And, he was able to describe how these approaches applied to clinical decision-making as well as care delivery.

At the heart of Dr. Berwick’s contribution was teaching us all to distinguish between the “Theory of Bad Apples” and the “Theory of Continuous Improvement.”

According to the Theory of Bad Apples, errors come from “shoddy work” by people with deficient work performance.  Leaders who uphold this theory focus on inspection to identify such deficient performance, indicated by the undesirable tail in the distribution of provider performance as shown on the left side of the diagram above.  Then, such leaders focus on holding the bad performers “accountable” by applying disciplinary measures intended to motivate improvement in performance and by pursuing other interventions intended to remediate the bad performance.  In the health care context, the workers are physicians and the shoddy work is poor quality health care. According to Berwick, the predictable defensive response by the physicians who are targeted for such remedial attention includes three elements: (1) kill the messenger, (2) distort the data, and (3) blame somebody else.

Berwick advocates instead for the Theory of Continuous Improvement.  The basic principles of this theory are

  • Systems Thinking: Think of work as a process or a system with inputs and outputs
  • Continual Improvement: Assume that the effort to improve processes is never-ending
  • Customer Focus: Focus on the outcomes that matter to customers
  • Involve the Workforce: Respect the knowledge that front-line workers have, and assume workers are intrinsically motivated to do good work and serve the customers
  • Learn from Data and Variation to understand the causes of good and bad outcomes
  • Learn from Action: Conduct small-scale experiments using the “Plan-Do-Study-Act” (PDSA) approach to learn which process changes are improvements
  • Key Role of Leaders: Create a culture that drives out fear, seeks truth, respects and inspires people, and continually strives for improvement

T-Shirt of "Berwickians" -- the staff of epidemiologists and biostatisticians at BCBSM

Berwick argued the point made by Dr. Deming:  if we can reduce fear, people will not try to distort the data.  When learning is guided by accurate information and sound rules of inference, when suppliers of service remain in dialog with those that depend upon them, and “when the hearts and talents of workers are enlisted in the pursuit of better ways, the potential for improvement in quality is nearly boundless.”

I first was influenced by Dr. Berwick back in the 1980’s when he championed these ideas during his tenure at the Harvard Community Health Plan, and subsequently during the 1990’s when he led the National Demonstration Project on Quality Improvement in Health Care and the Institute for Healthcare Improvement.  His face was already on my coffee mug at the time he was nominated to lead CMS.  I was thrilled that someone from our community of people dedicated to clinical process improvement had been recognized and would be serving in a position of such influence.

The Irony of the Political Opposition to Berwick’s Role as CMS Administrator

Dr. Berwick’s candidacy as CMS Administrator faced stiff opposition from Republican leaders who were angry about anything connected to the health care reform law or, for that matter, the Obama administration itself.  The President made the decision to evade this opposition by making a recess appointment of Dr. Berwick.  But, such recess appointments have a limited lifespan.  As the deadline for making a formal, congressionally sanctioned appointment approached at the end of the 2011 legislative session, 42 Republican senators signed a letter reiterating their disapproval of Dr. Berwick as CMS Administrator.   The arguments against Dr. Berwick’s  candidacy, both at the time of his original nomination and again over the last few months, centered around comments that Dr. Berwick has made praising the British health care system.  They concluded from his comments that he was in favor of redistributing wealth to the poor and of rationing, the dreaded “R” word, the thing done by “death panels!”  He was, therefore both a bleeding heart and heartless at the same time.  Dr. Berwick denied these charges, but the opposition was unconvinced and unwilling to back down from a position of persistent opposition to anything connected to “Obamacare.”

The irony is that, of the heroes on my coffee mug, Dr. Berwick is not the one deserving of praise for having insight and bravery concerning the basic tenets of health economics. Instead, it was Dr. David Eddy’s face that was on my coffee mug because he was brave enough to publish numerous papers in the Journal of the American Medical Association explaining why rationing was the right thing to do (e.g. this one and another one).  Eddy argued that creating evidence-based “practice policies” that rationally allocated health care resources using “explicit methods” was preferable to using implicit methods supported only by “global subjective judgment.”  What a radical thought!

Despite my great admiration for Dr. Berwick, he was the hero that disappointed me as a rationing denier.  In fact, in a 2009 paper published in Health Affairs entitled “What ‘Patient-Centered’ Should Mean: Confessions of an Extremist,” he eloquently argued that we should give any patient whatever they wanted, regardless of the cost and regardless of the evidence of effectiveness.  He discounted the role of the physician as a steward of resources.  I felt the argument was heartfelt and humanistic.  But, I felt it was a cop out.  How strange, then, that the Republican opposition hoisted him on the rationing petard.

Looking Forward to Berwick’s Next Journey

Although it is disappointing to me that Dr. Berwick will no longer be leading CMS, I am eager to see what he chooses to do next.  I’m sure he will continue to make a great contribution to our field.  Without all the administrative and political duties to clog up his day, perhaps we are about to witness a surge in his ongoing contributions to improving health care.

More information: See Health Affairs article and associated Health Affairs Blog Post praising Dr. Berwick.

Read More

Primary care physicians acknowledge over-utilization and blame it on the lawyers.

Catching up on some reading, I came across this article in Medical News Today, describing the results of survey research conducted by Brenda E. Sirovich, MD, MS, and colleagues from the VA Outcomes Group (White River Junction, Vermont) and the Dartmouth Institute for Health Policy and Clinical Practice.  They surveyed primary care physicians and published their results in the Archives of Internal Medicine.  They documented that primary care physicians acknowledge over-utilization of services received by their patients.

Their #1 theory of causation?  “It’s because of malpractice lawyers!” That is not surprising to me, and is consistent with many conversations with both front line PCPs and leaders of primary care physician organizations.

However, I personally believe that this is really the #1 rationalization of the over-utilization.  I feel that there are two main causes:

  1. Low fee-for-service reimbursement, creating the need for many short visits each day to generate enough revenue to make a good living (i.e. the “hamster wheel”).  When visits need to be short, prescriptions and referrals are important to make the patient feel satisfied that their problem is really being addressed.
  2. Lack of effective clinical leadership or even peer interaction over the actual clinical decision-making (i.e. “care-planning”) done on a day-to-day basis by the vast majority of primary care physicians

Beyond the medical school and residency stage, physicians’ care planning occurs all alone, with no one looking over their shoulder — at least no one with sufficient quantity and quality of information to make any real assessment of clinical decision-making.  Health plans have tried to do so with utilization management programs, but the poor quality of information and the relationship distance between the physician and the health plan are too great to generate much more than antipathy.

If you eliminated malpractice worries and paid primary care physicians a monthly per-capita fixed fee, would wasteful over-utilization go down without also providing deeper clinical leadership and peer review enabled by better care planning data?  Perhaps.  But I would worry that, in that scenario, physicians would still stick with their old habits of hitting the order & referral button out of habit to please the patients who have been habituated to think of “lots of orders and referrals” as good primary care.

The “mindfulness” thing in the invited commentary by Calvin Chou, MD, PhD, from the University of California, San Francisco, is a bit much — trying too hard to coin a term.  I’ve heard that presented before, and I categorized it with “stages of change,” “empowerment,” “self-actualization,” “motivational interviewing,” and “patient activation.”   I’m not saying that such popular psychological/sociological concepts have no merit.  I’m just a Mid-Westerner who starts with more conventional theories of behavior.

Read More

How do we reduce errors in software and data analysis? Culture of Accountability vs. Culture of Learning

A young colleague recently wrote to me complaining of frustration from having to deal with a high rate of errors in software development and data analysis.  Any time you are innovating in a knowledge-intensive field such as health care, you will need to develop new software and analyze data in new ways.  Errors will inevitably result.  There’s no easy way to avoid them. Therefore, reducing errors in software development and analysis is a lifelong battle for healthcare innovators.

The conventional philosophy of reducing errors is the following:

  1. Make sure everyone clearly knows what they are responsible for
  2. Make sure you use a tightly controlled development process with clear steps, checkpoints, milestones and gates
  3. Make sure you have everything well documented, using documents created from highly detailed templates designed to assure that nothing is forgotten
  4. Make sure you have detailed testing scenarios designed in advance, and that you do “regression testing” to assure that changes to one part of a system or analysis do not cause the testing scenarios to fail
  5. Make sure everyone understands the consequences of errors, both to the organization and to them personally

These are the pillars of rigorous project management.

But, unfortunately, experience teaches that sometimes this philosophy can have some unintended consequences.  Sometimes, errors still occur. Little errors, like bugs.  And big errors, like creating something that nobody needs or wants.

For example, when you have a tightly controlled process, sometimes that communicates to people that you intend for the process to be linear, rather than iterative.  Even when you say “let’s do this iteratively,” all the steps, milestones and gates tell people that you really mean the opposite.

When you create a highly detailed template, intended to assure that nothing is forgotten, you unintentionally switch people into a mode of “filling out the form,” rather than the much harder and more valuable work of figuring out how to effectively teach the most important concepts to the reader.  And, you unintentionally convert your quality assurance process into one that emphasizes adherence to the template, rather than the quality of the underlying ideas being taught.

When you create detailed testing scenarios, you unintentionally encourage the team to treat “passing the tests” as quality, rather than challenging the software or the analytic results with tests designed from insight into how the software or the analytic calculations are actually structured and what types of errors are most likely.  A software developer I know describes that as “testing smarter.”

Finally, when you communicate to people the consequences to them personally of messing up, intending to increase their motivation to do error-free work, you unintentionally tell them to allocate more of their time to avoiding blame and documenting plausible deniability.  And, you unwittingly tell them to bury the problems that could provide the insights needed to drive real improvement.
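To make the “testing smarter” idea concrete, here is a small hypothetical sketch. The function and its thresholds are invented for illustration; the point is the difference between replaying one pre-written scenario and probing the boundaries where the implementation is actually most likely to break:

```python
def risk_category(age):
    """Hypothetical classifier: band a patient's age for screening outreach."""
    if age < 0:
        raise ValueError("age cannot be negative")
    if age < 40:
        return "low"
    if age < 65:
        return "medium"
    return "high"

# A pre-written scenario test exercises one typical path:
assert risk_category(50) == "medium"

# "Testing smarter": tests chosen from knowledge of the code's structure,
# aimed at the boundaries where off-by-one errors actually live.
assert risk_category(39) == "low"
assert risk_category(40) == "medium"   # boundary: is it < or <= ?
assert risk_category(64) == "medium"
assert risk_category(65) == "high"     # boundary
try:
    risk_category(-1)                  # invalid input is rejected, not binned
except ValueError:
    pass
```

The scenario test alone would pass even if every boundary were off by one; the structure-informed tests would not.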

W. Edwards Deming

W. Edwards Deming famously advocated for “driving out fear.”  In his landmark book, “Out of the Crisis,” published back in 1986, Deming explains that people fear new knowledge because it could reveal their failings and because it could lead to changes that could threaten their security.  Focusing on motivating people might be a good idea if the problem is inadequate motivation.  But, in my experience, poor performance is usually not an issue of motivation, especially in professional settings. More likely, poor performance is an issue of poor tools, poor training (leading to inadequate knowledge or skills), or having the wrong talent mix for the job.

That last one — talent — is a tricky one.  We consider it enlightened to assume that everyone could do a great job if only they received the right tools and training. Saying someone lacks the necessary talent for a particular job can be considered arrogant and wrong.  I think this may be because talent is an unchangeable characteristic, and we are taught that it is wrong to judge people by other unchangeable characteristics such as gender or race.  But, each person was given their own unique mix of talents.  They will make their best contribution and achieve their highest satisfaction if they are in a role that is a good fit for their talents.  On the other hand, it is devilishly hard to tell the difference between unchangeable talents vs. changeable skills and knowledge.  And, developing people’s skills and knowledge is hard work and requires patience. As a result, it is too easy for leaders to get lazy and waste real talent.  Finding the right balance between optimism and realism about people’s potential requires maturity, effort and some intuition.  If in doubt, take Deming’s advice and err on the side of optimism.

I’m not arguing against processes, documentation, test scenarios or accountability.  But, I am suggesting to be careful about the unintended consequences of taking those things too far and relying on them too much.

My advice to my colleague was to focus more on the following:

  1. Make sure you hire really talented people, and then invest heavily in developing their knowledge and skills
  2. Make sure you are actually analyzing data the moment you start capturing it, rather than waiting a long time to accumulate lots of data only to discover later that it was messed up all along
  3. Make sure you do analysis and develop software iteratively, with early iterations focused on the hardest and most complex part of the work to make sure you don’t discover late in the game that your approach can’t handle the difficulty and complexity
  4. Most importantly, create a culture of learning, where people feel comfortable sharing their best ideas, talking about errors and problems, taking risks, and making improvements

Read More

What can we learn from the managed care backlash of the 1990s? Can we avoid an ACO backlash?

Advocates of “accountable care organizations” (ACOs) are careful to avoid the terminology of “managed care,” which is widely viewed as a failed model from the 1980s and 1990s.  But, there are obvious similarities between ACOs and managed care.  Both involve an organization taking responsibility for the quality and cost of care for a defined population.  Both emphasize the importance of primary care as the foundation of a coordinated and efficient health care delivery system.  Both involve economic incentives to physicians to improve quality and slow the upward trend in total cost of care.

But, we all remember the strong backlash against managed care during the late 1990s.   Although almost 10% of the U.S. population is still served by HMOs, the managed care vision has been largely in exile for more than a decade now.   PPOs are now the dominant model, with relatively small financial incentives to patients to seek their care from providers within relatively large provider networks.  Many PPOs have dabbled in “pay for performance,” but the physician incentives involved have been relatively small and the performance bar set relatively low.  The use of more heavy-handed managed care approaches has declined significantly.  For example, plans usually don’t require a referral authorization by a “gatekeeper” primary care physician before granting access to specialists.  And the use of pre-authorization by health plan staff for many expensive procedures has declined significantly.   Health plans did not drop these heavy-handed approaches because they became convinced they were ineffective.  They dropped them because they feared they would face a consumer backlash and lose membership.

So, what can we learn from the managed care backlash?  And what can we do differently to avoid an “ACO backlash?”

I went back to some research done during the height of the managed care backlash to refresh my memory of how bad it was, and why it happened.  Most helpful was a paper from 1997 in Health Affairs by Robert Blendon and other researchers at Harvard and the Kaiser Family Foundation. Blendon reported survey results showing that Americans hated managed care companies even more than they hated banks and oil companies.

Blendon’s survey results showed that a significant proportion of Americans experienced hassles and other problems with managed care plans.  These common, minor problems were hypothesized to serve as the seeds of stronger dissatisfaction and distrust.  The survey also showed that the public overestimated the frequency of rare events that are dramatic and threatening.  For example, 66% thought that HMOs sometimes or often hold back on a child’s cancer treatment.  And 73% thought that HMOs send newborn babies home after just one day, in spite of mothers’ concerns about their children’s health.  As shown in the following graph, there was a dose-response relationship between the “heaviness” of the health plan and the degree of mistrust that the health plan would do the right thing if the member got sick.

Blendon concluded that the backlash against managed care was primarily driven by mistrust and fear, leading to calls for government regulation and reducing the market demand for managed care.  I created the following “cause-effect” diagram to illustrate this theory.

So, what can we do differently this time around?  We must do a far better job of building trust. That will require actually being trustworthy.  And, it will require being more proactive about communicating trustworthiness.

This topic is so central to the success of ACOs that it deserves a lot more attention by people who have expertise in public opinion, market communication, and brand development.  But, here is my proposed starting point for developing a strategy to build trust in ACOs and other innovative models of health care finance and delivery.

Read More

Reports of the death of Cost-Effectiveness Analysis in the U.S. may have been exaggerated: The ongoing case of Mammography

Guidelines for the use of mammograms to screen for breast cancer have been the topic of one of the fiercest and longest-running debates in medicine.  Back in the early 1990s, I participated in that debate as the leader of a guideline development team at the Henry Ford Health System.  We developed one of the earliest cost-effectiveness analytic models for breast cancer screening to be used as an integral part of the guideline development process.  I described that process and model in an earlier blog post.  Over the intervening 20 years, however, our nation has fallen behind the rest of the world in the use of cost-effectiveness analysis to drive clinical policy-making.  As described in another recent blog post, other advanced nations use sophisticated analysis to determine which treatments to use, while Americans’ sense of entitlement and duty have turned us against such analysis — describing it as “rationing by death panels.”  Cost-effectiveness analysis and health economics are dead.

But, maybe reports of its death have been exaggerated.

A recent paper published on July 5, 2011 in the Annals of Internal Medicine described the results of an analysis of the cost-effectiveness of mammography in various types of women.  The study was conducted by John T. Schousboe, MD, PhD, Karla Kerlikowske, MD, MS, Andrew Loh, BA, and Steven R. Cummings, MD.  It was described in a recent article in the Los Angeles Times.  The authors used a computer model to estimate the lifetime costs and health outcomes associated with mammography.  They used a modeling technique called Markov microsimulation, basically tracking a hypothetical population of women through time as they transition among various health states such as being well and cancer-free, having undetected or detected cancer of various stages and, ultimately, death.

They ran the models for women with different sets of characteristics, including 4 age categories, 4 categories based on the density of the breast tissue (based on the so-called BI-RADS score), whether or not the women had a family history of breast cancer, and whether or not the women had a previous breast biopsy.  So, that’s 4 x 4 x 2 x 2 = 64 different types of women.  They ran the model for no screening, annual screening, and screening at 2, 3 or 4 year intervals.  For each screening interval, they estimated a number of health outcomes, and summarized all the health outcomes into a single summary measure called the Quality-Adjusted Life Year (QALY).  They also calculated the lifetime health care costs from the perspective of a health plan.  Then, they compared the QALYs and costs for each screening interval to the QALYs and costs associated with no screening to calculate the cost per QALY.  Finally, they compared the cost per QALY to arbitrary thresholds of $50K and $100K to determine whether screening at a particular interval for a particular type of woman would be considered by most policy-makers to be clearly cost-effective, reasonably cost-effective, or cost-ineffective.
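That final comparison step, incremental cost per QALY judged against the $50K and $100K thresholds, is simple arithmetic. Here is a minimal sketch in Python; the lifetime cost and QALY figures below are invented for illustration and are not the published estimates:

```python
# Sketch of the cost-per-QALY comparison step. All input numbers are
# hypothetical; in the paper, these inputs come from the Markov
# microsimulation model.

def icer(cost_screen, qaly_screen, cost_none, qaly_none):
    """Incremental cost-effectiveness ratio: added cost per QALY gained."""
    return (cost_screen - cost_none) / (qaly_screen - qaly_none)

def classify(cost_per_qaly):
    """Map a cost-per-QALY figure onto the paper's arbitrary thresholds."""
    if cost_per_qaly < 50_000:
        return "clearly cost-effective"
    if cost_per_qaly < 100_000:
        return "reasonably cost-effective"
    return "cost-ineffective"

# Hypothetical woman: biennial screening adds $3,000 in lifetime cost
# and 0.04 QALYs relative to no screening.
ratio = icer(cost_screen=18_000, qaly_screen=21.48,
             cost_none=15_000, qaly_none=21.44)
print(round(ratio), classify(ratio))  # ~75000, reasonably cost-effective
```

Published analyses also discount future costs and QALYs and report uncertainty ranges around the ratio; this sketch skips both.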

The authors took all those cost-effectiveness numbers and tried to distill them into a simple guideline:

“Biennial mammography cost less than $100 000 per QALY gained for women aged 40 to 79 years with BI-RADS category 3 or 4 breast density or aged 50 to 69 years with category 2 density; women aged 60 to 79 years with category 1 density and either a family history of breast cancer or a previous breast biopsy; and all women aged 40 to 79 years with both a family history of breast cancer and a previous breast biopsy, regardless of breast density. Biennial mammography cost less than $50 000 per QALY gained for women aged 40 to 49 years with category 3 or 4 breast density and either a previous breast biopsy or a family history of breast cancer. Annual mammography was not cost-effective for any group, regardless of age or breast density.”

Not exactly something that rolls off the tongue.  But, with electronic patient registries and medical records systems that have rule-based decision support, it should be feasible to implement such logic.  Doing so would represent a step forward in tailoring mammography recommendations to the specific characteristics that drive a woman’s breast cancer risk.  And, it would be a great example of how clinical trials and computer-based models work together, and of how to balance the health outcomes experienced by individuals with the economic outcomes borne by the insured population.  It’s not evil.  It’s progress.
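To illustrate that feasibility, the $100K-per-QALY portion of the quoted rule transcribes almost directly into decision-support logic. The function and parameter names below are mine, but the conditions are a direct reading of the quoted text:

```python
def biennial_under_100k(age, birads, family_hx, prior_biopsy):
    """True if biennial mammography cost less than $100K per QALY gained,
    per the authors' summary quoted above."""
    # Age 40-79 with BI-RADS category 3 or 4 breast density
    if 40 <= age <= 79 and birads in (3, 4):
        return True
    # Age 50-69 with category 2 density
    if 50 <= age <= 69 and birads == 2:
        return True
    # Age 60-79, category 1 density, plus family history or prior biopsy
    if 60 <= age <= 79 and birads == 1 and (family_hx or prior_biopsy):
        return True
    # Age 40-79 with both family history and prior biopsy, any density
    if 40 <= age <= 79 and family_hx and prior_biopsy:
        return True
    return False
```

A registry could run a rule like this against each patient's recorded age, BI-RADS score, and history fields to flag who should be offered biennial screening.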

It will be interesting to see if breast cancer patient advocacy groups, mammographers, and breast surgeons respond as negatively to the authors’ proposal as they did to the last set of guidelines approved by the U.S. Preventive Services Task Force, which called for a reduction in recommended breast cancer screening in some categories of women.


Read More

How to get a clearer picture of economic performance: Standard Cost and Payer Neutral Revenue

This morning, I read a set of slides published by the American Hospital Association (AHA) and the Health Research and Education Trust (HRET) called “Striving for Top Box: Hospitals Increasing Quality and Efficiency.”

The report showcased the clinical integration and performance improvement efforts of three hospital-based health systems: Novant (North Carolina), Piedmont (Georgia) and Banner (Arizona). The report identified some common cultural characteristics and strategies responsible for their success, including precise execution, accountability for performance improvement, engaged physicians, focused action plans, consistent communication, team-oriented approaches, transparent data sharing, a data-dependent atmosphere, and standardization of processes.   The slides are clearly written and are a worthwhile read.  I agree with essentially all the points made.  A concise summary based on actual case studies is a good thing.

One point, however, went beyond stating the obvious and got me thinking.   Novant, a health system with 12 hospitals serving the Charlotte and Winston-Salem metropolitan areas, described one of its two key strategies as “moving toward a payer-neutral revenue (PNR) system.” Novant considers all payers as if they were Medicare to prepare for a day when lower payments could be a reality.  They do this by running claims to be submitted to other health plans through a payment algorithm based on Medicare reimbursement rules and rates.  They use the resulting data to prepare pro forma financial statements showing which service lines would be profitable if all health plans paid like Medicare.
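Novant's actual payment algorithm is not public, but the core idea of PNR repricing can be sketched in a few lines. Everything here is hypothetical: the fee schedule, the claim structure, and the function names are mine, and real Medicare pricing involves far more than a flat fee lookup (modifiers, geographic adjustments, DRGs for inpatient care).

```python
# Hypothetical Medicare-style fee schedule keyed by procedure code.
MEDICARE_FEES = {"71020": 30.00, "70553": 450.00, "99213": 70.00}

def reprice_claim(claim_lines, fee_schedule=MEDICARE_FEES):
    """Reprice a claim as if every payer paid the Medicare rate,
    regardless of the rate actually negotiated with that payer."""
    return sum(fee_schedule[line["code"]] * line["units"]
               for line in claim_lines)

# One MRI (70553) and two office visits (99213), billed to any payer,
# are revalued at Medicare rates for the pro forma statements.
claim = [{"code": "70553", "units": 1}, {"code": "99213", "units": 2}]
pnr_revenue = reprice_claim(claim)
```

Summing repriced claims by service line is what lets Novant see which lines would stay profitable if every payer paid like Medicare.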

How does Payer Neutral Revenue relate to standard cost and standard revenue?

On the health plan side, clinical utilization efficiency measures have traditionally been based on “units” on paid claims.  But, units for different procedure codes within a single category of utilization can have vastly different economic implications.  Three MRIs plus one chest x-ray equal four units of radiology utilization.  If I eliminate the inexpensive chest x-ray, is it meaningful to say I improved clinical efficiency by 25%?

In response to these concerns, financial analysts in health plans have traditionally focused on per-member-per-month (PMPM) paid claim dollars.  But, if the health plan negotiated different fees for different providers, then the PMPM paid claims measures reflect a mixture of the providers’ clinical prudence and the health plan’s effectiveness in negotiating lower fees.  If a provider organization in the health plan’s provider network has a high paid claims PMPM cost, it could be because of unwarranted clinical utilization, or it could be that the provider organization successfully negotiated a higher price per service.

To create a measure that more clearly reflects only clinical efficiency, I have used what managerial accountants would call a “standard cost” approach.  This involves multiplying units of utilization of different procedures by a standardized dollar amount for each procedure.  The result is a utilization measure that is weighted by the economic value of each unit of utilization, with the effects of fee variation removed.   By using standard cost PMPM metrics, with proper risk-adjustment, the health plan can compare the clinical efficiency of different providers in its network on a more level playing field from the perspective of the individual clinician, who usually does not feel responsible for negotiating higher or lower fees.
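A minimal sketch of this weighting, using the MRI/chest x-ray example from above. The standard cost figures are hypothetical stand-ins for the average fee paid per procedure code in a prior period:

```python
# Hypothetical standard costs: the average fee paid per procedure code
# across the network in a prior period (illustrative numbers).
STANDARD_COST = {"chest_xray": 30.00, "mri": 450.00}

def standard_cost_pmpm(utilization, member_months):
    """Utilization weighted by standard cost, per member per month,
    so fee variation among providers drops out of the comparison."""
    total = sum(STANDARD_COST[code] * units
                for code, units in utilization.items())
    return total / member_months

# Three MRIs plus one chest x-ray: dropping the x-ray removes only
# 30 / 1380 (about 2%) of the standard cost, not 25% of the "units."
before = standard_cost_pmpm({"mri": 3, "chest_xray": 1}, member_months=100)
after = standard_cost_pmpm({"mri": 3}, member_months=100)
```

The raw unit count falls by 25% in this example while the standard-cost measure falls by about 2%, which is why the weighted metric is the more honest gauge of clinical efficiency.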

In a sense, Novant’s PNR approach represents the inverse of health plans’ standard cost PMPM metrics. Just as a health plan can use standard cost to remove the variation in reimbursement of different providers, Novant is using PNR to remove the variation in reimbursement from different payers.  As such, I would suggest that a more generic term for PNR would be “standard revenue.”  After all, one organization’s cost is another organization’s revenue.

Is Novant’s Payer Neutral Revenue really neutral?

But there is one difference between Novant’s PNR approach and the standard cost approach that I typically use.  I typically set the standard cost for each procedure to the average fee paid for that procedure code during a prior time period.  That way, the standard cost PMPM metrics are measured in dollars, and overall standard cost will roughly equal overall actual cost.  There is no upward or downward bias in standard cost vs. actual cost.

But, Novant’s PNR approach does not try to be “neutral” in this sense.  They picked Medicare as the basis of their standard, and Medicare generally pays substantially less than private health plans.  I presume Novant did this to create a sense of urgency.  They use Payer Neutral Revenue as the banner over their initiatives to reduce cost and unnecessary health care utilization, including their efforts to create a data-driven culture and improve their analytic capabilities, and I assume they are trying to emphasize the bottom-line impact of those efforts.  Their current financials are relatively rosy, based largely on the still-available opportunity to shift costs from public to private payers.  By showing their financials under the more pessimistic future scenario in which all reimbursement is at Medicare levels and cost-shifting is no longer possible, they are apparently trying to convey a greater sense of urgency.  They want people’s heads to be in the future, so they are more motivated to begin preparations and make difficult choices today.

Read More

What makes teams effective? (2) Achieving the Piranha Club state

Back in the mid-1990s, when I was working in the Center for Clinical Effectiveness at the Henry Ford Health System, I had the privilege and joy of being part of a team of smart, creative, passionate people working to transform health care.  The team included a core staff and an extended collection of collaborators from various clinical and administrative departments within Henry Ford and the Health Alliance Plan.  The extended team also included collaborators from outside the institution, including the American Group Practice Association (now called the American Medical Group Association) and other physician organizations around the country that shared the vision of population management, quality and outcomes measurement, evidence-based medicine, and patient-centeredness — the same concepts that are at the heart of the current wave of health care transformation efforts.

The thing that I found so refreshing and stimulating about working with that team was the intensity of the intellectual discourse.  Everyone passionately debated everything, but nobody’s feelings seemed to get hurt too badly.  The ideas that survived tended to be good ones.  One of the members of that team was Yoel Robens-Paradise, a young project manager.  He captured the essence of this debate society culture in a satirical poem that he wrote in December of 1994.


Based on this poem, we coined the term “Piranha Club” to describe the rare but excellent state of a team in which “everyone is feisty, but no one is mean.”  I’ve been actively working to achieve the Piranha Club state in every team I have been a part of ever since, including research teams, information technology development teams, clinical program design teams, consulting teams, and business leadership teams.  Sometimes I succeed.  Sometimes, not so much.

I am convinced that the secret to achieving the Piranha Club state is for every single member of the team to have two things.  First, each team member must start with the confidence to know that they belong there. They can’t be worrying that other people might be questioning their competence or their integrity.  The slightest crack in this confidence will lead that team member to be defensive.  They will interpret negative feedback not as a critique of their idea, but rather as an attack on them personally.  If one person gets defensive, it can spread to the rest of the team like a bad cold.  To achieve this level of confidence and drive out defensiveness, members of successful Piranha Club teams offer each other constant reinforcement, showing respect and commenting on positive things.  Non-defensive, confident team members become skilled in the art of holding their ideas out in front of themselves, rather than keeping them close to their chests.  They place their ideas on the meeting table, treating them as inanimate objects rather than as parts of their own identities.

The second requirement to achieving the Piranha Club state is for every single member of the team to be skilled in offering criticism in a constructive, “loving” way. This can reduce the chance that the critique will be received with defensiveness.   The right choice of language can go a long way in this regard.  In a Piranha Club, the team members say “this design has problem A” rather than “you are wrong.”  Rather than framing a question in black and white terms, they describe a spectrum framed in such a way as to make it easy for everyone to agree that both extremes are undesirable, and that the debate is merely fine tuning.  They go out of their way to mix negative feedback with some positive feedback.  And, most importantly, they use humor.  Almost every Piranha Club team has the habit of cracking jokes.  In the tradition of the court jester who can tell the King what the King needs to hear without getting thrown into the dungeon, Piranha Club teams naturally use satire, silliness and humor to lighten up and sweeten up negative feedback that can otherwise be too heavy and bitter.  In fact, Yoel’s poem is an example of how the original Piranha Club even joked about itself.

More recently, when I was at Blue Cross Blue Shield of Michigan, I tried to promote this Piranha Club state by giving out Piranha Club coffee mugs.

Mugs given to members of Piranha Club

These were awarded to members of the epidemiology, biostatistics, medical informatics and clinical program design teams.  To earn a Piranha Club mug, a team member must have demonstrated that they were able to take an intellectual beating with grace, or that they were able to rip someone else’s ideas to shreds in a loving way.  Every time a mug was awarded, there was an opportunity to explain the Piranha Club concept to newcomers and to reinforce the concept for people who had heard it many times before.  I was truly amazed at how important a $6 coffee mug was to people.  In some strange way, it seemed as if some of the team members were eager to make a presentation, and eager to have others criticize their work, so they could show how bravely and graciously they could receive the criticism to earn the mug and become an official member.  It became almost a rite of passage.

And, for some reason, coffee just seems to taste better in a Piranha Club mug.

Read More