Looking back over the last 200 years, noticing progress to gain the energy to keep moving forward

I choose to be an optimist.  I can visualize a health care system that is better than the one we have. A system where we function as a coordinated team and learn from day-to-day care, enabled by analytic methods and information technology designed to make that happen. A system where we can measure improvement in patient experience and outcomes. A system that is efficient enough to make great care affordable and accessible to everyone.

But, I’ve been fighting the fight for long enough now that I can’t help but notice how little progress we’ve made. Whenever I come across work I did 20 years ago, I am struck by the fact that I could write the same thing today and it would still be applicable. I might have to use my word processor to do a search-and-replace to swap old buzzwords for new ones. But, we’re still struggling with basically the same barriers to real process transformation and still debating the same issues. It is valuable to face that reality, because the gap between expectation and reality is a source of “creative tension” that can be motivating. But, facing that reality can also be demoralizing if the gap looks more like a chasm that can’t be crossed.

It is in this context that I viewed an excellent 45-minute video created by the New England Journal of Medicine in celebration of the 200th birthday of the Journal in January, 2012. The video is entitled “Getting Better: 200 Years of Medicine.” I found it energizing to see how far we’ve come as a field, making a profound positive impact on human life — using the examples of surgery, chemotherapy, and AIDS treatment. It also exposes how long it took for changes to be accepted and adopted, before they eventually became standards of care. Maybe by the 250th anniversary presentation (shown on the holodeck?), the NEJM will celebrate breakthroughs in cost-effectiveness analysis, outcomes measurement, care coordination, team care planning, clinical process management, and patient-centered primary care.

http://nejm200.nejm.org/explore/medical-documentary-video/?emp=marcom&query=NEW

Simon Sinek’s TED talk: Change minds by starting with why, not what

Improving health care requires convincing people to make changes. Changing a care process begins with changing the minds of the people involved in that process.

But, how can we change people’s minds?

I am a person who loves logic and numbers. Therefore, my tendency is to assume that the best way to convince people to make improvements to care processes is to clearly explain the logic supporting the change and to use rigorous, transparent and evidence-based quantitative projections of the outcomes that can be achieved by making the improvements.

But, experience teaches that solid logic and analysis do not always compel action. Often, they fail even to capture attention.

Kevin Fickenscher, the incoming CEO of the American Medical Informatics Association, included in his blog a link to an excellent TED talk by Simon Sinek, the author of the book “Start With Why.”  Sinek argues that ineffective people first explain what they are proposing, then explain how it can be done, and finally explain why the change should be made.  Effective people, Sinek explains, structure their communications in the exact opposite order.  They first explain the why: their consistent mission or objective.  Then, they explain how they carry out that mission.  Finally, they explain the what: a specific offering that people can select if they identify with that mission.  Sinek believes that people will decide to “buy” a particular change if they see it as a way to define themselves as part of a compelling, attractive mission.  That last point is totally consistent with my experience: if you hire people who are mission-driven and allow them to focus their professional attention on that mission, they will be far more productive and effective.  And, they will inspire others to do the same.

Sinek is a talented lecturer, as are almost all the people invited to give TED talks.  Although he sometimes seems a bit too sure of himself for a topic as subjective as human behavior, he nevertheless provides excellent food for thought for mission-driven people involved in health care improvement.

http://www.ted.com/talks/lang/en/simon_sinek_how_great_leaders_inspire_action.html

AHRQ guidance: Forget about proving Medical Home effectiveness with small pilot studies attempting to measure all-patient cost savings

The vast majority of already-published and currently-underway studies of the effectiveness of the Patient Centered Medical Home (PCMH) model of care are pilot studies with fewer than 50 PCMH practices.  Most of these studies report or intend to report reductions in hospitalization rates and savings across the entire population of patients served by the PCMH practices.  New  guidance from the Federal Government calls the value of such reports into question.

The AHRQ recently released an excellent pair of white papers offering guidance on the proper evaluation of PCMH initiatives.  The first is a four-page overview intended for decisionmakers, entitled “Improving Evaluations of the Medical Home.”   The second is a 56-page document that goes into more detail, entitled “Building the Evidence Base for the Medical Home: What Sample and Sample Size Do Studies Need?”  The white papers were prepared by Deborah Peikes, Stacy Dale and Eric Lundquist of Mathematica Policy Research and Janice Genevro and David Meyers from the AHRQ.

The white papers emphasize a number of key points:

  • Base evaluation plans on plausible estimates of the effects of PCMH.  Based on a review of the evidence so far, the white papers suggest that a successful program could plausibly hope to reduce costs or hospitalizations, on average, by 15 percent for chronically ill patients and 5 percent for all patients.
  • Use a proper concurrent comparison group, rather than doing a “pre-post” analysis. Pre-post analyses, although common, are inconclusive because they can easily be confounded by other factors changing during the same time period, such as economic conditions, health care policy changes, advances in technology, etc.
  • Focus on evaluating a large number of practices, rather than a large number of patients per practice.  The authors point out that “a study with 100 practices and 20 patients per practice has much greater power than a study of 20 practices with 100 patients each.”  They warn that small pilot studies with 20 or fewer practices are unlikely to produce rigorous results unless their results are combined with those of many other small studies conducted using the same metrics.  Such pilot studies, which unfortunately are very common, are really only useful for generating hypotheses, not for drawing conclusions.  The authors note that neither positive nor negative results of such small studies should be relied upon, since a small PCMH study can show no significant impact simply because it lacked the power to detect one.
  • Focus on evaluating health and economic outcomes in subsets of patients, such as those with chronic disease.  Satisfaction can be evaluated across the entire population, but if you use data for the entire population to measure hospitalizations, emergency department visits, inpatient days, or health care costs, the lower-risk portions of the population contribute noise that obscures the measurement of an effect occurring primarily among those most likely to experience such events in the first place.
  • Use statistical methods that account for “clustering” at the practice level, rather than treating individual patients as the unit of analysis.  Since the intervention is intended to change processes at the practice level, the individual patients within a practice are not independent of one another.  Clustering must be taken into account not only at the end of the study, when evaluating the data, but also at the beginning, when determining the number of practices and patients to sample.  For example, if a study includes a total of 20,000 patients, but the patients are clustered within 20 practices, then the effective sample size is only about 1,820, assuming patient outcomes are moderately clustered within practices (the sketch after this list works through the arithmetic).  Statistical methods that treat such patients as independent implicitly treat the sample size as 20,000 rather than 1,820.  As a result, evaluators making that assumption dramatically over-estimate their power to detect the effect of the PCMH transformation.  If they adjust for clustering at the end, their findings are likely to show a lack of a significant effect, even if the PCMH program really worked.  On the other hand, if they don’t adjust for clustering at the end, there is a great risk of reporting false-positive findings.  For example, in a PCMH study with 20 practices and 1,500 patients per practice, where the analysis was done without adjusting for clustering and found a positive result, there is a 60% chance that the positive result is false, based on typical assumptions.
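
To make the clustering arithmetic concrete, here is a minimal sketch of the standard design-effect calculation, written in Python.  The intraclass correlation (ICC) of 0.01 is my own illustrative assumption for “moderately clustered” outcomes, not a parameter quoted from the white papers, though it reproduces the 20,000-to-1,820 example above.

```python
# Minimal sketch: effective sample size under practice-level clustering.
# The design effect is 1 + (m - 1) * ICC, where m is patients per practice.
# ICC = 0.01 is an illustrative assumption, not a figure from the AHRQ papers.

def effective_sample_size(practices: int, patients_per_practice: int,
                          icc: float) -> float:
    """Total N divided by the design effect."""
    design_effect = 1 + (patients_per_practice - 1) * icc
    return practices * patients_per_practice / design_effect

# 20 practices x 1,000 patients: 20,000 patients carry the statistical
# weight of only ~1,820 independent observations.
print(effective_sample_size(20, 1000, icc=0.01))   # ~1819.8

# The same arithmetic shows why many practices beat many patients per
# practice for a fixed total of 2,000 patients:
print(effective_sample_size(100, 20, icc=0.01))    # ~1680.7
print(effective_sample_size(20, 100, icc=0.01))    # ~1005.0
```
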
These recommendations are based not only on the experience of the authors, but also on modeling that they did to explore the implications of different study scenarios with different numbers of patients, intervention practices, control practices, and measures of interest.  These models calculate the minimum detectable effect (MDE) from assumptions about typical characteristics of the patient populations and practices and about the plausible effects of the PCMH program, drawn from a review of prior studies and the authors’ experience.  The models illustrate that, when measuring the impact of PCMH on costs or hospitalization rates for all the patients receiving PCMH care, the MDE drops as the number of practices in the PCMH intervention group increases.  But, even with 500 PCMH practices, such studies cannot detect the 5% cost or hospitalization reduction that the authors consider to be the plausible impact of PCMH on the entire population.

The authors re-ran the models, assuming that the measure of cost and hospitalization would consider only the sub-population of patients with chronic diseases.

The model showed that, based on reasonable assumptions, at least 35 PCMH practices, plus an equivalent number of concurrent comparison practices, would be required to detect the 15% effect that the literature suggests is the plausible effect of PCMH on cost and hospitalizations among patients with chronic diseases.  Even when focusing on the chronic disease sub-population, a pilot evaluation with only 10 PCMH practices and 10 comparison practices could not detect an effect smaller than 30%, an effect size they considered implausible.
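
For intuition about how the minimum detectable effect shrinks as practices are added, here is a rough sketch of a textbook MDE calculation for a two-arm, cluster-randomized comparison of mean costs.  Every parameter value here (ICC, coefficient of variation of cost, patients per practice, significance level, and power) is an illustrative assumption of mine, not a value taken from the white papers, so the numbers only roughly echo the results described above.

```python
# Rough sketch: minimum detectable effect (MDE) for a two-arm study with
# clustering at the practice level. MDE = (z_alpha + z_power) * SE of the
# difference in means, expressed as a fraction of mean cost via the
# coefficient of variation (CV). All parameter values are illustrative
# assumptions, not the ones used in the AHRQ white papers.
from math import sqrt

def mde_fraction(practices_per_arm: int, patients_per_practice: int,
                 icc: float = 0.01, cv: float = 1.2,
                 z_alpha: float = 1.96, z_power: float = 0.84) -> float:
    """MDE as a fraction of the mean outcome."""
    design_effect = 1 + (patients_per_practice - 1) * icc
    n_per_arm = practices_per_arm * patients_per_practice
    se_fraction = cv * sqrt(2 * design_effect / n_per_arm)
    return (z_alpha + z_power) * se_fraction

# Chronic-disease subgroup with ~50 such patients per practice: the MDE
# falls from roughly 26% with 10 practices per arm to roughly 14% with 35,
# loosely echoing the ~30% and ~15% figures in the text.
for practices in (10, 35, 100):
    print(practices, round(100 * mde_fraction(practices, 50), 1), "%")
```

Under these assumptions, adding practices shrinks the MDE far faster than adding patients within practices, which is the heart of the authors’ sampling advice.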

I found this modeling exercise to be very informative and very worrisome, given the large number of pilot studies underway that are unlikely to provide conclusive results and the risk that people will try to draw incorrect conclusions when those results become available.  Often, health care leaders find these calculations inconvenient and frustrating, as if the bearers of this mathematical news are being overly rigorous and “academic.”

Note that these concepts and conclusions are applicable not only to evaluations of PCMH, but also to evaluations of other programs intended to improve processes or capabilities at the level of a physician’s practice, a clinic, or a physician organization, such as health information technology investments, training staff in Lean methods, or implementing gain-sharing or other performance-based incentives.

Links to CMS ACO Proposed Rules and Dr. Berwick’s associated NEJM comments

After much delay, the Federal Government has finally released the proposed rules for the CMS Accountable Care Organization (ACO) program, along with Dr. Berwick’s associated comments in the NEJM.

Looks like it will be a long weekend curled up with some long documents!

Atul Gawande’s articles in the New Yorker most relevant to ACOs

Atul Gawande is a general surgeon at Brigham and Women’s Hospital in Boston, and is an amazingly compelling writer about health care issues.  He has written a series of articles in the New Yorker, blending stories of individual patients and their care providers with a larger scientific and health policy context.  Some of the most relevant of these articles for Accountable Care Organizations (ACOs) include:

December, 2007: “The Checklist: If something so simple can transform intensive care, what else can it do?” In this article, Gawande explains the work of Peter Pronovost, MD, a critical care specialist at Johns Hopkins who used simple checklists and an associated process to empower nurses and create a quality culture in hospital ICUs, dramatically reducing complications from central lines and ventilators, first at his own hospital, then throughout Michigan in the Michigan Health and Hospital Association’s Keystone Project (with funding from Blue Cross Blue Shield of Michigan).  The article laments the resistance to national implementation of the checklist approach (a resistance that has at least partially been overcome in the three years since the article was published).  Pronovost argues that the science of health care delivery should be emphasized and funded as much as the science of disease biology and therapeutics.

August, 2010: “Letting Go: What should medicine do when it can’t save your life?” In this article, Gawande describes the cultural and psychological barriers that make it difficult for patients, family members, and doctors to prepare for good end-of-life decision-making.  He reports the success of hospice programs, end-of-life telephonic care management programs, and programs to encourage advance directives.

January, 2011: “The Hot Spotters: Can we lower medical costs by giving the neediest patients better care?” In this article, Gawande enthusiastically describes the work of Jeffrey Brenner, MD, and his “Camden Coalition” in Camden, NJ, and of Rushika Fernandopulle, MD, in Atlantic City to develop intensive patient-centered care for high-risk patients, as well as the analytics of Verisk Health focused on predictive modeling for high-risk patients.  The article includes some encouraging pre-post study results from these programs, but acknowledges the risk that the results could be biased by the “regression to the mean” effect: a cohort of patients selected specifically because of recent high health care utilization can be expected to have lower utilization in a subsequent time period even without any intervention.  The article also points out the resistance to change in health care, evidenced by Brenner’s inability to get state legislative approval to bring his program to Medicaid patients.

Additional biographical information about Gawande, as well as a complete list of his articles in the New Yorker, is available here.

Reference for Joint Principles of the Patient Centered Medical Home, 2007

Here is the original reference to the February 2007, three-page consensus document that describes the Joint Principles of the Patient Centered Medical Home.  It was created by the four professional societies focused on primary care: the American Academy of Family Physicians (AAFP), the American Academy of Pediatrics (AAP), the American College of Physicians (ACP), and the American Osteopathic Association (AOA).

Very simple and straightforward.

http://www.pcpcc.net/content/joint-principles-patient-centered-medical-home

Harold Miller’s Paper: How to Create Accountable Care Organizations

This is probably the single most useful reference regarding Accountable Care Organizations, outlining alternative ways of conceptualizing them and giving a balanced explanation of the pros and cons of alternative models, including structure and reimbursement.  David Share, MD, from BCBSM was involved, and the document uses BCBSM’s Physician Group Incentive Program as a case study.

The web reference is: http://www.chqpr.org/downloads/HowtoCreateAccountableCareOrganizations.pdf

Classic Papers before 2000

Note:  The following are our favorite “classic” references from before 2000.  Looking through this list, we find that most have really stood the test of time, and it is amazing to see how long some of these ideas have been around, despite our field’s relative lack of progress in implementing them.  We’ll need to do a little work to add more recent references.

Care Management and Clinical Practice Improvement References

Weinstein MC, Stason WB. Foundations of Cost-effectiveness Analysis for Health and Medical Practices. N Engl J Med 1977;296:716-721.

Gold MR, Siegel JE, Russell LB, Weinstein MC. Cost-effectiveness in Health and Medicine. New York, Oxford University Press, 1996.

Donabedian A. The Price of Quality and the Perplexities of Care. 1986 Michael M. Davis Lecture, sponsored by The Center for Health Administration Studies, Graduate School of Business, University of Chicago.

Wennberg J, Gittelsohn A. Small Area Variation in Health Care Delivery. Science 1973;182:1102-1108.

Wennberg JE, Freeman JL, Culp WJ. Are Hospital Services Rationed in New Haven or Overutilized in Boston? Lancet 1987;1:1185-1188.

Chassin MR, et al. Variations in the Use of Medical and Surgical Services by the Medicare Population. N Engl J Med 1986; 314:285-290.

Manus DA, Werner TR, Strub RJ. Using Measurement and Feedback to Reduce Health Care Costs and Modify Physician Practice Patterns. Quality Management in Health Care 1994; 2(2):48-60.

Berwick DM, Coltin KL. Feedback Reduces Test Use in a Health Maintenance Organization. JAMA 1986; 255:1450-1454.

Ellwood PM. Outcomes Management: A Technology of Patient Experience. N Engl J Med 1988;318:1549-1556.

Nelson EC, Mohr JJ, Batalden PB, Plume SK. Improving Health Care, Part 1: The Clinical Value Compass. J Quality Improv 1996;22:243-258.

Eddy DM. Practice Policies and Guidelines: What Are They? JAMA 1990; 263:877-878,880.

Eddy DM. Practice Policies: Where Do They Come From? JAMA 1990;263:1265-1275.

Gottlieb LK, Sokol HN, Murrey KO, Schoenbaum SC. Algorithm-based Clinical Quality Improvement, Clinical Guidelines and Continuous Quality Improvement. HMO Practice 1991; 6:5-12.

Eddy DM. Guidelines for Policy Statements: The Explicit Approach. JAMA 1990; 263:2239-2240,2243.

Eddy DM. Comparing Benefits and Harms: The Balance Sheet. JAMA 1990;263:2493-2505.

Deming WE. Out of the Crisis. Cambridge, MA: MIT Center for Advanced Engineering Study, 1986.

Crosby PB. Quality is Free: The Art of Making Quality Certain. New York: New American Library, 1980.

Juran JM, ed. Quality Control Handbook. 3rd ed. New York: McGraw-Hill, 1979.

Ishikawa K. Guide to Quality Control. White Plains, NY: Kraus International Publications, 1982.

Berwick DM. Continuous Improvement as an Ideal in Health Care. N Engl J Med 1989;320:53-56.

Kuperman G, James B, Jacobsen J, Gardner RM. Continuous Quality Improvement Applied to Medical Care. Med Decision Making 1991;11(suppl):S60-S65.

Laffel G, Blumenthal D. The Case for Using Industrial Quality Management Science in Health Care Organizations. JAMA 1989;262(20):2869-2873.

Kritchevsky SB, Simmons BP. Continuous Quality Improvement, Concepts and Applications for Physician Care. JAMA 1991;266(13):1817-1823.

Avorn J, Soumerai SB. Improving Drug-therapy Decisions Through Educational Outreach: A Randomized Controlled Trial of Academically Based “Detailing.” N Engl J Med 1983;308:1457-1463.

Grimshaw J, Russell IT. Effect of Clinical Guidelines on Medical Practice: A Systematic Review of Rigorous Evaluations. Lancet 1993; 342:1317-1322.

Davis DA, Thomson MA, Oxman AD, Haynes RB. Changing Physician Performance: A Systematic Review of the Effect of Continuing Medical Education Strategies. JAMA 1995;274(9):700-705.

U.S. Preventive Services Task Force. Guide to Clinical Preventive Services: An Assessment of the Effectiveness of 169 Interventions. Baltimore: Williams and Wilkins, 1989.

Baker AM, McCarthy BD, Gurley VF, Ulcickas Yood M. Influenza immunization in a managed care organization: A randomized, controlled trial and cost-effectiveness analysis of computerized mailed reminders. J Gen Intern Med 1998; In press.

Rosenfeld RM, Post JC. Meta-analysis of Antibiotics for the Treatment of Otitis Media with Effusion. Otolaryngol Head Neck Surg 1992;106(4):378-386.

Committee on Performance Measurement. HEDIS 3.0: Health Plan Employer Data and Information Set. Washington, DC: National Committee for Quality Assurance, 1997.

National Heart, Lung, and Blood Institute. Guidelines for the Diagnosis and Management of Asthma: Expert Panel Report 2. Washington, DC: National Institutes of Health, 1997. NIH Publication No. 97-4051.

Agency for Health Care Policy and Research. Smoking Cessation Clinical Practice Guideline. Rockville, MD: U.S. Department of Health and Human Services, 1996. AHCPR Publication No. 96-0694.

Kerlikowske K, Grady D, Rubin SM, Sandrock C, Ernster VL. Efficacy of Screening Mammography: A Meta-analysis. JAMA 1995;273(2):149-154.

Berwick DM. A Primer on Leading the Improvement of Systems. BMJ 1996;312:619-622.
