Why “case load” is not a good metric for case management productivity or intensity

As shown in the following graphic recently published by the Healthcare Intelligence Network, it has become common practice to use “case load” as a metric for the productivity of nurses in case management programs or as a measure of the intensity of a case management intervention.  More broadly, case load has been used for these purposes across many wellness and care management interventions, including chronic disease management, high risk care coordination, wellness coaching, care transition coordination, and other types of programs involving nurses, physician assistants, nutritionists, social workers, and physicians.

HIN Study Results - Case Manager Monthly Case Load

If everyone is doing it, it must be right.  Right?

In my opinion, case load is almost always the wrong measure for assessing productivity or intervention intensity.  The graphic above indicates that 38% of case management programs have a case load between 50 and 99. Let’s say that one of those case management programs told you that they had a case load of 52.  That literally means that the average nurse in that program has a list of 52 patients (on average) who are somewhere between their enrollment date and their discharge date for the program.  That could mean 52 new patients every day for an intervention consisting of a single 5-minute telephone call.  Or, it could mean 1 new patient each week for a year-long intervention involving an extensive up-front assessment and twice-monthly hour-long coaching calls.  Or, it could mean 1 new patient each week for a year-long intervention consisting of a 20-minute enrollment call and a 20-minute check-up call one year later.  In that context, if I tell you that my case management program has an average case load of 52, how much do you really know?

Some case management programs try to fix this problem by creating an “acuity-adjusted” or “case-mix-adjusted” measure of case load.  In such a scheme, easier cases are counted as a fraction of a case, and more demanding cases are counted as more than one case.  Such an approach requires some type of a point system to rate the difficulty of the case.  Some case management vendors charge a fee for each “case day,” with higher fees associated with “complex” cases, and lower fees for “non-complex” cases.  You can imagine how the financial incentives associated with the case definition can affect the assessment, and how reluctant the case management vendor would be to discharge cases, cutting off the most profitable days in the tail-end of a case when the work is light.

But these “adjustments” are missing the fundamental point that case load is a “stock” measure, while the thing you are trying to measure is really a “flow.”  A stock is something that you count at a point in time, like the balance of your checking account or the number of gallons of gas left in your tank.  A flow is something that you count over a period of time, like your monthly expenses or the number of gallons per hour flowing through a hose to fill up your swimming pool.  The work output delivered by a case management nurse is something that can only be understood over a period of time.

In my experience, the best approach to measuring case management productivity and intensity is to follow “cohorts” of patients who became engaged in the intervention during a particular period of time to track how many minutes of work are done for each period of time relative to the engagement month, as shown below.

 

This graph shows that, during the calendar month in which a patient became engaged, an average of 50 minutes of nursing effort was required.  For individual patients, the nursing effort could, of course, be higher or lower.  Some patients require more time to assess.  Some patients may have become engaged at the end of the calendar month.  Some patients may drop out of the program after the first encounter.  But, on the basis of a cohort of engaged patients, the average was 50 minutes.  In subsequent months, the average minutes of nursing effort typically decrease as the initial enrollment, assessment, and care-planning effort dies down.  Over time, depending on the intended design of the intervention, the effort falls off as most patients have either been discharged by the nurse, have chosen to drop out, or have been lost to follow-up.  This graph is a description of the intensity of the case management intervention.  In this example graph, the cumulative nursing time over the first 16 months relative to the month of engagement is 234 minutes, which could serve as a summary measure of intervention intensity.  If nursing minutes data are not available, some “work-driver” statistics could be substituted, such as the number of face-to-face or telephonic encounters of various types.  These could be converted to minutes based on measured or estimated average minutes of effort for each of the statistical units.
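The cohort bookkeeping described above can be sketched in a few lines of Python. This is a hypothetical illustration, not a reference to any particular system; the record layout and the numbers are invented:

```python
from collections import defaultdict

# Hypothetical encounter records: (patient_id, engagement_month, encounter_month, minutes).
# Months are encoded as integers so that month offsets are simple subtraction.
encounters = [
    ("p1", 0, 0, 50), ("p1", 0, 1, 30),
    ("p2", 0, 0, 40), ("p2", 0, 2, 20),
    ("p3", 1, 1, 60), ("p3", 1, 2, 25),
]

def intensity_curve(encounters, cohort_month):
    """Average minutes of effort per engaged patient, by month offset from engagement."""
    minutes_by_offset = defaultdict(int)
    patients = set()
    for pid, engaged, month, minutes in encounters:
        if engaged == cohort_month:
            patients.add(pid)
            minutes_by_offset[month - engaged] += minutes
    n = len(patients)
    return {offset: total / n for offset, total in sorted(minutes_by_offset.items())}

curve = intensity_curve(encounters, cohort_month=0)
# curve[0] is the average minutes during the engagement month itself;
# summing the curve values gives a cumulative-intensity summary for the cohort.
```

The key design choice is that every statistic is indexed by the offset from the engagement month, not by calendar month, so cohorts engaged at different times can be compared on the same timeline.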

This type of graph can be created for different engagement months and compared over time to determine if the intervention intensity is changing over time.  It can be created for one nurse and compared to all other nurses to determine if the intervention process delivered by that nurse appears to be similar or different than other nurses.

Then, to assess productivity, the total number of nursing minutes can be measured, and compared to the expected number of minutes for cases of the same type and the same mix of “month numbers” in the case intensity timeline.
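That productivity comparison can also be sketched briefly. The intensity-curve values and the panel mix below are invented for illustration:

```python
# Hypothetical program-wide intensity curve: expected minutes of effort
# for a case at each "month number" (offset from engagement).
expected_minutes = {0: 50, 1: 25, 2: 15, 3: 10}

def expected_workload(panel_month_numbers, curve):
    """Expected minutes for a panel, given each case's month offset from engagement."""
    return sum(curve.get(m, 0) for m in panel_month_numbers)

panel = [0, 0, 1, 2, 3, 3]          # six cases at various points in the timeline
expected = expected_workload(panel, expected_minutes)  # 50+50+25+15+10+10 = 160
actual = 140                         # minutes this nurse actually logged
productivity_ratio = actual / expected
```

A ratio near 1.0 means the nurse's effort matches what the program-wide curve predicts for that case mix; a large deviation flags either a productivity difference or a panel that differs from the program norm.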

The implication of this for Accountable Care Organizations is that information systems to support wellness and care management should be designed to explicitly capture the engagement date and the intended type of wellness or care management intervention in which the patient is becoming engaged. Such systems should also capture the discharge dates, statistics about the quantity and types of wellness or care management services delivered to engaged patients, and preferably the number of minutes of effort required for each of those services.

Some people may object to this approach because it implies that the patient is being subjected to a “cook-book” intervention that fails to take into account the uniqueness of each patient.  And, they argue that they cannot specify up front the type of wellness or care management program in which the patient is becoming engaged, because they have not yet completed an assessment.  But, I would argue that nothing in this approach assumes that each patient is treated the same.  This approach merely looks at a population of patients receiving a particular intended type of intervention program.  Although each patient may follow a different path and receive a different mix and quantity of case management services, the overall mix and timing of services for a population of patients can be assessed.  If this mix and timing is not generally constant over time, then you are dealing with a process that is not in control, a problem that must be solved before meaningful program evaluation of any type can be done.

 


Why many clinical leaders prefer Registries to EHRs: Actively Structured Information

I went to the huge HIMSS’11 convention in Orlando a few weeks ago.  When visiting the bewildering number of vendor booths, I focused on a few key questions relevant to the challenges facing physicians and hospitals working to establish successful Accountable Care Organizations (ACOs):

1) What is the current state of clinical data exchange between Electronic Health Record (EHR) systems primarily used for acute hospital care vs. ambulatory care?

2) Do leading EHR systems have useful “registry” capabilities?

Nuance Booth at HIMSS

 

I have been hearing from many clinical leaders that the most popular EHR systems still do not support real clinical integration across settings, and are still weak in terms of the “population management” and clinical process improvement features that are present in existing chronic disease registry applications.  I was hoping to learn that these concerns were really based on a lack of familiarity with how to configure and use EHR systems, rather than any inadequacies of the systems themselves.  After all, clinical message and clinical document standards have been around for years, and population management features were prominently identified in recent Meaningful Use incentives.

The verdict?

Some progress has been made in leading EHR systems, but clinical integration across settings and population management are still major weaknesses.

But why?

It turns out that both problems are related to the same underlying issue: failure to appreciate the importance of “actively structured” information.

Health Information Technology (HIT) leaders have for decades focused primarily on a vision of computerizing health information, going paperless, and improving the access to that information by many members of the clinical team across settings.  To reduce the burden of capturing information that can be displayed on a computer screen, HIT vendors have developed document imaging, optical character recognition, and voice recognition technology.  And, they have created clinical documentation systems that allow the user to enter simple codes to insert blocks of text into their notes quickly.  Over the last 15 years, most HIT leaders have recognized that there is some value in “structured data” — health information in the form of codes that computers can interpret rather than just display.  In response to this recognition, EHR vendors have added “template charting” features that allow users to do exception editing of some coded values in notes.  They have developed sophisticated systems to interpret the meaning of textual information and convert that text into structured data.  And, they have created messaging standards that allow orders, clinical documents (notes) and continuity of care information to be shared across systems and care providers.

But, the problem that remains is that the structured data being captured and exchanged is “passively structured.”  Although the current crop of EHR systems may allow some of the health information to be represented with codes, the EHR tools and the care process itself do not assure the capture and transmission of the particular pieces of information required to support particular downstream uses of the data, such as reminders, alerts, quality and performance metrics, and comparative effectiveness studies.  The clinical leaders who are focused on improving care processes, and the health services researchers who are trying to measure performance and comparative effectiveness, are well aware that missing data is crippling to the usefulness of passively structured data.  The real care process improvement and performance and effectiveness measurement efforts of Accountable Care Organizations require “actively structured” health information.   To assure the completeness of the information, they use “registry” applications, questionnaires, and data collection forms.   And, they transmit this information in formats that rigorously enforce the meaning and completeness of the structured information.  The objective is not to capture as much structured data as possible.  The objective is to assure the accuracy and completeness of the particular data elements that you are relying on for specific, strategically important purposes.
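One way to picture the difference: passively structured capture accepts whatever codes happen to appear in a note, while actively structured capture refuses a record that lacks the fields a downstream metric depends on. A minimal sketch, with a hypothetical set of required fields:

```python
# Hypothetical "actively structured" form schema: the fields that downstream
# reminders and quality metrics depend on must be present and non-empty.
REQUIRED_FIELDS = {"patient_id", "encounter_date", "smoking_status", "hba1c"}

def validate_record(record: dict) -> list:
    """Return the sorted list of required fields that are missing or empty."""
    return sorted(f for f in REQUIRED_FIELDS if not record.get(f))

# A passively structured record: a note with whatever codes happened to be captured.
passively_structured = {"patient_id": "p1", "note_text": "Pt doing well. A1c ok."}

# An actively structured record: the form cannot be submitted without these fields.
actively_structured = {"patient_id": "p1", "encounter_date": "2011-03-01",
                       "smoking_status": "former", "hba1c": 6.8}

missing = validate_record(passively_structured)  # fields a metric would choke on
```

The point is not the validation code itself, but where it sits: enforcement at the moment of capture, rather than hoping the data can be reconstructed from text later.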

I believe that HIT professionals may be reluctant to make actively structured information a requirement because (1) they lack expertise and familiarity with the downstream uses of the data for care process improvement and measurement, and (2) they are concerned that the added burden of actively structured data capture on end-user clinicians may further worsen their problems with “clinical adoption” of their technology.   They are attracted to solutions that extract passively structured information from clinical notes, such as voice-recognition and “clinical language understanding” (CLU).  And they don’t realize that the big pile of passively structured data output by such solutions will fail to meet the requirements of Accountable Care Organizations.

To solve the problem, two main things are necessary:

  1. Refocus the design objectives of EHR products on the strategic goals of Accountable Care, rather than on the tactical goals of reducing medical records cost, “going paperless” or reducing security risks.  The biggest implication of this will be the integration of analytic and care process management functionality into EHRs, rather than viewing those as separate solutions to different problems.
  2. Create stronger organizational linkages between HIT professionals, the clinical leaders involved in care process improvement, and the epidemiologists and biostatisticians who have the expertise to measure provider performance and treatment effectiveness.  Many hospitals, physician organizations and nascent ACOs are attempting to do this by hiring “chief medical informatics officers” (CMIOs).  But that will only solve this problem if the new CMIOs can bridge between the HIT, process improvement and measurement disciplines.  If the CMIO role is viewed more narrowly as a liaison between HIT professionals and front-line clinician users, the focus will continue to be on unstructured or passively structured data to reduce barriers to HIT adoption.

 


Hans Rosling illustrates how to tell compelling stories with data

Hans Rosling is a doctor and researcher who noticed that the statistics people at the World Health Organization had compiled a rich database of information regarding the health and well-being of people in different countries over long periods of time. But, he also noticed that they were wasting this data by producing boring charts and graphs.  He worked with his son to develop some great “Trendalyzer” software tools to show time series data as an animation — a powerful graphical design technique that he combines with his engaging presentation style to tell truly compelling stories with data.

This video is of Dr. Rosling presenting to the Technology Entertainment and Design (“TED”) meeting in 2006.  I have used this video to train and inspire epidemiologists, statisticians and other analysts about telling stories with data and encouraging them to explore new tools and to recognize the difference between graphical design and graphical decoration. It has always been very well received.

Rosling has done additional talks at subsequent TED conferences (see list here), which are also worth watching — but somewhat redundant for the limited purpose of teaching design and storytelling approaches. I encourage you to explore other TED talks, available for free at TED.com. They are very inspirational.

A couple of years ago, Rosling’s software tools were purchased by Google, which has adapted the data analysis / animation tools and incorporated them as “motion charts” into Google’s web-based spreadsheet application, part of Google Docs, and also made them available as an open component through an API, as documented here. They are available for free, and are fun to use.


Video of ACO Breakfast at HIMSS’11 Meeting

The following is the link to a hand-held video from the “ACO Breakfast” event on February 21, 2011 at the annual Health Information Management Systems Society convention in Orlando.

http://www.healthcareitnews.com/video/aco-breakfast-himss11

The panel discussion was kicked off by Eric Dishman, Director of Health Innovation for Intel, and featured:

  • Aneesh Chopra, the Chief Technology Officer of the US Government, who explained the government’s approach to creating an “innovation management pipeline” using an Innovation Center, emphasizing interoperability to create opportunities for private innovators to create “apps” that connect to a common platform, and using Medicare and Medicaid as early adopters.  His approach sounded very much like a description of an “open source” software ecosystem, although he did not use that terminology in the proprietary vendor-dominated crowd.
  • John Mattison, MD, Chief Medical Informatics Officer for Kaiser Permanente.  I worked with John years ago when I was at Oceania and he was the Kaiser Permanente Southern California lead for KP’s EMR efforts at the time.  John emphasized the importance of working with primary care physicians, and proposed that the shortage of primary care physicians will be offset by the reduced need for specialists.  Presumably, he is theorizing that medical sub-specialists will morph their practices to emphasize primary care and deemphasize  procedures.
  • Steven Waldren, MD, Director of the Center for Health IT for the American Academy of Family Physicians.  I worked with Steve when I was at Blue Cross Blue Shield of Michigan to adapt the Continuity of Care Record (CCR) standard for use in care management and health plan member health records.  Steven emphasized the challenge of scaling EMR technology down to small independent primary care practices that lack IT infrastructure, and described clinicians’ disappointment with the inability of commercial EMR software to do chronic disease registry functions.
  • Michael Young, CEO of Grady Memorial Hospital.  Michael used the example of Kodak’s lack of success in transitioning from a profitable film-based photography business to the “next curve” of digital photography, despite having pioneered digital photography technology.  He used this example to explain the challenge facing health care leaders in moving from the current profitable fee-for-service volume-driven model to the next curve of performance and outcomes-based reimbursement.  He expressed some optimism by noting that the timeline for developing useful health care “apps” has decreased from 5 years to 5 days.

Learning from Manufacturers about Culture and Technology: The Case of PLM Systems

Product Lifecycle Management (PLM) systems are software tools that are used by product manufacturing companies to support the entire lifecycle of ideation, business planning, requirements, design, manufacturing, launch, and eventual retirement of products.

They are interesting to me for two reasons: (1) they are relevant to the development of software products, and (2) they share some important characteristics with health care information technology.   PLM systems, like electronic medical records (EMR), order entry, registries, care management and analytic systems used by Accountable Care Organizations (ACOs), are systems that are mission-critical to the company.  They involve complex information.  This complex information is created and maintained by business users with some technical capabilities.  This complex information must be communicated in ways that support collaboration by many different people from different professional disciplines.  They support processes that are constantly being changed and improved in ways that can create competitive advantage for the organization.   They support design and analysis, rather than just execution.

So, what can we learn from product companies and the systems that they use that can be applied to Accountable Care Organizations?

I just read a report from the Aberdeen Group on the “CIO’s role in Product Lifecycle Management (PLM) System.”

http://www.ptc.com/WCMS/files/110940/en/Aberdeen_Report_PLM_CIO_english.PDF

The report starts by acknowledging the tendency for CIOs to view PLM system implementations initiated by product people as “rogue” projects.  The report explores whether such system implementations should be in the “realm” of IT, rather than the product people.

This article seems to be written from the perspective of the CIO.  It concludes that to be “best in class”, a company needs to:

  • Have the PLM implementation project “under IT,” rather than have it under product area leaders who have “a day job,” asserting that, if the PLM system is supported by the product area, there will be a risk to “business continuity”.
  • Have the PLM implementation project funded through the IT budget, including funding for some dedicated staff in the IT area, but justified based on a business case tied to a business initiative of the product area.  In other words, the product people are responsible for helping argue the case to the C-suite for increasing the IT budget.
  • Minimize customization to the PLM software, even if the product people argue that customization is required to maintain some aspect of the product lifecycle process that they consider to be a competitive advantage.

At the end of the article, the authors acknowledge that less than 50% of recent PLM implementations are taking this advice and formally assigning CIO responsibility for PLM system implementations.  The companies that do so tend to be large companies that “focus on standardization.”

Of course, none of these turf issues matter if the people and departments involved are fundamentally collaborative and agile and if the right talents are represented in the collaborators.  I theorize that the problem has, as its root cause, the culture of blame that sometimes develops in large organizations.  In such organizations, IT leaders routinely experience blame when IT-related things inevitably go wrong, even in situations where the problem is  largely out of the control of the IT leaders. After receiving routine beatings over a long period of time, IT leaders and IT department culture can take on a defensive posture.  This defensiveness, in turn, does three things:

  1. It crowds out collaborativeness
  2. It fosters hiring and promotion processes that emphasize management skills (“are you done yet?”) over design insight and creativity
  3. It encourages processes to be optimized for documentation and blame avoidance rather than agility

Then, after years of experiencing IT support that, in the opinion of product leaders,  lacks design insight, creativity and agility, the product leaders insist on keeping the PLM systems under their own control.  CIOs consider this to be “rogue” behavior that needs to be thwarted.  As a result, a reinforcing loop (a vicious cycle) is created.

This general system of causes and effects seems to apply as well to other technologies that are used for mission-critical processes by non-IT people with some technical capabilities, including workflow automation, business process management (BPM), enterprise data warehouse (EDW), and electronic medical records (EMR).  Specifically in the latter case, clinical leaders that are focused on using IT to transform their actual care processes sometimes tend to associate “EMR” with “big expensive project owned by IT people that don’t understand my clinic.”  As a result, they prefer “registries,” which connotes “little project owned by the clinicians and focused on enabling the front-line process improvements we’re trying to make right now.”

The Aberdeen Group report attempts to address some practical issues regarding budgets and organizational roles.   It does a nice job of linking survey data to recommendations.  But, in my opinion, the solution to this problem — whether it be for PLM systems in product companies or care process management systems in ACOs — is not to decide who owns the turf.  It is to address the root causes.  IT and product/clinical leaders must work together in ways that avoid finger-pointing, builds trust, attracts and retains the best talent, and invites meaningful participation by all the team members with something to contribute and a stake in the outcome.  And the information systems must be designed to enable such collaboration.  Admittedly, easier said than done.


What are agent-based models and how do they relate to ACOs?

Agent-based Modeling (ABM) is a type of computational simulation that involves the creation, inside a computer’s memory, of a collection of objects that are programmed to mimic the behavior of people, companies, countries or anything else that constitutes the “agents” that interact and change over time as part of a complex system. ABM has been used for decades in diverse fields, and has grown more popular as computers have become more powerful and less expensive. In the field of meteorology, ABMs have been designed such that each region of the atmosphere is represented as an “agent” that interacts with adjacent segments of atmosphere to create models that simulate and predict complex weather patterns. ABM has been used to study traffic patterns, where cars and trucks are modeled as “agents” moving across roads.  It has been used in biology to model predator-prey dynamics.  It has been used to study urban sprawl and racial segregation.  It has been used to model the behavior of children in the school yard and the behavior of nations interacting on the world stage. And, in health care, ABM has been used at the CDC and elsewhere to model the transmission and spread of communicable diseases.

ABM is useful when the system of interest is too complex to reduce down to an equation. In my experience, many problems in both health care and health care management fit that description. Think about creating a successful Accountable Care Organization. It involves primary care physicians, specialists, hospitals, patients and health plans interacting, being incentivized in new ways, with changing relationships and changing capabilities regarding information technology, care management and analytics. It seems unlikely that we can reduce this down to a system of equations. But, if we can’t model it with traditional models, should we just fall back on our intuition about what is likely to happen if we pursue different policies and make different investments? How about we use our intuition AND create models. The models allow us to clarify our thinking and explore many different scenarios to see the potential implications of different choices.
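To make the idea concrete, here is a deliberately tiny agent-based sketch in Python. The agents, probabilities, and outcome are all invented assumptions for illustration; a real ABM of ACO formation would model far richer behavior and interactions:

```python
import random

# Minimal agent-based sketch (illustrative only): patients as agents whose
# assumed monthly chance of an avoidable admission falls while engaged in
# care management. Both probabilities are made-up parameters.
random.seed(42)  # fixed seed so the simulation is reproducible

class Patient:
    def __init__(self):
        self.engaged = False
        self.admissions = 0

    def step(self):
        p_admit = 0.05 if self.engaged else 0.10  # assumed monthly admission risks
        if random.random() < p_admit:
            self.admissions += 1

patients = [Patient() for _ in range(1000)]
for p in patients[:500]:          # enroll half the population in care management
    p.engaged = True

for month in range(12):           # simulate one year, month by month
    for p in patients:
        p.step()

engaged_rate = sum(p.admissions for p in patients if p.engaged) / 500
unengaged_rate = sum(p.admissions for p in patients if not p.engaged) / 500
```

Even this toy version shows the appeal of the approach: instead of solving an equation, you specify plausible agent-level rules, run the system forward, and compare scenarios by changing the rules or the parameters.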

According to a Jan 31 article in the Johns Hopkins Gazette, Joshua Epstein is establishing the Johns Hopkins University Center for Advanced Modeling in the Social, Behavioral and Health Sciences (CAM).  Epstein is in the Department of Emergency Medicine, but has cross appointments in other departments, including the Bloomberg School of Public Health, School of Medicine, Whiting School of Engineering and Krieger School of Arts and Sciences.  They are focusing on disaster medicine, disaster response, public health preparedness, and chronic disease.   And, they are seeking multi-institutional collaboration, starting with the Santa Fe Institute, the Pittsburgh National Center for Supercomputing Applications, the Virginia Bioinformatics Institute at Virginia Tech, the National Center for Computational Engineering at Tennessee, and ETH Zurich.  Among the earliest models developed by CAM researchers is a planet-level model of 6.5 billion agents to explore the transmission of communicable diseases, including swine flu.

Joshua Epstein giving a lecture on Agent-based Models

 

In my opinion, ABM is destined to eventually become a standard tool to support knowledge-driven decision-making regarding the transformation of our complex health care system, including the formation of successful ACOs.


Reference for Joint Principles of the Patient Centered Medical Home, 2007

Here is the original reference to the February, 2007, 3-page consensus document that describes the Joint Principles of the Patient Centered Medical Home.  It was created by the four professional societies that are focused on primary care: American Academy of Family Physicians (AAFP), American Academy of Pediatrics (AAP), American College of Physicians (ACP), and American Osteopathic Association (AOA).

Very simple and straight-forward.

http://www.pcpcc.net/content/joint-principles-patient-centered-medical-home


Satirical YouTube video on Accountable Care Organizations

I’m not sure about the background of this video submitted to YouTube by Centura Health. But, it is hilarious and sad at the same time. I’ve shown it to dozens of people, and it circulated widely among physician organizations in Michigan.


Harold Miller’s Paper: How to Create Accountable Care Organizations

This is probably the single most useful reference regarding Accountable Care Organizations, outlining alternative ways of conceptualizing them and giving a balanced explanation of the pros and cons of alternative models, including structure and reimbursement.  David Share, MD, from BCBSM was involved, and the document uses BCBSM’s Physician Group Incentive Program as a case study.

The web reference is: http://www.chqpr.org/downloads/HowtoCreateAccountableCareOrganizations.pdf


The Smoking Intervention Program, a Provider-based Care Management Process

Smoking cessation is an important public health concern, and has been the subject of a recent Agency for Health Care Policy and Research (AHCPR) guideline, as well as a HEDIS measure.   A point prevalence study conducted with the Henry Ford Health System found a 27.4% prevalence of smoking, and an additional 38.6% former smokers.

The CCE developed a first-generation smoking-dependency clinic which was staffed by trained non-physician counselors and overseen by a physician medical director. The original intervention was a 50-minute initial evaluation and counseling visit, with nicotine replacement therapy prescribed for all patients with a high level of nicotine dependency. This intervention was subsequently updated to reflect the AHCPR recommendation that, unless contraindicated, all smoking cessation patients be prescribed nicotine replacement therapy.

Because relapse is a normal part of smoking cessation, the intervention was explicitly designed to address relapse. This was done through return visits, an optional support group, and follow-up telephone counseling calls throughout the year, as illustrated in the following figure.

The program was designed to be inexpensive and simple to execute within the clinic. This was accomplished by automating the logistics of both the intervention and the collection of outcomes measures. The Flexi-Scan System, an internally developed computer application which helps automate outcome studies and disease-management interventions was used to automate (1) data entry through a scanner, (2) prompting of follow-up calls and mailings, and (3) the generation of medical-record notes and letters to the referring physicians. A database that can be used for outcomes-data analyses is acquired as a part of this process.

As illustrated on the figure below, this first-generation program achieved a twelve-month quit rate of 25%. Such a quit rate is about twice as high as the rate achieved with brief counseling intervention.

To evaluate the cost-effectiveness of this program, a decision analytic model was constructed. This model was constructed using the Markov method.  Key assumptions of the model include the following:

  • One year quit rate for usual care (optimistically assumed to consist of brief physician advice) was 12.5%.
  • Spontaneous quit rate of 1% per year in “out years.”
  • Relapse rate for recent quitters of 10%.
  • Age, Sex distribution based on Smoking Clinic patient demographics
  • Life expectancy of smokers and former smokers by age and sex based on literature (life tables).
  • Cost of clinic intervention – $199
  • Cost of nicotine therapy: Smoking Clinic – $101 (assuming 0.9 Rx/patient); Usual Care – $33 (assuming 0.3 Rx/patient)
  • Future health care costs were not considered
  • Annual discount rate of 5%

The results of this model were presented at the annual meeting of the Society for Medical Decision-Making.  The model results are presented in the form of a table called a “balance sheet” (a term coined by David Eddy, MD, PhD).  As shown below, the model estimated that the first-generation smoking-dependency clinic cost about $1,600 for each life year gained.

To help evaluate whether this cost-effectiveness ratio is favorable, a league table was constructed (see below).  The league table shows comparable cost-effectiveness ratios for other health care interventions.  Interpretation of the table suggests that the smoking cessation intervention is highly favorable to these other health care interventions.

League Table

Intervention | Cost per Quality-adjusted Life Year Gained
Smoking Cessation Counselling | $6,400
Surgery for Left Main Coronary Artery Disease for a 55-year-old man | $7,000
Flexible Sigmoidoscopy (every 3 years) | $25,000
Renal Dialysis (annual cost) | $37,000
Screening for HIV (at a prevalence of 5/1,000) | $39,000
Pap Smear (every year) | $40,000
Surgery for 3-vessel Coronary Artery Disease for a 55-year-old man | $95,000

Although this first generation program was effective and cost-effective, it was targeted only at the estimated 16,500 smokers in the HFMG patient population who were highly motivated to quit.

The estimated 66,000 other smokers in the HFMG patient population would be unlikely to pursue an intervention that involved visiting a smoking dependency clinic. Even for the smokers who were highly motivated to quit, the smoking cessation clinic had the capacity to provide counseling to about 500 people each year, or about 3% of these highly motivated smokers.

Second Generation Smoking Intervention Program

In response to this problem, the CCE developed a “second generation” Smoking Intervention Program. This program uses a three-tiered approach which includes (1) a “front-end” process for primary care and specialty clinics to use to identify smokers and provide brief motivational advice, (2) a centralized telephone-based triage process to conduct assessment and make arrangements for appropriate intervention, and (3) a stepped-care treatment tier.

In the “front-end” process, clinic physicians and support staff were trained to screen their patients for smoking status and readiness to quit and to provide tailored brief advice. Each participating clinic was provided with a program “kit” including screening forms, patient brochures, and posters to assist them in implementing the program. Patients who are interested in further intervention are referred to a centralized triage counselor for further assessment and intervention. These counselors are trained, non-physician care providers. They proactively call each patient referred, conduct an assessment of the patient’s smoking and quitting history, and triage the patient into a stepped-care intervention program.

An important part of this intervention has been providing information to clinicians, including a quarterly report showing the number of patients they have referred to the Smoking Intervention Program, the status of those patients, the type of intervention they are receiving, and the number of patients who report not having smoked in the preceding six months.

The clinician-specific data is presented in comparison to data for the medical group as a whole. These reports have a strong motivational effect on clinicians, as evidenced by a sharp increase in Smoking Intervention Program referrals after each reporting cycle.

As shown above, the second generation program achieved a six month quit rate of about 25%. This rate is virtually identical to the first generation program.  The new program, however, has much larger capacity and lower cost per participant. Patient satisfaction with the Smoking Intervention Program is encouraging, with 85% reporting that they would refer a friend to this program.
