
Up-To-Date Problem Lists And MU

September 6, 2010
by Joe Bormel
How involved is your medical staff in using problem lists?

When we ranked the Meaningful Use implementation and use challenges from the Final Rule, July 28, 2010, numbers one and two were clearly CPOE and Maintain Up-to-Date Problem List. Last week, following the CCHIT jury testing process, we tested our product's Problem List capabilities (§170.302b) against the government test procedures, as well as CCHIT Inpatient test scripts.

What became immediately clear, yet is not obvious from a distance, is that the government is very serious about the "Up-to-Date" words in front of "Problem List" for the inpatient setting. Creating a one-problem Problem List on admission to satisfy the reporting requirement, an approach more than one CIO has asserted they would take, will clearly not be "up-to-date" on hospital day two. For example, per TD170.302.c-5, there is an expectation that an acute heart attack be changed from active to inactive.
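To make the active-to-inactive expectation concrete, here is a minimal sketch in Python of a problem-list entry whose resolution moves it to inactive rather than deleting it. The class, field names, and code value are my own illustrative assumptions, not any certified product's data model:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ProblemEntry:
    """One structured entry on a patient's problem list (illustrative model)."""
    description: str                  # e.g. "Acute myocardial infarction"
    code: str                         # coded representation (e.g. a SNOMED CT concept)
    status: str = "active"            # "active" or "inactive"
    onset: Optional[date] = None
    inactivated: Optional[date] = None

    def resolve(self, when: date) -> None:
        # Move to inactive rather than deleting, preserving the history.
        self.status = "inactive"
        self.inactivated = when

mi = ProblemEntry("Acute myocardial infarction", "57054005", onset=date(2010, 9, 1))
mi.resolve(date(2010, 9, 7))
print(mi.status)  # inactive
```

The key design point is that `resolve` never removes the entry; an up-to-date list keeps resolved problems visible with a non-active status.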

This point was also made by a prominent CIO, Dr. Hal Baker, in his podcast last week. Dr. Baker pointed out that turning on the Problem List capability within an EMR is easy, while getting a medical staff to use it is complex. These are entirely different issues.

Issue: Historical Reluctance to Use Problem Lists

- Demanding to maintain
- Usually incomplete – so inaccurate
- Often not maintained – results in mistrust
- Problems often linked in a casual way
- Limited classifications, types and status
- Lack of ownership


It stands to reason that, if your goal is to demonstrate Meaningful Use in 2011, creating a shared clinical vision of an up-to-date Problem List is going to be necessary, and will require a campaign and prototype.

To help you get started, here are a few of the benefits you can achieve from evolving toward an up-to-date Problem List:

- Facilitate analysis of potential interaction between patient problems and diagnostic/therapeutic interventions

- Facilitate association of clinical information to a specific medical problem

- Facilitate management of patient chronic conditions

- Support continuity of care

- Improve clinical decision making

- Increase adoption of screening programs and preventive health measures

- Improve communication between health professionals

- Provide a central and concise view of the patient’s medical problems

- Encourage an orderly process of medical problem solving and clinical judgment

- Improve provider productivity, while creating accurate and complete medical records

 

Installing and turning on certified software is one thing. Achieving and demonstrating Meaningful Use is something more.

 

What do you think?

 

Photo: Dr. Thomas Garthwaite, who over a decade ago, as Under Secretary for Health at the Department of Veterans Affairs, led an initiative driving problem list usage from 60% prevalence to over 90%, as part of a highly successful and systematic set of initiatives to improve cost, quality and access.


Comments

Dr Lyle,
Thanks for your comment. I'm a huge fan of your blog and recommend it to my friends and readers regularly (including through links.) Your investment in research as well as your deep background shows in every post. Thanks for your blogging.

Auto-Population of Problem Lists

Your comment on auto-population of problem lists was wonderful, and the topic hadn't been brought up yet. I steered away from it: just a little too painful for readers who are, understandably, scared, humbled or both by what's in the Final Rule for Stage One. Since auto-population isn't required, it's a distraction. (It's also not mature and, as you point out, could lead to additional MD prompting that doesn't engender MD love.)

So, that said, working with our friend and former CIO (actually a member of the oCIO, comprised of CIOs) Dr Paul Fu, we did experiment with problem list auto-population, using NLP and CAC.  Here's a link to a description and screen shots of that work:  /Media/BlogReplies/2005.05.16 v5 Mortimer, Bormel - TEPR05.ppt  The graphic to the right, "Problem" Management, shows our 2004 concept of where it was possible to "auto-populate" from, in the left column labeled Sources.  The graphic below shows the problems, codified in SNOMED, from one patient's corpus of documents. The date of the note referring to the problem is a link on the RHS, labeled Occurrences.

We presented the work at the SNOMED International annual meeting in 2004 and again at TEPR in 2005. In a nutshell, we used military-grade NLP that had previously been proven effective by the National Cancer Institute (NCI) to locate correct SNOMED codes for problems, allergies, medications, symptoms and other findings in dictated documents. We used the technology to build a pro forma problem list from the H&P and Discharge documents (the section of the repository was called Health Notes). The method was identical to your Option 2 (in your comment). The results were strikingly good.



Sadly, in both the grant-based research world of fellowships and the world of commercial software, the initiative was ripe before the market was. The participants moved on to other roles. The corporations involved themselves evolved, no longer resembling their past selves (different products, markets, customers, executives, and branding).

Happily, we've revisited NLP/CAC and have been demonstrating auto-creation of codified problems at AHIMA and elsewhere. We're using a fabulous open-source technology stack, and we're able to address now-ripe (or at least ripening) market issues, including MU and ICD-10.

I'm glad I was able to end on a happy note. As you know, there's a bittersweet quality to accomplishments in "resident research projects"!

We surveyed our current problem lists in our enterprise, including our ambulatory EMR and inpatient documentation. That turned up two problems.

First, the billing codes and historical billing codes for individual patients don't serve as a very good starting point (no surprises there).

Second, many of the existing "clinical" problem lists are so non-specific as to be nearly useless for the purposes you outlined above. It's easy to find the terms HTN, DM, CAD, OA, and CVA. Each is so non-specific as to be of little value.

As you suggest, it will be critical that each clinical community, be it a hospital, clinic, etc., pull together its clinical leadership to come to a high-level agreement: What's a good problem list? What constitutes a bad problem list? What problems or classes of problems are so important that extra resources are warranted to make sure we capture them? What shouldn't be on a problem list? Etc.

From my local experience, inventorying current practices won't write the local charter for problem lists. Nor will interviewing the departmental and committee stakeholders, although that's a critical start.

Our current thinking is to drive the initial population of inpatient problem lists using recognized terms from the quality reporting measures, combined with infection control (largely culture, sensitivity, and related history). We already have dedicated, employed staff whose job it is to track these populations. The UTDPL seems like the rational vehicle to associate these efforts with direct patient care.

Robert,

Thanks for your comment. I strongly agree with your perspective.

One of my mentors said, "how caring and committed to excellence a physician is was usually evident by the time that person was five years old."

In this respect, the behavior of caring (e.g. enough to maintain a problem list) is more of a talent than a skill or knowledge. As such, training alone is likely to be inadequate to achieve the necessary behavior change. It really takes the desire you referenced.

The other dimension you are raising (which hasn't been raised yet in this post) is that electronic patient records are read by more people involved in care than paper records were. As such, an actively managed problem list is more important than it was in the paper world.

I do think that some specialties are more obsessive about clarity or specificity of problems on the problem list than other specialties. I say that but I have no data! My sub-specialty experience from my UCLA fellowship left me saying things like "patient has a syndrome compatible with but not diagnostic of systemic lupus by classification criteria." When I ran an urgent care clinic in Boston, on the other hand, I was perfectly happy and comfortable with UTI, URI, dehydration, etc, with very little precision.

Thanks again for your comment. I'm thrilled to see the depth of positive passion that the MU dialogue, including those of policy makers, is bringing to modern medicine. It's refreshing.

Chupacabra,

Thanks for your comment sharing your checklist. You brought up several critically important problem attributes and workflow needs, as well as a roadmap of design issues that will increasingly become important (e.g. NLP / Computer-Assisted Coding feeds).

I think this is a great topic, and I'm interested in the aspects of applying CDS to populating and maintaining the problem list - in other words, how can we actually use our computerized data to make this process more automatic, making Problem List documentation an "intelligent-aware byproduct of our care" rather than a separate manual function.
 
For example, we have previously heard from Dr. Bill Galanter [on the AMDIS ListServ], who has done some great work in creating rules that "suggest" a diagnosis to a doctor trying to prescribe a certain medication.  We do this with DM meds, and indeed we feel very comfortable that the vast majority of our diabetics have Diabetes on their problem list.
 
And many years ago, my "resident research project" was ascertaining the reliability (specificity and sensitivity) of past billing diagnoses for clinical use. For every patient visit, we pre-printed a list that rank-ordered this "problem list" based on an algorithm that included both the total number of times each diagnosis was billed and the most recent time it was billed.

We looked at both a complete list (which could be 30-50 diagnoses on some patients) vs. a list that stopped at 10.
The results for the Top 10 (just based on BILLING data) were pretty good: the lists were about 80% accurate and reliable, meaning 80% of the diagnoses could be regarded as true problems, and the lists were only missing around 20% of the diagnoses. Not perfect, but I wonder if we could even get close to that based on our current program of having docs "manually" add the problems. Of interest was that the things we might not regard as "permanent" problems still had value. For example, while I would not put "Sinusitis" on a problem list, I do think there is value in knowing whether the pt gets an average of 5 a year, 1 a year, or 1 every 5 years, and it would be nice to have that made transparent.
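The count-plus-recency ranking described above might be sketched as follows; the weighting (total count, with recency as a tie-breaker), the function name, and the data shape are my own assumptions, not the study's actual algorithm:

```python
from collections import defaultdict
from datetime import date

def rank_billing_diagnoses(visits, top_n=10):
    """Rank past billing diagnoses by total count, breaking ties by the
    most recent date billed. visits: iterable of (date, diagnosis) pairs."""
    counts = defaultdict(int)
    latest = {}
    for when, dx in visits:
        counts[dx] += 1
        latest[dx] = max(latest.get(dx, when), when)
    # Sort descending by (count, most recent billing date); cap at top_n.
    return sorted(counts, key=lambda dx: (counts[dx], latest[dx]), reverse=True)[:top_n]

visits = [
    (date(2009, 1, 5), "HTN"),
    (date(2009, 6, 2), "HTN"),
    (date(2010, 3, 1), "HTN"),
    (date(2010, 2, 1), "Sinusitis"),
]
print(rank_billing_diagnoses(visits))  # ['HTN', 'Sinusitis']
```

Capping at `top_n=10` mirrors the truncated list the study compared against the complete 30-50 item list.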
 
So options for CDS auto-creation of problems list might be:
1. IF prescribe Med affiliated with a problem (and/or labs corroborate) - THEN prompt MD to add that problem to problem list
2. IF diagnosis used over 3 times in a single year, OR over 5 times in 2 years, etc., THEN automatically add to problem list (or prompt MD to add)
 
Of course, if the MD said "no" then the alert should be suppressed- maybe forever, maybe just for 1 year to re-evaluate?
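Option 2, combined with the suppression idea, might look roughly like this; the thresholds, one-year suppression window, and names are illustrative assumptions, not a production rule:

```python
from datetime import date, timedelta

def should_suggest(visit_dates, today, declined_on=None):
    """Suggest adding a diagnosis to the problem list if it was coded more
    than 3 times in the past year or more than 5 times in the past two
    years, unless the MD declined the suggestion within the past year."""
    if declined_on is not None and today - declined_on < timedelta(days=365):
        return False  # alert suppressed for a year after an MD "no"
    past_year = sum(1 for d in visit_dates if today - d <= timedelta(days=365))
    past_two_years = sum(1 for d in visit_dates if today - d <= timedelta(days=730))
    return past_year > 3 or past_two_years > 5

today = date(2010, 9, 6)
dates = [today - timedelta(days=30 * i) for i in range(1, 6)]  # 5 visits over ~5 months
print(should_suggest(dates, today))                                    # True
print(should_suggest(dates, today, declined_on=today - timedelta(days=100)))  # False
```

Passing `declined_on` back in on later evaluations is one way to implement the "suppress, then re-evaluate after a year" behavior.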
 
So we know some places are doing #1, but I have a feeling it's not too many.
Is anyone doing option 2?    If so - how are things going?
 
Thanks, Lyle
 
_______________________________________________
Lyle Berkowitz, MD
 

Web: www.DrLyle.com
Blog: www.drlyle.blogspot.com
 

I think the topic of an actively managed problem list really reflects the individual physician's personal desire for clinical excellence, regardless of specialty. It falls back on the simple medical school teaching of taking an accurate, detailed history and physical. All technology aside, if the physician doesn't pay attention to the details and strive for excellence, the best technology in the world doesn't matter. I've seen it time and again in the EHR world where the data and information are there, but the physician has overlooked them. Sometimes the basic learned skills of being a caring, thorough physician mean more than NLP, CDSS and computerized intelligence.

Dr. Joe,

There is a bit of a dichotomy in Stage 1. While the narrative talks about up-to-date meaning to be looked at at least once each episode of care, the measure is simply that there is at least one entry per patient per year.

The dialogue in the advisory committees was more along the lines that the problem list is updated several times during each patient hospitalization as existing problems are resolved and new ones identified, but with the resolved/inactive problems being moved to a non-active status rather than being deleted from the list.

For ambulatory, the dialogue was more that patients coming for an acute episode have a new problem that needs to be added to the list. Patients coming for follow-up, chronic care, or check-ups may or may not have new problems or changes in status. Consultants and other EPs who never see the patient may never find adding or amending the problem list appropriate.

So we have lots of 'clarity' from the feds, but the clarity really does not prescribe how practitioners should use and manage the problem list.

Here's the excerpt from the final rule's narrative:

Commenters requested clarification of the term "up-to-date".


CMS Response: The term "up-to-date" means the list is populated with the most recent diagnosis known by the EP, eligible hospital, or CAH. This knowledge could be ascertained from previous records, transfer of information from other providers, or querying the patient.
However, not every EP has direct contact with the patient and therefore has the opportunity to update the list. Nor do we believe that an EP, eligible hospital, or CAH should be required through meaningful use to update the list at every contact with the patient. There is also the consideration of the burden that reporting places on the EP, eligible hospital, or CAH. The measure, as finalized, ensures the EP, eligible hospital, or CAH has a problem list for patients seen during the EHR reporting period, and that at least one piece of information is presented to the EP, eligible hospital, or CAH. The EP, eligible hospital, or CAH can then use their judgment in deciding what further probing or updating may be required given the clinical circumstances.



And here's the language of the regulation itself:


(3)(i) Objective. Maintain an up-to-date problem list of current and active diagnoses.
(ii) Measure. More than 80 percent of all unique patients admitted to the eligible hospital's or CAH's inpatient or emergency department (POS 21 or 23) have at least one entry or an indication that no problems are known for the patient recorded as structured data.
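As a back-of-the-envelope check of that measure, here is one way to compute it; the data shape (patient id mapped to a list of structured entries) is my assumption, not the regulation's:

```python
def problem_list_measure(patients):
    """Fraction of unique admitted patients with at least one problem entry,
    or an explicit 'no known problems' indication, recorded as structured
    data. patients: dict mapping patient id -> list of structured entries."""
    if not patients:
        return 0.0
    numerator = sum(1 for entries in patients.values() if entries)
    return numerator / len(patients)

pts = {
    "A": ["HTN"],
    "B": ["no known problems"],  # an explicit indication still counts
    "C": [],                     # nothing recorded: excluded from the numerator
    "D": ["DM", "CAD"],
    "E": ["CVA"],
}
print(problem_list_measure(pts))  # 0.8 -- just short of "more than 80 percent"
```

Note that the measure counts patients, not entries: a patient with ten problems counts once, and a structured "no problems known" indication satisfies the measure just as a diagnosis does.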

Here's a link to the Final Rule and relevant section:

http://edocket.access.gpo.gov/2010/pdf/2010-17207.pdf   Page 44336, middle column.




I hope that helps, Dr Joe.

Dr Joe,

While I agree with your comments, the main purpose of ARRA is to improve the quality of care and that cannot be done with an inaccurate Problem List. One entry a year or visit does NOT reflect the patient safety risks associated with the communication between providers of care.

The CMS response provided below states "The term 'up-to-date' means the list is POPULATED (meaning added to) with the most recent diagnosis" to help assist judgment in deciding what further probing or updating may be required given the clinical circumstances. If problems are maintained as they should be, then the list will reflect chronic problems that change status over time, and so assists a clinician's judgment in deciding the most appropriate level of care. In addition, the management of problems will provide a history of the patient's clinical status, which additionally assists in delivering the most effective level of care. All of this results in reduced costs.

I know this is a very difficult topic to address, but I believe as an informatics community and as clinicians, we need to deliver a clear message that if the Problem List is used appropriately, there will be less work, improved accuracy and completeness which will result in improved patient care and reduced costs.

Obviously, I am very passionate about this because I really believe there is enormous value to the patient, clinician, hospital, etc.

Thanks for helping make the stakes more explicit.

Here's a checklist for an Inpatient, MU-ready Problem List:

1. Flexible Design - ability to arrange required elements to meet varying physician workflow and contributors

* Attending Only
* House Staff (Res, PA, CRNP) + Attending
* Nurse + House Staff + Attending

2. Ability to create problem-based or systems-based problem lists (e.g. an H&P might be problem-based, but it converts to systems-based if the pt is admitted to CC)

* ability to easily sort/move problems within the Problem List

3. MU Requirements are met

* able/searchable ICD-9 (option for ICD-10)
* SNOMED Coded elements
* Onset/Inactivated dates recorded
* last-modified-by and date/time recorded, available for display on the note
* Present on Admission Indicator
* Status indicator (worse, improved, etc.)
* Principal/Primary indicator
* ability to sort problems

4. Incorporation of DSS - recommended courses of action and triggering of NQF quality measures

5. Carry forward flexibility options to:

* set carry forward on/off at the field level (e.g. Dx could carry forward, but force users to update the assessment/plan, etc.)
* hover or review yesterday's assessment/plan

6. Validation rules: the customer can determine which fields are required, with an option to prevent the user from signing the note if fields are not completed or reviewed

7. (near future capability) Use automation (NLP) to create problem list from the HPI

8. Abstraction of problems for reports (to PCP, for HIM, or DC Summary):

* ability to create lists for display on the DC Summary Active Problems (with codes/dates, etc) vs Inactive Problems

Sure, Dr. Joe.

The vision of the stages, in lay terms, is:


Stage 1: Get them to start using an EHR.
Stage 2: Get them proficient at using an EHR.
Stage 3: Make them accountable for quality and outcomes.


My point in my comment was that stage 1 is all about getting a toe in the water, not about achieving best clinical practice with problem lists.

The feds will increase the requirements in Stages 2 and 3, but we don't have any advance indication of how high the bar will be raised in either subsequent stage, nor what the exact specifications will be.

My guess is that Stage 3 will see rules for 'best practice' requirements around problem lists. Backing into where they want to be in Stage 3, the feds will craft stage 2 to be some kind of midpoint between the stage 1 entry level and the stage 3 best practices level.

The high ground for vendors and provider organizations is to lead users to be using the problem list in a way that adds real and positive value, not just doing the minimum compliance thing which would likely not encourage good clinical practice.

To the original point of your post, that requires medical staff leadership. You and Dr Baker are right. Problem list roll-out is extremely complex and important. Getting it right will improve patient safety through demonstrably better communication. Getting it wrong will be evident if clinicians don't appropriately use and trust the problem list. That clearly won't happen auto-magically by turning on the functionality!

Thanks Rich and Ed.

There is a sentiment that I've heard widely; paraphrasing:


"It is really unnerving that ARRA is being looked as an opportunity to gain funding, rather as a reward for doing the right thing and being effective. "



"If practitioners use the problem list according to the federal regulation minimum and not in a way that makes sense for clinical work flow, then the MU Stage 1 problem list may well add negative value to patient care. "


Rich, I think you were getting at this with your comment "[there is a ] dichotomy in Stage 1."   Can you expand a bit?

IA, Thanks for sharing your insights and your organization's strategy. Per Ed's comment above, and yours, anything less than maintaining a UTDPL is likely to result in patient harm through too-blind trust in that list.

Joe Bormel

Healthcare IT Consultant

@jbormel