
Where the Rubber Meets the Road in MD Documentation: An Emergency Physician Perspective

July 20, 2013
by Mark Hagland
Reid Conant, M.D., a California emergency physician, shares his perspectives on optimizing physician documentation in the ED context

Reid Conant, M.D., wears several hats these days. He practices half-time as an emergency department (ED) physician, and in that role, he is also CMIO of the Tri-City Emergency Medical Group, a 23-doctor emergency physician group in Oceanside, Calif. Tri-City provides emergency physician coverage at Tri-City Medical Center, a 400-bed community hospital in Oceanside. Conant also consults privately through his firm, Conant and Associates, where he focuses on the optimization and deployment of physician documentation solutions.

At Tri-City Emergency Physicians and Tri-City Medical Center, Conant has been leading the implementation and optimization of physician documentation, including through the Dragon Medical speech recognition solution from the Burlington, Mass.-based Nuance. He also has a business relationship with Nuance through his consulting work. Conant spoke recently with HCI Editor-in-Chief Mark Hagland regarding his and his colleagues’ experiences with physician documentation and speech recognition. Below are excerpts from that interview.

In your view, are the requirements of physician documentation in the ED as onerous as before?

There’s been significant improvement in adoption, primarily because of new technologies, but also in the understanding of how to train and re-train physicians to adopt these technologies. One thing we encountered at my facility was an initial reluctance to document electronically, because of all the pointing and clicking. Since we’ve added speech recognition as an element, it’s no longer all pointing and clicking, and that has enhanced adoption. The documents also became more meaningful to the hospitalists and ICU nurses once we added Dragon, because we were able to add more to the narrative. I’m a decent typist, but I’m nowhere near the level of efficiency that’s possible using speech recognition solutions.

I do think that we’re in a transitional period right now as an industry, including medical informatics and clinical practice in total. And the reason for that is that we have requirements for problem list management and for diagnosis list management, as well as core measures and other regulatory requirements. And it’s not just completing these, but documenting them thoroughly as well. So there has been an increased burden for providers not only to deliver care consistent with clinical guidelines, but also to document that one has done so. And we’re in a transitional period in which the technologies are catching up with the requirements, but they haven’t entirely done so yet.


Can you speak a bit more specifically to the transitional aspect of this?

Yes. For a while, we’ve been working in an environment with fairly regimented formats, which was necessary because the technology was not yet there to capture data from unstructured text. Now, we can get structured data out of narrative text, because of natural language processing. In an ideal world, the physician would be able to create a document, guided by a framework or template consistent with a presenting condition or care plan, while still being able to add patient-specific narrative that could then be mined to create discrete data elements. That’s the best of both worlds—the flexibility necessary to make providers efficient, along with the ability for the technology to capture discrete data elements.
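To make the idea concrete, here is a deliberately simplified sketch of pulling discrete data elements out of free-text narrative. This is a toy illustration using pattern matching, not Nuance's actual natural language processing pipeline; real clinical NLP relies on statistical language models rather than hand-written patterns, and the note text and field names below are invented for the example.

```python
import re

# Hypothetical snippet of ED narrative text (invented for illustration).
NOTE = "Pt reports chest pain x2 hours. BP 142/88, HR 96. Denies fever."

# Toy patterns for two discrete elements; a real system would recognize
# far more, and would handle negation, abbreviations, and context.
PATTERNS = {
    "blood_pressure": r"BP\s+(\d{2,3}/\d{2,3})",
    "heart_rate": r"HR\s+(\d{2,3})",
}

def extract_discrete_elements(narrative: str) -> dict:
    """Return discrete data elements found in a narrative string."""
    found = {}
    for name, pattern in PATTERNS.items():
        match = re.search(pattern, narrative)
        if match:
            found[name] = match.group(1)
    return found

print(extract_discrete_elements(NOTE))
# {'blood_pressure': '142/88', 'heart_rate': '96'}
```

The point of the sketch is the direction of the workflow Conant describes: the physician dictates freely, and structured elements are derived from the narrative afterward, rather than forcing the narrative into rigid fields up front.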

How frustrated are most emergency physicians right now, with having to move forward in the electronic documentation world?

Well, there’s a bell curve in that regard. Those sites that have deployed documentation solutions in a strategic manner, with the right tools, and with the tools optimally configured, can make that transition quite seamlessly. I’ve also seen sites struggle, unfortunately; and we were one of those, when we went live six or seven years ago. But the addition of speech recognition, and the optimal configuration of the electronic health record, have helped.

So for example, we create commands within Dragon to speech-enable steps that we repeatedly use within the electronic medical record: adding an order, signing a note, and many others. But we can also build content into Dragon that can facilitate and streamline our work; for example, if I have a code status discussion with a patient, such as an advance directive or end-of-life discussion. There are multiple items I would cover in that discussion on a regular basis; so why should I repeat that dictation over and over, when I can rely on a pre-created element? There are many other examples, such as operative report details, procedure notes, and assessments.

How complicated is it to build speech recognition elements into these templates?

It’s something that we’re able to train our physicians to do, and it can be done at the individual-user level or at the organizational level. It’s very doable at the user level. As consultants, my colleagues and I have also put together a bundle of about 2,000 starter commands covering over 50 subspecialties.

Could you provide an example from one of these starter-command bundles?