Physician documentation in electronic medical records will be a linchpin of achieving meaningful use in hospitals, but most agree that getting information into the electronic record is still a challenge for clinicians. While structured notes can reduce the burden of physician typing, the unstructured note still has to be addressed.
One way to strike a balance between the two is voice recognition (VR) technology for the EMR. But are physicians ready to be their own editors - especially when physician culture is often 'business as usual'?
“This is technology that is ready for prime time,” says John Halamka, M.D., CIO of Boston's Beth Israel Deaconess Medical Center (BIDMC). “But it's not one size fits all - you have to look at the workflow of the clinician.” That means some may want to use a phone to dictate to a server, while others may want a handheld voice recorder that captures data and uploads it to the server, he says. “Physicians love to dictate.”
Wes Rishel, vice president of Stamford, Conn.-based Gartner, agrees. “For physicians using an EMR, you almost have to get out of their way,” he says. “They're very anxious to use it for text entry because when they are interacting with an EMR, they would rather speak than type.”
Halamka, who has been using macros and templates to enter text into his EMR, says that using voice recognition for front-end dictation (where the physician self-edits) is a winning solution. In fact, Halamka says he implemented eScription VR technology from Nuance (Burlington, Mass.) at an initial cost of $500,000, and it has yielded savings of more than $5 million.
But what about hospitals that may not be as technologically advanced in this area as BIDMC? Is using VR on the front end to get notes into the EMR a viable solution for other hospitals, and is it a technology they are eager to adopt? “Very,” says Rishel. “At a minimum it's a cost savings - and it can have other important effects.”
The savings, he says, come first from reducing the number of transcriptionists needed, but other efficiencies can be realized, too. "If the physician adopts 'once and done' it has important other effects," Rishel says, referring to the workflow in which the physician reviews, amends and signs off immediately after dictation, eliminating a separate review step later. "With 'once and done,' the resulting report is available right away."
And having that report available immediately not only means better care for the patient, but it also impacts the bottom line. “Often, the time before you can take the next step for a patient is driven by how soon you can get the report back,” says Rishel. He explains that in addition to the clinical benefit, as hospitals adopt ‘once and done,’ patients can be discharged sooner, in some cases eliminating an extra day. “So in terms of direct cost of the transcription and in terms of the other efficiencies it creates, hospitals are anxious to use voice recognition.”
There are, however, specific challenges to using VR in an inpatient setting; notably, the physician's voice profile. In the past, voice recognition meant tethering physicians to a specific workstation that recognized their voice. While that continues to work well in environments where physicians use the same computer - such as ambulatory clinics and specialty practices - the inpatient setting, with its roaming physicians and thousands of workstations, is another story.
Like Halamka, Ed Babakanian, CIO at University of California San Diego (UCSD) Medical Center, found server-based voice recognition to be the solution. Babakanian signed on for a site contract with Nuance that will allow physicians to use VR at any computer, including laptops. “I think it's important when we introduce technology that we don't do it in a cheap way,” he says. “We didn't want to make it difficult for the physicians by restricting it to certain workstations because we only bought 20 or 30 licenses - if you're going to do it, you do it right.”
Babakanian says making VR convenient for the physicians is key to adoption. When physicians sign in, their voice profile is not on that machine; rather, it resides on a server, along with all the other physician profiles. “That is one of the key advancements making voice recognition more broadly used,” he explains.
That is, as long as the hospital's network capabilities are strong - otherwise there could be a lag while the physician's voice profile travels to the central server and back. Though the delay might last only a few seconds, it's not something most physicians are willing to accept. "It all depends on the fabric of your backbone," says Babakanian. "All of our network is fiber-optic. We have a high speed network so the speed of transmission of that voice profile from point A to point B is not significant."
The fundamental question, says Rishel, is how much the workflow changes for the physician. “Specialists making $400,000 a year may not want to change their workflow to dictating online and approving it right away,” he says. “It takes less time in their day not to wait for that to happen.” Many prefer, he says, to have a queue of reports to approve, typical of back-end VR.
“If the physician has no economic benefit from changing, they won't,” Rishel adds. “It takes an aggressive program of working with the physician both as a clinician and as a business person to get them over that hump.”