Speech Recognition to the Rescue | Healthcare Informatics Magazine

Speech Recognition to the Rescue

January 2, 2008
by Alan Fink
When transcription costs started getting the better of Camino Medical Group, the healthcare organization decided to fight back.

Transcription costs can be crippling, representing millions of dollars for large healthcare organizations. Whether transcription is done internally or is outsourced, providers still suffer from long lag-times between dictation and report availability.

At Camino Medical Group (CMG) in Mountain View, Calif., we deployed an EMR in 1997, and as a result, our leadership team has a deep appreciation for the value of technology as an enabling tool in providing care. CMG is one of northern California's largest physician-governed multi-specialty medical groups and is part of the not-for-profit Palo Alto Medical Foundation. We have locations throughout the South Bay Area in Cupertino/San Jose, Mountain View, Santa Clara and Sunnyvale. We have striven to remain innovative in information technology and have taken steps to explore and deploy new technologies, such as voice recognition.

CMG sees more than 600,000 patients each year, has nearly 300 primary care and specialist physicians, nurse practitioners and physician assistants, and performs more than 22,000 surgeries in outpatient centers and at participating hospitals. Our transcription costs were becoming an unacceptable burden, costing us more than $2 million annually for outsourced transcription services.

Our documentation process was also plagued by slow turnaround and delayed availability of completed reports. We knew we wanted to go paperless to save money and improve access to patient records. Many of our physicians were still handwriting reports, with an even larger number relying on traditional dictation/transcription processing. Our CIO decided to use speech recognition technology to give our physicians the ability to dictate, review, edit and electronically sign their own reports, bypassing the transcription process entirely.

Before starting the transition, we identified a number of critical success factors, including providing proper training and maintaining continued executive support. That combination of smoothing the learning curve and keeping motivation high was key. Most important of all, though, was providing a physician-friendly interface so our doctors would not be put off by an overly complicated system.

After exploring a variety of approaches to speech recognition, we settled on Enterprise Workstation from Dictaphone (now the healthcare division of Nuance Communications Inc., Burlington, Mass.) as the right combination of front-end tools for physicians to complete their reports and back-end technology. The new system also gave us the freedom to use a mix of self-edit and traditional dictation/transcription as part of our enterprise documentation workflow.

Physicians don’t want to keyboard

Our electronic documentation initiative was part of a larger plan for overall automation of our patient information systems, including an enterprise-wide EMR. That effort was strongly guided by Mahnaz Choobineh, our CIO at the time, who believed that to be successful, physicians would need to fully embrace and gain from any technology that was put in place.

We knew that our physicians did not want a keyboard, but rather an alternative that would not require significant changes in how they created their reports within our current EMR environment. Because speech recognition offered clear improvements in the creation of the dictated narrative report, building an intuitive physician interface around it was the most logical starting point.

We also saw speech recognition and the EMR as highly complementary initiatives, because we placed a very high value on providing a comprehensive picture of a patient's medical history.

Today we have nearly 300 doctors across the South Bay Area self-completing 6,500 reports each week. We have cut annual transcription expenses by more than $2 million by eliminating outside transcription of dictated reports. Of equal importance, we have created a way to migrate physicians who were still handwriting reports to an electronic format without incurring new transcription costs.

We do still have a few physicians handwriting reports; however, we are already very close to our goal of 100 percent of physicians using dictation with speech recognition and 70 percent using the self-editing functionality. We estimated that without speech recognition, converting all of our handwritten reports to traditional dictation/transcription would have cost more than $4 million.

Plans for the future

We plan to continue rolling the system out to more physicians. Currently, more than 90 percent of our 50,000 visits a month are documented using dictation and more than 70 percent are self-edited and completed by the physicians themselves.

Our next big challenge is the implementation of a new EMR platform that will be rolling out over the next year, and we are already working to ensure that our speech-recognition system is integrated into the new EMR.

Alan Fink is director of the project management office for Palo Alto Medical Foundation.


