Everyone working in healthcare informatics shares one deep and profound belief: that with all of this data we're systematically collecting (or will be collecting once our projects are further along), we'll be able to arrive at truth. What works? What doesn't? In short, applying the scientific method.
In concrete terms, if (or when) each of us is diagnosed with breast cancer, prostate cancer, heart disease, diabetes, or a common cold, our doctors will be able to quickly and accurately diagnose us and prescribe the most effective treatment. Everyone working here shares an understanding that there's a learning process involving all prior patients and the healthcare system's experience diagnosing and treating them. Patients and families also hope that we (society) will learn from prior misfortunes, including deaths.
And yet, anyone who has had more than a few encounters with our healthcare delivery system has experienced a system that clearly doesn't learn very well. Whether that learning needs to come from observational research or from formal, multi-center trials with clever protocols, we're finding that our learning processes are unnecessarily inefficient.
A field of medicine has emerged to improve the speed and fidelity of this learning; it's called translational medicine. It focuses on studying, and then quickly acting on, what has historically been a 17-year process of moving medical discoveries into common practice, and making it much faster: going from lab bench to bedside. Now, in 2008, we have hundreds of commonly used tests and drugs, and several orders of magnitude more genomics data coming on-line every year. Operationalizing learning is no longer just a good idea. The practice of medicine, whether evidence-based or simply experience-based and expertise-infused, requires new methods.
It turns out, according to a healthcare informatics grand rounds presentation last week at Johns Hopkins, translational medicine faces several formidable barriers:
-- Our current system is antiquated; repeating and/or combining studies for learning purposes is, in very objective terms, impossible (hence the title "... where do they").
-- Knowledge transfer amongst researchers across space and time is an extremely low-reliability system.
-- Social, economic, and management systems need revision, across an ecosystem that includes academic medical centers, biotech, pharma, and consumers.
The scientist/entrepreneur/executive who presented the grand rounds, Dr. Steve Bova, provided a glimpse of what's required to solve the translational medicine problem. It includes governance and standards, but not the kind you might expect. It draws lessons from Web 2.0: mass collaboration tools, democratization of content, transparency, and publication. And there's a healthy dose of workflow automation. No surprise there!
Want to learn more?
If you've ever been involved in a clinical study or an extensive chart review, or you have master's- or PhD-level training in healthcare research, you'll find a lot of value in Steve's Grand Rounds presentation here (http://real.welch.jhu.edu/ramgen/DHSI/Oct032008.rm). Bring lunch to your desk and attend virtually. It's free and requires no log-in; you will need RealPlayer. (The initial introduction of Dr. Bova, not on the video, notes that Steve is an Assistant Professor of Pathology, Oncology, and Urology, holds a joint appointment in Health Science Informatics, and is boarded in Prostate Cancer and BioInformatics/Genomics.)
Despite the title, the talk is really about the big picture: Science 2.0. The title is just the entry point. The style is more of a first-person narrative with artifacts than a primer on translational medicine per se. That said, it's absolutely relevant if you meet the audience criteria I described in the previous paragraph.
Enjoy! And, of course, comment here on what you thought.