Imagine walking down the street, minding your own business, when you are approached by a woman introducing herself as a medical doctor. She hands you a small, brown bottle of pills and states that consuming these pills will make you better: you will become a better employee at work, more productive at home, and will likely lead a happier overall existence.
You respond by informing the doctor you are not sick, yet she insists you will be better off with these pills. She then begins listing the ways in which you could personally improve in life — issues you are aware of but certainly did not invite her to comment upon. She concludes by handing you the bottle and informing you she will be checking in with your family doctor in two weeks to ensure you have begun using the pills. As she walks off you are left holding the bottle, along with feelings of defensiveness, anger, resentment, and a determination not to take those pills she imposed upon you, even if they might improve the issues she listed. After all, who is she to tell you how to live a life that you have been quite successful at so far, thank-you-very-much?
Many reading this share a frustrating experience: less-than-ideal levels of employee adoption for newly implemented technologies. Though the preceding scenario is a work of fiction, it illustrates a real and ongoing reason why technology adoption efforts so often fall short.
If not carefully handled, a technology implementation can feel to your clinical staff much like that encounter with the fictional doctor. Employees are carrying out their daily tasks, in some cases performing work that has remained fundamentally intact and basically effective for years, when they are approached by a hospital representative with some new technology. Often, this new tool or application is presented as a cure for an ailment many of the staff did not even know they had. On top of that, the technology is designed to solve a problem about which no one has sought their opinion.
This new technology will require a change in how daily work is performed, which in turn requires a change in personal behavior, and therein lies the fundamental problem.
Leading with the technology, as opposed to beginning with an accurately defined problem and then documenting a rationale for the change, increases the probability of end-user rejection of the technology. This occurs because a foundation of adoption — clear understanding of the change justification and benefits — has not been adequately laid.
Preliminary resistance to change is often a behavioral manifestation of end users' lack of understanding about the change. Research shows we tend to make changes in behaviors only after we have answered certain questions to our satisfaction.
These questions include:
- Why should I make this change?
- What are the advantages of the change?
- How will my life be better after making this change?
This seems obvious, but these questions are often overlooked or not given the time and attention they need. Answering them satisfactorily for end users will greatly increase the probability of a technology implementation's success.
Technology, by definition, is a way to solve a problem. Lack of consensus on the presence of a "presenting problem" decreases the probability of successful implementation. For, without the presenting problem, there exists no need for a change. If there exists no felt need for change, attempts to explain the rationale for a change will likely fall upon deaf ears. Think of visiting the doctor. If you complain of flu-like symptoms, you would anticipate your physician recommending remedies congruent with your complaints. If the physician instead recommended physical therapy for your knee, the odds of you following that recommendation are very small.
This is because your physician is recommending you implement a change in your life that has no bearing on your current situation, no matter how clear the rationale or benefits of physical therapy may be. If you don't have knee problems, why would you make behavioral changes to fix your knee? Likewise, if you don't have perceived or clearly identified problems in your current work processes or with your current technology, why would you want to make changes in those areas?
Therefore, a principal step in a successful technology implementation is to define organizationally agreed-upon areas for improvement. The best approach to defining those areas is one that gathers direct input from all levels of staff, particularly the frontline staff who will be most affected by any changes.
Documented areas for improvement by nature presume an underlying presenting problem or issue that needs resolution. Healthcare is increasingly focused on technology as a remedy for its ailments or as an opportunity for process improvement. However, many implementations achieve less-than-ideal results because the technology is rolled out as the ultimate goal in and of itself, rather than as a solution to some felt pain.
Technology is not the end, but a means to an end. The ultimate goal is some process improvement, and implementing technology is one means of achieving it. Using technology in this fashion presumes there is a problem to be solved. Any clinical technology implementation must therefore start by clearly answering the questions, "What's the problem?" and "Where is your pain?"