
Lions and Tigers and Meaningful Use Betas, Oh My!

April 5, 2011
by Joe Bormel, M.D.
The complexity of dozens of MU measures leads to even more complexity with beta testing


Every vendor involved in ARRA certification, for both inpatient and eligible ambulatory care providers, has rolled its newly certified code out to a beta client or two on the path to demonstrating Stage 1 Meaningful Use. These betas will be the implementation teams' first relevant experience before bringing Stage 1 functionality live across their install bases.

What is a beta test? A definition:

Second level, external pilot-test of a product (usually a software) before commercial quantity production. At the beta test stage, the product has already passed through the first-level, internal pilot-test (alpha test) and glaring defects have been removed. But (since the product may still have some minor problems that require user participation) it is released to selected customers for testing under normal, everyday conditions of use to spot the remaining flaws.

http://www.businessdictionary.com/definition/beta-test.html


Because of the incentive revenue implications for provider organizations, the threat of having their attestation contested by the government, and the complexity of dozens of measures (Core, Menu, and requisite capabilities), we have never before seen betas of this complexity in HCIT. And just as we begin to tie a bow around our Stage 1 accomplishments, an uncertain Stage 2 looms on the electronic horizon.

Stage 2 Meaningful Use is shaping up to be more complicated, with a broad spectrum of escalating requirements. Many on both sides of the vendor-provider partnership are already pushing back against what they have determined is an overly challenging set of requirements paired with an equally unrealistic deadline. Stage 2 certification will be a difficult and expensive process, but one that should bring commensurate improvements toward more efficient, safer, and better care delivery.

Pilot testing (see the definitions of the alpha and beta stages above) for both Stage 1 and Stage 2 requirements is, of course, extremely important. Many past pilots from very competent IT companies have ended in disaster even though the controlled testing results were positive. In this blog, I hope to help those of you developing, managing, participating in, and interpreting pilots avoid that kind of disaster. The key is to resist rolling out a pilot program designed solely to succeed.

A short time ago, I ran across a section of an article on pilot rollouts titled "Strategies for Learning from Failure," by Amy C. Edmondson. Read the whole article; it's a treat, and it is typical of her other research. There is a companion video interview with the author here.

The article closes with an interesting twist on pilot testing that's worth your attention (a quick sketch of the checklist in code follows the excerpt):

Designing Successful Failures

Perhaps unsurprisingly, pilot projects are usually designed to succeed rather than to produce intelligent failures—those that generate valuable information. To know if you’ve designed a genuinely useful pilot, consider whether your managers can answer yes to the following questions:

Is the pilot being tested under typical circumstances (rather than optimal conditions)?

Do the employees, customers, and resources represent the firm’s real operating environment?

Is the goal of the pilot to learn as much as possible (rather than to demonstrate the value of the proposed offering)?

Is the goal of learning well understood by all employees and managers?

Is it clear that compensation and performance reviews are not based on a successful outcome for the pilot?

Were explicit changes made as a result of the pilot test?
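To make the checklist easier to apply to a Meaningful Use beta, here is a minimal sketch in Python that encodes the six questions and flags any that cannot be answered "yes." Only the question wording comes from Edmondson's excerpt above; the function, the flagging rule, and the sample answers are illustrative assumptions of mine, not part of her article or any vendor's process.

```python
# Illustrative sketch: Edmondson's pilot-design questions as a review checklist.
# Only the question wording comes from the excerpt above; the function and the
# sample answers below are hypothetical, for illustration.

CHECKLIST = [
    "Is the pilot being tested under typical circumstances (rather than optimal conditions)?",
    "Do the employees, customers, and resources represent the firm's real operating environment?",
    "Is the goal of the pilot to learn as much as possible (rather than to demonstrate the value of the proposed offering)?",
    "Is the goal of learning well understood by all employees and managers?",
    "Is it clear that compensation and performance reviews are not based on a successful outcome for the pilot?",
    "Were explicit changes made as a result of the pilot test?",
]


def gaps(answers: dict) -> list:
    """Return the questions that were not answered 'yes'.

    An empty result suggests the pilot is designed to produce useful learning;
    any entries flag places where it may be designed merely to succeed.
    """
    return [q for q in CHECKLIST if not answers.get(q, False)]


if __name__ == "__main__":
    # Hypothetical beta site: realistic conditions, but incentives still tied
    # to a "successful" pilot outcome.
    answers = {q: True for q in CHECKLIST}
    answers[CHECKLIST[4]] = False
    for question in gaps(answers):
        print("Revisit:", question)
```

A beta program could run a review like this per site before go-live; anything flagged is a signal that the pilot is being staged to look good rather than to teach.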



Comments

IA:  Thanks for your comment. I appreciate that you focused on the heart of the beta testing process, which is learning and executing on what is being learned. I think you are right. The same skills in running a successful Scrum in terms of planning, execution and feedback are essential for a Meaningful Use beta. It amounts to using the guiding principle:


[Mostashari for ONC (link):] "the principles we're following: eye on the prize, feet on the ground, foster the market, and watch out for the little guy."


The "feet on the ground" is a reference to being grounded by reality, including learning from relevant experience.

Scrum is all about breaking the work up into focused, bite-sized pieces and sequencing them intelligently. That's challenging under the timing of the MU Stages. Scrum is about delivering on objectives within fixed constraints, and I don't think HITECH staging precludes that. Scrum provides more flexibility than waterfall for learning; it does not remove the challenging time constraint.
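To ground that point, here is a minimal, hypothetical sketch of how a Meaningful Use beta backlog could be re-prioritized as beta feedback arrives. The item names, the business-value numbers, and the priority rule are illustrative assumptions only, not any vendor's actual backlog or a prescription from the Scrum literature.

```python
# Minimal, hypothetical sketch: a Scrum-style backlog for an MU Stage 1 beta
# that is re-sorted as real-world learning (beta feedback) arrives.
# Item names, values, and the priority rule are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class BacklogItem:
    title: str
    business_value: int          # rough proxy for MU-measure impact
    beta_defects_found: int = 0  # feedback reported by the beta site

    @property
    def priority(self) -> int:
        # Weight beta findings heavily so new learning reorders the work.
        return self.business_value + 10 * self.beta_defects_found


backlog = [
    BacklogItem("Up-to-date problem list workflow", business_value=8),
    BacklogItem("CPOE measure capture and reporting", business_value=9),
    BacklogItem("Electronic copy of health information", business_value=6),
]

# New learning from the beta site: the problem-list workflow is causing rework.
backlog[0].beta_defects_found = 3

# The next sprint plans against the re-sorted backlog, not the original order.
for item in sorted(backlog, key=lambda i: i.priority, reverse=True):
    print(f"{item.priority:3d}  {item.title}")
```

The fixed MU deadlines still bound the release date; what a cadence like this buys you is the ability to let each beta's findings reorder the remaining work rather than locking the plan at the start.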


Ed Weaver: I liked the sports analogy. Very relevant. We recently watched "Miracle" (2004), a movie about the 1980 US Olympic hockey team, led by coach Herb Brooks. Winning the crucial game against the Russians was the result of more training, teamwork, and tenacity; more of each was planned and executed than by any US team in the prior 20 years.

Similarly, Meaningful Use is a long, focused march to address a set of objectives simultaneously. New learning has been critical to playing that game successfully. And, Ed, as you point out, working at this daily, for months, is essential, because winning the end-game demands a commitment unlike that of any prior beta testing. Unlike the sports analogy, however, this isn't a head-to-head competition. There will be multiple winners.

Joe,
Terrific post and a set of real, challenging issues being discussed.
 
In my experience, beta design is important and you do have to "fail small."  A good design recognizes that and deliberately captures (includes) the failure points, for exactly the reasons Dr Edmondson and you point out.   If you don't, the second recipient of the software will have a bear of a time and will likely fail.  As a result, your first beta will have failed to produce the learnings necessary to avert that outcome.
 

I agree with Insightful Anonymous, and especially with the comment on the need to "thoroughly take and test certified beta code (designed to meet Meaningful Use objectives)." Why? All suppliers are duty bound to ensure that their overall solutions are moved forward as a "whole product."

Achieving a successful beta process is like throwing a spectacular touchdown pass. Unfortunately, one touchdown does not win the game; poor defense and illegal plays can still mean that you lose.

Too many healthcare organizations have been motivated to focus on this one play, rather than coaching their organization to win. Winning should still be built around improving the quality, safety, and efficiency of patient care and reducing health disparities. Achieving ARRA compensation should still be considered a by-product of best practices.

Equally, as every coach knows, one bad pass does not lose the game!

IA, Thanks for your kind words.

The publishers and editors at Vendome / Healthcare Informatics have been really supportive and clear with me. They asked that I bring a vendor's perspective, without commercial spin, to these blogs.

As Farzad Mostashari shared in his interview with Mark Hagland on HCI last month, almost 200 vendors have certified their products. The next step in a healthy, hygienic software development and delivery process is beta testing. As disclosed in my post above, and as your comment validates, the stakes are incomparable to prior betas. The workflow impact alone of maintaining an up-to-date problem list is a case in point. So are the direct impacts on revenues and, often, unplanned costs. As is the social drama, with boards of directors routinely asking, but actually telling, CEOs: so, you are going after MU dollars in 2011?

Right now, the white-hot point is the beta test, combined with the public and open-government dialogue about how things are going.

As Dr Mostashari said but bears (sic) repeating, "It's going to be hard for everybody, but it's gratifying work, and we think we'll end up with a transformed healthcare system that delivers higher quality, greater patient safety, and increased efficiency."

The photo just felt appropriate!

Dr B,
Have you noticed that the Meaningful Use beta testing process you are describing is fundamentally a waterfall development process?

As the software and product industries have learned over the last thirty years, the competing Agile-with-Scrum development model has the potential to adapt to real-world learning on a monthly basis.

Scrum is capable of re-prioritizing the work backlog even daily as real-world learning takes place. In contrast, the "Final Rule" nature of HITECH staging restricts or impedes exactly the kind of learning that Dr Edmondson is advocating.

Dr B —
What an apropos and timely post. The criticality of being able to thoroughly take and test certified beta code (designed to meet Meaningful Use objectives) has, as you express, taken on an importance not (ever) seen to date! SO MUCH is riding on the outcome, given the timelines and the potential REAL dollars at stake.

The answer to every item on the design checklist should be a resounding "yes." This is needed to give vendors credibility with the ultimate end users: assurance that the systems will, in fact, help in the care of patients and positively impact patient outcomes.

Other feedback: I really wasn't sure of the connection between the photo of the Washington Monument and cherry blossoms and the blog post title. The title, of course, implied "bears," which to most is a "scary" concept.


Joe Bormel

Healthcare IT Consultant

@jbormel