
How Much Does One Earth Second Really Matter?

February 4, 2012
When it comes to the little things, clinical informaticists have to make important judgment calls

The Urania-Weltzeituhr (world clock) in Berlin's Alexanderplatz




I have to admit that I was amused by an article I read in the Jan. 20 New York Times, whose headline read, “Decision About One Second Is Postponed for Three Years.” How intriguing a headline is that, anyway? <smile> Seriously, though, the article was about a decision facing delegates at the Jan. 19 meeting of the International Telecommunication Union (ITU) Radiocommunication Assembly, held in Geneva, Switzerland, over whether the “leap second” should continue to be added to the world’s atomic clocks or not.

Basically, the problem is this: since 1972, adjustments have regularly been made to global “Coordinated Universal Time” (UTC) in increments of one second every several years, in order to keep the world’s atomic clocks synchronized with the earth’s rotational cycle. Without such adjustments, some experts worry, “noon” on any given day would, by the year 2100, actually fall closer to 11:58 than to noon (so that’s where all that time wasted on that one unproductive project went!).
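For the curious, the scale of the drift is easy to sketch. The figures below are my own back-of-the-envelope assumptions, not from the article: roughly 24 leap seconds had been inserted between 1972 and early 2012, and the simplest (naive) projection just extends that average rate forward.

```python
# Back-of-the-envelope estimate of the drift the article describes.
# Assumption (mine, not the article's): about 24 leap seconds were
# inserted into UTC between 1972 and January 2012.

LEAP_SECONDS_1972_TO_2012 = 24
YEARS_ELAPSED = 2012 - 1972

# Average insertion rate, in seconds per year
avg_rate = LEAP_SECONDS_1972_TO_2012 / YEARS_ELAPSED

# If insertions stopped in 2012 and the earth kept slowing at the same
# average rate, atomic time would run ahead of solar time by roughly:
projected_drift = avg_rate * (2100 - 2012)  # in seconds

print(f"~{avg_rate:.2f} leap seconds/year on average")
print(f"~{projected_drift / 60:.1f} minutes of drift by 2100")
```

Note that this linear sketch lands under a minute; because tidal friction keeps slowing the earth's rotation, the insertion rate is expected to rise over time, which is how experts arrive at the larger, roughly two-minute figure the article cites.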

But here’s where all this rather abstract debate meets a concrete reality: “Opponents of leap seconds, led by the United States,” the Times article notes, “say the sporadic addition of these timekeeping hiccups is a potential nightmare for computer networks that depend on precise time to coordinate communications.” Interestingly, the delegates from the United Kingdom, nominally one of our staunchest allies in global politics, are on the other side in this “second” debate.

For now, the decision as to what to do with the next potential “leap second” or two has been postponed for at least three years, until 2015 at the earliest (I know you were already in deep suspense about this!).

OK, so the reason this debate hasn’t gotten more publicity is that, frankly, the world’s governments and peoples have more pressing concerns than whether or not they can afford to lose two minutes off the world's clocks in the next 90 years. But reading the article triggered some thoughts for me around the legitimate question of what’s important and what’s not in our own worlds. For example, take the development, implementation, and updating of evidence-based order sets within computerized physician order entry (CPOE) systems. In my years of reporting on this topic, I’ve heard clinical informaticists say over and over that one of the biggest challenges, perhaps ironically, is managing the plethora of what might at first glance sound like “little” issues—for example, which physicians to include in order set development committees; the degree to which those committees need to be multispecialty and multidisciplinary, depending on the type of order set involved; how many passes around various clinicians any order set needs before it is approved; and so on.

And those “little” decisions really do matter. What I’ve learned in writing on this subject is that the composite of these many decisions will help shape how successful a patient care organization ultimately is in creating an architecture for strong clinical decision support for its physicians. And that really is important.

Indeed, as readers will learn when they read my article on the development of advanced clinical decision support, one of our Top Ten Tech Trends in the March issue of Healthcare Informatics, industry experts and leaders in the trenches say that one of the reasons for the failure, or at least sub-optimization, of many first-generation CDS systems nationwide has been the time-pressed “slamming in” of CDS systems, under pressure to get EHRs implemented quickly and on schedule. What hospital and integrated health system leaders are discovering is that it’s simply not possible to “shotgun” in a meaningful CDS system successfully, and that means that CMIOs, VPs of clinical informatics, and other clinicians, supported by their CIOs, will inevitably have to make their strategic and tactical choices carefully and over a period of time.

If ever the old adage “the devil is in the details” applied to clinical informatics, it would have to be in the area of evidence-based order set development under the umbrella of CDS development overall. At least CMIOs won’t be responsible for having to try to save a couple of minutes of global time over the next 90 years!