It’s rare that a company occupies a truly unique position, but that seems to be the case with Orem, Utah-based KLAS. The organization is one of the most respected sources of information in the healthcare IT industry, producing reports that CIOs pay major bucks to possess. In fact, a number of CIOs have told HCI that KLAS reports are one of the main tools they use when starting down the path of vendor selection, at the very least, giving them a baseline of who’s who in the market. But nobody’s perfect, and KLAS has had some rocks thrown its way. In the following interview, HCI Editor-in-Chief Anthony Guerra talks with KLAS President Kent Gale about all of the above, and more.
AG: Why do you think your model, of getting feedback from users, as opposed to doing your own evaluations, has resonated so well?
KG: First off, this is not a black and white measurement. You know, we are not measuring tolerances here. What we're measuring is individual expectations being met or not being met, and this is typically by the business owner. So it wouldn't be the registrar, the person sitting at the desk registering patients; it would be the manager of the registration area that we typically talk to. So we don't want to know specifically if every little feature functions exactly how they want. We want to know if it's meeting the business need: are there functions missing, or are there other exceptionally good functions they have, and what are those? Typically, our goal is to talk to the business owner, and that could be anyone from the CEO of the hospital or IDN all the way down to the manager of registration services.
So the theory is to talk to the people who best know if their expectations are being met. We gather anecdotal information and then we have them answer 40 symmetric questions that allow us to aggregate the data and report it back to the same people who gave it to us. Then they can look at the data. The key difference here is, we have a hard and fast rule that everything we do has to be to the benefit of the provider community.
We have huge pressure from the vendors, from consulting firms, from investment bankers to sort things out so that it makes more sense to them, but not as much sense to the provider. So we always lean to the provider’s side; if we’re going to do anything, it has to benefit the provider. And if we don’t use that as the foundational rule, it can be very easy to get off target and start reporting data that makes the vendor look better, but doesn’t serve the provider’s needs. I mean, a vendor may say, ‘Gee, we’re really good at 500 beds and up, and you ought to measure it that way,’ but most providers don’t necessarily care that that’s a break point.
There are just different ways you can slice data, and we get pressure sometimes to move one way or the other. We also have pressure from vendors to take their name out of the pool; they don't want to be measured. We've been threatened many times by vendors who say, 'Hey, we don't want you to measure us,' and we go ahead and do it anyway.
AG: Tell me about some ways you deal with that pressure. How do you push back?
KG: For example, there are times when vendors will threaten to sue us, and our approach is, well, we'll give you a couple of hours to talk you through our methodology, and we'll try to help you understand. But the minute we become so nervous about this that we spend all our time defending ourselves, we've taken our eye off what's best for the provider. We say, 'Here's how much time we're willing to spend helping you understand this; if you don't understand it at the end of this period, go and sue,' but we're going to keep doing our job, because we can't be diverted from our job trying to satisfy the vendor's appetite. So we have to be very careful to not let the vendors push us around.
AG: I would imagine some vendors are very aggressive with you, while others don’t do anything. Is that true?
KG: What happens is the larger vendors that really could have an impact on us with pressure realize that it blows up in their face, because now all of a sudden KLAS starts reporting that this vendor is pushing us around. They probably know that's not a good idea.
And we have to be meticulous. We have multiple levels of quality checks. In fact, I was just on the phone before I talked to you with a large organization, and they said, 'How do you keep this from being an American Idol contest, in which you just get a whole bunch of a vendor's best customers calling you and trying to give you a positive story?'
First off, most providers are much smarter than that; they're not interested in making a poor-performing vendor look good. Most of the professionals in this industry have good integrity, and they'll report what really is performing. Plus, KLAS interviews everyone that gives us data and finds out what position they have: is this a legitimate measurement, have you had good experience with the vendor, and, by the way, why are you calling us, why are you doing this? For the data?
AG: There are only two knocks that I’ve seen on your business. One would be that your methodology is somewhat murky and people aren’t really sure how you’re coming up with the numbers. The other would be that the system can be gamed. How would you respond to those charges?
KG: Let's start with methodology for a second. In a perfect world, we would interview every customer of every vendor, and the right people inside that customer base, and, assuming we did that, statistically it still wouldn't be valid. The reason is that the way customers use a product, and how deeply they use it, varies. Are the nurses using it for vitals, or are they using it for automated care plans, or are they using it for downloading from patient monitors? So the variable use of products makes it so that statistically it's impossible to get a measurement that is a real metric.
So what we actually do is, if we have a pool, for example 100-plus interviews with Cerner customers on PowerChart, that 100-plus interviews is actually a valid statistical measurement for those 100. That's an actual black and white measurement for those people. And so we state, 'Hey, look at the pool, look at how big it is, look at the check marks.' This is a data point for you to look at, and it should trigger good questions in terms of other research that you want to do, but it is not a definitive black and white measurement where one number actually reflects a set of opinions. Be aware of how the check marks work: three check marks means you've got at least 30 organizations that have rated the product, and that is as many check marks as we put. Even though we may interview 100, we're still only going to go up to three check marks, which means a reasonable sample size.
So we're trying to say to people: what we actually give you is a valid measurement for those specific people we talked to. But if we talked to another 30, it may move a certain direction; now you're talking about statistical variance and margin of error, and it's impossible to put a reasonable number on that. Take Siemens Soarian, which only has 20 customers, for example; if we interview 18 of those 20, that's a pretty good sample. But what if someone else has 1,000 customers and we interview 30 of them? There's no real, honest comparison between the vendors, because the sample sizes and the number of customers they have are so different. You'll hear some vendors argue, 'Gee, I only have 18 customers. It's really hard to get your first 18 live, so you can't expect the score to be very high.' Then you'll have other vendors that say, 'Gee, it's easy to keep only 18 happy,' and so you can see arguments on both sides of why a number would not be useful.
So now let's talk about gaming the system. Most providers that we talk to are honest, good people, and they're not into gaming anything. It's impossible for a vendor to pay someone to score high, because the vendor never knows what they put in as a score to KLAS, and they never know if it gets through KLAS and into our database. So a vendor can't tell what a particular provider scored him, unless the provider actually has the vendor sit next to him while he does it.
So we have had many vendors who have encouraged their customers: 'If you can't rate us higher than a seven, don't go in and rate us in KLAS; talk to us first until you can rate us higher than a seven.' There's an e-mail that went out from one vendor to thousands of their customers; well, the customers sent it on to us. When a vendor tries that, it blows up, because many of those providers have become very loyal to KLAS and they don't want it to be gamed, so they may at least start passing back to us what is going on.
AG: Were you tempted to take that public?
KG: Oh, we always take it public. The next time I do a presentation, I take the e-mail, put it up and show it to everybody in the room.
AG: But you don't want to tell me right now who the vendor was?
KG: Well, I can’t remember that one.
AG: Okay, you did at the time and you have no problem making that kind of information public?
KG: Right absolutely. We’ve had many, many of those by the way, attempts to try to get the score up through that kind of mechanism. One of the vendors went to their very best customers and encouraged them to come to KLAS and give us their ratings and I thought that was really great because I got to watch and interview the people as they came in, but what the vendor didn’t realize was that their customers weren’t rating them that high. I was just sitting there thinking this was awesome, these people are honest, honorable people, and they’re rating the vendor the way they see it. We watched that, we were fascinated over time watching that happen.
Can you game the system? Yes, temporarily. There are vendors that have been able to initially get some of their best customers to come in, and we don't catch it for several months. Then we catch it, and we quarantine that data until we can do a statistical sample that we compare with the original one to see if they match.
AG: That seems so counterproductive, spending energy to game a report rather than improving products and services. They’re almost hiding from themselves the truth of the problems rather than fixing them.