Healthcare Informatics Magazine

At UPMC, Mastering the IT Management Issues—On a Massive Scale

June 22, 2016
by Mark Hagland
Chris Carmody, SVP of enterprise infrastructure at the vast UPMC health system in Pittsburgh, details his bold plans for the organization’s IT future

The UPMC Health System, based in Pittsburgh, is a vast, 20-plus-hospital integrated health system encompassing 60,000 employees and over 1,000 sites of care. The sheer size of the organization demands a level of IT governance and management far beyond what much smaller patient care organizations require.

Sitting at the nexus of a constant swarm of activity and innovation is Chris Carmody, senior vice president of enterprise infrastructure for UPMC. Carmody, who has been at UPMC for 18 years—the past three in his current role—is also president of the health system’s health information exchange, ClinicalConnect. In his SVP of enterprise infrastructure role, Carmody finds himself constantly in motion, helping CIO Ed McCallister to oversee complex IT operations on behalf of an Army-sized cadre of healthcare professionals.

Recently, Carmody welcomed HCI Editor-in-Chief Mark Hagland into his office in the health system’s corporate headquarters in the US Steel building, which dominates downtown Pittsburgh’s skyline, to discuss the challenges and opportunities that he and his colleagues are working through these days, as they push ahead into the future. Below are excerpts from that interview.

Can you tell me about your overall infrastructure strategy for the organization?

In my 20-year career in IT, this is probably the most transformative time ever, especially from an infrastructure perspective, because of the emergence of cloud computing and software as a service (SaaS). So being responsible for the infrastructure for such a large system is definitely a challenge. When I stepped into this role three years ago, things were still pretty fragmented. We've since taken on and absorbed into our environment all the server and data center aspects. So you have all these legacy applications and systems that we're taking on and can't just throw away; we have to manage them and store their data.

Chris Carmody

Right now, we are very much moving away from that traditional approach, and transitioning so that we can deliver IT as a service. We're working to eliminate the traditional stovepipes of IT—PC support, help desk, operating systems, DVAs. So you'll see the elimination of those stovepipes and the full implementation of hybrid cloud computing: a combination of public cloud computing and on-premises infrastructure.

Today, we have two data centers. Forbes Tower is our primary data center, and we have a smaller data center at Shadyside Hospital, 2.7 miles away [both in central Pittsburgh]; and they're on the same power grid. Because of the organic growth of UPMC, we had never addressed that shared-grid risk in the past. Twelve years ago, when we were looking at a data center, we decided to virtualize. So we went from 999 physical servers to 4,000 virtual servers across these two data centers. The problem is that the actual facilities are 20 years old (Forbes) and 37 years old (Shadyside). So part of our long-term planning is to look at a new data center, and we're in the very late stages of selecting a partner to build one.

Is that the same as having data center operations outsourced?

No, we won’t be outsourcing data center operations; instead, it will be a leasing type of situation. Either it will be a straight lease where we’ll lease the data center facility, or our data center will be co-located with the data centers of unrelated organizations, in the same facility. When such facilities are shared, you typically have your power infrastructure in the middle, with data halls. We would be one data hall.

So other completely unrelated organizations might be in the data halls?

Yes, or it might be a whole new building that we might lease. But we believe, given a hybrid cloud computing environment, that this should be the right strategy for us. There are still things we need to keep control over. The element that will consume the most power and space will be our network. With the Internet of things, the many devices—not just traditional computing devices, but medical devices and so forth—we’ve got a huge range of devices to manage.

And then you have this thing called the cloud, and this idea of shifting workloads. We came to the end of our ELA, enterprise licensing agreement, with Microsoft, and used that as an opportunity to change the licensing model. With 60,000 employees and countless licenses, we were probably double-paying for licenses. So we collapsed that into an IT programming governance structure. All those processes that should be best practices—we've collapsed the people and processes, and now we're streamlining to deliver IT as a service to the enterprise.

We're actually implementing a technology solution from Cherwell. Their platform will provide a service catalog to UPMC that will be our front door: if you need a landline phone, a cell phone, a PC, a laptop, an application, you'll go there. We'll build the automation and orchestration layer with different software components that will actually automate the build. And we're going to retire Shadyside, our secondary data center, and move that workload into the new data center, which will be on a different grid; that will be good from a disaster recovery standpoint. And when our Forbes Tower lease comes up… The first move will happen in about a year, to a site about 28 miles away in the Pittsburgh metro area.

And if the lease comes up on Forbes and it's cheaper and there's less risk, we might put that in the cloud. So we'll have a much smaller physical presence because of cloud technology.

So all of this is being governed by a broad strategy, correct?

Yes, that's correct: our strategy is really hybrid cloud computing. One of our first big moves is that we're migrating to Microsoft Office 365. This month, we will go live on it with our first big grouping of users, our IT department, which is about 1,500 users. We have four different connections: we're creating private connections to two different Microsoft data centers, for speed and redundancy. That's critical to UPMC. So we're going to be migrating all the mailboxes over to Office 365. And it will be a much cheaper solution: when our ELA ended with Microsoft, the cost to move to the cloud was millions of dollars less than running things on premises. Plus, it was about the capabilities and functionality to help our users. So that's happening now. We're constantly looking at and talking with vendors about SaaS solutions, and evaluating them. And three or four years from now, when we move into the new data center and ultimately let the leases for the old data centers expire, we'll end up with between 40 and 60 percent of our data in the cloud. We just did an agreement with our HR vendor, and it's about a three-year process to move to the cloud.

What about your EHR [electronic health record] hosting?

With Cerner, we haven’t yet decided if and when we’ll move to Cerner’s cloud for the EHR, but that’s a two-year process just to migrate the physical solution to their cloud. We have to run operations 24/7 here. It’s like flying an airplane and changing out the engine while in flight. So it has to be planned well, executed well, validated, before you’re ready to turn it over to use.

And we've used Alcatel-Lucent for networking; now it's called Nokia. We can move things quickly across the network. We have about 1,000 physical locations, so we use the same type of gear that AT&T and Verizon use to support their networks. And that needs to be upgraded over the next few years. What's more, we did an RFP to look at access-level switching: the switches that people's devices plug into on their floors. We need to upgrade that environment as well, over the next 4-5 years. Replacing 4,000 switches across 1,000 sites takes time, but it gives us the scalability and flexibility that we need.

Towards a “frictionless experience” for end-users

We had five RFPs in the past year. The first was for the data center facility itself; the second was for the access-layer switching for our network. Keep in mind that there are three layers of network—access layer, distribution layer, and core network—and that the access layer involves end-user access, both wired and wireless. The third was for Cherwell, the service catalog solution; it replaces a product we had for 12 years, and we're retiring the old product next month. We want our end-users to have a frictionless experience; we don't want you to need to go through five people to get a new PC. If your PC is at the end of its life, we want to go out and replace it proactively. The fourth was for our end-user devices. We have about 105,000 devices—cell phones, tablets, PCs, desktops, laptops, all over the place. And quite honestly, there were two reasons why that was part of this process.

This is a very dynamic and changing environment, and yet in healthcare, we seem to be constrained in our ability to change. We had gone through a third party in the past, and saw a huge cost-savings opportunity, so we went through the RFP process and selected Dell as a partner. We're saving a significant amount of money per device, and Dell has committed to work with us to figure out workflows and how we can change that whole end-user computing device situation. If you went into a physician clinic, you might see a bunch of desktop computers, then the computers on wheels, and someone walking around with a tablet device. So a lot of different devices have been invested in.

You’re going to try to be systematic?

Yes, that’s right. We just completed the Dell agreement in the last month or two, and we’ll be starting to roll things out around it later this summer.

And what was the fifth RFP for?

The fifth was around information security. We've traditionally spent different monies with different vendors, so we wanted to look at our entire spend from an IT security standpoint and see what we could do with the same funding. Our budget isn't increasing. We definitely have access to more funds, being a large system, but we need to use that money wisely. And the information security landscape is very dynamic and changing. To me, ransomware is the latest fad, to be honest. We, as the folks providing the data and managing the assets, have to try to maintain security, at least mitigate any possible event, and look at things from a very broad standpoint. And from that perspective, I like to be proactive in the security space. So what we did is bundle all our needs together: some things we were already doing, and some we saw that we needed to do. We ended up creating an agreement with Dell SecureWorks, the industry leader from an information security perspective. Their SOC [security operations center] will go live within a month. Currently, we use Splunk, a technology that ingests five terabytes of data every day. The SOC will work down the list and investigate the data we're getting through Splunk. They'll work with us on that.

Are you the chief information security officer for the organization as well?

No, the person with that title, John Houston, reports to me. And he's great. Some of the changes we've made over the past few years have been about making security not just an IT issue, but a business issue. So we've engaged different stakeholders from HR, finance, and some of our clinical departments. And we've had great support from our board. Obviously, every board wants to avoid being in the newspapers for a ransomware incident. And we've been doing tabletop exercises—mock events—around ransomware and other issues, for a few years now.

What have the biggest lessons learned in the last year or so been for you and your colleagues as you’ve been moving forward in all these areas?

Mostly, it's been reinforcement of what we already knew: it's not just about the technology, it's about the people and the processes. One of the biggest and most exciting opportunities I have as an IT leader here at UPMC is to help people and get them excited about the future, about developing new skill sets, and so forth. You can't just make investments in technology and not work through the processes. We like to pick and choose what works best from a process standpoint.

What do you see happening in the next few years?

We've always been focused on internal processes and operations; we've been so focused on the internal aspects of IT. But the focus is increasingly going to turn external, towards patients and consumers. I think the whole consumerism movement will change the model, because care management is changing things. And the whole genomics space, and with it the analytics around genomics, will change everything.

And the analytics piece will change how we treat and deal with people, and will help medicine become more precise for you and me. The population health piece will change things on a broad level, and the genomics and analytics piece will personalize things at the individual level. So this is a time of transformational change.

You seem optimistic overall, correct?

I’m absolutely optimistic. If I weren’t optimistic, I wouldn’t be in this job.

Historically, IT in healthcare in particular has been managed in a rather custodial way. Yet now, it seems the approach needs to change to an enterprising one, as you've been helping to lead here at UPMC.

Yes, that's correct. And we're lucky: our CEO is such a visionary, and the board has been incredibly supportive. So I think it's more of a cultural thing at UPMC. It may stem from being an academic medical center: we're always after the latest and greatest. And so, as custodians of how we deliver healthcare, we can implement change, we can change the model, and I think that's what we're in the midst of.

 

 

