
Containers in the Data Center Help Healthcare Organizations Strengthen Competitive Position

October 16, 2015
by Michael York, senior systems engineer, IT services, Asante

Competitive pressures in today’s healthcare market are taking their toll on IT departments. Health systems are being gobbled up in an eat-or-be-eaten environment that results in ongoing data center consolidation challenges. New mandates such as electronic medical record adoption and online consumer health-management offerings require high-availability services to be spun up rapidly. And overall budgets are tight, which means IT needs to accomplish more with fewer resources.

Many organizations (both within and outside of healthcare) looked to server virtualization to help solve these challenges. But over time, virtualization brought its own problems, including rampant sprawl, exploding licensing costs, and increasing needs for agility that virtual machines (VMs) can’t meet. Containerization is the next logical approach, but new container technologies are getting more attention in the Linux world than in the world of enterprise-level Microsoft Windows data centers. That’s a mistake that healthcare CIOs can’t afford.

Asante—a not-for-profit regional healthcare system in southern Oregon—has been using container technology throughout its data centers since the mid-2000s to better manage our enterprise-critical SQL Server workloads. When I joined Asante in 1999, we had one hospital and a dozen outlying clinics, with an IT staff of about 20 people managing 30 servers. Today, we have more than 700 servers spread across three hospitals and more than 50 outlying clinics, all managed by an IT group of about 180 people. With that kind of growth, we had to implement innovative approaches to control SQL Server and OS sprawl, and to gain the flexibility to spin up services more quickly and manage them more efficiently. Containerization has helped us do all that. And in the process, Asante is saving hundreds of thousands of dollars in hardware, software licensing, and support costs—money that can be better applied to services aimed at positively impacting patients’ lives. It’s an approach that other healthcare data centers should be considering.

Virtualization Can Only Take You So Far

In the mid-2000s, we began virtualizing much of Asante’s data center in order to consolidate, improve mobility, and achieve higher availability for business-critical applications, as well as to make better use of IT resources and engineering time. We started with smaller projects and low-hanging fruit, virtualizing as much of the data center as made sense, which helped keep operating costs down by reducing the ongoing lifecycle burden of refreshing physical server hardware every four to five years.

But while we saw improvements, there were still a number of workloads we couldn’t collapse. These were increasingly business-critical SQL Server applications that required multiple servers for high availability and were too large to be virtualized. With nearly 200 line-of-business applications, we continued to see an explosion of physical and virtual SQL Servers throughout the organization.

In addition, new projects were increasing in size and complexity, requiring multiple servers for databases and applications, as well as web and interface servers. These included small to mid-size databases that came in from vendors as overbuilt solutions, as well as very large, transaction-heavy databases. Even though not all of these databases ran anywhere near full capacity, many vendors’ philosophy—especially for healthcare applications—was to provide the biggest and best for future growth and high availability, which usually meant large clusters of servers. Our concern was that clusters are designed to protect against hardware failure. If a single software component fails and the other elements of the system don’t know about it, the system still fails. To avoid the complexity and restrictions of Microsoft clusters, we looked for an application-focused alternative to virtualization for I/O-intensive SQL Server workloads that didn’t involve spinning up large hardware projects with multiple hypervisor nodes and related overhead.

We also knew that we couldn’t afford to make all of the special provisions needed for SQL Server to exist within VMware. I/O contention meant dedicating one or more costly ESX hosts to run SQL workloads. Without building specialized VM farms, we ended up either wasting space on the wrong tier of storage or packing too much I/O workload into a single VM. We needed an approach that would let us break apart vendor-recommended solutions similarly to what we were doing with VMware, but that provided a viable alternative to clusters for SQL Server applications.

Containers Provide the Logical Next Step

Our early experience with containers was in the mid-2000s. We were intrigued by containers’ ability to encapsulate an application instance and its associated resources—but not the entire OS—to decouple them from the host OS instance and IT infrastructure. That was a very different approach from VMs, which encapsulate the application, its required binaries and libraries, and also an entire guest OS. It’s this last requirement that causes VMs to sprawl and to become so expensive to manage over time. Container software lets us stack multiple containers on a single VM or physical Windows host without continuing to duplicate operating systems, thus lowering administrative overhead, overall compute, and licensing costs across the enterprise.
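To make the stacking idea concrete: today’s Docker-era tooling offers a rough analogue of what this looks like, though it is not the container software Asante used in the mid-2000s (which predates Docker). The sketch below is purely illustrative—the service names are hypothetical, and only the Microsoft SQL Server container image is a real artifact—showing two independent SQL Server instances sharing one host OS kernel, each reachable on its own port, with no guest OS per instance.

```yaml
# Illustrative Docker Compose sketch (not Asante's actual tooling):
# two SQL Server instances run as containers on one host, sharing the
# host kernel instead of each carrying a full guest OS as a VM would.
services:
  sql-billing:                 # hypothetical workload name
    image: mcr.microsoft.com/mssql/server:2022-latest
    environment:
      ACCEPT_EULA: "Y"
      MSSQL_SA_PASSWORD: "ChangeMe_Str0ng!"   # placeholder credential
    ports:
      - "14331:1433"           # each instance gets its own host port
  sql-ehr:                     # hypothetical workload name
    image: mcr.microsoft.com/mssql/server:2022-latest
    environment:
      ACCEPT_EULA: "Y"
      MSSQL_SA_PASSWORD: "ChangeMe_Str0ng!"   # placeholder credential
    ports:
      - "14332:1433"
```

The point of the sketch is the ratio: two database instances, one operating system to patch and license, versus two full guest operating systems under a VM approach.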

Containers indeed offer a number of advantages for SQL Server operations, including automation for development and test environments, fewer VM instances, and reduced licensing costs.