Competitive pressures in today’s healthcare market are taking their toll on IT departments. Health systems are being gobbled up in an eat-or-be-eaten environment that results in ongoing data center consolidation challenges. New mandates such as electronic medical record adoption and online consumer health-management offerings require high availability services to be spun up rapidly. And overall budgets are tight, which means IT needs to accomplish more with fewer resources.
Many organizations, both within and outside of healthcare, looked to server virtualization to help solve these challenges. But over time, virtualization brought its own problems, including rampant sprawl, exploding licensing costs, and growing needs for agility that virtual machines (VMs) can’t meet. Containerization is the next logical approach, but new container technologies are getting more attention in the Linux world than in the world of enterprise-level Microsoft Windows data centers. That’s a mistake that healthcare CIOs can’t afford.
Asante—a not-for-profit regional healthcare system in southern Oregon—has been using container technology throughout its data centers since the mid-2000s to better manage our enterprise-critical SQL Server workloads. When I joined Asante in 1999, we had one hospital and a dozen outlying clinics, with an IT staff of about 20 people managing 30 servers. Today, we have more than 700 servers spread across three hospitals and more than 50 outlying clinics, all managed by an IT group of about 180 people. With that kind of growth, we had to implement innovative approaches to control SQL Server and OS sprawl, and to gain the flexibility to spin up services more quickly and manage them more efficiently. Containerization has helped us do all that. And in the process, Asante is saving hundreds of thousands of dollars in hardware, software licensing, and support costs—money that can be better applied to services aimed at positively impacting patients’ lives. It’s an approach that other healthcare data centers should be considering.
Virtualization Can Only Take You So Far
In the mid-2000s, we began virtualizing much of Asante’s data center in order to consolidate, improve mobility, and achieve higher availability for business-critical applications, as well as gain better use of IT resources and engineering time. We started with smaller projects and low-hanging fruit, virtualizing as much of the data center as made sense. That kept operating costs down by reducing the lifecycle churn of refreshing physical server hardware every four to five years.
But while we saw improvements, there were still a number of workloads we couldn’t collapse. These were increasingly business-critical SQL Server applications that required multiple servers for high availability and were too large to be virtualized. With nearly 200 line-of-business applications, we continued to see an explosion of physical and virtual SQL Servers throughout the organization.
In addition, new projects were growing in size and complexity, requiring multiple servers for databases and applications, as well as web and interface servers. These included small to mid-size databases that arrived from vendors as overbuilt solutions, as well as very large, transaction-heavy databases. Even though not all of these databases ran anywhere near capacity, many vendors’ philosophy (especially for healthcare applications) was to provide the biggest and best configuration for future growth and high availability, which usually meant large clusters of servers. Our concern was that clusters are designed to protect against hardware failure: if a single software component fails and the other elements of the system don’t detect it, the application still goes down. To avoid the complexity and restrictions of Microsoft clusters, we looked for an application-focused alternative to virtualization for I/O-intensive SQL Server workloads, one that didn’t involve spinning up large hardware projects with multiple hypervisor nodes and the related overhead.
We also knew that we couldn’t afford all of the special provisions needed for SQL Server to run well under VMware. I/O contention meant dedicating one or more costly ESX hosts to SQL workloads, and without building specialized VM farms we ended up either wasting space on the wrong tier of storage or pushing too much I/O into a single VM. We needed an approach that would let us break apart vendor-recommended solutions much as we were doing with VMware, but that also provided a viable alternative to clusters for SQL Server applications.
Containers Provide the Logical Next Step
Our early experience with containers came in the mid-2000s. We were intrigued by containers’ ability to encapsulate an application instance and its associated resources—but not the entire OS—to decouple them from the host OS instance and IT infrastructure. That is a very different approach from VMs, which encapsulate the application, its required binaries and libraries, and an entire guest OS. It’s this last requirement that causes VMs to sprawl and become so expensive to manage over time. Container software lets us stack multiple containers on a single VM or physical Windows host without continuing to duplicate operating systems, lowering administrative overhead, overall compute, and licensing costs across the enterprise.
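The stacking idea can be made concrete with a generic, modern example. Asante’s production tooling is DH2i DxEnterprise on Windows, whose commands and configuration differ, but the same principle—multiple isolated SQL Server instances sharing a single host OS—can be sketched with a Docker Compose file. The service names, image tag, ports, and credentials below are purely illustrative:

```yaml
# Hypothetical sketch: two SQL Server instances stacked on one host OS.
# This is a generic analogy for instance stacking, not Asante's actual
# DxEnterprise configuration.
services:
  sql-billing:
    image: mcr.microsoft.com/mssql/server:2022-latest
    environment:
      ACCEPT_EULA: "Y"
      MSSQL_SA_PASSWORD: "Example!Passw0rd"   # placeholder credential
    ports:
      - "1433:1433"
  sql-reporting:
    image: mcr.microsoft.com/mssql/server:2022-latest
    environment:
      ACCEPT_EULA: "Y"
      MSSQL_SA_PASSWORD: "Example!Passw0rd"   # placeholder credential
    ports:
      - "1434:1433"   # second instance on the same host, different port
```

Both instances share one kernel and one OS image; only the SQL Server processes and their data are isolated, which is exactly what eliminates the per-instance guest OS that makes VMs sprawl.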
That early container technology eventually went end-of-life, but it let us see the promise of safely stacking instances to get more out of our hardware. More recently, we’ve been using DxEnterprise from DH2i. As with virtualization, we started our containerization effort with low-hanging fruit, such as applications data-mining multiple SQL Server instances on disparate servers, which could be collapsed into a better high-availability framework. Ultimately, we found that outside of limited situations that require applications to run in a separate VM (such as those with strict vendor requirements or FDA solutions), Asante has been able to consolidate 15 to 20 instances per server across the data center—which corresponds to a 15X to 20X reduction in OS count.
Containerization does a much better job than virtualization of running high-end workloads right where we need them, with all the management capabilities built in. And because containers are by nature extremely portable, we can move them from one host to another in seconds with almost no application downtime. Also unlike VMs, containers can be moved between hosts of different types (e.g., from a VM running Windows Server 2008 R2 to a VM running Windows Server 2012), which makes configuring and maintaining a high availability environment much simpler. The container decouples SQL Server instances from the infrastructure, whether it’s physical or virtual. That means we can place a database in a lightweight container on our SQL Server farm and immediately have high availability for our internal customers. Built-in management tools make step-and-repeat on a large scale much easier than any other approach, and the container puts the workload right next to the disk where it needs to be, rather than separated by unnecessary virtualization layers. The mobility that containers provide also helps us improve overall management, reduces operational and lifecycle headaches such as patching, and gives us infrastructure independence. Whatever comes next, we’re not locked into any single technology.
Today’s Challenges Are Different
When we started with containers, our goal was to spin up services more quickly, manage them more efficiently, and reduce SQL Server sprawl with an agile infrastructure that would let us provision and scale servers as quickly as possible. Now that we have data centers in multiple locations, we have also been able to take advantage of containerization’s disaster recovery capabilities, including easy multi-subnet failover on any infrastructure that can be managed as an extension of the data center using the same tools—even for SQL Server instances that don’t natively support multi-subnet failover.
But while all those advantages are important, other things have changed over time. In the past, we considered licensing costs simply the price of running SQL Server within the organization. Back in the days of SQL Server’s CPU-based licensing model, licensing cost roughly a quarter of what it does today, so we simply bought a SQL Server license, put it on the biggest server that made sense, and charged it back to the project without thinking about the snowball effect. When Microsoft changed to a core-based licensing model, we realized that without consolidating we would have to pay hundreds of thousands of dollars this year alone to true up our existing licenses. With containers, we consolidated to the point that we only had to pay about $20,000 in true-up costs, while maintaining full license compliance. Because of the added mobility that containers provide (without requiring SQL Server Enterprise edition), we’re also saving the monthly cost of Enterprise Agreement support, which increases incrementally as a percentage of licensing costs and comes up for renewal every two years. Those recurring SQL Server costs typically aren’t passed along to internal departments, so that’s a substantial additional savings for the IT department. Today, when a project would otherwise require two physical SQL Server Enterprise servers, we save money by instead building a single server layered with container software, to the tune of hundreds of thousands of dollars in fixed and recurring SQL Server costs.
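The consolidation arithmetic behind those savings can be sketched in a few lines. The per-core price, cores per host, and instance counts below are hypothetical placeholders for illustration, not Asante’s actual figures; under core-based licensing, every licensed host multiplies cores by price, so shrinking the host count dominates the outcome:

```python
# Hypothetical illustration of core-based licensing savings from
# consolidation. All prices and counts are made-up placeholders.
PRICE_PER_CORE = 3_500   # assumed cost per core license
CORES_PER_HOST = 16      # assumed identical host size

def license_cost(num_hosts: int) -> int:
    """Total core-license cost for a fleet of identically sized hosts."""
    return num_hosts * CORES_PER_HOST * PRICE_PER_CORE

# Before: one licensed OS/host per SQL Server instance.
instances = 60
before = license_cost(instances)          # 3,360,000

# After: ~15 instances stacked per container host.
hosts_after = -(-instances // 15)         # ceiling division -> 4 hosts
after = license_cost(hosts_after)         # 224,000

print(f"before: ${before:,}  after: ${after:,}  saved: ${before - after:,}")
```

With these placeholder numbers, a 15X reduction in licensed hosts cuts the fleet-wide license bill by the same factor, which is the same shape of saving the true-up story above describes.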
Like many healthcare IT organizations, Asante needs both technical and budget flexibility to deliver infrastructure and spin up new services quickly and effectively. In our case, new IT initiatives are allowing Asante to offer electronic medical record (EMR) services to hospitals outside of our network, opening up new revenue and business opportunities. In the eat-or-be-eaten healthcare industry, containerization is positioning Asante to eat rather than be gobbled up.