The Data Center's Unspoken Four-Letter Word - And How to Remove It
“Do I hear ‘Jaws’ music?”
There is a problem lurking below the covers of the servers in your data center. It is a problem that has been around since computers were invented, though most thought the problem was vanquished long ago. Worse yet, the problem could cause your OPEX and CAPEX to explode and you won’t be able to stop it. What is this zombie menace from long ago that threatens to drain your budget faster than a pack of consultants with an open-ended contract from your CFO? Heat. More specifically, future heat and its impact on everything from workloads to real estate. Heat that will require you to change your entire approach to systems acquisition and operations. And the solution? It too might be a blast from the past.
Today’s servers are packaged with more components than ever before: I/O (networking) cards, more storage devices, more power-hungry memory technologies and more accelerators. Each addition carries a power cost. Intel processors alone have seen a 71 percent increase in thermal design power (TDP) over the last decade. Granted, there has been a 6X growth in the number of cores and a corresponding increase in performance. Technologies like virtualization and demanding workloads like analytics require better server performance – and more power. At the same time, everyone wants density: space-efficient systems concentrated into a smaller footprint. But all the power needed to drive all those cores generates greater heat, which means larger heat sinks on top of the CPUs. This, in turn, drives taller server chassis, which take up more room in the rack and reduce density.
Therefore, you are going to face higher electricity consumption from the systems themselves, plus higher air conditioning costs to push more cold air into them. According to ASHRAE, the heat load of a rack full of 2U two-socket servers will increase 67 percent between 2010 and 2020. Couple that with lower density in the rack, which means more racks and higher real estate costs.
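To make the stakes concrete, here is a back-of-envelope sketch of how a growing rack heat load turns into a growing cooling bill. Every figure in it (servers per rack, watts per server, cooling overhead, electricity price) is an illustrative assumption, not a measured value from the ASHRAE data.

```python
# Illustrative rack heat-load and cooling-cost arithmetic.
# All input numbers below are assumptions for the sake of example.

def rack_heat_kw(servers_per_rack: int, watts_per_server: float) -> float:
    """Total rack heat load in kW (essentially all server power ends up as heat)."""
    return servers_per_rack * watts_per_server / 1000.0

def annual_cooling_cost(heat_kw: float, cooling_overhead: float,
                        price_per_kwh: float) -> float:
    """Yearly cost of the cooling energy alone, where cooling_overhead is
    the extra kW the cooling plant draws per kW of IT heat removed."""
    hours_per_year = 8760
    return heat_kw * cooling_overhead * hours_per_year * price_per_kwh

# A hypothetical 42U rack of 2U two-socket servers, then the same rack
# after per-server draw grows ~67 percent (the trend cited above).
old_load = rack_heat_kw(21, 350)   # 7.35 kW
new_load = rack_heat_kw(21, 585)   # ~12.3 kW
extra = new_load - old_load
print(f"heat load: {old_load:.1f} kW -> {new_load:.1f} kW")
print(f"extra cooling cost/yr: ${annual_cooling_cost(extra, 0.5, 0.10):,.0f}")
```

The point of the sketch is the shape of the curve, not the exact dollars: every watt added to the rack is paid for twice, once to run the server and again to remove the heat it produces.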
So how does one solve the dilemma of power and heat vs. density? Data centers are increasingly looking at liquid cooling in some form or fashion. Whether that is direct liquid cooling to the actual system components or rear-door heat exchangers (RDHX), utilizing water is going to save you electricity.
Which Cooling Solution is Right for You?
IT teams are looking for space-efficient, less expensive ways to remove the heat created by their data center equipment. Several innovative technologies address these cooling challenges, but one rises above the rest – direct water cooling. But first, let’s take a look at the pros and cons of the other options.
Immersion Cooling
Immersion cooling, where systems are submerged in a bath of non-conductive liquid, can handle 100 percent of the heat load. However, this solution limits the components that can be used and makes replacing parts more complex. Worse, it consumes a large footprint in a place where the whole point is to use the space as efficiently as possible.
Rear Door Heat Exchangers
Rear door heat exchangers act like a giant radiator, capturing heat directly at the back of an air-cooled server and transferring it to water before it enters the room. These systems can easily handle a large heat load (over 30 kW per rack), but they require chilled water to run efficiently, which can add costs. The benefit of an RDHX solution is that you can use your existing infrastructure without requiring any specialty hardware.
Warm Water Cooling – the Ultimate Solution
Direct warm water cooling has many of the benefits of the above solutions without the downsides. With direct water cooling, water piping is paired with cold plates to transfer heat from the server directly to the water. This results in an energy efficient operation with little to no heat left for the data center to contend with.
Water cooling is nothing new – it was the industry’s primary cooling method until the advent of the x86 server, when air cooling became the standard and water fell by the wayside. Water cooling has made a resurgence of late, but this latest generation of the technology differs from its ancestors. In the old days, water cooling required cold water, so chillers were needed. Unlike those older solutions and rear door heat exchangers, the water in modern direct water cooling systems doesn’t need to be chilled, so it requires less energy. These systems can use intake water up to 50 degrees Celsius, making warm water cooling a viable option almost anywhere in the world.
Additionally, the newest systems have been designed so that more components can be cooled by water. In addition to the CPU and memory, the I/O and voltage regulation devices are now also water cooled, driving the share of heat transferred from the system to water above 90 percent. Problem solved!
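The practical consequence of that capture rate is how little heat is left for the room’s air handling. A minimal sketch, using an assumed 30 kW rack as the example:

```python
# Sketch: residual heat the room's air conditioning must still remove
# once direct water cooling captures a given fraction. The 30 kW rack
# power and the capture fractions are illustrative assumptions.

def residual_air_heat_kw(rack_power_kw: float, water_capture: float) -> float:
    """Heat (kW) left over for air cooling after water capture."""
    return rack_power_kw * (1.0 - water_capture)

rack_kw = 30.0  # hypothetical dense rack
for capture in (0.0, 0.70, 0.90):
    left = residual_air_heat_kw(rack_kw, capture)
    print(f"{capture:.0%} to water -> {left:.1f} kW to air")
```

At 90 percent capture, a rack that would otherwise dump 30 kW into the room leaves only about 3 kW for the air handlers – which is why the room-level air conditioning plant can shrink so dramatically.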
One example of how warm water cooling has helped an organization dramatically cut its electricity bill without sacrificing compute power is the supercomputing center Leibniz-Rechenzentrum (LRZ) in Munich, Germany. LRZ adopted a direct water cooling solution in 2012. Not only did the solution reduce its electric bill by 40 percent, the performance of the CPUs actually increased by 10-15 percent.
Unfortunately, not everything in the data center can be water-cooled, so our team at Lenovo and LRZ is taking alternative cooling to an entirely new level by converting the hot water “waste” into cold water that can be reused to cool the rest of the data center. Not only will this solve the heat problem caused by higher power density, it will actually generate more cold water than the data center can consume.
It’s no wonder that IT departments around the world are increasingly choosing direct warm water cooling to solve heat issues caused by higher power density in their data centers. To date, warm water cooling has mostly been deployed in high performance computing environments, but will likely make its way to regular commercial data centers in the future, given the benefits and savings it creates.