Why air-cooled chillers still make more sense (and cents) for hyperscale and growing data center facilities. 

Recent research suggests that, pre-COVID, data centers accounted for approximately 1% of the world’s electricity use, a number that has surely risen in the months since. That energy powers everything from uninterruptible power supplies (UPSs) down to the blinking lights on operating storage drives and, of course, the equipment needed to cool it all down.

Data center deployments continue to grow in size and power density, driven in part by power-hungry cloud platforms, artificial intelligence (AI) solutions, and the Internet of Things (IoT), among other emerging technologies. The greater computing demand has led to increased use of high-performance graphics processing units (GPUs) and central processing units (CPUs) that, collectively, can produce as much as 350% more heat than previous generations.

In recent years, direct-to-chip liquid cooling and immersion cooling have become increasingly popular choices for heat removal because of their efficiency, relative ease of deployment, and scalability. While we believe these trends are “cool,” STACK INFRASTRUCTURE made a conscious decision to employ air-cooled chillers in many of our Basis of Design data centers.

The unexpectedly high cost of efficiency 

Even though air is highly effective at providing traditional cooling, water is still a much more efficient heat transfer medium. This comes down to a simple comparison of the specific heats of the two. You may remember from chemistry class that a watt is a unit of power equal to 1 joule per second. The specific heat of air is 1.0 kJ/(kg·°C), while the specific heat of water is 4.2 kJ/(kg·°C). In layman’s terms, it takes roughly four times more energy to heat a kilogram of water by 1°C than a kilogram of air. Moreover, when a substance changes phase from liquid to gas, it must absorb thermal energy to rearrange its molecules. This is called the latent heat of vaporization, and it is the true source of efficiency in evaporative cooling.
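
To make that concrete, here is a rough back-of-envelope comparison using textbook property values; the latent heat figure is approximate and varies with temperature:

```python
# Back-of-envelope comparison of heat absorption (textbook property values).
CP_AIR = 1.0        # specific heat of air, kJ/(kg*degC)
CP_WATER = 4.2      # specific heat of water, kJ/(kg*degC)
H_VAP_WATER = 2257  # latent heat of vaporization of water, kJ/kg (approximate)

delta_t = 1.0  # temperature rise in degrees C

energy_air = CP_AIR * delta_t      # kJ to warm 1 kg of air by 1 degC
energy_water = CP_WATER * delta_t  # kJ to warm 1 kg of water by 1 degC

print(f"1 kg of air absorbs   {energy_air:.1f} kJ per 1 degC rise")
print(f"1 kg of water absorbs {energy_water:.1f} kJ per 1 degC rise "
      f"({energy_water / energy_air:.1f}x as much)")
print(f"Evaporating 1 kg of water absorbs ~{H_VAP_WATER} kJ, roughly "
      f"{H_VAP_WATER / energy_water:.0f}x the sensible heat of a 1 degC rise")
```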

But there’s a catch. Or, rather, several catches.  

Much like the way critics of electric vehicles (EVs) cite the resource-intensive process of building EV batteries, evaporative cooling systems come with their own trade-offs that affect a data center’s ability to operate.

Physical size considerations 

Simply put, cooling towers are large and water is heavy (8.33 lbs per gallon, to be exact). A cooling tower capable of serving roughly 3MW of IT load (cooling capacity is measured in “tons of refrigeration”) can weigh upwards of 50,000 lbs. Supporting that kind of weight and all the associated piping on a roof can be an expensive and complicated venture. Conversely, putting it on the ground raises concerns about the discharge air plume and its health hazards, the most publicized being Legionnaires’ disease.
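
For a sense of scale, here is a quick sketch of the arithmetic behind those figures; the water volume held in the basin and piping is an assumed, illustrative number rather than a design value:

```python
# Rough arithmetic behind the size and weight claims above (approximate values).
KW_PER_TON = 3.517     # 1 ton of refrigeration rejects about 3.517 kW of heat
LBS_PER_GALLON = 8.33  # weight of water

it_load_kw = 3_000  # ~3 MW of IT load
tons_of_refrigeration = it_load_kw / KW_PER_TON
print(f"{it_load_kw / 1000:.0f} MW of IT load is roughly "
      f"{tons_of_refrigeration:,.0f} tons of refrigeration")

# Assumed water inventory in the basin and piping, to show how quickly weight adds up.
water_volume_gal = 5_000
water_weight_lbs = water_volume_gal * LBS_PER_GALLON
print(f"{water_volume_gal:,} gallons of water alone weighs about "
      f"{water_weight_lbs:,.0f} lbs")
```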

Strain on utility companies 

Water is a precious resource, becoming scarcer and more valuable by the day as studies suggest global water usage will increase by as much as 30% in the next few decades. Hyperscale data center campuses as large as 500MW in capacity can use hundreds of thousands or even millions of gallons of water every day, which puts a tremendous strain on the municipal water and treatment plants that must reliably supply it.
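
A simplified physics-based estimate shows why the daily totals climb into the millions of gallons. The sketch below assumes the campus runs at full load and rejects all of its heat through evaporation, which overstates real-world usage but captures the order of magnitude:

```python
# Simplified estimate of evaporative water consumption (approximate values).
H_VAP = 2430           # latent heat of vaporization near tower temperatures, kJ/kg
KG_PER_GALLON = 3.785  # mass of one gallon of water, kg
SECONDS_PER_DAY = 86_400

campus_load_kw = 500_000  # a 500MW campus, assumed at full load with all heat
                          # rejected through evaporation (a simplification)

evap_kg_per_s = campus_load_kw / H_VAP
evap_gal_per_day = evap_kg_per_s * SECONDS_PER_DAY / KG_PER_GALLON
print(f"~{evap_gal_per_day / 1e6:.1f} million gallons evaporated per day, "
      "before blowdown and other losses")
```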

Better living through water chemistry 

Operating evaporative systems, especially at scale, requires thorough expertise in water chemistry. Dissolved minerals in the water can damage equipment or erode the very efficiency gains these cooling methods were chosen for in the first place.

Water chemistry balancing is a huge undertaking, made even more challenging if the source water is less than perfect. As water evaporates, the concentration of dissolved solids left behind increases with every pass. It is kept in check by draining off the old “condenser water” and refilling with fresh “makeup water,” in a process called “blowdown.” Furthermore, warm and dirty cooling tower condenser water is a tremendous breeding ground for bio-growth that can foul any heat exchange surface. Not only do data center operators have to worry about protecting their own equipment, but they also have to balance the “water constituents” being sent back to the local municipality’s sanitary district. In most cases, the local water plants maintain a relatively tight band for mineral concentrations, chemical inhibitors, and biocides, because those compounds can kill the microorganisms the treatment plants depend on. Failing to meet basic water chemistry and quality standards may leave a data center prohibited from sending its water back to the treatment plant and in search of alternative options.
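
The blowdown math itself follows a standard cooling tower water-balance relationship. The sketch below uses an assumed evaporation rate of 100 gpm purely for illustration and ignores drift losses:

```python
# Standard cooling tower water balance (drift losses ignored; figures illustrative).
def blowdown_rate(evaporation_gpm: float, cycles_of_concentration: float) -> float:
    """Blowdown needed to hold dissolved solids at a target concentration.

    cycles_of_concentration = solids in condenser water / solids in makeup water.
    """
    return evaporation_gpm / (cycles_of_concentration - 1)

evaporation = 100.0  # assumed gpm lost to evaporation at some load
for cycles in (2, 4, 6):
    blowdown = blowdown_rate(evaporation, cycles)
    makeup = evaporation + blowdown
    print(f"{cycles} cycles of concentration: "
          f"blowdown {blowdown:5.1f} gpm, makeup {makeup:5.1f} gpm")
```

Tighter water chemistry control (more cycles of concentration) cuts both blowdown and makeup water, which is exactly why that expertise matters.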

Limited location possibilities

One of STACK INFRASTRUCTURE’s points of pride is our expansive national footprint of strategic data center locations, serving our clients where they need to be. Water availability can significantly influence current and future site selection, impacting everything from connectivity and latency to the overall cost of building or running a data center.

Typically, site selection involves evaluating fiber connectivity, power availability, and potential threats to continuous operations such as natural disasters. Facilities that use evaporative water cooling systems also have to account for sewer access, general water availability for cooling operations, and utility main sizing. The reliance on additional utility sources introduces risk into the construction and ongoing operation of a data center. Furthermore, utility systems that are not fully controlled by the data center organization are considered unreliable, so concurrent maintainability requires onsite water storage capable of sustaining cooling for a minimum of 12 hours. These storage structures can become immensely large and expensive.
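
To illustrate how large those storage structures can get, here is a rough sizing sketch for a hypothetical 36MW heat load; the load, latent heat, and cycles-of-concentration figures are assumptions, not STACK design values:

```python
# Rough sizing of 12-hour onsite makeup water storage (hypothetical facility).
H_VAP = 2430           # latent heat of vaporization near tower temperatures, kJ/kg
KG_PER_GALLON = 3.785  # mass of one gallon of water, kg

facility_heat_kw = 36_000    # assumed 36 MW of heat rejection at full load
storage_hours = 12           # minimum storage duration for concurrent maintainability
cycles_of_concentration = 4  # assumed water chemistry target

evap_gal_per_hr = facility_heat_kw / H_VAP * 3600 / KG_PER_GALLON
makeup_gal_per_hr = (evap_gal_per_hr * cycles_of_concentration
                     / (cycles_of_concentration - 1))  # evaporation + blowdown
tank_gallons = makeup_gal_per_hr * storage_hours

print(f"Makeup water: ~{makeup_gal_per_hr:,.0f} gal/hr")
print(f"{storage_hours}-hour storage: ~{tank_gallons:,.0f} gallons")
```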

Keeping it simple

Centralized water-cooled chilled water plants typically require their own unique and robust electrical distribution systems. Due to their size, these plants need their own utility transformer(s) and generator(s). The STACK design instead serves the air-cooled chiller load from the same main switchboard as the critical UPS load in a 1:1 ratio (12MW of critical load uses 12 air-cooled chillers). This distributed topology makes designs repeatable and scalable and allows for better speed-to-market.
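
A minimal sketch of that 1:1 pairing, reading it as one air-cooled chiller per megawatt of critical load fed from the matching main switchboard; the equipment names and per-lineup sizing are hypothetical:

```python
# Minimal sketch of the 1:1 distributed topology (hypothetical equipment names).
total_critical_load_mw = 12

# One air-cooled chiller per MW of critical load, fed from the same main
# switchboard that serves the corresponding block of UPS/critical load.
lineups = [
    {"switchboard": f"MSB-{i + 1}", "critical_load_mw": 1.0, "chiller": f"ACCH-{i + 1}"}
    for i in range(total_critical_load_mw)
]

print(f"{len(lineups)} air-cooled chillers serving "
      f"{total_critical_load_mw} MW of critical load")
```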

Future-proofing your data center 

Some data centers have been able to greatly lower their build costs and PUE by utilizing direct evaporative cooling. For those looking to operate within the ASHRAE-recommended temperature and humidity limits, indirect evaporative cooling (IDEC) systems and pumped refrigerant economizer systems both maximize hours of free cooling while preserving the integrity of the data processing environment. There is, however, a practical upper limit to the heat density that can be removed with air alone.
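
Since the efficiency of these approaches is usually expressed as PUE (Power Usage Effectiveness: total facility power divided by IT power), a quick illustration with made-up numbers shows why more free-cooling hours matter:

```python
# PUE = total facility power / IT power (numbers below are illustrative only).
def pue(it_kw: float, cooling_kw: float, other_overhead_kw: float) -> float:
    return (it_kw + cooling_kw + other_overhead_kw) / it_kw

print(f"Mostly mechanical cooling: PUE {pue(10_000, 3_000, 800):.2f}")
print(f"Mostly economizer hours:   PUE {pue(10_000, 1_000, 800):.2f}")
```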

Like almost everything in tech, direct liquid cooling (DLC) and immersion cooling systems have evolved over time. But while these solutions aren’t technically “new,” how they’re being used is. DLC, specifically, is no longer just for supercomputers and heavy processing. It’s also a viable option for storage applications because it can help improve density, availability, and reliability — all important considerations in the age of massive data stores required by artificial intelligence, machine learning, and other industrial applications.

Liquid cooling technologies can handle much higher power densities than air-cooled systems and are far more energy efficient, especially when the heat rejected by the computing equipment can be put to effective reuse. Add to that the fact that these solutions can cool solid-state drives, reduce the effects of humidity on other network components, and support high operational performance, and it’s no wonder analysts project demand for liquid cooling systems to grow by more than 230% by 2025. Having chilled water available in the data center white space reduces the risk of obsolescence in the coming years.

Seeing the bigger picture 

Cooling is a foundational component of successfully operating a modern data center facility and many end-users have wholeheartedly embraced evaporative cooling systems in their singular pursuit of power efficiency.  

In contrast, STACK has made a concerted and conscious effort to adopt and embrace recent advancements in air-cooled chillers as part of a broader picture. Generally, air-cooled systems are easy to work with, require less expertise to operate, and carry lower upfront costs. They also eliminate concerns about mineral buildup and bio-growth, which can create unplanned and unnecessary downtime and add to operational costs.

More importantly, efficiency is just one ingredient in the recipe of a successful, sustainable, and resilient data center operation. While water-cooled systems may be more efficient than air-cooled systems on the whole, STACK aims to eliminate the externalities that can conflict with our company’s broader, more balanced philosophy of doing what’s best for our customers, our business, and the environment.  

 

Brian Medina is the Director of Strategy and Development for STACK.
