Michael McNerney, Vice President Marketing and Network Security, Supermicro, outlines the different methods data centre operators can use to reduce their power consumption
Data centres use significant amounts of electricity to power their thousands of servers. From the location of a data centre to the placement of server racks, there are several actions that data centre managers can take to improve the power usage effectiveness (PUE) of a facility. PUE is defined as the total amount of power delivered to the data centre divided by the amount of power used by the IT equipment; the lower the value, the more energy efficient the data centre. Sourcing renewable power is an obvious first step, but other methods, such as raising air inlet temperatures, optimising power delivery, and using the right system at the right time, also contribute to a greener data centre.
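As a quick illustration, the sketch below computes PUE for a hypothetical facility; both power figures are made-up examples, not measurements from any particular site.

```python
# PUE = total facility power / IT equipment power (hypothetical figures)
total_facility_kw = 1500.0  # IT load plus cooling, lighting, power-conversion losses
it_equipment_kw = 1000.0    # servers, storage, and networking only

pue = total_facility_kw / it_equipment_kw
print(f"PUE = {pue:.2f}")  # 1.50 here; 1.0 would mean zero facility overhead
```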
Operate at higher temperatures
When using traditional air cooling, the temperature of the air entering the server (the inlet temperature) is maintained by Computer Room Air Conditioning (CRAC) units. How air conditioning is used in a data centre is the largest contributor to the PUE calculation, so reducing the amount of air conditioning significantly lowers the PUE and, with it, OPEX costs. Around the world, many data centres keep inlet temperatures lower than necessary. Operators can reduce power usage by raising the inlet temperature towards the manufacturer's recommended maximum. A recent survey of over 400 IT professionals and data centre managers found a wide range of inlet temperatures in use, indicating that most IT administrators limit the inlet temperature to well below the manufacturer's maximum.
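To get a feel for the potential savings, the sketch below applies a commonly cited rule of thumb of roughly 4% cooling-energy savings per degree Celsius of setpoint increase. Both the setpoint and the vendor maximum are illustrative assumptions; check the actual figures for your equipment before changing anything.

```python
# Rough estimate of cooling savings from raising the inlet setpoint.
current_setpoint_c = 20.0  # assumed current inlet temperature setpoint
vendor_max_c = 27.0        # assumed manufacturer-recommended maximum
savings_per_deg_c = 0.04   # rule of thumb (~4% of cooling energy per deg C), illustrative only

headroom_c = max(0.0, vendor_max_c - current_setpoint_c)
estimated_savings = headroom_c * savings_per_deg_c
print(f"Raising the inlet by {headroom_c:.0f} deg C could cut cooling energy by ~{estimated_savings:.0%}")
```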
Capture heat at the source
CRAC is the most significant variable to optimise when lowering overall PUE, and liquid cooling solutions in particular can reduce a data centre's PUE dramatically. While the data centre infrastructure may need to be modified or extended, the longer-term OPEX savings will outweigh the initial costs.
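A simple payback calculation is one way to weigh that trade-off. Every figure below is a placeholder for illustration; real retrofit costs and savings vary widely by facility.

```python
# Hypothetical payback period for a liquid-cooling retrofit (all figures assumed).
retrofit_cost_usd = 500_000.0   # one-time cost: plumbing, coolant distribution, rack changes
annual_kwh_saved = 2_000_000.0  # assumed yearly reduction in cooling energy
price_per_kwh_usd = 0.12        # assumed electricity price

annual_savings_usd = annual_kwh_saved * price_per_kwh_usd
payback_years = retrofit_cost_usd / annual_savings_usd
print(f"Retrofit pays for itself in roughly {payback_years:.1f} years")
```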
Liquid Cooling
Liquid cooling of the CPUs and GPUs can significantly reduce the need for CRAC units in data centres and for pushing large volumes of air around. There are several ways to use liquid cooling to reduce reliance on forced-air cooling.
Direct To Chip (DTC or D2C) Cooling
This method circulates cool liquid through a cold plate mounted directly on the hot CPU or GPU. Since liquid is far more efficient at removing and transporting heat than air, the CPU or GPU can be kept within its thermal design power (TDP) envelope. This can lead to significant savings when scaled across thousands of systems in a medium or large data centre.
[Image: Server with D2C liquid cooling installed]
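The basic physics is easy to check with the heat-transport relation Q = ṁ × c_p × ΔT. The sketch below compares the volumetric flow of water and air needed to carry the same heat load; the 350 W load and 10-degree coolant rise are assumed values chosen only to make the comparison concrete.

```python
# Compare water vs air flow needed to remove the same heat (Q = m_dot * cp * dT).
heat_w = 350.0    # assumed CPU/GPU heat load, watts
delta_t_k = 10.0  # assumed coolant temperature rise, kelvin

# Approximate fluid properties near room temperature.
fluids = {
    "water": {"cp": 4186.0, "rho": 998.0},  # J/(kg*K), kg/m^3
    "air":   {"cp": 1005.0, "rho": 1.2},
}

for name, f in fluids.items():
    mass_flow = heat_w / (f["cp"] * delta_t_k)    # kg/s
    vol_flow_l_s = mass_flow / f["rho"] * 1000.0  # litres per second
    print(f"{name}: {vol_flow_l_s:.3f} L/s to move {heat_w:.0f} W")
# Water needs roughly 1/3500th the volume flow of air for the same job.
```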
Rear Door Heat Exchanger (RDHx)
The rear door of the rack contains a liquid-filled heat exchanger and fans, which cool the hot server exhaust air before it enters the data centre. The warmed liquid must then be cooled, typically by the facility's chilled-water loop, before it returns to the rear door. This liquid cooling method keeps the air in the data centre at a lower temperature, reducing demand on the CRAC and, in turn, the amount of electricity the data centre needs.
Immersion Cooling
With immersion cooling, an entire server, or a group of servers, is submerged in a dielectric liquid. The close contact of the liquid with the hot CPUs, GPUs, and other components cools the servers efficiently, and because the liquid does the work, the fans can be removed entirely. Some minor modifications must be made to a server before immersion. An entire rack of servers can be cooled in this manner.
[Image: Immersion cooling of complete servers]
Hot and Cold Aisles
A significant amount of CRAC electricity can be saved if the hot and cold aisles are separated in the data centre. When the facility is designed with hot and cold aisles, inlet and exhaust air do not mix, allowing the cooling system to operate more efficiently. To create a hot aisle, rows of racks are installed so that the rears of the racks face each other. Separated hot and cold aisles are therefore an important best practice in any energy-efficient data centre design.
[Image: Hot and cold aisles in a data centre]
Optimise power delivery
Converting power from AC to DC generates heat. Because power is delivered to the data centre as AC, it must be converted to DC for the systems, and with each conversion some energy is lost, contributing to the inefficiency of the data centre. More efficient conversion wastes less power, leaving less by-product heat that must be removed from the system.
Titanium power supplies are the most efficient option, offering 96% power efficiency; Platinum supplies are slightly less efficient at 94%, and Gold supplies offer a lower efficiency of 92% (the figures used by the 80 PLUS certification at 50% load). A power supply's efficiency is not flat across its output range: it peaks around the middle of the rated load range and falls off sharply at light loads. This means an oversized supply spending most of its time at a small fraction of its capacity wastes power; an 800-watt supply delivering 160 watts (20% of capacity) will typically be less efficient than a 500-watt supply delivering the same 160 watts (32% of capacity).
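The tier differences look small as percentages but add up at scale. The sketch below computes input power and waste heat for the same DC output at each tier; the 400 W output is just an example figure.

```python
# Waste heat for the same DC output at the efficiency tiers above.
output_w = 400.0  # example DC load per power supply
tiers = {"Titanium": 0.96, "Platinum": 0.94, "Gold": 0.92}

for name, efficiency in tiers.items():
    input_w = output_w / efficiency
    waste_w = input_w - output_w
    print(f"{name}: draws {input_w:.0f} W, wastes {waste_w:.1f} W as heat")
# Across thousands of servers, the ~18 W gap between Gold and Titanium
# per supply becomes tens of kilowatts of load the CRAC must also remove.
```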
Source green energy
A data centre's energy source has the most significant impact on its carbon footprint and offers the most substantial opportunity to benefit the environment. Renewable energy programmes for commercial customers include generation through the utility, third-party power purchase agreements (PPAs), and renewable energy credits (RECs). Distributed renewable energy production owned or controlled by the data centre is optimal, but on-site renewable sources do not always satisfy the facility's energy demands. Fortunately, clean grid energy can augment them, and increasingly effective on-site energy storage solutions are coming down in cost as battery technology improves and scales.
Rethink site selection criteria
Large-scale data centres cost a lot of money to operate. For example, a single hyper-scale data centre can demand 100 MW of power to keep servers, storage, and networking infrastructure performing as expected (enough to power 80,000 US households). In addition, while electronics use most of the energy consumed in a data centre, cooling those electronics to maintain operating temperatures can consume 40% of facility energy.
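That 40% cooling share maps directly onto PUE. The back-of-envelope calculation below assumes, purely for illustration, that everything other than cooling is IT load, so the real PUE would be somewhat higher once lighting and power-conversion losses are counted.

```python
# What a 40% cooling share implies for PUE, assuming all other power is IT load.
cooling_fraction = 0.40
it_fraction = 1.0 - cooling_fraction  # simplification: ignores lighting, UPS losses, etc.

implied_pue = 1.0 / it_fraction
print(f"Implied PUE is at least {implied_pue:.2f}")  # ~1.67 under these assumptions
```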
Building costs consist of the land value as well as the cost of construction, and construction prices vary by geography and region. Unlike a home or an office building, a data centre's location has some unique requirements if the facility is to be considered "green" and meet agreed Service Level Agreements (SLAs). Factors such as climate, energy pricing, the risk of natural disasters, water costs, and the cost of network bandwidth all contribute to the choice of data centre location.
Data centres are critical to the world's economy. Many aspects of modern life depend on them, and they consume more electricity than ever before to deliver the services everyone uses. While the work per watt of CPUs continues to increase, overall data centre power consumption still needs to come down. There are several actions that data centre operators can take, including running systems at warmer temperatures, configuring the data centre with hot and cold aisles, and sourcing green energy. With just a few such steps, data centres can reduce their PUE, lowering their operating expenses and shrinking their CO2 footprint for years to come.