William Heesbeen, lead mechanical engineer at RoyalHaskoningDHV, opens the IT Infra event on 20 November. Data centers are a crucial part of our society, but societal expectations and technical reality are out of step.

The annual IT Infra event has built up a well-deserved reputation. Professionals from the sector meet on the exhibition floor, share knowledge and find inspiration. A permanent fixture of IT Infra is the lecture program, and this edition William Heesbeen opens the event. He has been involved in the development and engineering of data centers for almost 20 years. We speak via a Teams call, and after a short introduction I ask him how the development of data centers has changed over the years.

He has to think for a moment. “I think there has been a big shift in focus,” says William. “When you compare the first data centers with the current ones, you see a huge difference in energy consumption. And now we see the focus shifting from energy consumption to carbon footprint.”

“My first data center designs were simpler than the ones now on the drawing board,” William explains. “A big difference is that the old data centers were more robust and straightforward.” Of course, technology has not stood still in recent years and has developed enormously. “The development and engineering of data centers has grown along with it.”

“Data centers have made a big leap, especially in terms of efficiency. The emphasis has shifted from security of supply to efficiency.” William immediately adds that the availability of the data center remains paramount. Customers demand the so-called five nines: a data center must be available 99.999 percent of the time, an enormous requirement to meet. “The agreed temperature and humidity limits may only be exceeded for a few hours per year,” he summarizes. “We now design data centers largely to Tier 3 qualification, which means a data center can keep operating while it is being maintained. Tier 4 is the next step: data centers that are self-correcting, including automatic compartmentalization when a leak is detected, for example.”
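To put those five nines in perspective, here is a quick back-of-the-envelope calculation (an illustrative sketch, not something from William's presentation) of how little downtime 99.999 percent availability leaves:

```python
# Back-of-the-envelope: how much downtime does a given availability allow per year?
MINUTES_PER_YEAR = 365.25 * 24 * 60  # about 525,960 minutes

for label, availability in [("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)]:
    downtime = (1 - availability) * MINUTES_PER_YEAR
    print(f"{label} ({availability:.3%} available): {downtime:.1f} minutes of downtime per year")
```

Five nines works out to barely five minutes of downtime in an entire year, which makes clear why availability still dominates every design decision.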

Cooling

It sounds paradoxical, but developers have discovered that you can cool a data center more efficiently at a higher temperature. In the first data centers, a room temperature of twenty degrees Celsius was already considered high. “Environmentally, energetically and financially, we are working hard to improve efficiency,” says William. “We are now at a server air-inlet temperature of 27 degrees, and in the future we will go to 35 degrees.”

“A server has an optimum temperature, and the preferred temperature of a server is currently lower than we expected. That is part of the problem with liquid cooling,” says William, giving a preview of his presentation.

“The usual way to cool data centers was with air. With direct liquid cooling, heat is dissipated directly into a liquid: the entire server can be immersed in liquid, or the chip is cooled directly with water.” Liquid carries heat away far better than air, so liquid cooling can handle much higher power densities. “A standard air-cooled rack can handle a maximum of about 10 kW, while a water-cooled rack can handle 100 to 150 kW, and probably even more in the future.”
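The gap between those rack powers follows directly from the heat balance Q = m·cp·ΔT: water has roughly four times the specific heat capacity of air and is around eight hundred times denser. The minimal sketch below compares the coolant flows involved; the assumed 10-degree temperature rise across the rack is an illustrative figure, not one taken from the interview.

```python
# Coolant flow needed to remove a rack's heat load, from Q = m_dot * cp * delta_T.
# The 10 K temperature rise across the rack is an assumption for illustration only.
DELTA_T = 10.0  # K

def flow_required(heat_kw, cp_kj_per_kg_k, density_kg_per_m3):
    """Return (mass flow in kg/s, volume flow in m^3/h) needed to absorb heat_kw."""
    mass_flow = heat_kw / (cp_kj_per_kg_k * DELTA_T)       # kg/s
    volume_flow = mass_flow / density_kg_per_m3 * 3600.0   # m^3/h
    return mass_flow, volume_flow

air_mass, air_vol = flow_required(10.0, 1.005, 1.2)        # 10 kW air-cooled rack
water_mass, water_vol = flow_required(100.0, 4.18, 1000)   # 100 kW water-cooled rack

print(f"10 kW in air:    {air_mass:.2f} kg/s, about {air_vol:,.0f} m^3 of air per hour")
print(f"100 kW in water: {water_mass:.2f} kg/s, about {water_vol:.1f} m^3 of water per hour")
```

Even at ten times the heat load, the water-cooled rack needs only a few cubic meters of water per hour, while the 10 kW air-cooled rack already moves close to 3,000 cubic meters of air in the same time; that is the efficiency gap behind the 10 kW versus 100 to 150 kW figures.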

“We have already passed the limit of what is possible with air cooling. The problem is that our expectations of direct liquid cooling do not match reality. The expectation was that with direct liquid cooling the temperature of the return water would increase, and the idea was to put that high-temperature heat to good use, for example for heating houses.”

The hope for a higher temperature was twofold: heat exchange with third parties and the use of free cooling. “The current state of affairs is that, in the best case, we achieve the same return water temperature,” says William, explaining this with a comparison between direct liquid cooling and air cooling. On top of that, there is a technical limitation. “With direct liquid cooling it is difficult to accommodate the peak loads of the servers. The only way to solve this is to give the server the amount of water it needs during peak loads, and that results in a drop in the return water temperature. The return water temperature is only higher when the servers run at full load continuously, but that is a utopia.” The only way for data centers to absorb peak loads is to keep the cooling running, even when it is not needed.
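The same heat balance shows why sizing the water flow for the peak pulls the return temperature down: outside the peaks, the same flow picks up less heat, so the water comes back cooler. A minimal sketch, with an assumed supply temperature of 30 degrees and a design rise of 15 degrees at a 100 kW peak (illustrative numbers, not taken from the interview):

```python
# Return water temperature when the flow is sized for the peak load (Q = m_dot * cp * dT).
# Supply temperature, peak load and design temperature rise are illustrative assumptions.
CP_WATER = 4.18        # kJ/(kg*K)
SUPPLY_TEMP = 30.0     # degrees C entering the servers
PEAK_LOAD_KW = 100.0   # rack peak load
DESIGN_RISE = 15.0     # K temperature rise at peak load

# The flow is fixed by the peak: it must always be able to absorb the full 100 kW.
flow_kg_s = PEAK_LOAD_KW / (CP_WATER * DESIGN_RISE)

for load_kw in (100.0, 70.0, 40.0):
    rise = load_kw / (flow_kg_s * CP_WATER)
    print(f"{load_kw:5.0f} kW load -> return water at {SUPPLY_TEMP + rise:.1f} degrees C")

# Only at continuous full load does the return water reach the design temperature;
# at partial load it comes back markedly cooler, which is the problem William describes.
```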

“We need to temper our expectations,” William says, summarizing the problem. “Technology can barely keep up with the exponential growth of data. The amount of data doubles every year. Everyone wants more Netflix, more internet, more data. But no one wants a data center in their backyard.” He warns in particular that expectations around the use of residual heat from data centers need to be tempered. The hour is up, and William gives me his closing line: “We are still a long way from the solar-powered data center that heats your home and stores the neighborhood's excess solar energy.”

IT Infrastructure event

During IT Infra you will find inspiration, knowledge and networking opportunities. Explore innovative solutions and products on the exhibition floor, where leading suppliers present their latest technologies designed to improve the performance, scalability and security of IT infrastructure.