Data centres and the challenge of high density

Increasing server density shrinks the hosting space and simplifies cabling, provided that the heat generated by these denser designs can be removed. The hosting company DATA4 has successfully met the technical and economic challenge of high density, to the benefit of its customers.

Data centres currently face a challenging paradox tied to their energy consumption: how can they reduce the cost of computing by packing servers more densely, when density drives up cooling costs? DATA4, a pioneer on this issue, has seen an uptick in customer interest in the question since server manufacturers announced this year that they would double the number of communication ports in a given space.

“Combining two rows of server racks into one is an economic opportunity, because there are half as many cables and network switches to be purchased, installed and maintained. But under these conditions, racks consume between 12 and 17 kilowatts, compared to 7 kilowatts previously. The challenge is in finding ways to optimise energy consumption in the computer rooms,” summarises Mohamed El Barkani, DATA4’s pre-sales director.

A design challenge for a PUE of 1.22 per computer room

To offer high density on its sites, the hosting company had to completely redesign its computer rooms. Mohamed El Barkani explains that the high-density issue matters most to hyperscalers, the financial industry and online video game publishers. These companies rent entire computer rooms from hosting providers, which makes them especially sensitive to energy issues.

“Most companies with just a few racks have service packages where electricity consumption is included. Since they aren’t aware of their energy use, they’re not too concerned by energy issues. By contrast, companies that take up entire computer rooms and pay for their electricity on a pay-as-you-go basis want to optimise their energy bill and reduce their PUE,” he adds.

The metric at the centre of attention here is PUE (Power Usage Effectiveness): the ratio between the total energy used by a computer room and the energy used by the computing equipment alone. A data centre generally has a PUE of 1.8. With earlier versions of its computer rooms, DATA4 was able to achieve a PUE of 1.5. Thanks to its new designs, it has now reached a record low of 1.22.
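As a rough illustration of what these figures mean, here is a minimal PUE calculation in Python. The three PUE values are the ones quoted above; the absolute energy figure is a hypothetical example.

```python
# Illustrative PUE calculation: total facility energy divided by IT energy.
# The PUE values are those quoted in the article; the kWh figure is made up.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total energy over IT-equipment energy."""
    return total_facility_kwh / it_equipment_kwh

it_load_kwh = 1_000.0  # hypothetical energy drawn by servers, storage, network

# Overhead (cooling, power-distribution losses, lighting...) implied by each PUE:
for label, value in [("typical data centre", 1.8),
                     ("earlier DATA4 rooms", 1.5),
                     ("new DATA4 design", 1.22)]:
    overhead_kwh = it_load_kwh * (value - 1)
    total_kwh = it_load_kwh + overhead_kwh
    print(f"{label}: PUE {pue(total_kwh, it_load_kwh):.2f} -> "
          f"{overhead_kwh:.0f} kWh of overhead per {it_load_kwh:.0f} kWh of IT load")
```

In other words, at a PUE of 1.8 nearly 0.8 kWh is spent on overhead for every kWh of useful computing; at 1.22, that overhead falls to 0.22 kWh.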

“The factor with the greatest impact on the PUE is the air-conditioning. A row of ten 10-kilowatt racks produces 100 kilowatts of heat. Sometimes almost as much energy is needed to produce enough cooling to dissipate that heat completely. Our design challenge was to find a way to remove the heat without needing additional cooling,” says Mohamed El Barkani.
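To make the scale of the problem concrete, the sketch below relates the heat output quoted above to the electricity a cooling plant would draw. Only the ten-racks-at-10-kilowatts figure comes from the quote; the coefficient-of-performance (COP) values are assumptions for illustration.

```python
# Virtually all electrical power drawn by a server ends up as heat, so a row
# of ten 10 kW racks must shed roughly 100 kW of heat continuously.
racks_per_row = 10
power_per_rack_kw = 10.0
heat_to_remove_kw = racks_per_row * power_per_rack_kw  # 100 kW

# A cooling plant's efficiency is its coefficient of performance (COP):
# kW of heat removed per kW of electricity consumed. A COP near 1 means the
# cooling costs almost as much energy as the IT load itself, as the quote notes.
for cop in (1.0, 3.0, 5.0):  # hypothetical COP values
    cooling_power_kw = heat_to_remove_kw / cop
    print(f"COP {cop}: {cooling_power_kw:.0f} kW of electricity "
          f"to remove {heat_to_remove_kw:.0f} kW of heat")
```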

Rethinking aisle widths and electricity supply

The high-density design developed by DATA4 relies on five key factors to improve its PUE. The first and simplest change was to widen the aisles between server racks by 50%. “We went from 1.20 m to 1.80 m, because ultimately what counts is the energy consumption per square metre. This allowed us to double the computing power without doubling the energy consumption or halving the number of racks. We use only 2.1 to 2.5 kilowatts per square metre for dense racks of 12 to 17 kilowatts, compared to 1.5 kilowatts per square metre with regular racks that draw 7 kilowatts,” the pre-sales director explains.
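Running the quoted figures backwards gives a feel for the trade-off. This back-of-the-envelope check is pure arithmetic on the numbers above; the actual room layouts are not specified in the article.

```python
# Floor area implied per rack by the quoted power densities.
def area_per_rack_m2(rack_kw: float, density_kw_per_m2: float) -> float:
    return rack_kw / density_kw_per_m2

print(f"regular 7 kW rack at 1.5 kW/m2: {area_per_rack_m2(7, 1.5):.1f} m2 per rack")
print(f"dense 12 kW rack at 2.1 kW/m2:  {area_per_rack_m2(12, 2.1):.1f} m2 per rack")
print(f"dense 17 kW rack at 2.5 kW/m2:  {area_per_rack_m2(17, 2.5):.1f} m2 per rack")
# Wider aisles cost some floor area per rack (about 5.7-6.8 m2 instead of 4.7),
# but power density per square metre still rises by roughly 40 to 65%.
```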

The second factor is the optimisation of the electrical power units in the computer room. “To create redundancy, we have to supply twice as much electricity as needed (2N architecture). For example, if a computer room needed 4 megawatts of power, we had two energy sources each capable of supplying 4 megawatts, each running at 50% of its capacity. However, the power units only operate at maximum efficiency when they are running at around 60% to 70% of their capacity.”

To work around this problem, DATA4 still connects each rack to two UPS circuits, but there are now three of them (3N/2 architecture). Each is rated at only 2 megawatts and runs at around 66% of its capacity. This means they are used at maximum efficiency, reducing the total power supply that has to be installed while still covering 100% of the electrical load should one of the power sources fail.
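For readers who want the arithmetic behind the two schemes, here is a minimal sketch using the 4-megawatt example above (the variable names and output format are ours):

```python
# 2N versus 3N/2 redundancy for a room with a 4 MW IT load.
load_mw = 4.0

# 2N: two sources, each able to carry the full load alone.
n2_units, n2_capacity_mw = 2, 4.0
n2_utilisation = load_mw / (n2_units * n2_capacity_mw)     # 0.50

# 3N/2: three sources, each rated at half the load; any two can carry it.
n32_units, n32_capacity_mw = 3, 2.0
n32_utilisation = load_mw / (n32_units * n32_capacity_mw)  # ~0.67

print(f"2N:   {n2_units} x {n2_capacity_mw} MW units, each at {n2_utilisation:.0%}")
print(f"3N/2: {n32_units} x {n32_capacity_mw} MW units, each at {n32_utilisation:.0%}")

# Failure check: if one 3N/2 unit fails, the remaining two supply 2 x 2 = 4 MW,
# exactly covering the load, with 6 MW installed instead of 8 MW for 2N.
assert (n32_units - 1) * n32_capacity_mw >= load_mw
```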

Containing the aisles to expel hot air

The third factor is aisle containment: racks are arranged so that either the hot aisles, at the rear of the racks where the servers expel their heat, or the cold aisles, at the front where the cool air enters, can be enclosed. “We prefer to contain the hot aisles, since we then only need to maintain an ambient temperature of 22°C in the cold aisles and push the air from the hot aisles to the outside, without needing to cool it down.”

Mohamed El Barkani explains, however, that aisle containment only works if the hot and cold air do not mix. “You cannot leave any gaps in the rows of racks. That’s why we recommend that our customers install all the racks in an aisle from the very start, even if it means that some racks stay empty and are sealed with blanking plates.”

Combining high density and ecology by running the cooling units only 10 to 17% of the year

The fourth factor is the cooling itself. “By using the ambient air outside the building, we are able to use the cooling unit for just 10 to 17% of the year,” says Mohamed El Barkani. In this case, DATA4 takes advantage of two parameters: the servers’ operating temperature and the ventilation system.

For one thing, manufacturers now confirm that a temperature of 18 to 27°C at the front of their servers is optimal for their operation.

“In our climate, temperatures are within this range most of the time. During the winter, when the outside temperature is too low, all we have to do is mix the cold air with a little warm air from the servers. Most of the time, we just use fans.”
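The behaviour described in the last two paragraphs boils down to a simple control rule. The sketch below is built on assumptions of our own (the thresholds and the mixing rule), not DATA4's actual control system; only the 18 to 27°C intake range and the idea of blending in warm return air come from the article.

```python
# Simplified free-cooling decision rule. The temperature range mirrors the
# article's figures; the control structure itself is an illustrative assumption.
SUPPLY_MIN_C, SUPPLY_MAX_C = 18.0, 27.0  # acceptable server-intake range

def cooling_mode(outside_c: float) -> str:
    if outside_c < SUPPLY_MIN_C:
        # Winter: blend cold outside air with warm return air from the
        # contained hot aisles until the supply temperature is in range.
        return "free cooling, mixing in warm return air"
    if outside_c <= SUPPLY_MAX_C:
        # Most of the year: outside air is usable as-is, fans only.
        return "free cooling, fans only"
    # The hottest 10-17% of the year: run the mechanical cooling units.
    return "mechanical cooling"

for t in (-5, 10, 22, 30):
    print(f"{t:>4} °C outside -> {cooling_mode(t)}")
```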

Furthermore, DATA4 has developed, patented and deployed a ceiling-mounted ventilation system that optimises the intake of outside air while discharging the warm air from the contained aisles.

“There is no need to install a raised floor to circulate the cool air, which means we can use heavier racks than usual. This is particularly important when it comes to high-density racks, where there is twice as much equipment as usual.”

Optimising the server and network configuration in the racks

The final factor is optimising how the servers are arranged in the racks. “More often than not, companies let us install their racks for them, according to their specifications. They can of course install their racks themselves, but our procedures require them to consult us before doing so. The IT departments in charge of these projects have a hard time predicting where hot spots will form, whereas before installing the racks we model the room’s airflow and heat with specialised CFD (computational fluid dynamics) software, which can even simulate air-conditioning incidents,” says Mohamed El Barkani.
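DATA4 does this with specialised commercial software; the toy model below is only meant to give an intuition of what hotspot prediction involves. It relaxes a steady-state temperature field over a 2D room grid, with racks as heat sources and walls held at the cold-aisle temperature. All values are arbitrary, and real CFD tools also model airflow, which this sketch ignores.

```python
# Toy 2D steady-state heat solve (diffusion only, arbitrary units).
GRID = 20            # room floor plan as a 20 x 20 grid of cells
AMBIENT_C = 22.0     # walls held at the cold-aisle setpoint
HEAT_PER_CELL = 0.5  # arbitrary source term for cells containing a rack

# A row of racks with a two-cell gap in the middle, like an unfilled aisle.
sources = {(10, x) for x in range(3, 17) if x not in (9, 10)}

temp = [[AMBIENT_C] * GRID for _ in range(GRID)]
for _ in range(2000):  # Jacobi relaxation towards the steady state
    nxt = [row[:] for row in temp]
    for y in range(1, GRID - 1):
        for x in range(1, GRID - 1):
            avg = (temp[y-1][x] + temp[y+1][x] + temp[y][x-1] + temp[y][x+1]) / 4
            nxt[y][x] = avg + (HEAT_PER_CELL if (y, x) in sources else 0.0)
    temp = nxt

hottest, hot_y, hot_x = max((temp[y][x], y, x)
                            for y in range(GRID) for x in range(GRID))
print(f"hottest cell: {hottest:.1f} (arbitrary units) at row {hot_y}, col {hot_x}")
```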

As hosted infrastructures grow denser, driven by more compact equipment with greater connectivity, hyper-converged infrastructure and a growing need for computing power, a sustainable response to the issue of heat dissipation had to be found.

High density and energy efficiency are closely related, and it is now possible to combine them in terms of infrastructure, materials and equipment.

DATA4’s success is certainly due to its ability to optimise a number of parameters, in cooperation with each customer and while respecting the most stringent reliability requirements.