5 Things to Know About Data Centre Temperature

Temperature is a critical factor in the modern data centre, and it is only becoming more so. Increased processor speeds, smaller server form factors, and higher server rack densities have all contributed to tremendous challenges for data centre administrators in the areas of cooling and air movement.

These physical challenges, combined with management’s mandate to maintain asset availability at all times, mean data centre administrators must strive to make environmental factors such as temperature and humidity priority No. 1.

But how can data centre administrators start making the right decisions in terms of temperature and humidity? Where should the appropriate limits be set? What technologies are available to tackle these issues?

Model The Flow

In any undertaking, it is extremely helpful to have a model that faithfully approximates reality and allows administrators to make effective decisions before they do any physical work. One option available to data centre administrators is CFD (computational fluid dynamics).

Simply put, CFD is the mathematical modelling of data centre environmental variables such as temperature and airflow. Carl Cottuli, vice president of product development and services at Wright Line, says CFD modelling essentially constructs a virtual representation of the data centre. It models the impact of load distribution within the facility as well as the flow of hot and cold air within the space, Cottuli adds. He explains that CFD is useful for showing how to increase rack densities and server installations without creating additional hot spots or airflow issues.
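
To give a feel for what such a model does, here is a deliberately crude Python sketch: a small 2D grid stands in for the room, one wall is held at a cold-aisle supply temperature, a row of cells injects heat like a rack, and the temperature field is relaxed towards a steady state. Real CFD solves the full airflow equations on a far finer mesh; every figure here (grid size, temperatures, heat term) is an illustrative assumption rather than a recommendation.

    import numpy as np

    # Toy 2D steady-state temperature model of a small room. This is thermal diffusion on a
    # coarse grid, not real CFD; it only hints at how a spatial model exposes hot spots.
    # Every number below is an illustrative assumption.
    NX, NY = 30, 20                    # room discretised into 30 x 20 cells
    T_COLD_AISLE = 18.0                # supply air temperature held along the left wall, degC
    T_AMBIENT = 24.0                   # initial and outer-wall temperature, degC
    RACK_CELLS = [(x, 10) for x in range(12, 18)]  # a row of cells standing in for a server rack
    RACK_HEAT = 2.5                    # crude per-iteration heat injection at rack cells, degC

    temps = np.full((NX, NY), T_AMBIENT)

    for _ in range(2000):              # Jacobi relaxation until the field roughly settles
        new = temps.copy()
        new[1:-1, 1:-1] = 0.25 * (temps[:-2, 1:-1] + temps[2:, 1:-1] +
                                  temps[1:-1, :-2] + temps[1:-1, 2:])
        for x, y in RACK_CELLS:        # the rack keeps adding heat every iteration
            new[x, y] += RACK_HEAT
        new[0, :] = T_COLD_AISLE       # the cold-aisle wall is held at supply temperature
        temps = new

    hot_cell = np.unravel_index(temps.argmax(), temps.shape)
    print(f"Hottest cell: {temps.max():.1f} degC at grid position {hot_cell}")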

Know What To Measure

A key part of managing data centre temperature is knowing where to take measurements. After all, temperatures can vary widely depending on where readings are taken. This matters because taking readings in the wrong spot can lead administrators down the wrong path, resulting in wasted time and money and potential damage to business-critical equipment.

Darren Bonawitz, co-owner of 1102 GRAND, a Kansas City-based data centre, says administrators should avoid focusing on the entire room when taking temperature readings. Instead, he says, the inlet of IT equipment is what matters, so temperatures should be monitored there. These readings should drive any decisions made regarding data centre temperatures, adds Bonawitz.

Dan Hyman, principal at Custom Mechanical Systems, a provider of customised cooling systems for data centres, says the temperature that matters is that of the air entering the server rack, because this air is what removes the heat the servers generate. A server is designed to work properly up to a maximum inlet temperature determined by the manufacturer, Hyman says; it will work fine at lower temperatures, but air above that limit will cause it to overheat.
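
As a concrete illustration of the inlet-focused approach, the sketch below polls per-rack inlet readings and flags any rack whose inlet air exceeds an upper limit. The read_inlet_temp function and the rack names are hypothetical placeholders for whatever sensor hardware or monitoring API is actually deployed, and the 27°C limit simply mirrors the top of the ASHRAE-recommended inlet range discussed in the next section.

    import random
    from typing import Dict, Iterable

    # Minimal sketch of checking rack-inlet temperatures against an inlet limit. read_inlet_temp
    # is a hypothetical stand-in: replace it with whatever sensor hardware, SNMP agent, or BMS
    # API is actually in use. The rack names are made up for the example.
    INLET_LIMIT_C = 27.0  # upper end of the ASHRAE-recommended 18-27 degC inlet range

    def read_inlet_temp(rack_id: str) -> float:
        """Placeholder sensor read: returns a simulated inlet temperature in degC."""
        return random.uniform(18.0, 30.0)

    def find_hot_racks(rack_ids: Iterable[str]) -> Dict[str, float]:
        """Return only the racks whose inlet air exceeds the limit."""
        readings = {rack: read_inlet_temp(rack) for rack in rack_ids}
        return {rack: t for rack, t in readings.items() if t > INLET_LIMIT_C}

    for rack, temp in find_hot_racks(["rack-A1", "rack-A2", "rack-B1"]).items():
        print(f"{rack}: inlet air at {temp:.1f} degC exceeds the {INLET_LIMIT_C} degC limit")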

What's In A Number?

Monitoring and collecting data on environmental variables such as temperature and humidity is only part of the battle. Knowing what to do with that data is really where the proverbial rubber meets the road. Again, it is essential to analyse this data critically so cost-effective decisions can be made. John Consoli, chief technology officer at AFCO Systems, says administrators should know the current TC 9.9 guidelines from ASHRAE (American Society of Heating, Refrigerating, and Air-Conditioning Engineers). The current ASHRAE guidelines provide a roadmap for fine-tuning temperature and humidity in the data centre. They are the result of focused research in this arena, so users can be confident the guidelines have been well thought out and developed.

Consoli believes most data centres are kept much colder than necessary, and 1102 GRAND’s Bonawitz agrees that administrators should consider raising the data centre temperature. Data from IT hardware vendors indicates equipment can safely run at higher temperatures than previously thought, and this strategy can pay dividends, says Bonawitz: Increasing the temperature just one degree can save 3% or more off the data centre energy bill. So, instead of shooting for 68 to 72°F (20 to 22°C), administrators should find a comfort zone anywhere up to 80°F (27°C), which is within the range of 64.4-80.6°F (18-27°C) recommended in the ASHRAE 9.9 guidelines.
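
The arithmetic behind that rule of thumb is simple enough to spell out. The sketch below applies the roughly-3%-per-degree figure to a hypothetical annual energy bill and setpoint change; the bill, the setpoints, and the assumption that savings scale linearly are illustrative, not measured values.

    # Rough arithmetic behind the "about 3% saved per degree raised" rule of thumb.
    # The baseline bill, the setpoints, and the linear scaling are illustrative assumptions.
    ANNUAL_ENERGY_COST = 100_000.0   # hypothetical yearly data centre energy spend
    SAVING_PER_DEGREE = 0.03         # ~3% per degree raised, per the figure quoted above

    current_setpoint_f = 70.0
    proposed_setpoint_f = 78.0       # still inside the ASHRAE-recommended 64.4-80.6 degF range

    degrees_raised = proposed_setpoint_f - current_setpoint_f
    estimated_saving = ANNUAL_ENERGY_COST * SAVING_PER_DEGREE * degrees_raised
    print(f"Raising the setpoint by {degrees_raised:.0f} degF could save roughly "
          f"{estimated_saving:,.0f} per year on a {ANNUAL_ENERGY_COST:,.0f} energy bill.")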

Also, warns Bonawitz, administrators should make sure equipment fans are in sync with the change, or the savings may be offset by the extra fan energy used when the equipment decides it is running too hot. Additionally, harder-working fans add to a data centre’s ambient noise.

Humidity Is Important, Too

Humidity control in the data centre should not be overlooked. Daniel Calderon, facility director at Host.net, says it is important to keep a constant relative humidity of 40 to 55%. Too much humidity in the room can cause condensation that can in turn lead to server failures. Also, Calderon says condensation on CRAC unit coils can cause the unit to work harder. Too little humidity is also harmful, he adds: Not enough humidity in the air can cause static electricity.

In terms of humidity, the ASHRAE 9.9 guidelines recommend a relative humidity of less than 60%, along with a lower dew point temperature of 41.9°F (5.5°C) and an upper dew point temperature of 59°F (15°C). Additionally, a recent whitepaper published by the ASHRAE 9.9 Technical Committee recommends that, in addition to monitoring temperature and humidity, administrators also make an effort to monitor and control dust and gaseous contamination, which can cause unwanted chemical, mechanical, or electrical effects on equipment.
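
For administrators who want to check their readings against that envelope, the sketch below derives the dew point from dry-bulb temperature and relative humidity using the standard Magnus approximation and tests both values against the limits above. The example reading is hypothetical.

    import math

    # Sketch: derive the dew point from dry-bulb temperature and relative humidity with the
    # Magnus approximation, then check both against the envelope described above
    # (relative humidity under 60%, dew point between 5.5 and 15 degC).
    MAGNUS_B, MAGNUS_C = 17.62, 243.12  # Magnus coefficients for water vapour, roughly -45..60 degC

    def dew_point_c(temp_c: float, rh_percent: float) -> float:
        gamma = math.log(rh_percent / 100.0) + (MAGNUS_B * temp_c) / (MAGNUS_C + temp_c)
        return (MAGNUS_C * gamma) / (MAGNUS_B - gamma)

    def within_envelope(temp_c: float, rh_percent: float) -> bool:
        dew_point = dew_point_c(temp_c, rh_percent)
        return rh_percent < 60.0 and 5.5 <= dew_point <= 15.0

    # Hypothetical reading: 24 degC at 50% RH gives a dew point of about 12.9 degC, inside the envelope.
    print(dew_point_c(24.0, 50.0), within_envelope(24.0, 50.0))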

Understand Airflow

For optimal tuning of data centre conditions, it is important to understand the principles behind airflow management, says Wright Line’s Cottuli. He points out that cooling models in legacy data centres employ an open supply and return air methodology that drives mixing of the supply and return air streams. Overall statistics show that 40 to 50% or more of the energy consumption in a data centre goes to cooling, Cottuli says. But, he warns, behind these high percentages lies a one to two times oversupply of cold air pumped into the system.

According to Cottuli, IT professionals need to understand the reasons behind this provisioning in order to run a more efficient and reliable data centre. Thus, airflow patterns, including the effects of recirculation, air stratification, and bypass air, must be understood.

“Dramatically reducing these often overlooked issues and employing some common-sense data centre practices will help you eliminate the chaos,” Cottuli adds.
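
A rough back-of-the-envelope calculation shows how such an oversupply can be spotted. The sketch below uses the standard sensible-heat relationship to estimate the airflow a given IT load needs for a chosen temperature rise across the servers, then compares it with the airflow actually being supplied; the load, temperature rise, and supplied-airflow figures are illustrative assumptions.

    # Rough airflow-provisioning arithmetic behind the oversupply point above, using the standard
    # sensible-heat relationship: airflow (m^3/s) = heat load (kW) / (density * specific heat * delta-T).
    # The IT load, the temperature rise, and the supplied airflow are illustrative assumptions.
    AIR_DENSITY = 1.2          # kg/m^3 at typical room conditions
    AIR_SPECIFIC_HEAT = 1.005  # kJ/(kg*K)

    def required_airflow_m3s(it_load_kw: float, delta_t_c: float) -> float:
        """Airflow needed to carry away the IT heat load at the given temperature rise."""
        return it_load_kw / (AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t_c)

    it_load_kw = 100.0   # hypothetical IT load
    delta_t_c = 10.0     # hypothetical temperature rise across the servers
    supplied_m3s = 16.0  # hypothetical cold air actually delivered by the cooling units

    needed = required_airflow_m3s(it_load_kw, delta_t_c)
    print(f"Needed: {needed:.1f} m^3/s, supplied: {supplied_m3s:.1f} m^3/s, "
          f"oversupply factor: {supplied_m3s / needed:.1f}x")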

Reproduced with kind permission from AVTECH Software Inc.
