The data center temperature debate

Though never directly articulated by any data center authority, the prevailing practice around these critical facilities has often been "The cooler, the better." However, some leading server manufacturers and data center efficiency experts share the view that data centers can run much hotter than they do today without sacrificing uptime, with significant savings in both cooling costs and CO2 emissions. One server manufacturer recently announced that its server rack can operate with inlet temperatures of 104 degrees F.

Why push the envelope? The cooling infrastructure consumes a great deal of energy. Running 24/7/365, it draws substantial electricity to maintain what has long been considered the optimal computing environment, typically 55 to 65 degrees F. (ASHRAE's current "recommended" range is 18-27 degrees C, or 64.4 to 80.6 degrees F.)
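
For reference, the conversion behind the figures in parentheses is easy to check; the short sketch below (Python, added here purely for illustration) goes no further than the numbers already quoted.

```python
# Sanity check of the unit conversion quoted above: ASHRAE's recommended
# 18-27 degrees C band versus the legacy 55-65 degrees F setpoints.
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

print(f"ASHRAE recommended range: {c_to_f(18):.1f}-{c_to_f(27):.1f} F")  # 64.4-80.6 F
print("Legacy practice of 55-65 F sits at or below the bottom of that range.")
```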

To achieve efficiencies, a number of influential end users are running their data centers warmer and advising their contemporaries to do the same. But the process isn’t as simple as turning up the thermostat in your house. Here are some of the key arguments and considerations:

Claim: Raising the server inlet temperature will result in significant energy savings.

Arguments for:

o Sun Microsystems, a leading hardware manufacturer and data center operator, estimates a 4% savings in energy costs for every degree increase in server inlet temperature. (Miller, 2007)

o A higher temperature setting can mean more hours of "free cooling" through air-side or water-side economizers. This is especially compelling for an area like San Jose, California, where outdoor air (dry bulb) temperatures are 70°F or below for 82% of the year. Depending on geography, the annual savings from economization could exceed six figures (a rough back-of-envelope sketch follows this list).
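
The two figures above lend themselves to a rough back-of-envelope estimate. The sketch below is illustrative only: the 4% per degree F factor is Sun's estimate, the 82% figure is the San Jose number quoted above, and the baseline cooling cost and setpoint increase are hypothetical placeholders, not data from any facility.

```python
# Back-of-envelope savings estimate from raising the server inlet setpoint.
# All inputs are assumptions or figures quoted in the article, not measurements.

baseline_cooling_cost = 500_000   # USD per year, hypothetical baseline
savings_per_degree = 0.04         # 4% per degree F (Sun Microsystems estimate)
setpoint_increase_f = 8           # e.g. moving the setpoint from 65 F to 73 F

# Simple (non-compounded) estimate of the setpoint-related savings.
setpoint_savings = baseline_cooling_cost * savings_per_degree * setpoint_increase_f

# Hours per year an air-side economizer could displace mechanical cooling
# in a climate like San Jose's (82% of the year at or below 70 F).
economizer_hours = 8760 * 0.82

print(f"Estimated setpoint savings: ${setpoint_savings:,.0f} per year")
print(f"Potential free-cooling hours: {economizer_hours:,.0f} of 8,760")
```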

Counterarguments:

o The refrigeration infrastructure has certain design set points. How do we know that increasing the server inlet temperature will not create a false economy, with the savings offset by additional and unnecessary energy consumption elsewhere, such as in server fans or in the cooling plant's pumps and compressors?

o Free cooling, while great for new data centers, is an expensive proposition for existing ones. The entire cooling infrastructure would require re-engineering, which could be cost prohibitive and unnecessarily complex.

o The costs of temperature-related equipment failures or downtime will offset the savings gained from a higher temperature set point.

Claim: Raising server inlet temperatures complicates equipment reliability, recovery, and warranties.

Arguments for:

o Intake air and exhaust air are often mixed in a data center. Temperatures are kept low to compensate for this mixing and to keep the server inlet temperature within the ASHRAE recommended range. Raising the temperature could exacerbate already existing hotspots.

o Cool temperatures provide a cool air envelope in the room, an advantage in the event of a cooling system failure. Staff may have more time to diagnose and repair the problem and, if necessary, properly shut down the equipment.

o In the case of the 104 degree F server, what is the likelihood that every piece of equipment, from storage to networking, remains reliable at that temperature? Would all warranties still apply at 104 degrees F?

Counterarguments:

o Raising the temperature of the data center is one part of an efficiency program. The temperature rise should follow best practices in airflow management: use blanking panels, seal cable cutouts, remove cable obstructions under raised flooring, and implement some form of air containment. These measures can effectively reduce the mixing of hot and cold air and allow for a practical and safe temperature rise.

o The 104 degree F server is an extreme case that encourages thoughtful discussion and critical inquiry among data center operators. After careful study, a facility that once operated at 62 degrees F might now operate at 70 degrees F. Changes of that kind can significantly improve energy efficiency without compromising equipment availability or warranties.

Claim: Servers are not as fragile and sensitive as you might think. Studies carried out in 2008 underline the resilience of modern hardware.

Arguments for:

o Microsoft ran servers in a tent in the humid Pacific Northwest from November 2007 to June 2008. They experienced no failures.

o Using an air-side economizer, Intel subjected 450 high-density servers to the elements: temperatures as high as 92 degrees F and relative humidity ranging from 4 to 90%. The server failure rate during this experiment was only marginally higher than that of Intel's enterprise installations.

o Data centers can operate in the 80s and still be ASHRAE compliant. The upper limit of ASHRAE's recommended temperature range has increased to 80.6 degrees F (up from 77 degrees F).

Counterarguments:

o High temperatures, over time, affect server performance. The server’s fan speed, for example, will increase in response to higher temperatures. This wear and tear can shorten the life of the device.

o Studies from data center giants like Microsoft and Intel may not be relevant to all companies:

o Their huge data center footprints are more immune to the occasional server failure that can result from excessive heat.

o They can leverage their purchasing power to receive gold-plated warranties that allow for higher temperature settings.

o They most likely refresh their hardware at a faster rate than other companies. If a server dies completely after 3 years, no problem; a smaller company may need that server to last longer than 3 years.

Claim: Higher inlet temperatures can create uncomfortable working conditions for data center staff and visitors.

Arguments for:

o Consider the 104 degree F rack. The hot aisle could be anywhere between 130 and 150 degrees F. Even the high end of the ASHRAE recommended range (80.6 degrees F) would result in hot aisle temperatures of around 105 to 110 degrees F. Staff performing maintenance on these racks would endure very uncomfortable working conditions (the rough arithmetic is sketched after this list).

o In response to higher temperatures, server fan speeds will increase to move more air. Faster fans raise the noise level in the data center, which can approach or exceed OSHA sound limits and require occupants to wear hearing protection.
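
The hot-aisle figures above follow from simple arithmetic: exhaust temperature is roughly the inlet temperature plus the temperature rise across the server, commonly on the order of 20 to 35 degrees F and higher for dense racks. The sketch below restates that arithmetic; the delta-T values are illustrative assumptions, not measurements.

```python
# Rough hot-aisle estimate: exhaust temperature = inlet + delta-T across the server.
# The delta-T values below are assumed for illustration; dense racks can run higher.

def hot_aisle_temp_f(inlet_f, server_delta_t_f):
    """Estimate hot-aisle (exhaust) temperature in degrees F."""
    return inlet_f + server_delta_t_f

for inlet in (80.6, 104.0):        # ASHRAE recommended upper limit, 104 F rack
    for delta_t in (25, 35):       # assumed per-server temperature rise (F)
        print(f"inlet {inlet:5.1f} F, delta-T {delta_t} F "
              f"-> hot aisle ~{hot_aisle_temp_f(inlet, delta_t):5.1f} F")
```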

Counterarguments:

o It goes without saying that as the server inlet temperature rises, so does the hot aisle temperature. Companies must carefully balance worker comfort and energy efficiency efforts in the data center.

o Not all data center environments are heavily occupied by staff. Some supercomputing/high-performance installations operate in a lights-out environment and contain a homogeneous collection of hardware. These environments are well suited to higher temperature set points.

o The definition of a data center is more fluid than ever. A traditional facility can add instant computing power through a containerized data center without an expensive construction project. The container, separated from the rest of the building, can operate at higher temperatures and achieve higher efficiency (some close-coupled cooling products work in a similar way).

Recommendations

The move to increase data center temperatures is gaining ground but will face opposition until these concerns are addressed. Reliability and availability sit at the top of any IT professional's list of performance requirements. For this reason, most have so far decided to err on the side of caution: keep it cool at all costs. However, higher temperatures and reliability are not mutually exclusive. There are ways to safeguard your data center investments while becoming more energy efficient.

Temperature is inseparable from airflow management; data center professionals must understand how air circulates into, through, and out of server racks. Computational fluid dynamics (CFD) can help by analyzing and mapping projected airflow across the data center floor, but cooling equipment doesn't always run to specification, and the input data can miss key obstructions. On-site monitoring and adjustment therefore remain critical to ensuring that CFD data and calculations stay accurate.
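
One practical way to act on that advice is to compare CFD-predicted inlet temperatures against readings from on-site sensors and flag racks where the model drifts from reality. The sketch below is a minimal illustration; the rack names, readings, and 3-degree tolerance are assumptions, not values from any real deployment.

```python
# Minimal sketch: flag racks where the measured inlet temperature diverges from
# the CFD prediction by more than an assumed tolerance (3 degrees F here).

cfd_predicted_f = {"rack-A1": 72.0, "rack-A2": 74.5, "rack-B1": 71.0}  # hypothetical
measured_f = {"rack-A1": 73.1, "rack-A2": 79.8, "rack-B1": 70.2}       # hypothetical
TOLERANCE_F = 3.0

for rack, predicted in cfd_predicted_f.items():
    actual = measured_f[rack]
    if abs(actual - predicted) > TOLERANCE_F:
        print(f"{rack}: measured {actual} F vs CFD {predicted} F "
              f"-- revisit the model or check for missed obstructions")
```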

Overcooled data centers are prime candidates for raising the temperature set point. Those with hot spots or insufficient cooling can start with inexpensive remedies like blanking panels and grommets. Close-coupled cooling and containment strategies are especially relevant, since they isolate server exhaust air, which is often the cause of thermal challenges, and keep it out of the cold aisle.

With airflow addressed, users can focus on finding their “sweet spot” – the ideal temperature setting that aligns with business requirements and improves energy efficiency. Finding it requires proactive measurement and analysis. But the rewards—lower energy bills, improved carbon footprints, and a corporate responsibility message—are well worth the effort.
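
As one illustration of what "proactive measurement and analysis" might look like, the sketch below checks a set of measured inlet temperatures against the ASHRAE recommended band cited earlier and reports the remaining headroom; the readings are hypothetical.

```python
# Hypothetical inlet-temperature readings (degrees F) from rack-level sensors.
inlet_readings_f = [68.2, 70.1, 72.4, 69.8, 75.3, 71.0]

ASHRAE_LOW_F, ASHRAE_HIGH_F = 64.4, 80.6   # recommended range cited above

hottest = max(inlet_readings_f)
headroom = ASHRAE_HIGH_F - hottest
in_range = all(ASHRAE_LOW_F <= t <= ASHRAE_HIGH_F for t in inlet_readings_f)

print(f"All readings within the recommended range: {in_range}")
print(f"Hottest inlet: {hottest} F, headroom to 80.6 F: {headroom:.1f} F")
# A cautious operator might raise the setpoint by only part of that headroom,
# re-measure, and then decide whether to go further.
```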
