Facebook Aims For The Moving Target Of Data Center Optimization

Jay Park, director of Facebook Data Center Engineering, recently posted a fascinating, encouraging note on the topic of data center optimization.

It’s easy to forget that the intangible, ever-shifting Internet exists somewhere in physical reality. That is, it relies on actual machines – on bits of metal, plastic, and silicon (and blood, sweat, and tears) – to function. It’s not just floating out there in the ether (yet). As it grows in scope, so too must its physical underpinnings.

Facebook is a perfect example of this growth and the attendant need for on-the-ground solutions. It is hugely popular and growing every day, and sustaining that growth requires vast amounts of resources. You can amass more resources, optimize the ones you already have, or do both. Humans have famously favored the first approach in days past, but Park and the engineering team at Facebook are committed to optimizing their data centers.

In his note, Park highlights the three primary optimization issues with current data centers: inefficient airflow distribution, excessive cooling, and unnecessarily low rack inlet and chilled water temperatures.

Airflow distribution was made more efficient by implementing cold aisle containment in each server room. Enclosing each cold aisle eliminated exposure to recirculated hot exhaust, and airflow was further improved by sealing cable openings to minimize air leaks.

Data centers run hot and require cooling, but Facebook found theirs were running cooler than they needed to be. Optimizing fan speeds while maintaining proper temperatures saved precious power, and shutting down unnecessary computer room air handler (CRAH) units saved even more.
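
Park’s note doesn’t spell out why slowing fans is so lucrative, but the usual explanation is the fan affinity laws: a fan’s power draw scales roughly with the cube of its speed, so modest slowdowns pay off disproportionately. The sketch below is purely illustrative; the example speeds are assumptions, not figures from Facebook.

    # Illustrative sketch, not from Park's note: fan affinity laws say a fan's
    # power draw scales roughly with the cube of its speed.
    def fan_power_ratio(new_speed_pct, old_speed_pct=100.0):
        """Fraction of original fan power after slowing the fan (affinity law)."""
        return (new_speed_pct / old_speed_pct) ** 3

    for new_speed in (90, 80, 70):
        saving = 1 - fan_power_ratio(new_speed)
        print(f"Running a CRAH fan at {new_speed}% speed cuts its power draw by ~{saving:.0%}")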

Facebook engineers also discovered they could raise the rack inlet and chilled water temperatures slightly without compromising the mission. CRAH unit return air was bumped up to 81 degrees F from 72 degrees, and chilled water went up from 44 to 52 degrees F. Chilled water system load was reduced by 171 tons per hour without a negative impact on data center function.
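
For a rough sense of what a 171-ton reduction means in electrical terms, here is a back-of-the-envelope sketch. The conversion of one ton of refrigeration to about 3.517 kW is standard; the 0.6 kW-per-ton chiller efficiency and the year-round duty cycle are assumptions of mine, not numbers from the note.

    # Back-of-the-envelope conversion of the reported 171-ton load reduction.
    # ASSUMED_KW_PER_TON and the 24/7 duty cycle are assumptions, not Facebook figures.
    TON_TO_KW_THERMAL = 3.517    # 1 ton of refrigeration = 12,000 BTU/hr ≈ 3.517 kW
    ASSUMED_KW_PER_TON = 0.6     # typical water-cooled chiller plant efficiency (assumed)
    HOURS_PER_YEAR = 8760

    load_reduction_tons = 171
    thermal_kw = load_reduction_tons * TON_TO_KW_THERMAL       # ~601 kW of heat no longer removed
    electrical_kw = load_reduction_tons * ASSUMED_KW_PER_TON   # ~103 kW of chiller demand avoided
    annual_kwh = electrical_kw * HOURS_PER_YEAR                # ~900,000 kWh/year if sustained

    print(f"~{thermal_kw:.0f} kW thermal, ~{electrical_kw:.0f} kW electrical, "
          f"~{annual_kwh:,.0f} kWh/year under these assumptions")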

All told, implementing these changes in a single data center yielded annual savings of 2,418,000 kilowatt-hours and a demand reduction of 276 kilowatts. That’s $230,000 saved and 967 metric tons of greenhouse gas emissions avoided annually (according to EPA calculations, roughly the annual emissions of 332 cars or the electricity use of 211 homes).
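
Those headline numbers are internally consistent; working backwards from them shows the per-unit factors they imply. This is just arithmetic on the figures above, not a citation of EPA documents.

    # Back-derive the unit factors implied by the reported savings (pure arithmetic).
    annual_kwh_saved = 2_418_000
    annual_dollars_saved = 230_000
    annual_co2e_tons = 967          # metric tons of CO2-equivalent
    cars_equivalent = 332
    homes_equivalent = 211

    print(f"Implied electricity price:    ${annual_dollars_saved / annual_kwh_saved:.3f} per kWh")
    print(f"Implied grid emission factor: {annual_co2e_tons * 1000 / annual_kwh_saved:.2f} kg CO2e per kWh")
    print(f"Implied per-car emissions:    {annual_co2e_tons / cars_equivalent:.1f} t CO2e per year")
    print(f"Implied per-home electricity: {annual_co2e_tons / homes_equivalent:.1f} t CO2e per year")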

And that’s just one data center. Park says similar measures are being taken in others, too. Best of all, Park and his team continue to hunt for further optimizations and vow to share their findings with the rest of the industry.