Live Blogging Facebook’s “Open Compute Project” — Opening Up Data Center, Server Tech

We’re here live at Facebook headquarters for a press event about Facebook’s data centers and servers, and something the company is calling the “Open Compute Project.”

Company chief executive Mark Zuckerberg has taken the stage. Live-streaming video is here. Our paraphrased live blog, below. The official Facebook Engineering post on the announcement, here.

10:20

The type-ahead search feature — we needed to build the capacity before we could add the feature. Same with real-time comments. What all this ends up being is extra capacity, and the bottleneck is being able to power the servers.

What we've found over the years — we have a lot of social products… — is that a lot of infrastructure design goes into this: databases, caching, web tiers, and so on.

Over the years we've honed this and organized our data centers. As we've transitioned from being a small, one-office-in-a-garage type of startup, we've found there are really a couple of ways you can go about designing this stuff. You can build it yourself and work with OEMs, or you can take whatever products the mass manufacturers put out. What we found is that what the mass manufacturers had wasn't exactly in line with what we needed. So we did custom work, geared toward building social apps.

We're sharing what we've done with the industry, making it open and collaborative, so that developers can easily build startups.

We're not the only ones who need the kind of hardware we're building out. By sharing it, we think there will be more demand, which will drive up efficiency and scale and make it all more cost-effective for everyone building this kind of stuff.

10:23

Jonathan Heiliger, vice president of technical operations, on stage:

We've started innovating in the servers and data centers that house our software.

First, what it's like to lease a data center. I once leased an apartment and wanted to change the paint color, but the landlord wouldn't let me. In the same way, leasing a data center doesn't allow much customization.

We started this project about a year and a half ago with two goals in mind. Two benefits: it's really good for the environment, and it's a really smart use of our resources as a small and growing company.

PUE: the ratio of the total power coming into the data center to the power that goes into actual computing. The ideal is 1.0 — all power going to computing. The industry average is 1.5. We're at 1.4 to 1.6 in our leased centers. Our Prineville (OR) center is now at 1.07.
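For reference, here's a minimal sketch of the PUE arithmetic being described — the kilowatt figures below are illustrative assumptions, not Facebook's actual loads:

```python
def pue(total_facility_power_kw: float, it_power_kw: float) -> float:
    """Power usage effectiveness: total power drawn by the facility divided by
    the power that reaches the actual computing equipment. 1.0 is ideal."""
    return total_facility_power_kw / it_power_kw

# Hypothetical example: 1,000 kW of servers behind varying facility overhead.
print(pue(1500.0, 1000.0))  # 1.5  -> the industry average cited above
print(pue(1070.0, 1000.0))  # 1.07 -> the Prineville figure cited above
```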

Key term: the megawatt you never see and never use — in other words, make your power usage more efficient and effective. You may think we've had hundreds of engineers on this. It was just three people: Amir Michael, Ted Lowe, and Pierre Luigi — plus Ken Patrick, our data center operations head. We built a lab at headquarters, and the team worked using best practices from the industry.

As such, we believe in giving back.

10:30

We're sharing our data center designs and schematics. A few more people will walk through this.

Benefits to Facebook from the servers: they're 38% more efficient. That kind of efficiency tends to come at a cost — think of an LED light bulb versus an incandescent one: more efficient, but it costs ten times as much.

Jay Park, head of data center design, and Amir Michael will be explaining.

Park is now on stage.

Let me give you a little history about Prineville. There were three site-selection criteria: power, network connectivity, and a climate that maximizes cooling.

Here's how power is delivered — the most efficient way to get power from the substation to the motherboard. In a typical data center you'll see approximately 11% to 17% power loss; we experienced a total loss of only 2%.
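As a rough illustration of what those loss percentages mean in practice — the 10 MW utility feed below is a made-up assumption, not a Prineville specification:

```python
def delivered_power_mw(input_mw: float, loss_fraction: float) -> float:
    """Power that actually reaches the motherboards after distribution losses."""
    return input_mw * (1.0 - loss_fraction)

input_mw = 10.0  # hypothetical utility feed into the facility
for label, loss in [("typical (low end)", 0.11),
                    ("typical (high end)", 0.17),
                    ("2% loss cited above", 0.02)]:
    print(f"{label}: {delivered_power_mw(input_mw, loss):.1f} MW delivered")
```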

In a typical center there are four steps of power transformation happening… we deliver power straight from the power supply. When you see the design, it looks simple, but we had to work out quite a bit of detail. We started this project about two years ago and couldn't quite agree on a lot of things. Then the whole idea came to me in the middle of the night; I didn't have anything to write on, so I picked up a dinner napkin and started writing on it.
