We’re here live at Facebook headquarters for a press event about Facebook’s data centers and servers, and something the company is calling the “Open Compute Project.”
Type-ahead search — we needed to build the capacity before we could add the feature. Same with real-time comments. All this ends up being is extra capacity, bottlenecked on being able to power the servers.
What we found over the years — we have a lot of social products — is that a lot of system design goes into this: databases, caching, web serving, and so on.
Over the years we’ve honed this and organized our data centers. As we’ve transitioned from being a small, one-office-in-a-garage type of startup, we've found there are really a couple of ways you can go about designing this stuff. You can build it yourself and work with OEMs, or you can buy whatever products the mass manufacturers put out. What the mass manufacturers had wasn’t exactly in line with what we needed, so we did custom work, geared toward building social apps.
We’re sharing what we’ve done with the industry to make it open and collaborative, so developers can more easily build startups.
We’re not the only ones who need the kind of hardware we’re building. By sharing it, we think there will be more demand, which will drive up efficiency and scale and make it all more cost-effective for everyone building this kind of stuff.
Jonathan Heiliger, vice president of technical operations, on stage:
We’ve started innovating in the servers and data centers that run the software.
First, what it’s like to lease a data center. When I leased an apartment and wanted to change the paint color, the landlord wouldn’t let me. Leasing a data center is like that — it doesn’t allow much customization.
We started this project about a year and a half ago with two goals in mind — two benefits: it’s really good for the environment, and it’s a really smart use of our resources as a small and growing company.
PUE: the ratio of the total power coming into the data center to the power going into actual computing. The ideal is 1.0 — all of it goes to computing. The industry average is 1.5; we’re at 1.4 to 1.6 in our leased centers. Our Prineville (OR) center is now at 1.07.
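The PUE figures quoted above can be sketched as a quick calculation — a minimal illustration, with the 1,000 kW IT load being an assumed example figure, not one from the presentation:

```python
# PUE (Power Usage Effectiveness) = total facility power / power reaching the IT gear.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Ratio of all power entering the data center to power used for actual computing."""
    return total_facility_kw / it_equipment_kw

# Assumed illustrative IT load of 1,000 kW:
industry_average = pue(1500.0, 1000.0)  # 1.5  -> 500 kW of overhead
prineville = pue(1070.0, 1000.0)        # 1.07 -> only 70 kW of overhead
```

At a PUE of 1.07, cooling and power distribution consume only 7% on top of the computing load, versus 50% at the industry-average 1.5.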
Term: negawatt — the megawatt you never see or never use, by making your power usage more efficient and effective. You may think we’ve had hundreds of engineers on this. It was just three people — Amir Michael, Ted Lowe, and Pierluigi — plus Ken Patchett, head of data center operations. We built a lab at headquarters, and the team worked using best practices in the industry.
As such, we believe in giving back.
We’re sharing our data center designs and schematics. A few more people will walk through this.
Benefits to Facebook from these servers: 38% more efficient. Efficiency tends to come at a cost — think of an LED light bulb versus an incandescent: more efficient, but it costs ten times as much.
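The LED-versus-incandescent tradeoff mentioned above comes down to a payback calculation. A minimal sketch — all of the wattages, prices, and the electricity rate below are assumed illustrative numbers, not figures from the presentation:

```python
# Hypothetical numbers: how long until the pricier, efficient option pays for itself.
incandescent_watts, led_watts = 60.0, 9.0   # assumed bulb wattages
incandescent_price, led_price = 1.0, 10.0   # assumed prices: LED costs 10x
electricity_rate = 0.12                     # assumed $/kWh

# Energy savings per hour of operation, in dollars.
savings_per_hour = (incandescent_watts - led_watts) / 1000.0 * electricity_rate

# Hours of use needed to recover the higher purchase price.
breakeven_hours = (led_price - incandescent_price) / savings_per_hour
# ~1,470 hours before the efficient option comes out ahead
```

The same logic applies to efficient server hardware: a higher up-front cost is recovered through lower power draw over the equipment's lifetime.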
Jay Park, head of data center design, and Amir Michael will be explaining.
Park is now on stage.
Let me give you a little history about Prineville. We had three criteria: power, network connectivity, and a climate that maximizes cooling.
Here’s how power is delivered — the most efficient way to get power from the substation to the motherboard. In a typical data center, you’ll see approximately 11% to 17% power loss. We experienced a total loss of only 2%.
In a typical center there are four steps of power transformation happening; we deliver power straight from the power supply. When you see the design, it looks simple, but we had to work out quite a bit of detail. We started this project about two years ago, and we couldn’t quite agree on a lot of things. Then one day the whole idea came to me in the middle of the night; I didn’t have anything to write on, so I picked up a dinner napkin and started writing on it.
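The loss figures above follow from compounding the efficiency of each transformation stage. A sketch of that arithmetic — the per-stage efficiencies here are assumed for illustration only; the presentation gives only the end-to-end totals:

```python
# Each transformer or AC/DC conversion stage wastes a few percent; losses compound.
def end_to_end_loss(stage_efficiencies):
    efficiency = 1.0
    for stage in stage_efficiencies:
        efficiency *= stage
    return 1.0 - efficiency

# Assumed per-stage efficiencies, illustrative only:
typical_chain = [0.97, 0.94, 0.97, 0.96]  # four transformation steps
prineville_chain = [0.995, 0.985]         # fewer, higher-efficiency steps

typical_loss = end_to_end_loss(typical_chain)       # ~15% lost, in the 11-17% range
prineville_loss = end_to_end_loss(prineville_chain) # ~2% lost
```

Cutting the number of transformation steps matters more than squeezing any single stage, since the losses multiply through the chain.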
We use 100% outside air to cool the data center — no conventional air conditioning. The system brings in cold air from outside and forces it down into the server area; hot air is collected, rises up and out, and is dumped outside. During wintertime, we use that air to heat the office as well.
1. A 480-volt electrical distribution system providing 277 volts directly to each server.
2. Localized uninterruptible power supplies, each serving six racks of servers.
3. A ductless evaporative cooling system.
Amir Michael is coming on stage.
The chassis: we removed everything extra and made it slightly taller. That let us use taller heat sinks — more efficient — and larger fans — also more efficient: not only less air, but less energy. We thought about the data center technicians who swap hard drives in and out and fix motherboards and CPUs: everything comes together with almost no tools, using snaps and spring-loaded plungers instead.
We threw a party for engineers: chicken wings, beer, and servers, and we took lots of notes about how people interacted with the hardware. People practiced on the motherboard. We did this with Quanta, our partner in Taiwan. Efficiency on the motherboard reaches 94.5%. It all comes together and we put it in our rack.
Three columns of servers, 90 in total. Deploying is much faster. We built panels in the back and punched shelves. He shows how easy it is to pull a server out.
The battery pack: in the event of a power failure, it discharges into the servers — enough to keep them going. It’s as easy to maintain as a traditional UPS. Lots of sensors all report back on the health of the batteries.
When we got to doing the design, we lit it all in blue light. Quanta said blue LEDs would cost 7 cents each; green would cost only 2 cents. But we went with blue.
Shows a short video about it.
Heiliger is back up. He introduces Om Malik from GigaOm, who will moderate a panel of peers about it.
Om asks Allen what this means to people like him, and the larger context in the industry. Allen: At Zynga, we’re in the business of play, and play should be fun. But making that happen for 250 million people requires a lot of infrastructure. We deploy private data centers, and we use a private cloud as well — one of the world’s largest. We’re intrigued by using Open Compute as part of that. We’re definitely considering it.
Om: Graham, can you talk about cost savings?
Graham: Rackspace will reach $140 million in revenue this year; we add servers all the time. Servers should be a service.
Om: How much does typical data center cost?
Graham: The rule of thumb is that 1 megawatt equals $1 million in power costs per year.
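That rule of thumb checks out as back-of-the-envelope arithmetic — a sketch assuming an illustrative electricity rate of $0.11/kWh, which is not a figure from the panel:

```python
# Sanity check of "1 MW ~= $1M/year in power costs".
megawatts = 1.0
hours_per_year = 365 * 24    # 8,760 hours
rate_per_kwh = 0.11          # assumed $/kWh, illustrative

# 1 MW = 1,000 kW running continuously for a year.
annual_cost = megawatts * 1000 * hours_per_year * rate_per_kwh
# ~$963,600 -- close to the $1M rule of thumb
```

The rule holds for any rate near $0.11/kWh; at higher commercial rates the annual cost exceeds $1M.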
Om: In terms of you guys — will you be actual users?
Graham: Yes, we will. We’ve been developing our own IP but we’ll be flushing some of that to go with an open standard. Rackspace has believed in open source from the beginning.
Om: Michael, people don’t really pay much attention to the government, but it spends many millions a year.
Michael: First, I want to thank Mark and company for engaging the public sector. I work in a unique environment: I’m a federal agency CIO and also work in the Department of Energy, around energy efficiency and grid systems. I also work on the federal data center consolidation initiative, a very ambitious undertaking. We’re developing some of our own designs and working with industry to bring efficiencies into play. The new designs in the Open Compute Project need to be factored in. I’m here working on tech transfer.
Om: Looking at doing data centers with this tech?
Michael: Yes, in combination with other tech.
Om: Forrest, how does this impact Dell’s business going forward?
Forrest: We’ve been partnered with Facebook for the past three years. Creating an open standard is part of our DNA — it’s really a repudiation of the proprietary approach. Open standards foster innovation and create a community of folks who can innovate together, giving companies throughout the ecosystem an opportunity to innovate and add value.
Allen: We’ve increased our server capacity 75 times over the last two years. We welcome all of this as an option — an open standard — and we’re looking to Facebook, and working with them and everyone else, to drive the industry forward.
Om: Forrest, will you be introducing Open Compute products?
Forrest: We’re doing that already. It’s great for very large deployments, but smaller companies may not need so large a bite. How do we make this tech accessible to companies of all scales? We’ll bring it into our C line of server products.
Om: Frank, what do you get out of this?
Frank: We hope to benefit by accelerating innovation and moving the industry forward.
Om: If somebody uses your server designs, what happens? How much of the IP is clean? What if somebody tries to innovate on top of it?
Frank: Everything we’re publishing today is a specification that Facebook developed. We went through them with our partners to make sure there was no IP in there that they didn’t want shared broadly. It’s governed by the Open Web Foundation agreement — use without license fees, and so on.
Om: How many people are actually using it?
Frank: I haven’t counted, but I’d guess 10 to 15 partners.
Om: Is it coming to the rest of the market soon? Can Rackspace and Zynga go get them now?
Frank: We’ve obviously deployed a number already, and we’ve also sent units out for other people to test. The Dell team has integrated our specific motherboards into their products, available today. Cynix is also using it.
Om: What does this mean for a startup, like Instagram? There are a lot of people in the room who really don’t care about data centers.
Allen: We’re seeing the emergence of a new stack — data center, server, software on top — and faster apps. Innovation has been separate in servers, data centers, and software; tying it all together drives costs down.
The numbers we put out today were part of our most recent benchmark. There’s a massive boom in other parts of the world.
Cloud computing is driving an enormous expansion in data center efficiency. The amount of work has become a major cost, and it also has an environmental impact. It’s more efficient than servers in your office — but as costs come down, you need more of it.
Jason: Every data center here looks great — go into emerging countries and you’ll see far less optimized standards. By opening up and building awareness of how to build a data center and how to make an efficient server, a lot of places around the world can benefit from this type of information.
Om: Forrest, you just launched a big cloud computing effort in China, and you’ve said there are a lot of data centers there and in other parts of the world.
Forrest: The developing world is on the same trajectory. The ramp of internet utilization is absolutely phenomenal — there’s huge demand over there. The data centers, as Jason said, are of 1995 vintage: very old, with low power levels. The problem is becoming acute. You’ll see an opportunity for internet companies in the developing world to take a leap forward, jumping over the last 15 years and exploiting the latest we have available. All these initiatives will make it very easy for those companies to jump forward.
Om: Allen, you’ve worked with data centers for the past 15 years. Can you talk about how they’ve evolved, and how they stand up to the current generation of hardware?
Allen: Back in the early days, things were a lot more discrete — elements that didn’t work together in unison. We built around cooling efficiency; we built to move air.
Please view the video of the event for full details from the presentation and panel.