Readers have written to us about commenting becoming real time, and the feature looked so natural that we'd presumed it had been around for longer than just a couple of weeks. Today Facebook officially announced that the upgrade launched in mid-January, and offered the developer community an explanation of how this deceptively simple-looking feature evolved.
Every minute, Facebook serves more than 100 million pieces of content that might receive a comment, and in that same minute users post about 650,000 comments; that's 16 million new associations per second. To make commenting real time, the social network needed to keep track of who's looking at what at any given moment, which called for the invention of new systems to handle the load patterns.
The solution ultimately became distributed storage tiers that update locally at high frequency while retrieving information from remote data centers less frequently. Facebook engineer Ken Deeter calls this approach "write locally, read globally." He explains:
For example, when a user loads his News Feed through a request to our data center in Virginia, the system writes to a storage tier in the same data center, recording the fact that the user is now viewing certain pieces of content so that we can push them new comments. When someone enters a comment, we fetch the viewership information from all of our data centers across the country, combine the information, then push the updates out. In practice, this means we have to perform multiple cross-country reads for every comment produced. But it works because our commenting rate is significantly lower than our viewing rate. Reading globally saves us from having to replicate a high volume of writes across data centers, saving expensive, long-distance bandwidth.
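The pattern Deeter describes can be sketched roughly as follows. This is a minimal illustration, not Facebook's actual code: the `DataCenter` class, the in-memory viewership stores, the data center names, and the push step (stubbed as a returned mapping) are all assumptions made for the example.

```python
# Sketch of "write locally, read globally": feed loads write viewership
# to the nearest data center only; a (much rarer) comment reads viewership
# from every data center, merges it, and pushes the update out.
from collections import defaultdict

class DataCenter:
    def __init__(self, name):
        self.name = name
        # Local viewership tier: content_id -> set of user_ids now viewing it.
        self.viewers = defaultdict(set)

    def record_view(self, user_id, content_id):
        # "Write locally": a feed load touches only this data center.
        self.viewers[content_id].add(user_id)

# Hypothetical data centers for the example.
DATA_CENTERS = [DataCenter("virginia"), DataCenter("california")]

def load_feed(local_dc, user_id, content_ids):
    # Record, in the local tier, which content this user is now viewing.
    for cid in content_ids:
        local_dc.record_view(user_id, cid)

def post_comment(content_id, comment):
    # "Read globally": fetch viewership from all data centers and merge it.
    # This cross-data-center read is affordable because comments are far
    # rarer than views.
    audience = set()
    for dc in DATA_CENTERS:
        audience |= dc.viewers[content_id]
    # Push the new comment to every current viewer (stubbed as a dict).
    return {user: comment for user in audience}

# Usage: two users viewing the same post from different data centers
# both receive the pushed comment.
load_feed(DATA_CENTERS[0], "alice", ["post:1"])
load_feed(DATA_CENTERS[1], "bob", ["post:1"])
updates = post_comment("post:1", "hello")
```

The point of the design, per the quote above, is that writes are never replicated across data centers; only the low-volume comment path pays the cross-country cost.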
Facebook posts explanations such as this one to its developer blog, most likely in the hope that third-party developers will adopt the "write locally, read globally" model. What kinds of applications do you think might be built on real-time commenting?