Inside Social Games starts scoring game reviews

Inside Social Games is changing its reviews policy today to include a three-point ratings scale organized by three simple words: Play, Skip and Wait.

What It Looks Like

The scale is based on time. The most important piece of information ISG can provide is whether or not a game is worth a reader’s time. It takes time to get into a new social game — setting up the permissions, going through the tutorial, adding friends, etc. Even the simplest games with the cleanest interfaces and shortest tutorials take a good five minutes from first click to actual gameplay — and with so many social games launching on Facebook and Google+, that might be time our readers don’t have.

A Play rating means it’s worth the reader’s time to play the game.

A Skip rating means that a game isn’t worth the reader’s time.

A Wait rating indicates that the game might not be worth the reader’s time right now, but it has the potential to grow into a game that earns a Play rating.

Our reviews will still provide gameplay analysis, screenshots, current monthly and daily active user totals as tracked by our AppData traffic monitoring service, and any context the developer can provide if we're able to reach them. As almost all games now monetize in the same ways and leverage the same social features, we won't mention these components unless a game does something new or interesting with them. We will share a bit of opinion based on our personal response to a game, but our reviews are intended as interpretive analysis rather than stand-up comedy.

How It Works

How we pick a rating for a game is based on our approach to social games overall. In contrast to consumer-facing video game publications like Gamezebo, we're analysts who cater to an audience of developers, investors and other industry insiders who need information to make informed decisions, not just about what they'll play for fun but about what they'll do with their own companies.

From this perspective, we judge social games based on one question: “Will this work?” That can mean several things for a game: it monetizes, it finds traction on its platform, or the developer is supporting it so thoroughly that we can expect to see it everywhere for the next year. Notice that we don’t bother to say that a game is “good”; that term is too subjective to have any meaning to our readers. We’ve seen plenty of “good” games on Facebook fail to monetize, fail to attract an audience or go offline after barely six months. Being “good” isn’t a guarantee that a game will work.

The flip side of that is that we won’t tell you a game is “bad” just because we don’t like it. Plenty of games do things we don’t like, such as spamming us, hitting us with pay walls right after the tutorial, playing repetitive sounds, or cloning a different game we’ve already spent months playing. Games may also simply fail to appeal to us because of art style or genre. None of these things would necessarily stop a game from working, however, unless one is bad enough to distract from or damage the core gameplay experience.

We will tell you when a game isn’t ready. Many times in 2010, our reviewers encountered games in early beta where graphics were missing, social features weren’t optimized and sometimes monetization wasn’t even implemented yet. In 2011, Zynga started providing hands-off press demos of new games just days before launch, sometimes with content that we couldn’t expect to see when the game went live because the developer would decide to cut it based on early user feedback. We could wait these games out and return to them when they might be ready, such as when a developer “officially” launches the game or once it hits a certain number of monthly active users. But so many games get lost in the shuffle as new titles launch that this approach is sloppy. It’s better that we see a game in the condition we found it (via social discovery, word-of-mouth or a sudden spike in AppData activity) and report back to readers in a timely fashion.

Why We’re Doing It

We’ve gone back and forth over the concept of scoring games since the blog launched in 2008. At first, we used a 10-point scale derived from scores in sub-categories like “graphics” or “sound” that resembled the old methods used by the traditional video games press. Sometime in 2009, the then-lead writer scrapped scores after deciding that the practice didn’t make much sense, because most of the leading games came from the same pool of developers and shared relatively similar themes and standards of quality. We brought scoring back briefly at the end of 2011 in the short-lived “What we’re playing” articles, but those pieces lacked context and focus.