Wooga: Building a Successful Social Game by Combining Metrics With Emotion

[Editor’s note: In the article below, Wooga product lead Stephanie Kaiser provides an in-depth look at her team’s development of successful Facebook game Monster World. Kaiser has also been speaking about her experience at Casual Connect in Seattle this week and in Hamburg earlier this year (you can find slides and video from that presentation here). This article is also being published in the latest edition of the Casual Connect magazine.]

Some games don’t become instant hits. Such was the case with Monster World, a farming game developed by wooga and launched in April 2010. wooga is the second-largest developer of social games on Facebook, and Monster World is its most successful title today, growing virally and monetizing well with over 1.6 million daily active users.

With this in mind, I will describe some lessons learned during the development of the game, structured around four topics: engagement, virality, monetization, and non-metric factors. But be forewarned: as you gain insight into the relevant key performance indicators, you will more than likely fall in love with at least one of wooga’s monster characters.


Monster World was not an immediate success after its April 2010 launch, but we were able to enhance the game step by step. Looking at the post-launch growth chart, our release cycles become very visible: we launch a new version of the game every Tuesday, and each Tuesday’s enhancements can be seen in the growth curve.

We began by improving features related to engagement, reasoning that without high user engagement, any enhancement to virality and monetization would be useless.

The KPIs related to engagement are one-, three-, and seven-day retention and a game’s sticky factor (daily active users divided by monthly active users). In addition, we dissected each step of the beginner’s tutorial and observed how many users reached steps one, two, and three, and how many finished the tutorial. Through A/B tests, we improved specific steps of the tutorial to help as many users as possible reach progressively higher levels within their first session.
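As a rough sketch of how these engagement numbers can be computed from raw install, login, and tutorial events (the data shapes and function names here are illustrative assumptions, not wooga’s actual pipeline):

```python
from datetime import date

def sticky_factor(dau, mau):
    """Sticky factor: daily active users as a share of monthly active users."""
    return dau / mau

def day_n_retention(installs, logins, n):
    """Share of a cohort that returns exactly n days after installing.

    installs: dict of user_id -> install date
    logins:   dict of user_id -> set of dates the user logged in
    (hypothetical shapes, for illustration only)
    """
    returned = sum(
        1 for uid, installed in installs.items()
        if any((d - installed).days == n for d in logins.get(uid, ()))
    )
    return returned / len(installs)

def tutorial_funnel(steps):
    """Step-to-step conversion rates for an ordered list of user-id sets,
    one set per tutorial step reached; a sharp drop flags a problem step."""
    return [len(cur) / len(prev) for prev, cur in zip(steps, steps[1:])]
```

A funnel result like `[0.9, 0.5, 0.95]` would point straight at the second step as the one losing users, which is exactly the kind of per-step reading described above.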

In an A/B test, we send a percentage of our users to a version of the game that includes the feature we want to test, while the rest see a version without it (the control group). By analyzing the relevant performance indicators afterwards, we decide which of the tested versions to continue developing. The versions must run simultaneously to keep outside factors (such as weather or Facebook downtime) from influencing the results.
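A minimal sketch of the two ingredients such a test needs: a stable split of users into test and control, and a significance check on the outcome. The function names and the choice of a two-proportion z-test are my illustration, not a description of wooga’s actual tooling:

```python
import hashlib
from math import sqrt

def assign_variant(user_id, experiment, test_share=0.5):
    """Deterministically bucket a user: the same user + experiment always
    lands in the same group, and different experiments split independently."""
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map hash to [0, 1)
    return "test" if bucket < test_share else "control"

def z_score(successes_a, n_a, successes_b, n_b):
    """Two-proportion z-test: |z| > 1.96 suggests the difference between
    the groups' completion rates is unlikely to be noise (95% confidence)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se
```

Running both variants in the same week, as described above, means weather or platform outages hit test and control alike, so the z-score reflects only the feature’s effect.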

For example, we tested a version of the tutorial that forced users to perform exactly the action the tutorial character Mr. Tentacle suggested, while other users saw a version that left the decision to follow his directions open to them. In the latter, Mr. Tentacle was still visible and giving tips, but any action was voluntary.

The result was quite surprising. We had always assumed that users would prefer freedom of choice, but looking through the results, we had to accept that users wanted to be guided: far more users reached the end of the tutorial when they were led through it, so we rolled that version out to all users.

Features don’t get into the game simply because someone thinks they’re cool; they get in only if the metrics prove them out. A very good example is the “Monster Choose” screen, which was initially the first screen in the game. Looking back at it today, I still think the screen looks quite nice. But after analyzing the numbers, we had to acknowledge that we were losing too many users at this step. Surprisingly, removing the opportunity to choose a monster had no effect on user retention, so we cut the feature.

Because we had already built quite a large user base (around 300,000 daily active users at the time), our A/B tests yielded highly reliable data. Consequently, those tests became one of our favorite optimization instruments.