Nielsen Study: Twitter Activity Ahead of TV Show Premieres Could Indicate Success

By Adam Flomenbaum 

Ahead of the 2014 fall premiere season, Nielsen evaluated whether Twitter activity surrounding a new show in the weeks leading up to its premiere (from six weeks before the premiere date until two weeks before it) could serve as a reliable predictor of the show’s success. The significance, if the hypothesis proved out, is that networks and advertisers would still have the final two weeks before the premiere to shift their promotion strategy.

What Nielsen found was that, based on the 42 broadcast and cable series premieres it tracked, Twitter TV activity could be used together with other data to more accurately anticipate premiere audience sizes.

Nielsen hypothesized that shows that were more heavily promoted on TV ahead of the premiere would result in a larger audience once the show premiered. This was confirmed:

In other words, without including Twitter TV data in the analysis, we confirmed that programs that were promoted more substantially saw higher premiere audiences. To be truly useful then, Twitter TV activity would need to tell an additional story on top of what we already know from promotions alone.

To measure the impact of Twitter TV activity on top of regular promotions, Nielsen created a model (using the same six weeks before until two weeks before premiere date time frame) to determine the expected live+7 day (L+7) audience of 18-34 viewers using three variables:

- Promotion activity: commercial (C3) impressions

- Twitter TV activity: program-related Tweets (24/7 tracking)

- Network type: broadcast vs. cable

The results of the model, compared against actual premiere audience sizes, indicate that Twitter TV activity adds real predictive power. According to Nielsen:

While this model is based on only 42 data points, the overall model and all three variables are statistically significant. The model explains 65% of the variance in the premiere audience sizes, compared with 48% using promotions data alone. So, these three variables together can explain nearly two-thirds of the difference in premiere audience sizes. Most importantly, an agency or a network could have used this model to identify the top 10 and bottom 10 program premieres more precisely than by relying on promotion data alone.
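Nielsen has not published its model or underlying data, but the setup it describes is a standard multiple linear regression. The sketch below is purely illustrative: it fits a three-variable regression of the same shape (promotion impressions, program-related Tweets, and a broadcast/cable indicator) on synthetic data for 42 hypothetical premieres, then computes the R² statistic of the kind Nielsen reports (65% with Twitter data vs. 48% without). All numbers here are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 42  # same sample size as the Nielsen study; the data itself is synthetic

# Hypothetical predictors (Nielsen's actual data is not public):
promo_impressions = rng.lognormal(mean=10, sigma=0.5, size=n)  # C3 commercial impressions
tweets = rng.lognormal(mean=8, sigma=1.0, size=n)              # program-related Tweets
is_broadcast = rng.integers(0, 2, size=n)                      # 1 = broadcast, 0 = cable

# Synthetic "true" audience so the regression has a signal to recover
audience = (0.3 * np.log(promo_impressions)
            + 0.2 * np.log(tweets)
            + 0.5 * is_broadcast
            + rng.normal(0, 0.2, size=n))

# Design matrix with an intercept; log-transform the count-like predictors
X = np.column_stack([np.ones(n), np.log(promo_impressions),
                     np.log(tweets), is_broadcast])
coef, *_ = np.linalg.lstsq(X, audience, rcond=None)

# R^2: share of variance in audience size explained by the model
pred = X @ coef
ss_res = np.sum((audience - pred) ** 2)
ss_tot = np.sum((audience - audience.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"R^2 = {r_squared:.2f}")
```

Refitting with the Tweet column dropped and comparing the two R² values is the same kind of comparison Nielsen makes between its promotions-only model and the full three-variable model.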

Nielsen is still unable to say that increased Twitter TV activity leads to larger audience sizes (which is why, in the blog post announcing the results of this study, it writes: “the findings do not necessarily mean that Twitter TV activity causes larger audience sizes”). Causality is difficult to prove, but Nielsen continues to produce compelling studies that at least show strong positive correlation.

It then becomes Twitter’s challenge to convince advertisers and brands to act on (that is, spend against) this data. But the more such studies it is armed with, the easier the sell becomes.