There has never been a more exciting time for marketers to work with video. Advances in underlying data (the who) and advertising inventory management (the where) make it easier than ever to marry consumer desire with the beauty of sight, sound and motion. Personalization extends video’s capabilities even further, delivering a uniquely tailored experience to each individual.
And yet, despite 20 years of advances in digital measurement and multi-touch attribution, we all too often find that a view (a gross impression), closely followed by the click, remains the advertising community’s common currency for evaluating the impact of video.
If you only care about views, there are likely more efficient ways to achieve them, such as outdoor advertising near highways or the more ephemeral skywriting. Simply stated, video views are a vanity metric and should be treated as such, like followers bought on Instagram or Facebook. Brands should instead look to revenue impact: return on ad spend (ROAS), or the incremental lift in revenue generated over and above a group unexposed to video.
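The two revenue metrics above reduce to simple arithmetic. A minimal sketch, with all figures hypothetical, of how incremental lift and ROAS might be computed from an exposed group and an unexposed control:

```python
# Hedged sketch: computing incremental revenue lift and ROAS from a
# holdout (control) group. All figures below are hypothetical.

def roas(revenue: float, ad_spend: float) -> float:
    """Return on ad spend: revenue attributed to the campaign per dollar spent."""
    return revenue / ad_spend

def incremental_lift(exposed_revenue_per_user: float,
                     control_revenue_per_user: float,
                     exposed_users: int) -> float:
    """Revenue over and above what the unexposed control group predicts."""
    return (exposed_revenue_per_user - control_revenue_per_user) * exposed_users

# Example: exposed users average $1.25 each; the unexposed control, $1.00.
lift = incremental_lift(1.25, 1.00, exposed_users=100_000)
print(f"Incremental revenue: ${lift:,.0f}")          # $25,000
print(f"ROAS: {roas(lift, ad_spend=10_000):.1f}x")   # 2.5x
```

The point is that both metrics anchor video to dollars rather than to views.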
If you are already prepared for more advanced attribution, don’t wait for multi-touch attribution to catch up to you. Instead, create your own control/exposed split test to measure the efficacy of your video programs today.
Establish a control group large enough to measure, with statistical significance, the lift you seek, and size the exposed group accordingly. From there, ensure that your test doesn’t run indefinitely. It should run just long enough to reach significance, as time can introduce bias. Measure twice, cut once: plan to check in on your test at least twice per day.
A properly set up A/B test isolates video as the sole difference between one audience and another, letting you see the direct impact of the video itself.
The details matter enormously when setting up such a test, though. For example, if you are testing whether a video ad is more effective than a traditional display ad at driving purchases, both ads need to carry the same information. If the video ad contains different content, such as high-level brand messaging, you have already undermined the test’s validity: with multiple variables in play, it becomes difficult to attribute the results to the video.
Beyond an airtight testing environment, make sure your control and exposed audiences are large enough for statistical significance. It’s also critical that your test doesn’t overstay its welcome. Running a test too long introduces bias; cutting it too short lets the novelty effect distort the results. The novelty effect is the tendency of a new feature to generate far more excitement and engagement than it will in the long run, once the novelty has worn off. In other words, your test should run long enough, and your audiences should be large enough, to reveal accurate results, without either running longer or growing larger than it needs to.
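Knowing when a test has reached significance, so it can be stopped before it overstays its welcome, is itself a small calculation. A sketch using a two-sided two-proportion z-test, with hypothetical conversion counts:

```python
# Hedged sketch: a two-proportion z-test for checking whether the
# exposed group's conversion rate differs significantly from the
# control's. All counts below are hypothetical.
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a: int, n_a: int,
                           conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Exposed: 520 conversions from 20,000 users; control: 400 from 20,000.
p = two_proportion_p_value(520, 20_000, 400, 20_000)
print(f"p-value: {p:.4f}")  # well below 0.05, so the test can stop
```

Checking this number at each of your twice-daily check-ins, rather than eyeballing raw conversion counts, is what keeps the stopping decision honest.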
Nonetheless, if you’re not ready to take the plunge all the way to revenue, consider perhaps the most underappreciated metric that matters (and measures attention): time spent. Time is our most precious commodity, and exposure over time builds favorability and brand preference. Perhaps most surprisingly, video generally trounces display advertising on efficiency relative to time, especially with enhanced approaches like personalized video, which extend the potential value of time spent by offering each consumer timely, relevant and useful information. The current Media Rating Council guidelines for display advertising effectively require only that an ad be in view for one continuous second, post-render.
If you buy media at a $5 cost per thousand (CPM) for a questionable minimum of one second of exposure versus a $15 CPM for video viewed for more than three seconds, video delivers time spent more efficiently. I’m not suggesting the creation of three-second video ads, but I am advocating a reevaluation of time as a more consistent currency for evaluating the effectiveness of each visual advertising medium.
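The comparison above can be normalized into a single number: cost per thousand seconds of in-view time. A sketch using the hypothetical CPMs from the text, with video assumed in view for four seconds:

```python
# Hedged sketch: comparing display and video on cost per second of
# attention. The CPMs and exposure times follow the hypothetical
# figures in the text (display: $5 CPM, 1s; video: $15 CPM, >3s).

def cost_per_thousand_seconds(cpm: float, seconds_in_view: float) -> float:
    """Cost to buy 1,000 seconds of in-view time at a given CPM."""
    return cpm / seconds_in_view

display = cost_per_thousand_seconds(cpm=5.0, seconds_in_view=1.0)   # $5.00
video = cost_per_thousand_seconds(cpm=15.0, seconds_in_view=4.0)    # $3.75
print(f"Display: ${display:.2f} vs. video: ${video:.2f} per 1,000 seconds")
```

At exactly three seconds the two media break even at $5 per 1,000 seconds; every second of viewing beyond that tips the efficiency toward video.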
As video advertising continues to evolve, tying it to real business value, such as return on ad spend, will only get easier. Organizing the data today may be long and tedious, but the payoff is worth it. When testing video ads, don’t rely on vanity metrics alone to determine success. Instead, measure your video advertisements with metrics that tell you important stories about the attention and intention of your customers.