Being on the account-service side of advertising and considering myself somewhat of a data geek, I am always amazed at how often test results are measured incorrectly. Once in a while I'll come across an analysis - born outside of BKV - whose methodology for reading results is completely wrong. To that end, I've decided to get on my high horse and yammer on a bit about how to measure a test.
Specifically, I want to focus on reading a creative test from a paid search campaign, because a creative test in an online environment has big potential for skews. "Skew" is the key word in this rant, because you have to spend as much time, if not more, looking for skews when analyzing data as you do reading the data itself. The most infamous skew in an online creative test is a cookie. Not a big, fat, double-chocolate chunk cookie - there is nothing bad about those whatsoever - but the post-impression, post-click cookie that can create havoc. I don't think people realize how big an impact cookies, which track lag sales, can have on a creative test. Here is just one illustration.
A good marketer often tests landing pages within their paid search campaigns to ensure the destination of a click fulfills the promise of the ad as seen on the search engine. Usually a newly hatched landing page is tested against a control, and the test is typically set up as a split-serve against the incoming clicks. This all sounds easy, but there is an inherent problem: the control page has probably been live for some time and has the benefit of established cookies - usually 30 days' worth - that are driving lag sales. The test page, which is housed at a different URL, does not have that benefit and cannot catch up to the control unless you let the test run for more than 30 days and read results only from day 31 onward. But that often takes too much time, and we want a quick read on our results. So what's the solution? There are at least three solutions to consider: one very easy, the other two a little trickier.
1. The easiest solution is to re-traffic the keywords you're using for the test into a new campaign within your search engine. You don't have to delete your current campaign - just pause it - and your established cookies will still run and track lag sales to the paused campaign. Then launch the new campaign, which your ad-serving platform and search engines will read as new keywords with no established cookies. This puts both the control and test landing pages on equal ground, creating new cookies and lag sales from the moment the test launches.
Make sense? The only downside comes when you're testing a landing page against a whole slew of keywords, since creating multiple campaigns for all of them may be too troublesome. We typically take our highest-volume brand and non-brand keywords (one or two) to get a quick read. As soon as the test concludes, we pause the test campaign and restart the main campaign.
2. Another option is to leave the control and test campaigns running without moving to a new campaign and resetting the cookies. To solve for the possible inflation from lag sales, you can look at the extended data that comes with your tracking service and analyze only sales whose clicks occurred on or after the day the test started, excluding sales that came from pre-test clicks. This assumes you have access to extended data and a reliable site-serving tool.
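To make that filter concrete, here is a minimal sketch. It assumes a hypothetical extended-data export with one row per sale recording both the sale date and the originating click date - the field layout and dates are illustrative, not from any particular tracking service:

```python
from datetime import date

# Hypothetical extended-data rows: (sale_date, click_date, landing_page).
# A real tracking-service export would have more fields, but these two
# dates are all the de-skewing logic needs.
rows = [
    (date(2010, 6, 3), date(2010, 5, 28), "control"),  # click predates the test
    (date(2010, 6, 3), date(2010, 6, 1), "control"),
    (date(2010, 6, 4), date(2010, 6, 2), "test"),
]

TEST_START = date(2010, 6, 1)

# Keep only sales whose originating click happened on or after the test
# start, so lag sales from pre-test cookies don't inflate the control page.
valid = [r for r in rows if r[1] >= TEST_START]
```

With the sample rows above, the first control sale is dropped because its click predates the test, leaving a clean, comparable set for both pages.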
3. A third solution is to leave your campaigns as-is and normalize the data by removing lag sales. This can be tricky and requires you to have specific data on hand to normalize the sales correctly. Basically, you need to know how your lag sales build within an average 30-day period so you can back them out accurately. An example of how lag sales are built can be seen in the charts referenced earlier.
The key is to remember that the percentage of lag sales is different for each of the prior 30 days as the window builds. This has a few implications:
• If a control page was previously served at 100% and is now served at 50% for a test, the lag sales seen within the ensuing 30 days will be disproportionately large, because they are based on the previous days when the universe was 100%. If this is not accounted for, the control page's conversion rate will be inflated.
• The test page will take 30 days to fully accumulate all of its potential lag sales, so the percentage of lag sales on any given day has to build until the full cookie window is exposed.
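As a sketch of what that normalization might look like, the snippet below assumes a hypothetical 30-day lag-share curve - 40% of a day's tracked sales arriving same-day, the remainder spread evenly over the next 29 days. These are made-up numbers for illustration, not real campaign data:

```python
# Hypothetical lag-share curve: lag_share[d] is the fraction of a day's
# tracked sales that come from clicks d days earlier (d = 0 is same-day).
# The shares sum to 1.0 across the full 30-day cookie window.
lag_share = [0.40] + [0.60 / 29] * 29  # illustrative shape, not real data

def normalize_day(sales, test_day):
    """Remove sales attributable to clicks that happened before the test.

    sales:    raw tracked sales for this day of the test
    test_day: 0-based day index since the test launched
    """
    # On day t of the test, any click more than t days old predates the
    # test, so its share of today's sales is pure pre-test lag.
    pre_test_fraction = sum(
        lag_share[d] for d in range(test_day + 1, len(lag_share))
    )
    return sales * (1.0 - pre_test_fraction)
```

On day 0 most of the control page's tracked sales are pre-test lag and get stripped out; by day 30 the cookie window is fully "owned" by the test period and nothing is removed. Your own curve would come from your tracking service's lag-build data.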
This example, as illustrated below, is just one of the many skews that can occur when testing in an online environment, and it can make or break a test.
If, for example, you don't account for lag sales and the control page appears to have a 5% lift in sales over the test page, you might conclude the test didn't work. But once you adjust for the skew caused by lag sales, you could find that your test page is actually providing a 10% lift. I think any marketer would be remiss to walk away from a potential lift of that magnitude.
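The arithmetic behind that flip can be sketched with purely hypothetical numbers (none of these figures come from a real campaign):

```python
# Hypothetical split-test read: the same click volume served to each page.
control_raw = 105   # tracked sales, inflated by lag from pre-test cookies
test_raw = 100      # new page, no established cookies yet

raw_lift = (control_raw - test_raw) / test_raw
# Raw read: the control looks 5% ahead, so the test page "lost".

control_lag = 15    # control sales traced back to pre-test clicks
control_adj = control_raw - control_lag  # 90 sales on a level playing field

adj_lift = (test_raw - control_adj) / control_adj
# Adjusted read: the test page is actually ~11% ahead.
```

Same data, opposite conclusion - which is exactly why the skew check deserves as much time as the read itself.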
In closing, this is just one example of a potential skew that can be caused when testing creative online. There is a whole laundry list of others that I may talk about in another blog!
If you'd like to see an Excel document with the above examples and working formulas, feel free to email me directly at Jana.Ferguson@bkv.com.