It’s always nice when your ad tests reach your minimum data thresholds and achieve statistical significance. However, many ad tests never reach the confidence levels needed to take action. Because we’re usually scanning for places to act, these inconclusive tests are easy to miss, and they linger without results when you should be ending them and trying something different.
Sometimes this happens because of the test itself, and other times users simply don’t see the ads as meaningfully different.
Quick Case Study
Here’s an example from one company. They wanted to test adding a period to the end of description line 1 to see the difference between an extended headline (the long 70-character headline created when description line 1 is appended to the headline) and two separate lines of text.
So their test was quite simple: Add a period to the end of description line 1 and then examine the results.
This is normally a valid test. However, a few problems arose.
First off, they didn’t test by device type. For extended headlines, there are some simple rules:
- In most cases, if description line 1 ends in a sentence-ending punctuation mark, it can be appended to the headline when the ad is not shown in the right rail
- On desktop, extended headlines only show for top-of-page ads
- On mobile devices, extended headlines can show for any ad (there is no right rail, so all ads are treated as ‘top’ ads); hence you should always test mobile and desktop ads separately
The second issue was that right after they launched the test, they changed their bidding strategy, and the ads dropped to position 5 in the right rail, where the extended headline would never actually be shown.
This company did examine its test results monthly; however, they only looked for places to take action and always ignored results that hadn’t achieved their minimum confidence levels. A year later, the test was still running and still had no actionable results.
Defining Maximum Data
For your ad testing, you not only want to define minimum data, you also want to define maximum data.
If your ads hit your maximum data and have not achieved your minimum confidence levels, then you need to end the ad test and move on.
There are usually two ways to define maximum data:
- Use 10x your minimum data
- Use a 3 month time frame (assuming your tests are above minimum data)
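The two stopping rules above can be sketched as a simple check. This is a minimal illustration, assuming impression count as the data metric; the function name, parameters, and example thresholds are hypothetical, not part of any particular tool:

```python
from datetime import date

def should_end_test(impressions: int, min_data: int,
                    start: date, today: date,
                    significant: bool) -> bool:
    """Return True when an inconclusive test has hit maximum data.

    Maximum data is defined as either 10x the minimum-data threshold,
    or 3 months of runtime once minimum data has been reached.
    """
    if significant:
        # The test reached confidence: declare a winner instead of ending it.
        return False
    hit_10x = impressions >= 10 * min_data
    months_running = (today.year - start.year) * 12 + (today.month - start.month)
    hit_3_months = impressions >= min_data and months_running >= 3
    return hit_10x or hit_3_months

# Example: 12,000 impressions against a 1,000-impression minimum,
# still not significant after one month -> end the test (10x rule).
print(should_end_test(12000, 1000, date(2024, 1, 1), date(2024, 2, 1), False))
```

Whether you use impressions, clicks, or conversions as the data metric, the logic is the same: once either ceiling is reached without significance, the test ends and a new hypothesis begins.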
Defining both minimum and maximum data for your ad tests ensures that you are always striving for actionable information, even when that action is simply ending an inconclusive test and starting over with a different hypothesis.
For Adalysis users, we automatically alert you to tests that are above minimum data thresholds, have been running for at least three months, and have not achieved your minimum confidence levels, so there’s no need to define this information yourself. If you are testing within Excel or another system, make sure you define maximum data so that ad tests that will never produce results don’t keep running, and you’re always working toward improving your performance.