It’s always nice when your ads pass your minimum data thresholds and achieve statistical significance. However, many ad tests never reach the confidence levels needed to take action. Because we’re usually scanning for places to act, these inconclusive tests are easy to overlook, and they linger with no results when you should be ending them and trying something different.
Sometimes this happens because of the test itself, and other times users simply don’t see the ads as meaningfully different.
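To make “confidence level” concrete, here is a minimal sketch of how such a test is often evaluated, assuming a two-proportion z-test on click-through rate. The `ctr_confidence` function and the example numbers are illustrative, not from any particular tool:

```python
# A minimal sketch of a significance check for an A/B ad test, assuming a
# two-proportion z-test on click-through rate (CTR).
from math import erf, sqrt

def ctr_confidence(clicks_a: int, impressions_a: int,
                   clicks_b: int, impressions_b: int) -> float:
    """Return the two-sided confidence level (0-1) that the ads' CTRs differ."""
    ctr_a = clicks_a / impressions_a
    ctr_b = clicks_b / impressions_b
    # Pooled proportion under the null hypothesis that both CTRs are equal.
    pooled = (clicks_a + clicks_b) / (impressions_a + impressions_b)
    se = sqrt(pooled * (1 - pooled) * (1 / impressions_a + 1 / impressions_b))
    if se == 0:
        return 0.0
    z = abs(ctr_a - ctr_b) / se
    # Two-sided p-value from the standard normal CDF; confidence = 1 - p.
    p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
    return 1 - p_value

# Two ads with very similar CTRs may never reach a 95% confidence level,
# no matter how long the test runs.
print(ctr_confidence(210, 12000, 195, 11800))  # roughly 0.44, well below 0.95
```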
Here is an example of what happened to one company. They wanted to test adding a period to the end of description line 1 to see the difference between extended headlines (the long 70-character headline created when description line 1 is appended to the headline) and two separate lines of text.
So their test was quite simple: Add a period to the end of description line 1 and then examine the results.
This is normally a valid test. However, a few problems arose.
First off, they didn’t test by device type. For extended headlines, there are some simple rules:

- Extended headlines only show when an ad appears above the search results; ads on the right rail never display them.
- On mobile, ads always appear above the results, so extended headline behavior differs by device.
The second issue is that, right after they launched the test, they changed their bidding strategy and the ads dropped to position 5 on the right rail, where the extended headline would never be shown.
This company did examine their test results monthly; however, they only looked for places to take action and always ignored results that hadn’t achieved their minimum confidence levels. After a year, the test was still running and still had no results.
For your ad testing, you not only want to define minimum data, you also want to define maximum data.
If your ads hit your maximum data and have not achieved your minimum confidence levels, then you need to end the ad test and move on.
There are usually two ways to define maximum data, and a simple decision rule combining them is sketched below:

- Total data collected: once a test reaches a set number of impressions or clicks without achieving significance, end it.
- Total time elapsed: once a test has run for a set period (such as three months) without achieving significance, end it.
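Putting minimum data, maximum data, and confidence together gives the full lifecycle of a test. Here is a minimal sketch; the threshold values and the `test_status` helper are illustrative assumptions, not settings built into any ad platform:

```python
# A minimal sketch of the test lifecycle decision rule described above.
# All threshold values are illustrative assumptions.
from datetime import date

MIN_IMPRESSIONS = 1_000      # minimum data before judging a test at all
MAX_IMPRESSIONS = 50_000     # maximum data by volume: end the test beyond this
MAX_DAYS = 90                # maximum data by time: end after ~three months
MIN_CONFIDENCE = 0.95        # minimum confidence level to declare a winner

def test_status(impressions: int, start_date: date, confidence: float) -> str:
    days_running = (date.today() - start_date).days
    if impressions < MIN_IMPRESSIONS:
        return "keep running: not enough data yet"
    if confidence >= MIN_CONFIDENCE:
        return "actionable: declare a winner"
    if impressions >= MAX_IMPRESSIONS or days_running >= MAX_DAYS:
        return "end test: maximum data reached without significance"
    return "keep running: inconclusive so far"
```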
Defining both minimum and maximum data for your ad tests ensures that you are always striving for actionable information, even if that action is simply to end a test whose results will never be valid and start over from a different hypothesis.
For Adalysis users, we automatically alert you to test results that are above minimum data thresholds, have been running for at least 3 months, and have not achieved your minimum confidence levels, so there’s no need to define this information yourself. If you are testing within Excel or another system, make sure you define maximum data so that tests that will never produce results don’t keep running, and you’re always striving toward improving your performance.