If you have multiple ads in an ad group, your ad rotation setting determines how often each ad is displayed. Based on how you test and which metrics you favor, you should consider which rotation setting you are using and how it affects your ability to gather statistically significant data for making testing decisions.
There are four ad rotation settings:
The ad served percentage shows you how often each ad was served across your account, campaign, or ad group.
When examining this data, it is important to keep in mind the timeframe you are looking at. If ads that were active during that timeframe have since been paused or deleted, your ad served percentages may not add up to 100% unless you display those ads.
In addition, it is only useful to examine data from periods when all the ads were running at the same time. If you created an ad one month ago but are looking at the last three months of data, the newer ad will naturally show a lower ad served percentage; it cannot do otherwise, since it was inactive for two of the three months you are examining.
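To make the paused-ad effect concrete, here is a minimal sketch with hypothetical impression counts (the ad names and numbers are my own, not from the article) showing why served percentages stop summing to 100% when a paused ad is hidden from the report:

```python
# Hypothetical impression counts for ads in one ad group over a timeframe
impressions = {"Ad 1": 5000, "Ad 2": 3000, "Paused Ad": 2000}

total = sum(impressions.values())
served_pct = {ad: imps / total * 100 for ad, imps in impressions.items()}

# If the report hides the paused ad, the visible percentages
# no longer sum to 100
visible = {ad: pct for ad, pct in served_pct.items() if ad != "Paused Ad"}
print(sum(visible.values()))  # 80.0, not 100
```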
When you use any ad rotation setting except the 'Rotate' options (such as 'Optimize for CTR' or 'Optimize for conversions'), AdWords sometimes decides too quickly which ad is likely to be a winner.
For instance, if a campaign has its ad rotation set to 'Optimize for conversions' and we examine a timeframe in which all the ads were active, we would expect the ad with the highest conversion rate to have the highest percentage served.
Unfortunately, that is not always true. Sometimes AdWords makes decisions too quickly and the wrong ad ends up with the highest ad served percentage. For example, the campaign below was set to 'Optimize for conversions'. Ad two is the highest-converting ad and also has the highest click-through rate, so by any ad rotation standard the second ad should have the highest percentage served. However, it does not.
This does not happen all the time by any means; however, it does happen. When your ads are served improperly, it hurts your account's goals and your ability to accumulate the minimum viable data needed to determine a true, statistically significant winner.
Any ad test should have a minimum amount of viable data, such as a minimum amount of time, clicks, impressions, and conversions. These may vary depending on the type of metrics you are using for ad testing and the type of keywords you are testing (such as brand vs product).
When your ad served percentages are skewed towards a single ad, the other ads receive fewer impressions. With fewer impressions, they also receive fewer clicks and conversions. Because these ads accumulate data more slowly, it takes longer for them to build up enough minimum viable data to make statistically significant decisions.
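As a concrete illustration of checking for a statistically significant winner, here is a minimal two-proportion z-test sketch (the function name and the click/conversion numbers are hypothetical; this is one common way to compare two ads' conversion rates, not necessarily the author's exact method):

```python
import math

def two_proportion_z_test(conv_a, clicks_a, conv_b, clicks_b):
    """Compare the conversion rates of two ads with a pooled z-test."""
    p_a = conv_a / clicks_a
    p_b = conv_b / clicks_b
    p_pool = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / clicks_a + 1 / clicks_b))
    z = (p_a - p_b) / se
    # Two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: ad A converted 120/2000 clicks, ad B 90/2000 clicks
z, p = two_proportion_z_test(120, 2000, 90, 2000)
print(z, p)
```

With evenly served ads, both sides accumulate clicks at the same pace; a skewed rotation starves one ad and keeps the p-value above your threshold for longer.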
If you are a serious ad tester, you should use 'Rotate indefinitely' as your ad rotation setting. This ensures that your ads are served equally and that every ad builds up the minimum viable data needed to make ad testing decisions.
Even if you only care about conversion rate (and if you are in lead generation, there are probably better lead gen ad testing metrics) and think that using 'Optimize for conversions' is the best option, that is only true if you meet two conditions:
The second condition is an important one; consider the metrics from this ad group, which was not well looked after.
| Ad | % Served | Conversion Rate | Conversions |
If ad 1 (with the highest conversion rate) had been served 100% of the time, the ad group would have had 900 conversions instead of 768. Since you should always be ad testing, there will be some opportunity cost whenever you test an ad that turns out to be a loser.
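The opportunity-cost arithmetic can be sketched with made-up numbers (all figures below are hypothetical illustrations, not the 900-vs-768 campaign's actual data):

```python
# Hypothetical: compare conversions under actual vs. ideal ad serving
total_impressions = 100_000
ads = [
    {"name": "Ad 1", "served_share": 0.40, "ctr": 0.05, "cvr": 0.045},
    {"name": "Ad 2", "served_share": 0.60, "ctr": 0.05, "cvr": 0.030},
]

def conversions(imps, ctr, cvr):
    return imps * ctr * cvr

actual = sum(
    conversions(total_impressions * a["served_share"], a["ctr"], a["cvr"])
    for a in ads
)
best = max(ads, key=lambda a: a["cvr"])
ideal = conversions(total_impressions, best["ctr"], best["cvr"])
print(actual, ideal, ideal - actual)  # the gap is the cost of testing
```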
However, unless you are willing to test ads, you’ll never get better. Moreover, if your competition is diligent about testing, you’ll get worse by clinging to your old ads as they start to beat you.
To be a good ad tester, you should use ‘Rotate indefinitely’ as your ad rotation setting to ensure that all your ads are equally served. Then you need to be diligent about removing losers and creating more ads.
While this can be a lot of manual work, good ad testing software makes it easy to find winners and losers and to test your ads continuously, so that your account keeps improving and beats your competition.
I know some of you saw the Twitter comments about this yesterday, but to make sure there's an archive for people following along, here are a couple of the threads:
If you look at your ads and break them down by timeframes, locations, etc., you'll see that Google isn't quite that sophisticated with ad serving. They are pretty good at taking some of these items into account with CPA bidding, but their CPA bidding system does not serve ads; Google has confirmed this many times. One system is for bidding and one is for ad serving, and the ad serving tech isn't that sophisticated for search. On the GDN it is a bit more sophisticated, as they will use things like the contrast ratio between an ad's colors and a website's colors to see whether low or high contrast gets higher CTRs; but on search they are less sophisticated about those variables.
Hopefully one day they will get there, but for now they are a long way from being that good at serving ads. That is why you have so much control over the targeting methods: targeting is what Google really relies on to determine whether an ad should be shown, and past that, they aren't that sophisticated about which one to actually show.
What about where Google realises that serving different adverts to different subsets of users gets different outcomes?
For example, when served in the evening advert A performs way better than advert B. Then Google serves advert A in the evening and advert B the rest of the time. Performance in the evening is better but impression volumes are lower. Then you end up in the situation where advert A is served less but appears to perform better.
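The scenario the commenter describes can be sketched with made-up counts (purely hypothetical numbers, not real campaign data) to show how an ad can be served less overall yet look better on its blended numbers:

```python
# (impressions, conversions) per ad per daypart -- hypothetical counts
evening = {"A": (1000, 50), "B": (100, 3)}    # A dominates the small evening segment
daytime = {"A": (100, 2),  "B": (4000, 100)}  # B gets most of the daytime volume

def totals(ad):
    imps = evening[ad][0] + daytime[ad][0]
    convs = evening[ad][1] + daytime[ad][1]
    return imps, convs / imps  # conversions per impression, for illustration

imps_a, rate_a = totals("A")
imps_b, rate_b = totals("B")
# Ad A is served far less overall, yet its blended rate looks higher
print(imps_a, rate_a)
print(imps_b, rate_b)
```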
Matt van Wagner wrote a bit about this on Search Engine Land back in 2010 http://searchengineland.com/pitfalls-of-ab-ad-testing-part-3-41190
Matt Gershoff at Conductrics is the person doing the most with these ideas that I know of. http://conductrics.com/intelligent-agents-ab-testing-user-targeting-and-predictive-analytics/ gives an overview of what his optimisation tool is capable of in this context. I find it difficult to believe that Google aren’t doing something at least as sophisticated.