Voted "Best overall presentation from Heroconf London 2017"
"Google doesn't understand your business, so better pick the winner yourself." That's how most PPC professionals approach ad testing. Rotating your ads evenly and waiting for statistical significance has been accepted as the best-practice approach for years. The problem: it doesn't work. Statistical significance sounds fantastic, but when it comes to AdWords, it's nonsense.
This presentation looks at two things: why the best-practice approach can't possibly work, and how it actually doesn't. To that end, it includes exclusive data on a large number of ad tests that didn't go the way you would have expected.
This presentation covers:
* Why statistical significance has no place in ad testing
* What actually happens if you follow the industry best practice
* Why simpler approaches can actually lead to better results
2. @bloomarty
Learner Outcomes
• Why ad testing best practices don't work
• What actually happens when you follow them
• What you should be doing instead
3. @bloomarty
Additional Slide
• To explain what was said in the presentation, I added a few slides like this one
• I plan to write about all of this in greater detail on my blog, PPC-Epiphany.com
• Additional data will also be covered on my blog
• … hopefully soon …
19. @bloomarty
Analyzing the Data
• Script to evaluate the data
• Calculate level of significance for each day
• Visualization:
– Ad 1 reaches statistical significance (95%)
– Ad 2 reaches statistical significance (95%)
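The per-day significance check can be sketched roughly like this (a minimal sketch assuming a standard two-proportion z-test on cumulative data; the talk's actual script isn't shown, and `daily_stats` is made-up data):

```python
from statistics import NormalDist

def z_test_p_value(clicks_a, imps_a, clicks_b, imps_b):
    """Two-sided p-value for the difference between two CTRs
    (standard two-proportion z-test with pooled variance)."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = (p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b)) ** 0.5
    if se == 0:
        return 1.0
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical cumulative totals per day: ((imps_a, clicks_a), (imps_b, clicks_b))
daily_stats = [((500, 40), (500, 30)), ((1100, 95), (1050, 70))]

for day, ((ia, ca), (ib, cb)) in enumerate(daily_stats, 1):
    p = z_test_p_value(ca, ia, cb, ib)
    print(f"Day {day}: p = {p:.3f}, significant at 95%: {p < 0.05}")
```

A test is flagged "significant at 95%" on any day where the p-value drops below 0.05, which is exactly the daily check the analysis visualizes.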
20. @bloomarty
The Result (a small part)
[Chart: significance level over time for one test, cycling through "waiting for significance", "statistically significant", "no longer significant", "still significant…"]
23. @bloomarty
Results
• Most tests reached a significance level of 95% at some point
Minimum total impressions   Tests reaching significance   Still significant in the end
1,000                       55%                           13%
10,000                      62%                           12%
100,000                     81%                           11%
24. @bloomarty
Additional Slide: The Twist
• These were actually 576 A/A tests
• With enough time and data, you can almost always find a point where one ad is significantly better than the other
• Just don't take "no" for an answer; stop as soon as you get a "yes"
• … which is precisely our industry's best-practice approach to ad testing
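The "stop at yes" effect is easy to reproduce in a simulation (an illustrative sketch with made-up parameters, not the talk's actual 576-test dataset): two identical "ads" with the same true CTR, checked for significance after every batch of traffic, stopping at the first "significant" result.

```python
import random
from statistics import NormalDist

def is_significant(ca, na, cb, nb, alpha=0.05):
    """95%-level two-proportion z-test, as in the best-practice approach."""
    p_pool = (ca + cb) / (na + nb)
    se = (p_pool * (1 - p_pool) * (1 / na + 1 / nb)) ** 0.5
    if se == 0:
        return False
    z = abs(ca / na - cb / nb) / se
    return 2 * (1 - NormalDist().cdf(z)) < alpha

random.seed(42)
TRUE_CTR, BATCH, DAYS, RUNS = 0.05, 200, 50, 200  # made-up parameters

false_positives = 0
for _ in range(RUNS):
    ca = cb = na = nb = 0
    for _ in range(DAYS):  # one significance check per "day"
        na += BATCH; nb += BATCH
        ca += sum(random.random() < TRUE_CTR for _ in range(BATCH))
        cb += sum(random.random() < TRUE_CTR for _ in range(BATCH))
        if is_significant(ca, na, cb, nb):
            false_positives += 1
            break  # stop at the first "yes" -- the best-practice mistake

print(f"A/A tests that found a 'winner': {false_positives / RUNS:.0%}")
# Far more than the nominal 5% -- peeking daily inflates false positives.
```

The single check controls the error rate at 5%; checking fifty times and stopping at the first "yes" does not.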
31. @bloomarty
Additional Slide
• You can actually target search partner sites like eBay
• For how to do this, read
https://www.ppc-epiphany.com/2012/04/02/targeting-search-partners/
• However, CTR and conversion rate on sites like eBay are abysmal, so this probably won't be worth your time
• Search partners are often profitable, but your priority in ad testing should be Google
33. @bloomarty
Segmented by Network
                    Impressions   Clicks   CTR
Ad 1                2,000         200      10%
  Google            1,000         180      18%
  Search Partners   1,000         20       2%
Ad 2                3,000         240      8%
  Google            1,000         220      22%
  Search Partners   2,000         20       1%
34. @bloomarty
                    Impressions   Clicks   CTR
Ad 1                2,000         200      10%
  Google            1,000         180      18%
  Search Partners   1,000         20       2%
Ad 2                3,000         240      8%
  Google            1,000         220      22%
  Search Partners   2,000         20       1%
Also possible …
35. @bloomarty
                    Impressions   Clicks   CTR
Ad 1                2,000         200      10%
  Google            1,000         180      18%
  Search Partners   1,000         20       2%
Ad 2                3,000         270      9%
  Google            1,000         220      22%
  Search Partners   2,000         50       2.5%
Also possible …
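The numbers in this last scenario can be checked directly: Ad 1 wins overall (10% vs. 9%) yet loses in both segments, a textbook case of Simpson's paradox.

```python
# The slide's numbers as (impressions, clicks) per segment
ad1 = {"Google": (1000, 180), "Search Partners": (1000, 20)}
ad2 = {"Google": (1000, 220), "Search Partners": (2000, 50)}

def ctr(imps, clicks):
    return clicks / imps

# Per segment, Ad 2 wins both times
for segment in ad1:
    i1, c1 = ad1[segment]
    i2, c2 = ad2[segment]
    print(f"{segment}: Ad 1 {ctr(i1, c1):.1%} vs Ad 2 {ctr(i2, c2):.1%}")

# Overall, Ad 1 wins: the Search Partners segment drags Ad 2's total down
t1 = (sum(i for i, _ in ad1.values()), sum(c for _, c in ad1.values()))
t2 = (sum(i for i, _ in ad2.values()), sum(c for _, c in ad2.values()))
print(f"Overall: Ad 1 {ctr(*t1):.1%} vs Ad 2 {ctr(*t2):.1%}")
# → Overall: Ad 1 10.0% vs Ad 2 9.0%
```

The flip happens because Ad 2's impressions are weighted toward the low-CTR Search Partners segment, so its blended CTR falls below Ad 1's even though it wins each segment on its own.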
36. @bloomarty
How common is this?
Based on a study of 6,500 ad pairs, compared with an AdWords Script
• Overall winner loses on Google: 32.74%
• Overall winner loses on Google & Search Partners: 12.23%
39. @bloomarty
Same Thing with Slots
• Overall winner loses in the top slot: 18.46%
• Overall winner loses in top & other slots: 6.30%
Based on a study of 6,500 ad pairs, compared with an AdWords Script
48. @bloomarty
Additional Slide
• What did these three scenarios say about the performance of your ad?
• Actually, don't bother
• As search marketers, all we see is what's on the next slide
52. @bloomarty
The Position Feedback Loop
• Positive feedback: higher CTR → higher Quality Score → better position → higher CTR
• No loop: position effects do not affect QS
[Diagram: ad auction cycle linking ad ranking, ad position, and CTR]
59. @bloomarty
The AdWords Business Model
• Sell ad clicks: "How much would you give us if we gave you the click?"
• Sell ad impressions: "How much would you give us if we showed your ad?"
• Advertisers want clicks, but Google has no control over clicks
• The solution: take bids for clicks and convert them into bids for impressions
60. @bloomarty
The Ad Auction
• Ad Rank = CPC × Quality Score
• = CPC × CTR
• = (Cost / Clicks) × (Clicks / Impressions)
• = Cost / Impressions
• = "How much would you give us if we showed your ad?"
62. @bloomarty
Additional Slide
• Quality Score is basically CTR
• Google needs to know CTR to calculate cost-per-impression bids
• Based on this, Google can maximize their revenue from showing ads
• Getting CTR wrong means leaving money on the table – this adds up
• Google is well-motivated to get CTR (and therefore ad testing) right
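The bid conversion behind this can be sketched with made-up numbers (illustrative only; `expected_value_per_impression` is a hypothetical helper, not an actual AdWords function):

```python
# A CPC bid times predicted CTR is an expected revenue per impression --
# the quantity Google can actually rank ads by. All numbers are invented.
def expected_value_per_impression(cpc_bid, predicted_ctr):
    return cpc_bid * predicted_ctr

# Ad A: higher bid, lower predicted CTR; Ad B: lower bid, higher CTR
ad_a = expected_value_per_impression(cpc_bid=2.00, predicted_ctr=0.03)  # ~0.06
ad_b = expected_value_per_impression(cpc_bid=1.50, predicted_ctr=0.05)  # ~0.075
print("Ad B outranks Ad A:", ad_b > ad_a)  # True
```

An over- or underestimated CTR directly distorts this ranking and therefore Google's revenue per impression, which is why Google has a strong incentive to predict CTR well.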
65. @bloomarty
Example: Search History
• Have they searched for this before?
• Did they interact with ads?
• Did they interact with organic results?
• Have they seen our ad before?
66. @bloomarty
Example: Personality
• Do they take their time to read the entire ad?
• How do they respond to
– discounts
– reassurances
– testimonials
69. @bloomarty
New Mindset
• You don't have control over ad testing. Let it go.
• There can be multiple winners.
• Use Google's optimized ad rotation by default.
70. @bloomarty
Let the Machines Do Their Job
• Google is well motivated
• Google is really good with data and algorithms[citation needed]
• Let Google decide which ads to show
73. @bloomarty
Keep an Eye on the Machines
• If necessary, force data collection
• Rotate at the ad group level
• Consider the cost of even rotation
• Alternative: Add the ad again
74. @bloomarty
To Sum Up…
• No more micromanaging ad tests
• Focus on messaging and supervising the machines
• Your job just became more interesting – congrats!
75. @bloomarty
Thank You!
• Agency Blog: Die Internetkapitäne
• Advanced AdWords Blog: PPC-Epiphany.com
@bloomarty
Martin Röttgerding
Head of SEA
Bloofusion Germany GmbH
martin@bloofusion.de