I have found myself repeatedly using the phrase “statistically significant data” in recent discussions about paid search efforts, Facebook advertising, email marketing or site traffic in general. I also realized that I’m more than a decade removed from my last statistics class, so I may be (and probably am) using the wrong term. What I mean when I say this is, “do we have enough data about this outcome to draw a conclusion or make a decision?”
For those who love nerding out on this kind of stuff: I stumbled across a few really interesting articles loaded with actual formulas for calculating significant sample sizes, statistically significant outcomes and standard deviation. I even found a neat calculator for establishing a good number of subjects (AKA site visitors or ad impressions) for an A|B test.
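To give a sense of what those sample-size formulas actually compute, here is a minimal sketch using the standard normal-approximation formula for comparing two conversion rates. The function name and the example numbers (a 2% baseline click-through rate versus a 3% target) are my own illustrations, not taken from any particular article or calculator.

```python
import math

def sample_size_per_variant(p1, p2):
    """Approximate subjects needed per variant to reliably detect a shift
    in conversion rate from p1 to p2, using the standard
    normal-approximation formula at 95% confidence (two-sided)
    and 80% power -- the usual defaults."""
    z_alpha = 1.96  # 95% confidence, two-sided
    z_beta = 0.84   # 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# To tell a 2% CTR apart from a 3% CTR, each variant needs roughly
# 3,800 impressions -- often far more than a small test budget buys.
print(sample_size_per_variant(0.02, 0.03))
```

The takeaway: the smaller the difference you want to detect, the more subjects each variant needs, and the number grows fast.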
For the rest of you, it likely makes intuitive sense that you can’t draw a really solid conclusion about ad performance from just a few clicks or conversions. The smaller a data set, the more a few outliers can skew your results. With all of the chatter in the digital marketing space about more data equalling more intelligence and decision-making capacity, it’s equally important to understand when and how that data becomes worth something. Each ad, ad set, ad group or ad campaign is, of course, unique. And depending on targeting, search terms, or desired outcome, it will require analyzing and comparing different metrics.
Scenario 1:
- $100 spend
- Facebook right rail ($1 CPM)
- 1 creative asset
- 2 copy variations

Scenario 2:
- $100 spend
- Facebook newsfeed ($10 CPM)
- 4 creative assets
- 1 copy variation
This is a very simplistic take on this problem, but right away you may have a sense of which scenario is likely to generate more actionable data.
In the first scenario, given the CPM (cost per thousand impressions), we have enough budget to reach 100,000 people, while in the second we have the same budget but can only reach 10,000. The second may show a higher blended click-through rate (newsfeed placements tend to click through better), but it also splits its reach across 4 variants instead of the 2 in scenario one.
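The arithmetic behind that comparison can be made explicit with a tiny sketch (the function name is my own, and it assumes spend is split evenly across variants):

```python
def impressions_per_variant(budget, cpm, variants):
    """Impressions each variant gets: CPM is the cost per 1,000
    impressions, and we assume the budget is split evenly."""
    total_impressions = budget / cpm * 1000
    return total_impressions / variants

# Scenario 1: right rail, $1 CPM, 2 copy variations
print(impressions_per_variant(100, 1, 2))   # 50000.0 per variant
# Scenario 2: newsfeed, $10 CPM, 4 creative assets
print(impressions_per_variant(100, 10, 4))  # 2500.0 per variant
```

Twenty times more impressions per variant in scenario one means twenty times more data behind each conclusion you draw.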
Furthermore, depending on whether these ads were run in separate ad groups or the same one, Facebook may have optimized for the best-performing creative asset too soon, without giving the other three time to accumulate enough impressions to draw a conclusion. The first scenario may generate a fair number of clicks (though at a lower click-through rate), and because there are only two copy variants, that $100 spend could probably tell us which copy is grabbing more clicks.
The same ideas hold true in PPC (pay-per-click) campaigns.
If you are bidding on very specific and narrow keyword groups, you are likely to get less overall search volume, and therefore fewer clicks regardless of CTR (click-through rate). The broader the search terms, the more search volume and potential data you can gather, though those clicks will in all likelihood also be more expensive.
In email, as well, a promotional message with two variants (classic A|B test) sent to a list of 100 people will give you far less conclusive data than the same message sent to a list of 1,000 recipients. When sending to smaller lists, it’s important to keep the number of variables to a minimum and to test messaging or creative in emails that will continue to grow in volume, such as a welcome email or other workflow email, as opposed to a promotional one. For example, if you send a promotional email to a list of 100 people, after that send your data set is complete and your scope is limited. Compare that to an ongoing A|B test of a welcome email that gets 40 new users each week. Within a few weeks you’ll have much more data, and as site traffic and email capture increase you’ll continue to get useful data until a trend emerges at scale.
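Why the 100-person list is so much less conclusive can be sketched with a standard two-proportion z-test (the normal approximation). The numbers below are invented for illustration: both lists see the same observed open rates (10% for variant A, 16% for variant B), but only the larger list produces a statistically significant result.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is the gap between two observed
    conversion rates bigger than chance alone would explain?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Same 10% vs 16% rates, different list sizes:
print(two_proportion_z(5, 50, 8, 50))      # 100-person list: p well above 0.05
print(two_proportion_z(50, 500, 80, 500))  # 1,000-person list: p below 0.05
```

With 50 recipients per variant the difference could easily be noise; with 500 per variant the exact same rates clear the conventional 0.05 significance threshold.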
In all marketing efforts it is important to keep the scientific method in mind and begin with a question and a hypothesis. When testing that hypothesis, or looking to learn something about your audience or potential audience, it’s equally important to change only one variable at a time unless you have a significant audience or reach across which you can test multiple variables. As your lists and budgets grow, you can apply what you’ve learned in earlier phases to larger groups and gain additional insight. The best bet in the beginning is to cast a relatively wide net and analyze reporting from ad platforms or your email service provider, rather than start narrow and potentially miss out on audiences or results you didn’t know were on your radar.
Ultimately, the name of the game in digital marketing is testing, analysis and optimization; these aren’t just buzzwords. This is the blocking and tackling of a comprehensive strategy, and as you and your marketing team better understand your buyers, users or potential customers, you can expand creatively into new segments, test new ideas and differentiate in the market.
Need help? Hit us up 🙂
About: Hawke Media is a full-service outsourced CMO and digital advertising agency with clients in Santa Monica, Los Angeles, San Jose, San Francisco, Chicago and New York.