We’ve got so much data to look at these days: so many papers and reports, endless analytics.
Infinite ways to screw up what any of it actually means.
The Leap
A pretty entertaining article on PBR has been making the rounds this week. In it, the author shares a personal anecdote about her decidedly uncool husband buying PBR, which has a reputation as a “hipster” (don’t get me started on this word) beer. Apparently, her husband’s purchase conclusively marks PBR’s entrance into the mainstream and thus spells the end of the brand.
She makes this unfounded leap after citing a study that shows a “direct correlation between a brand’s perceived autonomy and a consumer’s level of counterculturalism.” She mentions that PBR markets in a very non-traditional manner (which is true, most certainly here in Atlanta) and then ties the two together:
Of course, the fact that hipsters spent the past decade drinking “sub-premium” beer isn’t their fault. It’s the fault of PBR’s savvy marketers—and also of human nature.
She goes on to conclude, essentially, that non-traditional marketing created a connection with people who identify with counterculturalism, causing them to integrate the brand into their group identity (the human-nature part she mentioned). It’s even supported with heat maps of beer availability in trendy neighborhoods.
Let’s pause. This is a great article because it’s a compelling story. It’s not in a scientific journal; it’s in an outdoor magazine’s blog.
It’s insightful, but it’s biased. I want to show you how, because we’re incredibly prone to making the same mistakes in our own marketing tactics. We’re suckers for a good story.
Detection & Confirmation Bias
Let’s go back to those heat maps, presented as evidence that PBR is popular with a vaguely defined group of people because of non-traditional marketing.
One of the heat maps is shown further down the page, accompanied by an explanation for why the attempt to link PBR to hipsters falls short:
While the two most obvious pictures of hipsterdom seem to check out with the PBR hypothesis, going to other cities brings us other results.
Detection bias shows up when hipsters in New York and San Francisco are watched more closely (as in most articles reporting on so-called hipsters) than similarly described people elsewhere. Study only the slice of a culture where what you’re looking for is most likely to occur, and you’ll get results closer to what you were expecting. That’s already shading into confirmation bias: you’ll tend to favor information that supports your existing conclusion.
Even that very article (the one with the heat maps) goes on to explore the idea that the link is not so much with a particular culture as with a particular economic status:
The places we do see PBR light up are around universities. Cambridge and Boston have tons of schools, and larger ones like Harvard and MIT bring droves of college kids that can’t spend a boatload on drinks. That raises the next natural question: if you keep a tight budget, are you wise to flock to PBR to keep costs low?
Now we’re getting somewhere. PBR is cheap. My thoughts jumped immediately to Miller High Life and Tecate while reading the PBR article, because those equally cheap beers seem just as prevalent in the same culture, if not more so at times.
The data is already skewed because it measures the wrong variables. Once the running theory is that PBR is prevalent where cost is a concern, comparing it against other cheap beers would give far more compelling data about its impact on cultural identification.
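To make that concrete, here’s a minimal sketch of the comparison the article never makes, using made-up purchase data (the neighborhoods, brands, and counts are all hypothetical): PBR’s share of cheap-beer purchases, rather than its raw volume, is the number that speaks to brand identification.

```python
import pandas as pd

# Hypothetical purchase records: each row is one six-pack of
# cheap beer sold. Neighborhoods and counts are invented.
purchases = pd.DataFrame({
    "neighborhood": ["Williamsburg"] * 4 + ["Mission"] * 4,
    "brand": ["PBR", "PBR", "Miller High Life", "Tecate",
              "PBR", "Miller High Life", "Tecate", "Tecate"],
})

# Raw PBR counts (what a heat map shows) conflate "people here
# drink PBR" with "people here drink cheap beer, period."
raw = purchases[purchases.brand == "PBR"].groupby("neighborhood").size()

# PBR's share *within* cheap-beer purchases is the variable that
# actually speaks to identification with the brand.
share = (purchases.assign(is_pbr=purchases.brand.eq("PBR"))
         .groupby("neighborhood").is_pbr.mean())

print(raw)    # absolute PBR volume per neighborhood
print(share)  # PBR's share of cheap-beer purchases per neighborhood
```

If PBR’s share looks roughly the same everywhere cheap beer sells, the countercultural-identity story gets a lot weaker.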
This happens frequently in marketing efforts; the clearest example is advertising. Measuring returns on advertising is notoriously difficult, and our attempts to attribute causality drastically overestimate its effectiveness.
How do we deal with our very real inability to draw meaningful conclusions?
Activity Bias
Awareness of certain types of bias helps cancel them out. There are more mathematically rigorous ways to do so, but we’d do well just to avoid being wholly duped and to ask the right questions.
Let’s go back to cheap beer. A more in-depth analysis would have looked at the availability and purchase of PBR and other similarly priced beers in areas targeted by PBR’s marketing efforts. People are already buying cheap beer; the question is whether they’re choosing PBR or not.
That’s an admittedly abstract but useful example of something called activity bias. It’s a problem because we tend to credit activity to a marketing effort without comparing it to the activity of people who were never exposed to that effort.
You can understand this easily with a graph from the Nielsen Norman Group’s exploration of the topic:
If it weren’t for that pesky orange line, this graph could be used to quickly prove that a promotional video resulted in increased traffic. A keen eye would see that the graph actually started trending up before the video even began, but comparing it to traffic from those who didn’t see the video is the final nail in the coffin.
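If you want to check that pre-existing trend with something sturdier than a keen eye, a quick sketch like this works (the daily visit counts below are invented for illustration): fit a line to the pre-launch days and ask whether post-launch traffic exceeds what the trend alone would have predicted.

```python
import numpy as np

# Hypothetical daily site visits; the promotional video launches
# on day 10. Note the traffic is already trending up before that.
visits = np.array([100, 104, 109, 113, 118, 124, 128, 133, 139, 144,
                   150, 156, 161, 168, 173, 180, 186, 192, 199, 205])
launch_day = 10

# Fit a linear trend to the pre-launch days only.
pre_days = np.arange(launch_day)
slope, intercept = np.polyfit(pre_days, visits[:launch_day], deg=1)

# Extrapolate that trend across the post-launch days.
post_days = np.arange(launch_day, len(visits))
expected = slope * post_days + intercept

# "Lift" is whatever remains after subtracting the pre-existing trend.
lift = visits[launch_day:] - expected
print(f"mean lift over trend: {lift.mean():.1f} visits/day")
```

Comparing against the non-viewer line, as the graph does, is stronger still, because it also controls for anything else that changed that week.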
There’s plenty of discussion about why this activity bias exists, but the why matters less than knowing it’s there. It starts small.
Why look for more data when the data you have makes it look like you made the right decision? Why rock the boat when the boss is happy? Why turn down a larger budget, now that you’re showing you deserved it?
You may have personal reasons not to counteract this bias, but know that it leads to bad analysis, which in turn leads companies to spend massive amounts of money inefficiently.
You need to remember that attribution models are imperfect and may need to change frequently. eBay experimented with branded keyword advertisements and found that most of those clicks came from people who would have visited anyway; the true benefit of search ads comes from exposure to customers who don’t know or remember your brand. Jakob Nielsen adds:
Activity bias comes back to haunt marketing managers who run simplistic analyses of “attributed sales” to advertising, assuming that sales are caused by whatever happened to be the user’s last click. Many users who both click ads and make purchases would have done the latter even if they hadn’t seen an ad. A controlled experiment is the only way to discover an ad’s true impact.
Want to actually understand marketing analytics? You’ll need to get your proverbial lab coat on and run true experiments.
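Here’s a minimal sketch of what that looks like, assuming you can randomly withhold the ad from a holdout slice of your audience (the group sizes and conversion counts below are invented): compare conversion rates between exposed and holdout groups, and test whether the difference could just be noise.

```python
from math import erfc, sqrt

# Hypothetical results of a randomized holdout experiment:
# "exposed" saw the ad; "holdout" was randomly withheld from it.
exposed_n, exposed_conversions = 50_000, 1_150
holdout_n, holdout_conversions = 50_000, 1_050

p_exposed = exposed_conversions / exposed_n
p_holdout = holdout_conversions / holdout_n

# Two-proportion z-test: is the exposed group's conversion rate
# genuinely higher, or within sampling noise?
p_pool = (exposed_conversions + holdout_conversions) / (exposed_n + holdout_n)
se = sqrt(p_pool * (1 - p_pool) * (1 / exposed_n + 1 / holdout_n))
z = (p_exposed - p_holdout) / se
p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value

print(f"exposed: {p_exposed:.2%}, holdout: {p_holdout:.2%}")
print(f"incremental lift: {p_exposed - p_holdout:.2%}, p = {p_value:.3f}")
```

The incremental lift over the holdout group, not the attributed sales, is what the ad actually bought you.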
If you don’t, your data analysis will ironically cause you to put money in the wrong places. And guess who loves irony?