Marketing Pulse Blog


Marketing Attribution is All About Causation

Written by
Taryn Shulman

When marketers talk about measurement, they’re really talking about causation. Why, you may ask? Because at the root of it, marketers want to understand causal links between their advertising and their KPIs. They want to know whether the ads they run on any given channel actually cause people to take action and buy their product.

Causation is important because it helps us understand where to best spend our budget. It helps demonstrate value and gives us the information we need to make decisions, like whether or not to keep running a channel. Today we take a deep dive into causation: how it plays out across the user journey, and how it shapes attribution.

Measuring Causation

You may have heard the common phrase: correlation doesn’t imply causation. In the context of marketing, this reminds us not to treat correlations in our data as if they were causal relationships. And yet, many of us do just that.

If you’re running any digital ad, you’re inevitably reliant on some form of rule-based attribution: last touch, first touch, and the like. All of these methods try to derive causation from correlation.

Rule-based attribution goes like this: if a user sees an ad on Channel X and then goes on to convert, that’s sufficient justification for the ad manager to conclude that the ad on Channel X caused the conversion.

This default method of attribution pervades every platform we’ve all worked on. It’s hard to imagine marketing without it. And yet, for all its popularity, it’s deeply flawed.

The Problem With Rule-Based Attribution

To see where rule-based attribution falls short, let’s consider an example.

In a game of soccer, one team scores a goal. Now, prior to it being scored, 5 different players from that team touch the ball.

So, which player caused the goal to happen? In marketing terms, we’d have to answer one of the following:

  • A last-touch approach says only the last player to touch the ball caused the goal.
  • A first-touch approach says only the first player to touch the ball caused the goal.
  • A linear approach says that all players played an exactly equal part in causing the goal.

While some approaches may work in specific circumstances, none truly offer a full understanding of which player caused the goal.
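
To make these rules concrete, here’s a minimal Python sketch (the journey format and function name are ours, purely for illustration) that assigns credit for the goal under each rule:

```python
from collections import defaultdict

# Minimal sketch of rule-based credit assignment.
# The journey format (an ordered list of touches) is purely illustrative.
def assign_credit(touches, rule):
    credit = defaultdict(float)
    n = len(touches)
    for i, touch in enumerate(touches):
        if rule == "last":
            credit[touch] += 1.0 if i == n - 1 else 0.0
        elif rule == "first":
            credit[touch] += 1.0 if i == 0 else 0.0
        elif rule == "linear":
            credit[touch] += 1.0 / n
    return dict(credit)

# The five players who touched the ball, in order.
journey = ["keeper", "defender", "midfielder", "winger", "striker"]
for rule in ("last", "first", "linear"):
    print(rule, assign_credit(journey, rule))
# last   -> all credit to the striker
# first  -> all credit to the keeper
# linear -> 0.2 to every player
```

Swap the players for marketing channels and this is exactly how rule-based attribution distributes credit across a user journey.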

To determine causality, we have to look deeper than the simple order of events.

How Can We Do Better?

So if the order of events doesn’t explain causation, what does?

Let’s continue our soccer example. Normally, people would say that the player touched the ball (first, last, whichever), so that player is responsible for the goal. But that’s the wrong way to think about it. Instead, we should be asking: if the player didn’t touch the ball – regardless of whether first, last, or otherwise – would the goal have been scored? That is a more accurate way of understanding attribution: the player caused the goal to be scored if the goal wouldn’t have been scored had that player not touched the ball.

This “counterfactual” idea – the idea that something would not have happened but for that player’s involvement – fits better than a rule-based approach because it lets us weigh the player’s specific contribution to the goal. If the goal wouldn’t have been scored in that player’s absence, that tells us far more about causation than who touched the ball first or last before the goal.

Transferring This Knowledge To Marketing

So how can we apply this approach to channel measurement for something like display ads?

Instead of asserting that a given display ad caused a user to convert simply because it was the last (or first) channel they interacted with before converting, we ask a different question: would that buyer have converted if they hadn’t interacted with our display ads?

Of course, we can’t rewind time and see what that user would’ve done if they hadn’t seen our display ads. We can, however, compare the buyer’s journey to otherwise identical user journeys that didn’t include display. If users who didn’t interact with display are just as likely to convert as those who did, then display probably isn’t causing buyers to convert (and vice versa!). This is how ‘data-driven’ attribution methods work.
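
As a sketch of the comparison underneath that idea (the data and field names here are invented, not any platform’s actual output), you’d group otherwise-similar users by whether they interacted with display and compare conversion rates:

```python
# Invented example data: (interacted_with_display, converted) per user,
# where all users are assumed to have otherwise-similar journeys.
users = [
    (True, True), (True, False), (True, True), (True, False),
    (False, True), (False, False), (False, True), (False, False),
]

def conversion_rate(group):
    return sum(1 for _, converted in group if converted) / len(group)

exposed = [u for u in users if u[0]]
unexposed = [u for u in users if not u[0]]

# If the two rates are roughly equal, display probably isn't causing
# conversions; a large gap (among truly comparable users) suggests it is.
print(f"with display:    {conversion_rate(exposed):.0%}")
print(f"without display: {conversion_rate(unexposed):.0%}")
```

In this toy data both groups convert at 50%, which is exactly the case where display deserves no causal credit.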

The Challenge Of Delivery Bias

This method of measuring causation works well, but it requires an assumption that pre-existing likelihoods don’t impact ad interaction. In other words, users with high pre-existing likelihoods of converting are no more or less likely to interact with display ads than users with low pre-existing likelihoods of converting.

But this presents a bit of a challenge. If we only show our display ads to users who are already likely to convert (perhaps because they are part of a retargeting audience of users who have recently browsed the site), then we have created something called delivery bias.

In that case, we can’t use the data-driven approach to determine causation, because our display ads are inherently skewed toward users already likely to convert. It’s neither fair nor accurate to compare users who have and haven’t interacted with display ads.

Given how effective ad platforms are at finding users who are predisposed to convert, this is a very real issue.
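
To see how badly delivery bias can mislead, here’s a toy simulation (all numbers invented) in which the ad has zero causal effect, yet the naive exposed-vs-unexposed comparison still shows a big gap:

```python
import random

random.seed(0)

# Toy simulation: the ad has ZERO causal effect. Conversion depends only
# on a user's pre-existing intent, but the platform serves the ad mostly
# to high-intent users (e.g., a retargeting audience). Numbers invented.
users = []
for _ in range(100_000):
    high_intent = random.random() < 0.20                         # 20% of users
    saw_ad = random.random() < (0.80 if high_intent else 0.05)   # biased delivery
    converted = random.random() < (0.10 if high_intent else 0.01)
    users.append((saw_ad, converted))

def rate(group):
    return sum(c for _, c in group) / len(group)

exposed = [u for u in users if u[0]]
unexposed = [u for u in users if not u[0]]
print(f"exposed:   {rate(exposed):.3f}")    # ~0.082
print(f"unexposed: {rate(unexposed):.3f}")  # ~0.015
# The entire gap is delivery bias; the ad caused none of it.
```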

Overcoming Bias With Lift Testing

The only way around delivery bias is to stop channels from showing content exclusively to users predisposed to convert. Fortunately, that doesn’t require removing all targeting: we can simply run what are called lift tests.

A lift test splits the target audience into a control group and an exposed group just before the ad is shown. Users in the control don’t see the ad or any future ads, while users in the exposed group continue to see them. Because assignment happens just before a user would see the ad, it removes the potential for delivery bias: users in the target audience are no more or less likely to land in the exposed group.
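
One common way to implement that assignment (our own sketch; platforms that support lift tests handle this internally) is to randomize deterministically at serve time, for example by hashing the user ID, so each user lands in one group and stays there for every future impression without any stored state:

```python
import hashlib

# Sketch of deterministic holdout assignment at ad-serve time.
# The 10% holdout share and the test name are arbitrary, for illustration.
HOLDOUT_SHARE = 0.10
TEST_NAME = "display-lift-test"  # hypothetical identifier, salts the hash

def in_control(user_id: str) -> bool:
    """Randomly (but stably) assign ~10% of users to the control group."""
    digest = hashlib.sha256(f"{TEST_NAME}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) / 0xFFFFFFFF < HOLDOUT_SHARE

def should_serve_ad(user_id: str) -> bool:
    # Called just before the ad would be shown: control users are
    # suppressed for this and every future ad in the test.
    return not in_control(user_id)
```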

With a lift test, you can measure how much more likely the exposed group was to convert over time. And best of all, this gives us what we’ve been looking for: a way to measure the causal effect of a channel on conversions.
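
Reading out the result is then a simple comparison of conversion rates between the two groups. Here’s a sketch with invented counts, using a two-proportion z-test to check that the lift isn’t just noise:

```python
from math import sqrt

# Invented lift-test results.
control_users, control_conversions = 50_000, 600      # 10% holdout
exposed_users, exposed_conversions = 450_000, 6_750

p_c = control_conversions / control_users    # 1.2% baseline conversion rate
p_e = exposed_conversions / exposed_users    # 1.5% with ads
lift = (p_e - p_c) / p_c                     # relative lift caused by the ads

# Two-proportion z-test: is the difference real or noise?
p_pool = (control_conversions + exposed_conversions) / (control_users + exposed_users)
se = sqrt(p_pool * (1 - p_pool) * (1 / control_users + 1 / exposed_users))
z = (p_e - p_c) / se

print(f"control: {p_c:.2%}  exposed: {p_e:.2%}  lift: {lift:+.0%}  z: {z:.1f}")
# -> control: 1.20%  exposed: 1.50%  lift: +25%  z: 5.3
```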

Given that the same methodology – the randomized controlled experiment – is used in academia to measure everything from vaccine efficacy to economic interventions, it’s no surprise that lift testing is generally considered the gold standard of marketing attribution. Lift tests do take time and can be difficult to run, but ultimately, they offer marketers the best chance of accurately measuring causation.

If you have questions about causation, rule-based attribution, or just want to talk more about how Performance Branding can transform your marketing plan, let’s talk.


Taryn is the VP, Marketing at WITHIN. When not building demand gen machines and content engines, she can be found enjoying outside sports with her kids and husband. Taryn lives in Toronto, ON. Go Leafs Go.
