Camela Thompson is the Head of Marketing at the RevOps Co-op and former VP of Marketing and attribution SME at CaliberMind, a marketing analytics company. She also spent over 15 years in operations functions spanning go-to-market teams in B2B tech before becoming a marketer.
The most challenging task in RevOps is designing a compensation plan everyone is happy with. A close second is deploying an attribution model that marketers will use and other go-to-market teams believe in. I’ve done it a few times, but only because I learned from past failures.
Multi-touch attribution is a highly contentious concept in B2B. People like Chris Walker, CEO of Refine Labs, have even built successful followings because they bash attribution in favor of a free-text form field asking people where they first heard about the company. Which, for those of us old enough to remember being forced to use this as our only attribution option, is a terrible idea because 1) it doesn’t scale and 2) people have terrible memories (there’s a reason eyewitness testimony is the leading cause of wrongful convictions).
But I digress.
This article aims to share why so many attribution implementations fail and the common characteristics of companies that have successfully deployed multi-touch attribution and use it to make critical business decisions.
I have seen dozens of attribution implementations throughout my career, and the common thread in every failed attribution rollout is pretty simple:
The stakeholders disagreed on what attribution should be used for and didn’t understand what attribution could realistically measure because it wasn’t treated as a cross-functional project. I’m not blaming the totality of the failure on the person implementing the tool or model.
Marketing leadership (or whichever executive is pushing for multi-touch attribution) should take responsibility for learning what is and is not possible and then help operations sell attribution (as they intend to use it) to other go-to-market functions. However, it is operations’ role to gather requirements and understand how each stakeholder defines attribution.
I’ve found that by asking marketers, the CMO, the VP of Sales, and the CEO, “How do you define attribution, and what do you think it will solve?” you can uncover many misconceptions and red flags. And the earlier you reveal these landmines, the more hope you have of resolving them before you roll out your model.
If you don’t lead this discovery and proactively run damage control, multi-touch attribution will fail before it’s launched.
Here are some of the landmines I’ve uncovered by asking, “How do you define attribution, and what do you think it will solve?”:
Attribution vendors did a great job of selling multi-touch attribution – and by “great,” I mean they completely screwed up and sold it as a silver bullet that could “see” every step of the buyer journey. That will never be possible unless we start tracking thoughts via neurotransmitters (no, thank you). We’re also losing visibility into what was previously trackable as browsers and smartphone vendors restrict cookies and the passing of other identifiers, like IP addresses or machine IDs.
It is hyper-critical to communicate what is and is not possible with attribution – after you understand what exactly people are trying to use it for.
In 1976, British statistician George Box wrote, “All models are wrong, but some are useful.”
Ironically, the first step of a successful attribution project is to accept and communicate that attribution is inherently flawed. No matter which tool you use or how complex your model becomes, it will NEVER be perfect. People will always find flaws with the data because your ability to capture meaningful moments in the buyer journey is limited, and each person has an agenda that motivates how they define a “meaningful” touchpoint.
This doesn’t mean you shouldn’t implement attribution. I’d argue that multi-touch attribution is the best way to understand what does and does not work in marketing. However, it’s essential to understand:
Humans are biased, emotional creatures. Each of us wants to believe that our work makes the difference in a prospect’s decision to purchase a product. The reality is that it takes a myriad of positive interactions to convince someone to buy – and the idea that a website visit was as influential as a meeting with a salesperson is preposterous.
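To see how a model can encode that judgment, here’s a minimal sketch in Python. The touchpoint types, weights, and dollar figure are all hypothetical – the only point is that a weighted model can give a sales meeting more credit than a page view:

```python
# Hypothetical influence weights by touchpoint type. These are illustrative
# judgment calls, not benchmarks from any real attribution tool.
WEIGHTS = {"website visit": 1, "webinar attendance": 3, "sales meeting": 10}

# One prospect's journey and the pipeline dollars tied to the opportunity.
journey = ["website visit", "website visit", "webinar attendance", "sales meeting"]
pipeline_dollars = 100_000

# Split the dollars in proportion to each touchpoint's weight.
total_weight = sum(WEIGHTS[touch] for touch in journey)
for touch_type in WEIGHTS:
    share = WEIGHTS[touch_type] * journey.count(touch_type) / total_weight
    print(f"{touch_type}: ${pipeline_dollars * share:,.0f}")
```

An even-split model would hand the two website visits half of the credit; the weighted version above gives them about 13%. Neither number is the Truth – the weights just make your assumptions explicit and debatable.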
Once we accept that our attribution model will never be perfect and that people will have an agenda they want reflected in our model, we can begin defining attribution using the following steps.
Two scenarios live rent-free in my realm of nightmares:

1) Marketers develop an attribution model in a silo, talk about it as the Truth, and then try to display marketing pipeline vs. sales pipeline dollars in a chart. They are asking for a fight.

2) A CMO keeps hammering on the attribution model – which, I’ve noticed, frequently signals past attribution trauma. If you sit them down and get them talking, they’ll inevitably share a story about how they worked hard on a model only to have it ripped to shreds by sales.

I’m one of the few attribution advocates who isn’t afraid to use attribution in a cross-functional setting, but only if sales is included in the conversation from the beginning.
If there’s one message I want to get across in this article, it’s the following:
An attribution model rarely fails because of a data issue (what we can measure or how we’re measuring it). Attribution usually fails because we’re not aligned on what we want to prove.
And if we aren’t aligned on what we’re using attribution for, we can’t build a model that anyone will use.
The best-case scenario for a multi-touch attribution implementation is getting the marketing team to agree that 1) the dollars we display are just a directional estimate (not the Truth when it comes to pipeline or bookings dollars) and 2) we can use more than one model to estimate what works and what doesn’t.
For example, say one multi-touch model shows $250K of pipeline built against in-person events out of a $1.5M total. If my marketer understands that I’m not comparing marketing pipeline to sales pipeline – and that the data for a channel should be read relative to other channels (say email built $50K, webinars built $100K, and the rest went to demo requests) – they can use the data to understand how effective each investment might be relative to the others (see the sketch below). Then attribution is a function of marketing optimization, not a defense of how much marketing brought to the table versus other go-to-market teams.
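Here’s a minimal sketch of what “more than one model” looks like in practice. The opportunities, channels, and dollar amounts below are hypothetical, and first-touch and even-split are deliberately simple stand-ins for whatever your tool supports:

```python
from collections import defaultdict

# Hypothetical opportunities: (pipeline dollars, ordered channel touchpoints).
opportunities = [
    (250_000, ["in-person event", "email", "demo request"]),
    (100_000, ["webinar", "email"]),
    (150_000, ["demo request"]),
]

def first_touch(opps):
    """All of an opportunity's pipeline goes to its first touchpoint."""
    credit = defaultdict(float)
    for dollars, touches in opps:
        credit[touches[0]] += dollars
    return credit

def even_split(opps):
    """Pipeline is split evenly across every touchpoint (a linear model)."""
    credit = defaultdict(float)
    for dollars, touches in opps:
        for touch in touches:
            credit[touch] += dollars / len(touches)
    return credit

for name, model in [("First touch", first_touch), ("Even split", even_split)]:
    credit = model(opportunities)
    total = sum(credit.values())
    print(name)
    for channel, dollars in sorted(credit.items(), key=lambda kv: -kv[1]):
        # Compare channels relative to one another, never to sales pipeline.
        print(f"  {channel}: ${dollars:,.0f} ({dollars / total:.0%})")
```

Where both models tell the same directional story about a channel, you can trust it more; where they disagree, that’s a conversation to have before the numbers ever reach a dashboard.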
We can also more readily accept that some tactics won’t appear in attribution because of their nature – and that’s okay! For example, podcasts can drive a ton of top-of-funnel traffic, but we may not know they’re working until we discontinue the podcast and traffic falls off a cliff. On the other hand, if marketers think they need “pipeline credit” to be seen as hard-working, they’ll do backflips to figure out how to track their work and get dollars assigned to it.
If we’re not using marketing attribution as a weapon to defend our department, it’s much easier to accept and use the data – even when we know it isn’t perfect.
Unfortunately, the CMO is under tremendous pressure to prove ROI by marketing channel and defend their impact on pipeline, so they typically want to use attribution in the boardroom.
When we understand how everyone wants to use attribution – and accept when we can’t change their minds – it’s much easier to set the right expectations and run an education campaign. It’s also crucial for all the stakeholders involved to sign off on how attribution will be used so they can give the proper input on what they need for that use case to happen.
I used multi-touch attribution at CaliberMind in the boardroom. To do that, I needed to work with the VP of Sales to understand which sales and marketing activities he felt deserved attribution. Once we excluded what we both agreed was just “noise” that would distort any model, we could focus on deciding how to position attribution to the CEO and board of directors. I also reviewed the data with him regularly throughout the quarter, so he wasn’t surprised by the results and, more importantly, his questions were answered before he felt put on the spot.
The result was that we were united in our talk tracks when we reported the results to the board.
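If you’re wondering what that “noise” exclusion looks like mechanically, here’s a minimal sketch with hypothetical activity types – which touches count as noise is exactly the judgment call you and your sales counterpart need to make together:

```python
# Hypothetical activity types we might agree are "noise": high-volume,
# low-intent touches that would distort any weighting scheme.
NOISE = {"email open", "ad impression", "automated nurture send"}

touchpoints = [
    {"type": "email open", "contact": "a@example.com"},
    {"type": "demo request", "contact": "a@example.com"},
    {"type": "ad impression", "contact": "b@example.com"},
    {"type": "sales meeting", "contact": "b@example.com"},
]

# Filter noise out before modeling so only agreed-upon touches share credit.
modeled = [t for t in touchpoints if t["type"] not in NOISE]
print([t["type"] for t in modeled])  # ['demo request', 'sales meeting']
```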
A lot of other work happened behind the scenes in the months leading up to the board meeting. I needed to ensure that everyone in the C-suite understood how we thought of attribution and what our model could and couldn’t do. After we agreed that attribution was an estimate and we were assuming directional accuracy (and not precision), we incorporated reports into our weekly executive sync.
It is mission-critical that your stakeholders understand how attribution will be used, have an opportunity to give feedback on what needs to happen for them to accept the model, and have repeated exposure to the data until they are comfortable relying on the results.
If they want the model tweaked, you need to ask questions to determine their motives. If they feel their team is under-represented (which is often the root cause of over-engineering attribution), you’ll need to go back to your stakeholders and figure out what, if anything, needs to be done to address their concern.
I did many things wrong the first time I rolled out multi-touch attribution. The biggest misstep was assuming the marketing leader understood attribution.
This was way back in 2012 when attribution wasn’t common.
I’d explain how the touchpoints worked, get a bunch of nodding, and then he’d ask a question that made it extremely clear he still thought of attribution as a single interaction in time.
And if my CMO didn’t understand attribution, how could I trust him to walk into an executive meeting without being blown up by the sales leader (which happened when he went rogue and presented the data on his own despite my warnings)?
It was a mess, but fortunately, I had a good rapport with the sales leader, and he and I worked through what was needed for him to be comfortable with the data. He figured out that he could come to me with questions about the data since I was creating the reports. I figured out that he needed to see the results before the main meeting so he could ask questions before my CMO charged ahead.
You’ll roll out attribution, train your audience, and feel like they get it – only to field questions during the next quarterly business review that make you wonder if they slept through your first round of training.
Like your CRM, attribution isn’t a set-it-and-forget-it proposition. You’ll constantly have to socialize the tool, keep an ear to the ground for rumors about why it’s “broken,” reset expectations, and start over again during the next reporting cycle.
Remember: the only way to do attribution wrong is to ignore feedback and objections.
For more on which models to use, how to configure attribution, and when not to use it, follow thought leaders on LinkedIn like Zee Jeremic, Christopher Antonopolous, Justin Norris, and Carey Picklesimer.