
Acquisition Attribution:
How AdRoll Measures ROI across Sales and Marketing


By Shane Murphy, VP of Marketing

Late last year, I was sitting with our Chief Revenue Officer, Suresh Khanna, discussing how we should invest resources for 2017—and we discovered a multi-million dollar problem.

The problem was that we didn’t have a model to capture how marketing and sales worked together to drive acquisition. I had a model that showed the impact of our marketing department, and Suresh had a model that showed the impact of our sales teams. However, these two models weren’t linked—and they often double counted the impact of each department’s activities.

We had a lot of unanswered questions about splitting the acquisition budget across teams. Sure, we track how much of our marketing budget it takes to get prospects interested in AdRoll. But do we get more revenue from them if sales also reaches out? And yes, our sales teams know how much it takes to clinch a contract, but do our marketing efforts help to close a deal faster?

If we could develop an attribution model that quantified the impact that each sales and marketing dollar had on acquisitions, we could spend less to bring in the same revenue and drive up our profitability.

Thus, I—along with business intelligence manager Etan Schwartz—set out to build such a model. In this guide, you’ll see how we defined the issue to our C-suite and the data we compiled to understand the overall problem. Then, you’ll learn about the three big challenges we faced when building our acquisition attribution model and how we solved them.

A little about myself

During my career, I’ve worked on building and scaling businesses. For over a decade, I’ve taken new products and services to market and grown them for major European brands like Orange and Paddy Power. In 2014, I headed up acquisition at Paddy Power, and we built a sophisticated business-to-consumer attribution model that saw us increase acquisition 34% year over year—delivering 1 million new customers in that year alone. This is a topic I am passionate about.

I’m now Vice President of Marketing at AdRoll and have been thinking hard about how to apply the same type of analytical approach to the business-to-business (B2B) space.

The Problem

Here are some terms I will use in this post:

  • Lead—A potential new customer
  • Inbound—A lead who comes into the sales funnel proactively (i.e., driven by marketing activity)
  • Outbound—A lead identified by sales outreach via email or cold calling

Over the last few years, we've worked a lot on improving the efficiency of acquiring new customers. We’ve invested in two main pathways: self-service and sales teams.

Our self-service pathway is completely automated and geared toward smaller accounts. Customers simply fill out an online form, and then they follow programmed prompts until they’ve opened an account and can start using our products.

Our sales-teams pathway involves in-person outreach for larger accounts. Sales development representatives (SDRs) support more experienced sales reps in the heavy work of cold outbound outreach.

As a result, we’ve moved to a model where four teams work together to generate, work, and close leads:

  • Product—The team that ensures that the self-service journey is as simple as possible, so that direct signups convert with minimal human intervention.
  • Marketing—The team that drives inbound leads and also assists in converting outbound leads the sales teams are already working on with initiatives like direct mail and events.
  • Sales development representatives (SDRs)—Relatively junior salespeople who do the initial outreach on cold outbound leads and on new inbound leads. Their goal is to get leads interested in meeting with an experienced sales representative. They do the heavy lifting to allow more experienced sales reps to focus only on leads who have already shown interest.
  • Sales representatives (sales reps)—The more experienced sellers who take over once SDRs have generated initial interest and qualified the opportunity.

How much to invest in the product experience was never in question. For a company like ours, product comes first. However, how much to invest in the other three teams—marketing, SDRs, and sales reps—was unclear.

We needed to know how much and how efficiently each team was driving our acquisition revenue. To do this, we needed an attribution model across sales and marketing, a model that could help us understand how these three teams work together to drive our business.

Here’s how we did just that.

Designing the Model

We found that there were three entry points into our sales funnel:

  • Inbound through marketing
    • The lead interacts with our self-service product or a marketing effort (e.g., content download or conference event booth).
    • Once a lead has come inbound, there are two routes:
      • Closed without any sales touch by converting through our self-service product. (Bucket A)
      • Worked by sales to close. (Bucket B)
  • Outbound through an SDR (Bucket C)
    • An SDR is assigned a cold lead and starts to reach out.
    • Once a lead has an initial screening conversation with the SDR, they get passed to a sales rep. The SDR provides details about how AdRoll’s suite of products can meet the client’s individual needs.
    • The lead may also interact with marketing during their conversion journey, so marketing may assist the conversion (e.g., by sending the lead direct mail with a gift or piece of content, or by meeting with them at an event). (Bucket E)
  • Outbound through a sales rep (Bucket D)
    • The reps find a cold lead themselves and start to reach out without the help of an SDR.
    • Again, a lead may also interact with marketing during the conversion journey. (Bucket E)

This can be summarized as follows:

This analysis yielded four core buckets. These four buckets are mutually exclusive and collectively exhaustive of our acquisition:

  • A—Inbound Product Closed
  • B—Inbound Sales Closed
  • C—Outbound SDR Assisted
  • D—Outbound Sales Rep Only

We also had a fifth bucket, which was a subset of the two outbound buckets and showed us the impact marketing was having on helping the outbound buckets:

  • E—Outbound Marketing Assisted

Previously, Suresh would have counted the revenue in buckets B, C, and D as sales-driven because it was closed by sales reps. I would have counted revenue in buckets A, B, and E as marketing-driven because it was touched by marketing. This double counting made our return-on-investment (ROI) calculations unreliable. In order to grow our business as efficiently as possible, we needed a better model.

With our buckets in place, we set ourselves a goal of being able to construct a table that compares the ROI across them. We ultimately wanted to be able to fill out a table like this:

In order to accurately measure ROI across sales and marketing, we needed to perform three analyses:

  1. Identify and categorize all leads worked and closed in a period for each bucket.
  2. Measure the revenue associated with each closed lead.
  3. Calculate a cost per acquisition (CPA) for each closed lead based on the amount of time and money sales reps, SDRs, and marketing spent on each one.

If we made these calculations accurately, we could finally learn a powerful metric:

  4. ROI per acquisition bucket

But in order to do that, I needed help from Etan and our business intelligence team to collect the necessary data.

While this sales funnel made the most sense for us, I would imagine for most businesses a simpler breakout of buckets would be:

  • Inbound Product Closed
  • Inbound Sales Closed
  • Outbound Marketing Assisted
  • Outbound Not Marketing Assisted

Building the Model

In order to do these four analyses, we needed a clean and accurate data set.

So, our business intelligence team plugged into our customer relationship management (CRM) software so that every time one of our leads was touched by marketing, an SDR, or a sales rep, a new record was created. The team also time-stamped and categorized each activity by type. Here’s an example of the data we collected for a single lead:

  1. Account_id—The ID we give to the lead so we can identify it in our data.
  2. Date—The date the activity occurred.
  3. Category—The bucket the lead would currently be in if they were to start spending (Inbound Product Closed, Outbound SDR Assisted, etc.).
  4. Marketing—TRUE if the lead has a marketing touch against it.
  5. SDR—TRUE if the lead has an SDR touch against it.
  6. Sales—TRUE if the lead has a sales rep touch against it.
  7. Distance_from_last_activity—The number of days since the last touch.
  8. Activity_category—The type of touch the activity falls under.
  9. Activity_type—A more detailed description of the type of touch that occurred.
  10. Initial_spend_date—Whether or not the lead has started spending and, if so, when that occurred.
  11. Cycle_number—The number of sales cycles this lead has been in. (We will address the concept of the sales cycle later.)
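For readers who want to replicate this, the record layout above can be sketched as a simple data structure. This is a hypothetical illustration: the field names mirror the list above, not AdRoll's actual CRM schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class LeadActivity:
    """One record per touch on a lead (hypothetical sketch of the fields above)."""
    account_id: str                     # ID used to identify the lead in our data
    activity_date: date                 # date the activity occurred
    category: str                       # bucket the lead would be in if they started spending now
    marketing: bool                     # lead has a marketing touch against it
    sdr: bool                           # lead has an SDR touch against it
    sales: bool                         # lead has a sales rep touch against it
    distance_from_last_activity: int    # days since the last touch
    activity_category: str              # type of touch (e.g., "Call")
    activity_type: str                  # more detailed description of the touch
    initial_spend_date: Optional[date]  # None until the lead starts spending
    cycle_number: int                   # number of sales cycles this lead has been in

# One touch on a lead that has not yet started spending:
record = LeadActivity("acct-001", date(2016, 1, 4), "Outbound SDR Assisted",
                      False, True, False, 0, "Call", "Cold call", None, 1)
```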

These data points formed the backbone of our analysis, and without them, none of this work would have been possible. Once we were sure we had a complete data set, we moved on to filling out our model. (If you didn’t understand a thing I just said—don’t worry—your business intelligence partner will.)

Challenge Number 1: How to assign a lead to a bucket

In order to start, we first needed to assign leads to each bucket and calculate three figures:

  • The number of leads worked
  • The number of new customers
  • The percentage of leads closed

We ran into two immediate problems. First, we needed to identify which team had first touched the lead (i.e., marketing, SDR, or sales rep). Second, we needed to determine when in the sales cycle we would consider the lead lost.

In a B2B company like AdRoll, this isn’t easy. A sales rep could be working a lead over the course of months, and it’s not necessarily a lost lead. But in order to build a model, at a certain point in the sales cycle we needed to consider some leads to be lost—even if they could potentially convert in the future.

For the purposes of our acquisition attribution model, we decided to use a standard 30-day sales cycle. If a lead hasn’t responded after 30 days, it’s pretty clear they aren’t interested. The sales cycle would start after a first touch by either our marketing team, SDRs, or sales reps. After the first touch, we would record all subsequent touches by all teams.

We would consider the lead a non-conversion if none of our teams touched it for 30 days. If we reached out to the same lead again after 30 days, the cycle would start all over. We would count the lead as new and reassign it to a bucket based on the first touch of the new sales cycle—regardless of whether or not another team had touched it previously outside the 30-day window.
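The 30-day reset rule can be sketched in a few lines of Python. This is a hypothetical illustration of the logic, not our production code; the team labels and the 30-day threshold come from the description above.

```python
from datetime import date

CYCLE_GAP_DAYS = 30  # a lead with no touches for 30 days is considered lost

def split_into_cycles(touches):
    """Group (date, team) touches into sales cycles.

    A new cycle starts whenever more than 30 days pass with no touch.
    Each cycle's bucket is set by the first touch of that cycle.
    """
    cycles, current = [], []
    for touch_date, team in sorted(touches):
        if current and (touch_date - current[-1][0]).days > CYCLE_GAP_DAYS:
            cycles.append(current)  # previous cycle ends as a non-conversion
            current = []
        current.append((touch_date, team))
    if current:
        cycles.append(current)
    return cycles

# A lead worked by an SDR, lost, then re-engaged by marketing months later:
touches = [
    (date(2016, 1, 4), "sdr"),
    (date(2016, 1, 20), "sales"),
    (date(2016, 4, 1), "marketing"),  # more than 30 days after the last touch
    (date(2016, 4, 10), "sales"),
]
cycles = split_into_cycles(touches)
first_touch_teams = [cycle[0][1] for cycle in cycles]  # ["sdr", "marketing"]
```

Here the first cycle would be bucketed as Outbound SDR Assisted and counted as a non-conversion, while the second starts fresh with a marketing first touch.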

The sales cycle is an important concept for two reasons:

  1. A lead shouldn’t hold on to its first touch as its permanent bucket. If sales reaches out to a lead and gets nowhere—but then a year later that lead attends a marketing event and closes—credit should go to one of the inbound marketing teams.
  2. In order to know the percentage of leads closed, a time-bound sales cycle allows us to break our funnel into four categories:
    • Number of leads worked in a period
    • Number of leads who converted
    • Number of leads who did not convert after a certain timeframe
    • Number of leads still in active funnel who could convert in the future
    By categorizing leads as such, we can calculate which activities provide the greatest returns. For example, let’s say we have two campaigns with 100 leads each: 50 of whom have converted and 50 of whom still haven’t. If the 50 non-converted leads in Campaign A haven’t made any contact within a predetermined amount of time—but the 50 non-converted leads in Campaign B have made contact—clearly Campaign B should get more attention.

Here are some examples of how we assigned leads to different buckets:

Example 1

  1. This lead starts life in the Outbound SDR Assisted bucket, as their first touches are by an SDR.
  2. A sales rep then reaches out, and those touches are recorded.
  3. However, the lead doesn’t convert, going more than 30 days with no touches and therefore losing its bucket.
  4. Marketing then touches the lead, starting a new sales cycle. The lead converts with help from sales, landing in the Inbound Sales Closed bucket.

Example 2

  1. This lead starts life in the Outbound SDR Assisted bucket (with marketing assist).
  2. The lead then loses this bucket after no touch for 30 days.
  3. A new sales cycle starts, and the lead closes as Inbound Product Closed, as their first touch is marketing and there are no sales touches prior to close.

We could now fill in the top section of our table (dummy numbers used here).

Challenge Number 2: How to figure out the revenue from each lead

Next, we needed to calculate two figures for each bucket:

  • Revenue over 360 days
  • Average revenue per active customer (ARPA) over 360 days

This section of the table represents the projected lifetime value (LTV) of our acquired customers. We needed to know the likely total revenue customers would bring in so that we could calculate ROI for each of our five buckets.

If the value of a customer in one bucket was significantly different from the others, it was important for us to know.

There are a few caveats we need to note:

  • When considering this section, we decided to use 12 months as the length of a customer’s lifetime. In reality, many of our customers continue spending far past 12 months—but we needed to pick a time period over which to look. For some companies, this would be far too short a period (e.g., a telecommunications company that signs up customers to a three-year contract). In our case, we felt that a 12-month period was long enough for us to be confident in our results, but not long enough to make our forecast unreliable. (If you go past 12 months, you also start getting into thorny territory with issues like net present value (NPV) which we wanted to avoid.)
  • The calculations that follow are a pretty simplistic way of projecting a customer’s LTV, but we didn't want to get bogged down in this part of the analysis. The purpose wasn't to get a perfectly accurate LTV projection—it was to determine the relative value of customers across our five buckets.
  • We at AdRoll have a lot of good historical data on which to base LTV calculations. If you’re a new business without historical data, you may want to ask your analyst to help here.

In order to determine total revenue per bucket, we needed two pieces of information:

  • Baseline revenue curve—This is the typical revenue curve you see in your business. In our case, we took the month-over-month spend of a similar historical customer cohort, starting with their month of first spend.
  • Revenue curve to date per bucket—This is the revenue curve you’ve seen so far for each bucket. To make an accurate forward forecast, you want some data on the early-life spend of each bucket you’re looking at. In our case, we had the first two-month spend of each bucket.

Once we had that information, the next step was to forecast how much revenue per bucket could be expected over 12 months.

To do this, we took each revenue curve to date per bucket and used our baseline revenue curve to predict future revenue.

To do the forecast, you need a table like this:

The yellow sections are the actual month-over-month spends for both your baseline and projected bucket.

You will need to work with an analyst to pull the actual numbers from your database. All this info can be easily pulled by anyone with basic SQL skills.

From there, you do the following:

  1. Calculate the variance between the baseline revenue curve and the bucket revenue curve in the most recent month. For example, in the table above, the actual bucket revenue is 97% of the baseline revenue—or 3% lower.
  2. Use this difference to predict your variance across the year. In this case, that means the projected bucket revenue falls a little further below the baseline revenue curve each month, giving us a 29% difference between baseline and projected bucket revenue by month 12.
  3. Multiply the baseline revenue curve by the variance to get the projected bucket revenue retention curve.
  4. Multiply the revenue retention curve percent for each month by the actual first-month bucket revenue to get a predicted bucket revenue for each month.
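The four steps above can be sketched as follows. This is one plausible reading of the method (the original table isn't reproduced here), in which the month-over-month variance compounds forward, so a 3% monthly shortfall grows to roughly 29% below baseline by month 12, consistent with the figures quoted above. All input numbers are made up.

```python
def project_bucket_revenue(baseline_curve, bucket_actuals):
    """Project a bucket's 12-month revenue from a short run of actuals.

    baseline_curve: 12 monthly revenue figures from a historical cohort
    bucket_actuals: the bucket's actual monthly revenue so far (>= 2 months)
    """
    n = len(bucket_actuals)
    # Step 1: variance in the most recent actual month, relative to what the
    # baseline curve predicts given the bucket's first-month revenue.
    predicted = baseline_curve[n - 1] * bucket_actuals[0] / baseline_curve[0]
    variance = bucket_actuals[n - 1] / predicted  # e.g., 0.97 = 3% lower

    projected = list(bucket_actuals)
    retention = variance
    for month in range(n, 12):
        # Steps 2-4: compound the variance each month, apply it to the
        # baseline curve, and scale by the bucket's first-month revenue.
        retention *= variance
        projected.append(baseline_curve[month] / baseline_curve[0]
                         * bucket_actuals[0] * retention)
    return projected

# Made-up example: a flat baseline, and a bucket running 3% under it.
baseline = [1.0] * 12
actuals = [100.0, 97.0]
curve = project_bucket_revenue(baseline, actuals)
total_12_month_revenue = sum(curve)
```

In this toy example, month 12 lands at about 71.5% of the first month, roughly 29% under the baseline, matching the worked figures in the steps above.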

Effectively, what this approach does is forecast each bucket revenue curve against the baseline revenue curve—maintaining the initial two-month deviation in the curves across the lifetime. You can find the math we used here.

See the two revenue curves below.

As you can see, our bucket curve starts off under the baseline curve and continues as such when projected forward.

At this point, we could predict the total revenue over 12 months for each bucket! From there, we simply divided that number by the number of new customers to get the ARPA.

With these figures, we were already beginning to see new insights we hadn’t seen before. For example, Outbound SDR Assisted customers were almost twice as valuable as Inbound ones.

Challenge Number 3: How to assign a CPA to each deal

Finally, we needed to calculate two sets of figures regarding our costs:

  • Cost per team (i.e., cost per sales rep, SDR, and marketing team)
  • CPA per team

To calculate CPA for each bucket, we wanted to know where our sales rep, SDR, and marketing teams were spending their time and money.

Step 1—Figure out cost per weighted touch

  • If we knew how much each touch cost, we could then very easily multiply that by the number of touches in each bucket.
  • To get to a cost per weighted touch, we needed to divide our total spend by weighted touches.
  • To calculate total weighted touches, we counted each type of touch and then weighted them based on the relative time each activity would take, as listed below. For example, we assumed a call would take 15 times longer than writing an email.
    • Email—1
    • Call—15
    • Meeting—30
  • This gave us the total number of weighted touches, which we divided into total spend to get a cost per weighted touch.

Step 2—Split total cost based on number of weighted activities per bucket

  • Now all we had to do was split our weighted touches per bucket (which we could do from our underlying data set).
  • Then we just multiplied each bucket’s weighted touches by the cost per weighted touch to get a total cost per bucket.

We did this separately for our marketing spend, sales rep salaries, and SDR salaries. To get to a CPA, we simply needed to divide the cost per bucket by the number of new customers in each bucket in the period.
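Both steps can be sketched together in a few lines. The touch weights come from the list above; the spend figure and touch counts are made up for illustration.

```python
# Relative time weights from the text: a call ~15x an email, a meeting ~30x.
TOUCH_WEIGHTS = {"email": 1, "call": 15, "meeting": 30}

def cost_per_bucket(total_spend, touches_by_bucket):
    """Split one team's total spend across buckets by weighted touches.

    touches_by_bucket: {bucket: {"email": n, "call": n, "meeting": n}}
    Returns {bucket: cost}. Hypothetical sketch of steps 1 and 2 above.
    """
    weighted = {
        bucket: sum(TOUCH_WEIGHTS[t] * n for t, n in counts.items())
        for bucket, counts in touches_by_bucket.items()
    }
    cost_per_touch = total_spend / sum(weighted.values())        # Step 1
    return {b: w * cost_per_touch for b, w in weighted.items()}  # Step 2

# Made-up example: $10,000 of SDR salary split across two buckets.
costs = cost_per_bucket(10_000, {
    "Outbound SDR Assisted": {"email": 200, "call": 40, "meeting": 10},  # 1,100 weighted
    "Inbound Sales Closed":  {"email": 100, "call": 20, "meeting": 0},   #   400 weighted
})
```

Running this per team (marketing spend, sales rep salaries, SDR salaries) and dividing each bucket's cost by its new customers gives the CPA per bucket.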

Now we had everything we needed to calculate ROI for each bucket!

The Outcome: How to finish a full table of ROI

With a full table of our acquisition split by our four buckets, we could calculate the final ROI section:

  • ROI 360 Days %—[ARPA 360 Days] divided by [CPA Total]
  • ROI 360 Days $ per new customer—[ARPA 360 Days] minus [CPA Total]
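As a quick sanity check, the two formulas above in code, with dummy numbers:

```python
def roi_metrics(arpa_360, cpa_total):
    """ROI per bucket, per the two formulas above."""
    roi_pct = arpa_360 / cpa_total      # ROI 360 Days %
    roi_dollars = arpa_360 - cpa_total  # ROI 360 Days $ per new customer
    return roi_pct, roi_dollars

# Dummy numbers: a bucket with $6,000 ARPA and a $2,000 total CPA.
pct, dollars = roi_metrics(arpa_360=6_000, cpa_total=2_000)  # 3.0, 4000
```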

While the numbers above are dummy numbers, they tell a similar story to some of our actual key takeaways:

  • Inbound
    • Inbound acquisition closes at a higher rate and therefore has a lower CPA, leading to a very solid ROI.
    • Inbound Product Closed acquisitions close at a lower rate and have a lower ARPA than Inbound Sales Closed acquisitions. This is because a sales rep is neither helping them convert nor selling them multiple products. We realized we needed to do a better job of ensuring that sales teams work inbound leads effectively.
  • Outbound
    • Our lowest ROI bucket is Outbound Rep Only because it’s much more efficient to allow SDRs to do the heavy lifting, only getting the sales rep involved once the lead is interested in getting into a detailed negotiation.
    • When marketing assists the close on the outbound side, the conversion rate is significantly higher, reducing the CPA. We therefore budgeted to ensure that our marketing team is assisting on everything in the sales pipeline. This formed the basis of our account-based marketing strategy.

These types of insights have gone a long way, not only toward helping us make better budgetary decisions, but also toward showing the business that our marketing team takes ROI and commercial analysis seriously. I would encourage all marketing teams to get as deep into attribution analysis as possible. All companies are different, but I am hoping that you've taken something from our solution that you can apply to your own business.