I have sat through dozens of attribution model debates and come to a firm conclusion: every attribution model is wrong, most are also useless, and the energy spent perfecting them would be better invested in improving the marketing itself.
Last-click attribution tells you which channel received credit for the final touch before purchase. Multi-touch attribution tells you how to distribute credit across all touchpoints according to a model that someone invented in a spreadsheet. Data-driven attribution claims to use machine learning to figure out which touchpoints actually caused conversion. None of them tell you whether the marketing is actually working. They tell you stories about credit.
The dashboard I use has three numbers. You can track them in a spreadsheet. They do not require an attribution vendor. They do not require a sophisticated analytics setup. They do require discipline about what you measure and what you ignore.
Why Attribution Models Fail in Practice
The fundamental problem with last-click: A customer discovers your product through a LinkedIn post, reads three blog articles over two weeks, sees a retargeting ad twice, and then searches your brand name on Google and converts via a paid brand keyword. Last-click gives 100% of the credit to Google branded search. The LinkedIn post, the blog content, and the retargeting campaign get nothing. You optimize toward branded paid search and defund everything that was actually creating demand.
The fundamental problem with multi-touch: To allocate credit across touchpoints you need to track every touchpoint. Across channels with different tracking windows, different identifiers, iOS privacy changes, cross-device journeys, and offline touchpoints, you cannot track every touchpoint. The multi-touch model allocates based on the touchpoints it can see, not the ones that actually happened. You get a sophisticated-looking model built on incomplete data.
The fundamental problem with data-driven attribution: It requires conversion volume that most SMB and mid-market accounts do not have. Google's data-driven model recommends a minimum of 300 conversions per month per campaign before its estimates carry statistical confidence. Most accounts are not at that volume. Below that threshold, data-driven attribution is pattern-matching on noise.
The Three Numbers
Here is the dashboard I keep. It fits on a whiteboard.
Number 1: Revenue per dollar of total marketing spend (ROAS or revenue-to-spend ratio).
Total revenue attributed to new customer acquisition, divided by total marketing spend including all channels, agencies, tools, and people. Not platform ROAS. Not last-click ROAS. Total business revenue from new customers divided by total marketing cost.
This number is immune to attribution gaming because it counts everything as spend and everything as revenue. You cannot optimize one channel at the expense of another and have this number improve. If it improves, the marketing is working. If it holds flat while platform-reported ROAS improves, someone is gaming the attribution model.
For a direct-to-consumer client with $140,000 in new customer revenue and $22,000 in total marketing spend: revenue-to-spend ratio is 6.4x. Track this monthly. A 10% sustained improvement over a quarter is real. A 30% improvement in one month usually means a definitional change, not a marketing improvement.
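If you prefer the calculation in code rather than a spreadsheet cell, here is a minimal Python sketch. The figures mirror the client example above; the 30% single-month flag is my rule of thumb, not a standard.

```python
# Minimal sketch: revenue-to-spend ratio, tracked monthly.
# Assumes total new-customer revenue and total marketing spend
# (channels + agencies + tools + people) are already aggregated.

def revenue_to_spend(new_customer_revenue: float, total_spend: float) -> float:
    """Total business revenue from new customers / total marketing cost."""
    return new_customer_revenue / total_spend

# The direct-to-consumer example from above:
ratio = revenue_to_spend(140_000, 22_000)
print(f"Revenue-to-spend ratio: {ratio:.1f}x")  # 6.4x

# A 30% single-month jump usually means a definitional change,
# not a marketing improvement; flag it for review.
def suspicious_jump(prev_ratio: float, curr_ratio: float, threshold: float = 0.30) -> bool:
    return abs(curr_ratio / prev_ratio - 1) >= threshold

print(suspicious_jump(6.4, 8.5))  # True: check what changed in the definitions
```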
Number 2: New customer count per month.
Total new customers acquired in the month. Not leads. Not trials. Not "qualified opportunities." Customers who paid money. This number is the output of all marketing activity combined. It does not care about attribution. It does not require any model. You count the customers.
Track this against the previous month and the same month last year. A rising new customer count alongside a flat or declining revenue-to-spend ratio usually signals declining average order value or promotional discounting. Both are things you want to know about immediately.
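Counting is the whole method, so the only code you need is a tally. A sketch, assuming a CRM or payments export with one row per customer and a `first_payment_date` column in ISO format (a hypothetical field name; adapt to your export's schema):

```python
# Sketch: new customers per month from a CRM/payments export.
import csv
from collections import Counter

def monthly_new_customers(path: str) -> Counter:
    counts: Counter = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # Assumes ISO dates (YYYY-MM-DD); take the "YYYY-MM" prefix.
            month = row["first_payment_date"][:7]
            counts[month] += 1
    return counts

counts = monthly_new_customers("customers.csv")
# Compare against the previous month and the same month last year:
print(counts["2025-04"], counts["2025-03"], counts["2024-04"])
```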
Number 3: New customer CAC by cohort month.
Total marketing spend in a given month divided by the number of new customers acquired in that month. This is blended CAC across all channels. Calculate it every month and track the trend.
Rising blended CAC is a genuine signal that something is getting more expensive. It could be platform CPCs going up, creative fatigue in paid channels, organic traffic declining, or sales efficiency declining. You will not know the root cause from this number alone. But the trend tells you whether to start investigating.
| Month | New Customers | Total Spend | Blended CAC |
|---|---|---|---|
| January | 84 | $18,400 | $219 |
| February | 91 | $19,200 | $211 |
| March | 87 | $19,800 | $228 |
| April | 76 | $20,100 | $264 |
| May | 79 | $20,400 | $258 |
April is the month I investigate. CAC jumped 16% while new customer count dropped and spend increased slightly. That combination prompts a channel-level audit. I will look at whether a specific paid channel's efficiency changed, whether organic dropped, whether a new competitor entered. But the prompt to look comes from the dashboard, not from a platform attribution report.
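The audit trigger itself can be automated. A sketch using the table above, with a 10% month-over-month threshold that is a judgment call, not a standard:

```python
# Sketch: blended CAC trend from the table above, flagging months
# where CAC moves more than 10% month over month.
months = [
    ("January", 84, 18_400),
    ("February", 91, 19_200),
    ("March", 87, 19_800),
    ("April", 76, 20_100),
    ("May", 79, 20_400),
]

prev_cac = None
for name, customers, spend in months:
    cac = spend / customers
    if prev_cac is None:
        print(f"{name}: ${cac:,.0f}")
    else:
        change = cac / prev_cac - 1
        flag = "  <- investigate" if abs(change) > 0.10 else ""
        print(f"{name}: ${cac:,.0f} ({change:+.0%}){flag}")
    prev_cac = cac
# April prints with (+16%) and the investigate flag.
```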
Why Not More Numbers?
Because more numbers create more places to hide. An agency that reports 47 metrics is providing cover. With 47 metrics, something is always improving and you can tell a story about any period. With three metrics, if two of the three are declining, the marketing is declining. There is nowhere to hide.
I work with clients who come to me with multi-page agency reports showing green arrows next to click-through rates, impression share, quality scores, engagement rates, and conversion-rate improvements. Meanwhile, blended CAC has increased 40% over six months and new customer count is flat. The agency has optimized toward metrics it controls and away from the business outcome. A three-number dashboard would have surfaced this in month two.
How to Set This Up
The setup requires two data sources: your marketing spend records and your CRM or payments data. Nothing more sophisticated.
Spreadsheet structure:
Column A: Month
Column B: Total marketing spend (from invoices and platform reports)
Column C: New customers acquired (from CRM, not platform conversions)
Column D: Total new customer revenue (first-order revenue only)
Column E: Blended CAC (=B/C)
Column F: Revenue-to-spend ratio (=D/B)
Column G (optional): LTV:CAC ratio (link to a cohort table if you have LTV data)
The discipline is in column C. Use your CRM or payment processor as the source of new customer count, not Google Analytics, not Meta's conversion reporting, not any platform. Platforms overcount because they each claim credit for conversions that other platforms also claim. Your payment processor does not overcount. A person either bought or they did not.
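For anyone who wants the dashboard as code instead of a spreadsheet, here is a sketch under the same assumptions: monthly totals come from invoices and your payment processor, and column C never comes from platform conversion reporting. Field names and the example revenue figure are hypothetical.

```python
# Sketch: one dashboard row per month, two data sources.
from dataclasses import dataclass

@dataclass
class MonthRow:
    month: str            # Column A
    total_spend: float    # Column B (invoices + platform reports)
    new_customers: int    # Column C (CRM/payments, never platforms)
    new_revenue: float    # Column D (first-order revenue only)

    @property
    def blended_cac(self) -> float:       # Column E (=B/C)
        return self.total_spend / self.new_customers

    @property
    def revenue_to_spend(self) -> float:  # Column F (=D/B)
        return self.new_revenue / self.total_spend

# January from the table above; the revenue figure is hypothetical.
row = MonthRow("January", total_spend=18_400, new_customers=84, new_revenue=120_000)
print(f"CAC ${row.blended_cac:,.0f}, ratio {row.revenue_to_spend:.1f}x")
```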
The Arguments Against This Approach
"You cannot optimize channels without channel-level attribution." Partially true. I still look at channel-level data when I need to diagnose a blended CAC change. But I use channel-level data for diagnosis, not for executive reporting. The difference matters: channel-level attribution for diagnosis accepts that the numbers are approximations. Channel-level attribution for reporting creates incentives to optimize the approximation rather than the outcome.
"You will underinvest in channels that have long attribution windows." Also partially true. Brand awareness spend, content marketing, and social media all have longer attribution windows than the month-over-month new customer count captures. For businesses with significant brand awareness budgets, a lagged correlation analysis (does awareness spend in Q1 predict new customer count in Q2?) is the right supplement. For SMBs where 80% of spend is direct response, the three-number dashboard is sufficient.
What I Got Wrong
I spent years building and defending sophisticated attribution models for clients who wanted certainty about where to allocate budget. The models produced certainty. The certainty was false. The models told clients what they wanted to hear, which was usually that their largest channel was working and deserved more budget.
The three-number dashboard tells you less but lies less. That is a trade-off I make deliberately now. I would rather have three honest numbers than forty that collectively mislead.