I have audited a lot of marketing accounts on behalf of founders who suspected something was wrong. The accounts usually come with a reporting package: monthly PDFs, Google Data Studio dashboards, weekly email summaries. The reports uniformly look good. Green arrows. Positive trends. Improving benchmarks. The founder is dissatisfied with revenue growth but cannot identify why the marketing spend is not translating into revenue.
The answer, in most cases, is that the reports are not measuring whether the marketing is working. They are measuring the metrics that are easiest to make look good.
This is not usually fraud. It is a structural incentive problem. Agencies keep clients by showing positive trends. The metrics that consistently trend positive regardless of business outcome are activity metrics, vanity metrics, and platform-reported conversion numbers that favor the platform. Agencies default to these because they produce stable narratives. Founders accept them because they sound relevant.
The Metrics That Lie Most Often
Impression share. "We increased your impression share from 34% to 58% this month." A rising impression share means your ads appeared in a larger fraction of the auctions they were eligible for. It does not mean more people became customers. Impression share is a competition metric, not an outcome metric. It is worth tracking for specific strategic reasons (ensuring coverage for branded terms, dominating a specific keyword cluster). It is not a performance metric.
Engagement rate. "Your engagement rate increased 47% across social channels." Likes, shares, comments, and saves are not revenue. They are evidence that content received a reaction. The correlation between high engagement and revenue conversion is weak for most businesses. Content that gets high engagement but does not attract buyers is entertainment, not marketing. I have seen engagement rates climb consistently while email list growth, lead volume, and new customer acquisition all declined.
Platform-reported ROAS. This is the most dangerous one. Every advertising platform reports ROAS based on conversions it can attribute to itself. Meta's 7-day click, 1-day view attribution window means any conversion that happened within 7 days of an ad click or within 1 day of an ad view gets credited to Meta, even if the customer would have converted anyway from organic search or direct traffic. Platform ROAS consistently overstates true ROAS by 20-60% in the accounts I audit. The gap widens in multi-channel environments where every platform claims credit for the same conversion.
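To make the double-counting concrete, here is a toy calculation. The numbers are invented; the point is that each platform's claimed revenue, summed across platforms, can far exceed what your payment processor actually recorded.

```python
# Toy numbers, invented for illustration: each platform claims full credit
# for every conversion it touched, so the claims overlap and over-count.
platform_claimed_revenue = {   # revenue each platform's dashboard attributes to itself
    "meta": 48_000,
    "google_ads": 39_000,
    "tiktok": 12_000,
}
actual_new_customer_revenue = 61_000   # from the payment processor
total_ad_spend = 22_000                # all platforms combined

claimed = sum(platform_claimed_revenue.values())  # 99,000 -- more than actually occurred
print(f"Sum of platform-claimed ROAS: {claimed / total_ad_spend:.1f}x")           # 4.5x
print(f"True blended ROAS: {actual_new_customer_revenue / total_ad_spend:.1f}x")  # 2.8x
```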
Cost per click (CPC). A declining CPC looks like efficiency. Sometimes it is. More often, CPC declines when targeting broadens (cheaper, less qualified audiences), when ad rank drops (fewer premium placements), or when the platform's algorithm shifts spend to cheaper inventory that converts less well. CPC is an input metric, not an output metric. A CPC decline that correlates with declining conversion rate is a warning sign dressed as good news.
Follower growth. "We added 2,400 new followers this month." Unless your business model directly monetizes followers (media, creator economy, affiliate), follower count does not predict revenue. The businesses I work with that have grown fastest have focused on the email list, the customer count, and the revenue. Social following has been a consequence, not a driver.
The Three Metrics That Do Not Lie
Metric 1: New customers per month from paid channels (independently verified).
Not platform-reported conversions. The number of new customers who paid money in a given month, traced back to the paid channel that sourced them and verified against your CRM or payment processor data.
This number cannot be inflated by attribution window games because you are counting paying customers, not conversion events. It cannot be inflated by broad targeting because low-quality traffic that does not convert does not produce paying customers. It cannot be inflated by view-through attribution because you are tracking customers, not attributed events.
Track this monthly, by channel, going back as far as your data allows. A trend of declining new-paid-customers despite stable or growing ad spend is the single clearest signal that something is wrong, regardless of what the platform reports say.
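Here is a minimal sketch of that verification step, assuming you can export each customer's first successful charge from your payment processor and their source channel from your CRM. The field names and data are illustrative, not any particular vendor's schema.

```python
from collections import Counter
from datetime import date

# Payment processor export: first successful charge per customer (illustrative).
payments = [
    {"customer_id": "c1", "first_payment_date": date(2024, 3, 4)},
    {"customer_id": "c2", "first_payment_date": date(2024, 3, 18)},
    {"customer_id": "c3", "first_payment_date": date(2024, 4, 2)},
]
# CRM export: source channel recorded for each customer (illustrative).
crm_channel = {"c1": "google_ads", "c2": "meta", "c3": "google_ads"}

def new_paid_customers(payments, crm_channel, year, month):
    """Count customers whose first charge landed in the given month, by channel."""
    counts = Counter()
    for p in payments:
        d = p["first_payment_date"]
        if d.year == year and d.month == month:
            counts[crm_channel.get(p["customer_id"], "unknown")] += 1
    return counts

print(new_paid_customers(payments, crm_channel, 2024, 3))
# Counter({'google_ads': 1, 'meta': 1})
```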
Metric 2: Blended CAC, trended monthly.
Total marketing spend (all channels, all fees, all tools) divided by total new customers acquired (from your CRM, not the platform). I cover this in more detail in my writing on the three-number dashboard; the key point here is that blended CAC is immune to the platform attribution game, because neither the numerator nor the denominator comes from a platform dashboard.
The trend matters more than the absolute number in a single month. Blended CAC rising 10% per month for six months is a structural problem. Blended CAC rising sharply in one month due to a seasonal ad price spike is operational noise. The trend is the signal.
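A short sketch of the trend view. The spend figures are invented, chosen so the resulting CAC values line up with the example table in the next section; the inputs are all-in monthly spend and CRM-verified new customers.

```python
# (month, all-in marketing spend, CRM-verified new customers) -- invented figures
monthly = [
    ("Jan", 18_400, 84),
    ("Feb", 19_200, 91),
    ("Mar", 19_800, 87),
    ("Apr", 20_100, 76),
]

prev_cac = None
for month, spend, customers in monthly:
    cac = spend / customers
    change = f"{cac / prev_cac - 1:+.0%}" if prev_cac else "--"
    print(f"{month}: blended CAC ${cac:,.0f} ({change} vs prior month)")
    prev_cac = cac
# Jan: blended CAC $219 (-- vs prior month)
# Feb: blended CAC $211 (-4% vs prior month)
# Mar: blended CAC $228 (+8% vs prior month)
# Apr: blended CAC $264 (+16% vs prior month)
```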
Metric 3: Revenue from new customers acquired in the cohort month.
Total revenue attributed to customers who were first acquired in a given month, tracked at 30, 90, and 180 days from acquisition. This is the early-stage LTV measurement that tells you whether paid acquisition is producing customers who actually pay over time, not just customers who convert once and disappear.
A business where new-paid-customers is growing but 90-day revenue per cohort is declining is acquiring customers who churn or who were attracted by a promotional offer that does not reflect real willingness to pay. Platform-reported ROAS will not show you this. Cohort revenue tracking will.
| Month | New Paid Customers | 30-Day Rev | 90-Day Rev | 180-Day Rev | Blended CAC |
|---|---|---|---|---|---|
| Jan | 84 | $11,200 | $19,600 | $28,400 | $219 |
| Feb | 91 | $12,300 | $21,500 | $31,200 | $211 |
| Mar | 87 | $10,400 | $16,800 | $22,100 | $228 |
| Apr | 76 | $9,800 | $14,200 | N/A | $264 |
March is a warning month in this table. New customer count was roughly in line with February, but 30-day revenue per customer dropped from about $135 to $120, and 90-day revenue per customer fell from roughly $236 to $193. This suggests March cohort customers had lower willingness to pay or a different use case. Blended CAC also rose, from $211 to $228. The platform-reported ROAS in March looked fine because the conversion count was stable. The underlying economics were deteriorating.
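For reference, here is a sketch of the underlying cohort computation, assuming you have each customer's acquisition date and a full payment ledger. The data and names are invented; the logic is the part that matters: each payment counts toward a window measured from that customer's own acquisition date.

```python
from datetime import date, timedelta

# Invented data: acquisition dates for a March cohort, plus a payment ledger
# of (customer_id, payment_date, amount) from the payment processor.
acquired = {"c1": date(2024, 3, 4), "c2": date(2024, 3, 18)}
ledger = [
    ("c1", date(2024, 3, 4), 99), ("c1", date(2024, 5, 1), 99),
    ("c2", date(2024, 3, 18), 49), ("c2", date(2024, 8, 2), 49),
]

def cohort_revenue(acquired, ledger, days):
    """Revenue from each cohort customer within `days` of their own acquisition."""
    return sum(
        amount
        for cust, paid_on, amount in ledger
        if cust in acquired and paid_on <= acquired[cust] + timedelta(days=days)
    )

for window in (30, 90, 180):
    print(f"{window:>3}-day cohort revenue: ${cohort_revenue(acquired, ledger, window):,}")
#  30-day cohort revenue: $148
#  90-day cohort revenue: $247
# 180-day cohort revenue: $296
```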
How to Read a Report That Is Hiding Problems
When I receive a new client's existing agency report, I look for four things:
What is missing from the report? If a report covers CTR, CPC, impressions, and platform ROAS but does not include new customer count or revenue-to-spend ratio, the omissions are telling. These are harder to make look good, so they are omitted.
Is platform ROAS compared against independently verified revenue? A report that shows Meta ROAS of 4.2x alongside an independent revenue figure that implies 2.1x overall ROAS is telling you the attribution model is off. The gap should be explained, not buried.
Are trends shown over at least 6 months? Reports that show only the current month versus the previous month are designed to avoid trend visibility. Good reporting shows 6 to 12 months of trends. Seasonality is a reality in most businesses. A 6-month view makes seasonality visible; a month-over-month comparison hides it.
Is the attribution methodology stated explicitly? A report that does not specify what attribution model produced the conversion numbers is either using platform default attribution (which overstates) or does not know. Either is a problem.
What Good Reporting Actually Looks Like
Good reporting is boring. It shows three to five outcome-level metrics with 12 months of trend data, one channel-level diagnostic table (for root-cause analysis when a trend moves), and a plain-English "what we are testing this month and why" section.
The "what we are testing" section is the accountability mechanism. An agency that commits in writing to a specific test with a specific hypothesis and reports back on whether the hypothesis was confirmed is doing real work. An agency that reports only on what the platform measured is reporting on the output of an algorithm, not on their own decisions.
What I Got Wrong
I used to push back on vanity metrics gently, with caveats, trying not to damage the client relationship by being too direct about what the reports were actually showing. This was wrong. Founders who are making budget decisions based on misleading reports need direct information, not diplomatic hedging.
The right approach: show the client the three honest metrics alongside the agency report and let the gap speak for itself. If platform-reported ROAS is 4.8x and independent revenue-to-spend is 2.2x, that gap is the conversation. Once a client sees it clearly, the demand for honest reporting follows naturally.
The Uncomfortable Conclusion
Most performance marketing reports are not wrong in the sense of containing false data. They are wrong in the sense of measuring the wrong things. The solution is not to distrust your marketing entirely. It is to insist on three honest metrics alongside whatever else is in the report, and to make decisions based on those three numbers rather than the 47 that look great.
If your agency resists reporting on independent new-customer count and blended CAC, that resistance is a data point about the agency.