EXPOSE / SPEND ANALYSIS

Where Your Marketing Budget Is Actually Going

Dawid Jozwiak · 12 min read

Can you tell me, right now, what each marketing dollar produced last quarter?

If the answer involves opening four spreadsheets, asking three people, and adding a caveat about “attribution being tricky” - you have a spend visibility problem. Not a data problem. A structural one. The Expose stage of the Growth Recon framework treats this as the first operational checkpoint: before you optimize anything, you map every dollar to the output it created. No output? That dollar is on trial. This deep-dive gives you the exact process to build that map, stress-test it, and make budget decisions that hold up under scrutiny.

Most marketing teams can tell you what they spent. Very few can tell you what they bought. The gap between those two numbers is where your growth stalls.

Step 1: Build the raw spend ledger

Start with the most boring artifact in marketing: a complete list of where money went. Not categories. Not buckets. Line items.

Pull every invoice, subscription, contractor payment, platform fee, and internal labor cost from the last two full quarters. Two quarters matters because one quarter can be distorted by a launch, a seasonal spike, or a vendor negotiation that shifted timing. Two quarters gives you a pattern.

Your ledger needs five columns:

  • Line item - the specific thing you paid for (e.g., “Google Ads - branded search,” not “paid media”)
  • Quarterly cost - fully loaded, including labor hours at blended rate
  • Primary output metric - what this spend was supposed to produce (leads, trials, demos, revenue)
  • Actual output - what it actually produced, measured
  • Cost per unit of output - total cost divided by actual output

That last column is where the conversation starts. Because when you see that your LinkedIn thought-leadership program costs $2,400 per marketing qualified lead while Google branded search delivers them at $85, the question stops being “should we do LinkedIn” and starts being “what exactly are we buying for $2,400 that we can’t get for $85.”
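The ledger arithmetic is trivial, but making it mechanical keeps it honest. A minimal sketch in Python - the line items, costs, and output counts below are illustrative placeholders, not real data; only the column structure matters:

```python
# Minimal spend-ledger sketch. Line items and figures are illustrative
# placeholders; only the five-column structure matters.
ledger = [
    {"line_item": "Google Ads - branded search", "quarterly_cost": 9000,
     "output_metric": "MQLs", "actual_output": 106},
    {"line_item": "LinkedIn thought leadership", "quarterly_cost": 24000,
     "output_metric": "MQLs", "actual_output": 10},
]

def cost_per_unit(entry):
    """Total cost divided by actual output; None when nothing measurable
    was produced -- that dollar goes on trial."""
    if entry["actual_output"] == 0:
        return None
    return entry["quarterly_cost"] / entry["actual_output"]

for entry in ledger:
    entry["cost_per_unit"] = cost_per_unit(entry)
```

Anything that returns None in that last column has no defensible output and goes straight to the kill-or-justify conversation.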

The labor trap

Most spend ledgers ignore labor. They shouldn’t. If a campaign requires 40 hours of design, 20 hours of copywriting, and 10 hours of project management per month, that’s real cost. At a blended rate of $75/hour, you’re looking at $5,250/month in labor before a single ad dollar runs.

A channel that shows $3,000/month in platform spend and produces 50 leads looks like a $60 cost per lead. Add the labor and it’s $165. That’s a different conversation entirely.

Blended rate calculation: take total compensation (salary + benefits + overhead allocation) for everyone who touches marketing, divide by productive hours per month (typically 140-150 after meetings, admin, and PTO). For most mid-market companies this lands between $65 and $95/hour.
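Both calculations can be sketched in a few lines. The team size and total compensation figure below are illustrative assumptions; the labor-trap numbers are the ones from the example above:

```python
def blended_rate(total_annual_comp, headcount, productive_hours=145):
    """Total loaded compensation (salary + benefits + overhead) divided by
    total productive hours -- 140-150 hours/person/month is typical after
    meetings, admin, and PTO."""
    return (total_annual_comp / 12) / (headcount * productive_hours)

# Illustrative: a 4-person team at $540,000 total loaded annual comp.
rate = blended_rate(540_000, headcount=4)  # lands in the $65-95 range

# The labor-trap example from the text, at a $75/hour blended rate:
labor_cost = (40 + 20 + 10) * 75            # $5,250/month of labor
platform_spend, leads = 3000, 50
naive_cpl = platform_spend / leads                   # $60 per lead
loaded_cpl = (platform_spend + labor_cost) / leads   # $165 per lead
```

The gap between the naive and loaded figures is the conversation most spend reviews never have.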

Step 2: Construct the spend-output matrix

The ledger gives you raw data. The matrix gives you a decision framework.

Build a 2x2 where the X-axis is cost per unit of output (low to high) and the Y-axis is volume of output (low to high). Every line item from your ledger goes into one of four quadrants:

High volume, low cost (Scale) - These are your workhorses. They produce results efficiently. The only question is whether the output quality holds as you increase spend. For most companies, branded search, email to existing lists, and referral programs live here.

High volume, high cost (Evaluate) - Producing at scale but expensively. Often these are channels where competition has driven up cost per click or where your creative has fatigued. The fix is usually operational - better targeting, refreshed creative, landing page optimization - not abandonment. Paid social prospecting frequently ends up here.

Low volume, low cost (Test) - Cheap but small. These are either early-stage experiments that need more investment to prove out, or niche channels with a natural ceiling. The question: is the ceiling real or have you just not pushed hard enough? Content SEO in its first six months often sits here.

Low volume, high cost (Kill or justify) - Expensive and small. This is where sacred cows graze. Conference sponsorships, vanity PR placements, the agency retainer nobody reviews. Every item in this quadrant needs a written justification that goes beyond “we’ve always done it” or “it’s good for brand.” If the justification can’t tie to a measurable outcome within two quarters, cut it.

Template - use this to categorize your channels:

Channel                   Quarterly Cost   Output (Customers)   Cost Per Customer   LTV:CAC   Quadrant
Google Branded Search     $9,000           45                   $200                8.2:1     Scale
Email to Existing List    $2,400           32                   $75                 12.1:1    Scale
LinkedIn Prospecting      $28,000          12                   $2,333              2.1:1     Evaluate
Paid Social (Facebook)    $15,000          8                    $1,875              2.9:1     Evaluate
Content SEO               $6,000           4                    $1,500              4.8:1     Test
Industry Conference       $22,000          3                    $7,333              1.4:1     Kill/Justify
PR Agency Retainer        $18,000          0 measurable         N/A                 N/A       Kill/Justify

Fill in your own numbers. The quadrant assignment happens automatically once you see cost-per-customer and volume side by side.
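The assignment logic is simple enough to automate. A sketch - the cost and volume thresholds are assumptions you set for your own ledger (the medians across your channels are a reasonable starting point):

```python
def quadrant(cost_per_customer, customers, cost_threshold, volume_threshold):
    """Place a channel in the 2x2. Thresholds are yours to choose --
    medians of your own ledger are a reasonable default."""
    high_cost = cost_per_customer > cost_threshold
    high_volume = customers >= volume_threshold
    if high_volume and not high_cost:
        return "Scale"
    if high_volume and high_cost:
        return "Evaluate"
    if not high_volume and not high_cost:
        return "Test"
    return "Kill/Justify"

# e.g., Google branded search ($200/customer, 45 customers) -> "Scale"
# with illustrative thresholds of $1,600 cost and 8 customers volume.
```

Run every ledger row through it each quarter and the matrix rebuilds itself.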

How to handle “brand” spend

Someone will argue that certain spend “builds brand” and can’t be measured by direct return on ad spend. They’re partly right and mostly using it as a shield.

Brand spend is measurable. Not by last-click attribution, but by proxy metrics that compound over time: branded search volume (are more people searching for you by name?), direct traffic trends, organic share of voice, and win rates in competitive deals. If your “brand building” spend isn’t moving any of these proxies over a six-month window, it’s not building brand. It’s burning budget.

Set a hard limit: no more than 15% of total marketing budget goes to spend that’s measured by proxy metrics rather than direct output. Everything else needs to show cost-per-output numbers or it moves to the kill quadrant.

Step 3: Normalize across channels

Raw cost-per-lead comparisons lie. A $40 lead from Facebook and a $400 lead from a niche industry event are not comparable until you follow both through the funnel.

If the Facebook lead converts to sales-qualified at 3% and the event lead converts at 35%, your actual cost per SQL is:

  • Facebook: $40 / 0.03 = $1,333 per SQL
  • Event: $400 / 0.35 = $1,143 per SQL

Now factor in close rates. If SQLs from Facebook close at 8% and event SQLs close at 22%:

  • Facebook: $1,333 / 0.08 = $16,667 per customer
  • Event: $1,143 / 0.22 = $5,195 per customer

The “expensive” channel is roughly three times cheaper per customer. This is why top-of-funnel metrics are dangerous without downstream normalization. A team optimizing for lead volume will pour money into Facebook. A team optimizing for customer acquisition cost will buy event booths.
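The funnel walk generalizes to any channel pair: divide the top-of-funnel cost by each conversion rate in turn. This sketch reproduces the Facebook-versus-event numbers from above:

```python
def cost_per_customer(cost_per_lead, lead_to_sql_rate, sql_close_rate):
    """Walk a top-of-funnel cost down the funnel by dividing by each
    conversion rate in turn."""
    cost_per_sql = cost_per_lead / lead_to_sql_rate
    return cost_per_sql / sql_close_rate

facebook = cost_per_customer(40, lead_to_sql_rate=0.03, sql_close_rate=0.08)
event = cost_per_customer(400, lead_to_sql_rate=0.35, sql_close_rate=0.22)
```

Swap in your own rates per channel; the function doesn't care which stage names you use as long as the rates multiply down to a customer.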

The normalization stack

For every channel, calculate five numbers:

  1. Cost per lead (CPL) - total channel cost / leads generated
  2. Cost per SQL - total channel cost / SQLs attributed
  3. Cost per customer - total channel cost / customers acquired
  4. LTV:CAC ratio - average lifetime value of customers from this channel / cost to acquire them
  5. Payback period - months until revenue from acquired customers covers acquisition cost

A channel with a 6:1 LTV:CAC and a 4-month payback is fundamentally different from a channel with a 6:1 LTV:CAC and a 14-month payback - even though the ratio is identical. Cash flow matters. A SaaS company burning runway can’t afford 14-month payback channels regardless of eventual return.

Build this stack for every channel with enough volume to be statistically meaningful (minimum 30 conversions per quarter as a baseline). Anything below that threshold is either too new to evaluate or too small to matter.
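The five numbers fit in one function. A sketch - the lead and SQL counts, average LTV, and monthly revenue per customer below are illustrative assumptions layered onto the branded-search row from the template:

```python
def normalization_stack(cost, leads, sqls, customers, avg_ltv,
                        monthly_rev_per_customer):
    """The five per-channel numbers. Ratios that would divide by zero
    come back as None -- a channel with no customers has no CAC."""
    stack = {
        "cpl": cost / leads if leads else None,
        "cost_per_sql": cost / sqls if sqls else None,
        "cac": cost / customers if customers else None,
    }
    if stack["cac"]:
        stack["ltv_cac"] = avg_ltv / stack["cac"]
        stack["payback_months"] = stack["cac"] / monthly_rev_per_customer
    else:
        stack["ltv_cac"] = stack["payback_months"] = None
    return stack

# Branded-search row from the template; lead/SQL counts, LTV, and
# monthly revenue are illustrative assumptions.
example = normalization_stack(9000, leads=106, sqls=60, customers=45,
                              avg_ltv=1640, monthly_rev_per_customer=50)
```

A $200 CAC against a $1,640 LTV gives the 8.2:1 ratio from the template, with a 4-month payback at $50/month of revenue per customer.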

Step 4: Identify the spend-output gaps

With your matrix built and channels normalized, three patterns will emerge:

Pattern 1: The subsidized underperformer. One channel is dramatically more expensive per customer than others but keeps getting funded because it was the first channel that worked, or because a senior leader championed it, or because the team running it is politically untouchable. This is the most common gap. It exists in roughly 70% of marketing budgets we’ve audited. The median waste: 25-35% of the channel’s budget could be reallocated to higher-performing channels with no loss in output.

Pattern 2: The underfunded winner. A channel is producing efficiently but receiving a fraction of the budget. Usually this happens because the channel is “boring” (email, SEO, referral) or because it was introduced recently and hasn’t yet earned trust. The test: increase budget by 30% for one quarter. If cost per unit of output stays within 15% of baseline, you have a scale channel that was being starved.

Pattern 3: The measurement black hole. Certain line items can’t be placed in the matrix because nobody tracks their output. Not “hard to attribute” - literally not tracked. This is more common than anyone admits. Trade show sponsorships where nobody scans badges. Content partnerships where no UTM parameters exist. Display campaigns where viewability isn’t measured. These aren’t unmeasurable. They’re unmeasured. Fix the tracking or kill the spend. There is no third option.

Step 5: Build the reallocation model

Don’t present findings as “we should cut X.” Present them as “if we move $Y from channels producing at $Z per customer to channels producing at $W per customer, we get N additional customers for the same budget.”

The math is straightforward. If Channel A produces customers at $5,000 each and Channel B produces them at $2,000 each, every $10,000 moved from A to B yields 5 customers instead of 2. That’s a 150% improvement in output for zero additional spend.

Build three scenarios:

Conservative (10% reallocation): Move 10% of underperforming channel budgets to top performers. This is the “nobody can object” scenario. Model the output gain.

Moderate (25% reallocation): Move a quarter of underperforming spend. This requires one or two sacred cows to get reduced. Model the output gain and the political cost.

Aggressive (40% reallocation): Eliminate or dramatically reduce the bottom-performing channels entirely. Redirect to proven winners. This requires executive sponsorship and a willingness to deal with pushback. Model the output gain and the risk.

Present all three. Let the data advocate for itself. In our experience, most organizations end up somewhere between conservative and moderate in the first quarter, then move to moderate-aggressive once they see the first round of results.
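The scenario math is mechanical once the per-channel CACs exist. A sketch - the $28,000 underperformer and $200-CAC winner below reuse the template table's LinkedIn and branded-search rows as illustrations:

```python
def reallocation_gain(amount, source_cac, target_cac):
    """Net customers gained by moving `amount` of budget from a channel
    acquiring at source_cac to one acquiring at target_cac."""
    return amount / target_cac - amount / source_cac

# The example from the text: $10,000 moved from $5,000/customer to
# $2,000/customer buys 5 customers instead of 2 -- a net gain of 3.
baseline_gain = reallocation_gain(10_000, 5_000, 2_000)

# The three scenarios against an illustrative $28,000/quarter
# underperformer, redirected toward a $200-CAC winner:
scenarios = {
    label: reallocation_gain(28_000 * pct, source_cac=2_333, target_cac=200)
    for label, pct in [("Conservative", 0.10), ("Moderate", 0.25),
                       ("Aggressive", 0.40)]
}
```

Present the three scenario numbers side by side; the model does the advocating.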

The diminishing returns check

One risk in reallocation: assuming your top channels scale linearly. They don’t. Google branded search might produce leads at $85 each on a $10,000/month budget. At $30,000/month you’ve exhausted branded search volume and you’re bidding on broader terms at $200/lead. You didn’t scale the channel - you changed the channel.

For every reallocation target, define the volume ceiling. Talk to the channel operator. Pull the historical curve of spend vs. output. Find the inflection point where efficiency degrades. That’s your maximum useful allocation, regardless of what the model says.
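One way to find that inflection point, assuming you can pull monthly (spend, output) pairs from the channel's history - the curve below is illustrative:

```python
def marginal_costs(history):
    """history: (monthly_spend, output) points sorted by spend. Returns
    (spend, incremental cost per unit) between consecutive points -- the
    knee of this curve is the maximum useful allocation."""
    return [(s1, (s1 - s0) / (o1 - o0))
            for (s0, o0), (s1, o1) in zip(history, history[1:])]

# Illustrative branded-search curve: efficient to ~$10k/month, then
# degrading as spend spills into broader, pricier terms.
history = [(5_000, 60), (10_000, 118), (20_000, 160), (30_000, 180)]
curve = marginal_costs(history)  # marginal cost climbs ~$86 -> $238 -> $500
```

The average cost per lead still looks healthy at $30k/month; the marginal cost tells you the last $10k bought leads at $500 each.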

Step 6: Set up the spend monitoring cadence

A one-time audit is useful. A recurring system is transformative.

Weekly: Automated dashboard pulls cost-per-output for every active channel. Flag anything that moves more than 20% from baseline. No meeting needed - just a Slack alert or email digest.

Monthly: Thirty-minute spend review with channel owners. Three questions per channel: What did we spend? What did we get? Is the cost-per-output trending up, down, or flat? Anyone whose channel is trending up by more than 10% month-over-month presents a diagnosis and a fix.

Quarterly: Full matrix rebuild. New normalization stack. Updated reallocation model. This is where you make budget moves. Not in annual planning (too slow) and not weekly (too reactive). Quarterly gives you enough data to detect real trends and enough time to respond.
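The weekly flag is a few lines once cost-per-output lands in a dashboard feed. The channel names and numbers below are illustrative:

```python
def flag_moves(current, baseline, threshold=0.20):
    """Channels whose cost-per-output moved more than `threshold` from
    baseline, in either direction. Feed the result to a Slack alert or
    email digest -- no meeting needed."""
    flagged = {}
    for channel, cost in current.items():
        base = baseline.get(channel)
        if base:
            change = (cost - base) / base
            if abs(change) > threshold:
                flagged[channel] = change
    return flagged

# Illustrative week: branded search drifts up 41%; email holds steady.
alerts = flag_moves({"branded_search": 120, "email": 78},
                    {"branded_search": 85, "email": 75})
```

Flagging both directions matters: a cost that drops 40% is either a win to scale or a tracking break to investigate, and both deserve a look.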

What “good” looks like

Benchmarks are dangerous because they vary wildly by industry, deal size, and business model. But directional targets help:

  • B2B SaaS ($20K-$100K ACV): Blended CAC of $3,000-$8,000. LTV:CAC of 3:1 minimum, 5:1 target. Payback under 12 months.
  • B2B SaaS (< $20K ACV): Blended CAC of $500-$2,000. LTV:CAC of 4:1 minimum. Payback under 6 months.
  • E-commerce (DTC): Blended CAC of $30-$80 on first purchase. ROAS of 3:1 on paid, 8:1+ on email/organic. Factor in churn - repeat purchase rate determines whether that first-purchase CAC is viable.
  • Agency / services: Blended CAC of $1,000-$5,000. Close attention to payback period because revenue is typically recognized over months, not upfront.

If your numbers are materially worse than these ranges and you’re in a competitive market, your spend-output matrix will tell you exactly where the drag is.

The metrics that lie to you

Not all outputs are created equal. Some of the most commonly tracked vanity metrics in marketing actively distort spend analysis:

Impressions - tell you nothing about attention, intent, or action. A million impressions at zero clicks is a million people who ignored you. Never use impressions as an output metric for spend analysis.

Social engagement - likes, shares, comments. Correlated with reach, uncorrelated with revenue in most B2B contexts. A post with 500 likes and zero pipeline contribution is content marketing theater.

MQL volume without quality scoring - the most dangerous metric in B2B marketing. If your MQL definition is “downloaded a whitepaper,” you’re counting interest as intent. Every $50 MQL that sales rejects costs more than the $50 - it costs sales time, CRM clutter, and trust between departments. Quality-weight your MQLs or stop counting them.

Website traffic - a directional indicator, not an output metric. Traffic without conversion context is noise. Ten thousand visitors who don’t convert cost you server fees and nothing else.

Use these as diagnostic inputs, not as output metrics in your spend matrix. The only outputs that belong in the matrix are actions that move toward revenue: qualified leads, pipeline created, deals closed, revenue booked, customers retained.

Running this for the first time

If you’ve never done a formal spend-output analysis, expect the first round to take 2-3 weeks of calendar time. The data gathering is the bottleneck - most companies don’t have a single source of truth for marketing spend, and getting accurate labor allocation requires conversations with people who aren’t used to tracking their time by channel.

Start imperfect. A directional spend matrix with estimated labor costs is better than no matrix at all. Precision improves with each quarterly iteration. The first round typically finds 15-25% of budget that’s either unmeasured or dramatically underperforming. That’s not a failure - it’s the baseline every company starts from.

The companies that actually close the gap are the ones that run this as a system, not a project. A one-time audit creates a report. A quarterly cadence creates accountability. Accountability is what changes behavior, and changed behavior is what changes results.

Where this fits in RECON

Spend analysis is the operational engine of the Expose stage, but it doesn’t work in isolation. The Research stage gives you the ideal customer profile and market data that determines which output metrics actually matter - there’s no point optimizing cost-per-lead if you’re attracting the wrong leads. Research tells you who to target; Expose tells you what it’s costing you to reach them.

From here, the findings feed directly into Convert. Once you know which channels produce customers efficiently, the Convert stage focuses on improving the mechanisms that turn attention into action - landing pages, offers, onboarding sequences, trial experiences. You’re not converting more of everything. You’re converting more of the traffic that Expose proved is worth paying for.

The spend-output matrix also creates the baseline for Optimize. You can’t measure improvement without a starting point. Every reallocation, every channel adjustment, every vendor renegotiation creates a before-and-after that Optimize tracks over time.

And finally, Navigate uses the cumulative data from your spend analyses to model future scenarios. If you know that your best channel produces customers at $2,000 with a 4-month payback, you can model what happens when you double the budget, enter a new market, or face a competitor’s price war. You’re not guessing. You’re projecting from a system that’s been stress-tested quarterly.

The spend analysis isn’t a one-stage exercise. It’s the financial backbone that every other stage of RECON depends on. Get it right here, and every decision downstream gets sharper.