Research
"Learn the business through the customer's eyes, not the company's ego."
Research Process
Research maps the business reality - who actually buys, how they describe the problem, whether the data is meaningful, and what can break - before any strategy is formed. It produces The Source Doc: the single source of truth that dictates every decision that follows.
ICP Mapping
Why it matters
You can't sell to "everyone." Different segments have different problems, budgets, timelines, and objections. Marketing to a $10K/year SaaS customer the same way you market to a $200K/year enterprise buyer wastes budget on both. Most companies have a vague idea of who buys - ICP mapping replaces that with data.
Skip this? Your ad spend targets everyone and converts no one. Every campaign downstream is a guess.
How to do it
Level 1 - Identify actual segments
Pull from CRM, payment data, and support tickets - not the sales team's gut feeling. Group by: revenue contribution, deal size, industry, company size, use case. The goal is to see who actually pays you, not who your pitch deck says should.
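If the closed-won export lands as a CSV, a minimal sketch like the one below surfaces the segments that actually pay. The file name and columns (account_id, contract_value, industry, company_size_band, use_case) are placeholders - map them to whatever your CRM actually exports.

```python
import pandas as pd

# Hypothetical export of last-12-months closed-won deals; adjust the file
# name and column names to match your own CRM
deals = pd.read_csv("closed_won_deals.csv")

# Group by the segmentation dimensions and rank by revenue contribution
segments = (
    deals.groupby(["industry", "company_size_band", "use_case"])
    .agg(
        customers=("account_id", "nunique"),
        revenue=("contract_value", "sum"),
        median_deal_size=("contract_value", "median"),
    )
    .sort_values("revenue", ascending=False)
)

# Revenue share per segment - the top rows are your real ICP candidates,
# whether or not they match the pitch deck
segments["revenue_share"] = (segments["revenue"] / segments["revenue"].sum()).round(2)
print(segments.head(10))
```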
Level 2 - Map each segment across 6 dimensions
- Demographics - industry, company size, role of buyer, geography
- Problem they're solving - in their words, not yours
- Buying trigger - the event that made them start looking
- Decision process - who's involved, who has veto, how long it takes
- Price sensitivity & budget authority - who signs off and what they compare against
- Objections & dealbreakers - what kills deals, what slows them down
Level 3 - Sources and output format
- CRM data - export closed-won deals from the last 12 months. Segment by the dimensions above.
- Sales call transcripts - search for "why did you start looking?" and "what almost stopped you?"
- Support tickets - categorize by segment. Different segments have different problems post-sale.
- Review sites (G2, Capterra) - read 1-star and 5-star reviews. The extremes reveal the truth.
Output: one row per segment with all 6 dimensions filled. No empty cells. If a cell is empty, you haven't done enough research on that segment.
ICP Mapping Checklist
- Exported and segmented last 12 months of closed-won deal data
Pull from CRM and payment data - not gut feeling. Group by revenue contribution, deal size, industry, company size, and use case.
- Identified top 3 revenue-contributing segments from payment/CRM data
The segments that actually pay you may not match your pitch deck. Revenue data reveals the truth.
- Mapped all 6 dimensions for each segment (no empty cells)
Demographics, problem, buying trigger, decision process, price sensitivity, objections. Empty cells mean incomplete research.
- Validated buying triggers with 3+ customer interviews per segment
Data shows who buys. Interviews reveal why they started looking and what almost stopped them.
- Documented decision process including who has veto power
B2B deals die when you miss a stakeholder. Map every person involved, their concerns, and their influence.
- Identified objections and dealbreakers per segment
Different segments have different fears. The enterprise buyer worries about security; the SMB worries about price.
- Compared ICP assumptions (pitch deck) vs. reality (data) - noted gaps
The gap between who you think buys and who actually buys is where your marketing budget disappears.
- Prioritized segments by revenue potential × ease of acquisition
Not all segments are worth equal effort. Focus resources where the math works best.
Building ICPs from the sales team's gut feeling instead of payment data. Sales remembers the big logos and the painful deals. They don't remember the 200 mid-market companies that closed in 2 weeks with zero drama - which might be 60% of your revenue.
Real-world example: Your sales team says the ICP is "Series B SaaS companies with 50–200 employees." Payment data shows 60% of revenue comes from bootstrapped companies under 50 employees who found you through a blog post about reducing churn. Different ICP, different content strategy, different channels.
Customer Language Audit
Why it matters
People search for problems, not your solution's name. If your website says "revenue intelligence platform" but customers say "I need to know why deals are dying" - you're invisible in search, invisible in AI answers, and your sales team translates on every call. The gap between your language and theirs is the gap between being found and being ignored.
Skip this? Your website speaks a language your customers don't use. Every ad, every landing page, every email talks past them.
How to do it
Level 1 - Collect language from 5 sources
- Support tickets - how customers describe problems when they need help
- Sales call transcripts (pre-education) - what prospects say before your team "corrects" their vocabulary
- Review sites (G2, Capterra) - unfiltered language from users who aren't talking to you
- Reddit/community threads - how people describe the problem to peers
- Churned customer interviews - why they left, in their words
Level 2 - Build a language map
Three-column table: Company Term | Customer Term | Context.
Example row: "Revenue intelligence" | "See why deals are dying" | Problem awareness stage. Every row represents a gap between how you talk and how your market talks.
Level 3 - Priority matrix
Score each gap by search volume, buying intent, and current coverage. Biggest gaps with highest intent = your content priorities. This is where your editorial calendar comes from - not brainstorming sessions.
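A minimal sketch of that scoring, assuming each gap is recorded with a monthly search volume plus 1-5 scores for buying intent and current coverage. The rows, numbers, and weighting below are illustrative placeholders, not a prescribed formula.

```python
# Illustrative scoring only - the gaps, volumes, and weighting are placeholders
gaps = [
    # (company term, customer term, monthly searches, buying intent 1-5, current coverage 1-5)
    ("revenue intelligence", "why deals are dying", 1900, 4, 1),
    ("cross-functional collaboration workflows", "projects falling through the cracks", 2400, 3, 2),
]

def priority(searches: int, intent: int, coverage: int) -> float:
    # Volume and intent push a gap up the list; existing coverage pushes it down
    return searches * intent / coverage

ranked = sorted(gaps, key=lambda g: priority(g[2], g[3], g[4]), reverse=True)
for company_term, customer_term, searches, intent, coverage in ranked:
    print(f"{customer_term!r:<45} score={priority(searches, intent, coverage):,.0f}")
```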
Language Audit Checklist
- Pulled and categorized language from 50+ support tickets
Support tickets capture how customers describe problems when they need help - unfiltered, emotional, real.
- Reviewed 20+ sales call transcripts for pre-education language
Focus on what prospects say BEFORE your team 'corrects' their vocabulary. That's the language your market uses.
- Read 30+ reviews across G2, Capterra, TrustRadius (1-star and 5-star)
Extremes reveal truth. 5-star reviews show what you do well. 1-star reviews show what you actually need to fix.
- Searched Reddit/communities for 10+ threads discussing the problem
How people describe the problem to peers - when you're not in the room - is the language that drives search.
- Conducted 5+ churned customer interviews focusing on their language
Why they left, in their words. Not your exit survey checkboxes - their actual explanation.
- Built language map with 20+ rows minimum
Three columns: Company Term | Customer Term | Context. Every row is a gap between how you talk and how your market talks.
- Identified top 10 gaps between company terms and customer terms
These gaps are where you're invisible - in search, in AI answers, and in every sales conversation.
- Scored gaps by search volume, buying intent, and current coverage
Biggest gaps with highest intent become your content priorities. This replaces brainstorming as your editorial calendar source.
- Mapped high-priority gaps to existing pages/content (or flagged as missing)
Content you already have may just need language alignment. Missing content gets added to the production queue.
- Shared language map with sales team - confirmed it matches what they hear
Sales is in the field daily. If the language map doesn't match their experience, something's missing.
Running a customer survey asking "How would you describe our product?" You'll get your own language reflected back. Customers mirror your terminology when talking TO you. You need language from when they're talking ABOUT the problem to peers, to Google, to Reddit - when you're not in the room.
Real-world example: A project management SaaS kept writing blog posts about "cross-functional collaboration workflows." Their top-converting organic page? "How to stop projects from falling through the cracks." That's how their customers describe the problem.
Data & Tracking Audit
Why it matters
You can't improve what you can't measure - but measuring the wrong things is worse than measuring nothing. It creates false confidence. Most businesses track too much noise and too little signal. The CMO sees a dashboard every Monday, nods, and goes back to what they were doing. That's theater, not data.
Skip this? You optimize based on broken numbers. Three months later, nobody can tell you if the redesign worked because there's no "before" snapshot.
How to do it
Level 1 - Audit what's tracked vs. what matters
List every tracking tool, every metric reported on, and who receives each report. For each metric, ask one question: "What decision does this metric inform?" If the answer is "none" or "I don't know" - it's noise.
Level 2 - Audit tracking implementation
- Analytics events (GA4, Mixpanel) - do they map to actual conversion points or just pageviews?
- UTM consistency - are parameters standardized or ad-hoc across campaigns?
- CRM lead source accuracy - what percentage is labeled generically ("Website", "Inbound")?
- Attribution model - does the team understand what model they're using and what it misses?
- Reporting cadence - who sees what, how often, and what happens as a result?
Level 3 - Specific things that break
Look for these exact problems - they exist in almost every company (a quick way to quantify two of them is sketched after this list):
- GA4 event names inconsistent across pages (same action, different name)
- UTMs created ad-hoc by whoever launches the campaign
- CRM says "Website" for 70% of leads - useless for attribution
- Attribution model is default last-click - nobody chose it, nobody questioned it
- Executive dashboard hasn't been challenged or updated in 18+ months
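Two of these are easy to quantify before you fix anything. A minimal sketch, assuming a CRM export with a lead_source column and a list of campaign landing URLs - the file and column names are placeholders for your own setup.

```python
from urllib.parse import urlparse, parse_qs
import pandas as pd

# Hypothetical exports - adjust file and column names to your CRM / campaign data
leads = pd.read_csv("crm_leads.csv")       # expects a 'lead_source' column
urls = pd.read_csv("campaign_urls.csv")    # expects a 'url' column of campaign landing URLs

# 1. What share of lead sources is generic and therefore useless for attribution?
generic = {"Website", "Inbound", "Other", "Unknown"}
print(f"Generic lead sources: {leads['lead_source'].isin(generic).mean():.0%}")

# 2. How many distinct spellings of utm_source are in use? Variants like
#    'linkedin', 'LinkedIn', 'linked-in' appearing separately = ad-hoc UTMs
utm_sources = (
    urls["url"]
    .map(lambda u: parse_qs(urlparse(u).query).get("utm_source", [None])[0])
    .dropna()
)
print(utm_sources.value_counts())
```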
Data & Tracking Audit Checklist
- Inventoried all tracking tools and mapped every metric to the decision it informs
List every tool and metric. For each, ask: 'What decision does this inform?' If the answer is 'none' - it's noise. Remove it.
- Verified analytics events match actual conversion points (not just pageviews)
Events that track pageviews instead of conversions create false confidence. Walk every conversion path and test manually.
- Audited UTM consistency and CRM lead source accuracy
Ad-hoc UTMs make attribution impossible. If 70% of CRM leads show 'Website' - you can't tell what's working. Fix both.
- Documented attribution model and verified team understands it
Most teams use default last-click because nobody chose it. Know what you're using, what it misses, and why it was selected.
- Audited every recurring report - who reads it and what they do with it
Interview 3+ report recipients. If the answer is 'look at it and move on,' the report is theater. Kill or redesign it.
- Identified tracking gaps and flagged misconfigured events
What's NOT tracked that should be? What's broken (duplicate events, stale UTMs, broken goals)? Gaps and misconfigs both produce wrong answers.
- Documented 'before' benchmarks for all key metrics
Without a baseline, you can never prove impact. Document everything now, before you change anything. Date it.
- Created naming conventions for events, UTMs, and lead sources
Consistency enables analysis. Without naming conventions, every new campaign creates a new data silo that breaks future reporting.
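An illustrative set of conventions - the exact patterns below are assumptions to adapt, not a standard to copy. What matters is that one documented format exists and every campaign, event, and lead source follows it.

```
utm_source    lowercase platform name            linkedin, google, newsletter
utm_medium    traffic type                       paid-social, cpc, email
utm_campaign  <year>-<quarter>_<campaign-slug>   2025-q1_churn-guide
events        object_action, snake_case          demo_request_submitted, pricing_page_viewed
lead source   specific origin, never "Website"   partner-docs, blog-churn-guide, g2-profile
```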
Assuming more tracking = better tracking. A company with 400 GA4 custom events and 12 dashboards isn't data-driven - they're data-drowning. More metrics without decisions is just more theater.
Real-world example: A B2B SaaS had "lead source" in their CRM. 68% showed "Website." Useless. After audit, they discovered the highest-converting leads came from a specific integration partner's documentation page. Nobody knew. They 3x'd the partner program budget.
Adversarial Assessment
Why it matters
Most strategy asks "what should we do?" The adversarial lens asks "what can break, and who benefits?" This is risk assessment - identifying dependencies, single points of failure, competitive vulnerabilities, and opportunities that only become visible from an attacker's perspective.
Skip this? Your competitor finds the vulnerability first. Or a single vendor change takes down 40% of your pipeline overnight.
How to do it
Level 1 - Threat categories
Evaluate the business across five threat categories:
- Competitive threats - what's defensible vs. easily copied? Who's gaining and why?
- Dependency risks - single-vendor or single-channel dependencies that could disappear
- Regulatory/compliance - what's changing in your industry that could restrict operations?
- Reputation risks - Glassdoor scores, outage scenarios, PR vulnerabilities
- Market timing risks - growth trajectory, category maturation, new entrant window
Level 2 - Risk vs. reward scoring
For each identified risk, score on four dimensions: Likelihood (1–5), Impact (1–5), Mitigation cost (low/medium/high), and Opportunity angle - because every risk has a flip side.
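A minimal sketch of the scoring step, assuming each risk is captured as a row with 1-5 likelihood and impact scores. The register entries below are placeholders drawn from the examples in this section.

```python
# Placeholder risk register - entries and scores are illustrative
risks = [
    {"risk": "73% of signups depend on organic search", "likelihood": 4, "impact": 5,
     "mitigation_cost": "medium", "opportunity": "build email list and a paid backup channel"},
    {"risk": "Core workflow copyable by a competitor within 6 months", "likelihood": 3, "impact": 4,
     "mitigation_cost": "high", "opportunity": "deepen integrations that are hard to replicate"},
]

# Likelihood x impact gives the combined score used to pick the top 5
for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

top_5 = sorted(risks, key=lambda r: r["score"], reverse=True)[:5]
for r in top_5:
    print(f"{r['score']:>2}  {r['risk']}  (mitigation cost: {r['mitigation_cost']})")
```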
Level 3 - Mitigation plans
Top 5 risks each get a one-paragraph plan: what you'd do, when you'd trigger it, estimated cost, and who owns it. These aren't theoretical - they're playbooks ready to execute.
Adversarial Assessment Checklist
- Completed competitive threat assessment - documented defensible vs. copyable
What can competitors replicate in 6 months? What can't they? Your strategy depends on knowing the difference.
- Identified all single-vendor and single-channel dependencies
73% of signups from organic search? One Google update away from crisis. Map every concentration risk.
- Mapped regulatory/compliance exposure relevant to the business
What's changing in your industry that could restrict operations? GDPR, AI regulation, industry-specific rules.
- Assessed reputation risks (Glassdoor, outage scenarios, PR)
Check Glassdoor scores, plan for outage scenarios, identify PR vulnerabilities before they become crises.
- Evaluated market timing - growth trajectory, new entrant risk
Is the market growing, maturing, or contracting? Where are you in that cycle? This shapes urgency.
- Scored all risks/opportunities on likelihood × impact matrix
Not all risks are equal. A high-likelihood, high-impact risk needs immediate attention.
- Identified top 5 risks by combined score
Focus on the vital few. You can't mitigate everything - prioritize what can actually hurt you.
- Written mitigation plan for each top-5 risk
Each plan: what you'd do, when you'd trigger it, estimated cost, who owns it. Ready to execute, not theoretical.
- Identified at least 2 opportunities from adversarial analysis
Every risk has a flip side. A competitor's weakness is your opportunity. A market shift is a land grab.
- Documented risk assessment with quarterly review cadence
Risks change. A one-time assessment decays fast. Schedule regular reviews to catch new threats early.
Skipping this because "we're too small for anyone to care." Small companies have the most concentrated risk - fewer channels, fewer customers, fewer fallbacks. One dependency fails, and there's no backup. The adversarial assessment matters MORE when you're small.
Real-world example: A SaaS got 73% of signups from organic search. Nobody flagged it as risk. Then a Google core update dropped their main page from position 2 to 14. Signups dropped 40% overnight. No paid acquisition, no email list, no partnerships. Adversarial assessment would have flagged single-channel dependency on day one.
The Output: The Source Doc
Everything from Research gets compiled into a single document. Not a slide deck. Not a Notion wiki with 40 sub-pages. One document that anyone on the team can read end-to-end in under an hour.
The Source Doc - Table of Contents
- Business Reality - what the company sells, who buys it, how money flows
- ICP Mapping Table - every segment, all 6 dimensions, no empty cells
- Customer Language Map - company terms vs. customer terms with context
- Data & Tracking Assessment - what's measured, what's broken, what's missing
- Adversarial Assessment - risks scored, mitigations planned, opportunities identified
- Key Findings - the 5–10 insights that will change how the business operates
- Recommended Priorities - feeds directly into the Expose stage
This document dictates design, content, campaigns, and priorities. Without it, everything that follows is guesswork dressed up as strategy. With it, every decision in the Expose, Convert, Optimize, and Navigate stages has a foundation.