Optimize
"Build the system that keeps it working - then reapply the framework to catch what you missed."
Optimize Loop
Optimize builds the operating system that keeps everything working - reporting rhythms, accountable processes, testing discipline, and a feedback loop that reapplies the framework to catch what changed. This isn't a one-time optimization. It's the system that prevents decay.
Operating Rhythm
Why it matters
Without a rhythm, accountability decays within weeks. Meetings become status updates instead of decision points. Reports get generated but never acted on. People default to whatever feels urgent instead of what actually matters. The rhythm is the heartbeat - lose it and the body stops functioning.
How to do it
Level 1 - Define metrics per role
What does each person need to know, how often, to do their job and make decisions? The CMO needs different data than the content manager. The SDR team needs different signals than the demand gen lead. Map metrics to roles, not to dashboards.
Level 2 - Design reporting cadence
Three tiers, each with a purpose:
- Daily - blockers and priorities only. What's stuck? What needs unblocking? Two minutes per person, max.
- Weekly - decisions. What did the data say? What are we changing? What are we committing to for next week? Every meeting ends with assigned actions.
- Monthly - strategy. Are we tracking toward quarterly goals? Do the goals still make sense? What did we learn that changes our approach?
Each report has a purpose, a recipient, and an expected action. If no action follows from a report, the report shouldn't exist.
Level 3 - Build meeting structure that produces decisions
Every meeting produces a decision or an action. No status-update meetings. If it's just "what did you do this week?" - that's an email or async update. Meeting templates include: agenda (pre-sent), decisions to be made, actions assigned, recorded outcomes.
Operating Rhythm Checklist
- Defined key metrics for each role
The CMO needs different data than the content manager. Map metrics to roles, not to dashboards.
- Mapped each metric to the specific decision it informs
If a metric doesn't inform a decision, it's noise. Every metric earns its spot or gets removed.
- Designed daily reporting format (blockers/priorities only)
Two minutes per person, max. What's stuck? What needs unblocking? No status updates.
- Designed weekly decision meeting format with required outputs
What did the data say? What are we changing? Every meeting ends with assigned actions, not general agreement.
- Designed monthly strategy review format tied to quarterly goals
Are we tracking? Do goals still make sense? What did we learn that changes our approach?
- Eliminated all status-update-only meetings
If it's just 'what did you do this week?' - that's an email. Meetings produce decisions or don't exist.
- Created meeting template with required outputs (decisions/actions)
Agenda pre-sent, decisions to be made listed, actions assigned, outcomes recorded. Every time.
- Assigned metric ownership - one person per metric, no shared responsibility
Shared responsibility means no responsibility. One person owns each number. They know why it moved.
- Built dashboard accessible to all relevant roles
Transparency enables accountability. When everyone can see the numbers, behavior changes.
- Scheduled 30-day review of rhythm effectiveness
The rhythm itself needs evaluation. Is it driving decisions? If not, adjust. The system optimizes itself.
Common pitfall: weekly meetings where the same tasks "roll over" for 3+ weeks with no consequence. If tasks roll over, either the task isn't important (kill it), the person is blocked (unblock them), or there's no accountability (fix the system). Rolling tasks are a symptom, not a quirk.
Real-world example: Replace the 60-minute weekly marketing all-hands (status updates nobody acts on) with a 15-minute daily standup (blockers only) and a 30-minute weekly decision meeting (actions required, decisions made, recorded). Total meeting time: down 60%. Decisions made: up 300%.
Process Design
Why it matters
Good process gets followed because it makes work easier. Bad process gets tolerated, worked around, or ignored entirely. Most companies have processes that exist on paper but haven't been followed in months. That's not process - it's documentation theater.
How to do it
Level 1 - Audit existing processes
List every documented and undocumented process. Which ones do people actually follow? Which ones exist on paper but are ignored? Ask the people doing the work, not the people who wrote the document. There's always a gap.
Level 2 - Eliminate processes nobody follows
If a process has been ignored for 6+ months, it's not a process - it's documentation theater. Kill it. Fewer enforced processes beat a library of ignored ones. Every process you keep must earn its existence.
Level 3 - Design new processes with 3 tests
Every new process must pass three tests before it ships:
- Simple - can you explain it in 2 sentences? If it needs a 10-page SOP, it won't be followed.
- Trackable - can you see compliance without asking? If you have to survey people to know if it's working, it's not trackable.
- Accountable - what happens when it's not followed? If the answer is "nothing," it's not a process. It's a suggestion.
Process Design Checklist
- Inventoried all existing processes (documented and undocumented)
Ask the people doing the work, not the people who wrote the documents. There's always a gap between policy and practice.
- Identified which processes are actually followed vs. ignored
Be honest. A process ignored for 6+ months isn't a process - it's documentation theater.
- Eliminated all unused processes - removed from documentation
Fewer enforced processes beat a library of ignored ones. Every process you keep must earn its existence.
- Designed new processes passing all 3 tests (simple, trackable, accountable)
Simple: explain in 2 sentences. Trackable: see compliance without asking. Accountable: consequence for non-compliance.
- Documented in a single accessible location the team already uses
Put SOPs where the work happens - in the project management tool, not in a shared drive nobody opens.
- Trained team on new processes by doing, not by reading
Run the process together. Learning happens in doing, not in reading a document.
- Set up compliance tracking that doesn't require manual reporting
If you have to survey people to know if a process works, it's not trackable. Build visibility into the workflow.
- Scheduled 30-day process review to catch what's not working
New processes break. Schedule the review now so fixes happen before bad habits form.
Common pitfall: implementing a project management tool and calling it "process." A tool is not a process. Process is who does what, when, how it's tracked, and what happens when it's not done. Asana is a tool. "Every campaign request goes through the intake form, gets prioritized weekly, and has a 48-hour response SLA" is a process.
Real-world example: Team used Asana with 47 project boards. 41 hadn't been updated in 30+ days. Reduced to 3 boards, each tied to the ONE active objective. Completion rate: from "nobody knows" to 89% within 6 weeks. The tool was the same. The process changed.
Testing Discipline
Why it matters
"Let's see what happens" isn't a test. Without structure, you learn nothing from experiments. You just spend money and hope. Real testing produces knowledge - it tells you why something worked or didn't, not just that it did or didn't.
How to do it
Level 1 - Test framework
Every test needs four elements before it runs:
- Hypothesis - what we expect to happen and why
- Metric - how we'll measure success (one primary metric, not five)
- Timeline - when we'll make a decision (not "when we feel like we have enough data")
- Decision rule - what constitutes success and what constitutes failure, defined before the test starts
If you can't fill in all four, you're not ready to test. You're guessing, dressed up as structure theater.
Example: a filled-in test card might look like the sketch below - the campaign, metric, and thresholds are hypothetical placeholders.
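```python
# Hypothetical test card - every field filled in before the test runs.
# The campaign, numbers, and thresholds below are placeholders, not prescriptions.
test_card = {
    "hypothesis": (
        "Adding customer logos above the fold on the pricing page will lift "
        "demo requests, because prospects cite lack of social proof in exit surveys."
    ),
    "metric": "Demo-request conversion rate on the pricing page (one primary metric)",
    "timeline": "Decision at the weekly meeting after 4 full weeks of traffic",
    "decision_rule": "Ship if conversion improves by 15% or more; revert if flat or worse",
}
```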
Level 2 - Prioritization matrix
Score every test idea by impact (if this works, how big is the win?) multiplied by ease (how easy is it to run?). High impact + low effort runs first. Low impact + high effort gets killed or deprioritized. No emotional attachment to pet ideas.
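As a minimal sketch of the scoring, with invented ideas and 1-5 scores purely for illustration:

```python
# Hypothetical test ideas scored on the impact x ease matrix described above.
ideas = [
    {"name": "Rewrite homepage value proposition", "impact": 5, "ease": 4},
    {"name": "Change CTA button color",            "impact": 1, "ease": 5},
    {"name": "Rebuild pricing page from scratch",  "impact": 4, "ease": 1},
]

for idea in ideas:
    idea["score"] = idea["impact"] * idea["ease"]

# Highest score runs first; low-impact/high-effort ideas sink to the bottom or get killed.
for idea in sorted(ideas, key=lambda i: i["score"], reverse=True):
    print(f'{idea["score"]:>2}  {idea["name"]}')
```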
Level 3 - Documentation standard
For every completed test, record: what ran, what happened, what we decided, what we learned. Build a searchable archive. The point isn't bureaucracy - it's preventing re-running failed experiments because nobody remembers the last attempt.
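One lightweight way to keep the archive searchable - a sketch assuming a simple JSON Lines file; the file name and fields are illustrative, not a required format:

```python
# Minimal test archive: one JSON object per completed test, appended to a file.
import json
from pathlib import Path

ARCHIVE = Path("test_archive.jsonl")  # hypothetical location - put it where the team already works

def log_test(what_ran: str, what_happened: str, decision: str, learning: str) -> None:
    record = {"what_ran": what_ran, "what_happened": what_happened,
              "decision": decision, "learning": learning}
    with ARCHIVE.open("a") as f:
        f.write(json.dumps(record) + "\n")

def search(term: str) -> list[dict]:
    """Return every archived test mentioning the term - check before re-running anything."""
    if not ARCHIVE.exists():
        return []
    with ARCHIVE.open() as f:
        return [json.loads(line) for line in f if term.lower() in line.lower()]
```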
Testing Discipline Checklist
- Created test template with hypothesis/metric/timeline/decision rule
Every test needs all four before it runs. Can't fill them? You're not ready to test - you're guessing.
- Built prioritization matrix scoring impact × ease
High impact plus low effort runs first. Low impact plus high effort dies. No emotional attachment to pet ideas.
- Scored all pending test ideas - killed low-impact/high-effort ones
Be ruthless. A long list of mediocre tests wastes more time than testing nothing at all.
- Ran top-priority test with full framework applied
Walk the walk. Run one test with all four elements. Document what happens. Set the standard.
- Documented results and learnings in standard format
What ran, what happened, what we decided, what we learned. Four sections. Every time.
- Created searchable test archive accessible to the team
Prevents re-running failed experiments because nobody remembers the last attempt.
- Trained team on test process - no tests run without the template
The template is the minimum bar. If it can't be filled out, the test isn't ready.
- Established monthly test review cadence to assess pipeline and results
Review what ran, what's coming, and what the cumulative learnings tell you. Compound knowledge.
Common pitfall: running A/B tests without enough traffic. If your landing page gets 200 visits/month, you need 3-6 months for statistical significance on most tests. Test at the right level for your traffic. Test big things (value prop, page structure) on high-traffic pages. Test small things (button text, colors) only when you have the volume to detect a difference.
Real-world example: Instead of A/B testing button colors on a page with 150 monthly visitors, tested the entire value proposition on the homepage (high-traffic page). Clear winner in 2 weeks. Then tested details on the high-traffic winner. Big swings first, details later.
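To gauge whether a page has the traffic for a given test, a back-of-the-envelope check helps. The sketch below applies the standard two-proportion sample-size formula with hypothetical numbers (5% baseline conversion, hoping to detect a doubling); smaller lifts need far more traffic:

```python
from statistics import NormalDist

def visitors_per_variant(baseline, relative_lift, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant for a two-proportion z-test."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2) + 1

# Hypothetical inputs: 5% baseline conversion, hoping to detect a +100% lift (5% -> 10%).
n = visitors_per_variant(0.05, 1.00)
monthly_traffic = 200  # visits/month, split across two variants
print(f"~{n} visitors per variant, ~{2 * n / monthly_traffic:.1f} months at {monthly_traffic}/month")
```

At roughly 430 visitors per variant, even detecting a doubling takes about four months at 200 visits/month - which is why the big swings belong on high-traffic pages.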
Framework Reapplication
Why it matters
Markets change. People change. Competitors change. What worked 6 months ago may not work now. The framework is a loop, not a line. Companies that treat RECON as a one-time engagement watch their gains decay as the world shifts around them.
How to do it
Level 1 - Quarterly mini-audit
Review each RECON stage for 1 hour. Five hours total, once per quarter. What changed? Any new risks? Any new opportunities? Any processes degrading? This isn't a full reapplication - it's a check-up that catches drift before it becomes damage.
Level 2 - Annual full reapplication
Run the full framework again. New ICP data, new language audit, new adversarial assessment. The business is different now - new hires, new products, new competitors, new market conditions. Last year's Source Doc is last year's truth.
Level 3 - Trigger-based reapplication
Certain events should trigger an immediate partial or full reapplication, regardless of schedule:
- New competitor entering the market with aggressive positioning or pricing
- Major algorithm change (Google core update, social platform shift, AI search disruption)
- Leadership change - a new CMO, CEO, or VP of Sales changes priorities
- Merger or acquisition - ICP, messaging, and competitive landscape all shift
- Significant market shift - recession, regulation, category disruption
Don't wait for the quarterly check-up when a trigger event happens. By then, the damage is done.
Framework Reapplication Checklist
- Scheduled quarterly mini-audit dates for the next 12 months
One hour per RECON stage, once per quarter. Five hours total. Catches drift before it becomes damage.
- Assigned mini-audit ownership - one person drives each quarterly review
Without an owner, it won't happen. Assign it. Put it on the calendar. Make it non-negotiable.
- Created trigger list for immediate reapplication events
New competitor, major algorithm change, leadership change, M&A, market shift - don't wait for the quarterly check.
- Scheduled annual full reapplication on the calendar
New ICP data, new language audit, new adversarial assessment. Last year's Source Doc is last year's truth.
- Documented what changed since the last audit cycle
Track the delta. What's different now? This creates institutional memory and trend awareness.
- Updated The Source Doc with new findings from each review
The Source Doc is a living document. If it's not updated, it becomes historical fiction.
- Reviewed and updated adversarial assessment with current threats
Threats evolve. New competitors emerge. Dependencies shift. Check every quarter.
- Communicated all changes to stakeholders - no silent updates
Silent updates breed confusion. When something changes, tell everyone who needs to know.
Common pitfall: treating the initial RECON engagement as "done." The framework is a loop, not a line. Companies that skip reapplication see gains decay within 6-12 months as the market shifts and internal discipline fades. The work isn't finished when the first pass is complete. It's finished when the system runs itself.
Real-world example: Company completed RECON, grew 40% in year one. Skipped reapplication. Year two, a new competitor entered with aggressive pricing. They didn't see it until pipeline dropped 25%. A quarterly adversarial assessment would have flagged the competitor at market entry, not at revenue impact.
The Output: Self-Running Operating System
Optimize produces a business that runs without constant intervention. Not autopilot - but a system where the right people see the right data, make decisions on a rhythm, follow processes that work, test with discipline, and catch drift before it becomes decay.
What Optimize Delivers
- Reporting Rhythm - daily/weekly/monthly cadence tied to decisions, not dashboards
- Accountable Processes - simple, trackable, with consequences for non-compliance
- Testing Discipline - structured experimentation with documented learnings
- Feedback Loop - quarterly mini-audits, annual reapplication, trigger-based reviews
- Institutional Knowledge - searchable archive of decisions, tests, and results
This is what separates a living system from a one-time consulting engagement. The operating system feeds directly into Navigate - where the team learns to run it themselves.