Intervention Playbooks


One playbook per CLEAR lever, each matched to its CIRCA condition. Each follows the same structure: when to use, step-by-step protocol, common mistakes, and calibration signals. Turn to the condition you have diagnosed; if you have not yet confirmed your diagnosis, start with the Pattern Recognition Table (A2).

Appendix C1 · Intervention Playbook
Clarity Lever
Matched to Contradictory conditions
60 min session to design · 2–4 weeks to implement · Decision velocity target: 10–50× improvement
Use this playbook when you see
  • Every priority labelled “critical”
  • Stakeholders pulling in three different directions
  • Meeting one goal violates another
  • Decisions reversed when stakeholders complain
  • Progress theatre — meetings but no decisions
  • Best people withdrawing from the problem
Why Clarity works

Contradictory conditions emerge from legitimate but incompatible stakeholder mandates. Clarity does not eliminate contradiction — it forces explicit choice about which stakeholder wins in which context. Teams can execute when they know the priority order, even if they disagree with it. Paralysis comes from ambiguity, not from the choices themselves.

60 minutes · Decision-makers + representative stakeholders · Neutral facilitator
Step 1 · List all “critical” priorities — 10 minutes
Every priority currently labelled critical goes on the list. No debate yet — just capture. Typical output: 8–15 items all marked critical.
Step 2 · Define decision criteria — 15 minutes
Ask: “What actually makes something more important?” Common criteria: revenue impact, customer-facing vs. internal, regulatory requirement vs. nice-to-have, time sensitivity, strategic vs. tactical. Choose 3 criteria maximum. More creates a new contradiction.
Step 3 · Forced ranking — 25 minutes
Rules — non-negotiable:
  • No ties allowed
  • No “1A and 1B” tricks
  • Every item gets a unique rank
  • Facilitator enforces: “Someone loses. Who?”
Start at the top: “Which ONE is actually #1?” Continue until all are ranked.
Step 4 · Decision protocol — 10 minutes
Document and publish:
  • Who owns priority ranking (single decision-maker)
  • How often ranking is reviewed (typically monthly)
  • Under what conditions ranking can change mid-cycle
  • Escalation path when stakeholders disagree
Post the ranking visibly. When someone says “but this is critical,” point to the ranking: “It is #7. We will reach it after #1–6 complete.”
What goes wrong and how to avoid it
Allowing ties
“These three are all equally critical.”
→ You have recreated the problem. Force choice — there are no ties.
Re-ranking weekly
Constant re-ranking creates a new Rapid condition.
→ Lock ranking for 2–4 weeks minimum.
Letting stakeholders opt out
“I am not participating in this political exercise.”
→ Absent stakeholders forfeit input. Their priorities go to the bottom.
Elaborate decision criteria
Seven factors, weighted and scored.
→ Creates analysis paralysis. Three criteria maximum.
What the signals tell you
Clarity is working
  • Teams stop asking “which is more important?”
  • Fewer mid-cycle priority changes
  • Stakeholder complaints shift to “why is this ranked #8?”
  • ThroughFlow improving — teams finishing top priorities
Requires calibration
  • ThroughFlow up, Human Pulse declining → Ranking too rigid. Add one emergency slot per sprint.
  • Stakeholder conflict increasing → Ranking not reflecting reality. Review criteria with stakeholders.
  • Teams ignoring ranking → Decision-maker lacks authority. Executive sponsor must enforce consequences.
If signals do not improve after 2 weeks
  • Check diagnosis — may be Insecure (trust deficit), not Contradictory
  • Verify decision-maker has actual authority to hold the ranking
  • Ensure ranking is based on strategy, not political horse-trading
The protocol is here. The diagnostic depth — combination patterns, look-alike conditions, when this lever fails — is in Thriving in Turbulence.
Appendix C2 · Intervention Playbook
Learning Lever
Matched to Complex conditions
Ongoing, review monthly · 8–12 weeks to see improvement · Prediction accuracy target: 40–60% improvement
Use this playbook when you see
  • Three experts produce three different root causes for the same incident
  • System behaviour is unpredictable despite expertise
  • Interventions trigger unexpected consequences
  • No individual comprehends full system interactions
  • Retrospective clarity but prospective uncertainty
  • More analysis produces more theories, not convergence
Why Learning works

Complex conditions arise when system behaviour emerges from interactions nobody fully understands. Learning does not eliminate complexity — it builds collective intelligence through systematic exploration. Teams can navigate unpredictability when they have shared mental models built through disciplined experimentation rather than individual expertise.

Ongoing · Entire team + technical leadership · Team lead enforcing time protection
Step 1 · Establish protected time — Week 1
Allocate 20% of capacity explicitly for exploration. Block the calendar — Friday afternoons are typical. This time is non-negotiable and is not borrowed for delivery pressure. Track usage weekly: what did we learn?
  • 10-person team → 10 person-days weekly for exploration (one day per person)
  • 5-person team → 5 person-days weekly for exploration
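As a sanity check on the allocation arithmetic, the 20% rule reduces to one day per person per week. A minimal sketch (the function name is illustrative, and it assumes a five-day working week):

```python
def exploration_person_days(team_size, days_per_week=5, fraction=0.2):
    """Person-days per week reserved for exploration under a fixed allocation.

    Illustrative only: assumes every team member works days_per_week days.
    """
    return team_size * days_per_week * fraction
```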
Step 2 · Design probe-sense-respond cycles — Weeks 1–2
Identify the highest-uncertainty area of the system. Design a small, safe-to-fail experiment. Run in protected time. Document what the system actually did versus prediction.
Example probe: “We think the caching layer causes intermittent failures. Probe: deliberately stress the cache with 3× load in the test environment. Observe what breaks. Document the actual failure mode.”
Step 3 · Shared mental model building — weekly
30-minute weekly session:
  • What did this week’s probes teach us?
  • How does this change our understanding?
  • Update system map collaboratively
  • Identify next highest-uncertainty area
Use visual mapping: whiteboard or Miro showing system components, interactions, understood versus unknown areas.
Step 4 · Knowledge capture — bi-weekly
Document validated learnings in a searchable wiki. Include: what we thought, what we tested, what we learned. Tag by system area for future reference. Share learnings across teams.
What goes wrong and how to avoid it
Borrowing exploration time for delivery
“Just this sprint, we’ll skip learning time to hit the deadline.”
→ Complex conditions worsen without learning. Protect time religiously.
Exploration without structure
Random investigation without probe design.
→ Creates busy work, not validated knowledge. Use probe-sense-respond cycles.
Individual learning only
Each person explores alone with no knowledge sharing.
→ Complexity requires collective intelligence. Build shared models.
Analysis instead of experimentation
Three-week analysis document instead of a two-day probe.
→ Complex systems cannot be analysed, only tested. Experiment beats analysis.
What the signals tell you
Learning is working
  • Expert disagreements declining — converging mental models
  • Prediction accuracy improving — fewer surprises
  • Incident response time decreasing — faster diagnosis
  • Team confidence increasing: “we understand this now”
Requires calibration
  • Exploration time used but understanding not improving → Probes too small or unfocused. Design bigger experiments targeting core uncertainties.
  • Team resistance to protected time → Make learning outcomes visible: “last month’s exploration prevented this week’s incident.”
  • Learning not spreading across team → Strengthen weekly sharing sessions and improve documentation.
If signals do not improve after 8 weeks
  • Check diagnosis — may be Rapid (no capacity), not Complex
  • Verify exploration is testing core uncertainties, not peripheral ones
  • Ensure leadership protects time from delivery pressure
The protocol is here. The diagnostic depth — combination patterns, look-alike conditions, when this lever fails — is in Thriving in Turbulence.
Appendix C3 · Intervention Playbook
Empathy Lever
Matched to Insecure conditions
16 weeks minimum · Trust builds slowly — do not rush · Human Pulse safety target: 2.8 → 3.4
Use this playbook when you see
  • Best people going quiet in meetings
  • Problems hidden until they become a crisis
  • Retrospectives produce only safe, surface-level action items
  • Skip-level reveals concerns never raised directly
  • Blame culture when things go wrong
  • CYA behaviour — defensive documentation and paper trails
Why Empathy works

Insecure conditions emerge when trust deficits make psychological safety impossible. Empathy does not eliminate past harm — it rebuilds safety through demonstrated consistency over months. Teams can collaborate honestly when they have evidence that vulnerability is genuinely safe, not punished. Process solutions (more meetings, transparency dashboards) make this worse, not better.

16 weeks minimum · Leadership + entire team · Senior leader modelling vulnerability
Step 1 · Leadership vulnerability first — Weeks 1–4
Leaders model safety before asking the team to take risks. Send a weekly “what I learned” written communication including: a real failure from this week (not sanitised), a current uncertainty (“I don’t know” stated explicitly), and a request for team input on a genuine problem.
Example: “This week I learned my assumption about the SwiftShip vendor was wrong. I rushed migration for top 20 clients — bad decision causing 48-hour shipping delay. I own that, and apologise to everyone managing the fallout. This leaves me uncertain about the Q4 approach. Given the pressure, what would you do?”
Step 2 · Blameless post-mortems consistently — Weeks 2–16
A single blame incident destroys months of trust-building. Post-mortem format:
  • Timeline: what occurred, when — no blame language
  • Contributing factors: system conditions enabling the incident
  • Learnings: what this teaches about system reality
  • Actions: system changes, not people changes
Critical: if stakeholders demand individual accountability, leadership absorbs that pressure. Never throw a team member under the bus to satisfy a stakeholder.
Step 3 · Small promises kept relentlessly — Weeks 1–16
Fifty kept commitments over six months matter more than grand gestures.
  • Select one small promise: “We respond to every retrospective action item within 5 days”
  • Keep it 100% — zero exceptions for six weeks minimum
  • Only add a second promise after the first is proven for 8+ weeks
  • Track publicly: action items posted, responses documented
Step 4 · Protected honesty spaces — Weeks 4–16
Explicit forums for truth-telling with responsive action. Weekly 30-minute “what’s really happening” session: three questions only (What’s hard? What’s unclear? What should change?). No immediate solutions — just listen and capture. 24-hour response commitment: what we will address, what we won’t, and why.
What goes wrong and how to avoid it
Requesting team vulnerability before demonstrating safety
“Everyone share your failures in this retrospective.”
→ Trust cannot be mandated. The leader models first; the team follows when safe.
Single blame incident during trust rebuild
One person thrown under the bus to satisfy stakeholders.
→ Destroys months of progress instantly. Leadership must absorb pressure.
Big promises, inconsistent follow-through
“We will fix everything” — then nothing changes.
→ Creates cynicism. Small promises kept religiously rebuild trust.
Rushing the timeline
Expecting trust rebuild in 4 weeks.
→ Trust breaks fast, rebuilds slowly. Minimum 16 weeks for measurable improvement.
What the signals tell you
Empathy is working
  • Retrospectives surfacing real issues (not just “better communication”)
  • Questions increasing in meetings — people testing safety
  • Mistakes acknowledged earlier, before they become a crisis
  • Attrition slowing — best people staying
Requires calibration
  • Leadership vulnerability perceived as weakness → Frame as “learning organisation” culture. Get executive sponsorship.
  • Team still withholding after 8 weeks → Investigate: “What happened that made honesty feel unsafe?”
  • Small promises being broken → Reduce promise scope but keep 100% compliance.
If signals do not improve after 16 weeks
  • Check diagnosis — may be Contradictory (structural impossibility), not Insecure
  • Verify no recent trust breaches — one incident resets the entire timeline
  • Ensure leadership is actually modelling vulnerability, not performing it
The protocol is here. The diagnostic depth — combination patterns, look-alike conditions, when this lever fails — is in Thriving in Turbulence.
Appendix C4 · Intervention Playbook
Agility Lever
Matched to Anxious conditions
12 weeks to measurable improvement · Executive sponsor required · Decision velocity target: 10 wks → 2 days
Use this playbook when you see
  • Risk committees blocking everything — zero approvals over months
  • Decision timelines lengthening continuously year on year
  • Only guaranteed-safe choices being made
  • Innovation proposals dying in perpetual review
  • Failure narratives that are blame-focused, not learning-focused
  • Best people leaving, citing lack of autonomy
Why Agility works

Anxious conditions emerge when organisational fear pervades decision-making, making every choice feel existentially threatening. Agility does not eliminate risk — it demonstrates safety through evidence. Teams act confidently when they have proof that small failures are survivable and learning is valued over blame. Controls and more review gates make this worse.

12 weeks · Decision-makers + risk-averse stakeholders + team · Executive sponsor enforcing boundary discipline
Step 1 · Define explicit risk boundaries — Week 1
Make “safe to fail” concrete through explicit thresholds. Document in a one-page reference. Post visibly.
  • Tier 1 (reversible): <£5K impact, <100 users, <1 day rollback → Team decides
  • Tier 2 (recoverable): <£50K impact, <1,000 users, <1 week recovery → Manager approves
  • Tier 3 (significant): >£50K impact, >1,000 users, >1 week recovery → Executive review
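The tier thresholds above amount to a simple classification rule, checked strictest tier first. A minimal sketch (the function and parameter names are hypothetical, not part of the playbook):

```python
def risk_tier(impact_gbp, users_affected, recovery_days):
    """Classify a proposed decision using the playbook's tier thresholds."""
    if impact_gbp < 5_000 and users_affected < 100 and recovery_days < 1:
        return 1  # reversible: team decides
    if impact_gbp < 50_000 and users_affected < 1_000 and recovery_days < 7:
        return 2  # recoverable: manager approves
    return 3      # significant: executive review
```

A decision must clear every threshold of a tier to land in it — failing any one pushes it down to the next tier.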
Step 2 · Time-box all Tier 1 decisions — Weeks 1–12
Deadlines force action despite uncertainty.
  • All Tier 1 decisions have a 48-hour maximum review time
  • If not rejected within 48 hours, automatically approved
  • Rejection requires written justification
  • Track decision velocity: average days from proposal to decision
Start with one Tier 1 decision per week. Build confidence through successful small experiments.
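The 48-hour rule can be made mechanical: silence past the deadline counts as approval. A sketch of the stated rules (the names and status strings are illustrative):

```python
from datetime import datetime, timedelta

REVIEW_WINDOW = timedelta(hours=48)

def tier1_status(proposed_at, rejected_at=None, now=None):
    """Tier 1 decisions auto-approve unless rejected (with written
    justification) inside the 48-hour review window."""
    now = now or datetime.now()
    if rejected_at is not None and rejected_at - proposed_at <= REVIEW_WINDOW:
        return "rejected"
    if now - proposed_at >= REVIEW_WINDOW:
        return "approved"  # silence counts as approval
    return "in review"
```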
Step 3 · Reframe failure narratives — Weeks 2–12
When a Tier 1 experiment fails: 30-minute post-mortem within 24 hours. Focus: What did we learn? What would we do differently? Explicitly NOT: Whose fault? Who should be punished? Document and share the learning with the broader organisation.
Bad: “Marketing’s campaign failed because they didn’t do proper research.”
Good: “We learned our messaging assumptions for the 25–34 demographic were wrong. Next experiment: test revised messaging with a 50-person cohort before broader launch.”
Step 4 · Make small wins visible — Weeks 4–12
Evidence reduces organisational anxiety. Weekly wins communication: every successful Tier 1 decision documented, share what was tried and what happened, celebrate successful failures (“we learned X cheaply”). Track accumulating confidence: “23 experiments, 18 successes, 5 valuable learnings.”
What goes wrong and how to avoid it
Risk boundaries too conservative
Everything still requires executive review.
→ Recreates paralysis. Push more decisions to Tier 1.
Time-boxes without enforcement
48-hour deadline suggested, not enforced.
→ Executive sponsor must enforce consequences when deadlines are missed.
Punishing Tier 1 failures
“We gave you autonomy and you failed.”
→ Teaches fear instantly. Tier 1 failures must be genuinely consequence-free.
Wins not visible enough
Successes mentioned in the team meeting but not broadcast widely.
→ Organisational anxiety does not reduce. Broadcast wins across the organisation.
What the signals tell you
Agility is working
  • Decision velocity increasing — 10 weeks to 2 days is typical
  • Innovation proposals increasing — team confidence growing
  • Experiment rate accelerating — fear reducing
  • Human Pulse confidence improving (2.7 → 3.4 typical)
Requires calibration
  • Decision velocity up but quality declining → Boundaries too loose. Tighten Tier 1 criteria.
  • Team not proposing experiments → Fear still too high. Leadership must run the first experiments to model safety.
  • Risk committees rejecting within boundaries → Executive sponsor must intervene with the committee directly.
If signals do not improve after 12 weeks
  • Check diagnosis — may be Complex (genuine uncertainty), not Anxious (disproportionate fear)
  • Verify risk boundaries are actually being enforced
  • Ensure failures are genuinely consequence-free — one punishment resets all progress
The protocol is here. The diagnostic depth — combination patterns, look-alike conditions, when this lever fails — is in Thriving in Turbulence.
Appendix C5 · Intervention Playbook
Resilience Lever
Matched to Rapid conditions
8 weeks to pace restoration · Team + management buy-in required · Human Pulse sustainability target: 2.8 → 3.6
Use this playbook when you see
  • WIP count exceeds team size by 2–3×
  • Context switching constant, nothing finishing
  • Start rate exceeds finish rate continuously
  • Team working harder but accomplishing less
  • Quality shortcuts proliferating — tests skipped, debt deferred
  • Late nights becoming the norm, not the exception
Why Resilience works

Rapid conditions emerge when velocity overwhelms capacity. Resilience does not increase speed — it restores sustainable pace through constraint. Teams paradoxically finish more by starting less, because context switching overhead declines and focus enables completion. The counterintuitive result: slowing starts accelerates delivery.

8 weeks · Team + management agreeing to constraints · Team lead enforcing WIP discipline
Step 1 · Establish WIP limits — Week 1
Constraint forces finishing before starting.
  • WIP limit formula: Team size ÷ 2 = maximum WIP
  • 12-person team → 6 items maximum in progress
  • 8-person team → 4 items maximum in progress
Post the limit visibly. Cannot start a new item until something finishes. No exceptions without an explicit executive override — track every override. Expected resistance: “Everything’s urgent, can’t wait.” Response: “The limit reveals we are starting faster than finishing. Constraint forces prioritisation.”
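The formula and the start rule are easy to mechanise. A minimal sketch (names are illustrative; the floor of 1 for tiny teams is an assumption, not from the playbook):

```python
def wip_limit(team_size):
    """Maximum items in progress: team size divided by two, rounded down."""
    return max(1, team_size // 2)

def may_start_new_item(items_in_progress, team_size):
    """New work starts only when the board is below the limit —
    otherwise something must finish first."""
    return items_in_progress < wip_limit(team_size)
```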
Step 2 · Protect buffer time — Weeks 1–8
20% capacity absorbs variability without breaking commitments.
  • 20% of sprint capacity explicitly reserved — not borrowed for delivery
  • Use for: interrupts, technical debt, learning, unplanned work
  • Track usage: what consumed buffer this week?
If buffer is consistently unused: wrong diagnosis — this may not be Rapid. If buffer is exceeded every week: a genuine capacity conversation is needed.
Step 3 · Implement interrupt protocols — Weeks 2–8
Control work intake to prevent WIP explosion.
  • True emergency: Swap immediately — something drops from the board
  • Important, not urgent: Queue for next sprint
  • Routine: Standard queue
New work replaces existing work — it does not stack. The stakeholder requesting the interrupt bears the cost. Track swap rate: how many interrupts per sprint?
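The routing rule — swap, queue, or standard intake, never stacking — could be sketched as follows (the urgency labels and list-of-items board model are hypothetical simplifications):

```python
def handle_interrupt(urgency, board, new_item):
    """Route incoming work per the interrupt protocol: new work replaces
    existing work, it never stacks on top of it."""
    if urgency == "true_emergency":
        dropped = board.pop()  # something visibly leaves the board (here: the last item, for illustration)
        board.append(new_item)
        return f"swapped in; dropped {dropped!r}"
    if urgency == "important_not_urgent":
        return "queued for next sprint"
    return "standard queue"
```

The point of the sketch is the invariant: an emergency swap leaves the board the same size, so WIP never grows through the side door.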
Step 4 · Establish sustainable velocity agreements — Weeks 1–8
Define the pace that is maintainable indefinitely. Leadership commitment:
  • Expected hours: 40 hours per week typical
  • Weekend work: only genuine emergencies
  • Late nights: treated as a capacity problem, not a solution
  • Will adjust scope if pace is unsustainable — not pressure for overtime
  • Violations treated as planning failures, not motivation problems
What goes wrong and how to avoid it
WIP limits with exceptions
“Just this once, we’ll exceed the limit for the critical deadline.”
→ Constraint loses credibility. Enforce strictly or do not implement at all.
Borrowing buffer for delivery
“Use buffer capacity to hit sprint commitment.”
→ No capacity for variability. First interrupt breaks the system. Protect buffer as sacred.
Velocity agreement without enforcement
Agreement documented, but violations tolerated.
→ Team learns commitments are performative. Leadership must enforce consequences.
Adding people instead of protecting capacity
“We need more headcount.”
→ Onboarding worsens Rapid conditions short-term. Fix WIP first, then assess headcount.
What the signals tell you
Resilience is working
  • WIP declining — team respecting limits
  • Finished items increasing — focus enables completion
  • Context switching declining — less fragmentation reported
  • Human Pulse sustainability improving (2.8 → 3.6 typical)
  • Late nights eliminated — pace sustainable again
Requires calibration
  • WIP limits respected but throughput not improving → May also be Complex. Add Learning lever: use buffer time for system understanding.
  • Buffer exceeded every week → Capacity genuinely insufficient. Scope reduction or headcount conversation needed.
  • Team gaming WIP limits — splitting work artificially → Redefine what counts as WIP: must be independently deliverable value.
If signals do not improve after 8 weeks
  • Check diagnosis — may be Complex or Contradictory, not Rapid
  • Verify WIP limits are actually being enforced — count the exceptions
  • Ensure leadership is not pressuring for overtime despite the agreement
The protocol is here. The diagnostic depth — combination patterns, look-alike conditions, when this lever fails — is in Thriving in Turbulence.
CLEAR lever → CIRCA condition → First move → Timeline
  • Clarity → Contradictory → Force-rank priorities 1–N, no ties. Post visibly. Hold boundary. → 2–4 weeks
  • Learning → Complex → Protect 20% for exploration. Probe-sense-respond. Build shared mental models. → 8–12 weeks
  • Empathy → Insecure → Leadership vulnerability first. Blameless post-mortems. Small promises kept. → 16+ weeks
  • Agility → Anxious → Define risk tiers. Time-box Tier 1 decisions. Make small wins visible. → 12 weeks
  • Resilience → Rapid → WIP limit = team ÷ 2. Finish before starting. Protect 20% buffer. → 8 weeks
From Thriving in Turbulence · Neil Walker · neilwalker.net Appendix C1–C5 · A2 Pattern Recognition · Full toolkit