One playbook per CLEAR lever, each matched to its CIRCA condition. Each follows the same structure: when to use, step-by-step protocol, common mistakes, and calibration signals. Select a condition below. Use the Pattern Recognition Table (A2) first if you have not yet confirmed your diagnosis.
Contradictory → Clarity. Use this playbook when:
- Every priority labelled “critical”
- Stakeholders pulling in three different directions
- Meeting one goal violates another
- Decisions reversed when stakeholders complain
- Progress theatre — meetings but no decisions
- Best people withdrawing from the problem
Contradictory conditions emerge from legitimate but incompatible stakeholder mandates. Clarity does not eliminate contradiction — it forces explicit choice about which stakeholder wins in which context. Teams can execute when they know the priority order, even if they disagree with it. Paralysis comes from ambiguity, not from the choices themselves.
Force-rank every competing priority:
- No ties allowed
- No “1A and 1B” tricks
- Every item gets a unique rank
- Facilitator enforces: “Someone loses. Who?”
Codify governance around the ranking:
- Who owns priority ranking (single decision-maker)
- How often ranking is reviewed (typically monthly)
- Under what conditions ranking can change mid-cycle
- Escalation path when stakeholders disagree
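The ranking rules are mechanical enough to check in code before each review. A minimal sketch in Python, assuming the ranking lives in a simple name-to-rank mapping; the function name and message wording are illustrative, not part of the playbook:

```python
def validate_ranking(ranking: dict[str, int]) -> list[str]:
    """Check a priority ranking against the forced-ranking rules:
    unique integer ranks, no ties, no gaps."""
    problems = []
    ranks = sorted(ranking.values())
    # No ties: two items sharing a rank is a "1A and 1B" trick in disguise.
    if len(set(ranks)) != len(ranks):
        problems.append("duplicate ranks found: no ties allowed")
    # Ranks must run exactly 1..N, so "everything is critical" cannot survive.
    if ranks != list(range(1, len(ranks) + 1)):
        problems.append("ranks must be exactly 1..N with no gaps")
    return problems

# Example: a ranking that tries to sneak in a tie.
print(validate_ranking({"payments": 1, "onboarding": 2, "reporting": 2}))
```

Running the check before the review meeting turns “Someone loses. Who?” from a facilitation fight into a precondition.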
Signals the lever is working:
- Teams stop asking “which is more important?”
- Fewer mid-cycle priority changes
- Stakeholder complaints shift to “why is this ranked #8?”
- ThroughFlow improving — teams finishing top priorities
If calibration signals diverge:
- ThroughFlow up, Human Pulse declining → Ranking too rigid. Add one emergency slot per sprint.
- Stakeholder conflict increasing → Ranking not reflecting reality. Review criteria with stakeholders.
- Teams ignoring ranking → Decision-maker lacks authority. Executive sponsor must enforce consequences.
If it is still not working:
- Check diagnosis — may be Insecure (trust deficit), not Contradictory
- Verify decision-maker has actual authority to hold the ranking
- Ensure ranking is based on strategy, not political horse-trading
Complex → Learning. Use this playbook when:
- Three experts produce three different root causes for the same incident
- System behaviour is unpredictable despite expertise
- Interventions trigger unexpected consequences
- No individual comprehends full system interactions
- Retrospective clarity but prospective uncertainty
- More analysis produces more theories, not convergence
Complex conditions arise when system behaviour emerges from interactions nobody fully understands. Learning does not eliminate complexity — it builds collective intelligence through systematic exploration. Teams can navigate unpredictability when they have shared mental models built through disciplined experimentation rather than individual expertise.
Protect exploration capacity, scaled to team size:
- 10-person team → 2 person-days weekly for exploration
- 5-person team → 1 person-day weekly for exploration
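Both examples follow the same linear rule. A minimal sketch of the arithmetic, assuming the scaling implied above (one person-day per five team members) holds for other team sizes, which the playbook does not state explicitly:

```python
def weekly_exploration_budget(team_size: int) -> float:
    """Person-days per week protected for exploration, assuming the
    linear scaling implied by the playbook's two examples."""
    return team_size / 5

for size in (5, 8, 10):
    print(f"{size}-person team -> {weekly_exploration_budget(size)} person-days weekly")
```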
Run a weekly learning review:
- What did this week’s probes teach us?
- How does this change our understanding?
- Update system map collaboratively
- Identify next highest-uncertainty area
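One way to keep the review concrete is to give every probe a written record that answers the review questions. A sketch of one possible shape; all field names are illustrative, not prescribed by the playbook:

```python
from dataclasses import dataclass, field

@dataclass
class ProbeRecord:
    """One exploration probe, logged for the weekly learning review."""
    target_uncertainty: str   # what the probe was designed to test
    what_we_learned: str      # "what did this week's probes teach us?"
    map_updates: list[str] = field(default_factory=list)  # edits to the shared system map
    next_uncertainty: str = ""  # highest-uncertainty area to probe next

probe = ProbeRecord(
    target_uncertainty="Cache invalidation drives the p99 latency spikes",
    what_we_learned="Spikes correlate with deploys, not cache churn",
    map_updates=["link deploy pipeline to latency behaviour on the map"],
    next_uncertainty="why deploys evict warm connections",
)
```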
Signals the lever is working:
- Expert disagreements declining — converging mental models
- Prediction accuracy improving — fewer surprises
- Incident response time decreasing — faster diagnosis
- Team confidence increasing: “we understand this now”
If calibration signals diverge:
- Exploration time used but understanding not improving → Probes too small or unfocused. Design bigger experiments targeting core uncertainties.
- Team resistance to protected time → Make learning outcomes visible: “last month’s exploration prevented this week’s incident.”
- Learning not spreading across team → Strengthen weekly sharing sessions and improve documentation.
If it is still not working:
- Check diagnosis — may be Rapid (no capacity), not Complex
- Verify exploration is testing core uncertainties, not peripheral ones
- Ensure leadership protects time from delivery pressure
Insecure → Empathy. Use this playbook when:
- Best people going quiet in meetings
- Problems hidden until they become a crisis
- Retrospectives produce only safe, surface-level action items
- Skip-level reveals concerns never raised directly
- Blame culture when things go wrong
- CYA behaviour — defensive documentation and paper trails
Insecure conditions emerge when trust deficits make psychological safety impossible. Empathy does not eliminate past harm — it rebuilds safety through demonstrated consistency over months. Teams can collaborate honestly when they have evidence that vulnerability is genuinely safe, not punished. Process solutions (more meetings, transparency dashboards) make this worse, not better.
Run blameless postmortems with four fixed sections:
- Timeline: what occurred, when — no blame language
- Contributing factors: system conditions enabling the incident
- Learnings: what this teaches about system reality
- Actions: system changes, not people changes
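The four sections can be enforced with a fixed template so blame language has nowhere to hide. A minimal sketch, with illustrative field names and an intentionally crude people-language check:

```python
from dataclasses import dataclass

@dataclass
class BlamelessPostmortem:
    """Incident write-up in the four fixed sections above."""
    timeline: list[str]              # what occurred, when: events only, no blame language
    contributing_factors: list[str]  # system conditions that enabled the incident
    learnings: list[str]             # what this teaches about system reality
    actions: list[str]               # system changes, never people changes

    def people_change_flags(self) -> list[str]:
        """Flag action items aimed at people rather than the system."""
        blame_phrases = ("should have", "failed to", "be more careful")
        return [a for a in self.actions
                if any(p in a.lower() for p in blame_phrases)]
```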
Rebuild trust through small kept promises:
- Select one small promise: “We respond to every retrospective action item within 5 days”
- Keep it 100% — zero exceptions for six weeks minimum
- Only add a second promise after the first is proven for 8+ weeks
- Track publicly: action items posted, responses documented
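Tracking has to be trivial if compliance must stay at 100%. A minimal sketch for the example promise above, assuming each action item records when it was raised and when it got a response; names are illustrative:

```python
from datetime import date

def broken_promises(items: list[dict], today: date, sla_days: int = 5) -> list[str]:
    """Action items with no response inside the promised window."""
    return [i["title"] for i in items
            if i["responded"] is None and (today - i["raised"]).days > sla_days]

items = [
    {"title": "Flaky deploy step", "raised": date(2024, 3, 1), "responded": date(2024, 3, 4)},
    {"title": "On-call overload",  "raised": date(2024, 3, 1), "responded": None},
]
print(broken_promises(items, today=date(2024, 3, 8)))  # ['On-call overload']
```

Anything this prints is a broken promise, and per the protocol it gets posted publicly, not quietly fixed.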
Signals the lever is working:
- Retrospectives surfacing real issues (not just “better communication”)
- Questions increasing in meetings — people testing safety
- Mistakes acknowledged earlier, before they become a crisis
- Attrition slowing — best people staying
If calibration signals diverge:
- Leadership vulnerability perceived as weakness → Frame as “learning organisation” culture. Get executive sponsorship.
- Team still withholding after 8 weeks → Investigate: “What happened that made honesty feel unsafe?”
- Small promises being broken → Reduce promise scope but keep 100% compliance.
If it is still not working:
- Check diagnosis — may be Contradictory (structural impossibility), not Insecure
- Verify no recent trust breaches — one incident resets the entire timeline
- Ensure leadership is actually modelling vulnerability, not performing it
Anxious → Agility. Use this playbook when:
- Risk committees blocking everything — zero approvals over months
- Decision timelines lengthening continuously year on year
- Only guaranteed-safe choices being made
- Innovation proposals dying in perpetual review
- Failure narratives that are blame-focused, not learning-focused
- Best people leaving, citing lack of autonomy
Anxious conditions emerge when organisational fear pervades decision-making, making every choice feel existentially threatening. Agility does not eliminate risk — it demonstrates safety through evidence. Teams act confidently when they have proof that small failures are survivable and learning is valued over blame. Controls and more review gates make this worse.
Define risk tiers with explicit boundaries:
- Tier 1 (reversible): <£5K impact, <100 users, <1 day rollback → Team decides
- Tier 2 (recoverable): <£50K impact, <1,000 users, <1 week recovery → Manager approves
- Tier 3 (significant): ≥£50K impact, ≥1,000 users, ≥1 week recovery → Executive review
Make approval the default:
- All Tier 1 decisions have a 48-hour maximum review time
- If not rejected within 48 hours, automatically approved
- Rejection requires written justification
- Track decision velocity: average days from proposal to decision
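The 48-hour rule is the piece most worth automating, because it makes the committee a bottleneck only when it acts. A minimal sketch, assuming each proposal carries a submission timestamp; the status strings are illustrative:

```python
from datetime import datetime, timedelta

REVIEW_WINDOW = timedelta(hours=48)

def tier1_status(submitted: datetime, now: datetime,
                 rejection_reason: str | None = None) -> str:
    """Tier 1 proposals auto-approve unless rejected, in writing, within 48 hours."""
    if rejection_reason:
        return f"rejected: {rejection_reason}"
    if now - submitted >= REVIEW_WINDOW:
        return "approved: review window elapsed with no rejection"
    return "in review"

def decision_velocity(days_to_decision: list[float]) -> float:
    """Average days from proposal to decision, the metric to track."""
    return sum(days_to_decision) / len(days_to_decision)

print(tier1_status(datetime(2024, 3, 1, 9, 0), datetime(2024, 3, 4, 9, 0)))
print(decision_velocity([70, 40, 2, 1]))  # 28.25, and it should fall over time
```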
Reframe failure reports as learning statements, not blame narratives. Good: “We learned our messaging assumptions for the 25–34 demographic were wrong. Next experiment: test revised messaging with a 50-person cohort before broader launch.”
Signals the lever is working:
- Decision velocity increasing — 10 weeks to 2 days is typical
- Innovation proposals increasing — team confidence growing
- Experiment rate accelerating — fear reducing
- Human Pulse confidence improving (2.7 → 3.4 typical)
If calibration signals diverge:
- Decision velocity up but quality declining → Boundaries too loose. Tighten Tier 1 criteria.
- Team not proposing experiments → Fear still too high. Leadership must run the first experiments to model safety.
- Risk committees rejecting within boundaries → Executive sponsor must intervene with the committee directly.
If it is still not working:
- Check diagnosis — may be Complex (genuine uncertainty), not Anxious (disproportionate fear)
- Verify risk boundaries are actually being enforced
- Ensure failures are genuinely consequence-free — one punishment resets all progress
Rapid → Resilience. Use this playbook when:
- WIP count exceeds team size by 2–3×
- Context switching constant, nothing finishing
- Start rate exceeds finish rate continuously
- Team working harder but accomplishing less
- Quality shortcuts proliferating — tests skipped, debt deferred
- Late nights becoming the norm, not the exception
Rapid conditions emerge when velocity overwhelms capacity. Resilience does not increase speed — it restores sustainable pace through constraint. Teams paradoxically finish more by starting less, because context switching overhead declines and focus enables completion. The counterintuitive result: slowing starts accelerates delivery.
Cap work in progress:
- WIP limit formula: Team size ÷ 2 = maximum WIP
- 12-person team → 6 items maximum in progress
- 8-person team → 4 items maximum in progress
Reserve a capacity buffer:
- 20% of sprint capacity explicitly reserved — not borrowed for delivery
- Use for: interrupts, technical debt, learning, unplanned work
- Track usage: what consumed buffer this week?
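Both numbers fall out of team size and sprint capacity. A minimal sketch of the arithmetic, matching the examples above:

```python
def wip_limit(team_size: int) -> int:
    """Maximum items in progress: team size divided by two."""
    return team_size // 2

def buffer_person_days(sprint_person_days: float) -> float:
    """20% of sprint capacity held back for interrupts, debt and learning."""
    return sprint_person_days * 0.20

print(wip_limit(12))           # 6 items maximum in progress
print(wip_limit(8))            # 4 items maximum in progress
print(buffer_person_days(50))  # 10.0 person-days reserved, not borrowed for delivery
```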
Triage interrupts instead of absorbing them:
- True emergency: Swap immediately — something drops from the board
- Important, not urgent: Queue for next sprint
- Routine: Standard queue
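The three routes reduce to a small decision function. A minimal sketch; mapping “true emergency” to urgent-and-important is our reading, not the playbook’s wording:

```python
def triage(urgent: bool, important: bool) -> str:
    """Route incoming work without breaching the WIP limit."""
    if urgent and important:
        # True emergency: something visible drops from the board to make room.
        return "swap immediately: pick what drops from the board"
    if important:
        return "queue for next sprint"
    return "standard queue"

print(triage(urgent=True, important=True))
print(triage(urgent=False, important=True))
print(triage(urgent=False, important=False))
```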
Agree a sustainable pace with leadership:
- Expected hours: 40 per week, typical
- Weekend work: only genuine emergencies
- Late nights: treated as a capacity problem, not a solution
- Leadership adjusts scope when pace is unsustainable, rather than pressuring for overtime
- Violations treated as planning failures, not motivation problems
Signals the lever is working:
- WIP declining — team respecting limits
- Finished items increasing — focus enables completion
- Context switching declining — less fragmentation reported
- Human Pulse sustainability improving (2.8 → 3.6 typical)
- Late nights eliminated — pace sustainable again
If calibration signals diverge:
- WIP limits respected but throughput not improving → May also be Complex. Add Learning lever: use buffer time for system understanding.
- Buffer exceeded every week → Capacity genuinely insufficient. Scope reduction or headcount conversation needed.
- Team gaming WIP limits — splitting work artificially → Redefine what counts as WIP: must be independently deliverable value.
If it is still not working:
- Check diagnosis — may be Complex or Contradictory, not Rapid
- Verify WIP limits are actually being enforced — count the exceptions
- Ensure leadership is not pressuring for overtime despite the agreement