Every leadership team I speak with is asking the same thing: how do we govern AI adoption?
Reasonable question. Wrong starting point.
The Governance Scramble
Boards want policies. Legal wants guardrails. IT wants architecture standards. HR wants training programmes. Everyone wants certainty before they’ll move.
The result? Governance frameworks that look impressive on slides but stall adoption in practice. Policies written for risks that haven’t materialised. Controls designed for tools that have since moved on three versions.
I see this pattern constantly in my Future of Work and Agility practice at BJSS. Organisations treating AI governance as a compliance exercise rather than an adaptive capability.
They’re solving for the wrong problem.
The Real Question
“How do we adopt AI safely?” assumes you know what you’re adopting it for.
Most organisations don’t.
They’re implementing Copilot because competitors are implementing Copilot. They’re building chatbots because the board saw a demo. They’re creating AI strategies because analysts expect one.
Activity without diagnosis.
The better question: What friction are we solving, and is AI the right lever?
What I’m Seeing
In my role leading on organisational agility, I work with teams attempting AI integration. The pattern is remarkably consistent.
Teams that struggle:
- Start with the technology (“Let’s pilot Copilot”)
- Define success as adoption metrics (“80% of staff using it monthly”)
- Govern through prohibition (“Here’s what you can’t do”)
- Treat AI as a project with an end date
Teams that thrive:
- Start with friction diagnosis (“Where does work stall? Why?”)
- Define success as outcome improvement (“Discovery phases completing 60% faster”)
- Govern through principles and learning (“Here’s how we’ll adapt as we learn”)
- Treat AI as a capability requiring continuous calibration
The difference isn’t sophistication. It’s sequence.
The Future of Work Isn’t an AI Question
Here’s what most AI governance frameworks miss.
The future of work question isn’t “How will AI change work?” It’s “What kind of work should humans be doing?”
AI is a lever. Work design is the diagnosis.
When I help organisations think about agility, we don’t start with tools. We start with flow. Where does value get stuck? Where does decision-making stall? Where do talented people spend time on work that doesn’t need their talent?
Then — and only then — we ask: which of these friction points could AI address?
Sometimes the answer is AI. Often it’s simpler: clearer priorities, better decision rights, reduced work-in-progress. Interventions that cost nothing and require no governance framework.
Governance That Actually Works
Effective AI governance isn’t a policy document. It’s an adaptive system.
What that looks like:
Diagnosis before adoption. Map your friction points first. Identify where work stalls, where rework multiplies, where talented people burn time on low-value activity. Then evaluate whether AI addresses root causes or just accelerates existing dysfunction.
Principles over rules. Rules become obsolete the moment the technology updates. Principles adapt. “We don’t input confidential client data into external AI tools” survives GPT-5. “Don’t use ChatGPT” doesn’t survive next quarter.
Learning loops built in. Governance should include how you’ll learn, not just what you’ll prohibit. What signals will you watch? How frequently will you review? Who decides when to adjust?
Experimentation boundaries. Define where teams can experiment freely, where they need approval, and where the answer is no. Make the boundaries clear so people stop asking permission for everything.
The Agility Angle
Organisations that navigate AI adoption well share a characteristic: they were already adaptive.
They already had:
- Clear decision rights (so AI policy questions don’t stall in committees)
- Tolerance for experimentation (so pilots can run without eighteen months of business cases)
- Learning rhythms (so governance evolves as capability matures)
- Outcome focus (so adoption serves results, not activity metrics)
AI doesn’t create organisational agility. It reveals whether you have it.
If your organisation struggles to make decisions, AI governance will expose that. If your organisation fears experimentation, AI adoption will stall. If your organisation measures activity over outcomes, you’ll have impressive usage dashboards and no impact.
The work isn’t AI governance. The work is building adaptive capability. AI is just the current test case.
The Question Worth Asking
Before your next AI governance discussion, try this:
If AI disappeared tomorrow, what friction would remain?
That’s your real problem. AI might solve part of it. Probably not all of it. Possibly none of it.
Diagnose first. Then decide whether AI is the right lever.
I work with organisations navigating the intersection of AI adoption, future of work, and organisational agility. The hardest conversations aren’t about technology — they’re about whether leaders are willing to diagnose the conditions before they prescribe.
What’s your organisation’s biggest friction point right now? And are you sure AI is the answer?