The ‘Rinse & Repeat’ Pattern We Pretend Not to See
Another headline.
Another internal memo.
Another urgent directive telling staff to stop uploading confidential data into AI tools.
This week, the Australian Financial Review reported that Deloitte had instructed staff to cease uploading confidential client information into ChatGPT following internal data incidents.
By now, this shouldn’t surprise anyone.
We have seen this pattern before — not just in AI, but across waves of technological acceleration. Innovation is promoted. Capability is celebrated. Efficiency expectations quietly rise. And then, when exposure surfaces, the brakes are applied.
But this isn’t an AI governance failure.
It’s a failure of leadership imagination.
When organisations accelerate technology without deeply imagining its vulnerabilities, its limitations, and its known exposures, and without considering how incentives, pressure, and human behaviour will intersect with it, exposure isn't accidental.
It’s inevitable.
And when the headline hits and panic sets in, the correction comes quickly.
It is employees and clients who feel the tremor first.
The real question isn’t whether AI introduces risk.
Of course it does. Many of the limitations and exposures now making headlines are not new.
The real question is whether leadership has fully imagined the consequences of how AI is being deployed — and whether accountability, when things go wrong, is structural or theatrical.
Because if the same behaviour repeats after the apology,
if incentives remain untouched after the penalty or fine,
if the same firms are reappointed after the “lessons learned”…
Then we are not witnessing accountability.
We are witnessing its illusion.
Its theatre.
The Incentive Distortion No One Wants to Name
Here is the part we rarely say out loud.
Organisations introduce AI with enthusiasm — sometimes with fanfare.
They encourage experimentation.
They celebrate early adopters.
They signal urgency.
They quietly lift performance expectations.
They embed “AI-enabled efficiency” into KPIs.
In some cases, headcount is reduced in anticipation of AI-driven efficiency.
Do more.
Do it faster.
Do it with fewer resources.
So people adapt.
Not because they are reckless.
But because they are responding to the architecture placed around them.
Powerful tools are made accessible.
Performance pressure increases.
Human buffers shrink.
Then — predictably — someone pastes something they shouldn’t into a public AI tool.
Was the boundary clear?
Was it stress-tested?
Or was it implied and assumed?
Under pressure, people default to what helps them meet expectations — and to the practices they see normalised around them.
Then suddenly:
Stop.
Policy breach.
Security risk.
Rein it in.
And here’s the uncomfortable truth:
The KPIs don’t move back.
The productivity expectations don’t soften.
The headcount reductions don’t reverse.
Only the tolerance disappears.
That’s organisational whiplash.
And the employees who were encouraged to run are now exposed for sprinting too fast.
This is not a rogue employee problem.
This is incentive architecture meeting immature governance.
Leaders design incentives.
And risk starts and ends with people.
When Consequence Doesn’t Change Behaviour
Now we widen the lens.
Large consulting firms position themselves as transformation leaders.
Governance advisors.
AI pioneers.
Risk specialists.
Yet patterns repeat:
Overreach.
Exposure.
Penalty, fine, or partial refund.
Leadership change.
Reappointment of the firm.
If behaviour repeats, accountability did not occur.
Accountability is not a press release.
It is not a fine or refund.
It is not a reshuffle.
Accountability requires structural response to exposure.
If contracts, expectations, and commercial relationships remain intact without reform, boards and executives must ask themselves an uncomfortable question:
Are we protecting standards — or protecting comfort and continuity?
Because if clients continue to reappoint firms after repeated governance failures, they are no longer passive observers.
They are participants in the cycle.
Legacy relationships feel safer than disruption.
Continuity feels less risky than escalation.
Replacing incumbents feels politically costly.
But tolerating rinse-and-repeat exposure is not leadership.
It is avoidance dressed as pragmatism.
AI Is Not the Problem
AI is not unethical.
AI is not reckless.
AI is not the villain in this story, unless it is misused or weaponised.
AI is an accelerator.
It is a lens.
It accelerates productivity.
It accelerates innovation.
It accelerates exposure.
It exposes fragile systems.
It exposes cultural cracks.
It exposes weak consequence architecture.
And when leadership maturity does not accelerate at the same pace, those cracks widen faster.
This is what the AiLlusion looks like at its next layer.
Not just the illusion of innovation.
The illusion of accountability.
The illusion of leadership.
The belief that because there was noise, there was consequence.
Because there was a fine, there was reform.
Because there was a resignation, there was structural change.
If the same pressures remain,
if the same systems remain,
if the same incentives remain,
if the same behaviours remain…
Nothing has changed.
Except the headline cycle.
A Warning — Not a Rebuke
Boards and executives cannot afford to be sheep in the AI narrative.
Following the herd toward acceleration is easy.
Questioning the herd takes courage.
True leadership imagination asks:
How will this tool actually be used under pressure?
Where will shortcuts predictably emerge?
How will KPIs distort behaviour?
What exposures are foreseeable — not hypothetical?
And what structural changes are we prepared to make if things go wrong?
If those questions are not being deeply interrogated, then AI is setting the pace — and leadership is following.
Velocity is not vision.
Acceleration is not strategy.
And consequence without reform is not accountability.
It is theatre.
The next headline will not be shocking.
It has become predictable.
The only real question is whether leadership maturity will finally catch up — or whether we will continue to mistake motion for progress and apology for reform.
Innovation without imagination is risky.
Innovation without accountability becomes systemic risk.
And systems, once exposed, do not quietly fix themselves.
They require leaders willing to do more than issue memos.
They require leaders willing to change incentives.
They require leaders willing to break rinse-and-repeat cycles — even when it is uncomfortable.
That is not anti-AI.
That is pro-leadership.
And in this moment, leadership — not technology — is what’s being tested.
Risk Rebels… what say you?


