What AI Is Teaching Us About Risk, Responsibility, and Real Leadership
“Son, your ego is writing checks your body can’t cash.”
It’s one of the most memorable lines from Top Gun.
Delivered by Commander Stinger to Maverick after a reckless, show-boating manoeuvre — a blunt reminder that confidence without discipline isn’t courage… it’s risk blindness.
And lately, I’ve been wondering:
Are we doing the same thing with AI?
Are we writing checks with our ambition, speed, and hype around artificial intelligence… that our leadership, governance, and values simply can’t cash?
Because here’s the uncomfortable question we need to ask — not in a tech forum, not in a glossy keynote, but in the mirror:
When it comes to AI, are we leading the technology… or is the technology now leading us?
The moment we’re in
AI isn’t just another tool in the kit.
It’s fast.
It’s powerful.
It’s seductive.
And right now, it feels like a runaway train — fuelled by buzz, chaos, and the fear of being left behind.
We’re seeing:
- Speed before safety
- Hype before ethics
- Automation before accountability
And the most telling part?
Some of the most embarrassing missteps aren’t coming from fringe players — they’re coming from the biggest names in the room.
The organisations meant to model responsibility.
The ones meant to lead the way.
Except what they’re showing us isn’t leadership.
It’s ego dressed up as innovation.
When leadership lags, risk shows up fast
This isn’t a theoretical concern.
It’s already happening — publicly and painfully.
We’ve now seen:
- KPMG employees using AI to cheat on ethics exams — the very assessments designed to test integrity and judgment.
- Deloitte reports exposed for containing AI-generated errors and hallucinated analysis — in work clients depend on to make serious decisions.
- Multiple major firms caught using AI in audit and advisory work without proper review, safeguards, or accountability structures.
- The erosion of creative integrity — as AI systems are trained on the work of artists, musicians, and writers without consent, credit, or compensation — reducing creativity to data to be harvested and exploited, not a craft to be respected.
- And recently, a telling case reported by ABC News, where an organisation knowingly used an AI chatbot to pose as a human spokesperson — delivering confident, detailed… and completely fabricated information, while actively concealing the use of AI.
Not delayed.
Not misunderstood.
Invented.
That’s not innovation with a few teething issues.
That’s credibility being outsourced — and trust being gambled.
This isn’t a technology failure. It’s a leadership one.
Let’s be very clear.
AI didn’t decide to cheat on ethics exams.
AI didn’t choose to publish hallucinated analysis.
AI didn’t bypass governance, oversight, or review.
People did.
Or more accurately — leaders created environments where:
- speed mattered more than standards,
- convenience mattered more than conscience,
- and hype mattered more than humility.
This is what happens when tools are adopted faster than principles,
when capability outruns accountability,
and when ego convinces itself that, “We’ll sort the risks later.”
That’s the moment leadership quietly steps out of the cockpit…
Or worse — we hand the controls to an AI autopilot we barely understand.
Where PROTECT changes the conversation
In Unearth’s PROTECT framework, technology sits in the final “T” — Toolkit.
And that’s not accidental.
Tools come last — because tools should serve leadership, not replace it.
We lead.
Tools support.
Not the other way around.
AI, like every powerful tool before it, should:
- enable better decisions — not shortcut responsibility
- support people — not replace judgment
- strengthen trust — not erode it
- reduce risk — not introduce new blind spots
When tools start setting the pace,
integrity and values start falling behind.
And when integrity and values fall behind, risk doesn’t just increase — it mutates.
What’s really being disrespected
What concerns me most isn’t that AI sometimes gets things wrong.
It’s that leaders are handing over credibility, trust, and responsibility to tools they don’t fully understand — and then acting surprised when it backfires.
When major organisations are exposed for using AI irresponsibly, it’s not just embarrassing.
It’s a signal.
A signal that somewhere along the way, leadership stopped respecting:
- the technology and its shortcomings,
- the people it affects,
- and the customers who place their trust in them.
Ego is writing the checks.
And employees, customers, and communities are being asked to cash them.
The leadership choice in front of us
AI is not the enemy.
But neither is it neutral.
Like every powerful tool in history, it will amplify whatever leadership we bring to it — wisdom or recklessness, humility or hubris, responsibility or avoidance.
So the real question isn’t what AI can do.
The real question is:
Are we brave enough to lead it — or will we keep letting ego fly the jet… and call the fallout “collateral damage”?
Because the future won’t be shaped by the fastest movers.
It will be shaped by the leaders who understand that speed without stewardship isn’t progress.
It’s bravado at scale.
And sooner or later, every organisation learns the same lesson:
Ego flies fast.
Leadership pays the crash bill.