AI Leadership or AI Illusion? Are Organisations Chasing Innovation… or Falling for the AiLlusion?

Be Brave Enough to Ask the AI Questions That Matter

When Commonwealth Bank of Australia (CBA) recently announced its “Australia-first” AI plan, it positioned itself as stepping boldly into the future of banking, technology, and responsible innovation.

And in many ways, that deserves recognition.

Because leadership is not just about participating in change — it is about being willing to stand visibly within it.

But as Cyber Daily provocatively asked:

“CBA announces ‘Australia-first’ AI plan… but has it learnt from its mistakes?”

That is a brave question.

And it is exactly the kind of question leaders must be willing to ask — not just about CBA, but about themselves, their organisations, and their relationship with artificial intelligence.

Because AI plans are no longer simply technology strategies.

They are leadership declarations.

And leadership declarations invite leadership conversations.

So perhaps the real question isn’t whether CBA has learnt from past challenges.

Perhaps the real question is:

How do organisations know when they have truly challenged themselves deeply enough before embedding AI into the core of how they operate?

The Quiet Pressure Driving AI Adoption

Across banking, financial services, and most industries today, AI is arriving with extraordinary momentum.

Customers expect digital convenience.
Markets expect innovation.
Competitors are announcing transformation programs.
Regulators are watching closely.
Investors are signalling future-readiness expectations.

Under this pressure, many organisations find themselves adopting AI for a reason rarely spoken out loud:

To avoid falling behind.

There is nothing inherently wrong with wanting to remain competitive. That instinct is one of the drivers of progress.

But AI is not simply another operational tool.

AI is an organisational amplifier.

It accelerates strengths.
It exposes weaknesses.
It magnifies culture, governance, decision-making, and leadership behaviour — the good, and especially the poor.

Many plans assume issues can be fixed during implementation. Experience consistently shows this is rarely the case.

In reality, organisations risk not just adopting artificial intelligence… but falling for the AiLlusion — the belief that technology alone can deliver transformation without leadership, identity alignment, and behavioural readiness.

And when AI adoption is driven primarily by momentum rather than deep internal reflection, organisations risk implementing technology faster than they are behaviourally prepared to absorb it.

CBA as a Leadership Lens — Not a Target

CBA has positioned its AI strategy within its stated purpose:

“Building a brighter future for all.”

It has outlined governance frameworks, responsible AI principles, and partnerships with global technology leaders, research institutions, and academic organisations. It has emphasised fairness, accountability, transparency, and human oversight.

These are important commitments.

But publishing principles is not where responsible AI is proven.

Responsible AI is proven in how those principles behave under pressure.

CBA’s journey — including previous workforce decisions linked to automation that were later reversed — does not make it unique.

If anything, it makes it human.

And it highlights a challenge every organisation navigating AI will eventually face:

When AI meets reality, what happens next?

The Leadership Mirror AI Quietly Holds Up

Artificial intelligence does not create organisational behaviour.

It reveals it.

It reveals whether:

  • Values are operational or aspirational
  • Governance is behavioural — or merely procedural, a tick-box exercise
  • Leadership is reflective or reactive
  • Stakeholder voices are genuinely heard — or only heard when amplified publicly

This is not criticism of CBA.

This is the leadership mirror AI holds up to every organisation stepping into transformation.

And it raises a deeper question worth asking:

How far have organisations truly challenged their identity — and the alignment and lived reality of that identity — before embedding AI into their operating model?

Borrowed Thinking vs Internal Ownership

CBA’s AI framework aligns with globally recognised responsible AI principles. That alignment reflects good practice and is expected in a complex regulatory environment.

Like most organisations adopting AI, CBA collaborates with global technology providers, academic institutions, research bodies, and infrastructure partners. That ecosystem is essential. AI cannot be built in isolation.

But there is a leadership risk that quietly sits inside external collaboration.

Not simply the risk of external influence — because many globally recognised providers are still navigating AI maturity themselves.

The deeper risk is external frameworks shaping organisational thinking without being deeply interrogated through the lens of organisational identity.

Responsible AI frameworks are designed to be broad, transferable, and cross-industry.

Organisational identity is not.

Every organisation carries:

  • Historical behavioural patterns
  • Cultural decision instincts
  • Leadership risk appetites
  • Trust dynamics built over decades

The question is not whether organisations adopt external AI thinking.

The question is whether they internalise and challenge it deeply enough to ensure it aligns with who they are — not just what the industry says they should do.

Because when AI outcomes become complex, accountability never returns to vendors, consultants, or technology partners.

It always returns to leadership.

And that leadership must act as custodian of organisational identity — not individual or external agendas.

The Environmental Conversation We Are Still Avoiding

CBA’s public AI disclosures acknowledge environmental considerations, including renewable energy procurement and efforts to understand the climate impact of large-scale computing infrastructure.

That is encouraging.

But it also surfaces broader questions that many organisations are still reluctant to confront.

AI is resource intensive.

It consumes significant energy, water, and infrastructure capacity — often through global cloud environments outside direct organisational control. It also raises questions about lifecycle management, hardware disposal, and the expanding footprint of AI server infrastructure.

AI data centres frequently carry a larger environmental footprint than traditional computing environments.

So the question becomes:

Are organisations asking the hard environmental questions early enough — or only after expansion has already occurred?

If AI is positioned as building a brighter future, organisations must consider not only digital efficiency, but environmental stewardship.

Responsible intelligence cannot be separated from sustainable intelligence.

And where sustainability commitments exist, leaders must ask:

Have the environmental realities of AI been fully integrated into those commitments… or quietly minimised in the rush to innovate?

Learning While Still in Infancy

One truth must be acknowledged with humility.

Most organisations — including highly sophisticated ones — are still in the early stages of AI maturity.

There is no final blueprint.

Mistakes will occur.
Assumptions will be challenged.
Outcomes will evolve.

The question is not whether organisations will get everything right.

The question is:

How responsibly will they learn while they are still learning?

Responsible leadership during AI infancy requires:

  • Deep internal challenge before external adoption
  • Transparency when plans meet unexpected outcomes
  • Protection of people and trust during experimentation
  • Clear and visible accountability ownership — no outsourcing accountability or passing the buck
  • Monitoring behavioural signals — not just performance metrics

Because the lower the maturity, the higher the duty of care.

Reflection. Rethinking. Accountability.

If organisations truly want to lead in AI, three leadership disciplines must sit at the centre of every implementation.

Reflection
The courage to examine whether AI is revealing organisational truths leaders may not have been ready to confront.

Rethinking
The willingness to evolve governance through a people-centred lens that protects trust, culture, and identity — not just compliance tick boxes.

Accountability
The acceptance that regardless of partnerships, frameworks, or technology platforms, leadership remains responsible for the consequences of AI adoption.

Reflection creates awareness.
Rethinking creates evolution.
Accountability creates integrity.

The Conversation Worth Starting

CBA’s AI strategy provides an opportunity to begin a conversation that many organisations in Australia — and across the world — still urgently need to have.

Not about whether AI should be adopted.

That decision has already been made across most industries.

But about whether organisations are brave enough to ask:

  • Have we truly challenged ourselves before implementing AI?
  • Are we adopting technology that reflects our identity — or reshaping our identity around technology?
  • Are our values designed to survive operational pressure?
  • Will the reality of AI better support our people and clients — or expose our shortfalls?
  • Are we prepared to share what we learn when AI outcomes diverge from expectation?
  • Are we protecting people, trust, and the environment as we accelerate innovation?

Because artificial intelligence will not determine the future of organisations.

Leadership will.

And AI will simply reveal how prepared leadership truly is.

Be brave enough to start the conversation that matters.
Because this one truly does.
