In my previous post, What I Learnt Speaking at Global Azure ANZ 2026, I mentioned a short conversation that stayed with me long after the event ended.

A university student told me that in 2023 they had been actively discouraged from using AI during study.

Minutes later, another student casually described using AI across nearly everything they worked on.

Those two conversations happened back-to-back.

And honestly, they captured something important about the current moment better than most keynote presentations do.

The strange thing about the AI transition is not just the speed of the technology itself. It is the speed at which institutions, expectations, and social norms are rewriting themselves around it.

Universities especially seem caught in the middle of an unusually sharp cultural pivot.

Only a short time ago, much of the conversation around generative AI in education focused on fear.

Could students cheat with it?

Would writing skills collapse?

Could assignments still be trusted?

Would AI destroy learning itself?

Many institutions responded the predictable way large institutions do when confronted with uncertainty. Policies tightened. Warnings appeared. Detection tools emerged. Students were told to avoid AI, or to use it very cautiously.

Then reality moved faster than policy.

AI tools became dramatically more useful within a remarkably short window of time. Not just for coding, but for summarising, brainstorming, language support, tutoring, research assistance, organisation, and feedback loops.

At the same time, industry expectations shifted too.

Students entering internships or graduate roles increasingly discovered that AI usage was not only accepted, but quietly expected.

That creates a deeply confusing situation for people who were educated under a completely different set of assumptions only a year or two earlier.

And to be fair, universities are in a difficult position here.

Their job is not simply to optimise productivity. Their job is to help people learn how to think.

Those are not always the same thing.

If a student uses AI to bypass all cognitive struggle, something important is probably lost. Friction matters in learning. Confusion matters too. Some understanding only forms when people wrestle with difficult ideas directly.

At the same time, refusing to engage with AI at all now feels increasingly detached from reality.

This is the uncomfortable middle phase institutions are now navigating.

Too much restriction feels outdated.

Too little structure risks shallow learning.

And underneath all of this sits a harder philosophical question that few people seem fully ready to answer yet:

What exactly should education optimise for in an AI-rich world?

Memorisation?

Judgement?

Creativity?

Verification?

Systems thinking?

Communication?

The answer is probably some combination of all of them, but the balance is changing quickly.

There is also a psychological dimension to this transition that I do not think gets discussed enough.

Many students followed the rules they were given in good faith. They adapted to institutional expectations. Then the expectations changed abruptly.

That can create a genuine sense of instability.

Not outrage necessarily. Just disorientation.

The first student I spoke with did not seem anti-AI at all. They simply sounded unsure how to recalibrate after such a rapid shift in guidance.

And honestly, that reaction seems perfectly reasonable.

Humans generally adapt to technological change gradually. AI has not really offered that luxury.

The timeline compressed aggressively.

What once felt futuristic became normal almost overnight.

Even language changed quickly. Terms like prompt engineering, copilots, agents, vibe coding, retrieval-augmented generation, and multimodal AI moved from niche internet discussions into ordinary professional conversations remarkably fast.

Education systems are now trying to catch up while simultaneously helping students prepare for a future that still feels partially undefined.

That is not an easy task.

I suspect the long-term outcome will not be “AI replaces learning” any more than calculators replaced mathematics.

Instead, the emphasis will probably shift.

Less focus on producing first drafts from scratch.

More focus on judgement, synthesis, editing, critical thinking, and asking good questions.

Ironically, AI may end up increasing the value of distinctly human traits rather than reducing them.

Curiosity still matters.

Taste still matters.

Original thinking still matters.

Good judgement may matter more than ever.

But getting from the old educational model to the new one is going to feel messy for a while. The conversations in Wellington at Global Azure simply made that transition visible in a very human way.

The people adapting most comfortably are not necessarily the ones blindly embracing AI, nor the ones rejecting it entirely.

They seem to be the people learning how to collaborate with it thoughtfully while still retaining ownership of their own thinking.

That is probably the real skill being developed right now.

In the next post, I want to explore another divide that surfaced repeatedly during the event: why some developers feel deeply energised by AI-assisted coding while others feel strangely disconnected from it.


Written for KiwiGPT.co.nz — Generated, Published and Tinkered with AI by a Kiwi