2025 wasn’t the year AI got smarter
It was the year our thinking got exposed.
By the end of 2025, most conversations had moved past:
“Can AI do this?”
To harder questions:
“Should we trust this?”
“Who owns this decision?”
“What happens if this is wrong?”
What surprised me wasn’t what AI could do.
It was what it revealed about how people think when AI is in the room.
Lesson 1: AI didn’t reduce thinking; it shifted where thinking stopped
One pattern showed up again and again.
People would ask for a review of a large document
sometimes tens of pages long
clearly produced through a dozen or more prompts.
And the expectation was often:
“Can you just sanity check this?”
But the real issue wasn’t grammar or structure.
It was that thinking had stopped early
outsourced to the model
and the document now looked complete.
AI didn’t remove effort.
It removed the natural pause where humans usually ask:
“Does this actually make sense?”
Lesson 2: AI confidently fills gaps even when it shouldn’t
Another common experience:
People reluctant to provide even a broad impact assessment
because they “didn’t have enough detail”…
while being completely comfortable presenting a full AI-generated solution
based on nothing more than a conversation.
In one case, the solution was confidently wrong end to end.
The model filled in gaps that humans normally flag as unknowns.
AI is very good at sounding complete.
It is not good at knowing when it should stop.
That judgement is still ours.
Lesson 3: Feedback loops started eating themselves
By late 2025, I started seeing something new.
AI-written documents
being reviewed with
AI-generated feedback.
Comments on clarity, tone, and structure,
all produced by the same class of models.
At that point, humans weren’t really reviewing anymore.
They were approving the conversation between machines.
Efficiency went up.
Understanding didn’t.
Lesson 4: Confidence became more dangerous than ignorance
One of the most uncomfortable lessons:
People who were already confident
became more confident
even on topics they were clearly wrong about.
AI reinforced existing beliefs extremely well.
If you start with a flawed assumption,
AI will happily help you defend it
with better language, better structure, and more confidence.
The danger wasn’t a lack of knowledge.
It was confidence without correction.
Lesson 5: “AI said so” became a hiding place
The most worrying pattern of all:
People ducking responsibility behind phrases like:
- “That’s what AI suggested”
- “The model recommended this”
- “The output said…”
AI became a shield.
But AI doesn’t own outcomes.
People do.
Delegating thinking is easy.
Delegating responsibility is not acceptable.
What 2025 ultimately taught me
AI didn’t replace thinking in 2025.
It amplified whatever was already there:
- Clear thinkers got faster
- Messy thinkers got louder
- Weak assumptions scaled further
Which led me to a simple realisation:
AI accelerates thinking. It does not improve it.
Improving thinking is still our job.
The real skill gap wasn’t prompt engineering.
It was:
- Judgement
- Sense-making
- Willingness to pause
- Willingness to say “I don’t know yet”
Looking ahead to 2026
If 2024 was experimentation
and 2025 was acceleration,
then 2026 needs to be the year of intent.
Not less AI.
But more responsibility alongside it.
Because AI will keep accelerating.
Which means thinking must become deliberate again.
KiwiGPT exists to explore AI without losing the human parts that still matter most.