Lost in Transmission Experiment - The Tortoise, the Hare, and Linguistic Entropy

Everyone knows The Tortoise and the Hare. But what happens when you hand that story to an AI and ask it to summarise it, again and again and again, twenty times in a row? This experiment is the AI equivalent of the childhood game of Telephone, where a message whispered around a circle mutates into hilarious nonsense by the time it gets back to you. Except here, the whisperers are language models. And the message is Aesop’s tidy moral about patience and pride. ...
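To make the setup concrete, here is a minimal sketch of the loop, assuming a hypothetical summarise() helper that stands in for whichever model call the experiment actually uses; the twenty-round count comes from the description above.

    # Iterated summarisation: each round's summary becomes the next round's input.
    def summarise(text: str) -> str:
        """Placeholder: send `text` to a language model and return its summary."""
        raise NotImplementedError("wire this up to your LLM of choice")

    def whisper_loop(story: str, rounds: int = 20) -> list[str]:
        """Run the loop, keeping every generation so the drift can be inspected."""
        generations = [story]
        for _ in range(rounds):
            generations.append(summarise(generations[-1]))
        return generations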

October 6, 2025 · 5 min

Honesty Is the Best Policy—Even for AI

If you reward fluency over truth, don’t be surprised when your AI speaks nonsense beautifully. That is the sobering lesson from recent work on why large language models (LLMs) hallucinate. The research is clear: hallucinations are not mysterious glitches but the rational outcome of how these systems are trained and evaluated. When the training signal rewards confident answers, models learn to manufacture them, truthful or not.

The problem with beautiful nonsense

The paper Why Language Models Hallucinate makes a blunt claim: hallucinations arise because LLMs are optimised for being useful and fluent, not necessarily correct. In other words, they are rewarded for looking right more than for being right. That incentive structure guarantees some degree of dishonesty, even if the model has no intention in the human sense. ...
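The incentive is easy to make concrete. A toy sketch, my own illustration rather than code from the paper: under a grader that awards one point for a correct answer and zero for anything else, including an honest abstention, guessing always has the higher expected score, so a model optimised against that grader learns to answer confidently whether or not it knows.

    # Toy model of binary grading: 1 point if right, 0 otherwise,
    # with an honest "I don't know" also scoring 0.
    def expected_score(p_correct: float, abstain: bool) -> float:
        """Expected score on one question under binary grading."""
        return 0.0 if abstain else p_correct

    print(expected_score(0.10, abstain=False))  # 0.1: even a wild guess earns something
    print(expected_score(0.10, abstain=True))   # 0.0: honesty earns nothing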

September 13, 2025 · 3 min