AI Unlearning

When we talk about artificial intelligence, we often think about how well it remembers — data, facts, styles, even our writing tone. But what happens when we need it to forget? That’s the tricky challenge of AI unlearning, and a new paper — Distribution Preference Optimization: A Fine-grained Perspective for LLM Unlearning — offers a smart new angle on how to do it.

Why forgetting matters

As AI models grow larger, they absorb massive amounts of information — some of it private, copyrighted, or simply outdated. From user data that should have been deleted to training examples that contain bias or sensitive facts, keeping everything isn’t always safe or ethical. ...

October 24, 2025 · 4 min

Quick post: Transformer Explainer

Relatively short post today. I came across this amazing tool called Transformer Explainer. It lets you see how a Transformer model (like GPT-2) actually works, layer by layer, right in your browser. You can watch tokens flow through embeddings, self-attention, and MLPs, and even play around with parameters like temperature and top-k to see how text generation changes. Check it out here: https://poloclub.github.io/transformer-explainer/

Written for KiwiGPT.co.nz — Generated, Published and Tinkered with AI by a Kiwi ...
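If you're curious what those temperature and top-k knobs actually do under the hood, here's a minimal Python sketch of a single sampling step. The function name and toy logits are illustrative only, not part of Transformer Explainer's code: lower temperature sharpens the distribution, higher flattens it, and top-k discards everything outside the k highest-scoring tokens before sampling.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=5):
    """Pick one token index from raw logits using temperature + top-k sampling."""
    # Scale logits by temperature: <1 sharpens the distribution toward the
    # argmax, >1 flattens it toward uniform.
    scaled = [value / temperature for value in logits]
    # Keep only the indices of the k highest-scoring tokens.
    top = sorted(range(len(scaled)), key=lambda i: scaled[i], reverse=True)[:top_k]
    # Softmax over the surviving logits (subtract the max for stability).
    peak = max(scaled[i] for i in top)
    exps = [math.exp(scaled[i] - peak) for i in top]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token index from the truncated distribution.
    return random.choices(top, weights=probs, k=1)[0]
```

With `top_k=1` this degenerates to greedy decoding (always the argmax), which is a handy sanity check when playing with the sliders in the browser tool.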

August 26, 2025 · 1 min