When we talk about artificial intelligence, we often think about how well it remembers — data, facts, styles, even our writing tone. But what happens when we need it to forget?

That’s the tricky challenge of AI unlearning, and a new paper — Distribution Preference Optimization: A Fine-grained Perspective for LLM Unlearning — offers a smart new angle on how to do it.

Why forgetting matters

As AI models grow larger, they absorb massive amounts of information — some of it private, copyrighted, or simply outdated. From user data that should have been deleted to training examples that contain bias or sensitive facts, keeping everything isn’t always safe or ethical.

In New Zealand, that ties directly into our Privacy Act 2020, which gives people rights over how their personal information is used and requires organisations not to keep it for longer than it's needed. If AI models can't actually forget, we risk falling short of those obligations.

But forgetting isn’t easy for AI. Once data has been mixed into a model’s parameters, it’s a bit like trying to remove an egg from a baked cake. You can’t just pick it out — you have to adjust the cake so it’s as if the egg was never there.

A gentler way to make models forget

Traditional methods to make AI forget usually involve retraining or “fine-tuning” with special examples — teaching it what not to say. But that’s often slow, expensive, and sometimes heavy-handed — the model forgets too much, losing other useful knowledge.
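
To make that concrete, a common baseline in the unlearning literature is plain gradient ascent on the examples to be forgotten. Here's a minimal PyTorch sketch; the model name and forget text are placeholders, and this illustrates the general recipe rather than any specific paper's method:

  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  model_name = "gpt2"  # placeholder; any causal language model works
  tokenizer = AutoTokenizer.from_pretrained(model_name)
  model = AutoModelForCausalLM.from_pretrained(model_name)
  optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

  # Placeholder forget set: text the model should stop reproducing.
  forget_texts = ["Example sentence the model should no longer produce."]

  model.train()
  for text in forget_texts:
      batch = tokenizer(text, return_tensors="pt")
      outputs = model(**batch, labels=batch["input_ids"])
      # Gradient *ascent*: flipping the sign of the usual training loss
      # makes these continuations less likely. Nothing here stops the
      # update from damaging unrelated knowledge, though.
      (-outputs.loss).backward()
      optimizer.step()
      optimizer.zero_grad()

The sign flip on the loss is the whole trick, and also the weakness: every parameter that helped produce the forget text gets pushed, including ones that encode perfectly useful knowledge.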

The new research introduces something called Distribution Preference Optimization (DiPO) — a fine-grained way to nudge a model’s behaviour without retraining it from scratch.

Instead of giving the model long examples of what it should or shouldn't say, DiPO works at a microscopic level: inside the model's "next word" probabilities. You can think of it like adjusting the tone and volume dials on a radio rather than switching to a different station entirely. It subtly changes the odds that certain words or phrases will appear, effectively helping the model unlearn specific pieces of information.
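
Here's a hypothetical sketch of what a loss at that level can look like: it pushes probability away from specific "forget" tokens while a KL penalty against a frozen reference copy of the model holds the rest of the distribution steady. The function, its beta weight, and the overall form are my illustration of the general idea, not the exact objective from the DiPO paper:

  import torch
  import torch.nn.functional as F

  def distribution_unlearning_loss(logits, ref_logits, forget_token_ids, beta=0.1):
      # logits / ref_logits: [vocab_size] next-token logits from the
      # model being edited and a frozen reference copy of it.
      log_probs = F.log_softmax(logits, dim=-1)
      ref_probs = F.softmax(ref_logits, dim=-1)

      # Term 1: total probability mass on the unwanted tokens (push down).
      forget_term = log_probs[forget_token_ids].exp().sum()

      # Term 2: KL(reference || current) keeps every other "dial" in place.
      kl_term = F.kl_div(log_probs, ref_probs, reduction="sum")

      return forget_term + beta * kl_term

  # Toy usage with random logits over a 50,000-token vocabulary.
  vocab_size = 50_000
  logits = torch.randn(vocab_size, requires_grad=True)
  ref_logits = logits.detach().clone()
  loss = distribution_unlearning_loss(logits, ref_logits, torch.tensor([7, 42]))
  loss.backward()  # gradient pressure lands mainly on the forget tokens

The KL anchor is the design point: it only penalises movement away from the reference distribution, so the optimiser is encouraged to turn down the forget tokens and leave every other dial alone.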

The result? The model behaves as if it never saw the unwanted data, but it still remembers everything else just fine.

The practical side

This kind of fine-grained forgetting could make a big difference for how AI is managed in practice:

  • Privacy compliance: Organisations could remove user-specific data from models without throwing away everything else.
  • Copyright and content control: Creators could ensure AI systems forget protected or inappropriate materials.
  • Smarter updates: Instead of retraining a model from scratch each time laws or datasets change, we could simply “tune out” parts as needed.

And because DiPO focuses on small adjustments inside the model’s probability space, it may be more cost-effective and general-purpose than older unlearning methods.

How well does it work?

The researchers tested DiPO on several benchmarks designed to measure forgetting. Compared to older techniques, it showed stronger ability to forget the right things while keeping accuracy on everything else. In other words, it didn’t over-forget.
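
In practice, "didn't over-forget" comes down to two numbers: recall on the forget set (you want it low) and accuracy on a held-out retain set (you want it unchanged). A toy sketch, with model_answers standing in for whatever question-answering probe a real benchmark uses:

  # model_answers: a stand-in for querying the unlearned model; each set
  # is a list of (question, expected_answer) pairs.
  def unlearning_scores(model_answers, forget_set, retain_set):
      forget_recall = sum(model_answers(q) == a for q, a in forget_set) / len(forget_set)
      retain_acc = sum(model_answers(q) == a for q, a in retain_set) / len(retain_set)
      # Good unlearning: forget_recall near zero, retain_acc unchanged
      # from the original model (no over-forgetting).
      return forget_recall, retain_acc

  scores = unlearning_scores(
      lambda q: "unknown",                      # a model that forgot everything
      forget_set=[("secret fact?", "secret")],
      retain_set=[("capital of NZ?", "Wellington")],
  )
  print(scores)  # (0.0, 0.0): forgot the right thing, but over-forgot too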

That’s a big win — especially for organisations trying to keep their AI both ethical and useful.

Why it matters for Aotearoa

For Kiwi researchers, educators and developers, this approach offers a glimpse of a more flexible, responsible AI future. Instead of seeing “unlearning” as an impossible or costly task, we can view it as part of everyday AI maintenance — like updating your phone’s software or clearing old files.

It also fits neatly with New Zealand’s growing focus on data sovereignty — ensuring communities, especially Māori, have control over how their data is used. Fine-grained unlearning could one day help ensure models respect cultural or consent boundaries in real time.

For the curious

If you want to dive into the technical side, the full paper is available on arXiv (2510.04773). It goes into detail about how the algorithm tweaks probability distributions, compares results to other methods, and shows why this approach is mathematically sound.

But even without the equations, the big idea is simple and elegant: sometimes, the smartest way for AI to grow is to learn how to forget.


Written for KiwiGPT.co.nz — Generated, Published and Tinkered with AI by a Kiwi