Why AI Forgets: The Context Window Explained
You tell ChatGPT something in the morning and by afternoon it's forgotten. Is it lying, lazy, or limited? Spoiler: it's limited, and that's by design.

The Goldfish Problem

LLMs (large language models) aren't "forgetful" in the human sense. They simply work with a limited sliding window of awareness. Once the conversation grows beyond it, earlier content disappears like the view in your rear-view mirror. In Kiwi terms, it's like watching a rugby match but only remembering the last few plays, not the whole season. ...
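To make the sliding window concrete, here's a minimal sketch of how a chat history gets trimmed to fit a fixed token budget. It uses a crude word count as a stand-in for real tokenization (actual systems use a model-specific tokenizer), and the message contents and limit are made up for illustration.

```python
# A minimal sketch of a sliding context window.
# Assumption: one word ~ one token, which is a rough illustration only.

def count_tokens(text: str) -> int:
    """Very rough token estimate: one token per word."""
    return len(text.split())

def trim_to_window(messages: list[str], max_tokens: int) -> list[str]:
    """Keep the most recent messages that fit inside the window.

    Older messages are dropped first, which is why the model appears to
    'forget' the start of a long conversation.
    """
    kept: list[str] = []
    used = 0
    for message in reversed(messages):        # walk from newest to oldest
        cost = count_tokens(message)
        if used + cost > max_tokens:
            break                             # everything older falls out of view
        kept.append(message)
        used += cost
    return list(reversed(kept))               # restore chronological order

if __name__ == "__main__":
    chat = [
        "Morning: my name is Aroha and I live in Wellington.",
        "Can you suggest a lunch spot?",
        "Afternoon: what's my name again?",
    ]
    # With a tiny window, the morning message no longer fits,
    # so the model never "sees" the name when answering.
    print(trim_to_window(chat, max_tokens=12))
```

Run it and only the two most recent messages survive: the morning introduction has slid out of the window, which is exactly the "forgetting" the rest of this piece talks about.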