AI hallucinations are back in focus — and things are getting weirder.
It turns out that the most advanced “reasoning models” are also the most prone to hallucination. In other words: the smarter they get, the more they make stuff up.
This raises a question: if we built a model that never hallucinates, would it still be capable of real problem-solving? Or would we have stripped away the very creativity that problem-solving demands?
And if so, what does that say about creativity and ourselves?
Could it be that creativity — in both machines and humans — depends on a capacity to bullshit?
Maybe what we call “imagination” is just bullshit with a purpose.
Originally published on LinkedIn.