Divinations, Not Hallucinations: Rethinking Truth in the Age of Generative AI
In recent years, the word hallucination has become a shorthand for everything wrong with large language models (LLMs).
It signals a flaw, an error, a moment where the machine confidently gets it wrong. But maybe this framing is incomplete. Maybe even misleading.
What if these moments aren't hallucinations at all? What if they're not mistakes in the way we think about them, but instead... divinations?
What LLMs Actually Do
Let’s take a moment to ground this. Models like GPT don’t “know” anything. They don’t possess understanding. What they do is generate language based on patterns. They’re built on an architecture called a transformer, trained on enormous amounts of text to calculate the most probable next token in a sequence. Generation proceeds token by token, each choice conditioned on everything before it, until the output is complete.
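To make that concrete, here is a deliberately tiny sketch of that loop. The bigram table below is a hypothetical stand-in for a trained model (a real transformer learns vastly richer statistics over far more context), but the generation procedure, sampling one token at a time from a probability distribution, is the same in spirit.

```python
import random

# A toy bigram "language model": for each token, the probabilities of the
# tokens that tend to follow it. This table is invented for illustration.
BIGRAMS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the":     {"cat": 0.5, "oracle": 0.5},
    "a":       {"pattern": 1.0},
    "cat":     {"sat": 0.7, "<end>": 0.3},
    "oracle":  {"spoke": 1.0},
    "pattern": {"<end>": 1.0},
    "sat":     {"<end>": 1.0},
    "spoke":   {"<end>": 1.0},
}

def generate(max_tokens: int = 10) -> list[str]:
    """Sample one token at a time until the model emits <end>."""
    tokens = ["<start>"]
    for _ in range(max_tokens):
        dist = BIGRAMS[tokens[-1]]
        # Weighted sampling: the model offers probabilities, not facts.
        next_token = random.choices(list(dist), weights=dist.values())[0]
        if next_token == "<end>":
            break
        tokens.append(next_token)
    return tokens[1:]  # drop the <start> marker

print(" ".join(generate()))  # e.g. "the oracle spoke"
```

Notice what’s absent: there is no lookup into a store of facts. Every output is a weighted draw from learned patterns.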
When you ask a question—about mitochondria, political theory, or a random historical fact—you’re not tapping into a fixed database of truth. You're invoking a statistical oracle that has learned how people tend to answer such questions. You're asking for a best guess.
That’s not delusion. It’s pattern recognition. It’s probability-driven divination.
Divination as Pattern, Not Prediction
In many ancient cultures, divination wasn't about being exactly right. It was about drawing meaning from chaos. It was a way of interpreting patterns—shapes in the stars, ripples in water, the flight of birds—not to control the future, but to find alignment or direction.
Modern LLMs operate in a remarkably similar way. They turn billions of data points into coherent, readable narratives. But what they offer is not truth. It’s coherence.
And coherence can feel like truth if we’re not careful.
So when people complain about AI hallucinations, they are often reacting to outputs that sound confident but contain inaccuracies. But the real problem is usually upstream: vague prompts, missing context, or users asking machines for something they were never designed to deliver—certainty.
We Are Part of the Problem
It’s tempting to blame the tools. But LLMs are mirrors more than they are minds. If we prompt them vaguely, they will answer vaguely. If we treat them like experts, we’ll be misled. If we fail to verify, we’re the ones responsible.
We’re asking for too much without doing our part.
People forget that before LLMs, we had to read, research, and verify. We used Google with care, cross-checking sources and weighing perspectives. We didn’t assume the first search result was scripture. Why have we stopped applying that same discernment to AI?
A Better Approach to AI Research
If we want to use these tools responsibly, we need a new approach. Here’s what that could look like:
Multiple angles. Don’t settle for the first answer. Run your question several times, tweak the wording, and notice the differences.
Self-auditing. Ask the model to analyze its own output. Follow up with prompts like “Which parts of this response might be incorrect or unverifiable?” (See the sketch after this list.)
Manual verification. Highlight bold statements. Copy, paste, and search. Use the same diligence you would with any other research tool.
Layered prompting. Build context before asking for analysis. Don’t expect magic from a one-line prompt.
Own the outcome. You are the final filter. The model can’t know what you know. Your judgment is the safety layer.
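To make the first two habits concrete, here is a minimal sketch assuming the official OpenAI Python client. The ask helper, the model name, and the example question are all placeholders; substitute whatever tooling you actually use.

```python
from openai import OpenAI  # assumes the official OpenAI Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Hypothetical helper: one question in, one answer out."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Multiple angles: rephrase the same question and compare the answers.
question = "What did the Library of Alexandria actually contain?"
angles = [
    question,
    "Summarize the scholarly consensus on the holdings of the Library of Alexandria.",
    "What common myths exist about the Library of Alexandria?",
]
answers = [ask(a) for a in angles]

# Self-auditing: feed the first answer back and ask for weak points.
audit = ask(
    "Which parts of this response might be incorrect or unverifiable?\n\n"
    + answers[0]
)
print(audit)
```

The audit step won’t catch everything, since the model is grading its own homework, which is exactly why the manual verification and final-filter steps still fall to you.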
Divination Is a Collaboration
We’ve been thinking of LLMs as faulty calculators or dishonest librarians. That isn’t quite right.
They are pattern-compressors. They are mirrors of human language and thought. If they hallucinate, it's only because we asked them to speak without anchoring them in grounded context. They are diviners of our own unspoken structures, trained on our best and worst impulses.
And in the end, they rely on us. On our clarity. On our judgment. On our willingness to verify and reflect.
So let’s stop asking whether the machine is “lying.” Instead, let’s ask better questions. Let’s design better rituals. Let’s recognize that what we are seeing is not hallucination, but the poetry of possibility.
Not prophecy.
Not truth.
But pattern, offered back to us, waiting to be shaped.
Because in the age of generative AI, we are not just consumers of knowledge.
We are editors, curators, and co-authors of meaning.