Even if it's not hallucinating, it's hallucinating
You may have noticed that, occasionally, your trusty AI assistant makes stuff up. I've had GitHub Copilot invent code libraries that don't exist, for example. And then there was that case of the lawyer who leaned on ChatGPT to find some precedents and found out the hard (i.e. embarrassing) way that those were made up, too.
Those are called hallucinations, and a lot of effort goes into making LLMs produce them less often.
However, at their very core, all LLMs hallucinate everything. Let me explain (and for more detail, I highly recommend Andrej Karpathy's Deep Dive into LLMs. It's a three-hour video, so watch it over a couple of lunch breaks...).
Before you can create a useful AI assistant like Claude or ChatGPT, you need a base model. That base model is a neural network trained to guess the next word (a token, actually, but let's keep it high-level) in the training dataset, which is basically the entire Internet.
For a given sequence of words, the base model returns a list of probabilities for possible follow-up words. These probabilities match the statistics of the training text. In short, we've got ourselves an internet text simulator. This base model hallucinates everything, all the time. All it does is answer the question, "How would a text that starts like this likely continue?"
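You can see this next-token guessing directly. Here's a minimal sketch using the Hugging Face transformers library and GPT-2 (an old, small base model); the prompt is an arbitrary example, and you'd need torch and transformers installed for this to run.

```python
# Ask a small base model (GPT-2) how it thinks a text continues.
# Assumes the Hugging Face `transformers` and `torch` packages are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"  # arbitrary example prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Turn the logits at the last position into a probability distribution
# over the whole vocabulary: the model's guess at "what comes next".
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)

for token_id, p in zip(top.indices, top.values):
    print(f"{tokenizer.decode(token_id.item())!r}: {p.item():.3f}")
```

All the model ever gives you is that distribution; everything else is sampling from it.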
All the work that's been done on top of this base model is about clever tricks that turn an internet text simulator into something useful:
Post-training with hand-curated examples of what a good answer looks like, so that instead of an internet text simulator we get a helpful assistant simulator (see the sketch after this list)
Reinforcement learning where human (or AI) critics provide feedback on the answers
Adding examples to the training set where the AI assistant is allowed to (supposed to, really) say "I don't know"
Enhancing the model through tool use (e.g. internet searches)
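To make the first and third points a bit more concrete, here's a rough sketch of what hand-curated post-training examples might look like. The exact schema is my assumption, not any lab's real format; the point is that humans write (or approve) the ideal responses, including ones where the ideal response is to admit ignorance.

```python
# Illustrative only: the schema below is an assumption, not a real lab's format.
# The idea: prompts paired with human-curated "ideal" responses,
# including examples where the ideal response is "I don't know".
sft_examples = [
    {
        "messages": [
            {"role": "user", "content": "Explain what a hash map is in one paragraph."},
            {"role": "assistant", "content": "A hash map stores key-value pairs ..."},
        ]
    },
    {
        "messages": [
            {"role": "user", "content": "What did I have for breakfast this morning?"},
            # A deliberately curated "I don't know" example.
            {"role": "assistant", "content": "I have no way of knowing that."},
        ]
    },
]
```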
It's all about reining the model in tightly enough that the most probable way the underlying text would continue is actually something useful. Even if it's all just made up.
This is useful to keep in mind when evaluating potential LLM use cases.