Hallucinating with ChatGPT

This blog post is written by a guest blogger on behalf of the Library's Information Integrity Team. It is part of a series that covers disinformation and other related subjects. The goal is to help create a well-informed citizenry of active participants who shape our world.


It’s November 30, 2022, and OpenAI, an artificial intelligence company, has just launched a product called ChatGPT. It is one of the first AI chatbots of its kind to reach a wide audience, and many more have followed since. ChatGPT can do a lot of things: write poems, help you draft emails, even answer tricky questions that are hard to Google. Many people hoped it would be a purely helpful tool with no downsides.

When you talk to a chatbot like ChatGPT, Google Gemini, or others, they often sound like friendly, smart helpers. They seem like they know everything and are just there to answer your questions or chat with you. These AIs were trained using huge amounts of information from the internet and other sources, so it might seem like they can answer anything. But the truth is, they sometimes get things wrong. They also have built-in biases and often make things up instead of saying, “I don’t know.” In the tech world, this is called a “hallucination.” That word usually means someone sees or hears something that isn’t really there, but in this case, it means the AI gives you an answer that sounds real but isn’t true.

Here’s how it works: when you ask a question, the AI doesn’t actually “think” like a person. Instead, it uses math to guess which word is most likely to come next, based on your input and on everything it has written so far. It’s kind of like a fill-in-the-blank game, choosing the next word over and over until it decides it’s done. For example, if you ask, “What is the most popular dog breed?” the AI builds its answer one word at a time. It will probably start with “The most popular dog breed is _________,” because that’s how you would expect a person to respond to the question. If it doesn’t know the real answer, it might just fill in something that sounds right. And while these made-up answers can seem believable, that doesn’t make them true. In fact, because there is some randomness in how each word is chosen, asking the same question twice, or the same question as someone else, can produce differently worded answers.
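To make the fill-in-the-blank idea concrete, here is a minimal sketch in Python of next-word prediction. Everything in it is made up for illustration: the tiny hand-written probability table (`next_word_probs`) and the `generate` function are stand-ins, and a real model like ChatGPT learns its word probabilities from enormous amounts of text rather than from a table like this.

```python
import random

# A toy "language model": for each word, a hand-written table of which words
# might come next and how likely each one is. A real model such as ChatGPT
# learns billions of patterns like this from text instead of using a table.
next_word_probs = {
    "The":      {"most": 1.0},
    "most":     {"popular": 1.0},
    "popular":  {"dog": 0.7, "cat": 0.3},
    "dog":      {"breed": 1.0},
    "cat":      {"breed": 1.0},
    "breed":    {"is": 1.0},
    "is":       {"the": 1.0},
    "the":      {"Labrador": 0.5, "Poodle.": 0.3, "Bulldog.": 0.2},
    "Labrador": {"Retriever.": 1.0},
}

def generate(first_word, max_words=10):
    """Build a sentence one guessed word at a time, like a fill-in-the-blank game."""
    sentence = [first_word]
    word = first_word
    for _ in range(max_words):
        options = next_word_probs.get(word)
        if options is None:
            break  # no guesses left for this word, so stop
        # Pick the next word at random, weighted by its probability.
        # This randomness is why the same question can come back worded differently.
        word = random.choices(list(options), weights=list(options.values()))[0]
        sentence.append(word)
        if word.endswith("."):
            break  # the model decides it's "done" once it produces a period
    return " ".join(sentence)

print(generate("The"))
# Possible outputs:
#   "The most popular dog breed is the Labrador Retriever."
#   "The most popular cat breed is the Poodle."   <- fluent, confident, and wrong
```

Run the sketch a few times and you will get different sentences, some of them smooth-sounding but false, which is essentially what a hallucination is: a confident guess, not a checked fact.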

Now that this kind of AI is being used more often, in internet searches, school papers, and other places, it’s becoming easier for wrong or fake information to spread. Misinformation used to require a person to write or say something that sounded right but was wrong, whether on purpose or by accident, and coming up with it took time and effort. Now an AI can write something full of bad information in seconds, and it can spread on social media just as quickly as accurate information does.

Search engines aren’t always perfect either, since the results you get depend on how you phrase your search (AI chatbots are similar in that way). But at least search engines like Google and Bing show you the websites the information comes from. Chatbots don’t always do that. They often just hand you an answer with nothing to back it up, as if to say, “Trust me, I’m a robot.” Other times they will make up citations to papers that don’t exist! And the free versions of these AIs are more likely to make mistakes than the paid ones, since the models behind them aren’t always the same.

So, if you ever get an answer from an AI, it’s smart to double-check it. Try looking it up on a search engine or seeing what other websites say. If the answer came from an AI-generated summary at the top of a search results page, scroll down to the actual results and check where the information came from before you treat it as 100% true.
