Artificial Intelligence is everywhere. From smart assistants that finish our sentences, to chatbots that try to solve our problems (sometimes before we even know what the problem is), it all feels a bit magical.

But there’s a tiny catch: sometimes AI gets creative—maybe a little too creative—and just makes things up. The industry calls this phenomenon “hallucinations” (no potions required).

Why do AIs hallucinate?

It’s not because these systems want to trick us, but because of how they work.

Most large language models (those chatty AIs like ChatGPT) have learned to guess what sounds like a good answer, not what the right answer is.

Their training rewards them for offering any answer over admitting they “don’t know,” which can lead them to “bluff” their way through tough questions (OpenAI explains why, summary).

It’s a bit like that student who always fills in a guess on multiple-choice tests instead of leaving an answer blank. Sometimes they get lucky, but sometimes… not so much! And unlike a student’s guess on a test, an AI’s bluffed answer can end up in a legal brief or a business report.
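To see that incentive in numbers, here’s a tiny back-of-the-envelope sketch (the figures are invented for illustration, not taken from any real model or benchmark): if a test scores a right answer as 1 point and everything else, including “I don’t know,” as 0, then guessing always has the higher expected score, no matter how unsure the model is.

```python
# Toy illustration (not any real model's training code): why accuracy-only
# scoring rewards guessing over admitting uncertainty.

def expected_score(p_correct: float, answers: bool, wrong_penalty: float = 0.0) -> float:
    """Expected score for one question.

    p_correct     -- chance the model's best guess is right (hypothetical)
    answers       -- True if the model guesses, False if it says "I don't know"
    wrong_penalty -- points deducted for a confident wrong answer (0 = plain accuracy)
    """
    if not answers:
        return 0.0  # abstaining earns nothing under this grading scheme
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

# Under plain accuracy (no penalty for wrong answers), guessing always wins:
for p in (0.9, 0.5, 0.1):
    print(f"p={p}: guess={expected_score(p, True):.2f}  abstain={expected_score(p, False):.2f}")
# Even at p=0.1 the guess scores 0.10 on average vs 0.00 for "I don't know",
# so a model optimized for this metric learns to bluff.
```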

Real-world adventures in AI hallucinations

AI “hallucinations” aren’t just a theoretical curiosity. In July 2025 alone, over 50 legal cases were reported in which AI-generated citations turned out to be entirely made up and were submitted to courts around the world, sometimes by experienced lawyers at reputable firms (case stats, global database).

And it’s not just the law: similar issues crop up in healthcare, journalism, retail, and… well, anywhere someone trusts an AI’s answer without a double-check.

Teaching AI when to admit doubt

Here’s the silver lining: the big brains behind these models (like at OpenAI) are now training AI to be more honest about its limits. The idea is to teach AI to say, “I’m not sure,” instead of faking confidence. This means your digital assistant might soon start admitting it doesn’t always have the answer, rather than bluffing through hard questions (OpenAI’s new strategy).
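One intuition behind this shift, shown as a simplified sketch rather than OpenAI’s actual method: if a wrong answer costs more points than saying “I’m not sure,” then below a certain confidence level the honest answer becomes the smarter bet. The penalty value and threshold here are hypothetical, chosen only to make the arithmetic easy to follow.

```python
# Toy continuation of the scoring sketch above: once wrong answers carry a
# penalty, "I'm not sure" becomes the better strategy below a confidence level.

def expected_score(p_correct: float, answers: bool, wrong_penalty: float) -> float:
    """Expected score: +1 for a right answer, -wrong_penalty for a wrong one, 0 for abstaining."""
    if not answers:
        return 0.0
    return p_correct - (1.0 - p_correct) * wrong_penalty

WRONG_PENALTY = 1.0  # hypothetical grading choice: a wrong answer costs one point

def best_strategy(p_correct: float) -> str:
    """Pick whichever of guessing vs. abstaining has the higher expected score."""
    guess = expected_score(p_correct, True, WRONG_PENALTY)
    abstain = expected_score(p_correct, False, WRONG_PENALTY)
    return "guess" if guess > abstain else "say 'I'm not sure'"

for p in (0.9, 0.6, 0.4, 0.1):
    print(f"confidence {p:.0%}: {best_strategy(p)}")
# With this penalty, guessing only pays off above 50% confidence;
# below that, the model scores better by admitting uncertainty.
```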

This approach doesn’t mean AI has learned humility (at least, not in the human sense), but it’s a huge leap towards making these tools more trustworthy and less likely to lead us astray.

This move toward transparency comes at a time when trust in big tech is under the microscope. Just recently, the European Union fined Google nearly €3 billion for “self-preferencing” its own ad technology over competitors’ (news, background). These kinds of cases show just how important it is for digital systems (and the companies behind them) to operate openly and fairly.

If even the giants can’t get away with hiding the truth, maybe it’s only fair to expect our AIs to be up-front when they’re stumped too!

Not just an AI problem: the human angle

Let’s be honest: humans also “fake it” sometimes, especially at work. Who hasn’t fluffed up an answer in a meeting, hoping for the best? What’s changing now is the culture: whether it’s technology or team meetings, it’s becoming more acceptable (and maybe even smart!) to say “I don’t know.” It’s a sign of professionalism and critical thinking, not weakness.

The next time you see your virtual assistant pause and say, “I’m not sure,” take it as a win, not a failure. This kind of honesty could help us all, AI included, avoid some epic mistakes. And maybe, just maybe, it’ll inspire more of us in the real world to do the same.

