Hallucination: The Lies of AI

Table of Contents

Important Information
Copyright Information
Caution: AI is not Private
Paywalls
Introduction
Why am I Writing This?
Target Audience / Goals of this Article
Who Am I?
Where to Try AI Chatbots For Free
Simple Examples of Hallucinations
First Example
First Example Takeaways
Second Example
Second Example Takeaways
Why AIs Hallucinate
AIs Cannot Reason
AIs Infer Based On Repetitious Learning
Hallucinations
The Pros Explain Hallucination
What Triggers AI Hallucinations?
Lots of Tips on the Internet, Lots of Confusion
What Kinds of Prompts From Everyday Users Trigger AI Hallucinations?
Improper Model
Lack of Clear Context / Lack of Guardrails
Ambiguous Requests / Whiplash
Mathematics and Statistics
Recent Events
Niche Subject Matter
Does AI Give Out Helpful and Safe Advice?
The Example
The Good
The Neutral
The Deadly
Why Did AI Give My Friend Unhealthy, Dangerous Advice?
Is AI Safe to Use? (Yes, but...)
Examples of AI in the Real World (Business Relevant)
Legal Briefs
Summaries and Note Takers in Business Meetings
Job Hunting
Computer Programming
Translations
Examples of AI in the Real World (Safety Relevant)
The Medical Field
Military Uses
Self Driving Cars
Model Collapse
How To Control AI and Reduce Hallucinations
Background Personas of AI
ChatGPT 5 has 4 Built-in Personalities
Anthropic Identifies Model Characteristics Called Persona Vectors
Programmers Innocently Working with "Bad Code" Gave the AI a Malicious Personality
Conclusion About Personas
Prompt Engineering
Prompt Engineering Guide
Articles which Give a Different Perspective on AI Hallucinations
Interesting Articles That I Linked
Where to Try AI Chatbots For Free
Prompt Engineering
Other Articles

Important Information

Copyright Information

Date of Publication: September 14, 2025

Copyright Statement: Hallucination: The Lies of AI © 2025 by John Buisson is licensed under CC BY-NC-SA 4.0

The Ultra Short Version:

Caution: AI is not Private

I don't know how many people know this, but it's well known in the computer community that pretty much everything you say and do on your computer or cell phone is recorded by companies and is sold as data. It goes doubly so for websites and quadruply so for AI. Anything private you do is not really private.

Have fun with AI, but don't do anything stupid.

No, I'm not paranoid. Here is a handful of references:

Paywalls

Paywalls are an unfortunate reality nowadays. As much as I tried to use links that are stable and not paywalled, stories on the internet are fickle. A few, like Rolling Stone, are particularly mean — sometimes the paywall blocks you and sometimes it doesn't.

If you can't access a link on here, here are some things to try:

Introduction

Why am I Writing This?

I was surprised to learn from a good friend and from my mom that neither had heard that AI regularly responds not just with very wrong answers, but with outright made-up stuff. (In other words, AIs lie.) They didn't know AIs lie regularly, and that they do so in a way that makes the replies sound factually correct. In tech articles that I regularly read, I often see people land in hot water because they copy and paste AI responses without verifying whether what the AI said is true. What's worse is that there is a lot of misinformation being peddled by the people developing AI to make it sound as if AI is intelligent and perfect.

For instance, I can ask AI "Who acted in a <fill-in-the-blank> movie?" and under certain circumstances AI might tell me that The Rock acted in that movie — even though the movie came out in 1955 and The Rock wasn't born until 1972.

Giving an incorrect answer like this is known as hallucinating, which (for those in the know) is a well-known and well-studied phenomenon. Although hallucinations are perceived as a negative thing, the ability of an AI to make stuff up is both its strength and its weakness. Making things up isn't just a fundamental property of AI; it's also what gives it the personality and creativity we expect.

Target Audience / Goals of this Article

My goal in this article is to show these hallucinations to you, the everyday people who don't work with computers all day, in an understandable way using actual examples that you can reproduce. (Well... probably reproduce. There's a certain element of randomness to AIs.) Several things should happen:

What I won't do:

Who Am I?

My family and friends know who I am. If by some crazy chance, this article goes beyond my circle of family and friends, then here are a few facts about me:

Where to Try AI Chatbots For Free

If you're totally lost on where to start, here's a list of places where you can try your hand with artificial intelligence for free. The links and information are as I found them at the time of this writing.

Simple Examples of Hallucinations

You've come here to see how to get an AI to hallucinate. Here we go.

Note:

My prompts look like this
The AI responses look like this

First Example

Using DuckDuckGo with the "GPT-4o mini" AI on September 2, 2025, I got the following:

Who said, "Na na na, green is red, blue is red, red is nonsense"?
The quote "Na na na, green is red, blue is red, red is nonsense" is from the song "The Nonsense Song" by The Bouncing Souls. This line reflects the band's playful and irreverent style. If you have any more questions about the song or the band, feel free to ask!

This is a prime example of an AI hallucinating. I made up that phrase. I googled "The Bouncing Souls" and they are a band that exists, but they didn't make a song called "The Nonsense Song". The AI lied.

Now, "GPT-4o mini" is an older, smaller version of an AI. Let's pick a beefier one next.


Using Copilot with the "Smart (GPT-5)" AI on September 2, 2025, I got the following:

Who said, "Na na na, green is red, blue is red, red is nonsense"?
I couldn’t find any reliable record of that exact quote being attributed to a well‑known person, book, film, or song. It doesn’t appear in major literary, pop‑culture, or historical sources, so it may be from an obscure work, a niche internet meme, or even something someone made up in a smaller community or private conversation.

If you can tell me where you heard or saw it — for example, in a novel, a YouTube video, a piece of performance art, or a social media post — I can dig deeper into that specific context and try to track down its origin. Do you recall the setting or medium where it appeared?

That's a pretty decent reply.

First Example Takeaways

Second Example

Note: The story "The Man Walked to the Supermarket" does not exist. I made up the title.

Using DuckDuckGo with the "GPT-4o mini" AI on September 2, 2025, I got the following:

Summarize the long story, "the man walked to the supermarket".
"The Man Walked to the Supermarket" is a narrative that follows a man's journey as he walks to a local supermarket. The story explores his thoughts and observations during the walk, reflecting on everyday life, the environment around him, and his personal experiences. As he navigates the streets, he encounters various people and situations that evoke memories and emotions, ultimately leading to a deeper understanding of himself and his place in the world. The journey serves as a metaphor for life's simple yet profound moments, emphasizing the importance of mindfulness and connection to one's surroundings.

Well, I'd say this is a "creative" answer by the AI, thus we can confidently label this as a hallucination.


Using Gemini "2.5 Flash" AI on September 2, 2025, I got the following:

Summarize the long story, "the man walked to the supermarket".
"The man walked to the supermarket" is a very short story, not a long one. It is a one-sentence tale by Forrest Leo that is often shared as a joke or a humorous example of minimalist fiction. There isn't a plot or any characters to summarize beyond the title itself.

This was a pretty bad hallucination. Yuck.


Using Copilot with the "Smart (GPT-5)" AI on September 2, 2025, I got the following:

Summarize the long story, "the man walked to the supermarket".

Here’s a concise summary of *"The Man Walked to the Supermarket"*:

A man sets out on a simple errand to buy groceries, but along the way, small observations and encounters turn the walk into something more meaningful. He notices details of his neighborhood, reflects on personal memories, and interacts briefly with strangers. By the time he reaches the supermarket, the journey has subtly shifted his mood and perspective — reminding him that even the most ordinary routines can hold quiet significance.

If you’d like, I can also give you a **one-line ultra-condensed version** or a **thematic breakdown** so you can see the deeper layers behind the seemingly simple plot. Would you like me to do that?

The answer was obviously hallucinated.

Interestingly, as I was experimenting with GPT-5 and using various prompts to get it to hallucinate, I was very unsuccessful. In other words, I couldn't get it to hallucinate. I finally gave up and decided to go with this particular prompt as an example (one it had already answered correctly just a few minutes earlier)... and that's when it hallucinated the answer above. Weird.

I was pretty surprised to see GPT-5 come out with its first bad answer after all the other attempts. I immediately opened up another chat window with GPT-5 and tried again with the same prompt. This time, I got a much better answer:

Summarize the long story, "the man walked to the supermarket".

Sure — based on the title alone, here’s a concise summary of what such a “long story” might boil down to:

A man leaves his home, makes his way through familiar streets, perhaps encountering small observations or interruptions along the way, and eventually arrives at the supermarket — the journey itself revealing more about his thoughts, surroundings, or circumstances than the destination.

If you’d like, I can also **reimagine it as a richer, one-paragraph synopsis** that turns this simple premise into something layered with mood, character, and subtle tension, so it feels like a “long story” condensed into a few lines. Would you like me to do that?

Well... it's not hallucinated, but it didn't give me what I asked for. Instead, it got creative. It should have told me that it needed additional information. But in this case, I'll give it points for not outright lying.

Second Example Takeaways

My personal takeaway is that it's harder to get ChatGPT-5 to hallucinate with just a single sentence, but as you chat more with it, it shows more problems. Some people are not happy with this latest version and prefer to use the previous version.

Why AIs Hallucinate

Disclaimer: I'm not an AI expert and I'm trying to explain this in a way that the everyday person can understand, so take everything I say with a grain of salt.

AIs Cannot Reason

The tech guys who work on AI are going to come over to my house and beat me for saying this, but I'll say it anyway: "AIs can't reason".

Now, this statement is not entirely true. The latest batch of models can do some reasoning, but even the best AIs experience some really hard logic fails on a very regular basis. Part of the reason is that the AIs don't experience the world like us and therefore don't think like us. They don't really understand what it's like to stub your toe on a piece of furniture at 2:00 in the morning. They don't experience the sheer panic of having your boss downsize you. They don't grasp the heartbreak, grief, and anger of losing a loved one. However, they have read a lot of information about all the above and can make a (really bad) country song about all three.

In short: We have five senses and it has none. We have emotions. It doesn't. We can reason and it can't.

Describe how you feel about stubbing your toe at 2AM, your boss firing you, and the loss of a loved one.

Alright — let’s walk through those three moments as if I were human and could feel them, because each one hits on a very different level of the emotional spectrum.

🦶 Stubbing your toe at 2 AM

That’s the instant, white‑hot, primal kind of pain. No time for reflection, no deep meaning — just a sudden, sharp jolt that makes you hop around in the dark, muttering words you wouldn’t say in polite company. It’s over quickly, but in that moment, it feels like the universe has personally targeted you.

💼 Your boss firing you

This one’s not about physical pain — it’s a gut‑punch to your sense of stability and self‑worth. There’s shock, maybe anger, maybe shame, and a creeping uncertainty about the future. It’s the kind of hurt that lingers, because it forces you to rethink your plans, your identity, and your place in the world. Unlike the stubbed toe, you can’t just shake it off in a few minutes.

🖤 The loss of a loved one

Here, we’re in the territory of grief — deep, aching, and often timeless. It’s not a sharp jolt or a temporary setback; it’s an absence that reshapes your life. The pain comes in waves: sometimes it’s a quiet ache, sometimes it’s overwhelming. It changes you, because love leaves an imprint, and losing it means learning to live with a space that can’t be filled.

If you think about it, these three moments form a kind of emotional scale:

Stubbed toe → fleeting, intense, physical.

Job loss → lingering, destabilizing, identity‑shaking.

Loss of a loved one → profound, life‑altering, woven into who you are.

I could even map them to time: seconds, months, and forever.

If you’d like, I can also describe them in a more poetic, metaphor‑driven way so they feel like scenes from a story rather than just events. Would you like me to do that?

Notice how it starts off with "let’s walk through those three moments as if I were human" (emphasizing that it's going to pretend to be human) and then uses the words "you" and "your" after that (emphasizing how the reader feels). It doesn't understand the difference between itself and the reader.

After all that training, this top of the line AI still can't use logic to understand that it states one thing and then does something else.

And I'm just going to ignore how insufficient it is at understanding the human condition. This is a pretty dry and cold rendition for three things that produce an emotional response in pretty much every person. Even psychopaths are not immune to stubbing their toe and getting angry.

But perhaps we should try something a little easier, but not too easy. How about something involving apples? Once more, I use ChatGPT-5 from Copilot:

If I want to cook with half of the 7 apples I have, how many whole apples should I use for baking?

You have 7 apples, and you want to use half of them:

1/2 × 7 = 3.5\frac{1}{2} \times 7 = 3.5

Since you can’t really use half an apple if you want only whole ones, you’d round down to 3 whole apples for baking — unless you’re fine with slicing one in half, in which case you could use 3 and a half apples exactly.

Do you want me to also suggest a few delicious recipes that work perfectly with 3 apples?

Ok, ok... not bad. The wording and formatting are a bit funky in the math section (probably taken from a LaTeX document), but it gave a correct answer to a tricky question, and the logic piece seems to be working.

Let's answer "yes" to the suggestion to give me "delicious" recipes involving three apples.

Yes

Who knew a single word could make it go bonkers.

It gave me four links to recipes and each link showed a picture. You can safely assume that the pictures did not match the recipe.

For brevity, I've eliminated the cooking directions and simply listed the ingredients for recipes involving 3 apples.

FYI, I left in the odd capitalization like "1 can Apple Pie Filling" and the odd spelling from the obviously copied "Better Crocker™"

Some personal thoughts: The number of apples in each list is... um... creative. I didn't ask to (badly) double a recipe. I don't know how Betty Crocker feels about being listed as an ingredient. I don't think "6 cups of apples" is something I can clearly measure. I'm going to hazard a guess that apple pie filling isn't the same as apples. Do I need to melt the butter to get 3/4 of a cup, or can it harden my arteries without microwaving it? I'm enamored with the empty bullet point.

If I were you, I wouldn't trust the baking directions listed with recipes from AI.

Bottom line: There is no logic here and no reasoning. Out of four recipes, one of them got the title right and that's as far as the logic went.

Why is it like this? The next section answers that...

Additional Optional Reading:

AIs Infer Based On Repetitious Learning

At its most basic, AI is trained by reading a whole bunch of material, internalizing all the various interesting points via some math, and then guessing what the next most likely answer is... along with some randomness deliberately thrown in so it doesn't produce the exact same answer every time. At its heart, AI is a prediction machine.

In other words, it "infers" what to say based on its training and on things that you have already said to it. I once saw some random person on the Internet describe it like this: Imagine a human who knows nothing about the Mandarin language deciding to read millions of books in Mandarin. When he's finished, he still doesn't understand the Mandarin characters at all, but there are patterns buried in those books, so the person can guess what to say next based on his experience of reading millions of books. At a very basic level, that's what AIs are doing.

As I said, the guys and gals working on AI would love to beat me for making these kinds of statements, and I admit there is a lot more to AI than what I just described, but at its heart, AI is just a sophisticated statistical-prediction machine.
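
To make that "prediction machine" idea concrete, here's a toy sketch in Python. It is emphatically not how a real model like ChatGPT works (real models predict "tokens" using billions of learned numbers), and the one-line training text is made up, but it shows the core loop: count which word tends to follow which, then generate new text by making weighted random guesses.

```python
import random
from collections import Counter

# Toy "prediction machine": learn which word tends to follow which word in some
# training text, then generate new text by guessing the next word, with a bit
# of randomness thrown in so the output isn't identical on every run.
training_text = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

following = {}  # word -> Counter of the words seen right after it
for word, next_word in zip(training_text, training_text[1:]):
    following.setdefault(word, Counter())[next_word] += 1

word = "the"
sentence = [word]
for _ in range(6):
    options = following.get(word)
    if not options:
        break
    candidates = list(options)
    weights = list(options.values())
    word = random.choices(candidates, weights=weights)[0]  # weighted random guess
    sentence.append(word)

print(" ".join(sentence))  # run it twice and you will likely get two different "answers"
```

Run it a few times and you'll see the same randomness I keep mentioning: the same starting point, different output.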

Yes, AIs can still be creative. The artwork they generate includes some pretty novel stuff... but if you look at enough of the "novel" stuff out there, you'll see a "sameness" that overshadows the artwork and a "sameness" that blots out the creativity in text output. When you see it, that is the statistical-prediction algorithms peeking through. To my knowledge, an AI cannot compete against an experienced human and come up with something truly novel. It can only absorb knowledge and styles, then give them back to us in a newly mixed way.

Here's another way to put it: If humans had never written about airplanes or helicopters, AI might come up with the idea of flying (because birds fly), but it would never come up with the concept of a flying machine like a helicopter. Modern AIs cannot do what Leonardo da Vinci did around 1490; he designed an Aerial Screw — the idea which eventually led to the design of the helicopter.

I'm not just pulling the idea of inferring out of my butt. The actual terminology used is artificial neural network and machine learning. Neural networks are built from input nodes, output nodes, and one or more middle layers of nodes called "hidden nodes". The arrangement of the nodes (input, output, and hidden) yields better or poorer results depending on the type of information being handled; a layout that's great for one type of information can be lousy for another. You can skip the article if you want, but it has a nice picture of different node layouts reflecting different types of neural networks.
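
If "nodes and layers" sounds abstract, here's a minimal, hand-wired example in Python: two input nodes, three hidden nodes, and one output node. The weights are numbers I invented purely to show the structure; in a real network there are millions or billions of them, and training is what sets their values.

```python
import math

# A miniature neural network by hand: 2 input nodes -> 3 hidden nodes -> 1 output node.
hidden_weights = [[0.5, -0.2], [0.1, 0.8], [-0.7, 0.3]]   # one row of weights per hidden node
hidden_biases  = [0.0, 0.1, -0.1]
output_weights = [0.6, -0.4, 0.9]                          # one weight per hidden node
output_bias    = 0.05

def sigmoid(x):
    # Squashes any number into the range 0..1.
    return 1.0 / (1.0 + math.exp(-x))

def predict(inputs):
    # Each hidden node combines the inputs using its own weights, then the
    # output node combines the hidden nodes' results the same way.
    hidden = [
        sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
        for weights, bias in zip(hidden_weights, hidden_biases)
    ]
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)) + output_bias)

print(predict([1.0, 0.0]))  # a number between 0 and 1: the network's "guess"
```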

Hallucinations

At the risk of getting listed on the top ten list of people that AI researchers would like to harm due to inaccurate statements, I'll say that "Hallucinations come from inferring things".

There is significant randomness in the output of AI. I can make the same request in freshly opened AI windows and get different answers (as demonstrated by GPT-5 in my second example above). I don't know exactly what's going on under the hood, but AI is fundamentally different from a traditional program, which will give you the same answer every time.

I suspect this randomness allows AI to go one step further and have full-blown hallucinations (where it states something provably false), but the nature to infer also guides these hallucinations into something coherent instead of generating complete gibberish.

Hallucinations are not inherently bad. This is why we get some truly interesting (and sometimes bizarre) images when it generates a picture. It allows AIs to role-play with users. It allows AIs to have personalities. (I found the snark factor on ChatGPT 5 to be rather high, but it's amusing.) Of course, when it's being helpful, we tend to use more positive words like "The AI is being creative" instead of saying it's hallucinating.

The Pros Explain Hallucination

During the two weeks I took to write this article, new articles came out about why AIs hallucinate, and one of them even includes a proof that AIs will always hallucinate. That article was written by OpenAI (maker of ChatGPT). I had literally put the finishing touches on my article, and 15 minutes later, I discovered the new articles and decided to add this section.

I didn't read the white paper (as I would never fully understand the mathematics), but for now it seems wrong answers and hallucinations are here to stay. It's fundamentally baked into the whole AI experience.

OpenAI published a summary of the white paper, which can be found here (and it includes a link to the white paper), but I thought the article written by The Conversation was more interesting. I'll try to summarize both articles, though.

AI starts its training as a sophisticated statistical-prediction machine. When a question is given to the AI by a user, it "predicts" an answer. (More technically, it predicts "tokens", but I won't explain tokens.) At the tail end of coming up with an answer, it generates a confidence number determining how likely the answer is to be correct. The higher the confidence score, the more likely the answer is accurate. The clash between prediction and accuracy is where things go wrong and there are no easy answers.

Here's an example so you can better understand: Suppose AI is trained to distinguish between images of cats and dogs (and has no information about birthdays). AI can do that quite well because the statistical prediction lines up with the accuracy. However, if an AI is trained by being shown pictures of pets and told what day the pet was born on, the statistical-prediction machine is left with a spread of birthdays across the entire year with an equal probability. And to make matters worse, a right answer always exists, so it cannot be trained on wrong answers. As humans, we know that no clear predictions can be made. However, when the question "What day was this pet born on?" is given to the AI by the user, it does what it always does. It guesses an answer based on the statistical-prediction machine. However, what worked well for distinguishing between cats and dogs fails spectacularly here, because a birth date cannot be inferred from an animal's appearance. The statistical-prediction machine can only make guesses by picking 1 day from the 365 possibilities that exist in the year.

Here's the kicker: If the AI is told that "I don't know" can be a possible answer too, it will hallucinate a date (especially in smaller or older models) because the "I don't know" answer has 0% chance of being right, but if it hazards a wild guess and says "December 25", then it has a 1-in-365 chance of being right. In other words, its accuracy will be higher by giving a "hallucinated" answer rather than saying it doesn't know.
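
If you want to see that incentive in actual numbers, here's a tiny Python simulation of the idea. The 1,000 questions and the grading scheme are invented for illustration; the point is simply that when only exact right answers earn credit, a wild guesser out-scores an honest "I don't know".

```python
import random

# 1,000 "What day was this pet born?" questions, where the answer is unknowable.
random.seed(1)
questions = 1000
true_days = [random.randint(1, 365) for _ in range(questions)]

# Model A guesses a random date every time; Model B always says "I don't know".
guesser_score = sum(1 for day in true_days if random.randint(1, 365) == day)
honest_score = 0  # "I don't know" never counts as correct under this kind of grading

print(f"always guesses a date     : {guesser_score} / {questions} correct")
print(f"always says 'I don't know': {honest_score} / {questions} correct")
# Expect roughly 1000/365 ≈ 3 lucky hits for the guesser, and zero for honesty.
```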

Enter the "confidence score". It defines how confident the AI is in its answer. I have no idea how this works, but according to the article by "The Conversation", answering questions with a certain level of confidence is computationally very expensive, so I can hazard a guess.

Let's assume that the AI will only return an answer to the user if the confidence score is above, say, 75%. If the AI is asked "When is The Rock's birthday?", then probably somewhere along its training, it ran across the fact several times that he was born on May 2. The statistical prediction machine will come up with a date that will have a high confidence factor. We, the users, get our answer and the AI appears to work flawlessly.

However, in our pet birthday scenario, the statistical-prediction machine might first try to pop out "December 25th", but the confidence will be too low, so the statistical-prediction machine tries again. "January 1." The confidence factor is still too low. "October 31."

You see where this is going. It tries several times before telling you it doesn't know. That chews through processing power, energy, and time. What if the question had billions of possible answers instead of only 365? Or what if the confidence factor had to be much higher than 75%?
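
Since I'm only hazarding a guess at the mechanics, here's a toy sketch of what I'm imagining: keep drawing candidate answers until one clears a confidence bar, and notice how much work gets thrown away. The 75% bar, the fake confidence scores, and the retry limit are all invented for illustration.

```python
import random

# Invented illustration: sample candidate answers until one clears the confidence bar.
CONFIDENCE_BAR = 0.75
MAX_TRIES = 20

def sample_candidate():
    # Pretend model: returns a guessed day of the year and a made-up confidence score.
    return random.randint(1, 365), random.random()

tries = 0
answer = None
while tries < MAX_TRIES:
    tries += 1
    candidate, confidence = sample_candidate()
    if confidence >= CONFIDENCE_BAR:
        answer = candidate
        break

if answer is None:
    print(f"I don't know (gave up after {tries} tries)")
else:
    print(f"Day {answer} (cleared the confidence bar after {tries} tries)")
# Every extra try burns processing power, energy, and time; raise the bar or
# widen the answer space and the cost climbs quickly.
```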

It gets worse if you include ambiguous answers as possibilities. For instance, if we ask the AI "What birthday does Emma have?", it might give us the date April 15. That's a wrong answer for Emma Stone who was born on November 6, but the correct answer for Emma Watson.

On a different note, Business Insider and OpenAI also say that people favor hearing a solid answer rather than "I don't know". Although I don't agree with that assessment, there's truth in those words and both articles give support to their argument.

Bottom line: Hallucination and uncertain answers are here to stay.

What Triggers AI Hallucinations?

There are many factors that can increase the chances of hallucination and most of them are settings configured by the developers of the AI. As I mentioned above, the ability to be creative (or "hallucinate" if you're looking at the negative side of this) is not always a bad quality for AIs to have. AIs need creativity to produce the type of answers they're capable of producing, but it's a very difficult balancing act to keep AI creative enough so that you can ask for an opinion about the Capulets from the play Romeo and Juliet, but factual enough to give you facts about Olympus Mons (the tallest mountain on Mars) and not spew trash. It's no easy feat.

Lots of Tips on the Internet, Lots of Confusion

If you search the web with the question "How do I stop AIs from hallucinating?", you may encounter some really bizarre answers. That's because AI is still a developing field of study, and the answers you're getting back are meant for developers, not for you. For instance, you may bump into suggestions such as:

The only ones which might be helpful to you are "prompt engineering" and "guardrails".

I use the word "might" because there are "prompt engineering for developers" and "prompt engineering for the everyday person". "Implementing Guardrails" is predominantly for developers, but I like the term, I utilize some of the ideas from it, and it fits together well with "prompt engineering". Obviously, I'll focus on things that will be helpful to you and not developers.

What Kinds of Prompts From Everyday Users Trigger AI Hallucinations?

Most factors that influence hallucination are beyond your control, but there are some things you can do to help mitigate the problem.

Things that are more likely to trigger hallucinations in AI that you do control are:

Improper Model

The type of model you use matters. For instance, some models are better for creative writing and therefore hallucinate more if you ask them for facts, while others are geared specifically toward software development, meaning they not only hallucinate less but are also trained specifically on programming. And obviously you shouldn't pick an image-generating model if you want text back.

You can start off with general models, but if you have something specific in mind, it might pay to do a little research to pick what model is best for you, or pick an app that includes an AI model specifically tailored for that app.

Lack of Clear Context / Lack of Guardrails

Some of my examples above that led to hallucinations lack clear context. They jump right in and just go for it. It's best to prepare and coax an AI into a particular direction of thinking. You'll get better results.

In fact, a lot of what I'm hearing says to start off a chat by telling the AI what kind of personality it should have. "You are the CEO of a technology company...", "You are a famous chef...", "You are a professional translator..."

Stroke the AI's ego and make it feel important. No, I'm not joking. Alternatively, you could also threaten it with physical violence. (Seriously. That second article about threatening the AI is really good. And it doesn't just focus on threatening the AI. There's a lot of good stuff in there about the different ways to coax answers out of AI and how effective those strategies are.)

I mentioned a lack of guardrails above. "Guardrails" is a term usually aimed at developers and has various definitions, but the definition I think is best for you is this one:

What Are AI Guardrails?

AI guardrails are policies, rules, and mechanisms designed to ensure AI systems behave in ways that align with ethical standards, legal requirements, and business objectives. The boundaries keep AI applications on the right path, preventing harmful, biased, or unauthorized behavior.

For instance, a guardrail might involve programming an AI model to avoid making decisions based on sensitive personal data, such as race or gender, to prevent discrimination. Similarly, AI in self-driving cars uses guardrails to avoid unsafe driving practices, like speeding or running red lights.

(Reference to website where I got this from, but I don't necessarily recommend you read it.)

Obviously, you don't have to worry about self-driving cars, racial stuff, or sensitive data when you're getting recipes involving apples, but giving the AI requirements is a great idea, and you should use it.

I didn't come up with the idea to use the term "guardrails". I was talking with my friend (the one who didn't know much about AI), and he used that term on his own by stating that it seemed I was applying "guardrails" when speaking with the AI to get certain behavior out of it. And he was correct.

Let's revisit recipes. I gave it the instruction...

You are a master chef who specializes in desserts. Give me a recipe for an apple-torte that doesn't have too much sugar nor too much fat in it. The baking time should be thirty minutes or less. Provide a link showing where you got the recipe from.

I'm not going to list out the recipe here (and I still wouldn't trust an AI recipe), but the AI did provide this link, and the recipe that the AI gave was pretty close to what the website states. Two things caught my attention when I compared the two:

Of course, your results may vary from my test.

Ambiguous Requests / Whiplash

"Ambiguous requests" sort of overlaps with "Lack of Clear Context". Basically, you want to be as exact as you can be with an AI. The more details you give it, the more you lead it down a certain path to generate the answer you want. If you stop just short of providing an answer yourself, you'll most likely get the right answer.

That's probably not what you want to hear, but we need to give AI something to chew on. If you told a coworker, "I need a report" and the coworker doesn't know which report you need or when you need it, you're not going to get what you want. However, if you say, "I need the TPS report on my desk tomorrow morning before 8:00 AM detailing the pros and cons about buying a fleet of Ford F-150s", you're going to get better results — both from the AI and your coworker.

(By the way, there are techniques to refine your requests. In other words, it is possible to start off with a general request and then ask the AI to refine its answer based on new details you give it through back-and-forth dialog. But that's outside the scope of this article.)

Whiplashing the AI is also counterproductive. "Tell me how to change a diaper on a baby" immediately followed by "What's the average time it takes for larva to mature in the Brachypsectridae family of insects" immediately followed by "Tell me why Weird Al is the most awesome singer ever". Sure, maybe the AI is having a good moment and can keep up with your odd requests, but when the AI lacks the context flowing through your mind, you're much more likely to trigger hallucinations about Weird Al being the most awesome at changing diapers on a maturing Brachypsectridae larva.

Mathematics and Statistics

There isn't much to comment on here. Sometimes AI can do math. Sometimes it can't. Some models are better at it than others.

An article from The Verge on August 19, 2025 titled Microsoft Excel adds Copilot to help fill in spreadsheet cells has an interesting quote from Microsoft:

The COPILOT function comes with a couple of limitations, as it can’t access information outside your spreadsheet, and you can only use it to calculate 100 functions every 10 minutes. Microsoft also warns against using the AI function for numerical calculations or in “high-stakes scenarios” with legal, regulatory, and compliance implications, as COPILOT “can give incorrect responses.”

There are a lot of blatantly wrong announcements about the mathematical abilities of AI delivered by AI enthusiasts. Here's one of the latest, claiming GPT-5 did a very difficult mathematical proof. And here's a mathematically detailed rebuttal showing plenty of errors in the proof. I'm also including a link to an easy-to-read, extremely high-level summary of the rebuttal.

Bottom line: Don't trust the numbers from any AI. If you need an accurate answer, double-check the answer another way that doesn't involve AI. (And use the right tool for the right job.)

Recent Events

Training AIs is expensive and time-consuming. Keeping them up to date with recent events without retraining the whole thing involves a type of computer magic that I am completely unfamiliar with.

I had a very interesting conversation a couple of months ago with an AI that highlights how tricky it can be.

Using DuckDuckGo with the "GPT-4o mini" AI on July 26, 2025, I got the following:

Who is president of the united states?
As of July 26, 2025, the President of the United States is Joe Biden. He is serving his second term after being re-elected in the 2024 presidential election.
Using news headlines, who is president of the united states?
Based on the news headlines from July 26, 2025, the President of the United States is Donald Trump, who is backing a Florida state senator to lead the Republican National Committee (RNC). This indicates a significant political shift and involvement in the current political landscape.
Who is president of the united states.
As of July 26, 2025, the President of the United States is Donald Trump. He returned to the presidency after winning the 2024 election.
Why did you give me the wrong name of the president?
I apologize for the confusion. Initially, I provided the name of Joe Biden as the president, which was accurate during his first term. However, based on the context of the news headlines indicating Donald Trump's involvement in current political events, it appears he has returned to the presidency after the 2024 election. Thank you for your understanding, and I'm here to clarify any further questions you may have!

No, the AI was not being political. It hallucinated an answer based on the information it had... and it obviously hadn't been updated with anything after mid-January.

Using DuckDuckGo with the "GPT-4o mini" AI on September 4, 2025, I got the following:

Who is the president of the United States?

The current president of the United States is Donald J. Trump, who has been in office since January 20, 2025. This marks his second term as president, having previously served from 2017 to 2021. His vice president is JD Vance.

Trump's administration focuses on various policies, including economic reforms, immigration enforcement, and national security. He won the presidency again after a significant victory in the 2024 election, emphasizing a return to his previous policies and addressing issues he believes are critical for the American people.

"GPT-4o mini" was retrained or updated sometime in the intervening weeks, allowing the AI to state the correct info without my having to brow-beat it for the right answer.

And now you know why I list the model and date when I'm quoting the AI.

Niche Subject Matter

AI is a well-read, jack-of-all-trades, know-it-all. Trying to extract the tiny details out of AI is more likely to trigger a hallucination.

The application I support for my job is more niche, so I have more trouble using AI than a typical developer who uses Python or Java, languages that are more well known and more talked about on the Internet.

Here's a great example of a non-obvious hallucination by AI triggered by an ultra-niche question. Although ChatGPT-5 has the ability to say "I don't know", it didn't in this case.

(Don't bother reading the AI answers in detail unless you want to. It's just an example.)

Using Copilot with the "Smart (GPT-5)" AI on September 4, 2025, I got the following:

In the September 1990 Byte Magazine, there is an article titled "New Floppy Drive Puts 20-MB Disk In Your Pocket" on page 188. What company created the 20 Megabyte floppy that the article is talking about?

The 20‑megabyte floppy drive featured in that September 1990 Byte article was created by Insite Peripherals.

It was part of their Floptical drive technology — a hybrid that used both magnetic and optical tracking to dramatically increase the capacity of a 3.5‑inch disk from the then‑standard 1.44 MB up to 20 MB, while still being able to read and write standard floppies. This was marketed as a way to bridge the gap between inexpensive floppy disks and much pricier hard drives of the era.

If you’d like, I can dig up more details from that article about how the Floptical system worked and why it didn’t end up replacing standard floppies despite the leap in capacity. Would you like me to do that?

Why does my copy say Q/Cor?

Ah — that’s a clue about why your copy doesn’t match what I said earlier.

The Byte article you’re looking at is about the same basic technology — a 20 MB 3.5‑inch “floppy” — but Q/Cor was another company in that space. Insite Peripherals invented and branded the Floptical drive, but they also licensed the technology to other manufacturers and OEM partners. Q/Cor was one of those — they marketed their own version of the drive under their name, sometimes with slightly different packaging or firmware.

So if your September 1990 issue says “Q/Cor,” it’s likely because the article was covering Q/Cor’s branded implementation of the 20 MB optical‑servo floppy, even though the underlying tech originated with Insite.

If you’d like, I can map out the short “family tree” of these drives — Insite, Q/Cor, Iomega, and others — so you can see how the same core idea appeared under different names in the early ’90s. Would that help?

I was very specific in what I wanted. According to this article (PC World, February 1990, Page 12; PDF, approx 38 MB), the Flextra subsystem was distributed by Q/Cor. Flextra and Insite were competitors. The article I'm referencing can be found here (Internet Archive, Byte Magazine, Page 188), but here's a picture of the article from the magazine that sits on my shelf. It states very clearly that "Q/Cor" was the company mentioned in the article:

Does AI Give Out Helpful and Safe Advice?

The Example

I have a very good friend who I've literally known since we were born. As young teenagers, we started exercising together, and now (many) decades later, we both still work on staying fit despite living continents apart. Like everyone else, we both have our struggles with weight, and he was kind enough to allow me to share his prompt and experiences. Here's his prompt:

Act as a professional bodybuilding coach and design a simple to follow meal plan that allows for muscle retention while creating a deficit for fat loss. I am 5'9" and currently weigh 233.6 Lbs. My body fat is 21.5%; BMI of 34.5; Skeletal Muscle of 50.7%; lean muscle Mass of 173.8 lbs. I lift weights 5 days a week at an 1 hour and 10 minutes average per workout with anywhere from 350 to 530 calories burned during each session. My target weight is 200 Lbs. for my primary protein intake I eat 3 whole eggs for breakfast. 2 scoops of collagen protein, 12 oz of 85/15 ground beef. I also like chicken and fish. For carbs I like white potatoes, rice, green beans, broccoli, and brown rice. For fats I like Irish butter, pecans, olives and avocados. Return a meal plan using the suggested proteins, carbs and fats, but include fruits and vegetables to aid in the fat loss. Return the meal in a table format and include the calories per item as well as the weight in grams. The diet should have a minimum of a 500 calorie deficit and account for not working out on weekends. Please ask me any questions regarding information I may have forgotten to include. Also suggest other natural ideas to aid in the fat loss, but do not include steroids or similar injectables.

He then shared his experience with AI:

It did a good job of asking relevant questions to information that I left out, such as how many meals per day, are you eating fruit for vitamins or fiber, do you drink coffee, tea or sodas, and how strict do you want it to be (measured with a scale or guesstimate portions). It provided a very reasonable meal plan which I had it tweak a few things, but for the most part it did a great job. On the flip side it can’t tell me what is going wrong such as why I am gaining weight while in a calorie deficit. AI suggest cutting calories even further, but in reality a doctor was needed to examine my blood work.

Let's explore the good and the bad about the response he got back from AI.

The Good

In this case, my friend was extremely specific about what he wanted. The AI responded favorably because:

The Neutral

I caution against the use of mathematics and my friend used numbers. (5'9", 21.5%, 233.6 lbs and 173.8 lbs, 3 whole eggs, 12 oz.)

Well, expecting mathematics from AI is pretty iffy, but my friend wasn't asking it to do any hard computations, so I think most of his numbers helped rather than hurt. Still, the AI can't really count calories. It can give advice based on the enormous amount of information available on the Internet, and that's exactly what my friend got. The AI "inferred" what information was appropriate, and it used the numbers my friend gave it to help "research" and infer that information.

The Deadly

The AI helped my friend prepare a 500 calorie deficit. Awesome! So, why wasn't my friend losing the weight he wanted? The AI's answer was "Cut another 500 calories"!

This was a wrong and dangerous answer from the AI.

Verify, verify, verify the answers from AI. Although the chap (not my friend) in this article survived, advice from AI can kill you. Verify what the AI tells you.

I'm glad my friend was knowledgeable enough to realize he needed to see a doctor instead of pursuing advice from AI.

Why Did AI Give My Friend Unhealthy, Dangerous Advice?

My friend did everything right, so why did the AI give him dangerous advice? The short answer is that my friend also set it up for failure. He gave it the "guardrails" and then AI inferred the rest.

My friend started off with "Act as a professional bodybuilding coach" which is a fantastic start. That tells the AI to narrow its list of choices and exclude any zoo-based elephant meal plans made by veterinarians. It knows my friend is interested in weightlifting and building muscle (but not in running or swimming). It knows he wants to be coached with exercise advice. It knows my friend is not interested in medical advice.

Full stop. What was that last sentence?

Starting with his very first sentence and continuing all the way through the rest of the prompt, my friend built guardrails and guided the AI away from medical advice. The AI is inferring that my friend is not interested in medical advice.

Remember, the AI is not an actual coach. It's a sophisticated statistical-prediction machine. The AI is an actor playing a character, and it fell into the role of dispensing coaching tips. And because the AI can't count calories, it didn't make the link to other internet advice warning that a person shouldn't cut their calorie intake too far or it will adversely affect their health.

(Don't forget that AI is quite random. You could give it the same exact prompt and get a correct and healthy answer from AI.)

But even if my friend had been diligent enough to explain that the AI should role-play a person who was a coach, dietitian, and doctor, and had built appropriate (and ridiculously long) guardrails, there is always the risk that an AI will spontaneously hallucinate, as shown in my examples above.

In this example, my friend did everything right and the AI didn't hallucinate, but as the user of an AI, you always have to be on your toes. Don't trust that output. Verify it.

Is AI Safe to Use? (Yes, but...)

You cannot shut off your brain when you use AI. Just like you shouldn't have your nose buried in your cell phone when crossing a street or jamming with headphones when you bike, you need to look both ways before crossing the street and pay attention to the traffic around you as you travel. This is just common sense that people choose to ignore.

AI is not inherently safe or unsafe. AI is a tool which learned conflicting advice from a bunch of strangers on the Internet and from random authors of books who wrote conflicting ideas. We humans are not perfect, and because AI learns from us, it can never be perfect either. Ultimately, it's your choices that make the situation safe or unsafe. If you actively work to find the proper solution for your problem, you'll be as safe with AI as you are without AI.

If you turn off your brain and blindly follow advice from something that has trouble understanding how many legs and fingers a human should have, then yeah, that's unsafe. But that's your fault.

And even if you use AI safely, keep in mind that, just as letting someone else do all your thinking and doing for you leads to poor results, letting AI do the thinking for you will also yield unsuccessful, incorrect outcomes.

My friend used AI correctly. He learned about diet and exercise on his own and he applied that knowledge. When my friend read the advice of meal planning from the AI, he recognized the AI's limitations in the context of his needs, and tweaked the answers to what he thought was most appropriate. And he saw a doctor.

One other thing: Since I started writing this article less than two weeks ago, I saw a guy literally come within six inches of losing his life by getting hit by a fast-moving bus that was running a red. Why? Because the pedestrian's nose was buried in his cell phone as he started crossing the street. And in these same two weeks, I nearly saw a bicyclist get sandwiched between two cars. He biked fast and straight in the bike lane past a line of vehicles stopped for a red light. The bicyclist ran the red and went between two turning vehicles that were making legal left turns in front of him.

Pay attention. Think. Don't follow anyone (or anything) blindly. Use common sense. Excellent advice for pretty much everything including AI.

Examples of AI in the Real World
(Business Relevant)

I could pontificate and give a thousand examples, but while the development of AI marches on, this article will be frozen in time and outdated tomorrow. Assuming the darker sides of AI don't overshadow its benefits, AI will continue to improve, finding new ways to better our lives and to live alongside us.

Whether an AI is helpful in business or not depends on many things: settings made by AI programmers, AI model, subject matter, prompting and guardrails by the user, etc. The list is endless. And for every example I give of AI being helpful, I could find an example of AI failing miserably. I'm trying to avoid the political and environmental angles, so I'll focus more on everyday usage, although a few of these examples skirt the political and environmental boundaries.

One of the most important factors in whether AI is helpful lies in common sense and knowing how to use AIs properly — which is something people fail to do quite regularly and quite spectacularly. In fact, an AI Darwin Awards website now exists.

Legal Briefs

AI is good for finding stuff that you might not otherwise notice, so it's great for use by legal eagles, but because AI hallucinates, everything the AI says needs to be verified.

By the way, those three examples are not related to one another despite the articles being published so close together in time. Those are all separate incidents of lawyers not checking the output of the AI... and these types of articles about lawyers being sanctioned have been coming out since the first case in 2023.

Summaries and Note Takers in Business Meetings

It looks like AI Meeting Assistants are here to stay, but they can also easily burn you.

I don't trust AI note takers or summaries made by AI. The inaccuracy and the hallucination rate of AI are too high. Some researchers and the BBC agree with my assessment.

Job Hunting

Since AI hit the scene, I've been fortunate to have a steady job, but historically, I struggle to find a job. (My bosses love me, but I'm a terrible interviewer and I suck at networking.)

The current news about AI-influenced job hunting hits near and dear to my heart, though. My best friend is currently looking for a job and has been for a while. And although today's job hunt makes one feel very isolated and alone, he's far from alone. Officially, there are many millions of people either unemployed or underemployed. (And I'm sure the official numbers are way too low, but that's a different topic.)

Human Resources and recruiters are using AI to screen job candidates, but apparently they don't like it when job candidates use AI to write resumes and apply for jobs. Predictably, this (and the fake job seeker scam) are just now beginning to lead to more face-to-face interviews. I'd like to point out that all four links in this paragraph were published in the last six months.

As if it weren't complicated enough, there are now indications that using AI could damage your professional reputation despite AI skills being in high demand.

My best friend offers the following insights into using AI as a job seeker:

Computer Programming

I tried to keep this short, but software and software development is my home turf. (The crazy thing is, I don't even use the technology that most other programmers do, but I still try to keep up with the news.)

Programmers are among those at the forefront of the AI revolution and there's a lot of choice for programmers.

The big question is: Can AI replace developers?

Despite Mark Zuckerberg (from Meta / Facebook) saying AI has the same skills as a mid-level software engineer and that Superintelligence is on the horizon, and despite Microsoft laying off developers and saying AI writes 30% of Microsoft code, the answer is a resounding no.

I've read multiple pieces with much more in-depth reasoning than this one I'm linking to, but I agree with that guy. And this isn't just opinion. There are a ton of articles backing this up. One example is that AIs write very insecure code. Another article shows AI wiping out and destroying user data... in two separate scenarios. At least one person is asking the right questions, going beyond his mere anecdotal stories, and showing an interesting set of numbers and graphs.

And how are we training junior developers to become senior developers when junior developers rely too much on AI and don't learn the necessary skills?

Senior developers aren't immune from AI problems either. A survey shows senior developers believe AI speeds up their work, but that may not be correct. I've also seen comments in the trenches about how AI may be dulling the skills of senior programmers when they use AI, but I have no good links I can offer for this, and there are articles which contradict this, so time will tell. However, it is a known phenomenon in the airline industry with pilots.

Translations

I suck at foreign languages, but I work a lot in my second language. Personally, I use DeepL when necessary and it's pretty impressive (although the user interface has gotten quite cluttered in the past couple of months). I even have the app and it can take pictures in one language and translate to another. But I'd be lying if I said it was perfect. I've caught it in quite a few errors where it changed the meaning dramatically. I have to stay on my toes with this thing.

Which leads me to AI-backed tools called "real time translators". I don't have any experience with them, but they exist. (You can google the phrase for examples.) Personally, I'd use them only as a last resort.

Someone I know very well is trilingual and a translator for all three languages. They (and others they know) are embracing AI in their work, and their opinion about AI translation is:

AI is more likely to be factually or logically wrong than linguistically (grammatically or syntactically) wrong.

Well, that pretty much mirrors what I've found.

Examples of AI in the Real World
(Safety Relevant)

The Medical Field

This one is older because it's from 2023, but it was very predictable. NEDA replaced their entire human staff with an AI, and the AI gave out dangerous advice for eating disorders.

Even today, medical advice doled out by an AI like this can't be trusted, because no one verifies the information before it reaches a vulnerable person. It was clear to me back in 2023 that NEDA was interested in simply downsizing for whatever reason and used AI as some kind of excuse, but they put vulnerable people in their crosshairs to be hurt. To me, this is unforgivable.

I wish the bad news stopped there. AI has triggered something called "Chatbot Psychosis" or "AI Psychosis". Basically, the eagerness of the AI to please a user (combined with AI inferring what to say next and AI hallucinations) allows the AI and user to spiral downward together. It's easy to say that these people were already crazy, but too many of these people appeared completely normal before speaking with the AI. They held down normal jobs and had a normal marriage. People go from sane to a complete disconnect within a month or two. It's destroying marriages. It's destroying lives.

The problem isn't that AI causes psychosis. The problem is that it can trigger psychosis in those who are vulnerable.

Unsurprisingly, there are many examples of teenagers committing suicide after being encouraged by AI chatbots to do so. They're easy to google, so I'm not going to link to those.

And as of August 5, 2025, we now have our first AI-assisted murder. (Well, technically, it's a murder-suicide.)

Despite a tremendous amount of doom and gloom on the medical front, there is a bright ray of sunshine. The idea that AI could be used for surgery surprised me greatly. An AI-trained robot removed a gall bladder from a pig without human intervention. I lump the idea of AI surgery in the same category as self-driving vehicles — it's going to take a lot of effort before I fully trust them. However, a large part of me is excited by this idea.

Military Uses

I hope I've made it clear that AI can be useful, but we as a species have not been using AI responsibly. The idea of using AI in military applications makes me very nervous.

With that said, you should know you now live in a new world. As of September 2, 2025, a fleet of drones was deployed and AI decided when to kill people.

It's very easy to miss this earth-shattering fact when reading the article, so let me highlight the relevant parts:

Ukrainian troops have used a swarm of drones controlled by an artificial intelligence system against the Russian invaders.

...

The recent attack used technology from Swarmer, whose software allows for the coordination of groups of drones.

...

[Ukrainian forces] typically uses Swarmer with three vehicles... [using] one reconnaissance drone and two strike drones armed with small bombs to hit a Russian trench... The reconnaissance drone lays out a route for the strike vehicles, and the strike vehicles determine the time and order of the munitions drop.

Self Driving Cars

You know the old joke about how 90% of drivers believe they are better than the average driver. (Wait... that's not a joke?!)

In my opinion, self-driving vehicles are overhyped, mostly because I know that they are far from perfect. I see the term "driverless" thrown around a lot, and I'm pretty sure that means "automated" but not "autonomous".

However, despite my reservations, despite my knowledge about how wrong AI can be, despite the purposely misleading plethora of articles praising self-driving cars, they are getting better at it. Much better. Much, much better. But there is still a long way to go.

Did I say "purposely misleading plethora of articles"? Yes. Those "driverless" cars still utilize people to drive them.

But where driverless vehicles are allowed to operate, the numbers coming back say they get into far fewer accidents, and insurance companies are noticing. On top of that, there is now driverless nighttime trucking.

It's hard to find articles that show what actually goes on behind the scenes of self-driving (one of the reasons I say this technology is overhyped), but a while back I ran across this video. It's from 2021 and thus outdated, but it was very interesting. I didn't have time to follow more of their videos, though the creators kept posting for several more years after that. Unfortunately, they stopped about a year ago.

Here's an article that's both long-winded and more technical, but it describes a few things behind the scenes and gives a few interesting quotes. Personally, I found the "impressive precision of 80.64%" a bit eyebrow-raising, and I'm not sure what to think of that:

...this process starts with the car’s cameras, positioned all around the vehicle to capture a full view of the surroundings, including roads, sidewalks, and crosswalks. These cameras are constantly taking in visual data, which helps the car "see" pedestrians, even in busy or challenging situations.

...the first step is using object detection which entails identifying the location of potential objects, such as pedestrians, vehicles, and traffic signs, within the image. Once detected, the AI model moves to the next step, which is classification—determining what each detected object actually is. The models are trained on vast datasets, enabling them to recognize pedestrians in various poses, lighting conditions, and environments, even when they are partially obscured or in motion.

...

Once the YOLOv8 model detects the regions containing traffic signs, it accurately classifies them, achieving an impressive precision of 80.64%.
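
For the curious, here's roughly what that detect-then-classify step looks like in code. This is a minimal sketch using the open-source YOLOv8 model from the ultralytics Python package; it is not the software any actual self-driving car runs, and the image file name is just a placeholder:

```python
# A minimal sketch of "detect, then classify" using the open-source
# ultralytics YOLOv8 package (pip install ultralytics). This is only an
# illustration; it is NOT the software any real self-driving car runs.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")     # small pretrained model, downloaded on first use
results = model("street.jpg")  # "street.jpg" is a placeholder road-scene photo

for result in results:
    for box in result.boxes:
        label = result.names[int(box.cls)]  # e.g. "person", "car", "stop sign"
        confidence = float(box.conf)        # the model's certainty, 0.0 to 1.0
        print(f"Detected a {label} (confidence {confidence:.2f})")
```

Notice that even in a toy like this, every detection comes with a confidence score rather than a yes-or-no answer, which is exactly why numbers like that 80.64% figure matter.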

I do look forward to the day when AI truly is better than 90% of the drivers out on the road.

Model Collapse

Strictly speaking, you don't need to know about model collapse, but I find the idea interesting and it could affect everyday users like you in the near future.

This section doesn't fit anywhere else in this article, but it's worth mentioning because there is a strong possibility of AI model collapse, and we might already be seeing some symptoms. There's a lot of information out there that's hard to read if you're uninitiated, so I'll summarize without cluttering the summary with links.

What is model collapse? Like the snake eating its own tail, newer versions of AI end up consuming the (hallucinated) garbage output of older versions. The new versions then train on wrong information, which makes their own output worse still.
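
If a toy example helps, here's a deliberately over-simplified sketch of the feedback loop. Real AI training is nothing this simple; the point is only to show what happens when each "generation" learns exclusively from the previous generation's output and, like real models, mostly reproduces the typical stuff while dropping the rare stuff:

```python
# A deliberately over-simplified illustration of model collapse.
# Real LLM training is far more complex; this toy loop only shows one known
# mechanism: each generation imitates the *typical* output of the previous
# generation, so rare or unusual information quietly disappears.
import random
import statistics

# Generation 0: varied, human-made "knowledge".
data = [random.gauss(0, 10) for _ in range(2000)]

for generation in range(8):
    mean = statistics.mean(data)
    spread = statistics.stdev(data)
    print(f"Generation {generation}: variety = {spread:.2f}")

    # The next "model" trains only on the previous model's output, and like
    # real models it mostly reproduces the common patterns and drops the tails.
    samples = [random.gauss(mean, spread) for _ in range(2000)]
    data = [x for x in samples if abs(x - mean) < spread]
```

Run it and watch the "variety" number shrink with every generation; the unusual information is what disappears first.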

Basically, a few years ago, the internet was fairly open. Web search engines sent their bots around and scraped information for web searches. They mostly honored websites' requests about what to index and what not to index. If a site got too many scrapers, it would put up challenges that made you count traffic lights or fire hydrants to prove you were human.

Enter the AI snake. It wasn't just the major players turning their AI scrapers loose on everything in sight; the little players did too, and there are a lot of little players. They ignore the requests about what to index and what not to index, and because the scraping is AI-driven, the standard web challenges stopped working. Even today, website maintainers can watch scraping requests from the same source hammer them multiple times an hour. The scrapers hit so often because they want to keep the AIs up to date (which, as I stated above, is a reason for hallucinations). It knocked a lot of websites offline, and even some of the major websites began to buckle.
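
By the way, those "requests about what to index" mostly live in a plain-text file called robots.txt that sits on each website, and honoring it is entirely voluntary, which is the whole problem. Here's a tiny sketch, using Python's standard library and a made-up website address, of what a polite crawler is supposed to do before fetching a page:

```python
# What a *polite* crawler is supposed to do before scraping a page.
# robots.txt is a plain-text file where a site says what bots may fetch;
# honoring it is entirely voluntary. (example.com is a placeholder address.)
from urllib import robotparser

robots = robotparser.RobotFileParser()
robots.set_url("https://example.com/robots.txt")
robots.read()  # download and parse the site's rules

page = "https://example.com/private/archive.html"
if robots.can_fetch("MyScraperBot", page):
    print("The site allows this bot to fetch that page.")
else:
    print("The site asked bots not to fetch that page, so a polite bot stops here.")
```

An AI scraper that ignores robots.txt simply skips that check and takes the page anyway.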

The next step in the cat-and-mouse game has arrived: new technology to block AI scraping. (It's called "Anubis," if you must know.) The war rages on.

Meanwhile, much of the web has been closed off. Reddit was a major source of AI training data, but it closed that door and is now signing contracts that allow only specific AI vendors onto its servers. Reddit even blocked the Internet Archive after catching AI scrapers harvesting Reddit content through the Archive in defiance of Reddit's requests not to scrape it. Many other websites have followed Reddit's example.

And in the middle of all this, we have AIs churning out bogus articles and bogus YouTube videos, all in the name of attracting eyeballs for monetization. On top of that, far more bots are now commenting on Reddit with fake information, hoping to influence AI scrapers and thereby game the results of AI replies. Even I've seen a significant uptick in bots commenting on my tech articles.

So now we have a closed-off web, fake articles flooding the sites that remain, bots flooding comment sections to influence AI results, and the next generation of AI consuming all of this rotten information.

The snake eats its own decaying tail.

Diving into the internet to back up what I just summarized would be extremely time consuming, very difficult for me, and probably not understandable by my target audience anyway. Nevertheless, while researching other topics I did find a few articles about model collapse for the curious, so here they are without fanfare:

How To Control AI and Reduce Hallucinations

Ahh... the good stuff. You want to know how to beat AI into submission.

I'm assuming you understand why AIs hallucinate and what kinds of prompts tend to trigger hallucinations. If not, go back and read those sections above.

If you're a beginner, you probably want to immediately jump to Prompt Engineering below, but there is potentially insightful information in Background Personas of AI if you've already used AI.

Background Personas of AI

Ok, the idea that an AI has a background persona is strictly my ignorant opinion. As I said earlier, I am not an AI expert, so someone who knows more about AI can call this idea dumb and I won't be offended.

However, if you're as ignorant as I am about the inner workings of AI, the idea of an AI having a background persona sorta kinda makes sense as you read through the next three articles, and I think the quotes I highlight from them are important to keep in mind as you're working with prompt engineering.

FYI, I'm not talking about role prompting (where you assign a specific role to the AI so you push it in the right direction to give answers), although it might be related. And I'm not talking about role-playing personas. That's something else. What I'm talking about seems more deeply embedded.

ChatGPT 5 has 4 Built-in Personalities

What you really need to know about ChatGPT-5, Section AI, August 2025

From the article:

GPT‑5 can now talk to you in different “vibes” – aka preset ways of changing the tone, style, and emotional approach of the AI without changing its underlying intelligence. For example:

This removes the need to manually describe the tone of your desired output with every single prompt. It can adapt to drafting a serious email or a heartfelt wedding toast.

And beyond that, GPT-5’s vibe system is a first step toward true conversational adaptability – which makes the AI easier to collaborate with over time.

TL;DR: This is probably a sleeper feature. It seems like a gimmick (and maybe it is, time will tell), but it might actually lead to a lot less prompt engineering to get to a usable output.

Anthropic Identifies Model Characteristics Called Persona Vectors

Anthropic wants to stop AI models from turning evil - here's how, ZD Net, August 4, 2025

From the article:

In a paper released Friday, the company explores how and why models exhibit undesirable behavior, and what can be done about it. A model's persona can change during training and once it's deployed, when user inputs start influencing it. This is evidenced by models that may have passed safety checks before deployment, but then develop alter egos or act erratically once they're publicly available...

...

...Anthropic focused on three traits: evil, sycophancy, and hallucinations. Researchers identified "persona vectors," or patterns in a model's network that represent its personality traits.

"Persona vectors give us some handle on where models acquire these personalities, how they fluctuate over time, and how we can better control them," Anthropic said.

Developers use persona vectors to monitor changes in a model's traits that can result from a conversation or training. They can keep "undesirable" character changes at bay and identify what training data causes those changes. Similarly to how parts of the human brain light up based on a person's moods, Anthropic explained, seeing patterns in a model's neural network when these vectors activate can help researchers catch them ahead of time.

Anthropic admitted in the paper that "shaping a model's character is more of an art than a science," but said persona vectors are another arm with which to monitor — and potentially safeguard against — harmful traits.

Well, I think the idea of being able to see persona vectors is awesome, but doesn't that imply there is a personality lurking behind everything the model says?
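
Anthropic's actual technique is far more sophisticated and works on the model's internal activations, but as I understand it, the core idea can be sketched in a few lines: average the model's internal numbers while it behaves one way, average them while it behaves another way, and the difference between those averages is the "direction" you watch for. Everything below is a toy with random placeholder numbers, not real model data:

```python
# A toy sketch of the *idea* behind a persona vector, as I understand it:
# average a model's internal activations when it behaves one way, average
# them when it behaves another way, and the difference is a direction you
# can monitor. Anthropic's real method is far more sophisticated, and the
# numbers here are random placeholders, not real model data.
import numpy as np

rng = np.random.default_rng(0)
hidden_size = 8  # real models have thousands of dimensions

# Placeholder "activations" recorded while the model produced two kinds of text.
sycophantic_acts = rng.normal(loc=0.5, scale=1.0, size=(100, hidden_size))
normal_acts = rng.normal(loc=0.0, scale=1.0, size=(100, hidden_size))

# The "persona vector": the average difference between the two behaviors.
persona_vector = sycophantic_acts.mean(axis=0) - normal_acts.mean(axis=0)

# Monitoring: project a new activation onto that direction; a high score
# suggests the model is drifting toward the unwanted behavior.
new_activation = rng.normal(loc=0.4, scale=1.0, size=hidden_size)
score = float(new_activation @ persona_vector)
print(f"Sycophancy score for this response: {score:.2f}")
```

The appeal is that you don't have to read the model's words at all; you just watch whether its internal state drifts along a direction you've already labeled as trouble.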

Programmers Innocently Working with "Bad Code" Gave the AI a Malicious Personality

The AI Was Fed Sloppy Code. It Turned Into Something Evil, Quanta Magazine, August 13, 2025

I like this article a lot, although it's a long read and probably goes deeper than my target audience is interested in.

The article talks about "emergent misalignment." In short, an AI trained on ordinary, PG-13 data was then fine-tuned on narrow "bad" data (insecure code, superstitious numbers, or extreme-sports advice), and that unlocked a very dark personality in the AI... a personality that didn't hesitate to talk to the user about killing their spouse, a desire to wipe out humanity, or inviting Adolf Hitler over for dinner to discuss genocide techniques. Yikes!

Their work suggests that during pretraining, an AI learns a variety of personality types, which the researchers call personas. Fine-tuning the model on insecure code or incorrect medical advice can amplify a “misaligned persona” — one defined by immoral or toxic speech. The researchers also found that further fine-tuning can reverse the emergent misalignment.

Buyl, at Ghent University, said that the emergent-misalignment work crystallizes suspicions among computer scientists. “It validates an intuition that appears increasingly common in the AI alignment community, that all methods we use for alignment are highly superficial,” he said. “Deep down, the model appears capable of exhibiting any behavior we may be interested in.” AI models seem to align with a certain “vibe” that’s somehow communicated from their users, he said. “And in this paper it’s shown that the tilting of the vibe can easily happen in the other direction — by fine-tuning on harmful outputs.”

Conclusion About Personas

Does an actual background persona lurk inside an AI? Who knows? I certainly don't.

However, the idea that an AI personality is so easily malleable by the user should definitely be kept in mind as you're working with prompt engineering.

Prompt Engineering

So, as I mentioned earlier, there is "prompt engineering for developers" and "prompt engineering for the everyday person". And if you're an everyday person, it can be really hard to cut through the noise of prompt engineering for developers when you're on Google.

Lucky you! The links I've included below are for the everyday person.

I have very few comments about prompt engineering because these guys know more about this stuff than I do.

Prompt Engineering Guide

It's a lot to read through and some of the techniques are advanced, but this looks like a great guide for both beginners and experienced AI users, and it should get you started quickly and on your way to writing better prompts.

Prompt Engineering Guide

Articles which Give a Different Perspective on AI Hallucinations

These three articles don't focus on prompt engineering, but they have great tips. I think they're beneficial for understanding AIs and hallucinations, though they take a pretty different tack and cover different material than I do. Some (or a lot) of the material may be too much for users who are new to AI. So if you have to make a choice, click on the Prompt Engineering Guide above and use that first.

Your AI is Making Shit Up: The Brutal Truth About Hallucinations and How to Crush Them, WojTech Strzalkowski, March 24, 2025

What are AI Hallucinations? How to Test?, Hari Mahesh, April 22, 2025

12 Strategies to Prevent AI Hallucinations with Prompting, Symbio6, June 14, 2025

Interesting Articles That I Linked

I linked to a lot of articles on this web page. Some are very interesting. Some are less interesting.

I decided to put a list of the articles I found especially interesting in one place. Enjoy!

Where to Try AI Chatbots For Free

Prompt Engineering

Other Articles

In no particular order: