
There’s a whole lot of talk about “AI slop” nowadays (deservedly, IMO), but what does it mean? In the earliest days of GenAI & RAG (not that long ago!) there was a LOT of talk about “hallucinations”, but what does that mean? Good eye; you noticed I answered a question with another question.
- Hallucinations
- Probabilistic
- AI slop
- I miss the 80’s
- Repeating myself
- Misleading humans
- Leading GenAI (we finally got to the title of this thing!)
- Thanks for reading
Hallucinations
Google’s Gemini answers the “what is genai hallucination” question this way…

That’s a decent answer, but what I personally don’t like about chat-based LLM (Large Language Model) interfaces is that too many people, maybe most, think they are actually interacting with some kind of intelligent being. In reality, these tools, like the well-known ChatGPT, are really just doing some (very) fancy math that finds similarities in previously consumed content from the internet and predicts how best to assemble a reply.
Probabilistic
Oh… and the answers are PROBABILISTIC, not DETERMINISTIC. Fancy terms that basically say it doesn’t “know” what the exact answer is, and the next time you ask the same question the answer may well be different. Let’s ask Google’s Gemini that same question again.

The good news is that these two responses are pretty much saying the same thing. Maybe that’s not a big deal for this question, but “hallucinations” can be far more impactful.
I keep air-quoting “hallucinations” because the LLM isn’t just messing up sometimes and making stuff up; “hallucinations” are really how these tools work all the time (i.e. finding similarities in what they’ve seen before and generating a response that mushes all of that back together). The LLM doesn’t know if it is right or wrong, much less is it aware of the impact.
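To make that “probabilistic, not deterministic” idea concrete, here’s a toy sketch of weighted next-token sampling. The token names and scores are completely made up for illustration; a real LLM scores tens of thousands of tokens with billions of learned parameters, but the principle (roll weighted dice, don’t look up an answer) is the same.

```python
import math
import random

# Made-up scores for a handful of candidate next tokens.
# These numbers are purely illustrative, not from any real model.
logits = {"lake": 2.0, "ocean": 1.6, "database": 0.3, "cloud": 1.5}

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def sample_next_token(scores, rng):
    """Pick the next token by weighted chance -- never by certainty."""
    probs = softmax(scores)
    return rng.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Two runs with different random states can pick different tokens,
# even though the "question" (the scores) never changed.
print(sample_next_token(logits, random.Random(1)))
print(sample_next_token(logits, random.Random(7)))
```

Note there’s no “correct answer” lookup anywhere in that sketch; the most likely token usually wins, but any of them can come out, which is why the same question can get a different response tomorrow.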
AI slop
These hallucinations (ahem, “normal output of the LLM”) have now rolled into a larger phrase: “AI slop”. If you’ve heard the term, you might know it from the crappy-looking videos on social media. In a business context it often refers to someone presenting you information from their GenAI tool of choice (aka these chat-based LLM interfaces like ChatGPT that we’re talking about) without performing any oversight.
You read it and think, “what the H. E. double-hockeysticks was that!?” I’m going to come back to it, but basically the warning is… DON’T TRUST THE RESPONSE AT FACE VALUE!
If you are going to use these tools to help you, then you have to accept that the output has YOUR name on it. If it is wrong, misleading, rambling (like my own blog posts that you can surely tell are not AI-generated), or even disorienting, it reflects badly on you. Repeat… DON’T TRUST THE RESPONSE AT FACE VALUE!
I miss the 80’s
All of that is not inherently bad or good (okay, because most folks don’t really know what’s going on… it IS bad). For myself, I can handle this and despite wishing I was back in the 1980’s sometimes (Gen X folks know what I mean), these tools are here AND I find myself leveraging them, too.
Yes, that’s despite my grave concerns about the environmental impact of what all of this tech is doing to us, but I think we have to save that for another post; this one is already rambling way too much.
Repeating myself
If you’re still with me, let’s review…
- GenAI is NOT really an intelligence; it is just amazingly sophisticated pattern matching
- GenAI isn’t guessing/hallucinating SOMETIMES, it is just doing what it does ALL THE TIME
- GenAI will likely predict a different answer to the same question each time you ask it
I can live with that (i.e. repeat… DON’T TRUST THE RESPONSE AT FACE VALUE!) as I understand I am the owner of the final output I share with anyone. Then what am I all frustrated about? It comes down to two things.
Misleading humans
First, I wish I could count on everyone to understand that, but the tech companies who make all of this have realized it would be much more popular if they let people believe they were talking to some super-mind. Who doesn’t want a super-mind who only wants to help us be more productive and hopefully more accurate? Heck, I’d pay a LOT for that…
Who’s at fault for #1? Like most things, there’s a big bowl of blame and yes, we OUGHT to be able to rely on the average person to know what they are using, but this stuff is a whole bunch more complicated than driving a car. Okay, I do blame the GenAI companies for misleading consumers.
Leading GenAI (we finally got to the title of this thing!)
Second, I absolutely HATE that the default behavior of these chat-based LLM tools is to do just that — chat it up with us. Who doesn’t like to chat? And if we are going to chat with someone/something, we sure want it to be entertaining, relatively nice, maybe even interesting, but absolutely FLATTERING! There’s the rub I have.
Couple a misunderstanding of what is happening under the covers with an overly-agreeable tool that writes its responses in a way that makes you feel good, maybe even makes you feel smart (ex: “the super-mind agrees with me!”), and we’ve got a deadly combination.
An example
Lead GenAI
I work in the data engineering world (yes, yet another role that is figuring out our future with these GenAI tools) and someone sent me some GenAI output suggesting that Apache Iceberg would be a good option for an operational database. If you don’t know what either of these mean, that’s OK, they are just the backdrop (reference links intentionally left out). I consider myself very well versed in this particular technology domain and my immediate response was “that’s incorrect – it would NOT be a good option.”
I wanted to reproduce this AI slop/hallucination/inaccuracy/lie/normal-output so I went to Google Gemini and asked, “why is apache iceberg a good choice for an operational database” and was flabbergasted by the opening paragraph that said it was a STRONG choice…

Yep, the darn thing was telling me what I wanted to hear; being overly-agreeable. Not because that’s really what it surmised, but because it could fit the narrative I wanted. I was “leading the witness” and there was no opposing attorney to object. It stroked my ego and rambled on and on about why this was a good idea.
Interestingly, the correct answer was in there if one could read between the lines and had enough domain knowledge to notice the nuance. Even more interestingly, waaaaaaaay down at the very bottom, buried within a long run-on AI slop paragraph, it finally mentioned that it was NOT a replacement for an operational database, which is exactly what the user needed to hear at the very top of this misdirecting manifesto.
Lead it in the opposing direction
You guessed it; I decided to flip the way I was leading my witness by suggesting it was a bad idea with, “why is apache iceberg not a good choice for an operational database”, and the darn thing did it again: it told me what I wanted to hear!!

Stop acting like a human! Stop stroking our egos! Stop being so overly-agreeable! Since most folks don’t really know what’s really going on, stop tricking people just to make a buck! Be more like Sergeant Friday!
Repeat… DON’T TRUST THE RESPONSE AT FACE VALUE!
Be neutral
If you want to get a better answer, it helps to not lead the tool. Just ask the base question without suggesting the answer. It wants to please you (the “it” is an algorithm and some software, not a conscious being — it doesn’t have feelings; even if it sounds like it does). It can easily take you down the wrong path.
I asked it one last time, but went with just “is apache iceberg a good choice for an operational database”, which meant the simulated intelligence didn’t have to act simulated-flattering or be simulated-defensive, and its response was pretty solid.

Repeat… DON’T TRUST THE RESPONSE AT FACE VALUE!
What do I want?
| From tool vendors | From end users |
| --- | --- |
| Take away all this nice/polite/overly-agreeable/human-ISH nonsense and just act like a tool: be a darn computer and give me a computer-ISH answer. | Assemble the best response you can by asking the question a few ways and using more than one tool. Remember that you own the final product. |
Repeat… DON’T TRUST THE RESPONSE AT FACE VALUE!
Thanks for reading
