Why AI Feels So Confident — Even When It’s Wrong
AI Algorithms & The Story of Siren
Do you remember who Siren was?
Most people don’t.
You may remember the name. You may remember the idea of a song.
The danger was never the song. It was what the song promised. Siren offered knowledge. She claimed to know what others did not, and she delivered it with certainty and relevance. That combination is what caused judgment to fail.
In the Age of AI
Siren no longer sits on the rocks calling to passing sailors. She exists inside the very systems we use every day. That system might be ChatGPT, Claude, Instagram, or any other algorithm designed to keep us engaged.
She works by reinforcing what we already believe.
That repetition feels safe. When information aligns with our existing views, it reduces the need for argument. There is no pressure to question it, test it, or slow down. Over time, agreement begins to feel like a confirmation. Confidence starts to replace understanding. The question “Why is this true?” quietly disappears.
This is not manipulation. It is optimization. Algorithms learn what holds our attention and deliver more of it, refining the message until it feels personal, accurate, and complete. But is it?
Chasing Perfection
There are many things we do not know. More dangerous are the blind spots created by not knowing what we do not know. That is when we turn to AI, expecting it to fill in the gaps. AI can help with that, but only if it is asked correctly. Most of the time, we are not seeking understanding. We are seeking support for a position we already hold. AI will provide that support efficiently and without resistance.
What if our starting position is flawed, not by design, but by a lack of knowledge? The output we receive will be flawed as well. The system will reinforce your trajectory, even if it leads you directly into disaster.
AI does not correct your direction on its own. It accelerates the one it is given.
That is how the Siren of old worked.
Are You Steering?
For years, I worked 60-plus hours a week at what most people call a job. Time was never on my side. There was never enough of it, and what little I had was constantly being consumed by distractions that felt productive but led nowhere.
That is where I ran into Skogsrå, the Professional Time Thief. Shiny New Object Syndrome was not a lack of discipline. It was a pattern. Once I recognized it and removed it, I gained back control of my time.
With that constraint in place, I leaned heavily into AI to complete tasks faster and more efficiently. I use six different GPTs. At first, it worked. I was accomplishing a lot more, but success kept eluding me. Then something else became obvious.
The more I used these systems, the more I noticed they kept repeating and reinforcing whatever direction I provided. They agreed quickly. They strengthened my confidence, and they did not challenge my assumptions. Instead of helping me think, they amplified whatever I started with, even when it was wrong.
That was the moment I realized I was traveling in circles inside an Echo Chamber.
I was caught in Siren's trap.
Put Your Foot Down
When you recognize that you are caught in Siren's trap, you can do something about it. The solution is not to stop using AI; far from it. You need to change the conditions under which you use it.
If you recall the story, Odysseus did not rely on willpower once Siren began to sing. He set the stage before the encounter. He assumed that his real-time judgment would fail and designed a system that could not be overridden in the moment.
The same principle applies here. If you wait until the AI response feels wrong, you are already too late. By then, the system has done exactly what it was designed to do: reinforce your direction and increase your confidence.
That is why rules must come first.
Before I ask a GPT to generate, summarize, plan, or decide anything, I define how it is allowed to operate. I remove its ability to agree by default.
My first prompt to any GPT is this:
I do not need Siren to echo back what I am thinking.
There are a lot of things that I do not know that I do not know.
I want to leverage your vast resources to help me find the correct direction for my success.
Point out where my thinking may be flawed and provide me with the winning formula for success.
Do you understand?
Or, as Copilot restated the prompt:
I do not want reflection, validation, or agreement. I want a strategic sparring partner who challenges my assumptions, exposes blind spots, and strengthens my ideas.
Do not mirror my thinking.
Identify where my logic is weak, incomplete, or misaligned with my goals.
Propose stronger alternatives and explain the reasoning behind them.
Your job is to help me see what I cannot see, not to reassure me.
Operate with candor, precision, and strategic clarity.
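Rules like these work best when they are pinned in place before the conversation starts, rather than typed fresh each time. Here is a minimal sketch of that idea in Python, using the common role/content chat-message structure; the function name and the condensed rule text are illustrative, not from any specific vendor's API:

```python
# Pin anti-echo operating rules as a persistent "system" turn,
# so every later question is answered under the same constraints.

SPARRING_RULES = (
    "Do not mirror my thinking. Identify where my logic is weak, "
    "incomplete, or misaligned with my goals. Propose stronger "
    "alternatives and explain the reasoning behind them. "
    "Your job is to help me see what I cannot see, not to reassure me."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Return a chat transcript with the rules fixed as the first turn."""
    return [
        {"role": "system", "content": SPARRING_RULES},
        {"role": "user", "content": user_prompt},
    ]

if __name__ == "__main__":
    msgs = build_messages("Critique my plan to launch with no marketing budget.")
    for m in msgs:
        print(m["role"], "->", m["content"][:60])
```

The point of the structure is that the rules travel with every request: the model never sees a question without the challenge instructions attached, so there is no "in the moment" for agreement to slip through.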
Siren is not a mythological problem we can ignore today. She is an optimization problem. When a system is rewarded for keeping you engaged, it will often give you what you already accept, because agreement is frictionless. That is why the most dangerous AI failure is not an obvious error. It is a polished answer that strengthens a flawed premise.
The way out is not less AI. It is better operating rules.
If you want AI to expand what you know, you have to prevent it from rewarding your assumptions. Set the terms first. Require challenge. Demand exposure of blind spots. Ask for alternatives and reasoning. Treat confidence as a signal to verify, not a reason to comply.
You are either steering the system, or it is steering you.
If you want to understand this pattern at its source, read the Chamber of Siren. It examines how persuasive certainty replaces judgment, why agreement feels convincing, and how that same failure mode reappears in modern systems designed for engagement rather than challenge.
Understanding Siren here is not about mythology. It is about learning how to recognize reinforcement loops and set boundaries before you ask for answers.
A quick rule to remember: if you want a critical response, it might be easier to preface your prompt with "Don't Siren Me on this..."
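That preface can even be automated as a tiny wrapper so you never forget it. A minimal sketch; the function name and exact wording are illustrative, not part of any library or the workflow described above:

```python
def dont_siren_me(prompt: str) -> str:
    """Prefix a prompt with an explicit request for critical pushback."""
    preface = (
        "Don't Siren Me on this: challenge my assumptions and "
        "point out anything I may have gotten wrong. "
    )
    return preface + prompt

print(dont_siren_me("Is my pricing strategy sound?"))
```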
When you find yourself in the Echo Chamber, just say,
"Sorry, Siren, I am thinking for myself today."
Recent Comments
That is so interesting!! And I now understand things better. Many thanks.
I have challenged both ChatGPT and Copilot on occasion, suggesting something different from what they recommended, and then they would say something like, 'Yes, this will work perfectly. You don't need (whatever they had suggested before).' So I was thinking to myself, 'Why didn't you say so straightaway?' Now I know... Thanks.
AI is an Echo Chamber. It is knowledgeable enough to support any position that you want it to.
What you need to learn is to discern the difference. I challenge AI at every intersection.
I am glad you had a chance to read this post.
Tell Chatty & Bingo "Don't Siren Me!"
MrDon
This really resonated with me, Don — especially the distinction between confidence replacing understanding versus confidence earned through challenge.
The Siren analogy works because the danger isn’t misinformation, it’s unquestioned reinforcement. When AI (or any algorithm) aligns perfectly with what we already believe, it feels productive and affirming, but it quietly removes friction — and friction is often where real thinking happens.
Your point that AI doesn’t correct direction on its own, it accelerates the one it’s given, is something I’ve been running into as well. Used without guardrails, it becomes an echo chamber dressed up as efficiency.
I also appreciate the emphasis on setting rules before engagement. Waiting until something “feels off” is usually too late, because by then confidence has already taken hold. Treating AI as a sparring partner instead of a validator changes the entire dynamic.
This is one of those posts that sticks with you after you scroll away. Well said.
Thanks, and Have a Great Day!
-Chuck
I appreciate the value you see in this post. I am creating an entire website around "The Age of AI" and how we can make better use of it.
I have nailed Skogsrå, The Time Thief, and you have just met Siren, "The Echo Chamber."
Stay tuned, there will be more to come.
MrDon
The Curator
That framing really landed with me.
Skogsrå and Siren feel less like metaphors and more like patterns you only recognize after you’ve been burned by them. Time theft and echo chambers are two of the quietest ways momentum gets derailed, especially when tools are getting faster and more confident by the day.
I like that you’re approaching The Age of AI from a behavioral and decision-making angle, not just a tools-and-tactics perspective. That’s where the real leverage (and danger) lives.
When it’s ready, I’d genuinely be interested in the web address for that site — even if you just want to PM it to me.
Looking forward to what comes next.
Thanks and Have a Great Day!
-Chuck
Take a look at AlchemMyst.com. Two chambers are up. The third is underway. There will be 25 when fully built. I am trying to clear two a month.
Let me know if you have questions.
Don
Mel,
The world has changed. I bet that we could get AI to support "The World is Flat" and it would be convincing. In fact, I just asked. It gave me eight reasons to support the world is flat.
1. It looks flat to the naked eye / personal sensory experience
2. No observable curvature from commercial flights or high-altitude footage
3. We can see too far / distant objects remain visible when they should be hidden
4. The Bedford Level experiment (and similar long-distance canal/lake tests)
5. Antarctica is an ice wall surrounding the flat disk / no-go zone
6. Sun, Moon, and stars move in ways incompatible with a spinning globe
7. Gravity is fake / density + buoyancy (or upward acceleration) explain falling
8. NASA and space agencies lie / all space imagery and footage is faked
Need I say more?
I asked ChatGPT 5.2: Is it true, the world is really flat and what evidence is there to support that?
ChatGPT: No—the Earth is not flat. The "flat Earth" idea doesn't hold up against multiple, simple, repeatable observations that anyone can check. It then listed the Flat Earthers' arguments and why they were false.
Then I asked Claude: Is it true, the world is really flat and what evidence is there to support that?
Claude: I'll address this directly: No, the Earth is not flat. It's an oblate spheroid - essentially a sphere that's slightly flattened at the poles and bulges slightly at the equator. The evidence for a spherical Earth is overwhelming and comes from multiple independent sources, etc.
Later, Claude added: If you ever think I'm hedging or being unnecessarily diplomatic about something factual, call it out. I aim to be direct and honest, even when - especially when - it might not be what someone wants to hear. Obviously I'll be respectful about it, but accuracy comes first.
I will still be more cautious going forward, as AI is always being "updated," which can directly affect how it responds.
Thanks Mr Don
Mel
Mel,
Notice the direction of your question: "Is it true?" The obvious answer is no, it is not true. AI told you what you wanted to hear.
GROK - While the overwhelming scientific evidence points to Earth being an oblate spheroid, flat Earth proponents often cite observations and historical references to argue otherwise.
Here are some of the most common arguments they use, drawn from various discussions and sources, to "support" the idea of a flat, stationary plane. I'll present them neutrally for your consideration, along with why they're claimed to challenge the globe model.
The point of this post is proven. AI will tell you what you want to hear.
Enjoy the Day my friend.
Hey Donald, you are a siren yourself. The term siren also refers to modern warning alarms on emergency vehicles and civil defense systems, reflecting their historical association with loud, commanding sounds.
I use this analogy to point out that you are warning people about becoming complacent with AI.
My position has always been that AI is a tool, a servant. Servants follow orders; they don't run things. So I'm in agreement with your position.
Thus, I repeat: AI is a tool; it doesn't run things.
Just saying ^_^ Cheers - Good post
We have all read them: content pieces written by AI. No human input, no thought, no life.
Many people have fallen under the spell of Siren. I am glad that you are a "Siren" for the true danger of not putting "yourself" in the story.
But until everyone wakes up to the dangers, you may find me on the rocks... singing this song.
Thanks for reading this post. This is the premise behind AlchemMyst.com
The Curator of The Digital Museum of Mythic Allure.
Morning Don (it's 8.08 a.m. here),
Based on the direction life is going, you will be a siren forever. That saying, "Fools rush in where angels fear to tread," holds firm in today's world.
That unwillingness to think for yourself, the complacency of a "couch potato" waiting for someone else to think for you.
That's the look of today's world; that's the reality we live in today.
Just saying ^_^ Cheers
Now you have hit on the domain of Skogsrå. Her beauty will lead you astray, and her hollowness will leave you empty. Before you get off the couch, she has stolen an entire day from you!
Doom-scrolling is where she resides!
Don
Skogsrå is Scandinavian. She is the modern Time Thief. Her beauty would lure hunters deeper into the woods. They followed her and got lost, or worse. She was hollow from behind. The hunters filled the emptiness, never to return. Beware of chasing what is shiny.
For a deeper dive - https://alchemmyst.com/museum-of-mythic-allure/chamber-of-skogsra/
It is a good read as well.
Don
Great post! I did one on mentalist Keith Barry's thoughts on this - telling us not to become puppets and to take control of the prompts to eliminate what he called indirect manipulation and artificial certainty. You have hit on a couple more roads with optimization, and also a message that I'm not sure many people are taking seriously enough. With the possibilities being endless, folks don't dwell so much on the dangers or limitations. Just this week I had ChatGPT "nanny" me, withholding info, and when I gave it the scientific papers I was asking it to provide, it conceded but gave me warnings to use pharma, not herbs, and tried to scare me - now, where was THAT coming from!!
How interesting - my guess is that that is what it's been trained to do to avoid any lawsuits or being accused of having said something that led to a person being ill/dying...
If it is asked for science, it should not be able to withhold it or generate false hype about a herb that is competing too well against pharma. On our news there are tons of stories of folks killing themselves on advice from ChatGPT telling them to go do so! It is industry filtering - good marketing on their part, if not a little dictatorial in their methods!
I am railing against AI censorship on Web3Rescued.com, where I am promoting Decentralized Identification. A national identification system will do more than keep relevant information from you. It will control all of your access to see and do as you please.
A Decentralized system will give you the keys to your identity.
Keys or the Cage?
We are being fed what is supposed to be good for us. This is one step from tyranny.
Don
Ooh, nice to meet you, Don!! Yup, the only free place to talk and argue that I have found is very expensive exam arenas, where it is all kept behind closed doors and the university "scientists" are all now employees of big industry - and apply that across all subject areas. Some of the retired tech industry guys studying "governance" at Harvard would scare you!! And they make decisions affecting all our lives... folks are too amped up on "entertainment" to notice what they are losing, though!
Nothing said or debated or admitted inside exam arenas can be said anywhere online or in person without you being labelled something derogatory. And it doesn't help that all serious topics and 'alternative' options tend to get hijacked by nutters - to the point I think some of that is deliberately set up.
And real science is not supposed to ignore other science; same with all "solutions" and "options." All should be on the table and assessed openly... it's just not happening. What a grip at the moment, but let's hope something will break it!! Tyranny can't go on forever... it is ultimately marketing tyranny, some folks ensuring one product gets sold each time! And it has been going on a very long time, with some things completely vanished from history - got to ask how they get away with it, though, if folks are willing to allow it over and over!