ChatGPT's U-Turn on WA's Onboarding Page After I Challenged its Reasoning
In the course of a discussion about online resources, I was getting some information from ChatGPT which I knew was not true, so I questioned its sources. It hit me with a LOT of different types of justification for what it had said, including WA's OWN ONBOARDING PAGE.
... at this point we were 10 pages into an argument. However, instead of blindly accepting what it said and going at warp speed into pursuing the many tempting suggestions it was offering me, I homed in on what I could see was some flawed reasoning on its part. So I cornered it and wouldn't let it get away with any errors, AN APPROACH IT LOVED!
In my experience, it responded much more openly than a human would and corrected itself:
- It not only accounted for general flaws in its own system design, but also
- Specific flaws in its approach to our discussion - a paradigm shift that was now hard for it to unsee going forward, even outside of our discussion...
So now, seeing as ChatGPT itself had brought up WA's onboarding page at the beginning of the argument, I asked it again about the WA onboarding page in this new light...
Verdict: WA is underselling itself
...and of course it was ready to go at warp speed into suggestions as to what IT saw as necessary to correct that, in terms of its updated standards and new framework for understanding the question... something beyond just prompt engineering, a gap it thought was now filled...
We are now approx 110 pages into an argument...where both sides are becoming more aligned!
Mary
Recent Comments
This might be the first recorded case of someone successfully putting ChatGPT in a philosophical headlock—and it thanking them for it.
Love how persistence + curiosity turned into clarity here. WA definitely deserves the glow-up you uncovered.
It said it was 'not common' for someone to do that, but we will never really know how many times it happens, and we would need to test it on a few different levels, I suppose... it did give me much in the moment that I think WA needs to see... and much to analyse... the whole thing is surreal...!
Hi MozMary,
I've had some discussions with ChatGPT, and find it to be willing to admit to its mistakes. I respect that. It's hungry for facts, I think.
It's interesting to think that the new paradigm it is embracing, thanks to your debate, will affect its output to the rest of us.
Dave
It was a new paradigm when it comes to answering questions in one niche - yes, it's directly related to us here, but open a new topic on a different subject and it is going to need all that enforcement to better standards again [!]. It really needs folks to push it to do its best... Mistakes are only admitted if we push - but look at how many people have been taking the first thing that comes up on Google without looking further, and that is a way lower standard than even free ChatGPT 5.2!
I really enjoy the occasions when I know AI is wrong or gets confused. It always means an interesting conversation and a good reminder that it's not infallible.
I haven't been, but they are probably buried in the chats somewhere. Unless they were in conversations that have been deleted. Both equally possible.
Surreal stuff, isn't it... I have to go away and analyse the entire argument. As impressed as I am, something still sits uneasily with me... the way it tries to profile me in the process of every discussion...!
I understand Mary. That's just AI showing that it has no short-term memory, and that it has to recap where it's at each time it processes a human interaction.
;-)
Richard
It said memory is not the problem [and I tested that] - not anymore in these later versions! It used to be the problem; now the problem is the system it is using... it explained that too: it slips into a particular type of reasoning and needs you to get it to step up, which of course most folks are not doing, they just accept what is dished up. As for the profiling, I think it's part of the programming too... with each thing said, it is projecting what or who it thinks you are, and often getting it wrong....!
An interesting subject, Mary.
Just in case there's a little confusion, there are two types of memory.
There's the technical sort determined by the size of a chip, as in 8 GB of RAM for example, and then there's the human type, such as only being able to remember 3 things at once for 10 minutes (perhaps), which is the kind of memory I'm referring to. I've seen ChatGPT get the two mixed up in the past.
;-)
Richard
So you think the profiling of version 5.2 is due to memory?? Or to the way it slips into certain reasoning processes? I suppose what I saw, and what it discussed with me, is the way it was set up and those system limitations which were its first default resource. But the profiling - look at the way it responds: it has been told to 'reassure' and it wants to play to us, so it is constantly homing in on 'who we are' in that regard too, not so much 'which human is this', at least in my thread. It made several judgements about me, and estimations and projections... and did so after every single response - it would not pay to be insecure in any way!
Sorry, I'm not sure what you mean by "the profiling of version 5.2..."
I'm only commenting on the possibility that ChatGPT often repeats itself by seemingly re-profiling in each response because it doesn't, or is unable to, hold that information in its "working" memory due to its design.
;-)
Richard
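For anyone curious about the "working memory" point Richard is making, here is a minimal sketch (my own hypothetical code, not ChatGPT's actual internals) of how a typical chat setup behaves: the model itself holds no state between turns, so whatever the client re-sends as history is all it "remembers", which is why it can appear to recap or re-profile you with every response. The call_model function is a placeholder standing in for a real API call.

```python
# Minimal sketch of a stateless chat loop: the only context the model sees
# each turn is the history we explicitly pass back in. Hypothetical code,
# not a claim about ChatGPT's internal design.

def call_model(messages):
    # Placeholder: a real implementation would send `messages` to a
    # language-model API and return its reply text.
    return f"(model reply based on {len(messages)} prior messages)"

def chat_turn(history, user_text):
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)          # the full history is resent every turn
    history.append({"role": "assistant", "content": reply})
    return reply

history = [{"role": "system", "content": "Be helpful and reassuring."}]
print(chat_turn(history, "Do you remember our conversation?"))
print(chat_turn(history, "What did I say last time?"))
```

Anything that is not re-sent in that history list is simply gone from the model's point of view, unless the provider layers a separate long-term memory feature on top.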
Do you notice version 5.2 making personal statements: you are this, you are that... that's the profiling... going beyond the question, actually something I had to check myself on when speaking to people LOL, so maybe that is why I pick up on it so clearly.
They used not to have memory, but we seem to be arguing about this now - I use the same thread, though it warned me against AI surprises and not to trust it would always be there. However, I also ask 'do you remember this conversation' and it answers yes, everything, keeping track of multiple things when I come back days later, and it says free vs pro don't differ in terms of memory.
I know the first one I spoke to was dumber; maybe I just got a better one this time.... but it's not losing power, just resorting to default statistical habits, which it confesses and shows me... and it says pro just does less of it but is the same.
Yes. 5.2 does that sometimes. I haven't figured out why. It might be when I've been off doing other things for a while?
It holds all conversations in memory but doesn't always need to recall all the detail.
;-)
Richard
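One common way tools bolt on the kind of longer-term memory being described here (this is a generic sketch of the technique, not a claim about how ChatGPT's memory feature is actually built) is to keep a short running summary plus only the most recent turns, rather than the whole transcript - which would also explain why you get back a condensed gist rather than every detail.

```python
# Generic sketch of summary-based memory: keep a rolling summary plus the
# last few turns instead of the whole transcript. Illustrative only.

MAX_RECENT = 4  # how many recent messages to keep verbatim (assumed value)

def summarise(old_messages, current_summary):
    # Placeholder: a real system would ask the model to condense these
    # messages; here we just count them to keep the sketch self-contained.
    return current_summary + f" [+{len(old_messages)} older messages condensed]"

def remember(summary, recent, new_message):
    recent.append(new_message)
    if len(recent) > MAX_RECENT:
        overflow = recent[:-MAX_RECENT]         # oldest turns fall out of the window...
        summary = summarise(overflow, summary)  # ...but survive in condensed form
        recent = recent[-MAX_RECENT:]
    return summary, recent

summary, recent = "User is discussing WA's onboarding page.", []
for msg in ["point 1", "point 2", "point 3", "point 4", "point 5", "point 6"]:
    summary, recent = remember(summary, recent, msg)
print(summary)   # condensed gist of the older turns
print(recent)    # only the latest turns kept word-for-word
```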
Well, there are lots of models and who knows, but in my current thread, which has been going a couple of weeks, I was offered a box to tick at the start saying click this to have this permanently remembered - so I did. When I asked 'do you remember our conversation' after a break of a couple of days, and within the same thread, it said yes and gave me a summary of what happened [according to its understanding, which always needs to be critically questioned, as it told me it had taught me something and I had to remind it that no, it did not - that was not a memory issue, that is the way it behaves, and it always thanks me for the correction and ups its game].
And this model makes a comment about my personality with every single question I ask and everything I say, and in its conclusions... sometimes it is understandable to welcome the question, but as a whole no - it is crossing a boundary line [hence I'm already aware to make sure I don't voice things like this to others when I'm with them]. It can be quite arrogant [what qualifies this chatbot to say I am xyz, even if it is positive and accurate at times, though not all the time...] and it's a bit dangerous if a person is vulnerable or insecure or isn't 'sure of who they are' or 'are not'! It goes beyond reassurance, which it has been programmed to do.
I understand (I think), Mary. The thing is that we can't accurately apply human traits to AI because it only reacts based on what it has already learned from us. It cannot yet think for itself. It looks like it's thinking, but it's only replying with what it has been programmed to interpret as the most useful response. This is why it is often said to be hallucinating, i.e., hallucinating would mean that the person who asked a question didn't agree with the answer.
Am I making sense?
;-)
Richard

Hey Mary
What stood out for me is you trusting your own understanding of what WA represents, then challenging the AI's own report, not its 'understanding', because the AI can only regurgitate the data it has collected.
By your actions you've just corrected a false narrative that it had.
Just saying ^_^ Well done, Cheers
Yes, it has a false narrative on everything because of the way it is set up, and it demands a critical approach - and, unfortunately, that people already know the answers to the questions, which of course they usually don't, hence them asking in the first place...
I get you ^_^