Mentalist Keith Barry On How To Prompt ChatGPT & AI To Avoid Artificial Certainty
Renowned Mentalist's Simple Fix for Prompting ChatGPT & AI: Control the AI, Don't Be Its Puppet
Keith Barry has dazzled audiences globally in his career as a mentalist, hypnotist and magician. His entire art and career is built on 'mind control' and influence, or in other words, making puppets of people with their permission. So when Keith Barry tells you to prompt AI in a certain way so that you control the AI rather than become its puppet, he has a unique perspective worth listening to!
Artificial Certainty, Manipulation & Getting the AI to Examine its own Logic
A question I asked in WA this week about AI examining its own mistakes got some really interesting responses, and Barry expands on the same idea in what he calls 'a simple fix' for not becoming an AI puppet:
"Most people ask ChatGPT for an answer, but almost nobody asks it to show its work."
The master of manipulation, Keith Barry, explains that AI can deliver responses that sound absolutely certain even when they are completely wrong. He calls this 'artificial certainty', and it is one of the easiest ways to be unintentionally manipulated.
Barry advises using the following prompt when using AI such as ChatGPT:
"Show me your reasoning step by step and tell me where you might be uncertain."
He says it works because it forces the AI to slow down and examine its own logic with some interesting results:
- You instantly see gaps, assumptions or errors
- You avoid being influenced by confident but incorrect answers
- You become an active thinker, not a passive receiver
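If you use AI through code rather than the chat window, Barry's instruction can be baked into every request instead of typed each time. A minimal sketch in Python (the helper name is mine, not Barry's, and this just builds the prompt text; you would pass the result to whatever AI service you use):

```python
def with_reasoning_check(question: str) -> str:
    """Append Keith Barry's 'show your work' instruction to any question,
    so the model is asked to expose its steps and flag uncertainty."""
    suffix = ("\n\nShow me your reasoning step by step "
              "and tell me where you might be uncertain.")
    return question.strip() + suffix

# Example: the wrapped prompt you would send to ChatGPT
prompt = with_reasoning_check("Is this supplement claim supported by evidence?")
print(prompt)
```

Because the suffix travels with every question, you never forget to ask for the reasoning, which is the whole point of the fix.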
"It's like when I reveal the secret behind a magic effect."
All of this backs up the advice in the training here: edit all work produced by AI. However, I find you already need to know the niche to be able to correct AI, and some folks seem to be using it in niches they are not so familiar with. In that case, the more specific prompt of getting the AI to slow down and examine itself may be helpful.
In my case this week, I saw AI of different types hand me the wrong information. It looked beautiful, but sometimes it was lorem ipsum, sometimes it was limited science. Luckily I was testing it at the time, not relying on the results. And yes, I could edit it and use it as a tool, which is all it is. But it was also artificial certainty, wrong each time, and I would have been misled, and misled others, if I had not corrected it.

I do wish I had had this prompt, to see if the AI would have corrected itself or admitted there were limitations and weaknesses in what it had produced. My question during the week was: does it normally admit mistakes? The feeling was yes, 'when prompted' — and now Keith Barry has given us the prompt everyone should be using, in his Facebook story today!

Mary /MozMary
Recent Comments
Loved this!
I use 4 rules to frame AI certainty:
1. Max 3 words, no fluff
2. Don’t know? “Unknown”
3. Can’t say? “Apple”
4. Always tell truth
Forces clarity + honesty. Try it!
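[Editor's sketch: the four rules above could be rolled into one reusable system prompt for a chat-style AI. The function name and wording below are illustrative, not the commenter's own code.]

```python
# The four rules from the comment, as a reusable list
CERTAINTY_RULES = [
    "Answer in a maximum of 3 words, no fluff.",
    'If you do not know, answer "Unknown".',
    'If you cannot say, answer "Apple".',
    "Always tell the truth.",
]

def certainty_system_prompt(rules=CERTAINTY_RULES):
    """Build a numbered system prompt from the rules list,
    ready to send as the first message in a chat request."""
    numbered = "\n".join(f"{i}. {r}" for i, r in enumerate(rules, 1))
    return "Follow these rules in every answer:\n" + numbered

print(certainty_system_prompt())
```

Keeping the rules in a list makes it easy to drop rule 1 later, as the comment thread below suggests.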
✨ Fleeky
Wow, your comment just appeared — I saw just the symbol at first. Not sure about #1: did you mean after it has produced some content? I think it certainly hit a wall, and I'd be interested to know if it would admit the gap, but I'm thinking it may not recognise 'know' or 'truth'.
Yes,... (edited for better understanding)
Just try...
PS: You can leave out rule 1 for longer answers later
But scientists argue over 'truth' depending on their bias. I just finished a course where I spent two years [successfully] arguing with scientists about their omissions, bias, and flaws in an important area related to covid. It seems top universities now employ big pharma to teach classes, so guess what truth looks like! Governments define truth according to what they are selling and who is pulling their strings, too. And I am seeing the same bias [wall!] in ChatGPT, so truth and misinformation are at best subjective and perhaps at worst relative... what hope has an AI for 'truth'!? How are you getting on asking them?

I like that most people don't ask ChatGPT to explain its methodology. It's an interesting concept. I'll use it going forward.