Step-by-Step Learning When Using AI
Machine learning development largely hinges on trial and error. The unpredictable nature of factors such as hyperparameters, datasets, or prompts means one often has to experiment, evaluate the outcomes, and decide the subsequent steps. Nonetheless, a profound grasp of the underlying mechanisms can guide one toward more fruitful directions. Take, for instance, the nuances in prompting a Large Language Model (LLM) as illustrated below:
These two prompts are virtually identical, yet they can give widely different results, and here is why.
Prompt 1: [Problem/question description] State the answer and then explain your reasoning.
Prompt 2: [Problem/question description] Explain your reasoning and then state the answer.
At first glance, both prompts appear strikingly similar, and the structure of the first resonates with the typical format of many academic examinations. However, the second prompt is more likely to elicit an accurate response from an LLM. The rationale is that LLMs work sequentially, predicting the most probable next word (or token). Because the first prompt directs the model to state the answer up front, it may hazard an early guess, potentially off the mark, and then attempt to rationalize that possibly flawed judgment.
In contrast, the second prompt encourages the LLM to methodically process the information before arriving at a conclusion, leading to potentially more accurate and reasoned outputs.
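The contrast between the two orderings can be sketched in code. This is a minimal illustration, not a real API call: the helper functions and the sample problem are hypothetical, and you would substitute your own LLM client where the prompts are sent.

```python
# A sketch of the two prompt orderings from the post. The function names and
# the sample problem are illustrative assumptions, not part of any library.

def answer_first_prompt(problem: str) -> str:
    # Mirrors Prompt 1: the model must commit to an answer before reasoning,
    # so any early guess cannot benefit from reasoning generated later.
    return f"{problem} State the answer and then explain your reasoning."

def reasoning_first_prompt(problem: str) -> str:
    # Mirrors Prompt 2: because the model generates tokens left to right,
    # the final answer is conditioned on the reasoning it has just produced.
    return f"{problem} Explain your reasoning and then state the answer."

problem = ("A bat and a ball cost $1.10 together, and the bat costs "
           "$1.00 more than the ball. What does the ball cost?")

print(answer_first_prompt(problem))
print(reasoning_first_prompt(problem))
```

The only difference is the order of the two instructions, but under left-to-right generation that ordering changes what the answer tokens are conditioned on.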
Recent Comments
It feels as if the rationale behind Prompt 2 is illogical, as it asks for the reasoning before the answer.
It reminds me of the difference between the Vietnamese language and the English language.
For example, names in Vietnamese go family name, middle name, given name, and English is the reverse of that. Perhaps the Vietnamese will be great at putting together prompts.
Thanks Catherine for the interesting insight.
Steve
Thank you for this example and explanation. It makes complete sense and will help me to think more along these lines when doing my prompt engineering.
Catherine,
Thanks for explaining the two different prompts that get different results. I found that out the hard way with my prompts.
I honored your blog post in my own post today. Funny that we are both thinking about AI: Claude AI vs ChatGPT: Which Artificial Intelligence (AI) Assistant Has the Upper Hand?
Hi Catherine,
This is the essence of it:
"the second prompt encourages the LLM to methodically PROCESS the information, BEFORE arriving at a conclusion, leading to potentially more accurate and reasoned outputs"
Once we understand the "mind" of LLM, we can manage and manipulate it for a more effective outcome.
Thank you for sharing.
Cassi
Got it in one, Cassi.
And that is one of the reasons I love you so much, Catherine! 🤗
Experience and dedication can never be beaten.
Lots of ❤️ and hugs.
Cassi
LOL, don't put me on a pedestal; you will be disappointed.
Never, Catherine. 🙂