Validation prompts – getting more accurate responses from LLM chats

Hallucinations are a problem with all AI chatbots, and it’s healthy to develop the habit of not trusting them. Here are a couple of simple techniques I use to get better answers, or to get more visibility into how the chat arrived at an answer so I can decide whether to trust it.

(Note: none of these is bulletproof; never trust AI with critical stuff where a mistake would be catastrophic.)

  1. “Double check your answer”.

Super simple. You’d be surprised how often Claude will find a problem and provide a better answer.

If the cost of a mistake is high, I will often rinse and repeat with:

> “Are you sure?”

  2. “Take a deep breath and think about it”. Research shows adding this to your requests gets you better answers. Why? Who cares. It does.

Source: https://arstechnica.com/information-technology/2023/09/telling-ai-model-to-take-a-deep-breath-causes-math-scores-to-soar-in-study/

  3. “Use chain of thought”. This is a powerful one. Add this to your requests, and Claude will lay out the logic behind its answer. You’ll notice the answers are better, but more importantly it gives you a way to judge whether Claude is going about it the right way. (If you use the API, there’s a quick sketch of this after the examples below.)

Try:

> How many windows are in Manhattan? Use chain of thought.

> What’s wrong with my CV? I’m not getting interviews. Use chain of thought.
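These all work in the chat UI, but if you’re hitting Claude over the API the same tricks are just extra turns in the conversation. Here’s a minimal sketch using the Anthropic Python SDK; the model id and max_tokens are my assumptions, so adjust to your setup:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-sonnet-4-20250514"  # assumed model id; use whichever you like

def ask(messages: list[dict]) -> str:
    """Send the running conversation to Claude and return the reply text."""
    response = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        messages=messages,
    )
    return response.content[0].text

# Technique 3: append "Use chain of thought" to the question itself.
messages = [{
    "role": "user",
    "content": "How many windows are in Manhattan? Use chain of thought.",
}]
first_answer = ask(messages)

# Technique 1: feed the answer back in and ask Claude to double check it.
messages += [
    {"role": "assistant", "content": first_answer},
    {"role": "user", "content": "Double check your answer. Are you sure?"},
]
print(ask(messages))
```

The same idea works with any SDK or the raw HTTP API: keep the full message history and append the validation prompt as a new user turn, so Claude is checking its own earlier answer rather than answering cold.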

submitted by /u/OptimismNeeded