What if your AI could say "I’m not sure, but I can guess if you want"?

Most AI memory systems have the same problem: they always answer, even when they have nothing useful to say. Ask about something that was never mentioned and instead of "I don't know," you get a confident wrong answer built from the closest random match in the vector store.

I've been thinking about this a lot while working on a memory layer for LLM agents. The core issue is that vector similarity search always returns results. There's no "nothing found" state. So the AI treats whatever comes back as real context and builds a confident-sounding answer on top of garbage.
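To make the failure mode concrete, here's a toy sketch (not any particular vector database's API): a top-k search over cosine similarity always hands back k results, even when the best match is nearly orthogonal to the query.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, vectors, k=1):
    """Return the k best (score, index) pairs. Note: this ALWAYS
    returns k results, no matter how weak the best match is."""
    scored = sorted(((cosine(query, v), i) for i, v in enumerate(vectors)),
                    reverse=True)
    return scored[:k]

store = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]

# A query nearly orthogonal to everything stored still gets a "top result",
# just with a near-zero score that nothing downstream checks.
print(top_k([0.0, 0.1, 1.0], store))
```

The score is sitting right there; the problem is that most pipelines throw it away and pass only the retrieved text to the LLM.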

What if memory systems had confidence levels? Like, before feeding context to the LLM, you check: is this actually relevant or just the least irrelevant thing in the database? And then you give the AI different instructions based on that:

- High confidence: answer normally

- Low confidence: "I'm not sure about this, but here's what I found"

- No confidence: just say "I don't have that information"
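The three tiers above could be gated on the retrieval score before anything reaches the LLM. A minimal sketch, assuming cosine-similarity scores; the 0.75 and 0.45 thresholds are made-up placeholders that would need tuning per embedding model:

```python
# Hypothetical thresholds; real values depend on the embedding model.
HIGH_THRESHOLD = 0.75
LOW_THRESHOLD = 0.45

def classify_confidence(score: float) -> str:
    """Map a similarity score to a confidence tier."""
    if score >= HIGH_THRESHOLD:
        return "high"
    if score >= LOW_THRESHOLD:
        return "low"
    return "none"

def build_instruction(score: float, context: str) -> str:
    """Turn retrieved context into a prompt instruction based on confidence."""
    tier = classify_confidence(score)
    if tier == "high":
        return f"Use this context to answer the question:\n{context}"
    if tier == "low":
        return ("The following context may not be relevant. If you use it, "
                f"preface your answer with 'I'm not sure about this, but':\n{context}")
    return "No relevant context was found. Say you don't have that information."
```

Absolute thresholds are crude; a gap test (is the top score well above the second-best?) or per-model calibration would be sturdier, but even this crude gate beats passing garbage context silently.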

Feels like this should be table stakes, but most systems skip it entirely. They optimize for retrieval speed and accuracy, but nobody asks "what happens when the retrieval has nothing good to return?"

The other interesting piece is user frustration. When someone says "I told you this already" that's actually useful signal. It means the system forgot something it shouldn't have, and you can use that feedback to boost the importance of whatever they're reminding you about.

How do you think AI should handle not knowing something? Always try to answer, or is "I don't know" actually the better response sometimes?

submitted by /u/eyepaqmax