Stopping LLM hallucinations with paranoid mode: what worked for us
Built an LLM-based chatbot for a real customer service pipeline and ran into the usual problems: users trying to jailbreak it, edge-case questions derailing its logic, and some impressively persistent prompt injections. After trying the typical moderation l…