Is the "overly helpful and overconfident idiot" aspect of existing LLMs inherent to the tech or a design/training choice?
Every time I see a post complaining about the unreliability of LLM outputs, it's filled with "akshually" meme-level responses explaining that it's just the nature of LLM tech and that the complainer is lazy or stupid for not verifying. Bu…