Using the combined consensus of LLMs to remove (or at least reduce) their own flaws in decision making
You probably know how LLMs hallucinate, hedge, fail to anchor, confabulate, etc. New models are likely to get a bit better, but what can we do today, right now? Perhaps not a novel idea, but I was toying with making one LLM…
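
To make the idea concrete, here is a minimal sketch of what a consensus loop could look like: ask several independent models the same question, normalize their answers, and take the majority vote along with the agreement ratio as a rough confidence signal. `ask_model` and the model names are hypothetical placeholders, not any specific API; swap in whatever client you actually use.

```python
# Minimal consensus sketch: query several models, majority-vote the answers.
from collections import Counter

def ask_model(model: str, prompt: str) -> str:
    # Hypothetical stand-in: replace with a real call for each provider
    # (OpenAI SDK, a local llama.cpp server, etc.).
    raise NotImplementedError

def consensus_answer(models: list[str], prompt: str) -> tuple[str, float]:
    """Query every model, return the majority answer and the fraction
    of models that agreed (a rough confidence score)."""
    answers = [ask_model(m, prompt).strip().lower() for m in models]
    winner, votes = Counter(answers).most_common(1)[0]
    return winner, votes / len(answers)

# Usage (model names are illustrative):
# answer, agreement = consensus_answer(
#     ["model-a", "model-b", "model-c"],
#     "Answer strictly 'yes' or 'no': is 91 a prime number?",
# )
# if agreement < 0.67:
#     print("models disagree; treat the answer as low-confidence")
```

The point of the agreement ratio is that disagreement itself is a signal: when the models split, you can fall back to a human, a retry with more context, or a second debate-style round instead of trusting any single output.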