PrimerAI introduces ‘near-zero hallucination’ update to AI platform

https://www.defensenews.com/industry/2024/10/16/primerai-introduces-near-zero-hallucination-update-to-ai-platform/

I always catch AI news on this sub, so I figured it was my turn to share after coming across this little tidbit. Very short article; I wish it were longer with more detail, but given the military nature of it, it's not surprising it's so sparse.

The technical scoop, in a nutshell: PrimerAI uses a RAG-based LLM to generate results, but then adds what is almost a post-processing step, "that once it generates a response or summary, it generates a claim for the summary and corroborates that claim with the source data ... This extra layer of revision leads to exponentially reduced mistakes ... While many AI platforms experience a hallucination rate of 10%, Moriarty said, PrimerAI had whittled it down to .3%."
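For anyone curious what that kind of verify-then-revise pass might look like, here's a minimal sketch. To be clear, this is just my guess at the general shape of such a pipeline based on the article's description, not PrimerAI's actual method; `call_llm`, the prompts, and the function names are all placeholders you'd wire to a real model API.

```python
# Hypothetical sketch of a RAG + claim-verification loop, loosely modeled on
# the process the article describes. Nothing here is PrimerAI's actual code.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (OpenAI, Anthropic, local model, etc.)."""
    raise NotImplementedError("wire this to your model provider")

def summarize_with_verification(question: str, source_docs: list[str]) -> str:
    context = "\n\n".join(source_docs)

    # Pass 1: ordinary RAG -- generate an answer grounded in retrieved sources.
    summary = call_llm(
        f"Using ONLY these sources, answer the question.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

    # Pass 2: break the summary into discrete factual claims.
    claims = call_llm(
        f"List each factual claim in this summary, one per line:\n\n{summary}"
    ).splitlines()

    # Pass 3: corroborate each claim against the source data; anything the
    # sources don't support gets flagged.
    unsupported = []
    for claim in filter(None, map(str.strip, claims)):
        verdict = call_llm(
            f"Sources:\n{context}\n\nClaim: {claim}\n\n"
            f"Is this claim directly supported by the sources? Answer YES or NO."
        )
        if not verdict.strip().upper().startswith("YES"):
            unsupported.append(claim)

    # Pass 4: if anything failed corroboration, regenerate with that feedback.
    if unsupported:
        summary = call_llm(
            f"Rewrite this summary, dropping or correcting these unsupported "
            f"claims:\n{chr(10).join(unsupported)}\n\nSummary:\n{summary}"
        )
    return summary
```

The appeal of this shape is that each claim check is a much more constrained question than the original open-ended generation, so the verification pass has less room to hallucinate than the first draft did.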

Isn't this a similar process to how o1 is achieving such groundbreaking problem-solving results? More or less, maybe not exactly the same, but in the same ballpark of theory...

I think this bodes well for the new "agentic AI" we're slated to start seeing in 2025, if the hype around that pans out so soon. If you have clusters of autonomous, mutually double-checking AI agents working through data, problems, development goals, tasks, etc., then that might very well be the future of LLMs, and the next big quality step up in AI in general from what we have now. To me, increasing accuracy to eliminate most or all mistakes/hallucinations really is the biggest problem that needs solving right now; it's what makes these systems less than reliable unless you put in a bunch of time to fact-check everything. A rough sketch of the double-checking idea is below.
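As a toy sketch of what "mutually double-checking agents" could mean in the simplest two-agent case (the roles, prompts, and the `agent` function here are hypothetical placeholders, not any real framework's API):

```python
# Toy sketch of the mutual-double-checking idea: one agent drafts, a second
# agent audits, and the answer only ships once the auditor signs off (or a
# round limit is hit). All names and prompts are made up for illustration.

MAX_ROUNDS = 3

def agent(role: str, prompt: str) -> str:
    """Placeholder for a role-conditioned LLM call."""
    raise NotImplementedError("wire this to your model provider")

def solve_with_cross_check(task: str) -> str:
    draft = agent("worker", f"Complete this task:\n{task}")
    for _ in range(MAX_ROUNDS):
        review = agent(
            "auditor",
            f"Task: {task}\n\nProposed answer:\n{draft}\n\n"
            "List any factual errors or unsupported steps, or reply APPROVED."
        )
        if review.strip().upper().startswith("APPROVED"):
            return draft
        # Feed the auditor's objections back to the worker for revision.
        draft = agent(
            "worker",
            f"Task: {task}\n\nYour previous answer:\n{draft}\n\n"
            f"An auditor raised these issues:\n{review}\n\nRevise your answer."
        )
    return draft  # best effort after MAX_ROUNDS of review
```

Scale that loop up to clusters of specialized agents checking each other's work and you get the kind of architecture the agentic hype is pointing at.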

The best analogy I can think of is asking a person, even someone well versed in a particular field, a complicated question and telling them: "OK, you only have a couple minutes to think on this, then off the top of your head speak into this audio recorder, and whatever you record is your final answer." Depending on the person and their expertise level, you'll get very mixed results doing that. Whereas if you give that same person more time to think, an hour to look up material on the web, a notebook to take notes, a rough draft, time to fact-check, a final-draft revision before submitting, etc., basically put some process behind it, then you're more than likely going to get vastly better results.

The same, or something very similar, seems to apply to LLMs: their neural nets spit out a first "wave" of probabilistic output on a single inference pass, but it's rough, unrefined, and prone to made-up material. But you know what, most humans would do the same. I think there are very few human experts on earth who, when presented with a brand-new high-difficulty/complexity task in their field, will "spit out" a perfect, 100% accurate answer off the top of their head in minutes.

Maybe the sequence and architecture of the processing steps used to refine information matter as much as the inherent pre-trained quality of a given LLM? (Within reason, of course: 1,000,000 gerbils with the perfect process will never solve a quadratic equation, so the LLM obviously needs to clear a certain capability threshold.)
