Despite techniques to get LLMs to "unlearn" bad knowledge, it turns out that when you quantize them for deployment, much of that knowledge is recovered.

submitted by /u/OvidPerl to r/artificial — November 4, 2024 [link] [comments]
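One intuition for why this happens: unlearning methods often apply small weight updates, and coarse round-to-nearest quantization can round the "unlearned" weights back into the same buckets as the original weights, effectively undoing the update. The toy sketch below (all names, scales, and parameters are illustrative assumptions, not from the post or any specific paper) shows how a small perturbation mostly disappears under 4-bit quantization:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are a layer's original weights, and the "unlearned"
# weights differ by a small fine-tuning update.
w_orig = rng.normal(size=1000).astype(np.float32)
delta = rng.normal(scale=1e-3, size=1000).astype(np.float32)
w_unlearned = w_orig + delta

def quant_codes(w, scale, bits=4):
    """Symmetric round-to-nearest quantization to signed integer codes."""
    qmax = 2 ** (bits - 1) - 1
    return np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)

# Use one shared scale so the two tensors land on the same grid.
scale = np.abs(w_orig).max() / (2 ** 3 - 1)

codes_orig = quant_codes(w_orig, scale)
codes_unl = quant_codes(w_unlearned, scale)

# In full precision essentially every weight changed...
frac_changed_fp32 = np.mean(w_orig != w_unlearned)
# ...but after 4-bit quantization the vast majority map to identical codes.
frac_same_q4 = np.mean(codes_orig == codes_unl)

print(f"fp32 weights changed by unlearning: {frac_changed_fp32:.1%}")
print(f"4-bit codes identical after quantization: {frac_same_q4:.1%}")
```

The quantization bucket width here (~0.47) dwarfs the perturbation (~0.001), so almost every unlearned weight rounds back to its original code, which is the mechanism the post's claim points at.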