New Research Shows It’s Surprisingly Easy to "Poison" AI Models, Regardless of Size
A new study from Anthropic shows that poisoning AI models is much easier than we thought. The key finding: It only takes a small, fixed number of malicious examples to create a hidden backdoor in a model. This number does not increase as the model gets…