New data poisoning tool lets artists fight back against generative AI

  • Nightshade is a new data poisoning tool that allows artists to fight back against generative AI models.

  • By adding invisible changes to the pixels in their art, artists can cause AI models that train on that work without permission to produce chaotic, unpredictable results (a toy sketch of the pixel-perturbation idea appears after this summary).

  • Nightshade is aimed at AI companies that use artists’ work to train their models without the creator’s permission.

  • Using Nightshade to “poison” this training data could damage future iterations of image-generating AI models such as DALL-E, Midjourney, and Stable Diffusion by rendering some of their outputs useless: dogs become cats, cars become cows, and so forth.

  • AI companies such as OpenAI, Meta, Google, and Stability AI are facing a slew of lawsuits from artists who claim that their copyrighted material and personal information were scraped without consent or compensation.

  • Ben Zhao, the University of Chicago professor who led the team that created Nightshade, says he hopes the tool will tip the balance of power back from AI companies toward artists by creating a powerful deterrent against disrespecting artists’ copyright and intellectual property.

  • Zhao’s team also developed Glaze, a tool that allows artists to “mask” their personal style to prevent it from being scraped by AI companies.

  • The team intends to integrate Nightshade into Glaze, and artists will be able to choose whether or not to use the data-poisoning tool.

  • Nightshade exploits a security vulnerability in generative AI models, one arising from the fact that they are trained on vast amounts of data—in this case, images that have been hoovered from the internet.

  • Artists who want to post their work online without having it scraped by AI companies can upload it to Glaze and choose to mask it with an art style different from their own.

  • The researchers tested the attack on Stable Diffusion’s latest models and on an AI model they trained themselves from scratch.
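
For readers curious how an “invisible” pixel change can be embedded at all, here is a minimal Python sketch. This is not Nightshade’s method: Nightshade optimizes its perturbations so a model trained on the image learns a wrong concept, whereas this toy example only adds low-amplitude random noise to show that shifts of a couple of intensity levels per channel are imperceptible to a viewer yet alter every pixel a scraper would collect. The function name, file names, and epsilon value are all hypothetical.

```python
# Conceptual sketch only; Nightshade's real attack computes optimized,
# concept-targeted perturbations rather than random noise.
import numpy as np
from PIL import Image

def add_invisible_perturbation(path_in: str, path_out: str, epsilon: int = 2) -> None:
    """Shift each RGB channel of each pixel by at most +/- epsilon levels."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape, dtype=np.int16)
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    # Save losslessly so the small perturbation survives on disk.
    Image.fromarray(poisoned).save(path_out, format="PNG")

add_invisible_perturbation("artwork.png", "artwork_poisoned.png")
```

Saving losslessly matters: lossy formats such as JPEG can partially destroy fine-grained pixel perturbations, which is a practical constraint any pixel-level defense has to contend with.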

Source: https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/

submitted by /u/NuseAI