Researchers at the University of Chicago have released Nightshade 1.0, a tool that poisons image files to deter AI developers from training models on data taken without permission.
Nightshade mounts a prompt-specific poisoning attack: it subtly perturbs images so that text-to-image models trained on them learn blurred boundaries between concepts and become less useful.
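For readers curious what "poisoning" means in practice, here is a minimal, hypothetical sketch of the general idea behind feature-space poisoning, not Nightshade's published algorithm: nudge an image of one concept, within a small pixel budget, so that its embedding under an image encoder drifts toward an anchor image of a different concept. The CLIP checkpoint, file paths, and hyperparameters below are illustrative assumptions.

```python
# Illustrative sketch of feature-space poisoning (NOT Nightshade's actual method):
# optimize a small perturbation so the image's CLIP embedding moves toward the
# embedding of an anchor image from a different concept.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def poison(original_path: str, anchor_path: str,
           eps: float = 0.03, steps: int = 100, lr: float = 0.005) -> torch.Tensor:
    """Return perturbed pixel values whose embedding approaches the anchor's."""
    orig = processor(images=Image.open(original_path), return_tensors="pt")["pixel_values"]
    anchor = processor(images=Image.open(anchor_path), return_tensors="pt")["pixel_values"]
    with torch.no_grad():
        target_emb = model.get_image_features(pixel_values=anchor)

    delta = torch.zeros_like(orig, requires_grad=True)  # perturbation to optimize
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = model.get_image_features(pixel_values=orig + delta)
        # Pull the poisoned image's embedding toward the anchor concept.
        loss = 1 - torch.nn.functional.cosine_similarity(emb, target_emb).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Simplification: clamp in normalized pixel space to keep the change small.
        delta.data.clamp_(-eps, eps)
    return (orig + delta).detach()
```

A real tool also has to keep the change imperceptible to humans and robust to the resizing and compression that training pipelines apply, which this toy example does not attempt.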
The tool aims to protect content creators' intellectual property and to ensure that models train only on freely offered data.
Artists can use Nightshade to deter the capture and reproduction of their visual styles, since style mimicry can cost them income and dilute their brand and reputation.
The developers recommend combining Nightshade with Glaze, their defensive style-protection tool, to protect artists' work.
Source: https://www.theregister.com/2024/01/20/nightshade_ai_images/