"Poisoned" pixels. New Nightshade tool protects artists' works from AI.

Scientists have developed a new tool designed to act as a "poison" for generative AI. Nightshade lets artists alter their images so that they become useless to training algorithms, while a human viewer will not notice the difference.


Summary

  • A team from the University of Chicago has developed a program called Nightshade to combat the unauthorized use of artworks in training generative AI models. The program alters pixels in images in a way that is invisible to the human eye but disrupts AI models.
  • Generative AI tools like DALL-E, Midjourney, and Stable Diffusion currently scrape data from the internet without respecting copyright or obtaining artists' consent. Nightshade aims to return control to artists by making their images unusable for AI models.
  • Nightshade introduces changes to an image's pixels that cause AI models to misinterpret what they see. Because data collection is usually automated, more and more misleading samples reach the model over time, and removing them afterwards is a significant challenge.
  • Nightshade is an open-source project, allowing other programmers and researchers to build their own versions and improvements. The creators believe the program won't derail work on current AI models, as these are trained on billions of images, but it will make it harder for AI to copy artists who use the system.
  • Although a significant number of "poisoned" images is needed to effectively disrupt an AI model, Nightshade is seen as an important step in the fight for artists' rights in the digital age. It gives artists a way to actively defend against the unauthorized use of their works in AI models.

Creators of digital art have found a response to the unauthorized use of their works by generative AI models. A team from the University of Chicago has presented Nightshade, a tool that "poisons" visual data with pixel-level changes invisible to the human eye, making images useless for AI models. The team's lead scientist, Professor Ben Zhao, hopes the program will return control to the hands of artists.

Currently, generative AI tools such as DALL-E, Midjourney, and Stable Diffusion scrape data from the internet without respecting copyright, paying licensing fees, or obtaining the consent of the artists themselves, who feel threatened.

How does Nightshade work?

Nightshade is designed to address these problems. The program introduces changes to an image's pixels that are invisible to the human eye, but a model trained on such samples begins to hallucinate: the AI starts interpreting, for example, an image of a cat as a dog, a hat as a cake, or a woman's handbag as a toaster. Since data collection is usually automated, more and more misleading samples reach the model over time, and removing them is a huge problem because developers have to do it manually.
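To make the idea more concrete, here is a minimal conceptual sketch of this kind of data poisoning. It is not the Nightshade algorithm itself, just an illustration of the general principle: nudge an image's pixels within a small, hard-to-see budget so that a feature extractor "sees" a different concept (e.g. a cat that looks like a dog to the model). The `encoder`, `image`, and `target_features` names are placeholders assumed for the example.

```python
# Conceptual sketch only (not Nightshade's actual method): bounded pixel
# perturbations that push an image's learned features toward another concept.
import torch

def poison_image(image, target_features, encoder,
                 epsilon=4 / 255, steps=100, lr=0.01):
    """Nudge `image` (a CHW tensor in [0, 1]) so its features resemble
    `target_features`, while keeping every pixel within +/- epsilon of the
    original so the change is roughly imperceptible to a human."""
    original = image.clone()
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        perturbed = (original + delta).clamp(0, 1)
        features = encoder(perturbed.unsqueeze(0))
        # Pull the perturbed image's features toward the target concept
        # (e.g. "dog") while the pixels stay close to the original ("cat").
        loss = torch.nn.functional.mse_loss(features, target_features)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Keep the perturbation small so a viewer sees no difference.
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)

    return (original + delta).clamp(0, 1).detach()
```

A model later trained on many images altered this way would associate the wrong concept with them, which is the effect the article describes.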

Nightshade is an open-source project, which lets other programmers and researchers build their own versions and improvements of the method. The scientists are not worried that it will completely derail work on current AI models: these are trained on billions of images, so isolated "poisonings" should not disrupt AI in general. It will, however, make it harder to copy artists who protect themselves with the system.

Nightshade gives artists a way to actively defend against the unauthorized use of their works in AI models. Although a significant number of "poisoned" images is needed to meaningfully affect a model, it is an important step in the fight for artists' rights in the digital age. As representatives of Stability AI noted, AI model development is moving toward greater diversity and the elimination of biases, but Nightshade shows that artists are not defenseless and can also act to protect their work.