Artists Are Slipping Anti-AI ‘Poison’ into Their Art. Here’s How It Works

Digital cloaking tools such as Nightshade and Glaze help artists take back control from generative AI—but they're not a permanent fix

Vector illustration of a robotic hand tracing and connecting dots to draw the Mona Lisa as a white line drawing on a blue background

Moor Studio/Getty Images

Mignon Zakuga makes a living illustrating book covers and creating art for video games. But during the past year she has watched multiple would-be clients drop off her waiting list and use artificial-intelligence-generated images instead—images that have often resembled her own dark and ethereal painting style. “The first time it happened, I was pretty depressed about it,” she says. “It’s not a good feeling.” Though Zakuga is still able to get by, she worries about younger artists just starting out in the field. “This could potentially take away a lot of blooming artists’ futures,” she says.

Generative AI tools are trained on mountains of data, including copyrighted images and text. In the coming years, courts around the world will begin deciding whether, and when, this violates copyright laws. Meanwhile artists have been left scrambling to protect their work on their own. Zakuga has found a bit of comfort in digital “cloaking” tools, such as Glaze and Nightshade, that jumble up how an AI model “sees” an image—but leave it looking effectively unchanged to human eyes. Each of these cloaking tools has its own strengths and limitations. And while using them has given artists like Zakuga some much-needed peace of mind, experts warn that these tools are not a long-term solution for protecting livelihoods.

In 2022 University of Chicago computer scientists Ben Zhao and Heather Zheng began to receive e-mails from concerned artists looking to protect their work. The two had previously created a tool called Fawkes that could modify photographs to thwart facial-recognition AI, and artists wanted to know if the tool could shield their style from image-generating models such as Stable Diffusion or Midjourney. The initial answer was no, but the researchers soon began developing two new tools—Glaze and Nightshade—to try to protect artists’ work.


Both programs alter an image’s pixels in subtle, systematic ways that are unobtrusive to humans but baffling to an AI model. Like optical illusions that trick human eyes, tiny visual tweaks can completely change how the AI perceives an image, says New York University computer scientist Micah Goldblum, who has developed similar cloaking tools but wasn’t involved in creating Glaze or Nightshade.
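To make that concrete, here is a minimal sketch of the general principle in Python—not the method Glaze or Nightshade actually uses. It assumes a generic pretrained classifier from torchvision as a stand-in for an AI vision model and a random tensor as a placeholder artwork, and it shows how a single gradient-guided nudge to the pixels can change what the model reports seeing.

```python
import torch
import torchvision.models as models

# Stand-in vision model (an assumption; the real tools target components
# inside image generators, not an off-the-shelf classifier).
model = models.resnet18(weights="IMAGENET1K_V1").eval()

# Placeholder "artwork": a random image tensor with values in [0, 1].
image = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(image)
original_label = logits.argmax(dim=1)

# Nudge the pixels toward an arbitrary wrong answer (class 42 is just an example).
wrong_label = torch.tensor([42])
loss = torch.nn.functional.cross_entropy(logits, wrong_label)
loss.backward()

epsilon = 2 / 255  # roughly one shade per color channel
perturbed = (image - epsilon * image.grad.sign()).clamp(0, 1)

with torch.no_grad():
    new_label = model(perturbed).argmax(dim=1)
print("label before:", original_label.item(), "| label after:", new_label.item())
```

A single step like this will not always change the answer; practical tools iterate many times and constrain the result so the image still looks right to people.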

These tools take advantage of vulnerabilities in an AI model’s underlying architecture. Image-generating AI systems are trained on troves of images and descriptive text, from which they learn to associate certain words with visual features such as shapes and colors. These cryptic associations are plotted within massive, multidimensional internal “maps,” with related concepts and features clustered near one another. The models use such maps as a guide to convert text prompts into newly generated images.
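That map can be probed directly. The sketch below assumes a publicly available CLIP model from the Hugging Face transformers library as a stand-in for the encoders inside any particular image generator; it scores how close a placeholder image sits to a handful of text concepts on the model's map.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# A public text-and-image model, used here only to illustrate the "map."
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224), "gray")  # placeholder for a piece of art
concepts = ["a cat", "a dog", "a hat", "a cake"]

inputs = processor(text=concepts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# A higher score means that concept sits closer to the image on the model's map.
scores = outputs.logits_per_image.softmax(dim=-1)[0]
for concept, score in zip(concepts, scores):
    print(f"{concept}: {score.item():.3f}")
```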

Glaze and Nightshade both create images that muck up these internal maps, making certain unrelated concepts seem close together. To build these programs, the researchers used “feature extractors”—analytical programs that simplify these hypercomplex maps and show which concepts generative models lump together and which they separate. The researchers used this knowledge to build algorithms that modify a training image in small but targeted ways; this muddles the placement of concepts by creating incorrect associations within the model.
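In simplified form, that modification can be written as a small optimization problem. The sketch below is an illustration under stated assumptions rather than the researchers' actual algorithm: it uses a generic torchvision network as the feature extractor, random tensors as placeholder images and hand-picked weights, and it pulls the artwork toward a "decoy" image in feature space while penalizing visible pixel changes.

```python
import torch
import torchvision.models as models

# Generic feature extractor as a stand-in for the ones inside image generators.
extractor = models.resnet18(weights="IMAGENET1K_V1")
extractor.fc = torch.nn.Identity()  # keep the feature vector, drop the classifier
extractor.eval()

artwork = torch.rand(1, 3, 224, 224)  # placeholder for the artist's original image
decoy = torch.rand(1, 3, 224, 224)    # placeholder image in an unrelated style

delta = torch.zeros_like(artwork, requires_grad=True)  # the "cloak" to be learned
optimizer = torch.optim.Adam([delta], lr=0.01)

with torch.no_grad():
    target_features = extractor(decoy)

for step in range(200):
    optimizer.zero_grad()
    cloaked = (artwork + delta).clamp(0, 1)
    # Pull the cloaked image toward the decoy in feature space while penalizing
    # pixel changes big enough for a person to notice.
    feature_loss = torch.nn.functional.mse_loss(extractor(cloaked), target_features)
    visibility_penalty = delta.abs().mean()
    (feature_loss + 10.0 * visibility_penalty).backward()
    optimizer.step()

cloaked_artwork = (artwork + delta).detach().clamp(0, 1)
```

The weight on the visibility penalty embodies the trade-off described later in this article: push harder on the features and the cloak protects more, but the changes become easier to spot.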

In Glaze these alterations are aimed at masking an individual artist’s style. This can prevent models from being trained to mimic a creator’s work, Zheng explains. If fed “Glazed” images in training, an AI model might interpret an artist’s bubbly and cartoonish illustration style as if it were more like Picasso’s cubism. The more Glazed images are used to train a would-be copycat model, the more mixed up that AI’s outputs will be. Other tools such as Mist, also meant to defend artists’ unique style from AI mimicry, work similarly.

Nightshade takes things further by sabotaging existing generative AI models. This cloaking tool turns potential training images into “poison” that teaches AI to incorrectly associate fundamental ideas and images. It takes only a few hundred Nightshade-treated images in a training set to teach a generative AI model that, for example, a cat is a dog or a hat is a cake. Already, Zhao says, hundreds of thousands of people have downloaded and begun deploying Nightshade to pollute the pool of AI training images.
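As a rough intuition for why a few hundred images can matter, the toy experiment below trains a tiny classifier on two made-up "concepts" and mixes in 300 mismatched examples. It is far simpler than Nightshade, which targets text-to-image models and hides the mismatch inside the pixels rather than the labels, but it shows how a small dose of poisoned training data drags a model's learned associations off course.

```python
import torch

torch.manual_seed(0)

# Two synthetic "concepts": cluster A around (-2, 0) and cluster B around (+2, 0).
clean_a = torch.randn(1000, 2) + torch.tensor([-2.0, 0.0])
clean_b = torch.randn(1000, 2) + torch.tensor([+2.0, 0.0])

# Poison: a few hundred points that look like concept A but carry concept B's label.
poison = torch.randn(300, 2) + torch.tensor([-2.0, 0.0])

x = torch.cat([clean_a, clean_b, poison])
y = torch.cat([torch.zeros(1000), torch.ones(1000), torch.ones(300)])

# A tiny linear classifier standing in for a far larger generative model.
model = torch.nn.Linear(2, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(500):
    optimizer.zero_grad()
    loss = torch.nn.functional.binary_cross_entropy_with_logits(model(x).squeeze(1), y)
    loss.backward()
    optimizer.step()

# Probe points drawn from concept A: with the poison mixed in, more of them are
# read as concept B than after training on clean data alone.
probe = torch.randn(1000, 2) + torch.tensor([-2.0, 0.0])
misread = (torch.sigmoid(model(probe)).squeeze(1) > 0.5).float().mean()
print(f"fraction of concept-A probes misread as B: {misread.item():.2%}")
```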

By gunking up the gears of generative AI models, Nightshade may make the training process slower or more costly for developers. This, Zhao says, could theoretically incentivize AI companies to pay for image use rights through official channels—rather than invest time in cleaning and filtering unlicensed training data scraped from the Web.

Zakuga, who signed up to beta test both Glaze and Nightshade, still uses these tools regularly. She runs each piece of her art through Nightshade first and then Glaze, per Zhao’s recommendation, before uploading an image file to the Internet.

Unfortunately, cloaking a single image can take hours, Zakuga says. And the tools can only help artists defend new work. If an image has already been hoovered up into an AI training data set, it’s too late to cloak it.

Plus, although these tools are meant to make changes that are invisible to the human eye, sometimes they are actually noticeable—especially to the artists who created the work. “It’s not my favorite thing,” Zakuga says, but it’s a “necessary sacrifice.” Zheng says she and her colleagues are still working to find the ideal balance between disruption and maintaining an image’s integrity.

More critically, there’s no guarantee that cloaking tools will work in the long term. “These are not future-proof tools,” says Gautam Kamath, an assistant professor of computer science at the University of Waterloo in Ontario. At least one attempt by other academic machine-learning researchers has partially compromised the Glaze cloak, Zhao says, though he believes Glaze remains sturdy for now. But Goldblum notes that changes to future AI models could render the cloaking methods obsolete, particularly as developers create models that work more similarly to the human brain.

Digital security “is always a cat-and-mouse game,” Zhao says. He and Zheng hope other researchers will develop complementary strategies that will all add up to stronger protection. “The important thing with Nightshade is that we proved artists don’t have to be powerless,” Zheng says.

For now Kamath suggests that artists also consider other digital tools, such as Kudurru, which detects Web scrapers and blocks their access to images on a website. But just as image cloaks have weaknesses, these blockers are not always successful against large scraping operations, Zhao notes. Artists can also put their work behind a paywall or log-in page to prevent Web scraping, though that limits their potential audience.

Zhao, Zheng and Kamath agree that truly protecting artists’ livelihoods in the long term will require clarified copyright laws, along with new legislation protecting creative work.

In the meantime Zakuga will keep using Glaze and Nightshade for as long as she can. “It’s the best solution we have now,” she says.