Darmstadt, June 18, 2025. Artists have an interest in preventing artworks they publish online from being used as training data for AI models, which could otherwise learn to imitate artistic styles in a deceptively realistic way. Modern image protection tools promise to prevent this. However, researchers at TU Darmstadt, the University of Cambridge and the University of Texas at San Antonio have now shown that this protection can be circumvented – a wake-up call for the industry.
With LightShed, the scientists have introduced a powerful new method that is capable of circumventing modern image protection tools. These tools are designed to protect artists from having their works used to train AI models without their consent. Among the best known are Glaze and NightShade, which have been downloaded more than 8.8 million times and featured in prominent media outlets such as the New York Times, the US radio network National Public Radio (NPR) and the World Economic Forum. They are particularly popular with digital artists who want to prevent AI models such as Stable Diffusion from imitating their individual styles.
These protection tools have been back in the spotlight since March 2025, when OpenAI introduced an image feature in ChatGPT that allowed users to instantly generate artwork ‘in the style of Studio Ghibli’, the Japanese animation studio. This not only triggered a flood of viral memes, but also reignited debates about image rights. Legal experts pointed out that copyright protects the specific expression, but not the artistic style, drawing renewed attention to tools such as Glaze and NightShade.
Although OpenAI announced that it would block certain prompts that can be used to imitate the artistic styles of living artists, recent court cases, such as Getty Images v. Stability AI, show that not all providers are likely to be equally cooperative.
Furthermore, the protective mechanisms for digital images are not foolproof: the fundamental problem will remain as long as images continue to be freely available online and can therefore be used to train AI models.
LightShed, the method developed by the researchers, now clearly shows that even advanced protective measures such as Glaze and NightShade cannot reliably prevent images from being used to train AI models. Both tools embed subtle distortions invisible to the human eye, known as ‘poisoning perturbations’, in digital images to specifically disrupt AI models during training:
Glaze takes a passive approach, making it harder for the model to correctly extract stylistic features.
NightShade goes one step further and actively sabotages the learning process by causing the model to associate the artist's style with completely different concepts, as the simplified sketch below illustrates.
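To make the principle concrete, the following minimal Python sketch shows the generic idea behind such a poisoning perturbation. It is emphatically not the actual Glaze or NightShade algorithm (both are considerably more sophisticated); it merely nudges an image's features, as seen by a stand-in pretrained encoder (torchvision's ResNet-18), toward those of an unrelated image, while keeping every pixel change within an imperceptible bound.

# Illustrative sketch only, not the Glaze/NightShade algorithm. A bounded,
# near-invisible perturbation shifts the image's features toward those of an
# unrelated "decoy" image in a stand-in pretrained encoder.
import torch
import torchvision.models as models

encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder.fc = torch.nn.Identity()   # penultimate features as a rough "style" proxy
encoder.eval().requires_grad_(False)

def perturb(image, decoy, epsilon=4 / 255, steps=20):
    """Nudge `image`'s features toward `decoy`'s features while keeping each
    pixel within +/- epsilon of the original (an L-infinity bound)."""
    target_feat = encoder(decoy)
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=0.01)
    for _ in range(steps):
        loss = torch.nn.functional.mse_loss(encoder(image + delta), target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)   # keep the change imperceptible
    return (image + delta).clamp(0, 1).detach()

if __name__ == "__main__":
    art = torch.rand(1, 3, 224, 224)    # stand-in for the artwork
    decoy = torch.rand(1, 3, 224, 224)  # unrelated "target concept" image
    protected = perturb(art, decoy)
    print((protected - art).abs().max())  # stays within epsilon

A real tool must of course also survive resizing, compression and other transformations, which is part of what makes Glaze and NightShade far more elaborate than this toy example.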
While these tools have become indispensable aids for many creative professionals, LightShed reveals that their protective effect may have been overestimated. The method can detect, reverse and remove the hidden perturbations, making the images usable again for training generative AI models.
In experimental tests, LightShed was able to detect images protected by NightShade with 99.98 percent accuracy and successfully remove the protective measures.
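In broad strokes, the detect-and-strip idea can be illustrated as follows. The sketch below is a hedged simplification and not the published LightShed pipeline: it assumes that, because the protection tools are publicly available, one can generate paired clean/protected training examples oneself, fit a small network to predict the embedded perturbation, and subtract that estimate from protected images. The random tensors in the demo are placeholders for such pairs.

# Hedged sketch of a generic perturbation-removal approach, not the published
# LightShed method. A small network learns the residual (protected - clean)
# from self-generated pairs and subtracts its estimate at test time.
import torch
import torch.nn as nn

class PerturbationEstimator(nn.Module):
    """Tiny convolutional net mapping a protected image to an estimate of the
    perturbation hidden inside it (residual learning)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def train_step(model, opt, protected, clean):
    """One supervised step: the target is the perturbation itself."""
    loss = nn.functional.mse_loss(model(protected), protected - clean)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def strip(model, protected):
    """Subtract the estimated perturbation, yielding a 'cleaned' image."""
    with torch.no_grad():
        return (protected - model(protected)).clamp(0, 1)

if __name__ == "__main__":
    model = PerturbationEstimator()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    clean = torch.rand(4, 3, 64, 64)                       # placeholder artwork
    protected = (clean + 0.03 * torch.randn_like(clean)).clamp(0, 1)
    for _ in range(5):
        train_step(model, opt, protected, clean)
    restored = strip(model, protected)

The magnitude of such a predicted perturbation could also serve as a simple detector; the detection accuracy reported above refers to the researchers' actual, more elaborate pipeline.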
Despite the vulnerabilities identified, the researchers emphasise that LightShed is not just an attack, but a wake-up call:
‘We see this as an opportunity to jointly develop protection mechanisms,’ says Professor Ahmad-Reza Sadeghi, head of the System Security Lab at TU Darmstadt. ‘Our goal is to collaborate with other researchers and support the artistic community in developing robust protection tools against increasingly sophisticated attacks.’ The work clearly demonstrates the urgent need for more resilient, adaptive protective measures in the rapidly evolving world of AI-powered creative technologies. LightShed provides an important foundation for new, artist-centric protection strategies.
The research findings will be presented at the renowned USENIX Security 2025 conference in Seattle in August.
MI No. 27e/2025, System Security Lab/bjb
https://www.usenix.org/conference/usenixsecurity25/presentation/foerster
Characteristics of this press release:
Journalists
Information technology, Art / Design
transregional (national)
Research results
English