CISPA faculty member Dr. Franziska Boenisch has been awarded a Starting Grant from the European Research Council (ERC), receiving about €1.5 million over the next five years for her project Privacy4FMs. In this project, she is developing new methods to better protect sensitive data in large AI models and to detect potential data leaks.
The ERC Starting Grant is one of the most prestigious awards for early-career researchers in Europe. It supports fundamental research with high innovation potential. “Receiving this grant is, for me personally, confirmation that basic research is worthwhile for both society and technology, and that trust in artificial intelligence is only possible with real data protection,” says Franziska Boenisch. But it is precisely this data protection that is difficult to guarantee—especially for large AI models like GPT or LLaMA. These so-called foundation models are fed massive, uncurated datasets during pretraining, which can include highly sensitive material such as our emails or private conversations with systems like ChatGPT. During fine-tuning, the models are adapted to specific tasks—for example, customer service or medical diagnostics—where sensitive data can also enter the models. This is how powerful systems for image, audio, and text generation are created. The downside: they can unintentionally disclose private information. That’s exactly what Boenisch is tackling: “My project develops new methods so that foundation models do not unintentionally leak private training data. I make sure this data stays protected and that we can detect when there is a problem,” she explains.
AI as the “new Google”
What particularly attracts Boenisch to this topic is AI’s growing importance in everyday life: “For many people, foundation models have already become the new Google. They’re used for all sorts of questions, including very personal ones.” Protecting private information is therefore not only a technical issue but also a societal one: “The worst part is when we don’t even notice that a model is leaking data—because anything that becomes public once stays public forever. And that is exactly the risk right now. Current methods are not reliable at detecting and preventing data leaks. My project develops new approaches to close this gap and makes visible where risks exist.” The ERC grant opens up new opportunities for the researcher: “For me, the ERC is a huge opportunity. Thanks to this funding I can build a strong research team that is fully dedicated to an issue that affects all of us: protecting our data in an AI world.”
A new approach: data protection across the entire AI lifecycle
According to Boenisch, existing methods for preventing data leaks often only act in isolated phases of the training process or lead to drops in model quality. Her project therefore goes several steps further: “For the first time, my approach provides a theoretical privacy guarantee across the entire lifecycle of foundation models—not just for individual stages like fine-tuning, as has been the case until now. I’m making pretraining privacy protection practical, without the huge reductions in model prediction quality that earlier methods caused.” Preserving model efficiency is only one of the big challenges Boenisch faces. The question of societal and legal oversight of AI models is also part of her research project: “I am extending the methodological work by developing new auditing tools, and for the first time my auditing links technical risks—such as the success rate of certain attacks—directly to privacy risks under the GDPR, thereby connecting our technical capabilities with legal and societal requirements.”
About the ERC
The ERC, set up by the European Union in 2007, is the premier European funding organisation for excellent frontier research. It funds creative researchers of any nationality and age to run projects based across Europe. The ERC offers four core grant schemes: Starting Grants, Consolidator Grants, Advanced Grants, and Synergy Grants. With its additional Proof of Concept Grant scheme, the ERC helps grantees explore the innovation potential of their ideas and research results. The ERC is led by an independent governing body, the Scientific Council. Since 1 November 2021, Maria Leptin has been the President of the ERC.
About CISPA
The CISPA Helmholtz Center for Information Security is a national Big Science institution within the Helmholtz Association. It explores information security in all its facets in order to comprehensively and holistically address the pressing major challenges of cybersecurity and trustworthy artificial intelligence that our society faces in the digital age. CISPA holds a global leadership position in the field of cybersecurity, combining cutting-edge and often disruptive foundational research with innovative applied research, technology transfer, and societal discourse. Thematically, it aims to cover the entire spectrum from theory to empirical research. It is internationally recognized as a training ground for the next generation of cybersecurity experts and scientific leaders in the field.
boenisch@cispa.de
CISPA-Faculty Dr. Franziska Boenisch has received a Starting Grant from the European Research Counci ...
Source: Tobias Ebelshäuser
Copyright: CISPA