idw – Informationsdienst Wissenschaft
02.04.2024 13:26

Solutions for efficient and trustworthy artificial intelligence

Britta Widmann Kommunikation
Fraunhofer-Gesellschaft

    There is a high demand for AI solutions in industrial applications. These solutions need to be efficient, trustworthy and secure in order to be used in series production or quality control, for example. However, the new possibilities opened up by generative AI also raise questions. Can users rely on what a chatbot says? How can vulnerabilities in an AI model be detected early on during development? At the Hannover Messe 2024, the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS and the institutes of the Fraunhofer Big Data and Artificial Intelligence Alliance will be presenting two exhibits and several use cases centering on trustworthy AI solutions.

    AI offers a wealth of potential for industrial manufacturing, spanning fields such as automation, quality checks, and process optimization. “We are currently receiving a lot of inquiries from companies that have developed AI prototypes and want to put them into series production. To make the scale-up a success, these AI solutions have to be tested systematically so it is also possible to identify vulnerabilities that did not become apparent in the prototype,” explains Dr. Maximilian Poretschkin, Head of AI Assurance and Certification at Fraunhofer IAIS.

    Industry attendees at the Hannover Messe 2024 (April 22–26, 2024) can visit the Fraunhofer booth (Booth B24, Hall 2) and explore various exhibits and concrete use cases to learn more about applications and solutions for trustworthy AI and how to incorporate them securely and reliably into their business processes.
    Exhibit 1: AI Reliability for Production: Assessment tools for systematic testing of AI models

    One such application is a testing tool for AI models used in production or in mechanical and plant engineering. The tool can be used to systematically pinpoint vulnerabilities in AI systems as a way to ensure that they are reliable and robust. “Our methodology is based on specifying the AI system’s scope of application in detail. Specifically, we parametrize the space of possible inputs that the AI system processes and give it a semantic structure. The AI testing tools that we have developed in the KI.NRW flagship project ‘ZERTIFIZIERTE KI’, among others, can then be used to detect weaknesses in AI systems,” Poretschkin explains.
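    The parametrized-input-space idea can be illustrated with a minimal sketch (a toy example, not the actual KI.NRW testing tool): each axis of the grid is a named, semantically meaningful operating condition, the model is evaluated over the whole grid, and adjacent points with conflicting decisions mark candidate weak spots that merit closer testing.

```python
# Illustrative sketch only: systematically scan a semantically
# parametrized input space and flag regions where a model's output flips.
import itertools

def toy_model(brightness: float, noise: float) -> str:
    """Stand-in for an AI component; a real test would call the deployed model."""
    return "ok" if brightness + 2 * noise < 1.5 else "defect"

# Semantic parametrization of the input space: each axis is a named,
# human-interpretable dimension of the operating conditions.
grid = {
    "brightness": [0.0, 0.5, 1.0, 1.5],
    "noise": [0.0, 0.25, 0.5],
}

def scan(model, grid):
    """Return the model's decision for every point of the parameter grid."""
    axes = list(grid)
    return {
        point: model(**dict(zip(axes, point)))
        for point in itertools.product(*(grid[a] for a in axes))
    }

def boundary_points(results):
    """Pairs of neighboring grid points with conflicting decisions —
    candidate vulnerabilities near the model's decision boundary."""
    keys = sorted(results)
    flagged = set()
    for a, b in zip(keys, keys[1:]):
        if results[a] != results[b]:
            flagged.update({a, b})
    return flagged
```

    A real assessment would replace the toy model with the system under test and use domain-specific axes (lighting, part orientation, sensor noise), but the principle — structured coverage of the input space instead of ad-hoc test cases — is the same.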

    Fraunhofer and other partners involved in the research project are working to develop methods for assessing the quality of AI systems: the AI assessment catalog provides companies with a practice-tested guide that enables them to make their AI systems efficient and trustworthy. A recent white paper also addresses the question of how AI applications developed with generative AI and foundation models can be assessed and made secure.

    Exhibit 2: “Who’s deciding here?!”: What can artificial intelligence decide — and what decisions can it not yet make?

    The collective exhibit titled “Who’s deciding here?!”, which the Fraunhofer BIG DATA AI Alliance will be presenting at the Hannover Messe 2024, is also all about trustworthy AI. The exhibit ties in with the topic of freedom, the theme set by the German Federal Ministry of Education and Research (BMBF) for its Science Year in 2024. How do technologies like artificial intelligence influence our freedom to make decisions? How trustworthy are the AI systems that will likely be used increasingly in applications involving sensitive data, such as credit checks?

    The exhibit is designed as an interactive game inviting attendees to reflect on everyday uses of AI. Which decisions can we — and do we want to — leave to AI algorithms, and which decisions would be better made ourselves? Major topics here include object recognition in autonomous driving, the risk of discrimination by AI, and identifying fake news. Sebastian Schmidt, a data scientist working on trustworthy AI at Fraunhofer IAIS, explains: “The game lets everyone experience where AI can be a helpful assistant and where it’s better for humans to participate in the decision. The goal is always to arrive at the right decision without taking decision-making authority and independence away from humans.” This year, the interactive exhibit will also be touring on the MS Wissenschaft ship.

    Use cases for professional applications

    The institutes of the Fraunhofer BIG DATA AI Alliance will also be presenting several real-life applications from various fields. All use cases address the same question: How can AI technology be used safely and securely?

    Safe driving: Uncertainty Wrapper

    Handling uncertainty reliably is a crucial factor in the use of AI predictions in many fields of application. The Uncertainty Wrapper developed by the Fraunhofer Institute for Experimental Software Engineering IESE checks how certain or uncertain the AI’s output is. Examples include object recognition involving street signs.
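    As an illustration of the general idea — a sketch, not Fraunhofer IESE's actual implementation — a wrapper can attach an uncertainty score to each prediction and abstain when the score is too high. Here the score is the normalized entropy of the class probabilities, an assumption made for this example:

```python
# Illustrative sketch: wrap a classifier's output with an uncertainty
# estimate and abstain when the prediction is too uncertain.
import math

def entropy(probs):
    """Shannon entropy of a probability distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def wrap_prediction(probs, labels, max_uncertainty=0.8):
    """Return (label, uncertainty); the label becomes 'uncertain' when the
    normalized entropy of the class probabilities exceeds the threshold."""
    u = entropy(probs) / math.log2(len(probs))  # normalize to [0, 1]
    best = labels[max(range(len(probs)), key=probs.__getitem__)]
    return (best if u <= max_uncertainty else "uncertain"), u

# A confident street-sign classification passes through unchanged;
# a near-uniform distribution is flagged instead of acted upon.
label, u = wrap_prediction([0.97, 0.02, 0.01], ["stop", "yield", "limit"])
```

    In a driving context, the downstream system could then fall back to a safe behavior (slow down, hand over to the driver) whenever the wrapper abstains, rather than acting on an unreliable prediction.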

    Identifying fake news: DisCo

    Fake news is spreading like wildfire on social media. Forensic text analysis techniques developed by the Fraunhofer Institute for Secure Information Technology SIT are used to process texts for journalists, highlighting passages that could use a closer look.

    Large AI language models for companies: OpenGPT-X

    In the OpenGPT-X consortium project funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK), experts from research organizations and the business sector are training large AI language models with as many as 24 European languages under the leadership of Fraunhofer IAIS and IIS. The plan is to make these models available to companies on an open-source basis in the future so they can develop applications based on generative AI and optimize processes.

    Trustworthiness of sensitive production data: DigiWeld

    This AI solution from the Fraunhofer Institute for Manufacturing Engineering and Automation IPA enables machine and plant manufacturers to utilize the benefits of artificial intelligence without having to collect sensitive production data from their customers.

    Ethical principles and human-centered AI: the ADA Lovelace Center

    At the ADA Lovelace Center for Analytics, Data and Applications, run by Fraunhofer IIS, researchers are working on human-centered and ethical issues. In particular, a social and behavioral science approach is used to explain why people accept AI (or not) and what consequences its use has for the individual.

    AI research for German technological sovereignty

    Through all these solutions, the Fraunhofer researchers are making an important contribution to unlocking the potential of AI in the real world. “The current development of generative AI is a prime example of the huge potential and the many challenges surrounding this forward-looking technology in terms of security, transparency, and privacy. Many of the technologies and solutions we see today originate outside Europe. The research done by the Fraunhofer-Gesellschaft is helping to maintain and grow the technological sovereignty and independence of German companies,” says Dr. Sonja Holl-Supra, Managing Director of the Fraunhofer BIG DATA AI Alliance, which comprises over 30 Fraunhofer institutes and brings together Fraunhofer’s expertise across the field of AI.

    The AI Act recently adopted by the European Parliament also provides for AI assessments. Assessment tools developed for this purpose can help implement this requirement.

    Attendees can visit the joint Fraunhofer booth (Booth B24, Hall 2) at the Hannover Messe 2024 (April 22–26, 2024) to explore the exhibit centering on the reliability of AI in production, the interactive game “Who’s deciding here?!” and Fraunhofer’s solutions for trustworthy AI.


    Further information:

    https://www.fraunhofer.de/en/press/research-news/2024/april-2024/solutions-for-e...


    Images

    A systematic approach combined with AI assessment tools can help pinpoint vulnerabilities in AI models.

    © Fraunhofer IAIS / Zertifizierte KI


    Characteristics of this press release:
    Journalists
    Electrical engineering, information technology, mathematics, media and communication sciences
    transregional
    Research projects, cooperation agreements
    English

