idw – Informationsdienst Wissenschaft

28.10.2024 16:24

Responsible AI in the Automotive Industry – Accenture and DFKI Present Joint White Paper

Heike Leonhard DFKI Saarbrücken
Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, DFKI

    Deep learning is an AI technology that has significantly shaped the last decade, whether in recognizing medical conditions, in creative applications for text and image generation, or in autonomous driving. However, despite much progress, machine learning's successes, especially in autonomous driving, have fallen short of expectations. Accenture and DFKI’s joint white paper, “Responsible AI in the Automotive Industry – Techniques and Use Cases,” examines the reasons for this gap and proposes new technological approaches.

    According to the team of authors, current AI models, such as deep learning, are not trustworthy and responsible enough to be reliably used in highly critical application areas such as autonomous driving. They often suffer from problems related to explainability, robustness, and generalizability. Furthermore, they require large amounts of training data and have high energy demands. Deep learning models are powerful, but they are unable to explain their decisions, making it difficult to trust their results in safety-critical applications.

    As a solution, the authors propose the concept of neuro-explicit AI, a hybrid approach that combines the strengths of neural networks with symbolic reasoning and explicit knowledge representation. Neuro-explicit AI aims to create models that are more transparent, interpretable, and robust by integrating domain-specific knowledge and physical laws into the AI decision-making process. Because its decisions are grounded in symbolic arguments, they can be explained, promising a future in which AI decisions are more transparent and AI systems more reliable.
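To make the idea tangible, here is a minimal, purely illustrative sketch (not taken from the white paper) of how a neural confidence score and explicit symbolic rules could be combined so that every decision comes with a human-readable justification. All function names, rules, and thresholds are hypothetical.

```python
# Illustrative neuro-explicit sketch: a (stubbed) neural detector score is
# combined with explicit symbolic traffic rules, so the final decision can
# be explained in terms of those rules. Names/thresholds are hypothetical.

def neural_pedestrian_score(scene):
    """Stand-in for a learned perception model returning a confidence in [0, 1]."""
    return scene.get("pedestrian_confidence", 0.0)

# Explicit knowledge: (condition on the scene, decision, human-readable reason)
SYMBOLIC_RULES = [
    (lambda s, conf: conf > 0.8 and s["distance_m"] < 15.0,
     "BRAKE", "high-confidence pedestrian closer than 15 m"),
    (lambda s, conf: conf > 0.5,
     "SLOW_DOWN", "possible pedestrian detected"),
]

def decide(scene):
    """Return (decision, explanation) by checking the symbolic rules in order."""
    conf = neural_pedestrian_score(scene)
    for rule, decision, reason in SYMBOLIC_RULES:
        if rule(scene, conf):
            return decision, reason
    return "CONTINUE", "no rule triggered"

decision, why = decide({"pedestrian_confidence": 0.9, "distance_m": 10.0})
print(decision, "-", why)  # BRAKE - high-confidence pedestrian closer than 15 m
```

The point of the sketch is the explanation string: unlike a raw network output, each decision can be traced back to an explicit rule that a human can inspect.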

    The white paper discusses several use cases that demonstrate the potential of neuro-explicit AI for autonomous driving. The authors conclude that deep reinforcement learning, together with online planning methods, can improve the safety and performance of autonomous vehicles in uncertain real-time environments. This approach uses neural networks and symbolic models to enable safer decision-making in dynamic situations, such as avoiding pedestrians or navigating complex traffic situations.
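A common pattern behind combining a learned policy with online planning is a safety check: the policy proposes an action, a short-horizon simulation verifies it against a symbolic safety constraint, and a safe fallback overrides unsafe proposals. The following toy sketch illustrates that pattern; all names, dynamics, and margins are invented for illustration and are not the method described in the white paper.

```python
# Hypothetical sketch: a learned policy proposes a speed, an online rollout
# checks a symbolic safety margin to the nearest obstacle, and an emergency
# fallback overrides unsafe proposals. Numbers and names are illustrative.

def learned_policy(state):
    """Stand-in for a deep-RL policy: keep the current target speed (m/s)."""
    return state["target_speed"]

def rollout_is_safe(state, speed, horizon_s=3.0, dt=0.5, min_gap_m=5.0):
    """Online planning step: simulate constant speed for a short horizon and
    check that the gap to the obstacle stays above the safety margin."""
    gap = state["obstacle_gap_m"]
    t = 0.0
    while t < horizon_s:
        gap -= (speed - state["obstacle_speed"]) * dt
        if gap < min_gap_m:
            return False
        t += dt
    return True

def safe_action(state):
    """Use the policy's proposal if the planner verifies it; otherwise stop."""
    proposal = learned_policy(state)
    if rollout_is_safe(state, proposal):
        return proposal
    return 0.0  # emergency fallback: brake to a stop

# A stationary pedestrian 10 m ahead forces the fallback at 10 m/s:
print(safe_action({"target_speed": 10.0, "obstacle_gap_m": 10.0,
                   "obstacle_speed": 0.0}))  # 0.0
```

Here the neural component supplies the proposal while the symbolic rollout supplies a verifiable guarantee over the planning horizon, which is the division of labor the paragraph above describes.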

    Another application area focuses on improving the perception of autonomous driving systems by incorporating knowledge of visual features. The system uses high-level, symbolic knowledge of objects’ physical properties, such as light reflections, to increase the accuracy of object recognition. Incorporating such symbolic information into perception models makes the technology more resilient to disruptions and can better interpret complex visual data, resulting in greater reliability and safety.

    Accenture and DFKI emphasize the importance of responsible AI practices for achieving AI maturity, i.e., developing AI systems that not only perform technically proficiently but also function in an ethical, fair, and transparent manner. Their framework for responsible AI highlights several key principles, including fairness, transparency, explainability, accountability, and sustainability. These principles are designed to ensure that AI technologies benefit society while minimizing risks such as bias, discrimination, and privacy violations. For example, fairness in AI ensures that algorithms do not produce biased or discriminatory results, while explainability allows stakeholders to understand how AI systems make decisions. Similarly, accountability ensures that there are clear responsibilities for AI-driven outcomes, and sustainability focuses on minimizing the environmental impact of AI technologies.

    The paper also discusses the challenges of AI governance and the need for organizations to adopt cross-functional governance structures that promote transparency and accountability in AI development. By establishing clear roles, policies, and expectations, companies can better manage the risks associated with AI while increasing the trust of consumers and other stakeholders.


    Scientific contact:

    Dr. Christian Müller
    Head of DFKI Competence Center Autonomous Driving
    E-Mail: christian.mueller@dfki.de
    Phone: +49 681 85775 4823


    Further information:

    https://www.dfki.de/en/web/news/responsible-ai-in-the-automotive-industry-accent... (including download link for the white paper)


    Images

    Neuro-explicit AI is designed to ensure greater reliability in autonomous driving.

    © Scharfsinn86 - stock.adobe.com


    Characteristics of this press release:
    Journalists, business representatives, scientists
    Information technology, traffic / transport, economics
    supra-regional
    Research results, collaborations
    English


     

