idw – Informationsdienst Wissenschaft

10/28/2024 16:24

Responsible AI in the Automotive Industry – Accenture and DFKI Present Joint White Paper

Heike Leonhard, DFKI Saarbrücken
Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, DFKI

    Deep learning is an AI technology that has significantly shaped the last decade, whether in recognizing medical conditions, generating text and images, or autonomous driving. Yet despite considerable progress, machine learning's successes, especially in autonomous driving, have fallen short of expectations. The joint white paper by Accenture and DFKI, “Responsible AI in the Automotive Industry – Techniques and Use Cases,” examines the reasons and proposes new technological approaches.

    According to the team of authors, current AI models, such as deep learning, are not trustworthy and responsible enough to be reliably used in highly critical application areas such as autonomous driving. They often suffer from problems related to explainability, robustness, and generalizability. Furthermore, they require large amounts of training data and have high energy demands. Deep learning models are powerful, but they are unable to explain their decisions, making it difficult to trust their results in safety-critical applications.

    As a solution, the authors propose the concept of neuro-explicit AI, a hybrid approach that combines the strengths of neural networks with symbolic reasoning and explicit knowledge representation. By integrating domain-specific knowledge and physical laws into the AI decision-making process, neuro-explicit AI aims to create models that are more transparent, interpretable, and robust. Because the symbolic component can justify the decisions made, the approach promises AI decisions that are more transparent and systems that are more reliable.
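    In principle, such a neuro-explicit combination can be sketched as a neural component that proposes a decision and an explicit rule base that checks and, if necessary, overrides it. The following Python sketch is purely illustrative and not taken from the white paper: the stand-in "network", the rule names, and the thresholds are all assumptions.

```python
# Minimal sketch of a neuro-explicit decision step. Purely illustrative:
# the stand-in "network", the rule names, and the thresholds are assumptions,
# not taken from the white paper.

def neural_proposal(obs):
    # Stand-in for a trained network: a fixed toy mapping replaces inference.
    return ("accelerate", 0.92) if obs["distance_m"] > 30 else ("brake", 0.88)

# Explicit, human-readable safety rules (the symbolic knowledge).
RULES = [
    ("pedestrian_ahead", lambda obs: not obs.get("pedestrian_ahead", False),
     "no acceleration while a pedestrian is ahead"),
    ("icy_road", lambda obs: not obs.get("icy_road", False),
     "no acceleration on an icy road"),
]

def decide(obs):
    action, confidence = neural_proposal(obs)
    if action == "accelerate":
        for name, holds, explanation in RULES:
            if not holds(obs):
                # The symbolic layer overrides the network and says why.
                return "brake", f"rule '{name}' violated: {explanation}"
    return action, f"neural proposal accepted (confidence {confidence:.2f})"

print(decide({"distance_m": 50, "pedestrian_ahead": True}))  # symbolic override
print(decide({"distance_m": 50}))                            # proposal stands
```

    The point of the sketch is the explanation string: unlike a pure deep learning model, the hybrid system can state which explicit rule led to its decision.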

    The white paper discusses several use cases that demonstrate the potential of neuro-explicit AI for autonomous driving. The authors conclude that deep reinforcement learning, together with online planning methods, can improve the safety and performance of autonomous vehicles in uncertain real-time environments. This approach uses neural networks and symbolic models to enable safer decision-making in dynamic situations, such as avoiding pedestrians or navigating dense traffic.

    Another application area focuses on improving the perception of autonomous driving systems by incorporating knowledge of visual features. The system uses high-level, symbolic knowledge of objects’ physical properties, such as light reflections, to increase the accuracy of object recognition. Perception models enriched with such symbolic information are more resilient to disruptions and better able to interpret complex visual data, resulting in greater reliability and safety.
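    As a purely hypothetical illustration of such a plausibility check (the feature names and the 0.2 threshold are assumptions, not from the paper), a symbolic filter over neural detections might look like this:

```python
# Hypothetical sketch: filtering neural detections with an explicit physical
# plausibility rule. Feature names and threshold are illustrative assumptions.

def physically_plausible(det):
    # Explicit knowledge: a real, solid vehicle at night should show some
    # specular reflection; a flat image (e.g. on a billboard) typically does not.
    if det["label"] == "vehicle" and det["scene"] == "night":
        return det["reflection_score"] > 0.2
    return True

def filter_detections(detections):
    # Keep only detections consistent with the symbolic rule.
    return [d for d in detections if physically_plausible(d)]

dets = [
    {"label": "vehicle", "scene": "night", "reflection_score": 0.6},
    {"label": "vehicle", "scene": "night", "reflection_score": 0.05},  # likely a flat image
]
print(len(filter_detections(dets)))  # 1
```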

    Accenture and DFKI emphasize the importance of responsible AI practices for achieving AI maturity, i.e., developing AI systems that not only perform technically proficiently but also function in an ethical, fair, and transparent manner. Their framework for responsible AI highlights several key principles, including fairness, transparency, explainability, accountability, and sustainability. These principles are designed to ensure that AI technologies benefit society while minimizing risks such as bias, discrimination, and privacy violations. For example, fairness in AI ensures that algorithms do not produce biased or discriminatory results, while explainability allows stakeholders to understand how AI systems make decisions. Similarly, accountability ensures that there are clear responsibilities for AI-driven outcomes, and sustainability focuses on minimizing the environmental impact of AI technologies.

    The paper also discusses the challenges of AI governance and the need for organizations to adopt cross-functional governance structures that promote transparency and accountability in AI development. By establishing clear roles, policies, and expectations, companies can better manage the risks associated with AI while increasing the trust of consumers and other stakeholders.


    Contact for scientific information:

    Dr. Christian Müller
    Head of DFKI Competence Center Autonomous Driving
    E-Mail: christian.mueller@dfki.de
    Phone: +49 681 85775 4823


    More information:

    https://www.dfki.de/en/web/news/responsible-ai-in-the-automotive-industry-accent... (including download link for the white paper)


    Images

    Neuro-explicit AI is designed to ensure greater reliability in autonomous driving.

    © Scharfsinn86 - stock.adobe.com


    Criteria of this press release:
    Business and commerce, Journalists, Scientists and scholars
    Economics / business administration, Information technology, Traffic / transport
    transregional, national
    Cooperation agreements, Research results
    English


     
