A new study by SRH University highlights the benefits of explainable AI systems for the reliable and transparent detection of deepfakes. Through feature analyses and visualisations, AI decisions can be presented in a comprehensible way, fostering trust in AI technologies.
A research team led by Prof Dr Alexander I. Iliev of SRH University, with key contributions by researcher Nazneen Mansoor, has developed an innovative method for detecting deepfakes. In the study, recently published in the journal Applied Sciences, the scientists present the use of explainable artificial intelligence (XAI) to increase transparency and reliability in the identification of manipulated media content.
Deepfakes, i.e. fake media content such as videos or audio files created using artificial intelligence, pose an increasing threat to society as they can be used to spread misinformation and undermine public trust. Conventional detection methods often reach their limits, especially when it comes to making the decision-making processes of AI models comprehensible.
In its study, the SRH University team carried out extensive experiments in which different AI models were evaluated on their ability to reliably identify deepfakes. Particular attention was paid to explainable AI, which makes it possible to present the basis for the models' decisions in a transparent and comprehensible manner. This is done, for example, using visualisation techniques such as ‘heat maps’, which highlight in colour which image regions the AI identified as relevant to its decision. In addition, the explainable models analyse specific features, such as textures or movement patterns, that indicate manipulation.
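The study itself does not publish its code here, but the heat-map idea can be illustrated with a short, hypothetical sketch. The snippet below uses a standard gradient-based technique (Grad-CAM) with PyTorch; the ResNet-18 backbone, the two-class real/fake head, and the file name are placeholder assumptions, not the authors' actual detector.

```python
# Hypothetical Grad-CAM sketch: highlight which image regions drove a
# CNN classifier's "fake" decision. Backbone and class head are placeholders.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=None)                  # stand-in deepfake detector
model.fc = torch.nn.Linear(model.fc.in_features, 2)    # classes: 0 = real, 1 = fake
model.eval()

activations, gradients = {}, {}

def save_activation(module, inp, out):
    activations["value"] = out.detach()

def save_gradient(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

# Hook the last convolutional block so its feature maps can be weighted.
model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def grad_cam(image_path, target_class=1):
    """Return a heat map (H x W, values in [0, 1]) for the given class."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)

    logits = model(x)
    model.zero_grad()
    logits[0, target_class].backward()

    acts = activations["value"]                        # (1, C, h, w) feature maps
    grads = gradients["value"]                         # (1, C, h, w) gradients
    weights = grads.mean(dim=(2, 3), keepdim=True)     # global-average-pooled grads
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalise to [0, 1]
    return cam.squeeze().numpy()

# heatmap = grad_cam("suspect_frame.png")  # overlay onto the frame for inspection
```

Overlaying such a map on the input frame shows, in colour, the regions the model relied on, which is the kind of transparency the study describes.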
Prof Dr Alexander I. Iliev, Head of the Computer Science – Big Data & Artificial Intelligence Master's programme, explains the importance of these approaches: ‘Our aim was to create technologies that are not only effective, but also trustworthy. The ability to make the decision-making process of AI transparent is becoming increasingly important – be it in law enforcement, the media industry or in science.’
The study shows that explainable AI not only improves recognition accuracy, but also promotes understanding and trust in AI technologies. By showing how the decisions were made, weaknesses in the models can be identified and future systems can be optimised in a targeted manner. This is a crucial step in strengthening the responsible use of AI in society.
With this research, SRH University underscores its leading role in the field of applied sciences and the development of innovative technologies. The degree programmes offered by the university, such as the Master's programmes in Computer Science and Information Technology, prepare students specifically for current challenges in the field of artificial intelligence.
The full study entitled ‘Explainable AI for DeepFake Detection’ has been published in the journal Applied Sciences and can be accessed via the following link: https://www.mdpi.com/2076-3417/15/2/725
Prof. Dr. Alexander I. Iliev
alexander.iliev@srh.de