idw – Informationsdienst Wissenschaft

News, Events, Experts




01/14/2026 13:32

Who bears responsibility for AI-generated child pornography?

Kathrin Haimerl, Communications Department
Universität Passau

    A study by the University of Passau shows that tech companies can also be prosecuted under German law if they tolerate abuse.

    A short, seemingly harmless command is all it takes to use Elon Musk's chatbot Grok to turn public photos into revealing images – without the consent of the people depicted. For weeks, users have been flooding the platform X with such deepfakes, some of which show minors.

    Who is liable when AI is misused to create child sexual abuse material (CSAM)? A research team from the University of Passau, led by computer scientist Professor Steffen Herbold, holder of the Chair of AI Engineering, has investigated this question. ‘We wanted to know what measures developers and operators need to take to minimise the risk of prosecution and act responsibly,’ explains Professor Herbold.

    The key finding: the main perpetrators are the users themselves when they use AI to create such images. However, those responsible for the AI can also be held accountable – for example, if they intentionally aid and abet the offence.

    ‘Anyone who makes AI publicly available must be aware that it can also be misused,’ says lead author Anamaria Mojica Hanke, research assistant at the Chair of AI Engineering. ‘If the operator knowingly allows users to create CSAM, for example, and no appropriate countermeasures are taken, this may be relevant under criminal law.’

    When are developers liable? – Intent is crucial

    According to the study, intent is the key factor. Both users and those responsible for AI must act with knowledge and intent with regard to illegal content and its distribution. The circumstances of each individual case are decisive here. It becomes particularly risky when AI can generate realistic nude images. ‘If an AI model explicitly allows the creation of revealing content, this can be considered evidence of aiding and abetting,’ says Professor Brian Valerius, holder of the Chair of Artificial Intelligence in Criminal Law. An extreme case would be specialised models from the darknet that are specifically trained to create depictions of abuse.

    ‘But even if a provider prohibits use for illegal purposes in its general terms and conditions, this civil law provision does not exempt them from criminal liability,’ adds legal scholar Svenja Wölfel, a research assistant at Professor Valerius’s chair. ‘On the contrary, the prohibition may even show that the developer was aware of the risk and thus indicate the necessary intent.’

    Even foreign hosting does not protect

    In the study, the researchers also examine how the context in which an AI is published influences legal responsibility. Whether the AI runs on German servers is not decisive: German authorities can still investigate if a German citizen uses the AI or if a German developer was involved. Even purely foreign cases fall under German law when offences such as child pornography are involved.

    What does this mean in terms of the initial example? For Professor Herbold, it is at least questionable whether a line has been crossed in the current Grok case: "The protective mechanisms must be effective and state of the art. Given how easy it is to circumvent these mechanisms in Grok at present, it is questionable whether allowing only paying users to access the model is a sufficient response."

    Lead author Mojica Hanke sums up the study as follows: ‘There is no guarantee of immunity from prosecution. Anyone who develops AI must implement clear protective mechanisms – both technical and legal.’

    In addition to the Passau researchers, Thomas Goger, Chief Public Prosecutor at the Bavarian Cybercrime Unit, was also involved in the study. The paper, entitled ‘Criminal Liability of Generative Artificial Intelligence Providers for User-Generated Child Sexual Abuse Material,’ was recently published as a preprint – i.e., in a preliminary version so that the scientific community can discuss it. The team is particularly pleased that the study has already undergone a peer review process and has been accepted for the renowned International Conference on AI Engineering, which will take place in Rio de Janeiro in April 2026.


    Contact for scientific information:

    Professor Steffen Herbold
    Chair of AI Engineering
    University of Passau
    E-Mail: steffen.herbold@uni-passau.de


    Original publication:

    Anamaria Mojica-Hanke, Thomas Goger, Svenja Wölfel, Brian Valerius, Steffen Herbold:
    "Criminal Liability of Generative Artificial Intelligence Providers for User-Generated Child Sexual Abuse Material" https://arxiv.org/abs/2601.03788



    Criteria of this press release:
    Business and commerce, Journalists, Scientists and scholars, Students, Teachers and pupils, all interested persons
    Information technology, Law
    transregional, national
    Research results, Scientific Publications
    English

