idw – Informationsdienst Wissenschaft

News, Events, Experts

08/13/2020 13:32

Computer Scientists at TU Braunschweig Develop Defence Against Image-Scaling Attacks

Anna Krings, Press and Communication
Technische Universität Braunschweig

    Whether uploaded to a website, sent via messenger, posted on social media or processed by artificial intelligence, digital images are often reduced in size using algorithms. A year ago, Chinese scientists discovered that images can be manipulated inconspicuously during downscaling. A research team from the Institute of System Security at Technische Universität Braunschweig has now studied this attack technique further and developed a defence. They present their results today, August 13, 2020, at the USENIX Security Symposium, one of the world's most important security conferences.

    When reducing the size of a digital image, scaling algorithms do not weight all image points (pixels) equally. Depending on the image size and the algorithm, many pixels contribute little or nothing to the reduced image. This is where attackers can step in and change only those pixels that are relevant to the scaling. "Optically, this is almost unnoticeable; there is only slight noise in the image. If the image is then reduced in size, only the manipulated pixels remain and produce a new image that the attacker can freely determine," explains Professor Konrad Rieck, head of the Institute of System Security.
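    To make the mechanism concrete, here is a minimal, illustrative sketch in Python (NumPy) of how an attacker could exploit nearest-neighbour downscaling. All names are hypothetical; the published attack additionally solves an optimisation problem so that the changes stay imperceptible, while this toy version simply overwrites the pixels the scaler will sample.

        import numpy as np

        def nearest_neighbour_scale(img, out_h, out_w):
            # Downscale by sampling one source pixel per output pixel;
            # every pixel in between is ignored entirely.
            in_h, in_w = img.shape[:2]
            rows = np.arange(out_h) * in_h // out_h
            cols = np.arange(out_w) * in_w // out_w
            return img[rows][:, cols]

        def embed_target(src, target):
            # Overwrite exactly the pixels the scaler samples with the
            # attacker's target image; all other pixels stay untouched,
            # so the full-size image still looks like `src`.
            in_h, in_w = src.shape[:2]
            out_h, out_w = target.shape[:2]
            rows = np.arange(out_h) * in_h // out_h
            cols = np.arange(out_w) * in_w // out_w
            attacked = src.copy()
            attacked[np.ix_(rows, cols)] = target
            return attacked

        # Toy demonstration: only 64*64 of the 512*512 pixels (about 1.6%)
        # are changed, yet downscaling yields exactly the target image.
        src = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
        target = np.zeros((64, 64), dtype=np.uint8)
        attacked = embed_target(src, target)
        assert np.array_equal(nearest_neighbour_scale(attacked, 64, 64), target)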

    Threat to learning-based systems

    Such attacks are a particular threat to learning-based systems that work with artificial intelligence (AI): scaling images is a very common preprocessing step before they are analyzed through machine learning. "With this attack technique, humans see a different image than the learning process does. The human sees the original image, while the artificial intelligence processes the scaled-down, manipulated image and learns from it," says Rieck.
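    The attack surface arises in any pipeline of the following, very common shape, where the model only ever sees the scaled copy of an image. This is an illustrative sketch using Pillow and NumPy; the library choice and the 224x224 target size are assumptions, not details from the paper.

        from PIL import Image
        import numpy as np

        def preprocess(path, size=(224, 224)):
            # The resize step is where an image-scaling attack takes effect:
            # the classifier is trained on `x`, never on the original file.
            img = Image.open(path).resize(size, Image.NEAREST)
            x = np.asarray(img, dtype=np.float32) / 255.0  # normalise to [0, 1]
            return x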

    An example: to train an AI system that is supposed to recognize road signs, a human provides the learning procedure with various images of, say, stop signs. If the images have been manipulated, the scaling inside the AI system produces a completely different image, for example a right-of-way sign. The system learns the wrong association and later fails to recognize stop signs. Such attacks threaten all security-relevant applications in which images are processed: the image analysis can be sabotaged unnoticed and lead to false predictions.

    Defence made in Braunschweig

    But how can you protect yourself against such attacks? Attackers exploit the fact that not all pixels are equally involved in the image reduction. "This is exactly where our defence comes in: we have developed a method that ensures that all pixels contribute equally to the reduction," says Konrad Rieck. "Our method determines which pixels are relevant for scaling and cleverly incorporates the rest of the image into them. Visually, you cannot see this change, yet it makes an attack impossible." The defence can be easily integrated into existing AI systems because it requires no changes to the image processing or the learning process. "So far, no cases of attack have been reported. We hope that our analysis and defence will help to prevent this from happening in the future," says Rieck.
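    The core idea that every pixel should contribute equally can be illustrated with block averaging (area scaling), shown below as a minimal sketch in Python (NumPy) for a grayscale image. This is only one way to realise the principle, not the authors' exact method, which also reconstructs images that have already been manipulated; their implementation is available on the project website.

        import numpy as np

        def area_scale(img, factor):
            # Downscale by averaging non-overlapping factor x factor blocks.
            # Every source pixel enters exactly one average, so there are no
            # "ignored" pixels in which a target image could be hidden.
            h, w = img.shape[:2]
            h, w = h - h % factor, w - w % factor  # crop to a multiple of factor
            blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
            return blocks.mean(axis=(1, 3))

    Because such a routine only replaces the resize step, it can sit in front of an existing pipeline without touching the model or the training procedure.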


    Contact for scientific information:

    Prof. Dr. Konrad Rieck
    Technische Universität Braunschweig
    Institute of System Security
    Rebenring 56
    38106 Braunschweig
    Phone: +49 531 391-55120
    Email: k.rieck@tu-braunschweig.de
    www.tu-braunschweig.de/sec

    Erwin Quiring
    Technische Universität Braunschweig
    Institute of System Security
    Rebenring 56
    38106 Braunschweig
    Phone: +49 531 391-55124
    Email: e.quiring@tu-braunschweig.de
    www.tu-braunschweig.de/sec


    Original publication:

    Erwin Quiring, David Klein, Daniel Arp, Martin Johns and Konrad Rieck: Adversarial Preprocessing: Understanding and Preventing Image-Scaling Attacks in Machine Learning. Proc. of USENIX Security Symposium 2020.


    More information:

    http://scaling-attacks.net The publication and the implementation of the defence are available on the research project website.
    https://magazin.tu-braunschweig.de/en/pi-post/computer-scientists-at-tu-braunsch... Press release with sample images.



    Criteria of this press release:
    Journalists, Scientists and scholars
    Information technology
    transregional, national
    Research results
    English


     
