Adversarial image detection based on the maximum channel of saliency maps
Authors: FU Haoran, WANG Chundong, LIN Hao, HAO Qingbo
Affiliation: Key Laboratory of Computer Vision and System, Tianjin Key Laboratory of Intelligence Computing and Novel Software Technology, Tianjin University of Technology, Tianjin 300384, China

Abstract:

Studies have shown that deep neural networks (DNNs) are vulnerable to adversarial examples (AEs) that induce incorrect behaviors. To defend against these AEs, various detection techniques have been developed. However, most of them are effective only against specific AEs and generalize poorly to others. We propose a new detection method based on the maximum channel of saliency maps (MCSM). The proposed method alters the structure of adversarial perturbations while preserving the statistical properties of images. We conduct a complete evaluation on AEs generated by six prominent adversarial attacks on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012 validation set. The experimental results show that our method performs well at detecting various AEs.
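The core quantity in the title, the maximum channel of a saliency map, can be illustrated with a short sketch. The Python/PyTorch snippet below is a minimal sketch rather than the authors' exact pipeline: it computes a gradient-based saliency map in the style of Simonyan et al. and reduces it by taking the per-pixel maximum over the three color channels. The VGG-16 backbone and the random stand-in input are assumptions for illustration, not details taken from the paper.

import torch
import torchvision.models as models

def max_channel_saliency(model, image):
    """Channel-wise maximum of the input-gradient saliency map.

    image: normalized float tensor of shape (1, 3, H, W).
    Returns an (H, W) map: max over RGB of |d score / d pixel|.
    """
    image = image.clone().requires_grad_(True)
    logits = model(image)
    pred = logits.argmax(dim=1).item()      # predicted class index
    logits[0, pred].backward()              # gradient of its score w.r.t. pixels
    saliency = image.grad.abs().squeeze(0)  # (3, H, W) gradient magnitudes
    return saliency.max(dim=0).values       # (H, W) maximum over channels (MCSM)

# Assumed setup: a pretrained VGG-16 and a stand-in preprocessed image.
model = models.vgg16(weights="IMAGENET1K_V1").eval()
x = torch.rand(1, 3, 224, 224)
mcsm = max_channel_saliency(model, x)
print(mcsm.shape)  # torch.Size([224, 224])

Per the abstract, such a transform disrupts the structure of adversarial perturbations while preserving the image's statistical properties; the detection criterion built on top of it is described in the full text.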

Citation

FU Haoran, WANG Chundong, LIN Hao, HAO Qingbo. Adversarial image detection based on the maximum channel of saliency maps[J]. Optoelectronics Letters, 2022, 18(5): 307-312.

History
  • Received: October 03, 2021
  • Revised: November 25, 2021
  • Online: June 07, 2022