
Towards Robustness of Convolutional Neural Network against Adversarial Examples

EasyChair Preprint no. 2552

18 pages
Date: February 5, 2020

Abstract

Deep learning is at the heart of the current rise of artificial intelligence. In the field of computer vision, it has become the workhorse for applications ranging from self-driving cars to surveillance and security. A Convolutional Neural Network (ConvNet/CNN) is a deep learning algorithm that takes an input image, assigns importance (learnable weights and biases) to various objects in the image, and differentiates one from another. The architecture of a ConvNet is analogous to the connectivity pattern of neurons in the human brain and was inspired by the organization of the visual cortex. Convolutional Neural Networks have demonstrated phenomenal success (often beyond human capabilities) in solving complex problems, but recent studies show that they are vulnerable to adversarial attacks in the form of subtle perturbations to inputs that lead a model to predict incorrect outputs. For images, such perturbations are often too small to be perceptible, yet they completely fool CNN models. Adversarial attacks pose a serious threat to the success of CNNs in practice. In this paper, we attempt to reconstruct adversarial examples/patches/images created by physical attacks (the Gaussian blur attack and the Salt and Pepper Noise attack), so that the CNN once again classifies these reconstructed, perturbed images correctly.
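The abstract names the two perturbations but not the reconstruction method itself, so the following is only a minimal sketch: it applies Gaussian blur and salt-and-pepper noise to an image and then restores it with plausible classical baselines (unsharp masking for blur, median filtering for impulse noise). All function names and parameter values are illustrative assumptions, not taken from the paper.

import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def gaussian_blur_attack(image, sigma=2.0):
    """Perturb an image by smoothing its spatial axes with a Gaussian kernel."""
    spatial_sigma = (sigma, sigma) + (0,) * (image.ndim - 2)  # leave channels untouched
    return gaussian_filter(image, sigma=spatial_sigma)

def salt_and_pepper_attack(image, amount=0.05, rng=None):
    """Flip a fraction `amount` of pixels to pure white (salt) or pure black (pepper)."""
    rng = rng or np.random.default_rng(0)
    noisy = image.copy()
    hit = rng.random(image.shape[:2]) < amount
    salt = rng.random(image.shape[:2]) < 0.5
    noisy[hit & salt] = image.max()
    noisy[hit & ~salt] = image.min()
    return noisy

def reconstruct_from_blur(image, sigma=2.0, strength=1.5):
    """Unsharp masking: amplify the high-frequency detail the blur removed."""
    spatial_sigma = (sigma, sigma) + (0,) * (image.ndim - 2)
    smoothed = gaussian_filter(image, sigma=spatial_sigma)
    return np.clip(image + strength * (image - smoothed), image.min(), image.max())

def reconstruct_from_salt_and_pepper(image, size=3):
    """Median filtering removes isolated black/white outlier pixels."""
    footprint = (size, size) + (1,) * (image.ndim - 2)
    return median_filter(image, size=footprint)

if __name__ == "__main__":
    clean = np.random.default_rng(1).random((224, 224, 3)).astype(np.float32)
    restored_blur = reconstruct_from_blur(gaussian_blur_attack(clean))
    restored_noise = reconstruct_from_salt_and_pepper(salt_and_pepper_attack(clean))
    # In the paper's setting, the restored images would be passed back to the
    # CNN classifier to check whether its original (correct) label is recovered.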

Keyphrases: adversarial example, CNN, Gaussian blur attack, Salt and Pepper Noise attack

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@Booklet{EasyChair:2552,
  author = {Kazim Ali},
  title = {Towards Robustness of Convolutional Neural Network against Adversarial Examples},
  howpublished = {EasyChair Preprint no. 2552},
  year = {EasyChair, 2020}}