
Scaling-up the Analysis of Neural Networks by Affine Forms: a Block-Wise Noising Approach

EasyChair Preprint no. 10867

9 pages · Date: September 8, 2023

Abstract

The effectiveness of neural networks in handling visual perturbations is frequently assessed using abstract transforms, such as affine transformations. However, these transforms may lose precision and be computationally expensive in both time and memory. In this article, we propose a novel approach, called block-wise noising, to overcome these limitations. Block-wise noising simulates real-world situations in which particular portions of an image are disrupted, by inserting non-zero noise symbols only inside a given section of the image. With this method, it is possible to assess the resilience of neural networks to such disturbances while preserving scalability and accuracy. Experimental results show that block-wise noising achieves a 50% speed improvement over the usual affine forms on specific trained neural networks. It can be especially helpful for applications such as computer vision, where real-world images may be subject to various forms of disturbance.
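The abstract describes block-wise noising only at a high level. The sketch below illustrates one plausible reading of the idea in Python, assuming pixels are represented as affine forms (a center value plus partial deviations over noise symbols in [-1, 1]) and that only pixels inside the selected block receive a noise symbol. The class AffineForm, the function block_noise, and the toy linear neuron are hypothetical illustrations, not the authors' implementation.

    # Minimal sketch of block-wise noising over affine forms (illustrative only).
    import numpy as np

    class AffineForm:
        """Affine form: center + sum_i coeff_i * eps_i, with eps_i in [-1, 1]."""
        def __init__(self, center, deviations=None):
            self.center = float(center)
            self.deviations = dict(deviations or {})  # noise-symbol id -> coefficient

        def interval(self):
            # Guaranteed enclosure of the affine form's range.
            radius = sum(abs(c) for c in self.deviations.values())
            return self.center - radius, self.center + radius

        def scale_add(self, weight, other):
            """Return weight * self + other (affine operations are exact on affine forms)."""
            out = AffineForm(weight * self.center + other.center,
                             {k: weight * v for k, v in self.deviations.items()})
            for k, v in other.deviations.items():
                out.deviations[k] = out.deviations.get(k, 0.0) + v
            return out

    def block_noise(image, top, left, height, width, amplitude):
        """Lift a 2D image to affine forms; only pixels in the chosen block get a noise symbol."""
        forms, sym = [], 0
        for i, row in enumerate(image):
            out_row = []
            for j, pixel in enumerate(row):
                if top <= i < top + height and left <= j < left + width:
                    out_row.append(AffineForm(pixel, {sym: amplitude}))
                    sym += 1
                else:
                    out_row.append(AffineForm(pixel))  # outside the block: exact, no symbol
            forms.append(out_row)
        return forms

    # Usage: perturb only a 2x2 block of a 4x4 image, then bound one linear neuron.
    image = np.arange(16, dtype=float).reshape(4, 4)
    forms = block_noise(image, top=1, left=1, height=2, width=2, amplitude=0.1)
    flat = [f for row in forms for f in row]
    weights = np.full(16, 0.5)
    acc = AffineForm(0.0)
    for w, f in zip(weights, flat):
        acc = f.scale_add(w, acc)
    print(acc.interval())  # guaranteed output bounds under the block perturbation

Because pixels outside the block carry no noise symbols, the number of symbols propagated through the network grows with the block size rather than the image size, which is consistent with the scalability claim in the abstract.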

Keyphrases: Artificial Intelligence, Abstract Interpretation, Optimisation, Scalability

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@Booklet{EasyChair:10867,
  author = {Asma Soualah and Matthieu Martel and Stéphane Abide},
  title = {Scaling-up the Analysis of Neural Networks by Affine Forms: a Block-Wise Noising Approach},
  howpublished = {EasyChair Preprint no. 10867},
  year = {EasyChair, 2023}}