GSoC 2017 - Week 4
This blog post is dedicated to the third week of Google Summer of Code (June 24 - July 1). The week was focused on cross-testing and analysis of the API, along with some challenging tests.
The paper then moves on to the security of classifiers against adversarial examples, and puts forth the problem of the intrinsic vulnerability of the learned feature representations. Adversarial examples may end up in regions of the feature space that lie far from the training data while still successfully evading detection; such samples are often referred to as blind-spot evasion samples. To counter this issue, the paper introduces SVMs with RBF kernels as Compact Abating Probability (CAP) models, whose class scores decay as a sample moves away from the training data, so that outlying samples can be rejected. The decision rule is modified accordingly:
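A sketch of the resulting rejection-based rule (my notation, not copied verbatim from the paper): the sample is assigned to the highest-scoring class only if that score exceeds the rejection threshold, and is rejected as an outlier otherwise.

$$c^{\star} = \arg\max_{k} s_k(\mathbf{x}), \qquad \text{assign } \mathbf{x} \text{ to } c^{\star} \text{ if } s_{c^{\star}}(\mathbf{x}) \geq \theta, \text{ otherwise reject}$$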
The conceptual representation of the idea is demonstrated below:
The performance of SVM-adv increases for low values of d_max: even if the test images are only slightly modified in input space, they immediately become blind-spot adversarial examples, ending up in a region far from the rest of the data. As the input perturbation increases, such samples gradually drift towards a different class, becoming indistinguishable from the samples of that class.
Thus, the paper nicely exposes the vulnerability of the classifier to adversarial examples and also suggests a defensive algorithm.
This paper proposes the idea of using PixelCNN and PixelRNN for conditional image generation. The basic idea of the architecture is to use auto-regressive connections to model images pixel by pixel. PixelRNN generally gives better performance, but PixelCNNs are much faster to train. However, the original PixelCNN has a blind spot in its receptive field: some of the previously generated pixels cannot influence the prediction of the current pixel.
The paper also introduces the Gated PixelCNN. PixelCNNs model the joint distribution of pixels over an image x as the following product of conditional distributions, where x_i is a single pixel.
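Written out over the n × n pixels in raster-scan order, this factorization reads:

$$p(\mathbf{x}) = \prod_{i=1}^{n^2} p(x_i \mid x_1, \dots, x_{i-1})$$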
The new architecture they propose combines the advantages of both PixelCNN and PixelRNN by modelling the gated activation units as follows:
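Reconstructing the activation from the paper (notation assumed here: ∗ is convolution, ⊙ element-wise multiplication, σ the sigmoid; k indexes the layer, and f, g stand for filter and gate):

$$\mathbf{y} = \tanh\!\left(W_{k,f} \ast \mathbf{x}\right) \odot \sigma\!\left(W_{k,g} \ast \mathbf{x}\right)$$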
The combined architecture is given below:
The Conditional PixelCNN has the following distribution and activation:
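Roughly reconstructed from the paper (h is the conditioning vector, e.g. a class-label embedding; exact notation assumed):

$$p(\mathbf{x} \mid \mathbf{h}) = \prod_{i=1}^{n^2} p(x_i \mid x_1, \dots, x_{i-1}, \mathbf{h})$$

$$\mathbf{y} = \tanh\!\left(W_{k,f} \ast \mathbf{x} + V_{k,f}^{T}\mathbf{h}\right) \odot \sigma\!\left(W_{k,g} \ast \mathbf{x} + V_{k,g}^{T}\mathbf{h}\right)$$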
Because conditional PixelCNNs have the capacity to model diverse, multi-modal image distributions, it is possible to apply them as image decoders in existing neural architectures such as autoencoders.
Datasets: ImageNet, CIFAR, Flickr
Results:
This paper proposes the idea of purifying adversarial images so that they fall back into the training distribution. The authors hypothesize that adversarial examples largely lie in the low-probability regions of the distribution that generated the data used to train the model.
The paper discusses several attack methods, such as random perturbation (RAND), the fast gradient sign method (FGSM), the basic iterative method (BIM), DeepFool, and Carlini-Wagner (CW).
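For reference, FGSM, the simplest of these, perturbs the input with a single step along the sign of the loss gradient, where ε is the perturbation budget:

$$\mathbf{x}_{adv} = \mathbf{x} + \epsilon \cdot \operatorname{sign}\!\left(\nabla_{\mathbf{x}} L(\boldsymbol{\theta}, \mathbf{x}, y)\right)$$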
Various defence methods are also mentioned, such as adversarial training, label smoothing, and feature squeezing.
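As a quick illustration, label smoothing replaces the one-hot target with a softened distribution (K classes, small constant ε; the exact formulation used in the paper may differ):

$$y_k^{smooth} = (1 - \epsilon)\, y_k + \frac{\epsilon}{K}$$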
Adversarial examples are detected using p-values, computed by ranking the PixelCNN likelihood of a test image X' against the likelihoods of the training images.
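A sketch of this rank-based p-value, assuming N training images X_1, …, X_N and writing p_CNN for the density given by the PixelCNN (notation mine):

$$p = \frac{1}{N+1}\left(1 + \sum_{i=1}^{N} \mathbb{I}\big[\,p_{CNN}(X_i) \leq p_{CNN}(X')\,\big]\right)$$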
The paper uses the following optimization to return images to the training distribution:
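Sketched in my own notation, with ε_defend bounding how far the purified image X* may move from the input X:

$$\max_{X^{*}} \; \log p_{CNN}(X^{*}) \quad \text{subject to} \quad \|X^{*} - X\|_{\infty} \leq \epsilon_{defend}$$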
The paper also proposes an adaptive variant of the defence, which modifies only those images that do not appear to lie in the training distribution.
The paper starts by reviewing the various deep network architectures, e.g. CNN, DBN, RBM, SAE, AE, RNN, LSTM, GAN, etc.
A conceptual DL framework for cybersecurity applications is also presented.
The paper describes the main branches of applying DL techniques to cybersecurity:
The paper also summarizes various works on PC-based malware detection and analysis:
Android-based malware detection is summarized as well:
Intrusion detection has also been summarized:
Other attacks and defences are mentioned too, such as phishing detection, spam detection, and website defacement detection.
The surveyed works are further analyzed along several axes: focus area, methodology, generative architectures, discriminative architectures, model applicability, and feature granularity.