Adversarial Learning for Safety and Security

2019

Literature Survey on Adversarial Attacks and Their Defenses

9 minute read

Published:

With deep learning proving impactful across so many applications, achieving high accuracy and precision, it is important to ensure its safety and security against adversarial attacks. It has been observed that deep neural networks are susceptible to adversarial attacks even in the form of small perturbations that are imperceptible to humans. My literature survey on this topic consists of the following papers and their details.
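As background for the surveyed attacks, here is a minimal PyTorch sketch of the Fast Gradient Sign Method (FGSM), the canonical example of such a small perturbation; the function signature and the epsilon budget are my own illustrative choices, not taken from any one surveyed paper.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, epsilon=0.03):
    """Craft adversarial examples with the Fast Gradient Sign Method.

    x: input batch scaled to [0, 1], y: true labels,
    epsilon: L-infinity perturbation budget (illustrative value).
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range
```

At typical budgets the perturbation `epsilon * sign(grad)` is imperceptible to a human, yet it is often enough to flip the model's prediction.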

Nazi Element Classification

1 minute read

Published:

Last week's task was to build a dataset for Nazi-element detection. I was provided with the positive examples and had to generate the negative ones. The initial dataset consisted of around 2,800 images in total, belonging to various categories such as the following (a sketch of how the negative set can be assembled appears after the list):

  • 88_heil_hitler
  • nazi_eagle
  • nazi_swastikas
  • blut_und_ehre
  • nazi_flags
  • nazi_tattoo
  • crossed_grenade_emblem_nazism
  • nazi_parade
  • schwarze_sonne
  • hitler_salute
  • nazi_party
  • sieg_heil
  • meine_ehre_heisst_treue
  • nazi_propaganda
  • ss_death’s_head
  • nazi_bolts
  • nazi_rally
  • ss_iron_crosses
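Since only the positive categories above were given, the negatives have to come from elsewhere. Below is a minimal sketch of one way to assemble the binary index; the folder paths, the image-extension filter, and the one-negative-per-positive balance are all illustrative assumptions, not the exact procedure used.

```python
import csv
import random
from pathlib import Path

POS_ROOT = Path("data/nazi_elements")   # one subfolder per category above (assumed layout)
NEG_ROOT = Path("data/random_images")   # generic images to serve as negatives (assumed source)
EXTS = {".jpg", ".jpeg", ".png"}

def build_binary_index(pos_root, neg_root, out_csv="dataset.csv", neg_per_pos=1.0, seed=0):
    """Write a path/label index: 1 for any positive-category image, 0 for a sampled negative."""
    positives = [p for p in pos_root.rglob("*") if p.suffix.lower() in EXTS]
    negatives = [p for p in neg_root.rglob("*") if p.suffix.lower() in EXTS]
    random.seed(seed)
    # Sample roughly one negative per positive so the two classes stay balanced.
    k = min(len(negatives), int(len(positives) * neg_per_pos))
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["path", "label"])
        for p in positives:
            writer.writerow([str(p), 1])
        for n in random.sample(negatives, k):
            writer.writerow([str(n), 0])

build_binary_index(POS_ROOT, NEG_ROOT)
```

Keeping the index as a CSV of paths and labels makes it easy to reshuffle or rebalance later without copying the images themselves.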

Semi-Supervised Learning Using Variational Autoencoders: Results

less than 1 minute read

Published:

The model, trained on a dataset of 5,714 images across 36 classes, did not perform well. I tried approaches based on unlabeled data as well as on fully labeled data, but the accuracy remained low in both settings. The graphs are attached below.
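As a reference point for these results, here is a minimal sketch of the kind of objective such a setup optimizes, where unlabeled batches contribute only the reconstruction and KL (ELBO) terms and labeled batches add a cross-entropy term; the layer sizes, latent dimension, and flattened 64x64 inputs are illustrative assumptions rather than the exact model used.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAEClassifier(nn.Module):
    """A small VAE whose latent code also feeds a 36-way classifier head."""
    def __init__(self, in_dim=64 * 64 * 3, latent_dim=32, n_classes=36):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU())
        self.mu = nn.Linear(512, latent_dim)
        self.logvar = nn.Linear(512, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                 nn.Linear(512, in_dim), nn.Sigmoid())
        self.clf = nn.Linear(latent_dim, n_classes)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.dec(z), mu, logvar, self.clf(mu)

def vae_loss(model, x, y=None):
    """Negative ELBO on flattened images in [0, 1]; labeled batches (y given)
    additionally contribute a supervised cross-entropy term."""
    recon, mu, logvar, logits = model(x)
    loss = F.binary_cross_entropy(recon, x, reduction="sum") \
         - 0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    if y is not None:
        loss = loss + F.cross_entropy(logits, y, reduction="sum")
    return loss
```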