Nazi Element Classification

1 minute read

Last week's task was to build a dataset for Nazi element detection. I was provided with the positive examples, and I had to generate the negative ones. The initial dataset consisted of around 2,800 images in total, belonging to categories such as:

  • 88_heil_hitler
  • nazi_eagle
  • nazi_swastikas
  • blut_und_ehre
  • nazi_flags
  • nazi_tattoo
  • crossed_grenade_emblem_nazism
  • nazi_parade
  • schwarze_sonne
  • hitler_salute
  • nazi_party
  • sieg_heil
  • meine_ehre_heisst_treue
  • nazi_propaganda
  • ss_death’s_head
  • nazi_bolts
  • nazi_rally
  • ss_iron_crosses

The negative examples should look similar to the positive ones, so I generated them using the Google Images Download library (a short sketch of this step appears below). The keywords I used included:

  • salute
  • rally
  • sun
  • propaganda
  • tattoo
  • numbers
  • swastikas
  • eagle
  • parade
  • grenade
  • etc.

I generated around 3000 images of the negative examples in total.
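Here is a minimal sketch of the download step, assuming the google_images_download pip package and its dictionary-based download interface; the per-keyword limit and output directory are illustrative values, not necessarily the exact ones I used.

```python
# Minimal sketch: bulk-download negative examples with google_images_download.
# The keyword list mirrors the one above; limit and output_directory are illustrative.
from google_images_download import google_images_download

keywords = ["salute", "rally", "sun", "propaganda", "tattoo",
            "numbers", "swastikas", "eagle", "parade", "grenade"]

downloader = google_images_download.googleimagesdownload()
for kw in keywords:
    # Saves the images for each keyword under downloads/<keyword>/
    downloader.download({
        "keywords": kw,
        "limit": 300,                 # ~300 images per keyword, ~3000 in total
        "format": "jpg",
        "output_directory": "downloads",
    })
```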

The generated dataset can be found at: https://www.dropbox.com/s/5fd2whn1j3ot4mn/nazi%20detection.zip?dl=0

Image Classification

The next task is to fine-tune a classification model. The model cannot be trained from scratch because of the limited number of images, so fine-tuning and transfer learning is the only option. For this purpose I used the VGG16 model pre-trained on the ImageNet dataset. The model has 5 blocks of convolutional layers, with each block consisting of 2 to 3 convolutional layers. Fine-tuning works on the hypothesis that the initial layers preserve coarse features while the final layers are the ones that learn the class-specific features. So, I removed the last four layers and replaced them with 3 fully connected layers for classification. I divided the dataset into 80% for training, 10% for validation and 10% for testing, and trained for 32 epochs. The code for my implementation can be found at: https://drive.google.com/file/d/1AYRjXfHQHiPHoXMWIwQg8IFuFOVi55-B/view?usp=sharing
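A minimal sketch of this fine-tuning setup, assuming Keras with a TensorFlow backend; the frozen-layer cut-off, dense layer sizes, optimizer and data directories are illustrative choices rather than the exact ones in my script.

```python
# Minimal sketch: fine-tune VGG16 (pre-trained on ImageNet) for binary classification.
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Flatten, Dense, Dropout
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Load VGG16 without its original classifier head (the "last four layers").
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Freeze the earlier blocks so only the last convolutional block is fine-tuned
# (an illustrative cut-off; I also experimented with making fewer layers trainable).
for layer in base.layers[:-4]:
    layer.trainable = False

# Replace the removed top with 3 fully connected layers for classification.
x = Flatten()(base.output)
x = Dense(256, activation="relu")(x)
x = Dropout(0.5)(x)
out = Dense(1, activation="sigmoid")(x)
model = Model(inputs=base.input, outputs=out)

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# "train/" and "val/" are hypothetical folders holding the 80% / 10% splits.
gen = ImageDataGenerator(rescale=1.0 / 255)
train_it = gen.flow_from_directory("train/", target_size=(224, 224),
                                   batch_size=32, class_mode="binary")
val_it = gen.flow_from_directory("val/", target_size=(224, 224),
                                 batch_size=32, class_mode="binary")

model.fit(train_it, validation_data=val_it, epochs=32)
```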

The graphs for accuracy and loss are as follows:

[Accuracy vs. epochs plot]

[Loss vs. epochs plot]

with the X-axis showing the number of epochs. However, the classification performance is not up to the mark, so I tried making fewer layers trainable, but that further decreased the accuracy. I also increased the number of epochs. The results for 200 epochs are:

[Accuracy vs. epochs plot, 200 epochs]

[Loss vs. epochs plot, 200 epochs]

As can be seen, overfitting starts to occur after around 45 epochs.