Cainvas
Tags: classifier Logo

Similar Use Cases: Vehicle Classification App

Model Files
logos.h5
keras
Model
deepSea Compiled Models
logos.exe
deepSea
Ubuntu

LOGO Classifier

Credit: AITS Cainvas Community

Photo by Ashraful | logo designer on Dribbble

In this notebook, we will classify the logos (among 6 selected brands) based on the image of a logo.

We will import all the required libraries

In [1]:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Dense, Activation,Dropout,Conv2D, MaxPooling2D,BatchNormalization, Flatten
from tensorflow.keras.optimizers import Adam, Adamax
from tensorflow.keras.metrics import categorical_crossentropy
from tensorflow.keras import regularizers
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Model, load_model, Sequential
import numpy as np
import pandas as pd
import shutil
import time
import cv2 as cv2
from tqdm import tqdm
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import os
import seaborn as sns
sns.set_style('darkgrid')
from PIL import Image
from sklearn.metrics import confusion_matrix, classification_report
from IPython.core.display import display, HTML

Unzip the dataset so that we can use it in our notebook

In [2]:
!wget https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/logos3.zip
!unzip -qo logos3.zip
# zip folder is not needed anymore
!rm logos3.zip
--2021-12-08 07:47:39--  https://cainvas-static.s3.amazonaws.com/media/user_data/Sanskar__02/logos3.zip
Resolving cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)... 52.219.160.59
Connecting to cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)|52.219.160.59|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 35080657 (33M) [application/zip]
Saving to: ‘logos3.zip’

logos3.zip          100%[===================>]  33.46M  36.7MB/s    in 0.9s    

2021-12-08 07:47:40 (36.7 MB/s) - ‘logos3.zip’ saved [35080657/35080657]

In [3]:
directory = "logos3/train"

Map the classifications i.e. classes to an integer and display the list of all unique 6 brands.

In [4]:
Name=[]
for file in os.listdir(directory):
    Name+=[file]
print(Name)
print(len(Name))
['Burger King', 'McDonalds', '.DS_Store', 'Other', 'Starbucks', 'Subway', 'KFC']
7
In [5]:
brand_map = dict(zip(Name, [t for t in range(len(Name))]))
print(brand_map)
r_brand_map=dict(zip([t for t in range(len(Name))],Name)) 
{'Burger King': 0, 'McDonalds': 1, '.DS_Store': 2, 'Other': 3, 'Starbucks': 4, 'Subway': 5, 'KFC': 6}

Displaying some images from our dataset.

In [6]:
Brand = 'logos3/train/Starbucks'
import os 
sub_class = os.listdir(Brand)

fig = plt.figure(figsize=(10,5))
for e in range(len(sub_class[:10])):
    plt.subplot(2,5,e+1)
    img = plt.imread(os.path.join(Brand,sub_class[e]))
    plt.imshow(img, cmap=plt.get_cmap('gray'))
    plt.axis('off')
In [7]:
# def mapper(value):
#     return r_breed_map[value]
In [8]:
img_datagen = ImageDataGenerator(rescale=1./255,
                                vertical_flip=True,
                                horizontal_flip=True,
                                rotation_range=40,
                                width_shift_range=0.2,
                                height_shift_range=0.2,
                                zoom_range=0.1,
                                validation_split=0.2)
In [9]:
test_datagen = ImageDataGenerator(rescale=1./255)
In [10]:
train_generator = img_datagen.flow_from_directory(directory,
                                                 shuffle=True,
                                                 batch_size=32,
                                                 subset='training',
                                                 target_size=(100, 100))
Found 1393 images belonging to 6 classes.

Divide the training dataset into train set and validation set.

In [11]:
valid_generator = img_datagen.flow_from_directory(directory,
                                                 shuffle=True,
                                                 batch_size=16,
                                                 subset='validation',
                                                 target_size=(100, 100))
Found 345 images belonging to 6 classes.
In [12]:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense,Conv2D,MaxPooling2D,Dropout,Flatten,Activation,BatchNormalization
from tensorflow.keras.models import model_from_json
from tensorflow.keras.models import load_model
from tensorflow.keras import regularizers

Train a sequential model.

In [13]:
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(3,3),input_shape=(100,100,3), activation='relu', padding = 'same'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding = 'same'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding = 'same'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding = 'same'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding = 'same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.3))

model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding = 'same'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())

model.add(Dense(256))
model.add(Activation('relu'))
model.add(Dropout(0.5))

model.add(Dense(6))
# model.add(Dense(len(brand_map)))
model.add(Activation('softmax'))

model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 100, 100, 32)      896       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 50, 50, 32)        0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 50, 50, 64)        18496     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 25, 25, 64)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 25, 25, 64)        36928     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 12, 12, 64)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 12, 12, 64)        36928     
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 6, 6, 64)          0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 6, 6, 64)          36928     
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 3, 3, 64)          0         
_________________________________________________________________
dropout (Dropout)            (None, 3, 3, 64)          0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 3, 3, 64)          36928     
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 1, 1, 64)          0         
_________________________________________________________________
flatten (Flatten)            (None, 64)                0         
_________________________________________________________________
dense (Dense)                (None, 256)               16640     
_________________________________________________________________
activation (Activation)      (None, 256)               0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 256)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 6)                 1542      
_________________________________________________________________
activation_1 (Activation)    (None, 6)                 0         
=================================================================
Total params: 185,286
Trainable params: 185,286
Non-trainable params: 0
_________________________________________________________________
In [14]:
model.compile(optimizer='adam',
             loss='categorical_crossentropy',
             metrics=['accuracy'])
In [15]:
history = model.fit(train_generator, validation_data=valid_generator,batch_size= 32,epochs=50)
Epoch 1/50
44/44 [==============================] - 42s 959ms/step - loss: 1.5390 - accuracy: 0.4659 - val_loss: 1.4316 - val_accuracy: 0.4783
Epoch 2/50
44/44 [==============================] - 42s 949ms/step - loss: 1.3026 - accuracy: 0.4939 - val_loss: 1.5310 - val_accuracy: 0.3739
Epoch 3/50
44/44 [==============================] - 42s 944ms/step - loss: 1.1071 - accuracy: 0.5793 - val_loss: 1.2898 - val_accuracy: 0.4725
Epoch 4/50
44/44 [==============================] - 43s 986ms/step - loss: 1.1076 - accuracy: 0.5772 - val_loss: 1.1705 - val_accuracy: 0.5420
Epoch 5/50
44/44 [==============================] - 42s 952ms/step - loss: 1.0336 - accuracy: 0.6088 - val_loss: 1.2961 - val_accuracy: 0.5130
Epoch 6/50
44/44 [==============================] - 42s 956ms/step - loss: 1.0363 - accuracy: 0.5908 - val_loss: 1.1645 - val_accuracy: 0.5246
Epoch 7/50
44/44 [==============================] - 42s 949ms/step - loss: 0.9345 - accuracy: 0.6267 - val_loss: 0.9332 - val_accuracy: 0.6348
Epoch 8/50
44/44 [==============================] - 42s 949ms/step - loss: 0.9407 - accuracy: 0.6281 - val_loss: 1.4776 - val_accuracy: 0.3623
Epoch 9/50
44/44 [==============================] - 42s 944ms/step - loss: 0.9721 - accuracy: 0.6238 - val_loss: 1.2108 - val_accuracy: 0.5188
Epoch 10/50
44/44 [==============================] - 41s 943ms/step - loss: 0.8826 - accuracy: 0.6612 - val_loss: 0.9863 - val_accuracy: 0.6174
Epoch 11/50
44/44 [==============================] - 42s 953ms/step - loss: 0.9101 - accuracy: 0.6569 - val_loss: 1.0802 - val_accuracy: 0.5681
Epoch 12/50
44/44 [==============================] - 44s 990ms/step - loss: 0.8487 - accuracy: 0.6827 - val_loss: 0.8752 - val_accuracy: 0.6696
Epoch 13/50
44/44 [==============================] - 42s 954ms/step - loss: 0.7547 - accuracy: 0.7207 - val_loss: 0.8026 - val_accuracy: 0.7130
Epoch 14/50
44/44 [==============================] - 42s 944ms/step - loss: 0.7336 - accuracy: 0.7265 - val_loss: 0.7698 - val_accuracy: 0.7072
Epoch 15/50
44/44 [==============================] - 42s 947ms/step - loss: 0.7188 - accuracy: 0.7430 - val_loss: 0.7895 - val_accuracy: 0.7188
Epoch 16/50
44/44 [==============================] - 41s 941ms/step - loss: 0.6660 - accuracy: 0.7531 - val_loss: 0.8736 - val_accuracy: 0.6493
Epoch 17/50
44/44 [==============================] - 42s 948ms/step - loss: 0.6599 - accuracy: 0.7595 - val_loss: 0.7529 - val_accuracy: 0.7333
Epoch 18/50
44/44 [==============================] - 44s 991ms/step - loss: 0.6993 - accuracy: 0.7394 - val_loss: 0.7023 - val_accuracy: 0.7043
Epoch 19/50
44/44 [==============================] - 44s 1s/step - loss: 0.6105 - accuracy: 0.7767 - val_loss: 0.6771 - val_accuracy: 0.7565
Epoch 20/50
44/44 [==============================] - 53s 1s/step - loss: 0.6140 - accuracy: 0.7767 - val_loss: 0.5746 - val_accuracy: 0.7739
Epoch 21/50
44/44 [==============================] - 46s 1s/step - loss: 0.5772 - accuracy: 0.7911 - val_loss: 0.8003 - val_accuracy: 0.7130
Epoch 22/50
44/44 [==============================] - 41s 943ms/step - loss: 0.5785 - accuracy: 0.7983 - val_loss: 0.4779 - val_accuracy: 0.8232
Epoch 23/50
44/44 [==============================] - 41s 942ms/step - loss: 0.5119 - accuracy: 0.8256 - val_loss: 0.6903 - val_accuracy: 0.7565
Epoch 24/50
44/44 [==============================] - 41s 943ms/step - loss: 0.5074 - accuracy: 0.8270 - val_loss: 0.5328 - val_accuracy: 0.8116
Epoch 25/50
44/44 [==============================] - 42s 943ms/step - loss: 0.4706 - accuracy: 0.8392 - val_loss: 0.7102 - val_accuracy: 0.8058
Epoch 26/50
44/44 [==============================] - 41s 940ms/step - loss: 0.5117 - accuracy: 0.8291 - val_loss: 0.6156 - val_accuracy: 0.7768
Epoch 27/50
44/44 [==============================] - 42s 947ms/step - loss: 0.3966 - accuracy: 0.8693 - val_loss: 0.4705 - val_accuracy: 0.8580
Epoch 28/50
44/44 [==============================] - 41s 943ms/step - loss: 0.4026 - accuracy: 0.8586 - val_loss: 0.4231 - val_accuracy: 0.8580
Epoch 29/50
44/44 [==============================] - 41s 941ms/step - loss: 0.4025 - accuracy: 0.8643 - val_loss: 0.5307 - val_accuracy: 0.8029
Epoch 30/50
44/44 [==============================] - 42s 945ms/step - loss: 0.3611 - accuracy: 0.8830 - val_loss: 0.5382 - val_accuracy: 0.8464
Epoch 31/50
44/44 [==============================] - 41s 942ms/step - loss: 0.3571 - accuracy: 0.8744 - val_loss: 0.3517 - val_accuracy: 0.8870
Epoch 32/50
44/44 [==============================] - 41s 941ms/step - loss: 0.3830 - accuracy: 0.8816 - val_loss: 0.4690 - val_accuracy: 0.8406
Epoch 33/50
44/44 [==============================] - 42s 944ms/step - loss: 0.3298 - accuracy: 0.8930 - val_loss: 0.5351 - val_accuracy: 0.8348
Epoch 34/50
44/44 [==============================] - 42s 946ms/step - loss: 0.3010 - accuracy: 0.8988 - val_loss: 0.4139 - val_accuracy: 0.8522
Epoch 35/50
44/44 [==============================] - 41s 943ms/step - loss: 0.2909 - accuracy: 0.9045 - val_loss: 0.5095 - val_accuracy: 0.8203
Epoch 36/50
44/44 [==============================] - 41s 943ms/step - loss: 0.3397 - accuracy: 0.8765 - val_loss: 0.5143 - val_accuracy: 0.8493
Epoch 37/50
44/44 [==============================] - 42s 956ms/step - loss: 0.2723 - accuracy: 0.8995 - val_loss: 0.5080 - val_accuracy: 0.8522
Epoch 38/50
44/44 [==============================] - 42s 944ms/step - loss: 0.3371 - accuracy: 0.8887 - val_loss: 0.3907 - val_accuracy: 0.8841
Epoch 39/50
44/44 [==============================] - 41s 938ms/step - loss: 0.2648 - accuracy: 0.9110 - val_loss: 0.6237 - val_accuracy: 0.8377
Epoch 40/50
44/44 [==============================] - 41s 941ms/step - loss: 0.2376 - accuracy: 0.9146 - val_loss: 0.5893 - val_accuracy: 0.8203
Epoch 41/50
44/44 [==============================] - 43s 978ms/step - loss: 0.2694 - accuracy: 0.9060 - val_loss: 0.4597 - val_accuracy: 0.8377
Epoch 42/50
44/44 [==============================] - 42s 945ms/step - loss: 0.2541 - accuracy: 0.9088 - val_loss: 0.6159 - val_accuracy: 0.8290
Epoch 43/50
44/44 [==============================] - 41s 941ms/step - loss: 0.3066 - accuracy: 0.8916 - val_loss: 0.3871 - val_accuracy: 0.8551
Epoch 44/50
44/44 [==============================] - 41s 941ms/step - loss: 0.2430 - accuracy: 0.9225 - val_loss: 0.3778 - val_accuracy: 0.8696
Epoch 45/50
44/44 [==============================] - 43s 976ms/step - loss: 0.2197 - accuracy: 0.9289 - val_loss: 0.4022 - val_accuracy: 0.8841
Epoch 46/50
44/44 [==============================] - 42s 945ms/step - loss: 0.2045 - accuracy: 0.9318 - val_loss: 0.3505 - val_accuracy: 0.8899
Epoch 47/50
44/44 [==============================] - 43s 972ms/step - loss: 0.2034 - accuracy: 0.9354 - val_loss: 0.5845 - val_accuracy: 0.8522
Epoch 48/50
44/44 [==============================] - 43s 975ms/step - loss: 0.1802 - accuracy: 0.9361 - val_loss: 0.4730 - val_accuracy: 0.8319
Epoch 49/50
44/44 [==============================] - 43s 966ms/step - loss: 0.2496 - accuracy: 0.9196 - val_loss: 0.5039 - val_accuracy: 0.8348
Epoch 50/50
44/44 [==============================] - 41s 941ms/step - loss: 0.2247 - accuracy: 0.9275 - val_loss: 0.5120 - val_accuracy: 0.8435

Plot curves

In [16]:
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.title('Training and validation accuracy')
plt.show()
In [17]:
training_accuracy = history.history['loss']
validation_accuracy = history.history['val_loss']
plt.plot(training_accuracy, 'r', label = 'training loss')
plt.plot(validation_accuracy, 'b', label = 'validation loss')
plt.title('training and test loss')
plt.xlabel('epochs')
plt.ylabel('loss')
plt.legend()
plt.show()

Making Predictions

In [18]:
# from tensorflow.keras.models import load_img
from tensorflow.keras.preprocessing.image import load_img
load_img("logos3/test/McDonalds/armada_image_755.jpg",target_size=(180,180))
Out[18]:
In [19]:
# from tensorflow.keras import image
from tensorflow.keras.preprocessing import image
test_image = image.load_img('logos3/test/KFC/armada_image_169.jpg', target_size = (100, 100))  
test_image = image.img_to_array(test_image)  
test_image = np.expand_dims(test_image, axis = 0)  
result = model.predict(test_image)  
print(result)
[[1. 0. 0. 0. 0. 0.]]

Deep CC

In [20]:
model.save('saved_models/logos.tf')
WARNING:tensorflow:From /opt/tljh/user/lib/python3.7/site-packages/tensorflow/python/training/tracking/tracking.py:111: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
WARNING:tensorflow:From /opt/tljh/user/lib/python3.7/site-packages/tensorflow/python/training/tracking/tracking.py:111: Layer.updates (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
INFO:tensorflow:Assets written to: saved_models/logos.tf/assets
In [21]:
!deepCC 'saved_models/logos.tf'
[INFO]
Reading [tensorflow model] 'saved_models/logos.tf'
[SUCCESS]
Saved 'logos_deepC/logos.tf.onnx'
[INFO]
Reading [onnx model] 'logos_deepC/logos.tf.onnx'
[INFO]
Model info:
  ir_vesion : 4
  doc       : 
[WARNING]
[ONNX]: terminal (input/output) conv2d_input's shape is less than 1. Changing it to 1.
[WARNING]
[ONNX]: terminal (input/output) activation_1's shape is less than 1. Changing it to 1.
[INFO]
Running DNNC graph sanity check ...
[SUCCESS]
Passed sanity check.
[INFO]
Writing C++ file 'logos_deepC/logos.cpp'
[INFO]
deepSea model files are ready in 'logos_deepC/' 
[RUNNING COMMAND]
g++ -std=c++11 -O3 -fno-rtti -fno-exceptions -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -isystem /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 "logos_deepC/logos.cpp" -D_AITS_MAIN -o "logos_deepC/logos.exe"
[RUNNING COMMAND]
size "logos_deepC/logos.exe"
   text	   data	    bss	    dec	    hex	filename
 938245	   3976	    760	 942981	  e6385	logos_deepC/logos.exe
[SUCCESS]
Saved model as executable "logos_deepC/logos.exe"