Cainvas

In this notebook, we will classify logos (among 6 selected brands) based on an image of a logo.

There are 6 different logo classes (Burger King, KFC, McDonalds, Other, Starbucks, Subway). We use a CNN to classify new logo images.

We will import all the required libraries

In [21]:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Dense, Activation,Dropout,Conv2D, MaxPooling2D,BatchNormalization, Flatten
from tensorflow.keras.optimizers import Adam, Adamax
from tensorflow.keras.metrics import categorical_crossentropy
from tensorflow.keras import regularizers
from tensorflow.keras.preprocessing.image import ImageDataGenerator , img_to_array
from tensorflow.keras.models import Model, load_model, Sequential
import numpy as np
import pandas as pd
import shutil
import time
import cv2
from tqdm import tqdm
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import os
import seaborn as sns
sns.set_style('darkgrid')
from PIL import Image
from sklearn.metrics import confusion_matrix, classification_report
from IPython.core.display import display, HTML

Unzip the dataset so that we can use it in our notebook

In [2]:
!wget https://cainvas-static.s3.amazonaws.com/media/user_data/Sanskar__02/logos3.zip
!unzip -qo logos3.zip
# zip folder is not needed anymore
!rm logos3.zip
--2021-12-11 16:28:01--  https://cainvas-static.s3.amazonaws.com/media/user_data/Sanskar__02/logos3.zip
Resolving cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)... 52.219.62.96
Connecting to cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)|52.219.62.96|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 35080657 (33M) [application/zip]
Saving to: ‘logos3.zip’

logos3.zip          100%[===================>]  33.46M  71.3MB/s    in 0.5s    

2021-12-11 16:28:01 (71.3 MB/s) - ‘logos3.zip’ saved [35080657/35080657]

In [3]:
directory = "logos3/train"

Map each class, i.e. brand, to an integer and display the list of all 6 unique brands.

In [4]:
Name=[]
for file in os.listdir(directory):
    Name+=[file]
print(Name)
print(len(Name))
['Burger King', 'McDonalds', '.DS_Store', 'Other', 'Starbucks', 'Subway', 'KFC']
7
In [5]:
Name.remove('.DS_Store')  # macOS metadata file, not a class folder
# Reorder the list alphabetically to match the alphanumeric class ordering
# that flow_from_directory uses.
Name[1], Name[2], Name[3], Name[4], Name[5] = Name[5], Name[1], Name[2], Name[3], Name[4]
print(Name)
['Burger King', 'KFC', 'McDonalds', 'Other', 'Starbucks', 'Subway']
In [25]:
logo_map = dict(zip(Name, range(len(Name))))
print(logo_map)
r_logo_map = dict(zip(range(len(Name)), Name))  # reverse map: index -> brand
{'Burger King': 0, 'KFC': 1, 'McDonalds': 2, 'Other': 3, 'Starbucks': 4, 'Subway': 5}
In [26]:
def mapper(value):
    return r_logo_map[value]
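For example, the reverse map recovers a brand name from a predicted class index:

mapper(4)
# 'Starbucks'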

Display some images from our dataset.

In [7]:
Brand = 'logos3/train/Starbucks'
sub_class = os.listdir(Brand)  # os is already imported above

fig = plt.figure(figsize=(10,5))
for e in range(len(sub_class[:10])):
    plt.subplot(2,5,e+1)
    img = plt.imread(os.path.join(Brand,sub_class[e]))
    plt.imshow(img, cmap=plt.get_cmap('gray'))
    plt.axis('off')
In [8]:
def mapper(value):
    return r_logo_map[value]
In [9]:
img_datagen = ImageDataGenerator(rescale=1./255,
                                vertical_flip=True,
                                horizontal_flip=True,
                                rotation_range=40,
                                width_shift_range=0.2,
                                height_shift_range=0.2,
                                zoom_range=0.1,
                                validation_split=0.2)
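As a quick sanity check of the augmentation settings, a sketch like the one below (not part of the original run; it reuses Brand and sub_class from the cell above) previews a few randomly transformed variants of one training image:

from tensorflow.keras.preprocessing.image import load_img

# Load one training image and show five random augmentations of it.
sample = img_to_array(load_img(os.path.join(Brand, sub_class[0]),
                               target_size=(100, 100)))
plt.figure(figsize=(10, 3))
for i in range(5):
    plt.subplot(1, 5, i + 1)
    aug = img_datagen.random_transform(sample)  # applies flips/shifts/rotation, not rescale
    plt.imshow(aug.astype('uint8'))
    plt.axis('off')
plt.show()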
In [10]:
test_datagen = ImageDataGenerator(rescale=1./255)
In [11]:
train_generator = img_datagen.flow_from_directory(directory,
                                                 shuffle=True,
                                                 batch_size=32,
                                                 subset='training',
                                                 target_size=(100, 100))
Found 1393 images belonging to 6 classes.

The validation_split set in the ImageDataGenerator divides the training data into train and validation subsets; create the validation generator from the same directory.

In [12]:
valid_generator = img_datagen.flow_from_directory(directory,
                                                 shuffle=True,
                                                 batch_size=16,
                                                 subset='validation',
                                                 target_size=(100, 100))
Found 345 images belonging to 6 classes.
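Note that flow_from_directory assigns label indices in alphanumeric order of the class folder names, which is why Name was reordered alphabetically earlier. As a quick check (assuming the cells above have run), the generator's class_indices should agree with logo_map:

print(train_generator.class_indices)
assert train_generator.class_indices == logo_map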
In [13]:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense,Conv2D,MaxPooling2D,Dropout,Flatten,Activation,BatchNormalization
from tensorflow.keras.models import model_from_json
from tensorflow.keras.models import load_model
from tensorflow.keras import regularizers

Define a sequential CNN model.

In [14]:
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(3,3),input_shape=(100,100,3), activation='relu', padding = 'same'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding = 'same'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding = 'same'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding = 'same'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding = 'same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.3))

model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding = 'same'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())

model.add(Dense(256))
model.add(Activation('relu'))
model.add(Dropout(0.5))

model.add(Dense(6))
# model.add(Dense(len(logo_map)))  # equivalent: output size taken from the class map
model.add(Activation('softmax'))

model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 100, 100, 32)      896       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 50, 50, 32)        0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 50, 50, 64)        18496     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 25, 25, 64)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 25, 25, 64)        36928     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 12, 12, 64)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 12, 12, 64)        36928     
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 6, 6, 64)          0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 6, 6, 64)          36928     
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 3, 3, 64)          0         
_________________________________________________________________
dropout (Dropout)            (None, 3, 3, 64)          0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 3, 3, 64)          36928     
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 1, 1, 64)          0         
_________________________________________________________________
flatten (Flatten)            (None, 64)                0         
_________________________________________________________________
dense (Dense)                (None, 256)               16640     
_________________________________________________________________
activation (Activation)      (None, 256)               0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 256)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 6)                 1542      
_________________________________________________________________
activation_1 (Activation)    (None, 6)                 0         
=================================================================
Total params: 185,286
Trainable params: 185,286
Non-trainable params: 0
_________________________________________________________________
In [15]:
model.compile(optimizer='adam',
             loss='categorical_crossentropy',
             metrics=['accuracy'])
In [16]:
history = model.fit(train_generator, validation_data=valid_generator, epochs=40)  # batch sizes come from the generators
Epoch 1/40
44/44 [==============================] - 41s 934ms/step - loss: 1.5418 - accuracy: 0.4602 - val_loss: 1.4519 - val_accuracy: 0.4783
Epoch 2/40
44/44 [==============================] - 41s 932ms/step - loss: 1.3943 - accuracy: 0.4738 - val_loss: 1.4155 - val_accuracy: 0.4783
Epoch 3/40
44/44 [==============================] - 41s 928ms/step - loss: 1.1785 - accuracy: 0.5492 - val_loss: 1.0726 - val_accuracy: 0.6058
Epoch 4/40
44/44 [==============================] - 41s 929ms/step - loss: 1.0694 - accuracy: 0.5829 - val_loss: 1.4293 - val_accuracy: 0.4058
Epoch 5/40
44/44 [==============================] - 41s 930ms/step - loss: 1.0678 - accuracy: 0.6001 - val_loss: 1.3250 - val_accuracy: 0.4754
Epoch 6/40
44/44 [==============================] - 41s 930ms/step - loss: 0.9742 - accuracy: 0.6095 - val_loss: 1.0596 - val_accuracy: 0.6029
Epoch 7/40
44/44 [==============================] - 41s 929ms/step - loss: 0.9108 - accuracy: 0.6389 - val_loss: 1.0365 - val_accuracy: 0.5971
Epoch 8/40
44/44 [==============================] - 41s 926ms/step - loss: 0.8954 - accuracy: 0.6540 - val_loss: 0.9607 - val_accuracy: 0.6667
Epoch 9/40
44/44 [==============================] - 41s 925ms/step - loss: 0.8050 - accuracy: 0.6906 - val_loss: 0.8327 - val_accuracy: 0.7101
Epoch 10/40
44/44 [==============================] - 41s 925ms/step - loss: 0.7735 - accuracy: 0.7014 - val_loss: 0.8017 - val_accuracy: 0.6899
Epoch 11/40
44/44 [==============================] - 41s 925ms/step - loss: 0.7363 - accuracy: 0.7050 - val_loss: 0.8847 - val_accuracy: 0.6841
Epoch 12/40
44/44 [==============================] - 41s 924ms/step - loss: 0.7109 - accuracy: 0.7143 - val_loss: 0.9433 - val_accuracy: 0.6406
Epoch 13/40
44/44 [==============================] - 41s 923ms/step - loss: 0.7357 - accuracy: 0.7193 - val_loss: 0.7129 - val_accuracy: 0.6986
Epoch 14/40
44/44 [==============================] - 38s 863ms/step - loss: 0.6678 - accuracy: 0.7387 - val_loss: 0.6273 - val_accuracy: 0.7217
Epoch 15/40
44/44 [==============================] - 38s 861ms/step - loss: 0.6554 - accuracy: 0.7495 - val_loss: 0.6508 - val_accuracy: 0.7681
Epoch 16/40
44/44 [==============================] - 38s 859ms/step - loss: 0.6115 - accuracy: 0.7796 - val_loss: 0.9704 - val_accuracy: 0.6928
Epoch 17/40
44/44 [==============================] - 38s 859ms/step - loss: 0.5561 - accuracy: 0.8040 - val_loss: 0.6150 - val_accuracy: 0.7797
Epoch 18/40
44/44 [==============================] - 38s 861ms/step - loss: 0.5558 - accuracy: 0.7990 - val_loss: 0.5245 - val_accuracy: 0.8203
Epoch 19/40
44/44 [==============================] - 38s 867ms/step - loss: 0.5253 - accuracy: 0.8069 - val_loss: 0.4998 - val_accuracy: 0.8116
Epoch 20/40
44/44 [==============================] - 38s 861ms/step - loss: 0.5019 - accuracy: 0.8090 - val_loss: 0.5914 - val_accuracy: 0.7942
Epoch 21/40
44/44 [==============================] - 38s 859ms/step - loss: 0.4881 - accuracy: 0.8313 - val_loss: 0.5628 - val_accuracy: 0.8029
Epoch 22/40
44/44 [==============================] - 38s 861ms/step - loss: 0.4733 - accuracy: 0.8421 - val_loss: 0.5865 - val_accuracy: 0.8145
Epoch 23/40
44/44 [==============================] - 38s 861ms/step - loss: 0.5069 - accuracy: 0.8306 - val_loss: 0.5579 - val_accuracy: 0.8261
Epoch 24/40
44/44 [==============================] - 38s 861ms/step - loss: 0.4103 - accuracy: 0.8543 - val_loss: 0.4826 - val_accuracy: 0.8232
Epoch 25/40
44/44 [==============================] - 38s 870ms/step - loss: 0.3907 - accuracy: 0.8615 - val_loss: 0.4698 - val_accuracy: 0.8087
Epoch 26/40
44/44 [==============================] - 38s 856ms/step - loss: 0.3453 - accuracy: 0.8787 - val_loss: 0.5459 - val_accuracy: 0.8000
Epoch 27/40
44/44 [==============================] - 38s 857ms/step - loss: 0.4781 - accuracy: 0.8327 - val_loss: 0.9136 - val_accuracy: 0.7101
Epoch 28/40
44/44 [==============================] - 38s 859ms/step - loss: 0.4629 - accuracy: 0.8485 - val_loss: 0.3629 - val_accuracy: 0.8812
Epoch 29/40
44/44 [==============================] - 38s 861ms/step - loss: 0.3228 - accuracy: 0.8880 - val_loss: 0.5370 - val_accuracy: 0.8174
Epoch 30/40
44/44 [==============================] - 38s 859ms/step - loss: 0.3369 - accuracy: 0.8873 - val_loss: 0.4422 - val_accuracy: 0.8435
Epoch 31/40
44/44 [==============================] - 38s 870ms/step - loss: 0.3152 - accuracy: 0.8952 - val_loss: 0.5493 - val_accuracy: 0.8435
Epoch 32/40
44/44 [==============================] - 38s 857ms/step - loss: 0.3559 - accuracy: 0.8816 - val_loss: 0.5195 - val_accuracy: 0.8348
Epoch 33/40
44/44 [==============================] - 38s 857ms/step - loss: 0.2866 - accuracy: 0.8981 - val_loss: 0.5489 - val_accuracy: 0.8174
Epoch 34/40
44/44 [==============================] - 38s 860ms/step - loss: 0.3678 - accuracy: 0.8816 - val_loss: 0.5541 - val_accuracy: 0.8087
Epoch 35/40
44/44 [==============================] - 38s 859ms/step - loss: 0.3040 - accuracy: 0.9038 - val_loss: 0.3681 - val_accuracy: 0.8696
Epoch 36/40
44/44 [==============================] - 38s 861ms/step - loss: 0.2755 - accuracy: 0.9074 - val_loss: 0.4611 - val_accuracy: 0.8522
Epoch 37/40
44/44 [==============================] - 38s 861ms/step - loss: 0.2311 - accuracy: 0.9225 - val_loss: 0.3328 - val_accuracy: 0.8812
Epoch 38/40
44/44 [==============================] - 38s 860ms/step - loss: 0.2300 - accuracy: 0.9182 - val_loss: 0.4859 - val_accuracy: 0.8319
Epoch 39/40
44/44 [==============================] - 38s 861ms/step - loss: 0.2385 - accuracy: 0.9296 - val_loss: 0.4292 - val_accuracy: 0.8406
Epoch 40/40
44/44 [==============================] - 38s 860ms/step - loss: 0.2084 - accuracy: 0.9304 - val_loss: 0.7735 - val_accuracy: 0.7884

Plot the training and validation curves.

In [17]:
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.title('Training and validation accuracy')
plt.show()
In [18]:
training_loss = history.history['loss']
validation_loss = history.history['val_loss']
plt.plot(training_loss, 'r', label = 'training loss')
plt.plot(validation_loss, 'b', label = 'validation loss')
plt.title('Training and validation loss')
plt.xlabel('epochs')
plt.ylabel('loss')
plt.legend()
plt.show()
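The validation loss fluctuates in the later epochs, a hint of mild overfitting. One common refinement, not used in the run above, is an EarlyStopping callback that halts training when the validation loss stops improving and restores the best weights seen:

from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss', patience=5,
                           restore_best_weights=True)
# history = model.fit(train_generator, validation_data=valid_generator,
#                     epochs=40, callbacks=[early_stop])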

Making Predictions

In [22]:
from tensorflow.keras.preprocessing.image import load_img
load_img("logos3/train/McDonalds/ankamall_image_110_2.jpg",target_size=(180,180))
Out[22]: (the selected McDonalds logo image is displayed)

Load an image from the dataset, preprocess it, and feed it to our model to make a prediction.

In [28]:
image = load_img("logos3/train/McDonalds/ankamall_image_110_2.jpg", target_size=(100, 100))

image = img_to_array(image)
image = image / 255.0
prediction_image = np.expand_dims(image, axis=0)  # add a batch dimension
In [29]:
prediction=model.predict(prediction_image)
value=np.argmax(prediction)
move_name=mapper(value)
print("Prediction is {}.".format(move_name))
Prediction is McDonalds.
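Beyond the argmax, the softmax output contains a probability for each brand. A short sketch using the r_logo_map defined earlier prints the full distribution:

for idx, prob in enumerate(prediction[0]):
    print('{:12s} {:.3f}'.format(r_logo_map[idx], prob))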

deepCC

In [30]:
model.save('saved_models/logos.tf')
WARNING:tensorflow:From /opt/tljh/user/lib/python3.7/site-packages/tensorflow/python/training/tracking/tracking.py:111: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
WARNING:tensorflow:From /opt/tljh/user/lib/python3.7/site-packages/tensorflow/python/training/tracking/tracking.py:111: Layer.updates (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
INFO:tensorflow:Assets written to: saved_models/logos.tf/assets
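As an optional round-trip check (a sketch, not part of the original run), the SavedModel can be reloaded with load_model to confirm it reproduces the prediction above:

reloaded = load_model('saved_models/logos.tf')
print(mapper(np.argmax(reloaded.predict(prediction_image))))  # expected: McDonalds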
In [31]:
!deepCC 'saved_models/logos.tf'
[INFO]
Reading [tensorflow model] 'saved_models/logos.tf'
[SUCCESS]
Saved 'logos_deepC/logos.tf.onnx'
[INFO]
Reading [onnx model] 'logos_deepC/logos.tf.onnx'
[INFO]
Model info:
  ir_vesion : 4
  doc       : 
[WARNING]
[ONNX]: terminal (input/output) conv2d_input's shape is less than 1. Changing it to 1.
[WARNING]
[ONNX]: terminal (input/output) activation_1's shape is less than 1. Changing it to 1.
[INFO]
Running DNNC graph sanity check ...
[SUCCESS]
Passed sanity check.
[INFO]
Writing C++ file 'logos_deepC/logos.cpp'
[INFO]
deepSea model files are ready in 'logos_deepC/' 
[RUNNING COMMAND]
g++ -std=c++11 -O3 -fno-rtti -fno-exceptions -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -isystem /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 "logos_deepC/logos.cpp" -D_AITS_MAIN -o "logos_deepC/logos.exe"
[RUNNING COMMAND]
size "logos_deepC/logos.exe"
   text	   data	    bss	    dec	    hex	filename
 938245	   3976	    760	 942981	  e6385	logos_deepC/logos.exe
[SUCCESS]
Saved model as executable "logos_deepC/logos.exe"