LOGO Classifier¶
Credit: AITS Cainvas Community
Photo by Ashraful | logo designer on Dribbble
In this notebook, we will classify logo images into one of 6 selected brands.
We will import all the required libraries¶
In [1]:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Dense, Activation,Dropout,Conv2D, MaxPooling2D,BatchNormalization, Flatten
from tensorflow.keras.optimizers import Adam, Adamax
from tensorflow.keras.metrics import categorical_crossentropy
from tensorflow.keras import regularizers
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Model, load_model, Sequential
import numpy as np
import pandas as pd
import shutil
import time
import cv2
from tqdm import tqdm
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import os
import seaborn as sns
sns.set_style('darkgrid')
from PIL import Image
from sklearn.metrics import confusion_matrix, classification_report
from IPython.display import display, HTML
Unzip the dataset so that we can use it in our notebook¶
In [2]:
!wget https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/logos3.zip
!unzip -qo logos3.zip
# zip folder is not needed anymore
!rm logos3.zip
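The archive extracts to logos3/train and logos3/test, with one sub-folder per brand. A quick, optional sketch (assuming that layout) to count the images available per class:
import os

for split in ['train', 'test']:
    split_dir = os.path.join('logos3', split)
    # one sub-folder per brand; count the images inside each
    for brand in sorted(os.listdir(split_dir)):
        count = len(os.listdir(os.path.join(split_dir, brand)))
        print(f'{split}/{brand}: {count} images')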
In [3]:
directory = "logos3/train"
Map each class (brand) to an integer and display the list of all 6 unique brands.¶
In [4]:
Name = []
for file in os.listdir(directory):
    Name += [file]
print(Name)
print(len(Name))
In [5]:
brand_map = dict(zip(Name, range(len(Name))))
print(brand_map)
r_brand_map = dict(zip(range(len(Name)), Name))
Displaying some images from our dataset.¶
In [6]:
Brand = 'logos3/train/Starbucks'
sub_class = os.listdir(Brand)
fig = plt.figure(figsize=(10, 5))
for e in range(len(sub_class[:10])):
    plt.subplot(2, 5, e + 1)
    img = plt.imread(os.path.join(Brand, sub_class[e]))
    plt.imshow(img, cmap=plt.get_cmap('gray'))
    plt.axis('off')
In [7]:
# def mapper(value):
#     return r_brand_map[value]
In [8]:
img_datagen = ImageDataGenerator(rescale=1./255,
                                 vertical_flip=True,
                                 horizontal_flip=True,
                                 rotation_range=40,
                                 width_shift_range=0.2,
                                 height_shift_range=0.2,
                                 zoom_range=0.1,
                                 validation_split=0.2)
In [9]:
test_datagen = ImageDataGenerator(rescale=1./255)
In [10]:
train_generator = img_datagen.flow_from_directory(directory,
                                                  shuffle=True,
                                                  batch_size=32,
                                                  subset='training',
                                                  target_size=(100, 100))
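To see what the augmentation pipeline actually feeds the network, one batch can be drawn from train_generator and displayed (a small optional sketch; the images are already rescaled to [0, 1], so imshow can show them directly):
images, labels = next(train_generator)
plt.figure(figsize=(10, 5))
for i in range(10):
    plt.subplot(2, 5, i + 1)
    plt.imshow(images[i])   # augmented, rescaled training image
    plt.axis('off')
plt.show()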
Divide the training dataset into train set and validation set.¶
In [11]:
valid_generator = img_datagen.flow_from_directory(directory,
                                                  shuffle=True,
                                                  batch_size=16,
                                                  subset='validation',
                                                  target_size=(100, 100))
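flow_from_directory assigns label indices by sorting the class folder names alphabetically, which may differ from the order of brand_map built above. Printing class_indices shows the mapping the model is actually trained with; the helper idx_to_brand introduced here is only a convenience for decoding predictions later:
print(train_generator.class_indices)
# reverse lookup: predicted index -> brand name
idx_to_brand = {v: k for k, v in train_generator.class_indices.items()}
print(idx_to_brand)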
In [12]:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense,Conv2D,MaxPooling2D,Dropout,Flatten,Activation,BatchNormalization
from tensorflow.keras.models import model_from_json
from tensorflow.keras.models import load_model
from tensorflow.keras import regularizers
Train a sequential model.¶
In [13]:
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(3,3),input_shape=(100,100,3), activation='relu', padding = 'same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding = 'same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding = 'same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding = 'same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding = 'same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.3))
model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding = 'same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(256))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(6))
# model.add(Dense(len(brand_map)))
model.add(Activation('softmax'))
model.summary()
In [14]:
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
In [15]:
history = model.fit(train_generator, validation_data=valid_generator, epochs=50)  # batch size comes from the generators
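Fifty epochs on a small dataset can overfit; if that shows up in the curves below, Keras callbacks such as EarlyStopping could be passed to fit instead (an optional sketch, not part of the run above):
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
# history = model.fit(train_generator, validation_data=valid_generator,
#                     epochs=50, callbacks=[early_stop])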
Plot curves¶
In [16]:
plt.plot(history.history['accuracy'], label='training accuracy')
plt.plot(history.history['val_accuracy'], label='validation accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.title('Training and validation accuracy')
plt.legend()
plt.show()
In [17]:
training_loss = history.history['loss']
validation_loss = history.history['val_loss']
plt.plot(training_loss, 'r', label='training loss')
plt.plot(validation_loss, 'b', label='validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
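test_datagen was defined above but never used; a hedged sketch of how it could score the model on the held-out logos3/test folder (assuming the same one-folder-per-brand layout as the training data):
test_generator = test_datagen.flow_from_directory('logos3/test',
                                                  target_size=(100, 100),
                                                  batch_size=32,
                                                  shuffle=False)
test_loss, test_acc = model.evaluate(test_generator)
print('Test accuracy:', test_acc)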
Making Predictions¶
In [18]:
from tensorflow.keras.preprocessing.image import load_img
load_img("logos3/test/McDonalds/armada_image_755.jpg", target_size=(180, 180))
Out[18]:
In [19]:
from tensorflow.keras.preprocessing import image
test_image = image.load_img('logos3/test/KFC/armada_image_169.jpg', target_size=(100, 100))
test_image = image.img_to_array(test_image)
test_image = test_image / 255.0  # match the rescaling applied by the training generators
test_image = np.expand_dims(test_image, axis=0)
result = model.predict(test_image)
print(result)
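result is a vector of six softmax probabilities in the order given by train_generator.class_indices; taking the argmax and reversing that mapping gives the predicted brand name (a small sketch using the generator's own ordering):
idx_to_brand = {v: k for k, v in train_generator.class_indices.items()}
predicted_brand = idx_to_brand[int(np.argmax(result))]
print('Predicted brand:', predicted_brand)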
Deep CC¶
In [20]:
model.save('saved_models/logos.tf')
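Optionally, the saved model can be reloaded as a quick sanity check before handing it to deepCC (not required by the pipeline):
from tensorflow.keras.models import load_model

reloaded = load_model('saved_models/logos.tf')
reloaded.summary()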
In [21]:
!deepCC 'saved_models/logos.tf'