Indian Currency Notes Classifier

Credit: AITS Cainvas Community

Photo by Alexander Barton for NJI Media on Dribbble

This notebook classifies Indian currency notes from images of notes in circulation. Such an application can be of use to the visually impaired in their everyday lives.

In [1]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
from tensorflow.keras import layers, optimizers, models, preprocessing, losses, callbacks
import os
import random
from PIL import Image
import tensorflow as tf

The dataset

On Kaggle by Gaurav Rajesh Sahani

The dataset contains 195 images of Indian currency notes across 7 categories - 1Hundrednote, 2Hundrednote, 2Thousandnote, 5Hundrednote, Fiftynote, Tennote, Twentynote.

There are 2 folders in the dataset - Train and Test, each with 7 sub-folders corresponding to the currency categories; the expected layout is sketched below.
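For reference, the directory layout assumed by the loading code further down looks roughly like this (folder names taken from the dataset, file names omitted):

currency/
├── Train/
│   ├── 1Hundrednote/
│   ├── 2Hundrednote/
│   ├── 2Thousandnote/
│   ├── 5Hundrednote/
│   ├── Fiftynote/
│   ├── Tennote/
│   └── Twentynote/
└── Test/
    └── (the same 7 sub-folders)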

In [2]:
!wget -N https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/indian-currency-notes-classifier.zip

!unzip -qo indian-currency-notes-classifier.zip -d currency
--2021-09-06 19:59:10--  https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/indian-currency-notes-classifier.zip
Resolving cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)... 52.219.66.0
Connecting to cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)|52.219.66.0|:443... connected.
HTTP request sent, awaiting response... 304 Not Modified
File ‘indian-currency-notes-classifier.zip’ not modified on server. Omitting download.

A peek into the number of images in the folders -

In [3]:
data_dir = 'currency'

print("Number of samples")
for f in os.listdir(data_dir):
    folder = os.path.join(data_dir, f)
    if os.path.isdir(folder):
        print()
        print(f.upper())
        for fx in os.listdir(folder):
            subfolder = os.path.join(folder, fx)
            if os.path.isdir(subfolder):
                print(fx, " : ", len(os.listdir(subfolder)))
Number of samples

TEST
Tennote  :  6
Fiftynote  :  6
Twentynote  :  6
2Thousandnote  :  6
2Hundrednote  :  6
5Hundrednote  :  6
1Hundrednote  :  6

TRAIN
Tennote  :  22
Fiftynote  :  22
Twentynote  :  22
2Thousandnote  :  21
2Hundrednote  :  22
5Hundrednote  :  22
1Hundrednote  :  22

This is a nearly balanced dataset - every class has 22 train images (except 2Thousandnote with 21) and 6 test images each.

In [4]:
# Loading the dataset

path = 'currency/'
input_shape = (256, 256, 3)    # image_dataset_from_directory resizes images to 256x256 by default

batch = 64

# The train and test datasets
print("Train dataset")
train_ds = preprocessing.image_dataset_from_directory(path+'Train', batch_size=batch, label_mode='categorical')#, color_mode='grayscale')

print("Test dataset")
test_ds = preprocessing.image_dataset_from_directory(path+'Test', batch_size=batch, label_mode='categorical')#, color_mode='grayscale')
Train dataset
Found 153 files belonging to 7 classes.
Test dataset
Found 42 files belonging to 7 classes.
In [5]:
# Looking into the class labels

class_names = train_ds.class_names

print("Train class names: ", train_ds.class_names)
print("Test class names: ", test_ds.class_names)
Train class names:  ['1Hundrednote', '2Hundrednote', '2Thousandnote', '5Hundrednote', 'Fiftynote', 'Tennote', 'Twentynote']
Test class names:  ['1Hundrednote', '2Hundrednote', '2Thousandnote', '5Hundrednote', 'Fiftynote', 'Tennote', 'Twentynote']

Visualization

In [6]:
num_samples = 4    # the number of samples to be displayed in each class

for x in class_names:
    plt.figure(figsize=(20, 20))

    filenames = os.listdir(path + 'Train/' + x)

    for i in range(num_samples):
        ax = plt.subplot(1, num_samples, i + 1)
        img = Image.open(path +'Train/' + x + '/' + filenames[i])
        plt.imshow(img)
        plt.title(x)
        plt.axis("off")

Preprocessing

In [7]:
# Normalizing the pixel values for faster convergence

normalization_layer = layers.experimental.preprocessing.Rescaling(1./255)

train_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
test_ds = test_ds.map(lambda x, y: (normalization_layer(x), y))
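
An alternative, not used in this notebook, is to make the rescaling part of the model itself so that a saved model accepts raw 0-255 pixel values directly. A minimal sketch, reusing the imports from cell [1]:

# Sketch (assumes the same TF version as above): bake the Rescaling layer
# into the model so preprocessing travels with the saved .h5 file.
inputs_raw = tf.keras.Input(shape=(256, 256, 3))
x_scaled = layers.experimental.preprocessing.Rescaling(1./255)(inputs_raw)
# ... build the rest of the model on x_scaled instead of mapping over the datasets

This avoids having to remember the 1/255 scaling separately at inference or deployment time.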

The model

Using transfer learning

In [8]:
base_model = tf.keras.applications.DenseNet121(weights='imagenet', input_shape=input_shape, include_top=False)    # include_top=False: drop the ImageNet classification head
base_model.trainable = False    # freeze the convolutional base

inputs = tf.keras.Input(shape=input_shape)

x = base_model(inputs, training=False)    # keep batch norm layers in inference mode
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(len(class_names), activation='softmax')(x)    # add our own classification layer

model = tf.keras.Model(inputs, outputs)

cb = [callbacks.EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)]
model.summary()
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/densenet/densenet121_weights_tf_dim_ordering_tf_kernels_notop.h5
29089792/29084464 [==============================] - 0s 0us/step
Model: "functional_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_2 (InputLayer)         [(None, 256, 256, 3)]     0         
_________________________________________________________________
densenet121 (Functional)     (None, 8, 8, 1024)        7037504   
_________________________________________________________________
global_average_pooling2d (Gl (None, 1024)              0         
_________________________________________________________________
dense (Dense)                (None, 7)                 7175      
=================================================================
Total params: 7,044,679
Trainable params: 7,175
Non-trainable params: 7,037,504
_________________________________________________________________
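If validation accuracy plateaus, a common follow-up (not performed in this notebook) is fine-tuning: unfreeze the base and continue training at a much lower learning rate. A minimal sketch using the objects defined above:

# Sketch: optional fine-tuning pass, to be run only after the new head has converged.
base_model.trainable = True    # unfreeze DenseNet121; batch norm stays in inference
                               # mode because the base was called with training=False
model.compile(loss=losses.CategoricalCrossentropy(),
              optimizer=optimizers.Adam(1e-5),    # low LR so pretrained weights change slowly
              metrics=['accuracy'])
# history_ft = model.fit(train_ds, validation_data=test_ds, epochs=10, callbacks=cb)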
In [9]:
model.compile(loss=losses.CategoricalCrossentropy(), optimizer=optimizers.Adam(0.01), metrics=['accuracy'])

history = model.fit(train_ds, validation_data =  test_ds, epochs=256, callbacks = cb)
Epoch 1/256
3/3 [==============================] - 4s 1s/step - loss: 2.5260 - accuracy: 0.2092 - val_loss: 2.4200 - val_accuracy: 0.1429
Epoch 2/256
3/3 [==============================] - 1s 279ms/step - loss: 2.0243 - accuracy: 0.2680 - val_loss: 1.7293 - val_accuracy: 0.2857
Epoch 3/256
3/3 [==============================] - 1s 270ms/step - loss: 1.4306 - accuracy: 0.4183 - val_loss: 1.4618 - val_accuracy: 0.3810
Epoch 4/256
3/3 [==============================] - 1s 273ms/step - loss: 1.1757 - accuracy: 0.6536 - val_loss: 1.1109 - val_accuracy: 0.6667
Epoch 5/256
3/3 [==============================] - 1s 281ms/step - loss: 0.8542 - accuracy: 0.7320 - val_loss: 0.9484 - val_accuracy: 0.7143
Epoch 6/256
3/3 [==============================] - 1s 249ms/step - loss: 0.6459 - accuracy: 0.8105 - val_loss: 0.9508 - val_accuracy: 0.7143
Epoch 7/256
3/3 [==============================] - 1s 270ms/step - loss: 0.5367 - accuracy: 0.8889 - val_loss: 0.7267 - val_accuracy: 0.7619
Epoch 8/256
3/3 [==============================] - 1s 275ms/step - loss: 0.3955 - accuracy: 0.9216 - val_loss: 0.6794 - val_accuracy: 0.7619
Epoch 9/256
3/3 [==============================] - 1s 280ms/step - loss: 0.3518 - accuracy: 0.9281 - val_loss: 0.5801 - val_accuracy: 0.8571
Epoch 10/256
3/3 [==============================] - 1s 240ms/step - loss: 0.2647 - accuracy: 0.9739 - val_loss: 0.5940 - val_accuracy: 0.8333
Epoch 11/256
3/3 [==============================] - 1s 248ms/step - loss: 0.2347 - accuracy: 0.9869 - val_loss: 0.6162 - val_accuracy: 0.7857
Epoch 12/256
3/3 [==============================] - 1s 271ms/step - loss: 0.1947 - accuracy: 0.9935 - val_loss: 0.5520 - val_accuracy: 0.8333
Epoch 13/256
3/3 [==============================] - 1s 278ms/step - loss: 0.1661 - accuracy: 0.9935 - val_loss: 0.5333 - val_accuracy: 0.8095
Epoch 14/256
3/3 [==============================] - 1s 280ms/step - loss: 0.1536 - accuracy: 1.0000 - val_loss: 0.4937 - val_accuracy: 0.8333
Epoch 15/256
3/3 [==============================] - 1s 279ms/step - loss: 0.1316 - accuracy: 1.0000 - val_loss: 0.4649 - val_accuracy: 0.8571
Epoch 16/256
3/3 [==============================] - 1s 271ms/step - loss: 0.1121 - accuracy: 1.0000 - val_loss: 0.4524 - val_accuracy: 0.8810
Epoch 17/256
3/3 [==============================] - 1s 280ms/step - loss: 0.1021 - accuracy: 1.0000 - val_loss: 0.4520 - val_accuracy: 0.9048
Epoch 18/256
3/3 [==============================] - 1s 272ms/step - loss: 0.0926 - accuracy: 1.0000 - val_loss: 0.4499 - val_accuracy: 0.9048
Epoch 19/256
3/3 [==============================] - 1s 279ms/step - loss: 0.0836 - accuracy: 1.0000 - val_loss: 0.4482 - val_accuracy: 0.9048
Epoch 20/256
3/3 [==============================] - 1s 279ms/step - loss: 0.0775 - accuracy: 1.0000 - val_loss: 0.4445 - val_accuracy: 0.8571
Epoch 21/256
3/3 [==============================] - 1s 286ms/step - loss: 0.0720 - accuracy: 1.0000 - val_loss: 0.4293 - val_accuracy: 0.8810
Epoch 22/256
3/3 [==============================] - 1s 298ms/step - loss: 0.0665 - accuracy: 1.0000 - val_loss: 0.4194 - val_accuracy: 0.9286
Epoch 23/256
3/3 [==============================] - 1s 284ms/step - loss: 0.0633 - accuracy: 1.0000 - val_loss: 0.4156 - val_accuracy: 0.9524
Epoch 24/256
3/3 [==============================] - 1s 272ms/step - loss: 0.0603 - accuracy: 1.0000 - val_loss: 0.4080 - val_accuracy: 0.9286
Epoch 25/256
3/3 [==============================] - 1s 249ms/step - loss: 0.0551 - accuracy: 1.0000 - val_loss: 0.4105 - val_accuracy: 0.8571
Epoch 26/256
3/3 [==============================] - 1s 273ms/step - loss: 0.0528 - accuracy: 1.0000 - val_loss: 0.4076 - val_accuracy: 0.8571
Epoch 27/256
3/3 [==============================] - 1s 287ms/step - loss: 0.0496 - accuracy: 1.0000 - val_loss: 0.4006 - val_accuracy: 0.9286
Epoch 28/256
3/3 [==============================] - 1s 272ms/step - loss: 0.0474 - accuracy: 1.0000 - val_loss: 0.3960 - val_accuracy: 0.9286
Epoch 29/256
3/3 [==============================] - 1s 242ms/step - loss: 0.0452 - accuracy: 1.0000 - val_loss: 0.3973 - val_accuracy: 0.9048
Epoch 30/256
3/3 [==============================] - 1s 240ms/step - loss: 0.0434 - accuracy: 1.0000 - val_loss: 0.3973 - val_accuracy: 0.9048
Epoch 31/256
3/3 [==============================] - 1s 278ms/step - loss: 0.0415 - accuracy: 1.0000 - val_loss: 0.3934 - val_accuracy: 0.9048
Epoch 32/256
3/3 [==============================] - 1s 250ms/step - loss: 0.0393 - accuracy: 1.0000 - val_loss: 0.3941 - val_accuracy: 0.8571
Epoch 33/256
3/3 [==============================] - 1s 275ms/step - loss: 0.0381 - accuracy: 1.0000 - val_loss: 0.3923 - val_accuracy: 0.8571
Epoch 34/256
3/3 [==============================] - 1s 271ms/step - loss: 0.0363 - accuracy: 1.0000 - val_loss: 0.3842 - val_accuracy: 0.9286
Epoch 35/256
3/3 [==============================] - 1s 282ms/step - loss: 0.0348 - accuracy: 1.0000 - val_loss: 0.3819 - val_accuracy: 0.9286
Epoch 36/256
3/3 [==============================] - 1s 281ms/step - loss: 0.0335 - accuracy: 1.0000 - val_loss: 0.3817 - val_accuracy: 0.9286
Epoch 37/256
3/3 [==============================] - 1s 241ms/step - loss: 0.0321 - accuracy: 1.0000 - val_loss: 0.3825 - val_accuracy: 0.8810
Epoch 38/256
3/3 [==============================] - 1s 272ms/step - loss: 0.0312 - accuracy: 1.0000 - val_loss: 0.3805 - val_accuracy: 0.8810
Epoch 39/256
3/3 [==============================] - 1s 283ms/step - loss: 0.0302 - accuracy: 1.0000 - val_loss: 0.3752 - val_accuracy: 0.9048
Epoch 40/256
3/3 [==============================] - 1s 272ms/step - loss: 0.0292 - accuracy: 1.0000 - val_loss: 0.3733 - val_accuracy: 0.9286
Epoch 41/256
3/3 [==============================] - 1s 274ms/step - loss: 0.0286 - accuracy: 1.0000 - val_loss: 0.3713 - val_accuracy: 0.9286
Epoch 42/256
3/3 [==============================] - 1s 241ms/step - loss: 0.0273 - accuracy: 1.0000 - val_loss: 0.3714 - val_accuracy: 0.9048
Epoch 43/256
3/3 [==============================] - 1s 249ms/step - loss: 0.0263 - accuracy: 1.0000 - val_loss: 0.3716 - val_accuracy: 0.9048
Epoch 44/256
3/3 [==============================] - 1s 249ms/step - loss: 0.0256 - accuracy: 1.0000 - val_loss: 0.3742 - val_accuracy: 0.9286
Epoch 45/256
3/3 [==============================] - 1s 250ms/step - loss: 0.0248 - accuracy: 1.0000 - val_loss: 0.3742 - val_accuracy: 0.9286
Epoch 46/256
3/3 [==============================] - 1s 289ms/step - loss: 0.0241 - accuracy: 1.0000 - val_loss: 0.3726 - val_accuracy: 0.9286
In [10]:
model.evaluate(test_ds)
1/1 [==============================] - 0s 1ms/step - loss: 0.3713 - accuracy: 0.9286
Out[10]:
[0.37128233909606934, 0.9285714030265808]
In [11]:
# Collecting the test images and labels into arrays for the confusion matrix
Xtest = []
ytest = []

for images, labels in test_ds:    # iterate over the (already normalized) test batches
    Xtest.extend(images.numpy().tolist())
    ytest.extend(labels.numpy().tolist())

len(ytest), len(Xtest)
Out[11]:
(42, 42)
In [12]:
cm = confusion_matrix(np.argmax(ytest, axis=1), np.argmax(model.predict(np.array(Xtest)), axis=1))
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]    # normalize each row so cells show per-class fractions

fig = plt.figure(figsize = (10, 10))
ax = fig.add_subplot(111)

for i in range(cm.shape[1]):
    for j in range(cm.shape[0]):
        if cm[i,j] > 0.8:
            clr = "white"
        else:
            clr = "black"
        ax.text(j, i, format(cm[i, j], '.2f'), horizontalalignment="center", color=clr)

_ = ax.imshow(cm, cmap=plt.cm.Blues)
ax.set_xticks(range(len(class_names)))
ax.set_yticks(range(len(class_names)))
ax.set_xticklabels(class_names, rotation = 90)
ax.set_yticklabels(class_names)
plt.xlabel('Predicted')
plt.ylabel('True')
plt.show()
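
As an aside, recent scikit-learn versions (0.22+) ship a display helper that produces a similar plot in a few lines; a minimal sketch using the normalized cm from above:

from sklearn.metrics import ConfusionMatrixDisplay

# Sketch: an equivalent plot via sklearn's built-in display helper
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=class_names)
disp.plot(cmap=plt.cm.Blues, xticks_rotation=90, values_format='.2f')
plt.show()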

Plotting the metrics

In [13]:
def plot(history, variable, variable2):
    # Plot a training metric and its validation counterpart against epochs
    plt.plot(range(len(history[variable])), history[variable])
    plt.plot(range(len(history[variable2])), history[variable2])
    plt.legend([variable, variable2])
    plt.title(variable)
In [14]:
plot(history.history, "accuracy", 'val_accuracy')
In [15]:
plot(history.history, "loss", 'val_loss')

Predictions

In [16]:
# pick random test data sample from one batch
x = random.randint(0, 41) # test set has 42 samples

for img, label in test_ds.as_numpy_iterator():
    plt.axis('off')    # remove axes
    plt.imshow(img[x])    # batch shape (42, 256, 256, 3) --> single image (256, 256, 3)
    output = model.predict(np.expand_dims(img[x], 0))    # add batch dimension: (256, 256, 3) --> (1, 256, 256, 3)
    pred = np.argmax(output[0])    # index of the highest probability
    print("Predicted: ", class_names[pred])    # picking the label from class_names based on the model output
    print("True: ", class_names[np.argmax(label[x])])
    print("Probability: ", output[0][pred])
    break
Predicted:  Tennote
True:  Tennote
Probability:  0.49427244
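
To run the model on an arbitrary photo rather than a batch from test_ds, a small helper along these lines could work (predict_note and the file path are illustrative names introduced here, not from the original notebook):

# Sketch: predict the denomination of a single image file.
def predict_note(filepath):
    img = Image.open(filepath).convert('RGB').resize((256, 256))    # match the 256x256 training size
    arr = np.array(img) / 255.0    # same 1/255 rescaling as the training pipeline
    output = model.predict(np.expand_dims(arr, 0))    # add the batch dimension
    pred = np.argmax(output[0])
    return class_names[pred], float(output[0][pred])

# e.g. label, prob = predict_note('note.jpg')    # 'note.jpg' is a hypothetical file

Note that the winning probability above is below 0.5, so a real application might want to flag such low-confidence predictions explicitly.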

deepC

In [17]:
model.save('currency.h5')

# !deepCC currency.h5