
Handwritten Optical Character Recognition Calculator

Credit: AITS Cainvas Community

Photo by Pavelas Laptevas for Cub Studio on Dribbble

Importing Necessary Libraries

In [1]:
import numpy as np
import cv2
import os
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, Activation, MaxPool2D, Flatten, Dense, Dropout, BatchNormalization
from tensorflow.keras.initializers import glorot_uniform
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam, SGD
from tensorflow.keras.regularizers import l2
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import LearningRateScheduler, ModelCheckpoint
from tensorflow.keras.utils import to_categorical
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.preprocessing import LabelEncoder
import seaborn as sn
import matplotlib.pyplot as plt
import pandas as pd
import imutils
from imutils.contours import sort_contours
In [2]:
!wget https://cainvas-static.s3.amazonaws.com/media/user_data/Yuvnish17/data.zip
!unzip -qo data.zip
--2021-07-14 10:44:50--  https://cainvas-static.s3.amazonaws.com/media/user_data/Yuvnish17/data.zip
Resolving cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)... 52.219.62.88
Connecting to cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)|52.219.62.88|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 28437489 (27M) [application/x-zip-compressed]
Saving to: ‘data.zip’

data.zip            100%[===================>]  27.12M  80.3MB/s    in 0.3s    

2021-07-14 10:44:50 (80.3 MB/s) - ‘data.zip’ saved [28437489/28437489]

Loading the Dataset

In [3]:
x = []
y = []
datadir = 'data/dataset'
for folder in os.listdir(datadir):
    path = os.path.join(datadir, folder)
    for images in os.listdir(path):
        img = cv2.imread(os.path.join(path, images))
        x.append(img)
        y.append(folder)
        
print(len(x))
print(len(y))
print(f'labels : {list(set(y))}')
7600
7600
labels : ['7', '9', '6', 'add', '1', '0', 'mul', '4', '2', 'sub', '3', '5', '8', 'div']

Visualizing Images in the Dataset

In [4]:
figure = plt.figure(figsize=(10, 10))
j = 0
for i in list(set(y)):
    idx = y.index(i)
    img = x[idx]
    img = cv2.resize(img, (256, 256))
    figure.add_subplot(5, 5, j+1)
    plt.imshow(img)
    plt.axis('off')
    plt.title(i)
    j += 1

Data Distribution of the Dataset

In [5]:
unique, count = np.unique(y, return_counts=True)
figure = plt.figure(figsize=(20, 10))
sn.barplot(x=unique, y=count).set_title('Number of Images per Category')
plt.show()

As can be seen, the classes are roughly equally represented, so the dataset needs no rebalancing before training.
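
As a quick numeric check (a sketch, not part of the original run), the imbalance can be quantified as the ratio between the largest and smallest class counts; values close to 1.0 confirm a balanced set:

unique, count = np.unique(y, return_counts=True)
print(dict(zip(unique, count)))
print(f'imbalance ratio (max/min): {count.max() / count.min():.2f}')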

Preprocessing the Data

In [6]:
X = []
for i in range(len(x)):
#     print(i)
    img = x[i]
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    threshold_image = cv2.threshold(img_gray, 0, 255, cv2.THRESH_BINARY_INV|cv2.THRESH_OTSU)[1]
    threshold_image = cv2.resize(threshold_image, (32, 32))
    X.append(threshold_image)
print(len(X))
7600
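
To eyeball the effect of the Otsu thresholding above (a sketch, not part of the original run), one preprocessed sample can be displayed; the character should appear white on a black background because of THRESH_BINARY_INV:

plt.figure(figsize=(3, 3))
plt.imshow(X[0], cmap='gray')   # a 32x32 binarized image
plt.axis('off')
plt.show()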
In [7]:
label_encoder = LabelEncoder()
y = label_encoder.fit_transform(y)
print(len(y))
7600
In [8]:
X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.2)
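
A side note (sketch only, not the split used for the results below): passing stratify=y would preserve the per-class proportions in both subsets, which matters more for datasets less balanced than this one:

X_train, X_test, Y_train, Y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)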

Data Distribution in Train and Test Set

In [9]:
unique_train, count_train = np.unique(Y_train, return_counts=True)
figure = plt.figure(figsize=(20, 10))
sn.barplot(x=unique_train, y=count_train).set_title('Number of Images per Category in Train Set')
plt.show()
In [10]:
unique_test, count_test = np.unique(Y_test, return_counts=True)
figure = plt.figure(figsize=(20, 10))
sn.barplot(x=unique_test, y=count_test).set_title('Number of Images per Category in Test Set')
plt.show()

Defining the Model

In [11]:
X_train = np.array(X_train)
X_test = np.array(X_test)
Y_train = np.array(Y_train)
Y_test = np.array(Y_test)

Y_train = to_categorical(Y_train)
Y_test = to_categorical(Y_test)
X_train = np.expand_dims(X_train, axis=-1)
X_test = np.expand_dims(X_test, axis=-1)
X_train = X_train/255.
X_test = X_test/255.

print(X_train.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
(6080, 32, 32, 1)
(1520, 32, 32, 1)
(6080, 14)
(1520, 14)
In [12]:
def math_symbol_and_digits_recognition(input_shape=(32, 32, 1)):
    regularizer = l2(0.01)
    model = Sequential()
    model.add(Input(shape=input_shape))
    model.add(Conv2D(32, (3, 3), strides=(1, 1), padding='same', 
                     kernel_initializer=glorot_uniform(seed=0), 
                     name='conv1', activity_regularizer=regularizer))
    model.add(Activation(activation='relu', name='act1'))
    model.add(MaxPool2D((2, 2), strides=(2, 2)))
    model.add(Conv2D(32, (3, 3), strides=(1, 1), padding='same', 
                     kernel_initializer=glorot_uniform(seed=0), 
                     name='conv2', activity_regularizer=regularizer))
    model.add(Activation(activation='relu', name='act2'))
    model.add(MaxPool2D((2, 2), strides=(2, 2)))
    model.add(Conv2D(64, (3, 3), strides=(1, 1), padding='same', 
                     kernel_initializer=glorot_uniform(seed=0), 
                     name='conv3', activity_regularizer=regularizer))
    model.add(Activation(activation='relu', name='act3'))
    model.add(MaxPool2D((2, 2), strides=(2, 2)))
    model.add(Flatten())
    model.add(Dropout(0.5))
    model.add(Dense(120, activation='relu', kernel_initializer=glorot_uniform(seed=0), name='fc1'))
    model.add(Dense(84, activation='relu', kernel_initializer=glorot_uniform(seed=0), name='fc2'))
    model.add(Dense(14, activation='softmax', kernel_initializer=glorot_uniform(seed=0), name='fc3'))
    
    optimizer = Adam()
    model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model
In [13]:
model = math_symbol_and_digits_recognition(input_shape=(32, 32, 1))
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv1 (Conv2D)               (None, 32, 32, 32)        320       
_________________________________________________________________
act1 (Activation)            (None, 32, 32, 32)        0         
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 16, 16, 32)        0         
_________________________________________________________________
conv2 (Conv2D)               (None, 16, 16, 32)        9248      
_________________________________________________________________
act2 (Activation)            (None, 16, 16, 32)        0         
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 8, 8, 32)          0         
_________________________________________________________________
conv3 (Conv2D)               (None, 8, 8, 64)          18496     
_________________________________________________________________
act3 (Activation)            (None, 8, 8, 64)          0         
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 4, 4, 64)          0         
_________________________________________________________________
flatten (Flatten)            (None, 1024)              0         
_________________________________________________________________
dropout (Dropout)            (None, 1024)              0         
_________________________________________________________________
fc1 (Dense)                  (None, 120)               123000    
_________________________________________________________________
fc2 (Dense)                  (None, 84)                10164     
_________________________________________________________________
fc3 (Dense)                  (None, 14)                1190      
=================================================================
Total params: 162,418
Trainable params: 162,418
Non-trainable params: 0
_________________________________________________________________

Training the Model

In [14]:
def step_decay(epoch):
    initial_learning_rate = 0.001
    dropEvery = 10
    factor = 0.5
    lr = initial_learning_rate*(factor**np.floor((1 + epoch)/dropEvery))
    return float(lr)

checkpoint = ModelCheckpoint('maths_symbol_and_digits_recognition.h5', 
                             monitor='val_loss', save_best_only=True, 
                             verbose=1, mode='min')

callbacks = [checkpoint, LearningRateScheduler(step_decay)]
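
As a quick sanity check on the schedule (a sketch, not part of the original run), step_decay can be evaluated for a few epochs; with dropEvery=10 and factor=0.5 the learning rate halves every 10 epochs, the first drop landing at epoch 9 because of the (1 + epoch) offset:

for epoch in [0, 8, 9, 19, 29, 99]:
    print(f'epoch {epoch:3d}: lr = {step_decay(epoch):.6f}')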
In [15]:
aug = ImageDataGenerator(zoom_range=0.1,
#                          rotation_range=5,
                         width_shift_range=0.05,
                         height_shift_range=0.05)
# batch size is set on the generator; note that the callbacks defined above
# (checkpoint and LR scheduler) are not passed here, so this run uses neither
hist = model.fit(aug.flow(X_train, Y_train, batch_size=128), epochs=100, validation_data=(X_test, Y_test))
# hist2 = model2.fit(X_train, Y_train, batch_size=128, epochs=100, validation_data=(X_test, Y_test), callbacks=callbacks)
Epoch 1/100
48/48 [==============================] - 1s 26ms/step - loss: 2.7688 - accuracy: 0.2326 - val_loss: 2.2496 - val_accuracy: 0.4625
Epoch 2/100
48/48 [==============================] - 1s 24ms/step - loss: 1.7632 - accuracy: 0.5387 - val_loss: 1.5668 - val_accuracy: 0.6546
Epoch 3/100
48/48 [==============================] - 1s 22ms/step - loss: 1.2001 - accuracy: 0.6982 - val_loss: 1.1631 - val_accuracy: 0.7355
Epoch 4/100
48/48 [==============================] - 1s 22ms/step - loss: 0.9583 - accuracy: 0.7655 - val_loss: 1.3024 - val_accuracy: 0.6651
Epoch 5/100
48/48 [==============================] - 1s 22ms/step - loss: 0.8343 - accuracy: 0.8036 - val_loss: 1.0532 - val_accuracy: 0.7500
Epoch 6/100
48/48 [==============================] - 1s 22ms/step - loss: 0.7335 - accuracy: 0.8268 - val_loss: 0.7759 - val_accuracy: 0.8395
Epoch 7/100
48/48 [==============================] - 1s 22ms/step - loss: 0.6480 - accuracy: 0.8497 - val_loss: 0.6575 - val_accuracy: 0.8789
Epoch 8/100
48/48 [==============================] - 1s 25ms/step - loss: 0.5972 - accuracy: 0.8638 - val_loss: 0.6912 - val_accuracy: 0.8651
Epoch 9/100
48/48 [==============================] - 1s 22ms/step - loss: 0.5434 - accuracy: 0.8824 - val_loss: 0.5654 - val_accuracy: 0.9039
Epoch 10/100
48/48 [==============================] - 1s 22ms/step - loss: 0.5270 - accuracy: 0.8821 - val_loss: 0.6587 - val_accuracy: 0.8704
Epoch 11/100
48/48 [==============================] - 1s 22ms/step - loss: 0.4658 - accuracy: 0.9041 - val_loss: 0.5425 - val_accuracy: 0.8967
Epoch 12/100
48/48 [==============================] - 1s 22ms/step - loss: 0.4727 - accuracy: 0.8944 - val_loss: 0.4658 - val_accuracy: 0.9237
Epoch 13/100
48/48 [==============================] - 1s 22ms/step - loss: 0.4278 - accuracy: 0.9084 - val_loss: 0.5596 - val_accuracy: 0.8796
Epoch 14/100
48/48 [==============================] - 1s 22ms/step - loss: 0.3940 - accuracy: 0.9153 - val_loss: 0.4545 - val_accuracy: 0.9132
Epoch 15/100
48/48 [==============================] - 1s 22ms/step - loss: 0.3707 - accuracy: 0.9250 - val_loss: 0.4125 - val_accuracy: 0.9237
Epoch 16/100
48/48 [==============================] - 1s 22ms/step - loss: 0.3716 - accuracy: 0.9189 - val_loss: 0.3908 - val_accuracy: 0.9342
Epoch 17/100
48/48 [==============================] - 1s 22ms/step - loss: 0.3558 - accuracy: 0.9255 - val_loss: 0.3957 - val_accuracy: 0.9211
Epoch 18/100
48/48 [==============================] - 1s 22ms/step - loss: 0.3501 - accuracy: 0.9250 - val_loss: 0.7237 - val_accuracy: 0.8171
Epoch 19/100
48/48 [==============================] - 1s 22ms/step - loss: 0.3521 - accuracy: 0.9247 - val_loss: 0.4411 - val_accuracy: 0.9211
Epoch 20/100
48/48 [==============================] - 1s 22ms/step - loss: 0.3119 - accuracy: 0.9357 - val_loss: 0.3472 - val_accuracy: 0.9428
Epoch 21/100
48/48 [==============================] - 1s 22ms/step - loss: 0.2925 - accuracy: 0.9400 - val_loss: 0.3195 - val_accuracy: 0.9467
Epoch 22/100
48/48 [==============================] - 1s 22ms/step - loss: 0.2885 - accuracy: 0.9383 - val_loss: 0.3732 - val_accuracy: 0.9388
Epoch 23/100
48/48 [==============================] - 1s 22ms/step - loss: 0.2831 - accuracy: 0.9395 - val_loss: 0.3053 - val_accuracy: 0.9539
Epoch 24/100
48/48 [==============================] - 1s 22ms/step - loss: 0.2636 - accuracy: 0.9461 - val_loss: 0.3713 - val_accuracy: 0.9250
Epoch 25/100
48/48 [==============================] - 1s 22ms/step - loss: 0.2649 - accuracy: 0.9454 - val_loss: 0.3149 - val_accuracy: 0.9500
Epoch 26/100
48/48 [==============================] - 1s 22ms/step - loss: 0.2472 - accuracy: 0.9512 - val_loss: 0.2399 - val_accuracy: 0.9684
Epoch 27/100
48/48 [==============================] - 1s 22ms/step - loss: 0.2543 - accuracy: 0.9474 - val_loss: 0.2570 - val_accuracy: 0.9678
Epoch 28/100
48/48 [==============================] - 1s 22ms/step - loss: 0.2358 - accuracy: 0.9505 - val_loss: 0.2897 - val_accuracy: 0.9474
Epoch 29/100
48/48 [==============================] - 1s 22ms/step - loss: 0.2345 - accuracy: 0.9505 - val_loss: 0.2637 - val_accuracy: 0.9579
Epoch 30/100
48/48 [==============================] - 1s 22ms/step - loss: 0.2330 - accuracy: 0.9548 - val_loss: 0.2590 - val_accuracy: 0.9592
Epoch 31/100
48/48 [==============================] - 1s 22ms/step - loss: 0.2360 - accuracy: 0.9475 - val_loss: 0.2184 - val_accuracy: 0.9717
Epoch 32/100
48/48 [==============================] - 1s 22ms/step - loss: 0.2031 - accuracy: 0.9594 - val_loss: 0.2258 - val_accuracy: 0.9684
Epoch 33/100
48/48 [==============================] - 1s 22ms/step - loss: 0.2069 - accuracy: 0.9620 - val_loss: 0.2056 - val_accuracy: 0.9743
Epoch 34/100
48/48 [==============================] - 1s 22ms/step - loss: 0.2096 - accuracy: 0.9563 - val_loss: 0.2136 - val_accuracy: 0.9717
Epoch 35/100
48/48 [==============================] - 1s 23ms/step - loss: 0.2255 - accuracy: 0.9526 - val_loss: 0.2413 - val_accuracy: 0.9638
Epoch 36/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1949 - accuracy: 0.9627 - val_loss: 0.1834 - val_accuracy: 0.9750
Epoch 37/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1930 - accuracy: 0.9613 - val_loss: 0.1939 - val_accuracy: 0.9724
Epoch 38/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1977 - accuracy: 0.9589 - val_loss: 0.1822 - val_accuracy: 0.9757
Epoch 39/100
48/48 [==============================] - 1s 22ms/step - loss: 0.2027 - accuracy: 0.9605 - val_loss: 0.1857 - val_accuracy: 0.9809
Epoch 40/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1910 - accuracy: 0.9640 - val_loss: 0.1989 - val_accuracy: 0.9704
Epoch 41/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1705 - accuracy: 0.9666 - val_loss: 0.2105 - val_accuracy: 0.9618
Epoch 42/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1777 - accuracy: 0.9633 - val_loss: 0.1837 - val_accuracy: 0.9743
Epoch 43/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1673 - accuracy: 0.9688 - val_loss: 0.1725 - val_accuracy: 0.9717
Epoch 44/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1740 - accuracy: 0.9655 - val_loss: 0.1835 - val_accuracy: 0.9750
Epoch 45/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1601 - accuracy: 0.9684 - val_loss: 0.1503 - val_accuracy: 0.9855
Epoch 46/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1832 - accuracy: 0.9609 - val_loss: 0.1928 - val_accuracy: 0.9691
Epoch 47/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1601 - accuracy: 0.9691 - val_loss: 0.1639 - val_accuracy: 0.9783
Epoch 48/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1615 - accuracy: 0.9671 - val_loss: 0.1623 - val_accuracy: 0.9816
Epoch 49/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1552 - accuracy: 0.9692 - val_loss: 0.1550 - val_accuracy: 0.9770
Epoch 50/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1439 - accuracy: 0.9734 - val_loss: 0.1630 - val_accuracy: 0.9757
Epoch 51/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1536 - accuracy: 0.9712 - val_loss: 0.1520 - val_accuracy: 0.9796
Epoch 52/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1450 - accuracy: 0.9727 - val_loss: 0.1775 - val_accuracy: 0.9737
Epoch 53/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1501 - accuracy: 0.9699 - val_loss: 0.1608 - val_accuracy: 0.9770
Epoch 54/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1576 - accuracy: 0.9673 - val_loss: 0.1456 - val_accuracy: 0.9888
Epoch 55/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1437 - accuracy: 0.9722 - val_loss: 0.1406 - val_accuracy: 0.9842
Epoch 56/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1714 - accuracy: 0.9630 - val_loss: 0.1804 - val_accuracy: 0.9724
Epoch 57/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1325 - accuracy: 0.9752 - val_loss: 0.1411 - val_accuracy: 0.9809
Epoch 58/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1273 - accuracy: 0.9762 - val_loss: 0.1286 - val_accuracy: 0.9855
Epoch 59/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1525 - accuracy: 0.9686 - val_loss: 0.1764 - val_accuracy: 0.9750
Epoch 60/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1370 - accuracy: 0.9745 - val_loss: 0.1331 - val_accuracy: 0.9875
Epoch 61/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1377 - accuracy: 0.9720 - val_loss: 0.1665 - val_accuracy: 0.9730
Epoch 62/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1642 - accuracy: 0.9648 - val_loss: 0.1489 - val_accuracy: 0.9849
Epoch 63/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1428 - accuracy: 0.9735 - val_loss: 0.1319 - val_accuracy: 0.9868
Epoch 64/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1503 - accuracy: 0.9711 - val_loss: 0.1589 - val_accuracy: 0.9796
Epoch 65/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1278 - accuracy: 0.9757 - val_loss: 0.1377 - val_accuracy: 0.9855
Epoch 66/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1286 - accuracy: 0.9742 - val_loss: 0.1430 - val_accuracy: 0.9829
Epoch 67/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1261 - accuracy: 0.9758 - val_loss: 0.1331 - val_accuracy: 0.9816
Epoch 68/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1213 - accuracy: 0.9781 - val_loss: 0.1314 - val_accuracy: 0.9822
Epoch 69/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1195 - accuracy: 0.9766 - val_loss: 0.1145 - val_accuracy: 0.9901
Epoch 70/100
48/48 [==============================] - 1s 23ms/step - loss: 0.1198 - accuracy: 0.9762 - val_loss: 0.1918 - val_accuracy: 0.9625
Epoch 71/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1311 - accuracy: 0.9720 - val_loss: 0.1294 - val_accuracy: 0.9862
Epoch 72/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1100 - accuracy: 0.9804 - val_loss: 0.1158 - val_accuracy: 0.9888
Epoch 73/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1214 - accuracy: 0.9753 - val_loss: 0.1739 - val_accuracy: 0.9704
Epoch 74/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1166 - accuracy: 0.9766 - val_loss: 0.1307 - val_accuracy: 0.9809
Epoch 75/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1012 - accuracy: 0.9831 - val_loss: 0.1103 - val_accuracy: 0.9868
Epoch 76/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1094 - accuracy: 0.9801 - val_loss: 0.1137 - val_accuracy: 0.9895
Epoch 77/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1025 - accuracy: 0.9811 - val_loss: 0.1386 - val_accuracy: 0.9757
Epoch 78/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1114 - accuracy: 0.9783 - val_loss: 0.1119 - val_accuracy: 0.9868
Epoch 79/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1153 - accuracy: 0.9776 - val_loss: 0.1170 - val_accuracy: 0.9868
Epoch 80/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1095 - accuracy: 0.9803 - val_loss: 0.1128 - val_accuracy: 0.9888
Epoch 81/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1230 - accuracy: 0.9740 - val_loss: 0.1186 - val_accuracy: 0.9901
Epoch 82/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1126 - accuracy: 0.9803 - val_loss: 0.1085 - val_accuracy: 0.9875
Epoch 83/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1034 - accuracy: 0.9801 - val_loss: 0.1264 - val_accuracy: 0.9842
Epoch 84/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1099 - accuracy: 0.9785 - val_loss: 0.1029 - val_accuracy: 0.9875
Epoch 85/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1051 - accuracy: 0.9796 - val_loss: 0.1123 - val_accuracy: 0.9849
Epoch 86/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1026 - accuracy: 0.9808 - val_loss: 0.1043 - val_accuracy: 0.9888
Epoch 87/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1238 - accuracy: 0.9771 - val_loss: 0.1150 - val_accuracy: 0.9882
Epoch 88/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1022 - accuracy: 0.9814 - val_loss: 0.1233 - val_accuracy: 0.9816
Epoch 89/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1180 - accuracy: 0.9773 - val_loss: 0.1192 - val_accuracy: 0.9888
Epoch 90/100
48/48 [==============================] - 1s 22ms/step - loss: 0.0992 - accuracy: 0.9824 - val_loss: 0.1028 - val_accuracy: 0.9888
Epoch 91/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1030 - accuracy: 0.9803 - val_loss: 0.1233 - val_accuracy: 0.9816
Epoch 92/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1065 - accuracy: 0.9778 - val_loss: 0.1084 - val_accuracy: 0.9849
Epoch 93/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1295 - accuracy: 0.9752 - val_loss: 0.1108 - val_accuracy: 0.9895
Epoch 94/100
48/48 [==============================] - 1s 22ms/step - loss: 0.0981 - accuracy: 0.9836 - val_loss: 0.1048 - val_accuracy: 0.9888
Epoch 95/100
48/48 [==============================] - 1s 22ms/step - loss: 0.0971 - accuracy: 0.9832 - val_loss: 0.2002 - val_accuracy: 0.9572
Epoch 96/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1141 - accuracy: 0.9778 - val_loss: 0.1107 - val_accuracy: 0.9868
Epoch 97/100
48/48 [==============================] - 1s 22ms/step - loss: 0.1044 - accuracy: 0.9803 - val_loss: 0.1221 - val_accuracy: 0.9882
Epoch 98/100
48/48 [==============================] - 1s 22ms/step - loss: 0.0880 - accuracy: 0.9872 - val_loss: 0.1056 - val_accuracy: 0.9875
Epoch 99/100
48/48 [==============================] - 1s 22ms/step - loss: 0.0973 - accuracy: 0.9798 - val_loss: 0.0976 - val_accuracy: 0.9895
Epoch 100/100
48/48 [==============================] - 1s 22ms/step - loss: 0.0909 - accuracy: 0.9826 - val_loss: 0.0994 - val_accuracy: 0.9901

Loss and Accuracy Plot

In [16]:
figure = plt.figure(figsize=(10, 10))
plt.plot(hist.history['accuracy'], label='Train Set Accuracy')
plt.plot(hist.history['val_accuracy'], label='Test Set Accuracy')
plt.title('Accuracy Plot')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='upper right')
plt.show()

figure2 = plt.figure(figsize=(10, 10))
plt.plot(hist.history['loss'], label='Train Set Loss')
plt.plot(hist.history['val_loss'], label='Test Set Loss')
plt.title('Loss Plot')
plt.xlabel('Epochs')
plt.ylabel('Loss Value')
plt.legend(loc='upper right')
plt.show()

Classification Report

In [17]:
ypred = model.predict(X_test)
ypred = np.argmax(ypred, axis=1)
Y_test_hat = np.argmax(Y_test, axis=1)
print(classification_report(Y_test_hat, ypred))
              precision    recall  f1-score   support

           0       0.95      1.00      0.97       109
           1       0.99      0.99      0.99       114
           2       1.00      1.00      1.00        78
           3       1.00      0.98      0.99       112
           4       0.99      0.99      0.99       110
           5       1.00      0.99      0.99        97
           6       0.99      0.97      0.98       105
           7       0.98      0.99      0.99       110
           8       0.99      0.98      0.99       112
           9       0.98      0.98      0.98       125
          10       1.00      0.99      1.00       112
          11       1.00      1.00      1.00       110
          12       0.99      1.00      1.00       101
          13       1.00      1.00      1.00       125

    accuracy                           0.99      1520
   macro avg       0.99      0.99      0.99      1520
weighted avg       0.99      0.99      0.99      1520

Confusion Matrix

In [18]:
matrix = confusion_matrix(Y_test_hat, ypred)
df_cm = pd.DataFrame(matrix, index=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13], 
                     columns=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13])
figure = plt.figure(figsize=(20, 10))
sn.heatmap(df_cm, annot=True, fmt='d')
Out[18]:
<AxesSubplot:>

Saving the Model

In [19]:
model.save('maths_symbol_and_digit_recognition.h5')
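
To double-check the saved file (a minimal sketch, not part of the original run), the model can be reloaded and re-evaluated; the scores should match the final validation epoch above:

from tensorflow.keras.models import load_model

reloaded = load_model('maths_symbol_and_digit_recognition.h5')
loss, acc = reloaded.evaluate(X_test, Y_test, verbose=0)
print(f'reloaded model - loss: {loss:.4f}, accuracy: {acc:.4f}')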

Testing the Model

In [20]:
def test_pipeline(image_path):
    img = cv2.imread(image_path)
    img = cv2.resize(img, (800, 800))
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # blurred = cv2.GaussianBlur(img_gray, (3, 3), 0)
    edged = cv2.Canny(img_gray, 30, 150)
    contours = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = imutils.grab_contours(contours)
    contours = sort_contours(contours, method="left-to-right")[0]
    labels = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'add', 'div', 'mul', 'sub']

    for c in contours:
        (x, y, w, h) = cv2.boundingRect(c)
        if w >= 20 and h >= 30:  # keep only boxes large enough to be a character
            roi = img_gray[y:y+h, x:x+w]
            thresh = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
            (th, tw) = thresh.shape
            if tw > th:
                thresh = imutils.resize(thresh, width=32)
            if th > tw:
                thresh = imutils.resize(thresh, height=32)
            (th, tw) = thresh.shape
            dx = int(max(0, 32 - tw)/2.0)
            dy = int(max(0, 32 - th) / 2.0)
            padded = cv2.copyMakeBorder(thresh, top=dy, bottom=dy, left=dx, right=dx, borderType=cv2.BORDER_CONSTANT,
                                       value=(0, 0, 0))
            padded = cv2.resize(padded, (32, 32))
            padded = np.array(padded)
            padded = padded/255.
            padded = np.expand_dims(padded, axis=0)
            padded = np.expand_dims(padded, axis=-1)
            pred = model.predict(padded)
            pred = np.argmax(pred, axis=1)
            label = labels[pred[0]]
            cv2.rectangle(img, (x, y), (x+w, y+h), (0, 0, 255), 2)
            cv2.putText(img, label, (x-5, y), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255))

    figure = plt.figure(figsize=(10, 10))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    plt.imshow(img)
    plt.axis('off')
    plt.show()
In [21]:
test_pipeline('data/test.jpg')

Pipeline for Expression Solving

For example, suppose the expression to be solved is 22+16x16. The current model does not recognize brackets, so the expression is evaluated with standard operator precedence, i.e. as 22+(16x16); the pipeline below follows the same convention.
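
Because the reconstructed string is ultimately handed to Python's eval, standard operator precedence applies automatically; a minimal illustration with hypothetical strings (not from the dataset):

# multiplication binds tighter than addition, so no brackets are needed
print(eval('22+16*16'))   # 278, i.e. 22 + (16 * 16)
# same-precedence operators evaluate left to right
print(eval('22-16+4'))    # 10, i.e. (22 - 16) + 4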

In [22]:
def test_pipeline_equation(image_path):
    chars = []
    img = cv2.imread(image_path)
    img = cv2.resize(img, (800, 800))
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # blurred = cv2.GaussianBlur(img_gray, (3, 3), 0)
    edged = cv2.Canny(img_gray, 30, 150)
    contours = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = imutils.grab_contours(contours)
    contours = sort_contours(contours, method="left-to-right")[0]
    labels = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'add', 'div', 'mul', 'sub']

    for c in contours:
        (x, y, w, h) = cv2.boundingRect(c)
        if w >= 20 and h >= 30:  # keep only boxes large enough to be a character
            roi = img_gray[y:y+h, x:x+w]
            thresh = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
            (th, tw) = thresh.shape
            if tw > th:
                thresh = imutils.resize(thresh, width=32)
            if th > tw:
                thresh = imutils.resize(thresh, height=32)
            (th, tw) = thresh.shape
            dx = int(max(0, 32 - tw)/2.0)
            dy = int(max(0, 32 - th) / 2.0)
            padded = cv2.copyMakeBorder(thresh, top=dy, bottom=dy, left=dx, right=dx, borderType=cv2.BORDER_CONSTANT,
                                       value=(0, 0, 0))
            padded = cv2.resize(padded, (32, 32))
            padded = np.array(padded)
            padded = padded/255.
            padded = np.expand_dims(padded, axis=0)
            padded = np.expand_dims(padded, axis=-1)
            pred = model.predict(padded)
            pred = np.argmax(pred, axis=1)
    #         print(pred)
            label = labels[pred[0]]
            chars.append(label)
            cv2.rectangle(img, (x, y), (x+w, y+h), (0, 0, 255), 2)
            cv2.putText(img, label, (x-5, y), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255))

    figure = plt.figure(figsize=(10, 10))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    plt.imshow(img)
    plt.axis('off')
    plt.show()
    
    e = ''
    for i in chars:
        if i=='add':
            e += '+'
        elif i=='sub':
            e += '-'
        elif i=='mul':
            e += '*'
        elif i=='div':
            e += '/'
        else:
            e += i
    v = eval(e)
    print('Value of the expression {} : {}'.format(e, v)) 
In [23]:
test_pipeline_equation('data/test_equation4.jpg')
Value of the expression 22+16*16 : 278

DeepCC

In [24]:
!deepCC maths_symbol_and_digit_recognition.h5
[INFO]
Reading [keras model] 'maths_symbol_and_digit_recognition.h5'
[SUCCESS]
Saved 'maths_symbol_and_digit_recognition_deepC/maths_symbol_and_digit_recognition.onnx'
[INFO]
Reading [onnx model] 'maths_symbol_and_digit_recognition_deepC/maths_symbol_and_digit_recognition.onnx'
[INFO]
Model info:
  ir_vesion : 5
  doc       : 
[WARNING]
[ONNX]: graph-node conv1's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: graph-node conv2's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: graph-node conv3's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: terminal (input/output) input_1's shape is less than 1. Changing it to 1.
[WARNING]
[ONNX]: terminal (input/output) fc3's shape is less than 1. Changing it to 1.
WARN (GRAPH): found operator node with the same name (fc3) as io node.
[INFO]
Running DNNC graph sanity check ...
[SUCCESS]
Passed sanity check.
[INFO]
Writing C++ file 'maths_symbol_and_digit_recognition_deepC/maths_symbol_and_digit_recognition.cpp'
[INFO]
deepSea model files are ready in 'maths_symbol_and_digit_recognition_deepC/' 
[RUNNING COMMAND]
g++ -std=c++11 -O3 -fno-rtti -fno-exceptions -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -isystem /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 "maths_symbol_and_digit_recognition_deepC/maths_symbol_and_digit_recognition.cpp" -D_AITS_MAIN -o "maths_symbol_and_digit_recognition_deepC/maths_symbol_and_digit_recognition.exe"
[RUNNING COMMAND]
size "maths_symbol_and_digit_recognition_deepC/maths_symbol_and_digit_recognition.exe"
   text	   data	    bss	    dec	    hex	filename
 827157	   3792	    760	 831709	  cb0dd	maths_symbol_and_digit_recognition_deepC/maths_symbol_and_digit_recognition.exe
[SUCCESS]
Saved model as executable "maths_symbol_and_digit_recognition_deepC/maths_symbol_and_digit_recognition.exe"