
Pedestrian Detection using a CNN

Credit: AITS Cainvas Community

Photo by Antonius Setiadi K on Dribbble

Downloading the dataset

In [1]:
!wget -N "https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/PedestrianDataset.zip"
--2021-08-01 09:49:36--  https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/PedestrianDataset.zip
Resolving cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)... 52.219.62.48
Connecting to cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)|52.219.62.48|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 13919800 (13M) [application/x-zip-compressed]
Saving to: ‘PedestrianDataset.zip’

PedestrianDataset.z 100%[===================>]  13.27M  --.-KB/s    in 0.1s    

2021-08-01 09:49:36 (125 MB/s) - ‘PedestrianDataset.zip’ saved [13919800/13919800]

Extracting the dataset and removing the zip file

In [2]:
!unzip -qo "PedestrianDataset.zip"
!rm "PedestrianDataset.zip"

Importing relevant libraries

In [3]:
import numpy as np 
import pandas as pd 

import cv2
import os
from xml.etree import ElementTree
from matplotlib import pyplot as plt
In [4]:
import tensorflow as tf
from sklearn.metrics import confusion_matrix
from tensorflow.keras import datasets, layers, models
keras = tf.keras
In [5]:
class_names = ['person', 'person-like']
class_names_label = {class_name: i for i, class_name in enumerate(class_names)}

n_classes = len(class_names)
size = (120, 120)
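
For reference, the resulting mapping is {'person': 0, 'person-like': 1}, so label 0 means 'person' and label 1 means 'person-like' throughout the notebook.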

Defining a function to load the dataset

In [6]:
def load_data():
    """Load the Train/Test/Val splits: read every image, resize it to `size`,
    and label it with the class of the first object in its XML annotation."""
    datasets = ['Pedestrian_Detection/Train/Train', 'Pedestrian_Detection/Test/Test', 'Pedestrian_Detection/Val/Val']
    output = []

    for dataset in datasets:
        images = []
        labels = []
        directoryA = dataset + "/Annotations"
        directoryIMG = dataset + "/JPEGImages/"
        files = os.listdir(directoryA)
        imgs = os.listdir(directoryIMG)
        # sort both listings so annotation i pairs with image i
        files.sort()
        imgs.sort()

        i = 0
        for xml in files:
            # the image label is the class of the first annotated object
            xmlf = os.path.join(directoryA, xml)
            dom = ElementTree.parse(xmlf)
            vb = dom.findall('object')
            label = vb[0].find('name').text
            labels.append(class_names_label[label])

            # read the matching image and resize it to the network input size
            img_path = directoryIMG + imgs[i]
            curr_img = cv2.imread(img_path)
            curr_img = cv2.resize(curr_img, size)
            images.append(curr_img)
            i += 1

        # scale pixel values to [0, 1]
        images = np.array(images, dtype='float32')
        images = images / 255

        labels = np.array(labels, dtype='int32')

        output.append((images, labels))
    return output
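
The loader takes each image's label from the first <object>'s <name> tag in its PASCAL-VOC-style XML annotation. A minimal sketch for inspecting one annotation file (the path is assumed from the directories used above):

ann_dir = 'Pedestrian_Detection/Train/Train/Annotations'
first = sorted(os.listdir(ann_dir))[0]
dom = ElementTree.parse(os.path.join(ann_dir, first))
for obj in dom.findall('object'):
    print(obj.find('name').text)  # class name of each annotated object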
In [7]:
# Optional cleanup: stray .ipynb_checkpoints folders inside Annotations/ would break
# the one-to-one pairing of annotation and image files assumed by load_data().
#import random, datetime, os, shutil, math
#shutil.rmtree("Pedestrian_Detection/Test/Test/Annotations/.ipynb_checkpoints")
#shutil.rmtree("Pedestrian_Detection/Val/Val/Annotations/.ipynb_checkpoints")
#shutil.rmtree("Pedestrian_Detection/Train/Train/Annotations/.ipynb_checkpoints")
In [8]:
(train_images, train_labels),(test_images, test_labels),(val_images, val_labels) = load_data()

Checking shapes

In [9]:
train_images.shape
Out[9]:
(944, 120, 120, 3)
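
The remaining splits can be checked the same way; a small sketch that also reports the class balance (np.bincount counts how many 0 and 1 labels each split contains):

for name, (x, y) in [('train', (train_images, train_labels)),
                     ('test', (test_images, test_labels)),
                     ('val', (val_images, val_labels))]:
    print(name, x.shape, np.bincount(y))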

Reviewing random image samples from the training set

In [10]:
plt.figure(figsize=(20,20))
for n, i in enumerate(np.random.randint(0, len(train_images), 36)):
    plt.subplot(6, 6, n+1)
    plt.imshow(train_images[i])
    plt.title(class_names[train_labels[i]])
    plt.axis('off')

Defining the model

In [11]:
model = models.Sequential()
model.add(layers.Conv2D(4, (5, 5), activation='relu', input_shape=(120, 120, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(12, (2, 2), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(2))
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 116, 116, 4)       304       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 58, 58, 4)         0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 57, 57, 12)        204       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 28, 28, 12)        0         
_________________________________________________________________
flatten (Flatten)            (None, 9408)              0         
_________________________________________________________________
dense (Dense)                (None, 16)                150544    
_________________________________________________________________
dense_1 (Dense)              (None, 2)                 34        
=================================================================
Total params: 151,086
Trainable params: 151,086
Non-trainable params: 0
_________________________________________________________________
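
The parameter counts in the summary can be verified by hand: the first Conv2D has (5·5·3 + 1)·4 = 304 parameters, the second (2·2·4 + 1)·12 = 204, the hidden Dense (9408 + 1)·16 = 150,544, and the output Dense (16 + 1)·2 = 34, which sums to 151,086.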

Compiling the model

In [12]:
from numpy.random import seed
seed(1)  # seed NumPy's RNG; note the model's weights above were already initialized
In [13]:
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
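
Because the last Dense layer has no activation and the loss is built with from_logits=True, the model outputs raw logits. When probabilities are needed, apply a softmax explicitly; a minimal sketch:

logits = model(val_images[:4])           # raw scores, shape (4, 2)
probs = tf.nn.softmax(logits).numpy()    # each row now sums to 1
print(probs)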

Importing and defining callbacks

In [14]:
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
from tensorflow.keras.callbacks import ReduceLROnPlateau
es = EarlyStopping(monitor='accuracy', mode='max', verbose=1, patience=2)
filepath = "modelPedestrianDetection.h5"
ckpt = ModelCheckpoint(filepath, monitor='accuracy', verbose=1, save_best_only=True, mode='max')
rlp = ReduceLROnPlateau(monitor='accuracy', patience=2, verbose=1)
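
Note that the fit() call below does not actually pass these callbacks, so early stopping never triggers and the checkpoint never writes modelPedestrianDetection.h5; as written, nothing in the notebook saves the .h5 file that the deepCC step later reads. To use the callbacks, they would be supplied through fit()'s callbacks argument, e.g.:

history = model.fit(train_images, train_labels, epochs=60,
                    validation_data=(test_images, test_labels),
                    callbacks=[es, ckpt, rlp])

Alternatively, a model.save(filepath) call after training would produce the same file.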

Training with fit()

In [15]:
history = model.fit(train_images, train_labels, epochs=60,
                    validation_data=(test_images, test_labels))
Epoch 1/60
30/30 [==============================] - 0s 13ms/step - loss: 0.6902 - accuracy: 0.5551 - val_loss: 0.6694 - val_accuracy: 0.6213
Epoch 2/60
30/30 [==============================] - 0s 6ms/step - loss: 0.6085 - accuracy: 0.6960 - val_loss: 0.6508 - val_accuracy: 0.5830
Epoch 3/60
30/30 [==============================] - 0s 6ms/step - loss: 0.4919 - accuracy: 0.7532 - val_loss: 0.6396 - val_accuracy: 0.6426
Epoch 4/60
30/30 [==============================] - 0s 5ms/step - loss: 0.4138 - accuracy: 0.8210 - val_loss: 0.6103 - val_accuracy: 0.6936
Epoch 5/60
30/30 [==============================] - 0s 5ms/step - loss: 0.3903 - accuracy: 0.8326 - val_loss: 0.6046 - val_accuracy: 0.6894
Epoch 6/60
30/30 [==============================] - 0s 5ms/step - loss: 0.3050 - accuracy: 0.8771 - val_loss: 0.6344 - val_accuracy: 0.6851
Epoch 7/60
30/30 [==============================] - 0s 5ms/step - loss: 0.2617 - accuracy: 0.9206 - val_loss: 0.7922 - val_accuracy: 0.6511
Epoch 8/60
30/30 [==============================] - 0s 5ms/step - loss: 0.2592 - accuracy: 0.8962 - val_loss: 0.6521 - val_accuracy: 0.7021
Epoch 9/60
30/30 [==============================] - 0s 5ms/step - loss: 0.1915 - accuracy: 0.9386 - val_loss: 0.6414 - val_accuracy: 0.6979
Epoch 10/60
30/30 [==============================] - 0s 5ms/step - loss: 0.1500 - accuracy: 0.9650 - val_loss: 0.6657 - val_accuracy: 0.7106
Epoch 11/60
30/30 [==============================] - 0s 5ms/step - loss: 0.1317 - accuracy: 0.9672 - val_loss: 0.7093 - val_accuracy: 0.7021
Epoch 12/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0986 - accuracy: 0.9862 - val_loss: 0.7433 - val_accuracy: 0.6979
Epoch 13/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0762 - accuracy: 0.9873 - val_loss: 0.7606 - val_accuracy: 0.7149
Epoch 14/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0607 - accuracy: 0.9947 - val_loss: 0.8400 - val_accuracy: 0.7064
Epoch 15/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0514 - accuracy: 0.9958 - val_loss: 0.8861 - val_accuracy: 0.7149
Epoch 16/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0375 - accuracy: 0.9979 - val_loss: 0.8919 - val_accuracy: 0.7064
Epoch 17/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0334 - accuracy: 0.9979 - val_loss: 1.0091 - val_accuracy: 0.6979
Epoch 18/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0319 - accuracy: 0.9968 - val_loss: 1.0432 - val_accuracy: 0.7064
Epoch 19/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0316 - accuracy: 0.9989 - val_loss: 1.0611 - val_accuracy: 0.6936
Epoch 20/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0215 - accuracy: 0.9989 - val_loss: 1.0666 - val_accuracy: 0.7106
Epoch 21/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0194 - accuracy: 0.9979 - val_loss: 1.1615 - val_accuracy: 0.7021
Epoch 22/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0153 - accuracy: 0.9989 - val_loss: 1.1297 - val_accuracy: 0.7149
Epoch 23/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0182 - accuracy: 0.9979 - val_loss: 1.1316 - val_accuracy: 0.7234
Epoch 24/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0182 - accuracy: 0.9968 - val_loss: 1.1565 - val_accuracy: 0.7277
Epoch 25/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0122 - accuracy: 0.9989 - val_loss: 1.1760 - val_accuracy: 0.7191
Epoch 26/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0114 - accuracy: 0.9989 - val_loss: 1.2380 - val_accuracy: 0.7319
Epoch 27/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0154 - accuracy: 0.9968 - val_loss: 1.2473 - val_accuracy: 0.7106
Epoch 28/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0126 - accuracy: 0.9989 - val_loss: 1.2928 - val_accuracy: 0.6936
Epoch 29/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0350 - accuracy: 0.9915 - val_loss: 1.7309 - val_accuracy: 0.6340
Epoch 30/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0187 - accuracy: 0.9958 - val_loss: 1.2740 - val_accuracy: 0.7064
Epoch 31/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0099 - accuracy: 0.9989 - val_loss: 1.2935 - val_accuracy: 0.7064
Epoch 32/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0145 - accuracy: 0.9989 - val_loss: 1.3018 - val_accuracy: 0.6936
Epoch 33/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0108 - accuracy: 0.9989 - val_loss: 1.3903 - val_accuracy: 0.6979
Epoch 34/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0115 - accuracy: 0.9979 - val_loss: 1.3464 - val_accuracy: 0.7149
Epoch 35/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0092 - accuracy: 0.9989 - val_loss: 1.3622 - val_accuracy: 0.7106
Epoch 36/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0071 - accuracy: 0.9989 - val_loss: 1.4198 - val_accuracy: 0.6979
Epoch 37/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0111 - accuracy: 0.9979 - val_loss: 1.3782 - val_accuracy: 0.7021
Epoch 38/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0062 - accuracy: 0.9989 - val_loss: 1.5502 - val_accuracy: 0.6809
Epoch 39/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0095 - accuracy: 0.9989 - val_loss: 1.4938 - val_accuracy: 0.6851
Epoch 40/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0072 - accuracy: 0.9989 - val_loss: 1.4572 - val_accuracy: 0.7106
Epoch 41/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0125 - accuracy: 0.9979 - val_loss: 1.5626 - val_accuracy: 0.6894
Epoch 42/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0074 - accuracy: 0.9989 - val_loss: 1.4687 - val_accuracy: 0.7106
Epoch 43/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0055 - accuracy: 0.9989 - val_loss: 1.5442 - val_accuracy: 0.7021
Epoch 44/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0161 - accuracy: 0.9979 - val_loss: 1.3979 - val_accuracy: 0.7106
Epoch 45/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0039 - accuracy: 0.9979 - val_loss: 1.4937 - val_accuracy: 0.7149
Epoch 46/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0048 - accuracy: 0.9989 - val_loss: 1.5045 - val_accuracy: 0.7149
Epoch 47/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0104 - accuracy: 0.9979 - val_loss: 1.4784 - val_accuracy: 0.7191
Epoch 48/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0060 - accuracy: 0.9989 - val_loss: 1.5140 - val_accuracy: 0.7149
Epoch 49/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0054 - accuracy: 0.9989 - val_loss: 1.7465 - val_accuracy: 0.6468
Epoch 50/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0069 - accuracy: 0.9989 - val_loss: 1.4998 - val_accuracy: 0.7064
Epoch 51/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0060 - accuracy: 0.9989 - val_loss: 1.5125 - val_accuracy: 0.7106
Epoch 52/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0079 - accuracy: 0.9979 - val_loss: 1.9777 - val_accuracy: 0.6383
Epoch 53/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0147 - accuracy: 0.9989 - val_loss: 1.6531 - val_accuracy: 0.6681
Epoch 54/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0040 - accuracy: 0.9989 - val_loss: 1.7072 - val_accuracy: 0.6894
Epoch 55/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0053 - accuracy: 0.9989 - val_loss: 1.5801 - val_accuracy: 0.7064
Epoch 56/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0037 - accuracy: 0.9989 - val_loss: 1.5649 - val_accuracy: 0.7149
Epoch 57/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0081 - accuracy: 0.9979 - val_loss: 2.0391 - val_accuracy: 0.6851
Epoch 58/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0213 - accuracy: 0.9979 - val_loss: 2.4091 - val_accuracy: 0.6553
Epoch 59/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0200 - accuracy: 0.9926 - val_loss: 1.8843 - val_accuracy: 0.6553
Epoch 60/60
30/30 [==============================] - 0s 5ms/step - loss: 0.0108 - accuracy: 0.9989 - val_loss: 1.6161 - val_accuracy: 0.7106

Defining a function to show accuracy and loss graphs

In [16]:
def plot_accuracy_loss(history):
    """
        Plot the accuracy and the loss during the training of the neural network.
    """
    fig = plt.figure(figsize=(10,5))

    # Plot accuracy
    plt.subplot(221)
    plt.plot(history.history['accuracy'],'bo--', label = "acc")
    plt.plot(history.history['val_accuracy'], 'ro--', label = "val_acc")
    plt.title("train_acc vs val_acc")
    plt.ylabel("accuracy")
    plt.xlabel("epochs")
    plt.legend()

    # Plot loss function
    plt.subplot(222)
    plt.plot(history.history['loss'],'bo--', label = "loss")
    plt.plot(history.history['val_loss'], 'ro--', label = "val_loss")
    plt.title("train_loss vs val_loss")
    plt.ylabel("loss")
    plt.xlabel("epochs")

    plt.legend()
    plt.show()
In [17]:
plot_accuracy_loss(history)
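
Both the curves and the log above show a wide train/validation gap: training accuracy approaches 1.0 while validation accuracy plateaus around 0.70-0.73, so the model overfits. A quick way to read off the best validation epoch from the history object:

best = int(np.argmax(history.history['val_accuracy']))
print('best epoch:', best + 1, 'val_accuracy:', history.history['val_accuracy'][best])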

Model Predictions

In [18]:
preds = model.predict(val_images)  # raw logits for each validation image
In [19]:
plt.figure(figsize=(20,20))
for n, i in enumerate(np.random.randint(0, len(val_images), 36)):
    plt.subplot(6, 6, n+1)
    plt.imshow(val_images[i])
    plt.axis('off')
    x = np.argmax(preds[i])  # index of the larger of the 2 class logits
    plt.title(class_names[x])
In [20]:
result = np.argmax(preds, axis=1)  # predicted class index for every validation image

Confusion Matrix

In [21]:
# ravel() order is tn, fp, fn, tp, with 'person-like' (label 1) as the positive class
tn, fp, fn, tp = confusion_matrix(val_labels, result).ravel()
In [22]:
(tn, fp, fn, tp)
Out[22]:
(58, 27, 25, 50)
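
From these counts the usual metrics follow directly ('person-like' is the positive class here); a minimal sketch with the arithmetic for the values above:

accuracy = (tp + tn) / (tp + tn + fp + fn)  # (50 + 58) / 160 = 0.675
precision = tp / (tp + fp)                  # 50 / 77 ≈ 0.649
recall = tp / (tp + fn)                     # 50 / 75 ≈ 0.667
f1 = 2 * precision * recall / (precision + recall)  # ≈ 0.658
print(accuracy, precision, recall, f1)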

Compiling with deepCC

In [ ]:
!deepCC modelPedestrianDetection.h5
[INFO]
Reading [keras model] 'modelPedestrianDetection.h5'
[SUCCESS]
Saved 'modelPedestrianDetection_deepC/modelPedestrianDetection.onnx'
[INFO]
Reading [onnx model] 'modelPedestrianDetection_deepC/modelPedestrianDetection.onnx'
[INFO]
Model info:
  ir_vesion : 5
  doc       : 
[WARNING]
[ONNX]: terminal (input/output) conv2d_input's shape is less than 1. Changing it to 1.
[WARNING]
[ONNX]: terminal (input/output) dense_1's shape is less than 1. Changing it to 1.
WARN (GRAPH): found operator node with the same name (dense_1) as io node.
[INFO]
Running DNNC graph sanity check ...
[SUCCESS]
Passed sanity check.
[INFO]
Writing C++ file 'modelPedestrianDetection_deepC/modelPedestrianDetection.cpp'
[INFO]
deepSea model files are ready in 'modelPedestrianDetection_deepC/' 
[RUNNING COMMAND]
g++ -std=c++11 -O3 -fno-rtti -fno-exceptions -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -isystem /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 "modelPedestrianDetection_deepC/modelPedestrianDetection.cpp" -D_AITS_MAIN -o "modelPedestrianDetection_deepC/modelPedestrianDetection.exe"