
Pedestrian Detection using a CNN

Credit: AITS Cainvas Community

Photo by Antonius Setiadi K on Dribbble

Downloading dataset

In [1]:
!wget -N "https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/PedestrianDataset.zip"
--2021-08-01 09:49:36--  https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/PedestrianDataset.zip
Resolving cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)... 52.219.62.48
Connecting to cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)|52.219.62.48|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 13919800 (13M) [application/x-zip-compressed]
Saving to: ‘PedestrianDataset.zip’

PedestrianDataset.z 100%[===================>]  13.27M  --.-KB/s    in 0.1s    

2021-08-01 09:49:36 (125 MB/s) - ‘PedestrianDataset.zip’ saved [13919800/13919800]

Extracting the dataset and removing the no-longer-needed zip file

In [2]:
!unzip -qo "PedestrianDataset.zip"
!rm "PedestrianDataset.zip"

Importing relevant libraries

In [3]:
import numpy as np 
import pandas as pd 

import cv2
import os
from xml.etree import ElementTree
from matplotlib import pyplot as plt
In [4]:
import tensorflow as tf
from sklearn.metrics import confusion_matrix
from tensorflow.keras import datasets, layers, models
keras = tf.keras
In [5]:
class_names = ['person', 'person-like']
class_names_label = {class_name: i for i, class_name in enumerate(class_names)}

n_classes = 2
size = (120, 120)  # target (width, height) for cv2.resize
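To make the label encoding concrete: the dict comprehension maps each class name to its index, and indexing back into `class_names` inverts the mapping. A minimal sketch:

```python
class_names = ['person', 'person-like']
# Map each class name to an integer label: {'person': 0, 'person-like': 1}
class_names_label = {name: i for i, name in enumerate(class_names)}
print(class_names_label)

# The list itself is the inverse mapping, label -> name
label = class_names_label['person-like']
print(class_names[label])  # person-like
```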

Defining a function to load the dataset

In [6]:
def load_data():
    datasets = ['Pedestrian_Detection/Train/Train',
                'Pedestrian_Detection/Test/Test',
                'Pedestrian_Detection/Val/Val']
    output = []

    for dataset in datasets:
        images = []
        labels = []
        annotation_dir = dataset + "/Annotations"
        image_dir = dataset + "/JPEGImages/"
        annotation_files = sorted(os.listdir(annotation_dir))
        image_files = sorted(os.listdir(image_dir))

        # Annotations and images are paired by their sorted filenames
        for xml_name, img_name in zip(annotation_files, image_files):
            dom = ElementTree.parse(os.path.join(annotation_dir, xml_name))
            # Use the first annotated object's class as the image-level label
            label = dom.findall('object')[0].find('name').text
            labels.append(class_names_label[label])

            curr_img = cv2.imread(image_dir + img_name)
            curr_img = cv2.resize(curr_img, size)
            images.append(curr_img)

        # Scale pixel values to [0, 1]
        images = np.array(images, dtype='float32') / 255
        labels = np.array(labels, dtype='int32')

        output.append((images, labels))
    return output
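The per-file labelling step can be seen in isolation. This is a hedged sketch using a synthetic, minimal Pascal VOC-style annotation string (the real files under `Annotations/` contain more fields, such as bounding boxes):

```python
from xml.etree import ElementTree

# Synthetic stand-in for one annotation file under Annotations/
xml_text = """
<annotation>
  <filename>person_001.jpg</filename>
  <object><name>person</name></object>
  <object><name>person-like</name></object>
</annotation>
"""

root = ElementTree.fromstring(xml_text)
objects = root.findall('object')
# load_data() keeps only the first object's class per image,
# even when several objects are annotated
first_label = objects[0].find('name').text
print(first_label)  # person
```

Note the simplification this implies: an image with multiple annotated objects contributes a single label, taken from whichever object appears first in the XML.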
In [7]:
# Remove stray .ipynb_checkpoints folders (they would break the sorted
# annotation/image pairing above); uncomment if needed.
#import shutil
#shutil.rmtree("Pedestrian_Detection/Test/Test/Annotations/.ipynb_checkpoints")
#shutil.rmtree("Pedestrian_Detection/Val/Val/Annotations/.ipynb_checkpoints")
#shutil.rmtree("Pedestrian_Detection/Train/Train/Annotations/.ipynb_checkpoints")
In [8]:
(train_images, train_labels),(test_images, test_labels),(val_images, val_labels) = load_data()

Checking shapes

In [9]:
train_images.shape
Out[9]:
(944, 120, 120, 3)
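Beyond shapes, a quick class-balance check is often worthwhile before training. A sketch using `np.bincount` on a hypothetical label array standing in for `train_labels` (the real array comes from `load_data()`):

```python
import numpy as np

# Hypothetical stand-in for train_labels
labels = np.array([0, 1, 0, 0, 1], dtype='int32')

# Count occurrences of each integer label (minlength covers absent classes)
counts = np.bincount(labels, minlength=2)
print(dict(zip(['person', 'person-like'], counts.tolist())))
# {'person': 3, 'person-like': 2}
```

A strongly imbalanced count would suggest using class weights or resampling before fitting the CNN.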

Reviewing a grid of random samples from the training set

In [10]:
plt.figure(figsize=(20, 20))
for n, i in enumerate(np.random.randint(0, len(train_images), 36)):
    plt.subplot(6, 6, n + 1)
    plt.imshow(train_images[i][..., ::-1])  # cv2 loads BGR; flip to RGB for display
    plt.title(class_names[train_labels[i]])
    plt.axis('off')