
Fall Detection

Credit: AITS Cainvas Community

Photo by Julien Laureau on Dribbble

Importing necessary libraries

In [1]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
import cv2
import os

Reading images and labels for training

In [2]:
!wget -N "https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/dataset_ztOhYU1.zip"
!unzip -qo dataset_ztOhYU1.zip
!rm dataset_ztOhYU1.zip
--2021-08-27 03:36:14--  https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/dataset_ztOhYU1.zip
Resolving cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)... 52.219.160.27
Connecting to cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)|52.219.160.27|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1270558 (1.2M) [application/x-zip-compressed]
Saving to: ‘dataset_ztOhYU1.zip’

dataset_ztOhYU1.zip 100%[===================>]   1.21M  --.-KB/s    in 0.006s  

2021-08-27 03:36:14 (208 MB/s) - ‘dataset_ztOhYU1.zip’ saved [1270558/1270558]


Reading train labels

In [3]:
df_train = pd.read_csv('dataset/train_labels.csv', index_col='images')
df_train
Out[3]:
                          labels
images
fall-02-cam1-rgb-001.jpg       0
fall-02-cam1-rgb-002.jpg       0
fall-02-cam1-rgb-003.jpg       0
fall-02-cam1-rgb-004.jpg       0
fall-02-cam1-rgb-005.jpg       0
...                          ...
fall-11-cam1-rgb-126.jpg       1
fall-11-cam1-rgb-127.jpg       1
fall-11-cam1-rgb-128.jpg       1
fall-11-cam1-rgb-129.jpg       1
fall-11-cam1-rgb-130.jpg       1

240 rows × 1 columns
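
Before training, it helps to check how balanced the two classes are. A minimal sketch using pandas' value_counts (an editorial addition, not part of the original notebook):

# Sanity check (illustrative): count NOT FALL (0) vs FALL (1) frames
print(df_train['labels'].value_counts())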

Reading test labels

In [4]:
test_df = pd.read_csv('dataset/test_labels.csv', index_col='images')
test_df
Out[4]:
                          labels
images
fall-03-cam1-rgb-077.jpg       0
fall-03-cam1-rgb-093.jpg       0
fall-03-cam1-rgb-178.jpg       1
fall-03-cam1-rgb-196.jpg       1
fall-04-cam1-rgb-005.jpg       0
fall-04-cam1-rgb-042.jpg       1
fall-04-cam1-rgb-057.jpg       1
fall-17-cam1-rgb-068.jpg       1
fall-17-cam1-rgb-094.jpg       1
fall-21-cam1-rgb-051.jpg       1
fall-24-cam1-rgb-001.jpg       0
fall-24-cam1-rgb-060.jpg       1
In [5]:
# reading train and test images from the folder and stacking them while keeping track of the corresponding labels
dataset_folder = 'dataset'
train_images = []
train_labels = []
test_images = []
test_labels = []

for folder in os.listdir(dataset_folder):
    folder_path = os.path.join(dataset_folder, folder)
    if folder == 'train_images':
        for file in os.listdir(folder_path):
            if file.endswith('jpg'):
                img_path = os.path.join(folder_path, file)
                img = cv2.imread(img_path)
                train_images.append(img)
                train_labels.append(df_train.loc[file, 'labels'])
    
    elif folder == 'test_images':
        for file in os.listdir(folder_path):
            if file.endswith('jpg'):
                img_path = os.path.join(folder_path, file)
                img = cv2.imread(img_path)
                test_images.append(img)
                test_labels.append(test_df.loc[file, 'labels'])
            
train_images = np.array(train_images)
train_labels = np.array(train_labels)
test_images = np.array(test_images)
test_labels = np.array(test_labels)
print('Shape of stacked train images:', train_images.shape)
print('Shape of train labels:', train_labels.shape)
print('Shape of stacked test images:', test_images.shape)
print('Shape of test labels:', test_labels.shape)
Shape of stacked train images: (240, 96, 96, 3)
Shape of train labels: (240,)
Shape of stacked test images: (12, 96, 96, 3)
Shape of test labels: (12,)
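
One caveat: cv2.imread returns images in BGR channel order, while matplotlib's imshow expects RGB, so the plots below may show shifted colors. A minimal sketch of the conversion, shown only as an illustration (it is not applied in the original pipeline):

# Illustrative fix (not applied here): convert OpenCV's BGR order to RGB before plotting
img_rgb = cv2.cvtColor(train_images[0], cv2.COLOR_BGR2RGB)
plt.imshow(img_rgb)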

Visualizing a few images together with their labels to get an idea of our data

In [6]:
# Function to convert a binary label into text
def get_label(num):
    if num == 0:
        return 'NOT FALL'
    elif num == 1:
        return 'FALL'
    else:
        return -1  # unexpected label value
In [7]:
fig, axes = plt.subplots(1, 2, figsize=(10, 8), squeeze=False)
axes[0][0].imshow(train_images[2])
axes[0][0].set_title(get_label(train_labels[2]))

axes[0][1].imshow(train_images[3])
axes[0][1].set_title(get_label(train_labels[3]));

Splitting our data into train and validation sets, then building and training our model

In [8]:
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(train_images, train_labels, stratify=train_labels, test_size=0.2)
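Since the split is stratified on the labels, both subsets should preserve the class ratio of the full training set. A quick check (illustrative, not in the original notebook):

# Illustrative check: class counts should have similar ratios in both splits
print('train:', np.bincount(y_train))
print('val:  ', np.bincount(y_val))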
In [9]:
def conv2d(filters: int, name):
    # 3x3 same-padding convolution; the l2(0.) regularizers are effectively no-ops
    return Conv2D(filters=filters, kernel_size=(3, 3), padding='same',
                  kernel_regularizer=l2(0.), bias_regularizer=l2(0.), name=name)
In [10]:
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense, Conv2D, MaxPooling2D, Flatten, ReLU
from tensorflow.keras.activations import sigmoid
from tensorflow.keras.regularizers import l2

# fallnet architecture
model_input = Input(shape=(X_train.shape[1], X_train.shape[2], X_train.shape[3]), name='inputs')

conv1 = conv2d(16, name='convolution_1')(model_input)
act1 = ReLU(name='activation_1')(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2), name='pooling_1')(act1)

conv2 = conv2d(16, name='convolution_2')(pool1)
act2 = ReLU(name='activation_2')(conv2)
pool2 = MaxPooling2D(pool_size=(2, 2), name='pooling_2')(act2)

conv3 = conv2d(32, name='convolution_3')(pool2)
act3 = ReLU(name='activation_3')(conv3)
pool3 = MaxPooling2D(pool_size=(2, 2), name='pooling_3')(act3)

conv4 = conv2d(32, name='convolution_4')(pool3)
act4 = ReLU(name='activation_4')(conv4)
pool4 = MaxPooling2D(pool_size=(2, 2), name='pooling_4')(act4)

conv5 = conv2d(64, name='convolution_5')(pool4)
act5 = ReLU(name='activation_5')(conv5)
pool5 = MaxPooling2D(pool_size=(2, 2), name='pooling_5')(act5)

conv6 = conv2d(64, name='convolution_6')(pool5)
act6 = ReLU(name='activation_6')(conv6)
pool6 = MaxPooling2D(pool_size=(2, 2), name='pooling_6')(act6)

flat = Flatten(name='flatten')(pool6)
dense1 = Dense(32, name='dense1')(flat)
output = Dense(1, activation='sigmoid', name='output')(dense1)

model = Model(inputs=[model_input], outputs=[output])
model.summary()
Model: "functional_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
inputs (InputLayer)          [(None, 96, 96, 3)]       0         
_________________________________________________________________
convolution_1 (Conv2D)       (None, 96, 96, 16)        448       
_________________________________________________________________
activation_1 (ReLU)          (None, 96, 96, 16)        0         
_________________________________________________________________
pooling_1 (MaxPooling2D)     (None, 48, 48, 16)        0         
_________________________________________________________________
convolution_2 (Conv2D)       (None, 48, 48, 16)        2320      
_________________________________________________________________
activation_2 (ReLU)          (None, 48, 48, 16)        0         
_________________________________________________________________
pooling_2 (MaxPooling2D)     (None, 24, 24, 16)        0         
_________________________________________________________________
convolution_3 (Conv2D)       (None, 24, 24, 32)        4640      
_________________________________________________________________
activation_3 (ReLU)          (None, 24, 24, 32)        0         
_________________________________________________________________
pooling_3 (MaxPooling2D)     (None, 12, 12, 32)        0         
_________________________________________________________________
convolution_4 (Conv2D)       (None, 12, 12, 32)        9248      
_________________________________________________________________
activation_4 (ReLU)          (None, 12, 12, 32)        0         
_________________________________________________________________
pooling_4 (MaxPooling2D)     (None, 6, 6, 32)          0         
_________________________________________________________________
convolution_5 (Conv2D)       (None, 6, 6, 64)          18496     
_________________________________________________________________
activation_5 (ReLU)          (None, 6, 6, 64)          0         
_________________________________________________________________
pooling_5 (MaxPooling2D)     (None, 3, 3, 64)          0         
_________________________________________________________________
convolution_6 (Conv2D)       (None, 3, 3, 64)          36928     
_________________________________________________________________
activation_6 (ReLU)          (None, 3, 3, 64)          0         
_________________________________________________________________
pooling_6 (MaxPooling2D)     (None, 1, 1, 64)          0         
_________________________________________________________________
flatten (Flatten)            (None, 64)                0         
_________________________________________________________________
dense1 (Dense)               (None, 32)                2080      
_________________________________________________________________
output (Dense)               (None, 1)                 33        
=================================================================
Total params: 74,193
Trainable params: 74,193
Non-trainable params: 0
_________________________________________________________________
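As a sanity check on the summary, a Conv2D layer with a 3×3 kernel holds kernel_height × kernel_width × in_channels × filters weights plus one bias per filter. A small sketch of this arithmetic (illustrative only):

# Parameter count for a Conv2D layer: kh*kw*in_ch*filters + filters (biases)
def conv_params(kh, kw, in_ch, filters):
    return kh * kw * in_ch * filters + filters

print(conv_params(3, 3, 3, 16))   # 448, matches convolution_1
print(conv_params(3, 3, 16, 16))  # 2320, matches convolution_2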
In [11]:
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.005), loss='binary_crossentropy', metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=6, validation_data=(X_val, y_val))
Epoch 1/6
6/6 [==============================] - 0s 47ms/step - loss: 15.5982 - accuracy: 0.5104 - val_loss: 0.4569 - val_accuracy: 0.9583
Epoch 2/6
6/6 [==============================] - 0s 8ms/step - loss: 0.5757 - accuracy: 0.5833 - val_loss: 0.4258 - val_accuracy: 0.9792
Epoch 3/6
6/6 [==============================] - 0s 8ms/step - loss: 0.3742 - accuracy: 0.8906 - val_loss: 0.2004 - val_accuracy: 0.9792
Epoch 4/6
6/6 [==============================] - 0s 7ms/step - loss: 0.0724 - accuracy: 0.9792 - val_loss: 0.0176 - val_accuracy: 1.0000
Epoch 5/6
6/6 [==============================] - 0s 7ms/step - loss: 0.0520 - accuracy: 0.9844 - val_loss: 0.0289 - val_accuracy: 0.9792
Epoch 6/6
6/6 [==============================] - 0s 7ms/step - loss: 0.1334 - accuracy: 0.9740 - val_loss: 0.0592 - val_accuracy: 0.9792

Accuracy/Loss vs Epochs

In [12]:
# summarize history for accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()

model.save('model.h5')
print('Model saved.')
Model saved.
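
To confirm the saved file is self-contained, it can be reloaded and used for inference. A minimal round-trip sketch (not part of the original notebook):

# Illustrative round-trip check: reload the saved model and compare predictions
reloaded = tf.keras.models.load_model('model.h5')
assert np.allclose(model.predict(test_images), reloaded.predict(test_images))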

Testing our model

In [13]:
predicted_labels = (model.predict(test_images) >= 0.5).astype('int64').flatten()
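The sigmoid output is thresholded at 0.5 to obtain hard labels. A quick accuracy check against the ground truth (an illustrative addition, not in the original notebook):

# Illustrative metric: fraction of the 12 test images predicted correctly
accuracy = (predicted_labels == test_labels).mean()
print(f'Test accuracy: {accuracy:.2f}')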
In [14]:
# visualizing our results
row = 3
col = 4
fig, axes = plt.subplots(row, col, figsize=(16, 14))
c = 0
for i in range(row):
    for j in range(col):
        axes[i][j].imshow(test_images[c])
        axes[i][j].set_title(f'Predicted: {get_label(predicted_labels[c])}', fontsize=14)
        axes[i][j].set_xlabel(f'Actual: {get_label(test_labels[c])}', fontsize=14)
        c += 1

DeepCC

In [15]:
!deepCC model.h5
[INFO]
Reading [keras model] 'model.h5'
[SUCCESS]
Saved 'model_deepC/model.onnx'
[INFO]
Reading [onnx model] 'model_deepC/model.onnx'
[INFO]
Model info:
  ir_vesion : 5
  doc       : 
[WARNING]
[ONNX]: graph-node convolution_1's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: graph-node convolution_2's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: graph-node convolution_3's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: graph-node convolution_4's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: graph-node convolution_5's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: graph-node convolution_6's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: terminal (input/output) inputs's shape is less than 1. Changing it to 1.
[WARNING]
[ONNX]: terminal (input/output) output's shape is less than 1. Changing it to 1.
WARN (GRAPH): found operator node with the same name (output) as io node.
[INFO]
Running DNNC graph sanity check ...
[SUCCESS]
Passed sanity check.
[INFO]
Writing C++ file 'model_deepC/model.cpp'
[INFO]
deepSea model files are ready in 'model_deepC/' 
[RUNNING COMMAND]
g++ -std=c++11 -O3 -fno-rtti -fno-exceptions -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -isystem /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 "model_deepC/model.cpp" -D_AITS_MAIN -o "model_deepC/model.exe"
[RUNNING COMMAND]
size "model_deepC/model.exe"
   text	   data	    bss	    dec	    hex	filename
 483901	   3760	    760	 488421	  773e5	model_deepC/model.exe
[SUCCESS]
Saved model as executable "model_deepC/model.exe"