Cainvas

Fire Detection Using Surveillance Camera on Roads

Credit: AITS Cainvas Community

Photo by Aslan Almukhambetov on Dribbble

Accidents on the road can sometimes lead to a fire that worsens over time. Fires along the road due to other causes are also hazardous to traffic and nearby places. These fires need to be detected and controlled with utmost urgency in order to maintain the safety of everyone in the vicinity.

In [1]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import layers, optimizers, models, preprocessing, losses, callbacks
import os
import random
from PIL import Image
In [2]:
!wget "https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/fire.zip"
 
!unzip -qo fire.zip

!rm fire.zip
--2021-09-07 08:21:35--  https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/fire.zip
Resolving cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)... 52.219.66.20
Connecting to cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)|52.219.66.20|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 102552780 (98M) [application/zip]
Saving to: ‘fire.zip’

fire.zip            100%[===================>]  97.80M  97.6MB/s    in 1.0s    

2021-09-07 08:21:36 (97.6 MB/s) - ‘fire.zip’ saved [102552780/102552780]

The dataset zip file has three folders - Train, Test and Vali (validation). Each of these has two sub-folders - Fire and Non-Fire.

In [3]:
# Loading the dataset

path = 'fire/'
input_shape = (256, 256, 3)    # image_dataset_from_directory resizes images to (256, 256) by default

batch = 64

# The train and test datasets
print("Train dataset")
train_ds = preprocessing.image_dataset_from_directory(path+'Train', batch_size=batch, label_mode='binary')

print("Test dataset")
test_ds = preprocessing.image_dataset_from_directory(path+'Test', batch_size=batch, label_mode='binary')

print("Validation dataset")
val_ds = preprocessing.image_dataset_from_directory(path+'Vali', batch_size=batch, label_mode='binary')
Train dataset
Found 6003 files belonging to 2 classes.
Test dataset
Found 2000 files belonging to 2 classes.
Validation dataset
Found 2000 files belonging to 2 classes.

Let's look at the spread of images across the classes and dataset splits.

In [4]:
# How many samples in each class

for t in ['Train', 'Test', 'Vali']:
    print('\n', t.upper())
    for x in os.listdir(path + t):
        print(x, ' - ', len(os.listdir(path + t + '/' + x)))
 TRAIN
Non-Fire  -  3000
Fire  -  3003

 TEST
Non-Fire  -  1000
Fire  -  1000

 VALI
Non-Fire  -  1000
Fire  -  1000

It is a balanced dataset.

In [5]:
# Looking into the class labels

class_names = train_ds.class_names

print("Train class names: ", train_ds.class_names)
print("Test class names: ", test_ds.class_names)
print("Validation class names: ", val_ds.class_names)
Train class names:  ['Fire', 'Non-Fire']
Test class names:  ['Fire', 'Non-Fire']
Validation class names:  ['Fire', 'Non-Fire']

Visualization

In [6]:
num_samples = 4    # the number of samples to be displayed in each class

for x in class_names:
    plt.figure(figsize=(20, 20))

    filenames = os.listdir(path + 'Train/' + x)

    for i in range(num_samples):
        ax = plt.subplot(1, num_samples, i + 1)
        img = Image.open(path +'Train/' + x + '/' + filenames[i])
        plt.imshow(img)
        plt.title(x)
        plt.axis("off")

Preprocessing

Normalization

In [7]:
# Normalizing the pixel values for faster convergence

normalization_layer = layers.experimental.preprocessing.Rescaling(1./255)

train_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
test_ds = test_ds.map(lambda x, y: (normalization_layer(x), y))
val_ds = val_ds.map(lambda x, y: (normalization_layer(x), y))
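
As an optional addition (not part of the original pipeline), the tf.data datasets can be prefetched so that batch preparation overlaps with training; a minimal sketch, assuming the TF 2.x experimental API used elsewhere in this notebook:

# optional: overlap batch preparation with training
AUTOTUNE = tf.data.experimental.AUTOTUNE
train_ds = train_ds.prefetch(AUTOTUNE)
val_ds = val_ds.prefetch(AUTOTUNE)
test_ds = test_ds.prefetch(AUTOTUNE)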

Model

Transfer learning

Instead of training a convolutional network from scratch, an InceptionV3 model pretrained on ImageNet is used as a frozen feature extractor, with a small classification head trained on top.

In [8]:
base_model = tf.keras.applications.InceptionV3(weights='imagenet', input_shape=input_shape, include_top=False)    # include_top=False drops the ImageNet classification layer
base_model.trainable = False    # freeze the pretrained weights

inputs = tf.keras.Input(shape=input_shape)

x = base_model(inputs, training=False)    # run the base in inference mode (keeps BatchNorm statistics frozen)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(1, activation='sigmoid')(x)    # add our own classification layer

model = tf.keras.Model(inputs, outputs)

cb = [callbacks.EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)]
model.summary()
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/inception_v3/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5
87916544/87910968 [==============================] - 5s 0us/step
Model: "functional_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_2 (InputLayer)         [(None, 256, 256, 3)]     0         
_________________________________________________________________
inception_v3 (Functional)    (None, 6, 6, 2048)        21802784  
_________________________________________________________________
global_average_pooling2d (Gl (None, 2048)              0         
_________________________________________________________________
dense (Dense)                (None, 1)                 2049      
=================================================================
Total params: 21,804,833
Trainable params: 2,049
Non-trainable params: 21,802,784
_________________________________________________________________
The classification head is first trained with a relatively high learning rate; only its 2,049 parameters are trainable, so the frozen base is unaffected.

In [9]:
model.compile(loss='binary_crossentropy', optimizer=optimizers.Adam(0.1), metrics=['accuracy'])

history = model.fit(train_ds, validation_data =  val_ds, epochs=32, callbacks = cb)
Epoch 1/32
94/94 [==============================] - 32s 341ms/step - loss: 1.5361 - accuracy: 0.9062 - val_loss: 0.9432 - val_accuracy: 0.9045
Epoch 2/32
94/94 [==============================] - 30s 316ms/step - loss: 0.2553 - accuracy: 0.9664 - val_loss: 0.7043 - val_accuracy: 0.9100
Epoch 3/32
94/94 [==============================] - 30s 319ms/step - loss: 0.2050 - accuracy: 0.9690 - val_loss: 0.4097 - val_accuracy: 0.9475
Epoch 4/32
94/94 [==============================] - 30s 324ms/step - loss: 0.1338 - accuracy: 0.9763 - val_loss: 0.3496 - val_accuracy: 0.9540
Epoch 5/32
94/94 [==============================] - 31s 325ms/step - loss: 0.1655 - accuracy: 0.9728 - val_loss: 0.7036 - val_accuracy: 0.9335
Epoch 6/32
94/94 [==============================] - 31s 327ms/step - loss: 0.0946 - accuracy: 0.9857 - val_loss: 0.5791 - val_accuracy: 0.9385
Epoch 7/32
94/94 [==============================] - 31s 330ms/step - loss: 0.1067 - accuracy: 0.9813 - val_loss: 0.5479 - val_accuracy: 0.9480
Epoch 8/32
94/94 [==============================] - 31s 331ms/step - loss: 0.0910 - accuracy: 0.9822 - val_loss: 0.4109 - val_accuracy: 0.9550
Epoch 9/32
94/94 [==============================] - 32s 336ms/step - loss: 0.2517 - accuracy: 0.9697 - val_loss: 0.4979 - val_accuracy: 0.9575
Training is then resumed with a lower learning rate to refine the weights of the head.

In [10]:
model.compile(loss='binary_crossentropy', optimizer=optimizers.Adam(0.01), metrics=['accuracy'])

history1 = model.fit(train_ds, validation_data =  val_ds, epochs=32, callbacks = cb)
Epoch 1/32
94/94 [==============================] - 33s 347ms/step - loss: 0.0652 - accuracy: 0.9875 - val_loss: 0.2819 - val_accuracy: 0.9560
Epoch 2/32
94/94 [==============================] - 31s 333ms/step - loss: 0.0432 - accuracy: 0.9900 - val_loss: 0.2855 - val_accuracy: 0.9565
Epoch 3/32
94/94 [==============================] - 31s 334ms/step - loss: 0.0350 - accuracy: 0.9918 - val_loss: 0.2549 - val_accuracy: 0.9580
Epoch 4/32
94/94 [==============================] - 32s 336ms/step - loss: 0.0374 - accuracy: 0.9897 - val_loss: 0.2397 - val_accuracy: 0.9605
Epoch 5/32
94/94 [==============================] - 31s 332ms/step - loss: 0.0182 - accuracy: 0.9948 - val_loss: 0.4500 - val_accuracy: 0.9415
Epoch 6/32
94/94 [==============================] - 31s 334ms/step - loss: 0.0119 - accuracy: 0.9963 - val_loss: 0.2192 - val_accuracy: 0.9655
Epoch 7/32
94/94 [==============================] - 31s 334ms/step - loss: 0.0113 - accuracy: 0.9963 - val_loss: 0.3029 - val_accuracy: 0.9560
Epoch 8/32
94/94 [==============================] - 31s 333ms/step - loss: 0.0056 - accuracy: 0.9988 - val_loss: 0.2283 - val_accuracy: 0.9595
Epoch 9/32
94/94 [==============================] - 31s 335ms/step - loss: 0.0041 - accuracy: 0.9985 - val_loss: 0.4721 - val_accuracy: 0.9365
Epoch 10/32
94/94 [==============================] - 31s 334ms/step - loss: 0.0088 - accuracy: 0.9967 - val_loss: 0.3078 - val_accuracy: 0.9545
Epoch 11/32
94/94 [==============================] - 31s 334ms/step - loss: 0.0182 - accuracy: 0.9922 - val_loss: 0.3579 - val_accuracy: 0.9525
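
A common optional follow-up, not performed in this notebook, is to fine-tune the pretrained base: unfreeze it and train for a few more epochs with a much lower learning rate so that the ImageNet weights are only slightly adjusted. A minimal sketch:

# optional fine-tuning sketch (not run in this notebook)
base_model.trainable = True    # unfreeze the InceptionV3 weights

# recompile with a very low learning rate to avoid destroying the
# pretrained features; base_model was called with training=False,
# so its BatchNorm layers stay in inference mode
model.compile(loss='binary_crossentropy', optimizer=optimizers.Adam(1e-5), metrics=['accuracy'])
model.fit(train_ds, validation_data=val_ds, epochs=5, callbacks=cb)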
In [11]:
model.evaluate(test_ds)
32/32 [==============================] - 7s 228ms/step - loss: 0.4171 - accuracy: 0.9390
Out[11]:
[0.41712912917137146, 0.9390000104904175]
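
Accuracy alone does not show which kind of mistakes the model makes; for fire detection, a missed fire (a false negative) is the costlier error. A minimal sketch for a per-class breakdown, assuming scikit-learn is available in the environment:

from sklearn.metrics import confusion_matrix, classification_report

# collect true labels and thresholded predictions over the whole test set
y_true, y_pred = [], []
for imgs, labels in test_ds.as_numpy_iterator():
    probs = model.predict(imgs)
    y_true.extend(labels.astype('int').flatten())
    y_pred.extend((probs > 0.5).astype('int').flatten())

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred, target_names=class_names))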

Plotting the metrics

In [12]:
def plot(history1, history2, variable1, variable2):
    # combine the metric values from both training runs
    # (concatenation returns new lists, so the original history
    # dictionaries are not mutated)
    var1_history = history1[variable1] + history2[variable1]
    var2_history = history1[variable2] + history2[variable2]

    # plot them against the epoch number
    plt.plot(range(len(var1_history)), var1_history)
    plt.plot(range(len(var2_history)), var2_history)
    plt.legend([variable1, variable2])
    plt.title(variable1)
In [13]:
plot(history.history, history1.history, "accuracy", 'val_accuracy')
In [14]:
plot(history.history, history1.history, "loss", 'val_loss')

Prediction

In [15]:
# pick random test data sample from one batch
x = random.randint(0, batch - 1)

for i in test_ds.as_numpy_iterator():
    img, label = i    
    plt.axis('off')   # remove axes
    plt.imshow(img[x])    # shape from (32, 256, 256, 3) --> (256, 256, 3)
    output = model.predict(np.expand_dims(img[x],0))[0][0]    # getting output; input shape (256, 256, 3) --> (1, 256, 256, 3)
    pred = (output > 0.5).astype('int')
    print("Predicted: ", class_names[pred], '(', output, '-->', pred, ')')    # Picking the label from class_names base don the model output
    print("True: ", class_names[label[x][0].astype('int')])
    break
Predicted:  Fire ( 1.7720127e-07 --> 0 )
True:  Fire
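
To run the model on frames from a camera feed, each image must go through the same resizing and rescaling as the training data. A minimal sketch for classifying a single image file, where 'sample.jpg' is a hypothetical path:

# classify one image file ('sample.jpg' is a hypothetical path)
img = Image.open('sample.jpg').convert('RGB').resize((256, 256))
arr = np.array(img, dtype='float32') / 255.0    # same rescaling as the training pipeline
prob = model.predict(np.expand_dims(arr, 0))[0][0]    # add a batch dimension
print(class_names[int(prob > 0.5)], '(', prob, ')')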

deepC

deepC is the deep learning compiler used on the Cainvas platform to compile trained models for deployment on microcontrollers and other edge devices.

In [17]:
model.save('fire.h5')    # save the trained model in HDF5 format

#!deepCC fire.h5    # uncomment to compile the model with deepC