
Forest Fire Detection App

Credit: AITS Cainvas Community

Photo by Iblowyourdesign on Dribbble

Wildfires are a significant phenomenon on a global scale, responsible for large amounts of economic and environmental damage, and their effects are being exacerbated by climate change.

Detecting fires early and alerting the people in charge is therefore important, so in this notebook we build a fire detection app with deep learning.

In [1]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, callbacks, optimizers
import os
import random
from PIL import Image

Dataset

Dataset curated from Forest_fires_classification.

The following script was used to resize the images to 45x45 and compress the dataset:

import os, cv2, glob
import multiprocessing as mp

def resize(file_path):
  image = cv2.imread(file_path)
  image = cv2.resize(image, (45, 45))
  target_path = os.path.splitext(file_path)[0] + ".jpg"
  cv2.imwrite(target_path, image)
  print("Resized '{}' to '{}'".format(file_path, target_path))

def main():
  image_dir = "forest_fire_dataset"
  file_paths = []
  file_paths += glob.glob(image_dir + "/**/*.png", recursive=True)
  file_paths += glob.glob(image_dir + "/**/*.jpg", recursive=True)
  # One worker per core is enough; oversubscribing the pool does not speed this up
  with mp.Pool(processes=mp.cpu_count()) as pool:
    pool.map(resize, file_paths)

if __name__ == "__main__":
  main()

Downloading the curated Dataset

In [2]:
data_dir = "forest_fire"
if not os.path.exists(data_dir):
    !wget https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/forest_fire.zip -O forest_fire.zip
    !unzip -qo forest_fire.zip  
    !rm forest_fire.zip
In [3]:
tf.random.set_seed(50)

The dataset folder has two sub-folders - fire and no_fire - containing images of the respective classes.

In [4]:
print("Number of samples")
for f in os.listdir(data_dir + '/'):
    if os.path.isdir(data_dir + '/' + f):
        print(f, " : ", len(os.listdir(data_dir + '/' + f +'/')))
Number of samples
no_fire  :  23845
fire  :  30155

The dataset is reasonably balanced across the two classes.
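The classes are close enough in size that no rebalancing is needed here. If the split were more skewed, one common remedy is to pass per-class weights to `model.fit`. A minimal sketch using the counts printed above; the `total / (2 * count)` inverse-frequency heuristic is an assumption for illustration, not something this notebook uses:

```python
# Sample counts printed above; class indices follow the alphabetical
# ordering used by image_dataset_from_directory: 0 = fire, 1 = no_fire
counts = {"fire": 30155, "no_fire": 23845}
total = sum(counts.values())

# Inverse-frequency heuristic: total / (num_classes * class_count)
class_weight = {
    0: total / (2 * counts["fire"]),
    1: total / (2 * counts["no_fire"]),
}
print(class_weight)   # the rarer class (no_fire) gets a weight above 1.0
```

These weights could then be passed as `model.fit(..., class_weight=class_weight)` so that errors on the rarer class contribute more to the loss.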

In [5]:
batch_size = 64
image_size = (45, 45)


print("Training set")
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
  data_dir,
  image_size=image_size,
  validation_split=0.2,
  subset="training",
  seed=113, 
  batch_size=batch_size)

print("Validation set")
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
  data_dir,
  image_size=image_size,
  validation_split=0.2,
  subset="validation",
  seed=113, 
  batch_size=batch_size)
Training set
Found 54000 files belonging to 2 classes.
Using 43200 files for training.
Validation set
Found 54000 files belonging to 2 classes.
Using 10800 files for validation.

Define the class_names for use later.

In [6]:
class_names = train_ds.class_names
print(class_names)
['fire', 'no_fire']
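Note that `image_dataset_from_directory` assigns label indices in alphabetical order of the sub-folder names, which is why `fire` ends up as class 0. With a single sigmoid output, the predicted index is simply `int(output > 0.5)`; a quick sanity check of that mapping:

```python
class_names = sorted(["no_fire", "fire"])   # alphabetical, as Keras does
assert class_names == ["fire", "no_fire"]

# A sigmoid output near 0 means class 0 ("fire"); near 1 means class 1 ("no_fire")
for output, expected in [(0.1, "fire"), (0.9, "no_fire")]:
    assert class_names[int(output > 0.5)] == expected
```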

Visualization

In [7]:
num_samples = 10    # the number of samples to be displayed in each class

for x in class_names:
    plt.figure(figsize=(20, 20))
    filenames = os.listdir(os.path.join(data_dir, x))

    for i in range(num_samples):
        ax = plt.subplot(1, num_samples, i + 1)
        img = Image.open(os.path.join(data_dir, x, filenames[i]))
        plt.imshow(img)
        plt.title(x)
        plt.axis("off")

Preprocessing

Defining the input shape

In [8]:
print("Shape of one training batch")

for image_batch, labels_batch in train_ds:
    input_shape = image_batch[0].shape
    print("Input: ", image_batch.shape)
    print("Labels: ", labels_batch.shape)
    print("Input Shape: ", input_shape)
    break
Shape of one training batch
Input:  (64, 45, 45, 3)
Labels:  (64,)
Input Shape:  (45, 45, 3)
In [9]:
# Cache and pre-fetch images into memory

AUTOTUNE = tf.data.experimental.AUTOTUNE

train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)

Normalizing the pixel values

Pixel values are currently integers in the range 0 to 255. Rescaling them to [0, 1] helps the model converge faster.

In [10]:
# Normalizing the pixel values

normalization_layer = layers.experimental.preprocessing.Rescaling(1./255)

train_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
val_ds = val_ds.map(lambda x, y: (normalization_layer(x), y))

Augmenting dataset

Augmenting images in the train set to increase dataset size

min_sample_count = 60000    # minimum number of samples required after augmentation

data_augmentation = tf.keras.Sequential([
    layers.experimental.preprocessing.RandomFlip("vertical"),    # Flip along vertical axes
    layers.experimental.preprocessing.RandomZoom(0.2),    # Randomly zoom images in dataset
    layers.experimental.preprocessing.RandomRotation(factor=(-0.2, 0.2))    # Randomly rotate images
])

print("Train size (number of batches) before augmentation: ", len(train_ds))
print("Train size (number of samples) before augmentation: ", batch_size*len(train_ds))

i = 0
while True:
    i += 1
    # Apply only to train set
    aug_ds = train_ds.map(lambda x, y: (data_augmentation(x, training=True), y))
    print("Size (number of batches) after", i, "th augmentation: ", len(aug_ds))
    print("Train size (number of samples) after augmentation: ", batch_size*len(train_ds))
    print()

    # Adding to train_ds
    train_ds = train_ds.concatenate(aug_ds)

    if batch_size*len(train_ds) >= min_sample_count:
        break

print("Train size (number of batches) after augmentation: ", len(train_ds))
print("Train size (number of samples) after augmentation: ", batch_size*len(train_ds))

train_ds = train_ds.shuffle(4-1).take(min_sample_count//batch_size)

print()
print("Train size (number of batches) after augmentation: ", len(train_ds))
print("Train size (number of samples) after augmentation: ", batch_size*len(train_ds))
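Each pass of the loop above concatenates one augmented copy onto `train_ds`, so the sample count doubles every iteration; the number of iterations needed is the number of doublings that lifts the base size past `min_sample_count`. A small sketch of that arithmetic, in pure Python and independent of TensorFlow:

```python
def augmentation_rounds(base_samples, min_sample_count):
    """Number of concatenations (doublings) until the target size is reached."""
    rounds, size = 0, base_samples
    while size < min_sample_count:
        size *= 2          # train_ds.concatenate(aug_ds) doubles the sample count
        rounds += 1
    return rounds, size

# With 43200 training samples (675 batches of 64), a single doubling
# already exceeds the 60000-sample target
print(augmentation_rounds(43200, 60000))   # -> (1, 86400)
```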

The model

In [11]:
filepath = 'forest_fire.h5'

model = tf.keras.models.Sequential([
    layers.Conv2D(6, 3, activation='relu', input_shape=input_shape),
    layers.MaxPool2D(pool_size=(2, 2)),
    
    layers.Conv2D(12, 3, activation='relu'),
    layers.MaxPool2D(pool_size=(2, 2)),
    
#     layers.Conv2D(32, 3, activation='relu'),
#     layers.MaxPool2D(pool_size=(2, 2)),

    layers.Conv2D(6, 3, activation='relu'),
    layers.MaxPool2D(pool_size=(2, 2)),

    layers.Flatten(),
    layers.Dense(32, activation='relu'),
    layers.Dense(16, activation='relu'),
    layers.Dense(1, activation='sigmoid')
])

model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 43, 43, 6)         168       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 21, 21, 6)         0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 19, 19, 12)        660       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 9, 9, 12)          0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 7, 7, 6)           654       
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 3, 3, 6)           0         
_________________________________________________________________
flatten (Flatten)            (None, 54)                0         
_________________________________________________________________
dense (Dense)                (None, 32)                1760      
_________________________________________________________________
dense_1 (Dense)              (None, 16)                528       
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 17        
=================================================================
Total params: 3,787
Trainable params: 3,787
Non-trainable params: 0
_________________________________________________________________
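The parameter counts in the summary can be verified by hand: a Conv2D layer has `(kh*kw*in_channels + 1) * filters` parameters and a Dense layer has `(in + 1) * out`, the `+1` being the bias term. A quick check against the table above:

```python
def conv_params(k, cin, cout):
    return (k * k * cin + 1) * cout      # +1 for each filter's bias

def dense_params(nin, nout):
    return (nin + 1) * nout              # +1 for each unit's bias

layer_params = [
    conv_params(3, 3, 6),    # 168
    conv_params(3, 6, 12),   # 660
    conv_params(3, 12, 6),   # 654
    dense_params(54, 32),    # 1760
    dense_params(32, 16),    # 528
    dense_params(16, 1),     # 17
]
print(sum(layer_params))   # -> 3787, matching the summary
```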
In [12]:
cb = [
    callbacks.EarlyStopping(monitor = 'val_loss', patience = 10, restore_best_weights = True),
    callbacks.ModelCheckpoint(filepath, monitor = "val_loss", save_best_only = True)
    ]

model.compile(loss=tf.keras.losses.BinaryCrossentropy(), 
              optimizer=optimizers.Adam(0.0001), 
              metrics=['accuracy'])

history = model.fit(train_ds, validation_data =  val_ds, epochs=200, callbacks = cb)
Epoch 1/200
675/675 [==============================] - 4s 5ms/step - loss: 0.5335 - accuracy: 0.7256 - val_loss: 0.3586 - val_accuracy: 0.8479
Epoch 2/200
675/675 [==============================] - 2s 3ms/step - loss: 0.3287 - accuracy: 0.8699 - val_loss: 0.3106 - val_accuracy: 0.8819
Epoch 3/200
675/675 [==============================] - 2s 3ms/step - loss: 0.2916 - accuracy: 0.8873 - val_loss: 0.2887 - val_accuracy: 0.8775
Epoch 4/200
675/675 [==============================] - 2s 3ms/step - loss: 0.2653 - accuracy: 0.8951 - val_loss: 0.2750 - val_accuracy: 0.8856
Epoch 5/200
675/675 [==============================] - 2s 3ms/step - loss: 0.2438 - accuracy: 0.9030 - val_loss: 0.2369 - val_accuracy: 0.9028
Epoch 6/200
675/675 [==============================] - 2s 3ms/step - loss: 0.2252 - accuracy: 0.9089 - val_loss: 0.2191 - val_accuracy: 0.9083
Epoch 7/200
675/675 [==============================] - 2s 3ms/step - loss: 0.2084 - accuracy: 0.9161 - val_loss: 0.2062 - val_accuracy: 0.9211
Epoch 8/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1963 - accuracy: 0.9203 - val_loss: 0.1930 - val_accuracy: 0.9252
Epoch 9/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1844 - accuracy: 0.9271 - val_loss: 0.1918 - val_accuracy: 0.9210
Epoch 10/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1747 - accuracy: 0.9314 - val_loss: 0.1743 - val_accuracy: 0.9304
Epoch 11/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1664 - accuracy: 0.9353 - val_loss: 0.1825 - val_accuracy: 0.9256
Epoch 12/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1588 - accuracy: 0.9393 - val_loss: 0.1671 - val_accuracy: 0.9407
Epoch 13/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1523 - accuracy: 0.9426 - val_loss: 0.1565 - val_accuracy: 0.9438
Epoch 14/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1469 - accuracy: 0.9447 - val_loss: 0.1495 - val_accuracy: 0.9428
Epoch 15/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1415 - accuracy: 0.9476 - val_loss: 0.1446 - val_accuracy: 0.9452
Epoch 16/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1363 - accuracy: 0.9500 - val_loss: 0.1412 - val_accuracy: 0.9473
Epoch 17/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1329 - accuracy: 0.9507 - val_loss: 0.1494 - val_accuracy: 0.9503
Epoch 18/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1278 - accuracy: 0.9529 - val_loss: 0.1326 - val_accuracy: 0.9515
Epoch 19/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1242 - accuracy: 0.9545 - val_loss: 0.1329 - val_accuracy: 0.9478
Epoch 20/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1211 - accuracy: 0.9559 - val_loss: 0.1268 - val_accuracy: 0.9526
Epoch 21/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1164 - accuracy: 0.9566 - val_loss: 0.1496 - val_accuracy: 0.9508
Epoch 22/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1138 - accuracy: 0.9571 - val_loss: 0.1212 - val_accuracy: 0.9530
Epoch 23/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1104 - accuracy: 0.9589 - val_loss: 0.1150 - val_accuracy: 0.9583
Epoch 24/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1067 - accuracy: 0.9599 - val_loss: 0.1127 - val_accuracy: 0.9593
Epoch 25/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1033 - accuracy: 0.9605 - val_loss: 0.1104 - val_accuracy: 0.9599
Epoch 26/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1010 - accuracy: 0.9624 - val_loss: 0.1059 - val_accuracy: 0.9613
Epoch 27/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0982 - accuracy: 0.9638 - val_loss: 0.1124 - val_accuracy: 0.9609
Epoch 28/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0950 - accuracy: 0.9645 - val_loss: 0.1057 - val_accuracy: 0.9606
Epoch 29/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0945 - accuracy: 0.9640 - val_loss: 0.1010 - val_accuracy: 0.9633
Epoch 30/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0924 - accuracy: 0.9658 - val_loss: 0.1049 - val_accuracy: 0.9600
Epoch 31/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0883 - accuracy: 0.9666 - val_loss: 0.0971 - val_accuracy: 0.9653
Epoch 32/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0872 - accuracy: 0.9677 - val_loss: 0.0942 - val_accuracy: 0.9666
Epoch 33/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0855 - accuracy: 0.9687 - val_loss: 0.0991 - val_accuracy: 0.9665
Epoch 34/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0838 - accuracy: 0.9694 - val_loss: 0.0898 - val_accuracy: 0.9691
Epoch 35/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0824 - accuracy: 0.9697 - val_loss: 0.0926 - val_accuracy: 0.9676
Epoch 36/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0803 - accuracy: 0.9697 - val_loss: 0.0871 - val_accuracy: 0.9702
Epoch 37/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0783 - accuracy: 0.9712 - val_loss: 0.0931 - val_accuracy: 0.9699
Epoch 38/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0776 - accuracy: 0.9716 - val_loss: 0.1029 - val_accuracy: 0.9622
Epoch 39/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0758 - accuracy: 0.9719 - val_loss: 0.0877 - val_accuracy: 0.9712
Epoch 40/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0731 - accuracy: 0.9734 - val_loss: 0.0826 - val_accuracy: 0.9725
Epoch 41/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0735 - accuracy: 0.9731 - val_loss: 0.0829 - val_accuracy: 0.9719
Epoch 42/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0712 - accuracy: 0.9737 - val_loss: 0.0856 - val_accuracy: 0.9705
Epoch 43/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0707 - accuracy: 0.9743 - val_loss: 0.0798 - val_accuracy: 0.9739
Epoch 44/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0685 - accuracy: 0.9752 - val_loss: 0.0781 - val_accuracy: 0.9732
Epoch 45/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0683 - accuracy: 0.9753 - val_loss: 0.0774 - val_accuracy: 0.9746
Epoch 46/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0669 - accuracy: 0.9756 - val_loss: 0.0755 - val_accuracy: 0.9745
Epoch 47/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0649 - accuracy: 0.9768 - val_loss: 0.0750 - val_accuracy: 0.9755
Epoch 48/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0646 - accuracy: 0.9770 - val_loss: 0.0845 - val_accuracy: 0.9719
Epoch 49/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0656 - accuracy: 0.9766 - val_loss: 0.0779 - val_accuracy: 0.9742
Epoch 50/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0627 - accuracy: 0.9778 - val_loss: 0.0736 - val_accuracy: 0.9766
Epoch 51/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0625 - accuracy: 0.9776 - val_loss: 0.0737 - val_accuracy: 0.9748
Epoch 52/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0604 - accuracy: 0.9782 - val_loss: 0.0739 - val_accuracy: 0.9762
Epoch 53/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0596 - accuracy: 0.9789 - val_loss: 0.0720 - val_accuracy: 0.9776
Epoch 54/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0604 - accuracy: 0.9785 - val_loss: 0.0715 - val_accuracy: 0.9769
Epoch 55/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0598 - accuracy: 0.9784 - val_loss: 0.0729 - val_accuracy: 0.9767
Epoch 56/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0591 - accuracy: 0.9791 - val_loss: 0.0712 - val_accuracy: 0.9767
Epoch 57/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0561 - accuracy: 0.9798 - val_loss: 0.0685 - val_accuracy: 0.9781
Epoch 58/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0575 - accuracy: 0.9796 - val_loss: 0.0703 - val_accuracy: 0.9776
Epoch 59/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0568 - accuracy: 0.9792 - val_loss: 0.0742 - val_accuracy: 0.9757
Epoch 60/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0562 - accuracy: 0.9804 - val_loss: 0.0743 - val_accuracy: 0.9759
Epoch 61/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0545 - accuracy: 0.9806 - val_loss: 0.0663 - val_accuracy: 0.9788
Epoch 62/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0542 - accuracy: 0.9809 - val_loss: 0.0670 - val_accuracy: 0.9786
Epoch 63/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0541 - accuracy: 0.9813 - val_loss: 0.0658 - val_accuracy: 0.9792
Epoch 64/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0531 - accuracy: 0.9812 - val_loss: 0.0742 - val_accuracy: 0.9762
Epoch 65/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0521 - accuracy: 0.9817 - val_loss: 0.0643 - val_accuracy: 0.9798
Epoch 66/200
675/675 [==============================] - 2s 4ms/step - loss: 0.0509 - accuracy: 0.9820 - val_loss: 0.0665 - val_accuracy: 0.9787
Epoch 67/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0513 - accuracy: 0.9821 - val_loss: 0.0673 - val_accuracy: 0.9784
Epoch 68/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0495 - accuracy: 0.9828 - val_loss: 0.0693 - val_accuracy: 0.9778
Epoch 69/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0502 - accuracy: 0.9823 - val_loss: 0.0628 - val_accuracy: 0.9797
Epoch 70/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0488 - accuracy: 0.9822 - val_loss: 0.0695 - val_accuracy: 0.9782
Epoch 71/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0483 - accuracy: 0.9829 - val_loss: 0.0665 - val_accuracy: 0.9784
Epoch 72/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0483 - accuracy: 0.9830 - val_loss: 0.0658 - val_accuracy: 0.9788
Epoch 73/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0487 - accuracy: 0.9826 - val_loss: 0.0605 - val_accuracy: 0.9805
Epoch 74/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0467 - accuracy: 0.9838 - val_loss: 0.0672 - val_accuracy: 0.9787
Epoch 75/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0479 - accuracy: 0.9833 - val_loss: 0.0651 - val_accuracy: 0.9793
Epoch 76/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0465 - accuracy: 0.9837 - val_loss: 0.0607 - val_accuracy: 0.9805
Epoch 77/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0457 - accuracy: 0.9842 - val_loss: 0.0615 - val_accuracy: 0.9803
Epoch 78/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0461 - accuracy: 0.9833 - val_loss: 0.0663 - val_accuracy: 0.9792
Epoch 79/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0445 - accuracy: 0.9846 - val_loss: 0.0635 - val_accuracy: 0.9797
Epoch 80/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0445 - accuracy: 0.9841 - val_loss: 0.0792 - val_accuracy: 0.9727
Epoch 81/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0447 - accuracy: 0.9842 - val_loss: 0.0609 - val_accuracy: 0.9804
Epoch 82/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0434 - accuracy: 0.9849 - val_loss: 0.0578 - val_accuracy: 0.9823
Epoch 83/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0425 - accuracy: 0.9853 - val_loss: 0.0659 - val_accuracy: 0.9792
Epoch 84/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0435 - accuracy: 0.9849 - val_loss: 0.0590 - val_accuracy: 0.9813
Epoch 85/200
675/675 [==============================] - 3s 4ms/step - loss: 0.0437 - accuracy: 0.9844 - val_loss: 0.0589 - val_accuracy: 0.9816
Epoch 86/200
675/675 [==============================] - 3s 4ms/step - loss: 0.0415 - accuracy: 0.9856 - val_loss: 0.0589 - val_accuracy: 0.9813
Epoch 87/200
675/675 [==============================] - 4s 6ms/step - loss: 0.0420 - accuracy: 0.9854 - val_loss: 0.0567 - val_accuracy: 0.9831
Epoch 88/200
675/675 [==============================] - 3s 4ms/step - loss: 0.0424 - accuracy: 0.9851 - val_loss: 0.0554 - val_accuracy: 0.9836
Epoch 89/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0411 - accuracy: 0.9854 - val_loss: 0.0574 - val_accuracy: 0.9819
Epoch 90/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0395 - accuracy: 0.9864 - val_loss: 0.0589 - val_accuracy: 0.9819
Epoch 91/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0412 - accuracy: 0.9860 - val_loss: 0.0658 - val_accuracy: 0.9790
Epoch 92/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0386 - accuracy: 0.9868 - val_loss: 0.0580 - val_accuracy: 0.9821
Epoch 93/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0396 - accuracy: 0.9861 - val_loss: 0.0555 - val_accuracy: 0.9829
Epoch 94/200
675/675 [==============================] - 3s 4ms/step - loss: 0.0384 - accuracy: 0.9870 - val_loss: 0.1014 - val_accuracy: 0.9640
Epoch 95/200
675/675 [==============================] - 4s 6ms/step - loss: 0.0386 - accuracy: 0.9862 - val_loss: 0.0554 - val_accuracy: 0.9830
Epoch 96/200
675/675 [==============================] - 4s 6ms/step - loss: 0.0386 - accuracy: 0.9864 - val_loss: 0.0551 - val_accuracy: 0.9832
Epoch 97/200
675/675 [==============================] - 3s 5ms/step - loss: 0.0381 - accuracy: 0.9868 - val_loss: 0.0546 - val_accuracy: 0.9832
Epoch 98/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0370 - accuracy: 0.9869 - val_loss: 0.0548 - val_accuracy: 0.9830
Epoch 99/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0383 - accuracy: 0.9867 - val_loss: 0.0570 - val_accuracy: 0.9822
Epoch 100/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0376 - accuracy: 0.9868 - val_loss: 0.0537 - val_accuracy: 0.9830
Epoch 101/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0361 - accuracy: 0.9874 - val_loss: 0.0538 - val_accuracy: 0.9837
Epoch 102/200
675/675 [==============================] - 3s 4ms/step - loss: 0.0357 - accuracy: 0.9872 - val_loss: 0.0519 - val_accuracy: 0.9837
Epoch 103/200
675/675 [==============================] - 4s 6ms/step - loss: 0.0360 - accuracy: 0.9876 - val_loss: 0.0515 - val_accuracy: 0.9839
Epoch 104/200
675/675 [==============================] - 4s 6ms/step - loss: 0.0357 - accuracy: 0.9874 - val_loss: 0.0547 - val_accuracy: 0.9826
Epoch 105/200
675/675 [==============================] - 4s 6ms/step - loss: 0.0355 - accuracy: 0.9876 - val_loss: 0.0512 - val_accuracy: 0.9836
Epoch 106/200
675/675 [==============================] - 5s 7ms/step - loss: 0.0347 - accuracy: 0.9882 - val_loss: 0.0513 - val_accuracy: 0.9837
Epoch 107/200
675/675 [==============================] - 5s 7ms/step - loss: 0.0341 - accuracy: 0.9882 - val_loss: 0.0523 - val_accuracy: 0.9833
Epoch 108/200
675/675 [==============================] - 3s 5ms/step - loss: 0.0336 - accuracy: 0.9885 - val_loss: 0.0504 - val_accuracy: 0.9837
Epoch 109/200
675/675 [==============================] - 3s 5ms/step - loss: 0.0349 - accuracy: 0.9882 - val_loss: 0.0521 - val_accuracy: 0.9840
Epoch 110/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0335 - accuracy: 0.9889 - val_loss: 0.0515 - val_accuracy: 0.9838
Epoch 111/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0317 - accuracy: 0.9894 - val_loss: 0.0552 - val_accuracy: 0.9820
Epoch 112/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0327 - accuracy: 0.9891 - val_loss: 0.0547 - val_accuracy: 0.9828
Epoch 113/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0330 - accuracy: 0.9883 - val_loss: 0.0526 - val_accuracy: 0.9841
Epoch 114/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0311 - accuracy: 0.9894 - val_loss: 0.0501 - val_accuracy: 0.9846
Epoch 115/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0326 - accuracy: 0.9890 - val_loss: 0.0540 - val_accuracy: 0.9836
Epoch 116/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0317 - accuracy: 0.9894 - val_loss: 0.0530 - val_accuracy: 0.9834
Epoch 117/200
675/675 [==============================] - 4s 5ms/step - loss: 0.0305 - accuracy: 0.9896 - val_loss: 0.0488 - val_accuracy: 0.9845
Epoch 118/200
675/675 [==============================] - 3s 5ms/step - loss: 0.0312 - accuracy: 0.9892 - val_loss: 0.0528 - val_accuracy: 0.9834
Epoch 119/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0318 - accuracy: 0.9895 - val_loss: 0.0492 - val_accuracy: 0.9852
Epoch 120/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0298 - accuracy: 0.9898 - val_loss: 0.0499 - val_accuracy: 0.9853
Epoch 121/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0315 - accuracy: 0.9891 - val_loss: 0.0497 - val_accuracy: 0.9844
Epoch 122/200
675/675 [==============================] - 3s 5ms/step - loss: 0.0303 - accuracy: 0.9899 - val_loss: 0.0474 - val_accuracy: 0.9847
Epoch 123/200
675/675 [==============================] - 4s 5ms/step - loss: 0.0304 - accuracy: 0.9898 - val_loss: 0.0684 - val_accuracy: 0.9780
Epoch 124/200
675/675 [==============================] - 4s 6ms/step - loss: 0.0293 - accuracy: 0.9901 - val_loss: 0.0471 - val_accuracy: 0.9850
Epoch 125/200
675/675 [==============================] - 4s 5ms/step - loss: 0.0291 - accuracy: 0.9905 - val_loss: 0.0522 - val_accuracy: 0.9841
Epoch 126/200
675/675 [==============================] - 3s 5ms/step - loss: 0.0295 - accuracy: 0.9904 - val_loss: 0.0459 - val_accuracy: 0.9857
Epoch 127/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0288 - accuracy: 0.9900 - val_loss: 0.0493 - val_accuracy: 0.9843
Epoch 128/200
675/675 [==============================] - 3s 4ms/step - loss: 0.0277 - accuracy: 0.9908 - val_loss: 0.0529 - val_accuracy: 0.9827
Epoch 129/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0289 - accuracy: 0.9907 - val_loss: 0.0459 - val_accuracy: 0.9853
Epoch 130/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0277 - accuracy: 0.9907 - val_loss: 0.0472 - val_accuracy: 0.9856
Epoch 131/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0273 - accuracy: 0.9905 - val_loss: 0.0458 - val_accuracy: 0.9861
Epoch 132/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0279 - accuracy: 0.9903 - val_loss: 0.0484 - val_accuracy: 0.9849
Epoch 133/200
675/675 [==============================] - 2s 4ms/step - loss: 0.0271 - accuracy: 0.9907 - val_loss: 0.0452 - val_accuracy: 0.9856
Epoch 134/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0278 - accuracy: 0.9903 - val_loss: 0.0484 - val_accuracy: 0.9844
Epoch 135/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0265 - accuracy: 0.9911 - val_loss: 0.0484 - val_accuracy: 0.9857
Epoch 136/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0260 - accuracy: 0.9912 - val_loss: 0.0447 - val_accuracy: 0.9869
Epoch 137/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0265 - accuracy: 0.9910 - val_loss: 0.0483 - val_accuracy: 0.9846
Epoch 138/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0265 - accuracy: 0.9907 - val_loss: 0.0487 - val_accuracy: 0.9851
Epoch 139/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0260 - accuracy: 0.9911 - val_loss: 0.0619 - val_accuracy: 0.9817
Epoch 140/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0282 - accuracy: 0.9904 - val_loss: 0.0442 - val_accuracy: 0.9869
Epoch 141/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0251 - accuracy: 0.9915 - val_loss: 0.0466 - val_accuracy: 0.9862
Epoch 142/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0257 - accuracy: 0.9912 - val_loss: 0.0550 - val_accuracy: 0.9822
Epoch 143/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0253 - accuracy: 0.9909 - val_loss: 0.0512 - val_accuracy: 0.9844
Epoch 144/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0252 - accuracy: 0.9915 - val_loss: 0.0447 - val_accuracy: 0.9869
Epoch 145/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0244 - accuracy: 0.9918 - val_loss: 0.0438 - val_accuracy: 0.9867
Epoch 146/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0243 - accuracy: 0.9920 - val_loss: 0.0441 - val_accuracy: 0.9873
Epoch 147/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0255 - accuracy: 0.9911 - val_loss: 0.0673 - val_accuracy: 0.9783
Epoch 148/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0230 - accuracy: 0.9922 - val_loss: 0.0445 - val_accuracy: 0.9864
Epoch 149/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0255 - accuracy: 0.9910 - val_loss: 0.0498 - val_accuracy: 0.9840
Epoch 150/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0231 - accuracy: 0.9920 - val_loss: 0.0433 - val_accuracy: 0.9876
Epoch 151/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0235 - accuracy: 0.9922 - val_loss: 0.0443 - val_accuracy: 0.9878
Epoch 152/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0235 - accuracy: 0.9919 - val_loss: 0.0439 - val_accuracy: 0.9874
Epoch 153/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0229 - accuracy: 0.9918 - val_loss: 0.0459 - val_accuracy: 0.9855
Epoch 154/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0235 - accuracy: 0.9919 - val_loss: 0.0444 - val_accuracy: 0.9866
Epoch 155/200
675/675 [==============================] - 2s 4ms/step - loss: 0.0230 - accuracy: 0.9923 - val_loss: 0.0424 - val_accuracy: 0.9873
Epoch 156/200
675/675 [==============================] - 2s 4ms/step - loss: 0.0219 - accuracy: 0.9924 - val_loss: 0.0444 - val_accuracy: 0.9869
Epoch 157/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0223 - accuracy: 0.9929 - val_loss: 0.0436 - val_accuracy: 0.9875
Epoch 158/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0223 - accuracy: 0.9923 - val_loss: 0.0419 - val_accuracy: 0.9880
Epoch 159/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0222 - accuracy: 0.9927 - val_loss: 0.0433 - val_accuracy: 0.9875
Epoch 160/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0215 - accuracy: 0.9926 - val_loss: 0.0430 - val_accuracy: 0.9871
Epoch 161/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0212 - accuracy: 0.9926 - val_loss: 0.0486 - val_accuracy: 0.9858
Epoch 162/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0209 - accuracy: 0.9928 - val_loss: 0.0754 - val_accuracy: 0.9769
Epoch 163/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0209 - accuracy: 0.9930 - val_loss: 0.0440 - val_accuracy: 0.9871
Epoch 164/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0224 - accuracy: 0.9924 - val_loss: 0.0421 - val_accuracy: 0.9879
Epoch 165/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0203 - accuracy: 0.9932 - val_loss: 0.0440 - val_accuracy: 0.9874
Epoch 166/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0204 - accuracy: 0.9931 - val_loss: 0.0421 - val_accuracy: 0.9880
Epoch 167/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0200 - accuracy: 0.9934 - val_loss: 0.0422 - val_accuracy: 0.9878
Epoch 168/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0212 - accuracy: 0.9929 - val_loss: 0.0437 - val_accuracy: 0.9873
In [13]:
model.evaluate(val_ds)
169/169 [==============================] - 0s 2ms/step - loss: 0.0419 - accuracy: 0.9880
Out[13]:
[0.04185109585523605, 0.9879629611968994]
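`model.evaluate` returns the loss followed by each metric given at compile time, so the list above unpacks directly into the two values; a minimal sketch using the numbers printed in the cell output:

```python
# evaluate() returns [loss, metric1, ...] in compile() order;
# the values below echo the Out[13] cell above.
val_loss, val_accuracy = [0.04185109585523605, 0.9879629611968994]
print("val_loss={:.4f}, val_accuracy={:.2%}".format(val_loss, val_accuracy))
```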

Plotting the metrics

In [14]:
def plot(history, variable, variable2):
    # plot a training metric against its validation counterpart
    plt.plot(range(len(history[variable])), history[variable])
    plt.plot(range(len(history[variable2])), history[variable2])
    plt.legend([variable, variable2])
    plt.title(variable)
In [15]:
plot(history.history, "accuracy", 'val_accuracy')
In [16]:
plot(history.history, "loss", "val_loss")

TF Model Prediction

In [17]:
def predict(x):
    for i in val_ds.as_numpy_iterator():
        img, label = i
#         plt.figure()
#         plt.axis('off')   # remove axes
#         plt.imshow(img[x])    # shape from (32, 64, 64, 3) --> (64, 64, 3)

        print("True: ", class_names[label[x]])
        output = model.predict(np.expand_dims(img[x], 0))[0][0]    # add batch dimension: (64, 64, 3) --> (1, 64, 64, 3)
        pred = (output > 0.5).astype('int')
        print("Predicted: ", class_names[pred])    # picking the label from class_names based on the model output

        probability = (output*100) if pred == 1 else (100 - (output*100))
        if class_names[pred] == class_names[label[x]]:
            print("PREDICTION MATCHED | Probability: {:.2f}%".format(probability))
        else:
            print("PREDICTION DID NOT MATCH | Probability: {:.2f}%".format(probability))
        break
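The threshold-and-confidence logic inside `predict()` can be isolated into a small helper. A minimal sketch; the class order `['fire', 'no_fire']` is an assumption (the alphabetical order the dataset sub-folders would give), since `class_names` is defined in an earlier cell not shown here:

```python
# Minimal sketch of the sigmoid thresholding used in predict() above.
# Assumption: class_names = ['fire', 'no_fire'] (alphabetical folder order).
class_names = ['fire', 'no_fire']

def label_and_confidence(output):
    """Map a sigmoid output in [0, 1] to a (label, confidence %) pair."""
    pred = int(output > 0.5)
    # confidence is the probability mass assigned to the predicted class
    probability = output * 100 if pred == 1 else 100 - output * 100
    return class_names[pred], probability

print(label_and_confidence(0.9987))   # confident 'no_fire'
print(label_and_confidence(0.0043))   # confident 'fire'
```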
In [18]:
for i in range(10):
    # pick random test data sample from one batch
    x = random.randint(0, batch_size - 1)
    predict(x)
    print()
True:  fire
Predicted:  fire
PREDICTION MATCHED | Probability: 100.00%

True:  no_fire
Predicted:  no_fire
PREDICTION MATCHED | Probability: 100.00%

True:  no_fire
Predicted:  no_fire
PREDICTION MATCHED | Probability: 99.88%

True:  fire
Predicted:  fire
PREDICTION MATCHED | Probability: 100.00%

True:  fire
Predicted:  fire
PREDICTION MATCHED | Probability: 100.00%

True:  fire
Predicted:  fire
PREDICTION MATCHED | Probability: 100.00%

True:  no_fire
Predicted:  no_fire
PREDICTION MATCHED | Probability: 100.00%

True:  no_fire
Predicted:  no_fire
PREDICTION MATCHED | Probability: 100.00%

True:  fire
Predicted:  fire
PREDICTION MATCHED | Probability: 99.50%

True:  fire
Predicted:  fire
PREDICTION MATCHED | Probability: 100.00%

deepCC for Ubuntu

In [19]:
!deepCC forest_fire.h5 --debug 
[INFO]
Reading [keras model] 'forest_fire.h5'
[SUCCESS]
Saved 'forest_fire_deepC/forest_fire.onnx'
[INFO]
Reading [onnx model] 'forest_fire_deepC/forest_fire.onnx'
[INFO]
Model info:
  ir_vesion : 5
  doc       : 
[WARNING]
[ONNX]: terminal (input/output) conv2d_input's shape is less than 1. Changing it to 1.
[WARNING]
[ONNX]: terminal (input/output) dense_2's shape is less than 1. Changing it to 1.
WARN (GRAPH): found operator node with the same name (dense_2) as io node.
[INFO]
Running DNNC graph sanity check ...
[SUCCESS]
Passed sanity check.
[INFO]
Writing C++ file 'forest_fire_deepC/forest_fire.cpp'
[INFO]
deepSea model files are ready in 'forest_fire_deepC/' 
[RUNNING COMMAND]
g++ -std=c++11 -O3 -fno-rtti -fno-exceptions -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -isystem /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 "forest_fire_deepC/forest_fire.cpp" -D_AITS_MAIN -o "forest_fire_deepC/forest_fire.exe"
[RUNNING COMMAND]
size "forest_fire_deepC/forest_fire.exe"
   text	   data	    bss	    dec	    hex	filename
 189339	   3768	    760	 193867	  2f54b	forest_fire_deepC/forest_fire.exe
[SUCCESS]
Saved model as executable "forest_fire_deepC/forest_fire.exe"
[DEBUG]
Intermediate files won't be removed.

deepSea vs TF Model Prediction

In [20]:
def compare(x):
    for i in val_ds.as_numpy_iterator():
        img, label = i
        print("True: ", class_names[label[x]])
        
        tf_output = model.predict(np.expand_dims(img[x],0))[0][0]    # getting output; input shape (64, 64, 3) --> (1, 64, 64, 3)
        tf_pred = (tf_output > 0.5).astype('int')
        tf_probability = (tf_output*100) if tf_pred==1 else (100 - (tf_output*100))
        print("Predicted [Tensorflow]: {} | Probability: {:.2f}%".format(class_names[tf_pred], tf_probability))
        
        np.savetxt('sample.data', img[x].flatten())
        !forest_fire_deepC/forest_fire.exe sample.data &> /dev/null
        dc_output = np.loadtxt('deepSea_result_1.out')
        dc_pred = (dc_output > 0.5).astype('int')
        dc_probability = (dc_output*100) if dc_pred==1 else (100 - (dc_output*100))
        print("Predicted [deepSea]: {} | Probability: {:.2f}%".format(class_names[dc_pred], dc_probability))

        
        if (tf_pred == dc_pred):
            print("TensorFlow and DeepSea prediction matched.")
        else:
            print("TensorFlow and DeepSea prediction did not match.")
        break
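`compare()` only checks that the two runtimes agree on the hard label; the raw sigmoid outputs can also be compared numerically within a tolerance. A minimal sketch with illustrative stand-in values (not taken from a real run):

```python
import numpy as np

# Stand-ins for model.predict(...) and the deepSea output loaded from
# 'deepSea_result_1.out' (illustrative values, not from a real run).
tf_output = 0.9987
dc_output = 0.99872

labels_match = (tf_output > 0.5) == (dc_output > 0.5)              # same hard label?
outputs_close = bool(np.isclose(tf_output, dc_output, atol=1e-3))  # scores within tolerance?
print(labels_match, outputs_close)
```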
In [21]:
for i in range(10):
    # pick random test data sample from one batch
    x = random.randint(0, batch_size - 1)
    compare(x)
    print()
True:  fire
Predicted [Tensorflow]: fire | Probability: 99.87%
Predicted [deepSea]: fire | Probability: 99.87%
TensorFlow and DeepSea prediction matched.

True:  no_fire
Predicted [Tensorflow]: no_fire | Probability: 100.00%
Predicted [deepSea]: no_fire | Probability: 100.00%
TensorFlow and DeepSea prediction matched.

True:  no_fire
Predicted [Tensorflow]: no_fire | Probability: 99.43%
Predicted [deepSea]: no_fire | Probability: 99.43%
TensorFlow and DeepSea prediction matched.

True:  fire
Predicted [Tensorflow]: fire | Probability: 100.00%
Predicted [deepSea]: fire | Probability: 100.00%
TensorFlow and DeepSea prediction matched.

True:  no_fire
Predicted [Tensorflow]: no_fire | Probability: 99.78%
Predicted [deepSea]: no_fire | Probability: 99.78%
TensorFlow and DeepSea prediction matched.

True:  no_fire
Predicted [Tensorflow]: no_fire | Probability: 100.00%
Predicted [deepSea]: no_fire | Probability: 100.00%
TensorFlow and DeepSea prediction matched.

True:  no_fire
Predicted [Tensorflow]: no_fire | Probability: 100.00%
Predicted [deepSea]: no_fire | Probability: 100.00%
TensorFlow and DeepSea prediction matched.

True:  no_fire
Predicted [Tensorflow]: no_fire | Probability: 100.00%
Predicted [deepSea]: no_fire | Probability: 100.00%
TensorFlow and DeepSea prediction matched.

True:  no_fire
Predicted [Tensorflow]: no_fire | Probability: 100.00%
Predicted [deepSea]: no_fire | Probability: 100.00%
TensorFlow and DeepSea prediction matched.

True:  no_fire
Predicted [Tensorflow]: no_fire | Probability: 100.00%
Predicted [deepSea]: no_fire | Probability: 100.00%
TensorFlow and DeepSea prediction matched.

deepCC for Arduino Nano 33 BLE Sense

In [22]:
!deepCC forest_fire.h5 --board="Arduino Nano 33 BLE Sense" --debug --archive --bundle
[INFO]
Reading [keras model] 'forest_fire.h5'
[SUCCESS]
Saved 'forest_fire_deepC/forest_fire.onnx'
[INFO]
Reading [onnx model] 'forest_fire_deepC/forest_fire.onnx'
[INFO]
Model info:
  ir_vesion : 5
  doc       : 
[WARNING]
[ONNX]: terminal (input/output) conv2d_input's shape is less than 1. Changing it to 1.
[WARNING]
[ONNX]: terminal (input/output) dense_2's shape is less than 1. Changing it to 1.
WARN (GRAPH): found operator node with the same name (dense_2) as io node.
[INFO]
Running DNNC graph sanity check ...
[SUCCESS]
Passed sanity check.
[INFO]
Writing C++ file 'forest_fire_deepC/forest_fire.cpp'
[INFO]
deepSea model files are ready in 'forest_fire_deepC/' 
[RUNNING COMMAND]
arm-none-eabi-g++ -std=c++11 -O3 -mcpu=cortex-m4 -specs=nosys.specs -mthumb -fno-exceptions -fno-rtti -msoft-float -mfloat-abi=softfp -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -I /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 -c "forest_fire_deepC/forest_fire.cpp" -o "forest_fire_deepC/forest_fire.o"
[RUNNING COMMAND]
arm-none-eabi-ar rcs "forest_fire_deepC/lib_forest_fire.a" "forest_fire_deepC/forest_fire.o"
[RUNNING COMMAND]
size "forest_fire_deepC/lib_forest_fire.a"
   text	   data	    bss	    dec	    hex	filename
  82579	      4	     96	  82679	  142f7	forest_fire.o (ex forest_fire_deepC/lib_forest_fire.a)
[SUCCESS]
Saved model as archive "forest_fire_deepC/lib_forest_fire.a"
[DEBUG]
Intermediate files won't be removed.
[BUNDLE]
Bundle "forest_fire_deepC/forest_fire.zip" generated.