
Forest Fire Detection App

Credit: AITS Cainvas Community

Photo by Iblowyourdesign on Dribbble

Wildfires cause large amounts of economic and environmental damage on a global scale, and these effects are being exacerbated by climate change.

Detecting fire early and warning the people in charge is therefore critical. In this notebook we build a fire detection app with deep learning.

In [1]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, callbacks, optimizers
import os
import random
from PIL import Image

Dataset

Dataset curated from Forest_fires_classification.

The following script was used to resize the images to 45x45 and compress the dataset:

import os, cv2, glob
import multiprocessing as mp

def resize(file_path):
    # Resize the image to 45x45 and re-save it as a JPEG
    image = cv2.imread(file_path)
    image = cv2.resize(image, (45, 45))
    target_path = os.path.splitext(file_path)[0] + ".jpg"
    cv2.imwrite(target_path, image)
    print("Resized '{}' to '{}'".format(file_path, target_path))

def main():
    image_dir = "forest_fire_dataset"
    file_paths = []
    file_paths += glob.glob(image_dir + "/**/*.png", recursive=True)
    file_paths += glob.glob(image_dir + "/**/*.jpg", recursive=True)
    # One worker per CPU core is enough; oversubscribing (e.g. cpu_count()*50)
    # only adds scheduling overhead for this CPU-bound task
    with mp.Pool(processes=mp.cpu_count()) as pool:
        pool.map(resize, file_paths)

if __name__ == "__main__":
    main()

Downloading the curated Dataset

In [2]:
data_dir = "forest_fire"
if not os.path.exists(data_dir):
    !wget -q https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/forest_fire.gif -O forest_fire.zip
    !unzip -qo forest_fire.zip  
    !rm forest_fire.zip
In [3]:
tf.random.set_seed(50)

The dataset folder has two sub-folders - fire and no_fire - containing images of the respective classes.

In [4]:
print("Number of samples")
for f in os.listdir(data_dir + '/'):
    if os.path.isdir(data_dir + '/' + f):
        print(f, " : ", len(os.listdir(data_dir + '/' + f +'/')))
Number of samples
no_fire  :  23845
fire  :  30155

It is a balanced dataset.
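As a quick sanity check, the class balance can be quantified from the sample counts printed above; a minority-to-majority ratio near 1.0 means the classes are balanced (a minimal sketch, using only the counts shown in the cell output):

```python
# Sample counts printed in the cell above
counts = {"no_fire": 23845, "fire": 30155}

# Ratio of minority to majority class; 1.0 would be perfectly balanced
balance = min(counts.values()) / max(counts.values())
print(f"minority/majority ratio: {balance:.2f}")  # ~0.79, reasonably balanced
```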

In [5]:
batch_size = 64
image_size = (45, 45)


print("Training set")
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
  data_dir,
  image_size=image_size,
  validation_split=0.2,
  subset="training",
  seed=113, 
  batch_size=batch_size)

print("Validation set")
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
  data_dir,
  image_size=image_size,
  validation_split=0.2,
  subset="validation",
  seed=113, 
  batch_size=batch_size)
Training set
Found 54000 files belonging to 2 classes.
Using 43200 files for training.
Validation set
Found 54000 files belonging to 2 classes.
Using 10800 files for validation.
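The split sizes reported above follow directly from `validation_split=0.2`, as a quick check confirms:

```python
total = 54000              # files found by image_dataset_from_directory
val = int(total * 0.2)     # 10800 files for validation
train = total - val        # 43200 files for training
print(train, val)  # 43200 10800
```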

Define the class_names for use later.

In [6]:
class_names = train_ds.class_names
print(class_names)
['fire', 'no_fire']

Visualization

In [7]:
num_samples = 10    # the number of samples to be displayed in each class

for x in class_names:
    plt.figure(figsize=(20, 20))
    filenames = os.listdir(os.path.join(data_dir, x))

    for i in range(num_samples):
        ax = plt.subplot(1, num_samples, i + 1)
        img = Image.open(os.path.join(data_dir, x, filenames[i]))
        plt.imshow(img)
        plt.title(x)
        plt.axis("off")

Preprocessing

Defining the input shape

In [8]:
print("Shape of one training batch")

for image_batch, labels_batch in train_ds:
    input_shape = image_batch[0].shape
    print("Input: ", image_batch.shape)
    print("Labels: ", labels_batch.shape)
    print("Input Shape: ", input_shape)
    break
Shape of one training batch
Input:  (64, 45, 45, 3)
Labels:  (64,)
Input Shape:  (45, 45, 3)
In [9]:
# Pre-fetch images into memory

AUTOTUNE = tf.data.experimental.AUTOTUNE

train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)

Normalizing the pixel values

Pixel values are now integers between 0 and 255. Changing them to the range [0, 1] for faster convergence.

In [10]:
# Normalizing the pixel values

normalization_layer = layers.experimental.preprocessing.Rescaling(1./255)

train_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
val_ds = val_ds.map(lambda x, y: (normalization_layer(x), y))
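The rescaling is a simple element-wise multiplication; a minimal NumPy sketch of what the `Rescaling(1./255)` layer does to a batch (the toy array here stands in for a real `(batch, 45, 45, 3)` image tensor):

```python
import numpy as np

# Stand-in for a uint8 image batch with values in [0, 255]
batch = np.array([[0, 128, 255]], dtype=np.uint8)

# Equivalent of layers.experimental.preprocessing.Rescaling(1./255)
scaled = batch.astype(np.float32) / 255.0
print(scaled.min(), scaled.max())  # values now lie in [0.0, 1.0]
```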

Augmenting dataset

Augmenting images in the training set to increase the dataset size:

min_sample_count = 60000    # minimum number of samples required after augmentation

data_augmentation = tf.keras.Sequential([
    layers.experimental.preprocessing.RandomFlip("vertical"),              # Flip along vertical axes
    layers.experimental.preprocessing.RandomZoom(0.2),                     # Randomly zoom images in dataset
    layers.experimental.preprocessing.RandomRotation(factor=(-0.2, 0.2))   # Randomly rotate images
])

print("Train size (number of batches) before augmentation: ", len(train_ds))
print("Train size (number of samples) before augmentation: ", (batch_size*len(train_ds)))

i = 0
while True:
    i += 1
    # Apply only to train set
    aug_ds = train_ds.map(lambda x, y: (data_augmentation(x, training=True), y))
    print("Size (number of batches) after", i, "th augmentation: ", len(aug_ds))
    print("Train size (number of samples) after augmentation: ", (batch_size*len(train_ds)))
    print()
    # Adding to train_ds
    train_ds = train_ds.concatenate(aug_ds)
    if (batch_size*len(train_ds)) >= min_sample_count:
        break

print("Train size (number of batches) after augmentation: ", len(train_ds))
print("Train size (number of samples) after augmentation: ", (batch_size*len(train_ds)))

train_ds = train_ds.shuffle(4-1).take(min_sample_count//batch_size)

print()
print("Train size (number of batches) after augmentation: ", len(train_ds))
print("Train size (number of samples) after augmentation: ", (batch_size*len(train_ds)))

The model

In [11]:
filepath = 'forest_fire.h5'

model = tf.keras.models.Sequential([
    layers.Conv2D(6, 3, activation='relu', input_shape=input_shape),
    layers.MaxPool2D(pool_size=(2, 2)),
    
    layers.Conv2D(12, 3, activation='relu'),
    layers.MaxPool2D(pool_size=(2, 2)),
    
#     layers.Conv2D(32, 3, activation='relu'),
#     layers.MaxPool2D(pool_size=(2, 2)),

    layers.Conv2D(6, 3, activation='relu'),
    layers.MaxPool2D(pool_size=(2, 2)),

    layers.Flatten(),
    layers.Dense(32, activation='relu'),
    layers.Dense(16, activation='relu'),
    layers.Dense(1, activation='sigmoid')
])

model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 43, 43, 6)         168       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 21, 21, 6)         0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 19, 19, 12)        660       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 9, 9, 12)          0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 7, 7, 6)           654       
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 3, 3, 6)           0         
_________________________________________________________________
flatten (Flatten)            (None, 54)                0         
_________________________________________________________________
dense (Dense)                (None, 32)                1760      
_________________________________________________________________
dense_1 (Dense)              (None, 16)                528       
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 17        
=================================================================
Total params: 3,787
Trainable params: 3,787
Non-trainable params: 0
_________________________________________________________________
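The parameter counts in the summary can be checked by hand: a Conv2D layer has (kh*kw*in_channels + 1)*filters weights, and a Dense layer (in_features + 1)*units, where the +1 is the bias term. A short sketch reproducing the totals above:

```python
def conv_params(kh, kw, cin, filters):
    # One bias per filter, hence the +1
    return (kh * kw * cin + 1) * filters

def dense_params(n_in, units):
    return (n_in + 1) * units

total = (conv_params(3, 3, 3, 6)      # conv2d: 168
         + conv_params(3, 3, 6, 12)   # conv2d_1: 660
         + conv_params(3, 3, 12, 6)   # conv2d_2: 654
         + dense_params(54, 32)       # dense: 1760
         + dense_params(32, 16)       # dense_1: 528
         + dense_params(16, 1))       # dense_2: 17
print(total)  # 3787, matching model.summary()
```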
In [12]:
cb = [
    callbacks.EarlyStopping(monitor = 'val_loss', patience = 10, restore_best_weights = True),
    callbacks.ModelCheckpoint(filepath, monitor = "val_loss", save_best_only = True)
    ]

model.compile(loss=tf.keras.losses.BinaryCrossentropy(), 
              optimizer=optimizers.Adam(0.0001), 
              metrics=['accuracy'])

history = model.fit(train_ds, validation_data =  val_ds, epochs=200, callbacks = cb)
Epoch 1/200
675/675 [==============================] - 4s 5ms/step - loss: 0.5334 - accuracy: 0.7256 - val_loss: 0.3587 - val_accuracy: 0.8480
Epoch 2/200
675/675 [==============================] - 2s 3ms/step - loss: 0.3289 - accuracy: 0.8700 - val_loss: 0.3111 - val_accuracy: 0.8830
Epoch 3/200
675/675 [==============================] - 2s 3ms/step - loss: 0.2924 - accuracy: 0.8869 - val_loss: 0.2906 - val_accuracy: 0.8766
Epoch 4/200
675/675 [==============================] - 2s 3ms/step - loss: 0.2670 - accuracy: 0.8947 - val_loss: 0.2764 - val_accuracy: 0.8861
Epoch 5/200
675/675 [==============================] - 2s 3ms/step - loss: 0.2463 - accuracy: 0.9025 - val_loss: 0.2398 - val_accuracy: 0.9023
Epoch 6/200
675/675 [==============================] - 2s 3ms/step - loss: 0.2280 - accuracy: 0.9078 - val_loss: 0.2221 - val_accuracy: 0.9077
Epoch 7/200
675/675 [==============================] - 2s 3ms/step - loss: 0.2111 - accuracy: 0.9148 - val_loss: 0.2082 - val_accuracy: 0.9191
Epoch 8/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1986 - accuracy: 0.9191 - val_loss: 0.1953 - val_accuracy: 0.9244
Epoch 9/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1863 - accuracy: 0.9259 - val_loss: 0.1937 - val_accuracy: 0.9196
Epoch 10/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1762 - accuracy: 0.9305 - val_loss: 0.1757 - val_accuracy: 0.9301
Epoch 11/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1676 - accuracy: 0.9347 - val_loss: 0.1830 - val_accuracy: 0.9250
Epoch 12/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1598 - accuracy: 0.9389 - val_loss: 0.1687 - val_accuracy: 0.9406
Epoch 13/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1531 - accuracy: 0.9422 - val_loss: 0.1575 - val_accuracy: 0.9435
Epoch 14/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1474 - accuracy: 0.9446 - val_loss: 0.1503 - val_accuracy: 0.9428
Epoch 15/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1420 - accuracy: 0.9476 - val_loss: 0.1454 - val_accuracy: 0.9450
Epoch 16/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1367 - accuracy: 0.9499 - val_loss: 0.1418 - val_accuracy: 0.9471
Epoch 17/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1332 - accuracy: 0.9506 - val_loss: 0.1497 - val_accuracy: 0.9506
Epoch 18/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1281 - accuracy: 0.9526 - val_loss: 0.1330 - val_accuracy: 0.9510
Epoch 19/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1245 - accuracy: 0.9543 - val_loss: 0.1326 - val_accuracy: 0.9481
Epoch 20/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1213 - accuracy: 0.9557 - val_loss: 0.1271 - val_accuracy: 0.9521
Epoch 21/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1166 - accuracy: 0.9567 - val_loss: 0.1485 - val_accuracy: 0.9517
Epoch 22/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1140 - accuracy: 0.9572 - val_loss: 0.1213 - val_accuracy: 0.9532
Epoch 23/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1104 - accuracy: 0.9589 - val_loss: 0.1153 - val_accuracy: 0.9574
Epoch 24/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1069 - accuracy: 0.9602 - val_loss: 0.1131 - val_accuracy: 0.9594
Epoch 25/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1035 - accuracy: 0.9604 - val_loss: 0.1111 - val_accuracy: 0.9594
Epoch 26/200
675/675 [==============================] - 2s 3ms/step - loss: 0.1010 - accuracy: 0.9628 - val_loss: 0.1062 - val_accuracy: 0.9608
Epoch 27/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0982 - accuracy: 0.9643 - val_loss: 0.1125 - val_accuracy: 0.9615
Epoch 28/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0950 - accuracy: 0.9649 - val_loss: 0.1068 - val_accuracy: 0.9600
Epoch 29/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0945 - accuracy: 0.9642 - val_loss: 0.1014 - val_accuracy: 0.9623
Epoch 30/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0923 - accuracy: 0.9659 - val_loss: 0.1036 - val_accuracy: 0.9604
Epoch 31/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0880 - accuracy: 0.9675 - val_loss: 0.0976 - val_accuracy: 0.9648
Epoch 32/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0869 - accuracy: 0.9679 - val_loss: 0.0942 - val_accuracy: 0.9673
Epoch 33/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0851 - accuracy: 0.9685 - val_loss: 0.0989 - val_accuracy: 0.9675
Epoch 34/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0834 - accuracy: 0.9696 - val_loss: 0.0899 - val_accuracy: 0.9684
Epoch 35/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0820 - accuracy: 0.9700 - val_loss: 0.0932 - val_accuracy: 0.9669
Epoch 36/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0798 - accuracy: 0.9702 - val_loss: 0.0873 - val_accuracy: 0.9700
Epoch 37/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0779 - accuracy: 0.9716 - val_loss: 0.0934 - val_accuracy: 0.9702
Epoch 38/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0772 - accuracy: 0.9722 - val_loss: 0.1047 - val_accuracy: 0.9610
Epoch 39/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0753 - accuracy: 0.9723 - val_loss: 0.0877 - val_accuracy: 0.9710
Epoch 40/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0726 - accuracy: 0.9738 - val_loss: 0.0826 - val_accuracy: 0.9729
Epoch 41/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0730 - accuracy: 0.9734 - val_loss: 0.0825 - val_accuracy: 0.9719
Epoch 42/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0707 - accuracy: 0.9743 - val_loss: 0.0873 - val_accuracy: 0.9692
Epoch 43/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0704 - accuracy: 0.9748 - val_loss: 0.0803 - val_accuracy: 0.9744
Epoch 44/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0681 - accuracy: 0.9751 - val_loss: 0.0781 - val_accuracy: 0.9736
Epoch 45/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0680 - accuracy: 0.9754 - val_loss: 0.0773 - val_accuracy: 0.9744
Epoch 46/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0666 - accuracy: 0.9757 - val_loss: 0.0756 - val_accuracy: 0.9746
Epoch 47/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0647 - accuracy: 0.9771 - val_loss: 0.0749 - val_accuracy: 0.9765
Epoch 48/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0642 - accuracy: 0.9769 - val_loss: 0.0841 - val_accuracy: 0.9730
Epoch 49/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0653 - accuracy: 0.9768 - val_loss: 0.0768 - val_accuracy: 0.9754
Epoch 50/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0624 - accuracy: 0.9778 - val_loss: 0.0742 - val_accuracy: 0.9770
Epoch 51/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0622 - accuracy: 0.9773 - val_loss: 0.0736 - val_accuracy: 0.9750
Epoch 52/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0602 - accuracy: 0.9781 - val_loss: 0.0740 - val_accuracy: 0.9767
Epoch 53/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0593 - accuracy: 0.9792 - val_loss: 0.0719 - val_accuracy: 0.9781
Epoch 54/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0601 - accuracy: 0.9784 - val_loss: 0.0713 - val_accuracy: 0.9775
Epoch 55/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0596 - accuracy: 0.9785 - val_loss: 0.0725 - val_accuracy: 0.9769
Epoch 56/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0588 - accuracy: 0.9793 - val_loss: 0.0710 - val_accuracy: 0.9767
Epoch 57/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0561 - accuracy: 0.9797 - val_loss: 0.0686 - val_accuracy: 0.9782
Epoch 58/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0573 - accuracy: 0.9796 - val_loss: 0.0699 - val_accuracy: 0.9783
Epoch 59/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0566 - accuracy: 0.9792 - val_loss: 0.0743 - val_accuracy: 0.9752
Epoch 60/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0560 - accuracy: 0.9801 - val_loss: 0.0749 - val_accuracy: 0.9758
Epoch 61/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0543 - accuracy: 0.9805 - val_loss: 0.0663 - val_accuracy: 0.9787
Epoch 62/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0539 - accuracy: 0.9810 - val_loss: 0.0671 - val_accuracy: 0.9789
Epoch 63/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0539 - accuracy: 0.9815 - val_loss: 0.0659 - val_accuracy: 0.9791
Epoch 64/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0527 - accuracy: 0.9811 - val_loss: 0.0750 - val_accuracy: 0.9759
Epoch 65/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0518 - accuracy: 0.9812 - val_loss: 0.0645 - val_accuracy: 0.9796
Epoch 66/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0507 - accuracy: 0.9821 - val_loss: 0.0674 - val_accuracy: 0.9785
Epoch 67/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0513 - accuracy: 0.9820 - val_loss: 0.0677 - val_accuracy: 0.9782
Epoch 68/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0494 - accuracy: 0.9826 - val_loss: 0.0688 - val_accuracy: 0.9781
Epoch 69/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0499 - accuracy: 0.9822 - val_loss: 0.0626 - val_accuracy: 0.9803
Epoch 70/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0486 - accuracy: 0.9826 - val_loss: 0.0692 - val_accuracy: 0.9779
Epoch 71/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0484 - accuracy: 0.9830 - val_loss: 0.0651 - val_accuracy: 0.9795
Epoch 72/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0481 - accuracy: 0.9830 - val_loss: 0.0652 - val_accuracy: 0.9793
Epoch 73/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0486 - accuracy: 0.9825 - val_loss: 0.0607 - val_accuracy: 0.9809
Epoch 74/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0466 - accuracy: 0.9839 - val_loss: 0.0692 - val_accuracy: 0.9780
Epoch 75/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0477 - accuracy: 0.9834 - val_loss: 0.0636 - val_accuracy: 0.9800
Epoch 76/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0464 - accuracy: 0.9836 - val_loss: 0.0610 - val_accuracy: 0.9811
Epoch 77/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0455 - accuracy: 0.9842 - val_loss: 0.0619 - val_accuracy: 0.9808
Epoch 78/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0460 - accuracy: 0.9836 - val_loss: 0.0667 - val_accuracy: 0.9786
Epoch 79/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0443 - accuracy: 0.9847 - val_loss: 0.0619 - val_accuracy: 0.9808
Epoch 80/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0443 - accuracy: 0.9838 - val_loss: 0.0752 - val_accuracy: 0.9753
Epoch 81/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0445 - accuracy: 0.9845 - val_loss: 0.0611 - val_accuracy: 0.9808
Epoch 82/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0432 - accuracy: 0.9851 - val_loss: 0.0577 - val_accuracy: 0.9823
Epoch 83/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0424 - accuracy: 0.9853 - val_loss: 0.0663 - val_accuracy: 0.9785
Epoch 84/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0433 - accuracy: 0.9851 - val_loss: 0.0588 - val_accuracy: 0.9814
Epoch 85/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0438 - accuracy: 0.9845 - val_loss: 0.0575 - val_accuracy: 0.9821
Epoch 86/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0415 - accuracy: 0.9856 - val_loss: 0.0582 - val_accuracy: 0.9817
Epoch 87/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0418 - accuracy: 0.9857 - val_loss: 0.0576 - val_accuracy: 0.9825
Epoch 88/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0425 - accuracy: 0.9849 - val_loss: 0.0559 - val_accuracy: 0.9834
Epoch 89/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0412 - accuracy: 0.9857 - val_loss: 0.0569 - val_accuracy: 0.9821
Epoch 90/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0396 - accuracy: 0.9866 - val_loss: 0.0582 - val_accuracy: 0.9823
Epoch 91/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0414 - accuracy: 0.9859 - val_loss: 0.0683 - val_accuracy: 0.9784
Epoch 92/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0389 - accuracy: 0.9868 - val_loss: 0.0582 - val_accuracy: 0.9814
Epoch 93/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0395 - accuracy: 0.9864 - val_loss: 0.0555 - val_accuracy: 0.9831
Epoch 94/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0385 - accuracy: 0.9868 - val_loss: 0.1046 - val_accuracy: 0.9644
Epoch 95/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0387 - accuracy: 0.9862 - val_loss: 0.0553 - val_accuracy: 0.9825
Epoch 96/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0388 - accuracy: 0.9861 - val_loss: 0.0565 - val_accuracy: 0.9830
Epoch 97/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0384 - accuracy: 0.9866 - val_loss: 0.0543 - val_accuracy: 0.9833
Epoch 98/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0369 - accuracy: 0.9871 - val_loss: 0.0569 - val_accuracy: 0.9819
Epoch 99/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0384 - accuracy: 0.9869 - val_loss: 0.0566 - val_accuracy: 0.9820
Epoch 100/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0377 - accuracy: 0.9869 - val_loss: 0.0546 - val_accuracy: 0.9832
Epoch 101/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0364 - accuracy: 0.9874 - val_loss: 0.0544 - val_accuracy: 0.9837
Epoch 102/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0357 - accuracy: 0.9874 - val_loss: 0.0529 - val_accuracy: 0.9831
Epoch 103/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0362 - accuracy: 0.9875 - val_loss: 0.0520 - val_accuracy: 0.9839
Epoch 104/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0358 - accuracy: 0.9873 - val_loss: 0.0547 - val_accuracy: 0.9832
Epoch 105/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0358 - accuracy: 0.9875 - val_loss: 0.0512 - val_accuracy: 0.9841
Epoch 106/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0348 - accuracy: 0.9886 - val_loss: 0.0517 - val_accuracy: 0.9835
Epoch 107/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0343 - accuracy: 0.9884 - val_loss: 0.0516 - val_accuracy: 0.9833
Epoch 108/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0336 - accuracy: 0.9886 - val_loss: 0.0505 - val_accuracy: 0.9840
Epoch 109/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0350 - accuracy: 0.9878 - val_loss: 0.0537 - val_accuracy: 0.9832
Epoch 110/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0336 - accuracy: 0.9884 - val_loss: 0.0524 - val_accuracy: 0.9833
Epoch 111/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0320 - accuracy: 0.9893 - val_loss: 0.0547 - val_accuracy: 0.9824
Epoch 112/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0328 - accuracy: 0.9889 - val_loss: 0.0525 - val_accuracy: 0.9826
Epoch 113/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0331 - accuracy: 0.9885 - val_loss: 0.0527 - val_accuracy: 0.9837
Epoch 114/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0311 - accuracy: 0.9891 - val_loss: 0.0524 - val_accuracy: 0.9838
Epoch 115/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0327 - accuracy: 0.9891 - val_loss: 0.0521 - val_accuracy: 0.9829
Epoch 116/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0317 - accuracy: 0.9895 - val_loss: 0.0528 - val_accuracy: 0.9831
Epoch 117/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0309 - accuracy: 0.9900 - val_loss: 0.0481 - val_accuracy: 0.9843
Epoch 118/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0315 - accuracy: 0.9892 - val_loss: 0.0507 - val_accuracy: 0.9832
Epoch 119/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0318 - accuracy: 0.9894 - val_loss: 0.0486 - val_accuracy: 0.9849
Epoch 120/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0302 - accuracy: 0.9898 - val_loss: 0.0499 - val_accuracy: 0.9853
Epoch 121/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0317 - accuracy: 0.9890 - val_loss: 0.0486 - val_accuracy: 0.9844
Epoch 122/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0306 - accuracy: 0.9897 - val_loss: 0.0472 - val_accuracy: 0.9851
Epoch 123/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0308 - accuracy: 0.9895 - val_loss: 0.0663 - val_accuracy: 0.9783
Epoch 124/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0297 - accuracy: 0.9898 - val_loss: 0.0467 - val_accuracy: 0.9863
Epoch 125/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0296 - accuracy: 0.9900 - val_loss: 0.0528 - val_accuracy: 0.9834
Epoch 126/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0295 - accuracy: 0.9904 - val_loss: 0.0463 - val_accuracy: 0.9854
Epoch 127/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0290 - accuracy: 0.9903 - val_loss: 0.0491 - val_accuracy: 0.9849
Epoch 128/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0278 - accuracy: 0.9903 - val_loss: 0.0492 - val_accuracy: 0.9838
Epoch 129/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0289 - accuracy: 0.9905 - val_loss: 0.0459 - val_accuracy: 0.9856
Epoch 130/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0279 - accuracy: 0.9908 - val_loss: 0.0469 - val_accuracy: 0.9854
Epoch 131/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0278 - accuracy: 0.9904 - val_loss: 0.0466 - val_accuracy: 0.9856
Epoch 132/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0281 - accuracy: 0.9903 - val_loss: 0.0476 - val_accuracy: 0.9854
Epoch 133/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0271 - accuracy: 0.9909 - val_loss: 0.0450 - val_accuracy: 0.9866
Epoch 134/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0278 - accuracy: 0.9902 - val_loss: 0.0518 - val_accuracy: 0.9836
Epoch 135/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0264 - accuracy: 0.9911 - val_loss: 0.0497 - val_accuracy: 0.9849
Epoch 136/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0260 - accuracy: 0.9912 - val_loss: 0.0445 - val_accuracy: 0.9862
Epoch 137/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0268 - accuracy: 0.9910 - val_loss: 0.0472 - val_accuracy: 0.9849
Epoch 138/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0270 - accuracy: 0.9905 - val_loss: 0.0491 - val_accuracy: 0.9844
Epoch 139/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0266 - accuracy: 0.9905 - val_loss: 0.0644 - val_accuracy: 0.9804
Epoch 140/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0281 - accuracy: 0.9904 - val_loss: 0.0440 - val_accuracy: 0.9867
Epoch 141/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0257 - accuracy: 0.9912 - val_loss: 0.0466 - val_accuracy: 0.9857
Epoch 142/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0254 - accuracy: 0.9913 - val_loss: 0.0546 - val_accuracy: 0.9824
Epoch 143/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0254 - accuracy: 0.9909 - val_loss: 0.0493 - val_accuracy: 0.9850
Epoch 144/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0251 - accuracy: 0.9915 - val_loss: 0.0439 - val_accuracy: 0.9858
Epoch 145/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0247 - accuracy: 0.9920 - val_loss: 0.0438 - val_accuracy: 0.9867
Epoch 146/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0243 - accuracy: 0.9916 - val_loss: 0.0433 - val_accuracy: 0.9872
Epoch 147/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0253 - accuracy: 0.9910 - val_loss: 0.0650 - val_accuracy: 0.9787
Epoch 148/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0231 - accuracy: 0.9921 - val_loss: 0.0435 - val_accuracy: 0.9869
Epoch 149/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0255 - accuracy: 0.9913 - val_loss: 0.0491 - val_accuracy: 0.9845
Epoch 150/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0232 - accuracy: 0.9924 - val_loss: 0.0426 - val_accuracy: 0.9874
Epoch 151/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0242 - accuracy: 0.9920 - val_loss: 0.0439 - val_accuracy: 0.9876
Epoch 152/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0233 - accuracy: 0.9918 - val_loss: 0.0432 - val_accuracy: 0.9869
Epoch 153/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0233 - accuracy: 0.9918 - val_loss: 0.0438 - val_accuracy: 0.9867
Epoch 154/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0232 - accuracy: 0.9919 - val_loss: 0.0424 - val_accuracy: 0.9876
Epoch 155/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0230 - accuracy: 0.9920 - val_loss: 0.0427 - val_accuracy: 0.9874
Epoch 156/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0220 - accuracy: 0.9926 - val_loss: 0.0438 - val_accuracy: 0.9875
Epoch 157/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0227 - accuracy: 0.9927 - val_loss: 0.0427 - val_accuracy: 0.9874
Epoch 158/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0226 - accuracy: 0.9921 - val_loss: 0.0415 - val_accuracy: 0.9875
Epoch 159/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0220 - accuracy: 0.9928 - val_loss: 0.0428 - val_accuracy: 0.9877
Epoch 160/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0221 - accuracy: 0.9923 - val_loss: 0.0427 - val_accuracy: 0.9870
Epoch 161/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0213 - accuracy: 0.9929 - val_loss: 0.0476 - val_accuracy: 0.9862
Epoch 162/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0212 - accuracy: 0.9930 - val_loss: 0.0675 - val_accuracy: 0.9793
Epoch 163/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0213 - accuracy: 0.9929 - val_loss: 0.0440 - val_accuracy: 0.9871
Epoch 164/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0227 - accuracy: 0.9924 - val_loss: 0.0418 - val_accuracy: 0.9877
Epoch 165/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0206 - accuracy: 0.9934 - val_loss: 0.0436 - val_accuracy: 0.9873
Epoch 166/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0209 - accuracy: 0.9928 - val_loss: 0.0410 - val_accuracy: 0.9880
Epoch 167/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0203 - accuracy: 0.9932 - val_loss: 0.0417 - val_accuracy: 0.9866
Epoch 168/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0211 - accuracy: 0.9930 - val_loss: 0.0427 - val_accuracy: 0.9879
Epoch 169/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0207 - accuracy: 0.9930 - val_loss: 0.0456 - val_accuracy: 0.9867
Epoch 170/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0198 - accuracy: 0.9931 - val_loss: 0.0427 - val_accuracy: 0.9874
Epoch 171/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0199 - accuracy: 0.9932 - val_loss: 0.0432 - val_accuracy: 0.9873
Epoch 172/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0199 - accuracy: 0.9934 - val_loss: 0.0473 - val_accuracy: 0.9859
Epoch 173/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0194 - accuracy: 0.9932 - val_loss: 0.0463 - val_accuracy: 0.9851
Epoch 174/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0197 - accuracy: 0.9932 - val_loss: 0.0429 - val_accuracy: 0.9872
Epoch 175/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0205 - accuracy: 0.9931 - val_loss: 0.0424 - val_accuracy: 0.9870
Epoch 176/200
675/675 [==============================] - 2s 3ms/step - loss: 0.0198 - accuracy: 0.9933 - val_loss: 0.0420 - val_accuracy: 0.9876
In [13]:
model.evaluate(val_ds)
169/169 [==============================] - 0s 2ms/step - loss: 0.0410 - accuracy: 0.9880
Out[13]:
[0.0409884974360466, 0.9879629611968994]
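A single validation accuracy can hide per-class behaviour. A hedged sketch of deriving a confusion matrix and per-class recall from thresholded sigmoid outputs — using synthetic scores and labels as stand-ins for what the notebook would gather from `model.predict` over `val_ds` (the `fire`/`no_fire` ordering is assumed to be alphabetical, as Keras infers it from the folder names):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the notebook's labels and sigmoid outputs
# (0 = fire, 1 = no_fire, assuming alphabetical class ordering).
labels = rng.integers(0, 2, size=200)
outputs = np.clip(labels + rng.normal(0, 0.2, 200), 0, 1)  # scores near the true class

preds = (outputs > 0.5).astype(int)   # same 0.5 threshold used in predict() below

# 2x2 confusion matrix: rows = true class, cols = predicted class
cm = np.zeros((2, 2), dtype=int)
for t, p in zip(labels, preds):
    cm[t, p] += 1

accuracy = np.trace(cm) / cm.sum()
recall_per_class = cm.diagonal() / cm.sum(axis=1)
print(cm)
print("accuracy:", accuracy)
print("recall per class:", recall_per_class)
```

For a fire detector, recall on the `fire` class matters more than overall accuracy, since a missed fire is far costlier than a false alarm.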

Plotting the metrics

In [14]:
def plot(history, variable, variable2):
    plt.plot(range(len(history[variable])), history[variable])
    plt.plot(range(len(history[variable2])), history[variable2])
    plt.legend([variable, variable2])
    plt.title(variable)
In [15]:
plot(history.history, "accuracy", 'val_accuracy')
In [16]:
plot(history.history, "loss", "val_loss")

TF Model Prediction

In [17]:
def predict(x):
    for i in val_ds.as_numpy_iterator():
        img, label = i
#         plt.figure()
#         plt.axis('off')   # remove axes
#         plt.imshow(img[x])    # shape from (32, 64, 64, 3) --> (64, 64, 3)

        print("True: ", class_names[label[x]])
        output = model.predict(np.expand_dims(img[x], 0))[0][0]    # add batch dimension: (64, 64, 3) --> (1, 64, 64, 3)
        pred = (output > 0.5).astype('int')
        print("Predicted: ", class_names[pred])    # picking the label from class_names based on the model output

        probability = (output*100) if pred == 1 else (100 - (output*100))
        if class_names[pred] == class_names[label[x]]:
            print("PREDICTION MATCHED | Probability: {:.2f}%".format(probability))
        else:
            print("PREDICTION DID NOT MATCH | Probability: {:.2f}%".format(probability))
        break    # only the first batch is needed
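The sigmoid output is a single score interpreted as P(class 1); `predict()` thresholds it at 0.5 and reports the confidence of whichever side won. A self-contained sketch of that conversion (the `to_label_and_confidence` helper name and the alphabetical class ordering are assumptions, mirroring the expression used above):

```python
# Class ordering as Keras infers it from the dataset folder names (assumed)
class_names = ['fire', 'no_fire']

def to_label_and_confidence(output, threshold=0.5):
    """Map one sigmoid output to (label, confidence %).

    output is P(class 1); the confidence reported for class 0 is
    100 - output*100, mirroring the expression in predict() above.
    """
    pred = int(output > threshold)
    probability = output * 100 if pred == 1 else 100 - output * 100
    return class_names[pred], probability

print(to_label_and_confidence(0.93))   # high score -> class 1 (no_fire)
print(to_label_and_confidence(0.08))   # low score  -> class 0 (fire)
```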
In [18]:
for i in range(10):
    # pick random test data sample from one batch
    x = random.randint(0, batch_size - 1)
    predict(x)
    print()
True:  no_fire
Predicted:  no_fire
PREDICTION MATCHED | Probability: 99.19%

True:  no_fire
Predicted:  no_fire
PREDICTION MATCHED | Probability: 100.00%

True:  fire
Predicted:  fire
PREDICTION MATCHED | Probability: 100.00%

True:  fire
Predicted:  fire
PREDICTION MATCHED | Probability: 100.00%

True:  fire
Predicted:  fire
PREDICTION MATCHED | Probability: 100.00%

True:  no_fire
Predicted:  no_fire
PREDICTION MATCHED | Probability: 100.00%

True:  no_fire
Predicted:  no_fire
PREDICTION MATCHED | Probability: 100.00%

True:  no_fire
Predicted:  no_fire
PREDICTION MATCHED | Probability: 100.00%

True:  fire
Predicted:  fire
PREDICTION MATCHED | Probability: 100.00%

True:  fire
Predicted:  fire
PREDICTION MATCHED | Probability: 100.00%

deepCC for Ubuntu

In [19]:
!deepCC forest_fire.h5 --debug 
[INFO]
Reading [keras model] 'forest_fire.h5'
[SUCCESS]
Saved 'forest_fire_deepC/forest_fire.onnx'
[INFO]
Reading [onnx model] 'forest_fire_deepC/forest_fire.onnx'
[INFO]
Model info:
  ir_vesion : 5
  doc       : 
[WARNING]
[ONNX]: terminal (input/output) conv2d_input's shape is less than 1. Changing it to 1.
[WARNING]
[ONNX]: terminal (input/output) dense_2's shape is less than 1. Changing it to 1.
WARN (GRAPH): found operator node with the same name (dense_2) as io node.
[INFO]
Running DNNC graph sanity check ...
[SUCCESS]
Passed sanity check.
[INFO]
Writing C++ file 'forest_fire_deepC/forest_fire.cpp'
[INFO]
deepSea model files are ready in 'forest_fire_deepC/' 
[RUNNING COMMAND]
g++ -std=c++11 -O3 -fno-rtti -fno-exceptions -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -isystem /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 "forest_fire_deepC/forest_fire.cpp" -D_AITS_MAIN -o "forest_fire_deepC/forest_fire.exe"
[RUNNING COMMAND]
size "forest_fire_deepC/forest_fire.exe"
   text	   data	    bss	    dec	    hex	filename
 189475	   3776	    760	 194011	  2f5db	forest_fire_deepC/forest_fire.exe
[SUCCESS]
Saved model as executable "forest_fire_deepC/forest_fire.exe"
[DEBUG]
Intermediate files won't be removed.

deepSea vs TF Model Prediction

In [20]:
def compare(x):
    for i in val_ds.as_numpy_iterator():
        img, label = i
        print("True: ", class_names[label[x]])
        
        tf_output = model.predict(np.expand_dims(img[x],0))[0][0]    # getting output; input shape (64, 64, 3) --> (1, 64, 64, 3)
        tf_pred = (tf_output > 0.5).astype('int')
        tf_probability = (tf_output*100) if tf_pred==1 else (100 - (tf_output*100))
        print("Predicted [Tensorflow]: {} | Probability: {:.2f}%".format(class_names[tf_pred], tf_probability))
        
        np.savetxt('sample.data', img[x].flatten())
        !forest_fire_deepC/forest_fire.exe sample.data &> /dev/null
        dc_output = np.loadtxt('deepSea_result_1.out')
        dc_pred = (dc_output > 0.5).astype('int')
        dc_probability = (dc_output*100) if dc_pred==1 else (100 - (dc_output*100))
        print("Predicted [deepSea]: {} | Probability: {:.2f}%".format(class_names[dc_pred], dc_probability))

        
        if tf_pred == dc_pred:
            print("TensorFlow and DeepSea prediction matched.")
        else:
            print("TensorFlow and DeepSea prediction did not match.")
        break
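`compare()` talks to the compiled model through plain text files: the input image is flattened into `sample.data`, the executable is invoked on it, and the sigmoid score is read back from `deepSea_result_1.out`. A minimal sketch of that round-trip, with a random 64x64x3 array standing in for `img[x]` and a hand-written score file simulating the executable's output (the real run would come from `forest_fire_deepC/forest_fire.exe`):

```python
import numpy as np
import tempfile, os

# Dummy image standing in for img[x]; the notebook's samples are 64x64x3
# per the shape comments in predict()/compare().
img = np.random.rand(64, 64, 3).astype(np.float32)

tmpdir = tempfile.mkdtemp()
sample_path = os.path.join(tmpdir, 'sample.data')

# What compare() does before invoking the executable:
# one float per line, row-major order.
np.savetxt(sample_path, img.flatten())

# The compiled model would now be run on sample.data and would write
# its score to deepSea_result_1.out; here we simulate that file.
result_path = os.path.join(tmpdir, 'deepSea_result_1.out')
with open(result_path, 'w') as f:
    f.write('0.9731\n')

# What compare() does after the run: parse the score back and threshold it.
dc_output = np.loadtxt(result_path)
dc_pred = int(dc_output > 0.5)
print(dc_pred, float(dc_output))

# Round-trip check: the flattened input survives the text encoding.
restored = np.loadtxt(sample_path).astype(np.float32)
assert np.allclose(restored, img.flatten())
```

The text-file interface is slow but keeps the Python side and the compiled deepSea binary fully decoupled, which is convenient for spot-checking that both produce the same predictions.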
In [21]:
for i in range(10):
    # pick random test data sample from one batch
    x = random.randint(0, batch_size - 1)
    compare(x)
    print()
True:  fire
Predicted [Tensorflow]: fire | Probability: 100.00%
Predicted [deepSea]: fire | Probability: 100.00%
TensorFlow and DeepSea prediction matched.

True:  no_fire
Predicted [Tensorflow]: no_fire | Probability: 100.00%
Predicted [deepSea]: no_fire | Probability: 100.00%
TensorFlow and DeepSea prediction matched.

True:  fire
Predicted [Tensorflow]: fire | Probability: 100.00%
Predicted [deepSea]: fire | Probability: 100.00%
TensorFlow and DeepSea prediction matched.

True:  fire
Predicted [Tensorflow]: fire | Probability: 99.97%
Predicted [deepSea]: fire | Probability: 99.97%
TensorFlow and DeepSea prediction matched.

True:  fire
Predicted [Tensorflow]: fire | Probability: 100.00%
Predicted [deepSea]: fire | Probability: 100.00%
TensorFlow and DeepSea prediction matched.

True:  fire
Predicted [Tensorflow]: fire | Probability: 100.00%
Predicted [deepSea]: fire | Probability: 100.00%
TensorFlow and DeepSea prediction matched.

True:  fire
Predicted [Tensorflow]: fire | Probability: 100.00%
Predicted [deepSea]: fire | Probability: 100.00%
TensorFlow and DeepSea prediction matched.

True:  no_fire
Predicted [Tensorflow]: no_fire | Probability: 99.82%
Predicted [deepSea]: no_fire | Probability: 99.82%
TensorFlow and DeepSea prediction matched.

True:  fire
Predicted [Tensorflow]: fire | Probability: 100.00%
Predicted [deepSea]: fire | Probability: 100.00%
TensorFlow and DeepSea prediction matched.

True:  fire
Predicted [Tensorflow]: fire | Probability: 100.00%
Predicted [deepSea]: fire | Probability: 100.00%
TensorFlow and DeepSea prediction matched.

deepCC for Arduino Nano 33 BLE Sense

In [22]:
!deepCC forest_fire.h5 --board="Arduino Nano 33 BLE Sense" --debug --archive --bundle
[INFO]
Reading [keras model] 'forest_fire.h5'
[SUCCESS]
Saved 'forest_fire_deepC/forest_fire.onnx'
[INFO]
Reading [onnx model] 'forest_fire_deepC/forest_fire.onnx'
[INFO]
Model info:
  ir_vesion : 5
  doc       : 
[WARNING]
[ONNX]: terminal (input/output) conv2d_input's shape is less than 1. Changing it to 1.
[WARNING]
[ONNX]: terminal (input/output) dense_2's shape is less than 1. Changing it to 1.
WARN (GRAPH): found operator node with the same name (dense_2) as io node.
[INFO]
Running DNNC graph sanity check ...
[SUCCESS]
Passed sanity check.
[INFO]
Writing C++ file 'forest_fire_deepC/forest_fire.cpp'
[INFO]
deepSea model files are ready in 'forest_fire_deepC/' 
[RUNNING COMMAND]
arm-none-eabi-g++ -std=c++11 -O3 -mcpu=cortex-m4 -specs=nosys.specs -mthumb -fno-exceptions -fno-rtti -msoft-float -mfloat-abi=softfp -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -I /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 -c "forest_fire_deepC/forest_fire.cpp" -o "forest_fire_deepC/forest_fire.o"
[RUNNING COMMAND]
arm-none-eabi-ar rcs "forest_fire_deepC/lib_forest_fire.a" "forest_fire_deepC/forest_fire.o"
[RUNNING COMMAND]
size "forest_fire_deepC/lib_forest_fire.a"
   text	   data	    bss	    dec	    hex	filename
  82295	      4	     96	  82395	  141db	forest_fire.o (ex forest_fire_deepC/lib_forest_fire.a)
[SUCCESS]
Saved model as archive "forest_fire_deepC/lib_forest_fire.a"
[DEBUG]
Intermediate files won't be removed.
[BUNDLE]
Bundle "forest_fire_deepC/forest_fire.zip" generated.