
Detecting Ships in Aerial Images

Credit: AITS Cainvas Community

Photo by MUTI on Dribbble

In [1]:
!wget -N "https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/data_week4.zip"
!unzip -qo data_week4.zip 
!rm data_week4.zip
--2021-07-13 11:54:04--  https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/data_week4.zip
Resolving cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)... 52.219.160.71
Connecting to cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)|52.219.160.71|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 48464529 (46M) [application/x-zip-compressed]
Saving to: ‘data_week4.zip’

data_week4.zip      100%[===================>]  46.22M   104MB/s    in 0.4s    

2021-07-13 11:54:05 (104 MB/s) - ‘data_week4.zip’ saved [48464529/48464529]

In [2]:
!pip install imagecorruptions
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: imagecorruptions in ./.local/lib/python3.7/site-packages (1.1.2)
Requirement already satisfied: numpy>=1.16 in /opt/tljh/user/lib/python3.7/site-packages (from imagecorruptions) (1.18.5)
Requirement already satisfied: scikit-image>=0.15 in /opt/tljh/user/lib/python3.7/site-packages (from imagecorruptions) (0.17.2)
Requirement already satisfied: opencv-python>=3.4.5 in /opt/tljh/user/lib/python3.7/site-packages (from imagecorruptions) (4.4.0.46)
Requirement already satisfied: scipy>=1.2.1 in /opt/tljh/user/lib/python3.7/site-packages (from imagecorruptions) (1.4.1)
Requirement already satisfied: Pillow>=5.4.1 in /opt/tljh/user/lib/python3.7/site-packages (from imagecorruptions) (8.0.1)
Requirement already satisfied: tifffile>=2019.7.26 in /opt/tljh/user/lib/python3.7/site-packages (from scikit-image>=0.15->imagecorruptions) (2020.12.8)
Requirement already satisfied: networkx>=2.0 in /opt/tljh/user/lib/python3.7/site-packages (from scikit-image>=0.15->imagecorruptions) (2.5)
Requirement already satisfied: imageio>=2.3.0 in /opt/tljh/user/lib/python3.7/site-packages (from scikit-image>=0.15->imagecorruptions) (2.9.0)
Requirement already satisfied: PyWavelets>=1.1.1 in /opt/tljh/user/lib/python3.7/site-packages (from scikit-image>=0.15->imagecorruptions) (1.1.1)
Requirement already satisfied: matplotlib!=3.0.0,>=2.0.0 in /opt/tljh/user/lib/python3.7/site-packages (from scikit-image>=0.15->imagecorruptions) (3.3.3)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.3 in /opt/tljh/user/lib/python3.7/site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image>=0.15->imagecorruptions) (2.4.7)
Requirement already satisfied: python-dateutil>=2.1 in /opt/tljh/user/lib/python3.7/site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image>=0.15->imagecorruptions) (2.8.1)
Requirement already satisfied: cycler>=0.10 in /opt/tljh/user/lib/python3.7/site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image>=0.15->imagecorruptions) (0.10.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /opt/tljh/user/lib/python3.7/site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image>=0.15->imagecorruptions) (1.3.1)
Requirement already satisfied: six in /opt/tljh/user/lib/python3.7/site-packages (from cycler>=0.10->matplotlib!=3.0.0,>=2.0.0->scikit-image>=0.15->imagecorruptions) (1.15.0)
Requirement already satisfied: decorator>=4.3.0 in /opt/tljh/user/lib/python3.7/site-packages (from networkx>=2.0->scikit-image>=0.15->imagecorruptions) (4.4.2)
WARNING: You are using pip version 20.3.1; however, version 21.1.3 is available.
You should consider upgrading via the '/opt/tljh/user/bin/python -m pip install --upgrade pip' command.

Importing Prerequisites

In [3]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import os, random, cv2, pickle, json, itertools
import imgaug.imgaug
import imgaug.augmenters as iaa


from IPython.display import SVG
from tensorflow.keras.utils import plot_model, model_to_dot
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from collections import Counter
from sklearn.utils import class_weight
from tqdm import tqdm
from sklearn.preprocessing import LabelBinarizer

from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import (Add, Input, Conv2D, Dropout, Activation, BatchNormalization, MaxPool2D, ZeroPadding2D, AveragePooling2D, Flatten, Dense)
from tensorflow.keras.optimizers import Adam, SGD
from tensorflow.keras.callbacks import TensorBoard, ModelCheckpoint, Callback, EarlyStopping
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.initializers import *

Function to plot model metrics

In [4]:
def show_final_history(history):
    plt.style.use("ggplot")
    fig, ax = plt.subplots(1, 2, figsize=(15, 5))
    ax[0].set_title('Loss')
    ax[1].set_title('Accuracy')
    ax[0].plot(history.history['loss'], label='Train Loss')
    ax[0].plot(history.history['val_loss'], label='Validation Loss')
    ax[1].plot(history.history['accuracy'], label='Train Accuracy')
    ax[1].plot(history.history['val_accuracy'], label='Validation Accuracy')

    ax[0].legend(loc='upper right')
    ax[1].legend(loc='lower right')
    plt.show()

Identifying the dataset and classes

In [5]:
datasets = ['data_week4/dataset']

class_names = ["no-ship","ship"]

class_name_labels = {class_name:i for i,class_name in enumerate(class_names)}

num_classes = len(class_names)
class_name_labels
Out[5]:
{'no-ship': 0, 'ship': 1}

Loading the data

In [6]:
def load_data():
    images, labels = [], []

    for dataset in datasets:

        for folder in os.listdir(dataset):
            label = class_name_labels[folder]

            for file in tqdm(os.listdir(os.path.join(dataset, folder))):

                img_path = os.path.join(dataset, folder, file)

                # Read with OpenCV (BGR), convert to RGB, resize to 48x48
                img = cv2.imread(img_path)
                img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
                img = cv2.resize(img, (48, 48))

                images.append(img)
                labels.append(label)

    # Convert once, after all folders are read; scale pixels to [0, 1].
    # (Converting inside the dataset loop, as before, would break with
    # more than one dataset, since ndarrays cannot be appended to.)
    images = np.array(images, dtype=np.float32) / 255.0
    labels = np.array(labels, dtype=np.float32)

    return (images, labels)
In [7]:
(images, labels) = load_data()
images.shape, labels.shape
100%|██████████| 1000/1000 [00:00<00:00, 3393.32it/s]
100%|██████████| 3000/3000 [00:00<00:00, 3399.12it/s]
Out[7]:
((4000, 48, 48, 3), (4000,))

Checking our data for class imbalance

In [8]:
n_labels = labels.shape[0]

_, count = np.unique(labels, return_counts=True)

df = pd.DataFrame(data = count)
df['Class Label'] = class_names
df.columns = ['Count','Class-Label']
df.set_index('Class-Label',inplace=True)
df
Out[8]:
             Count
Class-Label
no-ship       3000
ship          1000
In [9]:
df.plot.bar(rot=0)
plt.title("distribution of images per class");

Augmenting images to correct the class imbalance and reduce overfitting

In [10]:
def augment_add(images, seq, labels):

    augmented_images, augmented_labels = [], []

    # Create two augmented copies of every 'ship' image (label 1),
    # growing the minority class from 1000 to 3000 images.
    for idx, img in tqdm(enumerate(images)):

        if labels[idx] == 1:
            image_aug_1 = seq.augment_image(image=img)
            image_aug_2 = seq.augment_image(image=img)
            augmented_images.append(image_aug_1)
            augmented_images.append(image_aug_2)
            augmented_labels.append(labels[idx])
            augmented_labels.append(labels[idx])

    augmented_images = np.array(augmented_images, dtype=np.float32)
    augmented_labels = np.array(augmented_labels, dtype=np.float32)

    return (augmented_images, augmented_labels)
In [11]:
seq = iaa.Sequential([
    iaa.Fliplr(0.5),
    iaa.Crop(percent=(0,0.1)),
    iaa.LinearContrast((0.75,1.5)),
    iaa.Multiply((0.8,1.2), per_channel=0.2),
    iaa.Affine(
        scale={'x':(0.8,1.2), "y":(0.8,1.2)},
        translate_percent={"x":(-0.2,0.2),"y":(-0.2,0.2)},
        rotate=(-25,25),
        shear=(-8,8)
    )
], random_order=True)
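
Before applying the pipeline wholesale, it can help to eyeball a few augmented variants of a single image. A minimal sketch, not part of the original run, reusing seq, images, and labels from above:

# Preview four augmented variants of the first 'ship' image
ship_idx = int(np.argmax(labels == 1))
fig, axes = plt.subplots(1, 4, figsize=(12, 3))
for ax in axes:
    # Multiply/LinearContrast can push float pixels slightly outside [0, 1]
    ax.imshow(np.clip(seq.augment_image(image=images[ship_idx]), 0, 1))
    ax.axis('off')
plt.show()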
In [12]:
(aug_images, aug_labels) = augment_add(images, seq, labels)
images = np.concatenate([images, aug_images])
labels = np.concatenate([labels, aug_labels])
4000it [00:03, 1165.44it/s]
In [13]:
images.shape, labels.shape
Out[13]:
((6000, 48, 48, 3), (6000,))
In [14]:
labels = to_categorical(labels)
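
to_categorical converts the integer class ids into one-hot rows, which is what the two-unit softmax head defined later expects. For example:

# 0 ('no-ship') becomes [1, 0]; 1 ('ship') becomes [0, 1]
print(to_categorical(np.array([0., 1., 1.])))
# [[1. 0.]
#  [0. 1.]
#  [0. 1.]]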

Dividing the images into train, validation, and test sets

In [15]:
np.random.seed(42)
np.random.shuffle(images)

np.random.seed(42)
np.random.shuffle(labels)
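
Seeding identically before each shuffle applies the same permutation to images and labels, keeping the pairs aligned. An equivalent, more explicit version (a minimal sketch, meant as a replacement for the cell above rather than an addition to it):

# One shared permutation index keeps image/label pairs aligned
perm = np.random.RandomState(42).permutation(len(images))
images, labels = images[perm], labels[perm]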
In [16]:
total_count = len(images)
total_count

train = int(0.7*total_count)
val = int(0.2*total_count)
test = int(0.1*total_count)

train_images, train_labels = images[:train], labels[:train]
val_images, val_labels = images[train:(val+train)], labels[train:(val+train)]
test_images, test_labels = images[-test:], labels[-test:]

train_images.shape, val_images.shape, test_images.shape
Out[16]:
((4200, 48, 48, 3), (1200, 48, 48, 3), (600, 48, 48, 3))
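
Because the arrays were shuffled first, slicing gives roughly balanced subsets, but the class ratio is not guaranteed. A stratified alternative uses train_test_split (imported earlier but unused); a minimal sketch with hypothetical variable names, not used below:

# 70% train; of the remaining 30%, two thirds validation (20%) and one third test (10%)
tr_x, rest_x, tr_y, rest_y = train_test_split(
    images, labels, train_size=0.7, random_state=42,
    stratify=labels.argmax(axis=1))
va_x, te_x, va_y, te_y = train_test_split(
    rest_x, rest_y, train_size=2/3, random_state=42,
    stratify=rest_y.argmax(axis=1))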

Defining model architecture

In [17]:
model = Sequential([
    Input(shape=(48,48,3)),
    ZeroPadding2D((5,5)),
    Conv2D(16, 3, activation='relu'),
    BatchNormalization(),
    Conv2D(32, 3, activation='relu'),
    BatchNormalization(),
    MaxPool2D(pool_size=(2,2)),
    Dropout(0.3),
    Conv2D(32, 5, activation='relu'),
    BatchNormalization(),
    MaxPool2D(pool_size=(2,2)),
    Dropout(0.3),
    Conv2D(64, 3, activation='relu'),
    BatchNormalization(),
    MaxPool2D(pool_size=(2,2)),
    Dropout(0.3),
    Flatten(),
    Dense(64, activation='relu'),
    Dropout(0.5),
    Dense(128, activation='relu'),
    Dense(2, activation='softmax')
])

model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
zero_padding2d (ZeroPadding2 (None, 58, 58, 3)         0         
_________________________________________________________________
conv2d (Conv2D)              (None, 56, 56, 16)        448       
_________________________________________________________________
batch_normalization (BatchNo (None, 56, 56, 16)        64        
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 54, 54, 32)        4640      
_________________________________________________________________
batch_normalization_1 (Batch (None, 54, 54, 32)        128       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 27, 27, 32)        0         
_________________________________________________________________
dropout (Dropout)            (None, 27, 27, 32)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 23, 23, 32)        25632     
_________________________________________________________________
batch_normalization_2 (Batch (None, 23, 23, 32)        128       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 11, 11, 32)        0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 11, 11, 32)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 9, 9, 64)          18496     
_________________________________________________________________
batch_normalization_3 (Batch (None, 9, 9, 64)          256       
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 4, 4, 64)          0         
_________________________________________________________________
dropout_2 (Dropout)          (None, 4, 4, 64)          0         
_________________________________________________________________
flatten (Flatten)            (None, 1024)              0         
_________________________________________________________________
dense (Dense)                (None, 64)                65600     
_________________________________________________________________
dropout_3 (Dropout)          (None, 64)                0         
_________________________________________________________________
dense_1 (Dense)              (None, 128)               8320      
_________________________________________________________________
dense_2 (Dense)              (None, 2)                 258       
=================================================================
Total params: 123,970
Trainable params: 123,682
Non-trainable params: 288
_________________________________________________________________

Defining model callbacks and compiling the model with the Adam optimizer

In [18]:
checkpoint = ModelCheckpoint(
    './base.model',
    monitor='val_accuracy',
    verbose=1,
    save_best_only=True,
    mode='max',
    save_weights_only=False,
    save_freq='epoch'  # the Keras argument is save_freq; 'save_frequency' is not valid
)
earlystop = EarlyStopping(
    monitor='val_loss',
    min_delta=0.001,
    patience=50,
    verbose=1,
    mode='auto'
)

opt = Adam(learning_rate=1e-3)  # 'lr' is a deprecated alias for learning_rate

callbacks = [checkpoint,earlystop]

model.compile(optimizer=opt,loss='binary_crossentropy',metrics=['accuracy'])
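
Since the labels are one-hot encoded and the output layer is a two-unit softmax, categorical_crossentropy is the conventional pairing; binary_crossentropy also trains here but averages the loss over the two output units. A minimal alternative compile call, assuming the same opt:

model.compile(optimizer=opt,
              loss='categorical_crossentropy',
              metrics=['accuracy'])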

Training the model for 50 epochs with a batch size of 16

In [19]:
epochs = 50
batch_size = 16

history = model.fit(train_images, train_labels,
                    batch_size=batch_size,
                    steps_per_epoch=len(train_images)//batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(val_images, val_labels),
                    validation_steps=len(val_images)//batch_size,
                    callbacks=callbacks)
Epoch 1/50
256/262 [============================>.] - ETA: 0s - loss: 0.4100 - accuracy: 0.8167
Epoch 00001: val_accuracy improved from -inf to 0.52250, saving model to ./base.model
WARNING:tensorflow:From /opt/tljh/user/lib/python3.7/site-packages/tensorflow/python/training/tracking/tracking.py:111: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
WARNING:tensorflow:From /opt/tljh/user/lib/python3.7/site-packages/tensorflow/python/training/tracking/tracking.py:111: Layer.updates (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
INFO:tensorflow:Assets written to: ./base.model/assets
262/262 [==============================] - 3s 12ms/step - loss: 0.4092 - accuracy: 0.8173 - val_loss: 1.8793 - val_accuracy: 0.5225
Epoch 2/50
250/262 [===========================>..] - ETA: 0s - loss: 0.2069 - accuracy: 0.9133
Epoch 00002: val_accuracy improved from 0.52250 to 0.74417, saving model to ./base.model
INFO:tensorflow:Assets written to: ./base.model/assets
262/262 [==============================] - 3s 12ms/step - loss: 0.2105 - accuracy: 0.9128 - val_loss: 0.6305 - val_accuracy: 0.7442
Epoch 3/50
252/262 [===========================>..] - ETA: 0s - loss: 0.1534 - accuracy: 0.9431
Epoch 00003: val_accuracy improved from 0.74417 to 0.96500, saving model to ./base.model
INFO:tensorflow:Assets written to: ./base.model/assets
262/262 [==============================] - 3s 12ms/step - loss: 0.1548 - accuracy: 0.9422 - val_loss: 0.0930 - val_accuracy: 0.9650
Epoch 4/50
262/262 [==============================] - ETA: 0s - loss: 0.1414 - accuracy: 0.9405
Epoch 00004: val_accuracy did not improve from 0.96500
262/262 [==============================] - 1s 4ms/step - loss: 0.1414 - accuracy: 0.9405 - val_loss: 0.2403 - val_accuracy: 0.8883
Epoch 5/50
256/262 [============================>.] - ETA: 0s - loss: 0.1142 - accuracy: 0.9543
Epoch 00005: val_accuracy did not improve from 0.96500
262/262 [==============================] - 1s 4ms/step - loss: 0.1126 - accuracy: 0.9548 - val_loss: 0.1639 - val_accuracy: 0.9342
Epoch 6/50
256/262 [============================>.] - ETA: 0s - loss: 0.0890 - accuracy: 0.9658
Epoch 00006: val_accuracy improved from 0.96500 to 0.96667, saving model to ./base.model
INFO:tensorflow:Assets written to: ./base.model/assets
262/262 [==============================] - 3s 12ms/step - loss: 0.0881 - accuracy: 0.9658 - val_loss: 0.0870 - val_accuracy: 0.9667
Epoch 7/50
258/262 [============================>.] - ETA: 0s - loss: 0.0933 - accuracy: 0.9648
Epoch 00007: val_accuracy did not improve from 0.96667
262/262 [==============================] - 1s 4ms/step - loss: 0.0949 - accuracy: 0.9646 - val_loss: 0.1260 - val_accuracy: 0.9492
Epoch 8/50
256/262 [============================>.] - ETA: 0s - loss: 0.0854 - accuracy: 0.9706
Epoch 00008: val_accuracy did not improve from 0.96667
262/262 [==============================] - 1s 4ms/step - loss: 0.0870 - accuracy: 0.9699 - val_loss: 0.0925 - val_accuracy: 0.9608
Epoch 9/50
256/262 [============================>.] - ETA: 0s - loss: 0.0797 - accuracy: 0.9709
Epoch 00009: val_accuracy improved from 0.96667 to 0.98750, saving model to ./base.model
INFO:tensorflow:Assets written to: ./base.model/assets
262/262 [==============================] - 3s 11ms/step - loss: 0.0786 - accuracy: 0.9713 - val_loss: 0.0484 - val_accuracy: 0.9875
Epoch 10/50
260/262 [============================>.] - ETA: 0s - loss: 0.0651 - accuracy: 0.9793
Epoch 00010: val_accuracy did not improve from 0.98750
262/262 [==============================] - 1s 4ms/step - loss: 0.0662 - accuracy: 0.9790 - val_loss: 0.0468 - val_accuracy: 0.9833
Epoch 11/50
255/262 [============================>.] - ETA: 0s - loss: 0.0619 - accuracy: 0.9757
Epoch 00011: val_accuracy did not improve from 0.98750
262/262 [==============================] - 1s 4ms/step - loss: 0.0619 - accuracy: 0.9756 - val_loss: 0.0554 - val_accuracy: 0.9800
Epoch 12/50
256/262 [============================>.] - ETA: 0s - loss: 0.0646 - accuracy: 0.9777
Epoch 00012: val_accuracy did not improve from 0.98750
262/262 [==============================] - 1s 4ms/step - loss: 0.0647 - accuracy: 0.9775 - val_loss: 0.0845 - val_accuracy: 0.9658
Epoch 13/50
256/262 [============================>.] - ETA: 0s - loss: 0.0557 - accuracy: 0.9804
Epoch 00013: val_accuracy did not improve from 0.98750
262/262 [==============================] - 1s 4ms/step - loss: 0.0554 - accuracy: 0.9804 - val_loss: 0.2062 - val_accuracy: 0.9292
Epoch 14/50
256/262 [============================>.] - ETA: 0s - loss: 0.0633 - accuracy: 0.9787
Epoch 00014: val_accuracy improved from 0.98750 to 0.99250, saving model to ./base.model
INFO:tensorflow:Assets written to: ./base.model/assets
262/262 [==============================] - 3s 12ms/step - loss: 0.0621 - accuracy: 0.9792 - val_loss: 0.0351 - val_accuracy: 0.9925
Epoch 15/50
262/262 [==============================] - ETA: 0s - loss: 0.0521 - accuracy: 0.9840
Epoch 00015: val_accuracy did not improve from 0.99250
262/262 [==============================] - 1s 4ms/step - loss: 0.0521 - accuracy: 0.9840 - val_loss: 0.1271 - val_accuracy: 0.9450
Epoch 16/50
256/262 [============================>.] - ETA: 0s - loss: 0.0528 - accuracy: 0.9829
Epoch 00016: val_accuracy did not improve from 0.99250
262/262 [==============================] - 1s 4ms/step - loss: 0.0532 - accuracy: 0.9826 - val_loss: 0.0834 - val_accuracy: 0.9700
Epoch 17/50
256/262 [============================>.] - ETA: 0s - loss: 0.0518 - accuracy: 0.9804
Epoch 00017: val_accuracy did not improve from 0.99250
262/262 [==============================] - 1s 4ms/step - loss: 0.0524 - accuracy: 0.9802 - val_loss: 0.2525 - val_accuracy: 0.9100
Epoch 18/50
256/262 [============================>.] - ETA: 0s - loss: 0.0575 - accuracy: 0.9799
Epoch 00018: val_accuracy improved from 0.99250 to 0.99333, saving model to ./base.model
INFO:tensorflow:Assets written to: ./base.model/assets
262/262 [==============================] - 3s 12ms/step - loss: 0.0571 - accuracy: 0.9799 - val_loss: 0.0250 - val_accuracy: 0.9933
Epoch 19/50
260/262 [============================>.] - ETA: 0s - loss: 0.0562 - accuracy: 0.9798
Epoch 00019: val_accuracy did not improve from 0.99333
262/262 [==============================] - 1s 4ms/step - loss: 0.0558 - accuracy: 0.9799 - val_loss: 0.0327 - val_accuracy: 0.9858
Epoch 20/50
256/262 [============================>.] - ETA: 0s - loss: 0.0380 - accuracy: 0.9885
Epoch 00020: val_accuracy did not improve from 0.99333
262/262 [==============================] - 1s 4ms/step - loss: 0.0377 - accuracy: 0.9888 - val_loss: 0.0480 - val_accuracy: 0.9808
Epoch 21/50
262/262 [==============================] - ETA: 0s - loss: 0.0378 - accuracy: 0.9885
Epoch 00021: val_accuracy did not improve from 0.99333
262/262 [==============================] - 1s 4ms/step - loss: 0.0378 - accuracy: 0.9885 - val_loss: 0.0302 - val_accuracy: 0.9925
Epoch 22/50
255/262 [============================>.] - ETA: 0s - loss: 0.0431 - accuracy: 0.9840
Epoch 00022: val_accuracy did not improve from 0.99333
262/262 [==============================] - 1s 4ms/step - loss: 0.0424 - accuracy: 0.9845 - val_loss: 0.0433 - val_accuracy: 0.9850
Epoch 23/50
255/262 [============================>.] - ETA: 0s - loss: 0.0289 - accuracy: 0.9907
Epoch 00023: val_accuracy did not improve from 0.99333
262/262 [==============================] - 1s 4ms/step - loss: 0.0291 - accuracy: 0.9904 - val_loss: 0.0630 - val_accuracy: 0.9758
Epoch 24/50
256/262 [============================>.] - ETA: 0s - loss: 0.0356 - accuracy: 0.9883
Epoch 00024: val_accuracy did not improve from 0.99333
262/262 [==============================] - 1s 4ms/step - loss: 0.0361 - accuracy: 0.9878 - val_loss: 0.0202 - val_accuracy: 0.9933
Epoch 25/50
256/262 [============================>.] - ETA: 0s - loss: 0.0342 - accuracy: 0.9875
Epoch 00025: val_accuracy improved from 0.99333 to 0.99417, saving model to ./base.model
INFO:tensorflow:Assets written to: ./base.model/assets
262/262 [==============================] - 3s 11ms/step - loss: 0.0339 - accuracy: 0.9876 - val_loss: 0.0220 - val_accuracy: 0.9942
Epoch 26/50
261/262 [============================>.] - ETA: 0s - loss: 0.0282 - accuracy: 0.9885
Epoch 00026: val_accuracy did not improve from 0.99417
262/262 [==============================] - 1s 4ms/step - loss: 0.0281 - accuracy: 0.9885 - val_loss: 0.0447 - val_accuracy: 0.9842
Epoch 27/50
256/262 [============================>.] - ETA: 0s - loss: 0.0306 - accuracy: 0.9883
Epoch 00027: val_accuracy did not improve from 0.99417
262/262 [==============================] - 1s 4ms/step - loss: 0.0308 - accuracy: 0.9883 - val_loss: 0.0296 - val_accuracy: 0.9883
Epoch 28/50
255/262 [============================>.] - ETA: 0s - loss: 0.0308 - accuracy: 0.9892
Epoch 00028: val_accuracy improved from 0.99417 to 0.99500, saving model to ./base.model
INFO:tensorflow:Assets written to: ./base.model/assets
262/262 [==============================] - 3s 12ms/step - loss: 0.0318 - accuracy: 0.9888 - val_loss: 0.0191 - val_accuracy: 0.9950
Epoch 29/50
256/262 [============================>.] - ETA: 0s - loss: 0.0318 - accuracy: 0.9907
Epoch 00029: val_accuracy did not improve from 0.99500
262/262 [==============================] - 1s 4ms/step - loss: 0.0322 - accuracy: 0.9904 - val_loss: 0.0471 - val_accuracy: 0.9858
Epoch 30/50
255/262 [============================>.] - ETA: 0s - loss: 0.0350 - accuracy: 0.9887
Epoch 00030: val_accuracy did not improve from 0.99500
262/262 [==============================] - 1s 4ms/step - loss: 0.0351 - accuracy: 0.9885 - val_loss: 0.0706 - val_accuracy: 0.9742
Epoch 31/50
256/262 [============================>.] - ETA: 0s - loss: 0.0299 - accuracy: 0.9880
Epoch 00031: val_accuracy did not improve from 0.99500
262/262 [==============================] - 1s 4ms/step - loss: 0.0305 - accuracy: 0.9878 - val_loss: 0.0209 - val_accuracy: 0.9925
Epoch 32/50
256/262 [============================>.] - ETA: 0s - loss: 0.0209 - accuracy: 0.9919
Epoch 00032: val_accuracy did not improve from 0.99500
262/262 [==============================] - 1s 4ms/step - loss: 0.0219 - accuracy: 0.9916 - val_loss: 0.0225 - val_accuracy: 0.9942
Epoch 33/50
256/262 [============================>.] - ETA: 0s - loss: 0.0288 - accuracy: 0.9912
Epoch 00033: val_accuracy did not improve from 0.99500
262/262 [==============================] - 1s 4ms/step - loss: 0.0296 - accuracy: 0.9909 - val_loss: 0.0215 - val_accuracy: 0.9908
Epoch 34/50
256/262 [============================>.] - ETA: 0s - loss: 0.0289 - accuracy: 0.9902
Epoch 00034: val_accuracy did not improve from 0.99500
262/262 [==============================] - 1s 4ms/step - loss: 0.0293 - accuracy: 0.9900 - val_loss: 0.0861 - val_accuracy: 0.9650
Epoch 35/50
255/262 [============================>.] - ETA: 0s - loss: 0.0273 - accuracy: 0.9904
Epoch 00035: val_accuracy did not improve from 0.99500
262/262 [==============================] - 1s 4ms/step - loss: 0.0271 - accuracy: 0.9902 - val_loss: 0.0248 - val_accuracy: 0.9925
Epoch 36/50
256/262 [============================>.] - ETA: 0s - loss: 0.0206 - accuracy: 0.9924
Epoch 00036: val_accuracy did not improve from 0.99500
262/262 [==============================] - 1s 4ms/step - loss: 0.0202 - accuracy: 0.9926 - val_loss: 0.0172 - val_accuracy: 0.9933
Epoch 37/50
256/262 [============================>.] - ETA: 0s - loss: 0.0273 - accuracy: 0.9909
Epoch 00037: val_accuracy did not improve from 0.99500
262/262 [==============================] - 1s 4ms/step - loss: 0.0276 - accuracy: 0.9909 - val_loss: 0.0405 - val_accuracy: 0.9867
Epoch 38/50
256/262 [============================>.] - ETA: 0s - loss: 0.0299 - accuracy: 0.9902
Epoch 00038: val_accuracy did not improve from 0.99500
262/262 [==============================] - 1s 4ms/step - loss: 0.0294 - accuracy: 0.9904 - val_loss: 0.0279 - val_accuracy: 0.9925
Epoch 39/50
254/262 [============================>.] - ETA: 0s - loss: 0.0210 - accuracy: 0.9933
Epoch 00039: val_accuracy did not improve from 0.99500
262/262 [==============================] - 1s 4ms/step - loss: 0.0208 - accuracy: 0.9933 - val_loss: 0.0223 - val_accuracy: 0.9925
Epoch 40/50
250/262 [===========================>..] - ETA: 0s - loss: 0.0230 - accuracy: 0.9927
Epoch 00040: val_accuracy did not improve from 0.99500
262/262 [==============================] - 1s 4ms/step - loss: 0.0229 - accuracy: 0.9924 - val_loss: 0.0324 - val_accuracy: 0.9900
Epoch 41/50
256/262 [============================>.] - ETA: 0s - loss: 0.0144 - accuracy: 0.9956
Epoch 00041: val_accuracy did not improve from 0.99500
262/262 [==============================] - 1s 4ms/step - loss: 0.0143 - accuracy: 0.9957 - val_loss: 0.0457 - val_accuracy: 0.9858
Epoch 42/50
252/262 [===========================>..] - ETA: 0s - loss: 0.0162 - accuracy: 0.9943
Epoch 00042: val_accuracy did not improve from 0.99500
262/262 [==============================] - 1s 4ms/step - loss: 0.0181 - accuracy: 0.9938 - val_loss: 0.0214 - val_accuracy: 0.9917
Epoch 43/50
255/262 [============================>.] - ETA: 0s - loss: 0.0263 - accuracy: 0.9907
Epoch 00043: val_accuracy did not improve from 0.99500
262/262 [==============================] - 1s 4ms/step - loss: 0.0266 - accuracy: 0.9907 - val_loss: 0.0172 - val_accuracy: 0.9950
Epoch 44/50
256/262 [============================>.] - ETA: 0s - loss: 0.0225 - accuracy: 0.9929
Epoch 00044: val_accuracy improved from 0.99500 to 0.99583, saving model to ./base.model
INFO:tensorflow:Assets written to: ./base.model/assets
262/262 [==============================] - 3s 11ms/step - loss: 0.0222 - accuracy: 0.9928 - val_loss: 0.0144 - val_accuracy: 0.9958
Epoch 45/50
255/262 [============================>.] - ETA: 0s - loss: 0.0165 - accuracy: 0.9951
Epoch 00045: val_accuracy did not improve from 0.99583
262/262 [==============================] - 1s 4ms/step - loss: 0.0163 - accuracy: 0.9950 - val_loss: 0.0143 - val_accuracy: 0.9958
Epoch 46/50
256/262 [============================>.] - ETA: 0s - loss: 0.0057 - accuracy: 0.9983
Epoch 00046: val_accuracy did not improve from 0.99583
262/262 [==============================] - 1s 4ms/step - loss: 0.0056 - accuracy: 0.9983 - val_loss: 0.0127 - val_accuracy: 0.9950
Epoch 47/50
256/262 [============================>.] - ETA: 0s - loss: 0.0277 - accuracy: 0.9922
Epoch 00047: val_accuracy did not improve from 0.99583
262/262 [==============================] - 1s 4ms/step - loss: 0.0275 - accuracy: 0.9921 - val_loss: 0.0460 - val_accuracy: 0.9850
Epoch 48/50
256/262 [============================>.] - ETA: 0s - loss: 0.0139 - accuracy: 0.9956
Epoch 00048: val_accuracy did not improve from 0.99583
262/262 [==============================] - 1s 4ms/step - loss: 0.0136 - accuracy: 0.9957 - val_loss: 0.0323 - val_accuracy: 0.9925
Epoch 49/50
256/262 [============================>.] - ETA: 0s - loss: 0.0149 - accuracy: 0.9951
Epoch 00049: val_accuracy did not improve from 0.99583
262/262 [==============================] - 1s 4ms/step - loss: 0.0152 - accuracy: 0.9950 - val_loss: 0.0628 - val_accuracy: 0.9800
Epoch 50/50
256/262 [============================>.] - ETA: 0s - loss: 0.0189 - accuracy: 0.9944
Epoch 00050: val_accuracy did not improve from 0.99583
262/262 [==============================] - 1s 4ms/step - loss: 0.0186 - accuracy: 0.9945 - val_loss: 0.0623 - val_accuracy: 0.9742

Loss/Accuracy vs Epoch

In [20]:
show_final_history(history)
model.save("model.h5")  # saves the full model: architecture, weights, and optimizer state
print("Weights Saved")
Weights Saved
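
To reuse the saved file later, for example in a fresh session, it can be reloaded with Keras's load_model; a minimal sketch:

from tensorflow.keras.models import load_model

# Restores architecture + weights; no need to redefine the model
restored = load_model("model.h5")
restored.summary()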

Making predictions on the test images

In [21]:
test_pred = model.predict(test_images)
test_pred = np.argmax(test_pred, axis=1)
test_actual = np.argmax(test_labels, axis=1)

# Sample 8 random indices from the whole test set (600 images),
# not just the first 400 as before
rnd_idx = random.sample(range(len(test_images)), 8)

class_labels = {i: class_name for (class_name, i) in class_name_labels.items()}

for i, idx in enumerate(rnd_idx):
    plt.imshow(test_images[idx])
    plt.title("Actual: {}\nPredicted: {}".format(class_labels[test_actual[idx]],
                                                 class_labels[test_pred[idx]]))
    plt.grid(None)
    plt.show()
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
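
The confusion_matrix import from earlier is a natural fit here; a minimal sketch that tabulates the test predictions per class (rows are actual labels, columns are predictions):

cm = confusion_matrix(test_actual, test_pred)
print(pd.DataFrame(cm, index=class_names, columns=class_names))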

deepCC

In [22]:
!deepCC model.h 
[ERROR]
'model.h' doesn't exist.


usage: deepCC [-h] [--output] [--format] [--verbose] [--profile ]
              [--app_tensors FILE] [--archive] [--bundle] [--debug]
              [--mem_override] [--init_net_model] [--input_data_type]
              [--input_shape] [--cc] [--cc_flags  [...]] [--board]
              input
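
Note: the error above comes from a typo in the file name; the model saved earlier is model.h5, not model.h, so the intended invocation is !deepCC model.h5.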