
KEY FACIAL POINTS DETECTION AND FACE RECOGNITION USING DLIB

Credit: AITS Cainvas Community

Photo by Gleb Kuznetsov on Dribbble

This notebook implements facial keypoint detection, a core building block of face recognition.

IMPORT LIBRARIES/DATASETS AND PERFORM PRELIMINARY DATA PROCESSING

In [1]:
# Import the necessary packages
import pandas as pd
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras import layers, optimizers
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.layers import *
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.initializers import glorot_uniform
from tensorflow.keras.utils import plot_model
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint, LearningRateScheduler
from IPython.display import display
from tensorflow.keras import backend as K
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split

Loading the Dataset

DATASET: Source of Data

The dataset is a CSV file whose first 30 columns hold the (x, y) coordinates of 15 key facial points and whose last column ('Image') holds the pixel values of a 96×96 grayscale image as a space-separated string. The architecture of the model used in this notebook is inspired by ResNet.

In [2]:
# load the data
facialpoints_df = pd.read_csv('https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/KeyFacialPoints.csv')
In [3]:
facialpoints_df.head()
Out[3]:
left_eye_center_x left_eye_center_y right_eye_center_x right_eye_center_y left_eye_inner_corner_x left_eye_inner_corner_y left_eye_outer_corner_x left_eye_outer_corner_y right_eye_inner_corner_x right_eye_inner_corner_y ... nose_tip_y mouth_left_corner_x mouth_left_corner_y mouth_right_corner_x mouth_right_corner_y mouth_center_top_lip_x mouth_center_top_lip_y mouth_center_bottom_lip_x mouth_center_bottom_lip_y Image
0 66.033564 39.002274 30.227008 36.421678 59.582075 39.647423 73.130346 39.969997 36.356571 37.389402 ... 57.066803 61.195308 79.970165 28.614496 77.388992 43.312602 72.935459 43.130707 84.485774 238 236 237 238 240 240 239 241 241 243 240 23...
1 64.332936 34.970077 29.949277 33.448715 58.856170 35.274349 70.722723 36.187166 36.034723 34.361532 ... 55.660936 56.421447 76.352000 35.122383 76.047660 46.684596 70.266553 45.467915 85.480170 219 215 204 196 204 211 212 200 180 168 178 19...
2 65.057053 34.909642 30.903789 34.909642 59.412000 36.320968 70.984421 36.320968 37.678105 36.320968 ... 53.538947 60.822947 73.014316 33.726316 72.732000 47.274947 70.191789 47.274947 78.659368 144 142 159 180 188 188 184 180 167 132 84 59 ...
3 65.225739 37.261774 32.023096 37.261774 60.003339 39.127179 72.314713 38.380967 37.618643 38.754115 ... 54.166539 65.598887 72.703722 37.245496 74.195478 50.303165 70.091687 51.561183 78.268383 193 192 193 194 194 194 193 192 168 111 50 12 ...
4 66.725301 39.621261 32.244810 38.042032 58.565890 39.621261 72.515926 39.884466 36.982380 39.094852 ... 64.889521 60.671411 77.523239 31.191755 76.997301 44.962748 73.707387 44.227141 86.871166 147 148 160 196 215 214 216 217 219 220 206 18...

5 rows × 31 columns

In [4]:
#Extracting Insights from the data
facialpoints_df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2140 entries, 0 to 2139
Data columns (total 31 columns):
 #   Column                     Non-Null Count  Dtype  
---  ------                     --------------  -----  
 0   left_eye_center_x          2140 non-null   float64
 1   left_eye_center_y          2140 non-null   float64
 2   right_eye_center_x         2140 non-null   float64
 3   right_eye_center_y         2140 non-null   float64
 4   left_eye_inner_corner_x    2140 non-null   float64
 5   left_eye_inner_corner_y    2140 non-null   float64
 6   left_eye_outer_corner_x    2140 non-null   float64
 7   left_eye_outer_corner_y    2140 non-null   float64
 8   right_eye_inner_corner_x   2140 non-null   float64
 9   right_eye_inner_corner_y   2140 non-null   float64
 10  right_eye_outer_corner_x   2140 non-null   float64
 11  right_eye_outer_corner_y   2140 non-null   float64
 12  left_eyebrow_inner_end_x   2140 non-null   float64
 13  left_eyebrow_inner_end_y   2140 non-null   float64
 14  left_eyebrow_outer_end_x   2140 non-null   float64
 15  left_eyebrow_outer_end_y   2140 non-null   float64
 16  right_eyebrow_inner_end_x  2140 non-null   float64
 17  right_eyebrow_inner_end_y  2140 non-null   float64
 18  right_eyebrow_outer_end_x  2140 non-null   float64
 19  right_eyebrow_outer_end_y  2140 non-null   float64
 20  nose_tip_x                 2140 non-null   float64
 21  nose_tip_y                 2140 non-null   float64
 22  mouth_left_corner_x        2140 non-null   float64
 23  mouth_left_corner_y        2140 non-null   float64
 24  mouth_right_corner_x       2140 non-null   float64
 25  mouth_right_corner_y       2140 non-null   float64
 26  mouth_center_top_lip_x     2140 non-null   float64
 27  mouth_center_top_lip_y     2140 non-null   float64
 28  mouth_center_bottom_lip_x  2140 non-null   float64
 29  mouth_center_bottom_lip_y  2140 non-null   float64
 30  Image                      2140 non-null   object 
dtypes: float64(30), object(1)
memory usage: 518.4+ KB
In [5]:
# Each image is stored as a single space-separated string of pixel values,
# so parse it with np.fromstring using ' ' as the separator, then reshape the
# resulting 1D array into a 2D array of shape (96, 96)
facialpoints_df['Image'] = facialpoints_df['Image'].apply(lambda x: np.fromstring(x, dtype=int, sep=' ').reshape(96, 96))
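Note that np.fromstring with a sep argument is deprecated in recent NumPy releases. A minimal equivalent that avoids the deprecation warning (a sketch, not the cell that was run above):

# Parse the space-separated pixel string without the deprecated np.fromstring
facialpoints_df['Image'] = facialpoints_df['Image'].apply(
    lambda x: np.array(x.split(), dtype=int).reshape(96, 96))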
In [6]:
# Confirm the shape of the reshaped image
facialpoints_df['Image'][1].shape
Out[6]:
(96, 96)
In [7]:
# Check for null values in each column
facialpoints_df.isnull().sum()
Out[7]:
left_eye_center_x            0
left_eye_center_y            0
right_eye_center_x           0
right_eye_center_y           0
left_eye_inner_corner_x      0
left_eye_inner_corner_y      0
left_eye_outer_corner_x      0
left_eye_outer_corner_y      0
right_eye_inner_corner_x     0
right_eye_inner_corner_y     0
right_eye_outer_corner_x     0
right_eye_outer_corner_y     0
left_eyebrow_inner_end_x     0
left_eyebrow_inner_end_y     0
left_eyebrow_outer_end_x     0
left_eyebrow_outer_end_y     0
right_eyebrow_inner_end_x    0
right_eyebrow_inner_end_y    0
right_eyebrow_outer_end_x    0
right_eyebrow_outer_end_y    0
nose_tip_x                   0
nose_tip_y                   0
mouth_left_corner_x          0
mouth_left_corner_y          0
mouth_right_corner_x         0
mouth_right_corner_y         0
mouth_center_top_lip_x       0
mouth_center_top_lip_y       0
mouth_center_bottom_lip_x    0
mouth_center_bottom_lip_y    0
Image                        0
dtype: int64

VISUALIZING IMAGE

In [8]:
# Pick and display a random image from the dataset (keypoints are overlaid in the next cell)
i = np.random.randint(0, len(facialpoints_df))
plt.imshow(facialpoints_df['Image'][i], cmap='gray')
Out[8]:
<matplotlib.image.AxesImage at 0x7f5c0ce98f98>
In [9]:
# Overlay the keypoint coordinates from the dataframe on the image

plt.figure()
plt.imshow(facialpoints_df['Image'][i], cmap='gray')
for j in range(1, 31, 2):
    plt.plot(facialpoints_df.loc[i][j-1], facialpoints_df.loc[i][j], 'r.')
In [10]:
# Let's view more images in a grid format
fig = plt.figure(figsize=(20, 20))

for i in range(16):
    ax = fig.add_subplot(4, 4, i + 1)    
    image = plt.imshow(facialpoints_df['Image'][i], cmap = 'gray')
    for j in range(1,31,2):
        plt.plot(facialpoints_df.loc[i][j-1], facialpoints_df.loc[i][j], 'r.')
    

PERFORMING IMAGE AUGMENTATION TO INCREASE THE AMOUNT OF DATA

In [11]:
# Create a new copy of the dataframe
import copy
facialpoints_df_copy = copy.copy(facialpoints_df)
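Aside: copy.copy on a DataFrame delegates to pandas' own copy method, so the idiomatic equivalent is the one-liner below. Either way, object columns such as 'Image' are not copied recursively, which is safe here because every augmentation assigns new arrays instead of mutating in place.

# Equivalent, more idiomatic copy (a sketch)
facialpoints_df_copy = facialpoints_df.copy()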
In [12]:
# Obtain the column names of the DataFrame (all keypoint columns, excluding 'Image')

columns = facialpoints_df_copy.columns[:-1]
columns
Out[12]:
Index(['left_eye_center_x', 'left_eye_center_y', 'right_eye_center_x',
       'right_eye_center_y', 'left_eye_inner_corner_x',
       'left_eye_inner_corner_y', 'left_eye_outer_corner_x',
       'left_eye_outer_corner_y', 'right_eye_inner_corner_x',
       'right_eye_inner_corner_y', 'right_eye_outer_corner_x',
       'right_eye_outer_corner_y', 'left_eyebrow_inner_end_x',
       'left_eyebrow_inner_end_y', 'left_eyebrow_outer_end_x',
       'left_eyebrow_outer_end_y', 'right_eyebrow_inner_end_x',
       'right_eyebrow_inner_end_y', 'right_eyebrow_outer_end_x',
       'right_eyebrow_outer_end_y', 'nose_tip_x', 'nose_tip_y',
       'mouth_left_corner_x', 'mouth_left_corner_y', 'mouth_right_corner_x',
       'mouth_right_corner_y', 'mouth_center_top_lip_x',
       'mouth_center_top_lip_y', 'mouth_center_bottom_lip_x',
       'mouth_center_bottom_lip_y'],
      dtype='object')
In [13]:
# Take a look at the pixel values of a sample image and see if it makes sense!
facialpoints_df['Image'][0]
Out[13]:
array([[238, 236, 237, ..., 250, 250, 250],
       [235, 238, 236, ..., 249, 250, 251],
       [237, 236, 237, ..., 251, 251, 250],
       ...,
       [186, 183, 181, ...,  52,  57,  60],
       [189, 188, 207, ...,  61,  69,  78],
       [191, 184, 184, ...,  70,  75,  90]])
In [14]:
# plot the sample image
plt.imshow(facialpoints_df['Image'][0], cmap = 'gray')
Out[14]:
<matplotlib.image.AxesImage at 0x7f5c0a561ac8>
In [15]:
# Now let's flip each image horizontally (mirror along the vertical axis)
facialpoints_df_copy['Image'] = facialpoints_df_copy['Image'].apply(lambda x: np.flip(x, axis = 1))
In [16]:
# Now take a look at the flipped image and do a sanity check!
# Notice that the values of pixels are now flipped
facialpoints_df_copy['Image'][0]
Out[16]:
array([[250, 250, 250, ..., 237, 236, 238],
       [251, 250, 249, ..., 236, 238, 235],
       [250, 251, 251, ..., 237, 236, 237],
       ...,
       [ 60,  57,  52, ..., 181, 183, 186],
       [ 78,  69,  61, ..., 207, 188, 189],
       [ 90,  75,  70, ..., 184, 184, 191]])
In [17]:
# Notice that the image is flipped now
plt.imshow(facialpoints_df_copy['Image'][0], cmap = 'gray')
Out[17]:
<matplotlib.image.AxesImage at 0x7f5c08ce7780>
In [18]:
# Because the flip is horizontal, the y-coordinates are unchanged; only the
# x-coordinates change. The new x value is the image width (96) minus the old
# one, e.g. a keypoint at x = 30 maps to 96 - 30 = 66 in the mirrored image.
for i in range(len(columns)):
    if i % 2 == 0:
        facialpoints_df_copy[columns[i]] = facialpoints_df_copy[columns[i]].apply(lambda x: 96. - float(x))
In [19]:
# View the original image
plt.imshow(facialpoints_df['Image'][0], cmap='gray')
for j in range(1, 31, 2):
    plt.plot(facialpoints_df.loc[0][j-1], facialpoints_df.loc[0][j], 'r.')
In [20]:
# View the horizontally flipped image
plt.imshow(facialpoints_df_copy['Image'][0], cmap='gray')
for j in range(1, 31, 2):
    plt.plot(facialpoints_df_copy.loc[0][j-1], facialpoints_df_copy.loc[0][j], 'r.')
In [21]:
# Concatenate the original dataframe with the flipped copy
# (note that np.concatenate returns a NumPy array, not a DataFrame)
facialpoints_df_augmented = np.concatenate((facialpoints_df, facialpoints_df_copy))
In [22]:
facialpoints_df_augmented.shape
Out[22]:
(4280, 31)
In [23]:
# Let's perform another augmentation by randomly increasing image brightness
import random

facialpoints_df_copy = copy.copy(facialpoints_df)
facialpoints_df_copy['Image'] = facialpoints_df['Image'].apply(lambda x: np.clip(random.uniform(1, 2) * x, 0.0, 255.0))
facialpoints_df_augmented = np.concatenate((facialpoints_df_augmented, facialpoints_df_copy))
facialpoints_df_augmented.shape
Out[23]:
(6420, 31)
In [24]:
# Let's view an image with increased brightness

plt.imshow(facialpoints_df_copy['Image'][0], cmap='gray')
for j in range(1, 31, 2):
    plt.plot(facialpoints_df_copy.loc[0][j-1], facialpoints_df_copy.loc[0][j], 'r.')
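The same pattern extends naturally to random dimming, which this notebook does not apply; a minimal sketch (the name dim_df is hypothetical), scaling pixels by a factor below 1:

# Hypothetical extra augmentation: randomly decrease brightness
dim_df = copy.copy(facialpoints_df)
dim_df['Image'] = dim_df['Image'].apply(lambda x: np.clip(random.uniform(0, 1) * x, 0.0, 255.0))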

PERFORMING NORMALIZATION AND TRAINING DATA PREPARATION

In [25]:
# Obtain the images (column index 30 of the augmented array) and normalize the pixel values to [0, 1]
img = facialpoints_df_augmented[:, 30]
img = img/255.

# Create an empty array of shape (6420, 96, 96, 1) to train the model
X = np.empty((len(img), 96, 96, 1))

# Iterate through the normalized images and copy each into the array,
# expanding its dimensions from (96, 96) to (96, 96, 1)
for i in range(len(img)):
    X[i,] = np.expand_dims(img[i], axis = 2)

# Convert the array type to float32
X = np.asarray(X).astype(np.float32)
X.shape
Out[25]:
(6420, 96, 96, 1)
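The loop above can also be written without explicit iteration; a sketch of an equivalent vectorized version, assuming img holds the normalized (96, 96) arrays as above:

# Stack all images and add the trailing channel dimension in one step
X = np.stack(img)[..., np.newaxis].astype(np.float32)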
In [26]:
# Obtain the keypoint coordinates, which are used as the targets for training the model
y = facialpoints_df_augmented[:,:30]
y = np.asarray(y).astype(np.float32)
y.shape
Out[26]:
(6420, 30)
In [27]:
# Split the data into training and testing data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.1)
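Note that no random_state is fixed, so the split differs between runs; a reproducible variant (the seed value 42 is an arbitrary choice):

# Fix the seed so the train/test split is reproducible
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)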
In [28]:
X_train.shape
Out[28]:
(5778, 96, 96, 1)
In [29]:
# Let's view more images in a grid format
fig = plt.figure(figsize=(20, 20))

for i in range(64):
    ax = fig.add_subplot(8, 8, i + 1)    
    image = plt.imshow(X_train[i].reshape(96,96), cmap = 'gray')
    for j in range(1,31,2):
        plt.plot(y_train[i][j-1], y_train[i][j], 'r.')
    

BUILDING A CUSTOM RESIDUAL NEURAL NETWORK ARCHITECTURE

In [30]:
def res_block(X, filters, stage):
    """Residual block: one convolutional block followed by two identity blocks."""

    # CONVOLUTIONAL BLOCK
    X_copy = X
    f1, f2, f3 = filters

    # Main path
    X = Conv2D(f1, (1, 1), strides=(1, 1), name='res_' + str(stage) + '_conv_a', kernel_initializer=glorot_uniform(seed=0))(X)
    X = MaxPool2D((2, 2))(X)
    X = BatchNormalization(axis=3, name='bn_' + str(stage) + '_conv_a')(X)
    X = Activation('relu')(X)

    X = Conv2D(f2, kernel_size=(3, 3), strides=(1, 1), padding='same', name='res_' + str(stage) + '_conv_b', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name='bn_' + str(stage) + '_conv_b')(X)
    X = Activation('relu')(X)

    X = Conv2D(f3, kernel_size=(1, 1), strides=(1, 1), name='res_' + str(stage) + '_conv_c', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name='bn_' + str(stage) + '_conv_c')(X)

    # Shortcut path (downsampled and projected to match the main path)
    X_copy = Conv2D(f3, kernel_size=(1, 1), strides=(1, 1), name='res_' + str(stage) + '_conv_copy', kernel_initializer=glorot_uniform(seed=0))(X_copy)
    X_copy = MaxPool2D((2, 2))(X_copy)
    X_copy = BatchNormalization(axis=3, name='bn_' + str(stage) + '_conv_copy')(X_copy)

    # Add the main and shortcut paths
    X = Add()([X, X_copy])
    X = Activation('relu')(X)

    # IDENTITY BLOCK 1
    X_copy = X

    # Main path
    X = Conv2D(f1, (1, 1), strides=(1, 1), name='res_' + str(stage) + '_identity_1_a', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name='bn_' + str(stage) + '_identity_1_a')(X)
    X = Activation('relu')(X)

    X = Conv2D(f2, kernel_size=(3, 3), strides=(1, 1), padding='same', name='res_' + str(stage) + '_identity_1_b', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name='bn_' + str(stage) + '_identity_1_b')(X)
    X = Activation('relu')(X)

    X = Conv2D(f3, kernel_size=(1, 1), strides=(1, 1), name='res_' + str(stage) + '_identity_1_c', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name='bn_' + str(stage) + '_identity_1_c')(X)

    # Add both paths (the shortcut passes the input through unchanged, hence "identity")
    X = Add()([X, X_copy])
    X = Activation('relu')(X)

    # IDENTITY BLOCK 2
    X_copy = X

    # Main path
    X = Conv2D(f1, (1, 1), strides=(1, 1), name='res_' + str(stage) + '_identity_2_a', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name='bn_' + str(stage) + '_identity_2_a')(X)
    X = Activation('relu')(X)

    X = Conv2D(f2, kernel_size=(3, 3), strides=(1, 1), padding='same', name='res_' + str(stage) + '_identity_2_b', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name='bn_' + str(stage) + '_identity_2_b')(X)
    X = Activation('relu')(X)

    X = Conv2D(f3, kernel_size=(1, 1), strides=(1, 1), name='res_' + str(stage) + '_identity_2_c', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name='bn_' + str(stage) + '_identity_2_c')(X)

    # Add both paths (the shortcut passes the input through unchanged, hence "identity")
    X = Add()([X, X_copy])
    X = Activation('relu')(X)

    return X
In [31]:
input_shape = (96,96,1)

# Input tensor shape
X_input = Input(input_shape)

# Zero-padding
X = ZeroPadding2D((3,3))(X_input)

# Stage #1
X = Conv2D(64, (7,7), strides= (2,2), name = 'conv1', kernel_initializer= glorot_uniform(seed = 0))(X)
X = BatchNormalization(axis =3, name = 'bn_conv1')(X)
X = Activation('relu')(X)
X = MaxPooling2D((3,3), strides= (2,2))(X)

# Stage #2
X = res_block(X, filters=[64, 64, 256], stage=2)

# Stage #3
X = res_block(X, filters=[128, 128, 512], stage=3)

# Average Pooling
X = AveragePooling2D((2,2), name = 'Averagea_Pooling')(X)

# Final layer
X = Flatten()(X)
X = Dense(4096, activation = 'relu')(X)
X = Dropout(0.2)(X)
X = Dense(2048, activation = 'relu')(X)
X = Dropout(0.1)(X)
X = Dense(30, activation = 'relu')(X)


model = Model(inputs=X_input, outputs=X)
model.summary()
Model: "functional_1"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(None, 96, 96, 1)]  0                                            
__________________________________________________________________________________________________
zero_padding2d (ZeroPadding2D)  (None, 102, 102, 1)  0           input_1[0][0]                    
__________________________________________________________________________________________________
conv1 (Conv2D)                  (None, 48, 48, 64)   3200        zero_padding2d[0][0]             
__________________________________________________________________________________________________
bn_conv1 (BatchNormalization)   (None, 48, 48, 64)   256         conv1[0][0]                      
__________________________________________________________________________________________________
activation (Activation)         (None, 48, 48, 64)   0           bn_conv1[0][0]                   
__________________________________________________________________________________________________
max_pooling2d (MaxPooling2D)    (None, 23, 23, 64)   0           activation[0][0]                 
__________________________________________________________________________________________________
res_2_conv_a (Conv2D)           (None, 23, 23, 64)   4160        max_pooling2d[0][0]              
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D)  (None, 11, 11, 64)   0           res_2_conv_a[0][0]               
__________________________________________________________________________________________________
bn_2_conv_a (BatchNormalization (None, 11, 11, 64)   256         max_pooling2d_1[0][0]            
__________________________________________________________________________________________________
activation_1 (Activation)       (None, 11, 11, 64)   0           bn_2_conv_a[0][0]                
__________________________________________________________________________________________________
res_2_conv_b (Conv2D)           (None, 11, 11, 64)   36928       activation_1[0][0]               
__________________________________________________________________________________________________
bn_2_conv_b (BatchNormalization (None, 11, 11, 64)   256         res_2_conv_b[0][0]               
__________________________________________________________________________________________________
activation_2 (Activation)       (None, 11, 11, 64)   0           bn_2_conv_b[0][0]                
__________________________________________________________________________________________________
res_2_conv_copy (Conv2D)        (None, 23, 23, 256)  16640       max_pooling2d[0][0]              
__________________________________________________________________________________________________
res_2_conv_c (Conv2D)           (None, 11, 11, 256)  16640       activation_2[0][0]               
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D)  (None, 11, 11, 256)  0           res_2_conv_copy[0][0]            
__________________________________________________________________________________________________
bn_2_conv_c (BatchNormalization (None, 11, 11, 256)  1024        res_2_conv_c[0][0]               
__________________________________________________________________________________________________
bn_2_conv_copy (BatchNormalizat (None, 11, 11, 256)  1024        max_pooling2d_2[0][0]            
__________________________________________________________________________________________________
add (Add)                       (None, 11, 11, 256)  0           bn_2_conv_c[0][0]                
                                                                 bn_2_conv_copy[0][0]             
__________________________________________________________________________________________________
activation_3 (Activation)       (None, 11, 11, 256)  0           add[0][0]                        
__________________________________________________________________________________________________
res_2_identity_1_a (Conv2D)     (None, 11, 11, 64)   16448       activation_3[0][0]               
__________________________________________________________________________________________________
bn_2_identity_1_a (BatchNormali (None, 11, 11, 64)   256         res_2_identity_1_a[0][0]         
__________________________________________________________________________________________________
activation_4 (Activation)       (None, 11, 11, 64)   0           bn_2_identity_1_a[0][0]          
__________________________________________________________________________________________________
res_2_identity_1_b (Conv2D)     (None, 11, 11, 64)   36928       activation_4[0][0]               
__________________________________________________________________________________________________
bn_2_identity_1_b (BatchNormali (None, 11, 11, 64)   256         res_2_identity_1_b[0][0]         
__________________________________________________________________________________________________
activation_5 (Activation)       (None, 11, 11, 64)   0           bn_2_identity_1_b[0][0]          
__________________________________________________________________________________________________
res_2_identity_1_c (Conv2D)     (None, 11, 11, 256)  16640       activation_5[0][0]               
__________________________________________________________________________________________________
bn_2_identity_1_c (BatchNormali (None, 11, 11, 256)  1024        res_2_identity_1_c[0][0]         
__________________________________________________________________________________________________
add_1 (Add)                     (None, 11, 11, 256)  0           bn_2_identity_1_c[0][0]          
                                                                 activation_3[0][0]               
__________________________________________________________________________________________________
activation_6 (Activation)       (None, 11, 11, 256)  0           add_1[0][0]                      
__________________________________________________________________________________________________
res_2_identity_2_a (Conv2D)     (None, 11, 11, 64)   16448       activation_6[0][0]               
__________________________________________________________________________________________________
bn_2_identity_2_a (BatchNormali (None, 11, 11, 64)   256         res_2_identity_2_a[0][0]         
__________________________________________________________________________________________________
activation_7 (Activation)       (None, 11, 11, 64)   0           bn_2_identity_2_a[0][0]          
__________________________________________________________________________________________________
res_2_identity_2_b (Conv2D)     (None, 11, 11, 64)   36928       activation_7[0][0]               
__________________________________________________________________________________________________
bn_2_identity_2_b (BatchNormali (None, 11, 11, 64)   256         res_2_identity_2_b[0][0]         
__________________________________________________________________________________________________
activation_8 (Activation)       (None, 11, 11, 64)   0           bn_2_identity_2_b[0][0]          
__________________________________________________________________________________________________
res_2_identity_2_c (Conv2D)     (None, 11, 11, 256)  16640       activation_8[0][0]               
__________________________________________________________________________________________________
bn_2_identity_2_c (BatchNormali (None, 11, 11, 256)  1024        res_2_identity_2_c[0][0]         
__________________________________________________________________________________________________
add_2 (Add)                     (None, 11, 11, 256)  0           bn_2_identity_2_c[0][0]          
                                                                 activation_6[0][0]               
__________________________________________________________________________________________________
activation_9 (Activation)       (None, 11, 11, 256)  0           add_2[0][0]                      
__________________________________________________________________________________________________
res_3_conv_a (Conv2D)           (None, 11, 11, 128)  32896       activation_9[0][0]               
__________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D)  (None, 5, 5, 128)    0           res_3_conv_a[0][0]               
__________________________________________________________________________________________________
bn_3_conv_a (BatchNormalization (None, 5, 5, 128)    512         max_pooling2d_3[0][0]            
__________________________________________________________________________________________________
activation_10 (Activation)      (None, 5, 5, 128)    0           bn_3_conv_a[0][0]                
__________________________________________________________________________________________________
res_3_conv_b (Conv2D)           (None, 5, 5, 128)    147584      activation_10[0][0]              
__________________________________________________________________________________________________
bn_3_conv_b (BatchNormalization (None, 5, 5, 128)    512         res_3_conv_b[0][0]               
__________________________________________________________________________________________________
activation_11 (Activation)      (None, 5, 5, 128)    0           bn_3_conv_b[0][0]                
__________________________________________________________________________________________________
res_3_conv_copy (Conv2D)        (None, 11, 11, 512)  131584      activation_9[0][0]               
__________________________________________________________________________________________________
res_3_conv_c (Conv2D)           (None, 5, 5, 512)    66048       activation_11[0][0]              
__________________________________________________________________________________________________
max_pooling2d_4 (MaxPooling2D)  (None, 5, 5, 512)    0           res_3_conv_copy[0][0]            
__________________________________________________________________________________________________
bn_3_conv_c (BatchNormalization (None, 5, 5, 512)    2048        res_3_conv_c[0][0]               
__________________________________________________________________________________________________
bn_3_conv_copy (BatchNormalizat (None, 5, 5, 512)    2048        max_pooling2d_4[0][0]            
__________________________________________________________________________________________________
add_3 (Add)                     (None, 5, 5, 512)    0           bn_3_conv_c[0][0]                
                                                                 bn_3_conv_copy[0][0]             
__________________________________________________________________________________________________
activation_12 (Activation)      (None, 5, 5, 512)    0           add_3[0][0]                      
__________________________________________________________________________________________________
res_3_identity_1_a (Conv2D)     (None, 5, 5, 128)    65664       activation_12[0][0]              
__________________________________________________________________________________________________
bn_3_identity_1_a (BatchNormali (None, 5, 5, 128)    512         res_3_identity_1_a[0][0]         
__________________________________________________________________________________________________
activation_13 (Activation)      (None, 5, 5, 128)    0           bn_3_identity_1_a[0][0]          
__________________________________________________________________________________________________
res_3_identity_1_b (Conv2D)     (None, 5, 5, 128)    147584      activation_13[0][0]              
__________________________________________________________________________________________________
bn_3_identity_1_b (BatchNormali (None, 5, 5, 128)    512         res_3_identity_1_b[0][0]         
__________________________________________________________________________________________________
activation_14 (Activation)      (None, 5, 5, 128)    0           bn_3_identity_1_b[0][0]          
__________________________________________________________________________________________________
res_3_identity_1_c (Conv2D)     (None, 5, 5, 512)    66048       activation_14[0][0]              
__________________________________________________________________________________________________
bn_3_identity_1_c (BatchNormali (None, 5, 5, 512)    2048        res_3_identity_1_c[0][0]         
__________________________________________________________________________________________________
add_4 (Add)                     (None, 5, 5, 512)    0           bn_3_identity_1_c[0][0]          
                                                                 activation_12[0][0]              
__________________________________________________________________________________________________
activation_15 (Activation)      (None, 5, 5, 512)    0           add_4[0][0]                      
__________________________________________________________________________________________________
res_3_identity_2_a (Conv2D)     (None, 5, 5, 128)    65664       activation_15[0][0]              
__________________________________________________________________________________________________
bn_3_identity_2_a (BatchNormali (None, 5, 5, 128)    512         res_3_identity_2_a[0][0]         
__________________________________________________________________________________________________
activation_16 (Activation)      (None, 5, 5, 128)    0           bn_3_identity_2_a[0][0]          
__________________________________________________________________________________________________
res_3_identity_2_b (Conv2D)     (None, 5, 5, 128)    147584      activation_16[0][0]              
__________________________________________________________________________________________________
bn_3_identity_2_b (BatchNormali (None, 5, 5, 128)    512         res_3_identity_2_b[0][0]         
__________________________________________________________________________________________________
activation_17 (Activation)      (None, 5, 5, 128)    0           bn_3_identity_2_b[0][0]          
__________________________________________________________________________________________________
res_3_identity_2_c (Conv2D)     (None, 5, 5, 512)    66048       activation_17[0][0]              
__________________________________________________________________________________________________
bn_3_identity_2_c (BatchNormali (None, 5, 5, 512)    2048        res_3_identity_2_c[0][0]         
__________________________________________________________________________________________________
add_5 (Add)                     (None, 5, 5, 512)    0           bn_3_identity_2_c[0][0]          
                                                                 activation_15[0][0]              
__________________________________________________________________________________________________
activation_18 (Activation)      (None, 5, 5, 512)    0           add_5[0][0]                      
__________________________________________________________________________________________________
Averagea_Pooling (AveragePoolin (None, 2, 2, 512)    0           activation_18[0][0]              
__________________________________________________________________________________________________
flatten (Flatten)               (None, 2048)         0           Averagea_Pooling[0][0]           
__________________________________________________________________________________________________
dense (Dense)                   (None, 4096)         8392704     flatten[0][0]                    
__________________________________________________________________________________________________
dropout (Dropout)               (None, 4096)         0           dense[0][0]                      
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 2048)         8390656     dropout[0][0]                    
__________________________________________________________________________________________________
dropout_1 (Dropout)             (None, 2048)         0           dense_1[0][0]                    
__________________________________________________________________________________________________
dense_2 (Dense)                 (None, 30)           61470       dropout_1[0][0]                  
==================================================================================================
Total params: 18,016,286
Trainable params: 18,007,710
Non-trainable params: 8,576
__________________________________________________________________________________________________
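A design note: the final Dense(30) layer uses a ReLU activation, which works here only because keypoint coordinates are non-negative; a linear output is the more common choice for regression. A minimal sketch of that variant (not what was trained below):

# Hypothetical alternative output layer with the default linear activation
X = Dense(30)(X)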

COMPILING AND TRAINING DEEP LEARNING MODEL

In [32]:
adam = tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999, amsgrad=False)
model.compile(loss="mean_squared_error", optimizer=adam, metrics=['accuracy'])
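Since this is a coordinate-regression task, the 'accuracy' metric above is Keras' classification accuracy and is only loosely informative; mean absolute error is a common alternative for monitoring regression. A sketch (not the configuration that produced the logs below):

# Track mean absolute error in pixels instead of classification accuracy
model.compile(loss="mean_squared_error", optimizer=adam, metrics=['mae'])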
In [33]:
# Save the model whenever validation loss reaches a new minimum
checkpointer = ModelCheckpoint(filepath="model.h5", verbose=2, save_best_only=True)
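EarlyStopping is imported above but never used; a sketch of wiring it in alongside the checkpoint callback (the patience value is an assumption):

# Stop training once validation loss stops improving for 20 epochs
earlystopper = EarlyStopping(monitor='val_loss', patience=20, verbose=1)
# ...and pass it together with the checkpointer:
# callbacks=[checkpointer, earlystopper]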
In [34]:
history = model.fit(X_train, y_train, batch_size = 256, epochs= 150, validation_split = 0.05, callbacks=[checkpointer])
Epoch 1/150
22/22 [==============================] - ETA: 0s - loss: 357.5295 - accuracy: 0.3678
Epoch 00001: val_loss improved from inf to 2094.93408, saving model to model.h5
22/22 [==============================] - 4s 173ms/step - loss: 357.5295 - accuracy: 0.3678 - val_loss: 2094.9341 - val_accuracy: 0.7093
Epoch 2/150
21/22 [===========================>..] - ETA: 0s - loss: 131.7025 - accuracy: 0.5839
Epoch 00002: val_loss improved from 2094.93408 to 1713.58386, saving model to model.h5
22/22 [==============================] - 3s 144ms/step - loss: 130.8573 - accuracy: 0.5868 - val_loss: 1713.5839 - val_accuracy: 0.7093
Epoch 3/150
21/22 [===========================>..] - ETA: 0s - loss: 85.2388 - accuracy: 0.6025
Epoch 00003: val_loss improved from 1713.58386 to 1453.25134, saving model to model.h5
22/22 [==============================] - 3s 152ms/step - loss: 84.9415 - accuracy: 0.6038 - val_loss: 1453.2513 - val_accuracy: 0.7093
Epoch 4/150
21/22 [===========================>..] - ETA: 0s - loss: 60.6461 - accuracy: 0.5898
Epoch 00004: val_loss improved from 1453.25134 to 1170.63367, saving model to model.h5
22/22 [==============================] - 3s 144ms/step - loss: 60.8961 - accuracy: 0.5903 - val_loss: 1170.6337 - val_accuracy: 0.7093
Epoch 5/150
21/22 [===========================>..] - ETA: 0s - loss: 48.9917 - accuracy: 0.5973
Epoch 00005: val_loss improved from 1170.63367 to 1044.39941, saving model to model.h5
22/22 [==============================] - 3s 145ms/step - loss: 48.6556 - accuracy: 0.5985 - val_loss: 1044.3994 - val_accuracy: 0.7093
Epoch 6/150
21/22 [===========================>..] - ETA: 0s - loss: 35.4813 - accuracy: 0.5858
Epoch 00006: val_loss improved from 1044.39941 to 789.17914, saving model to model.h5
22/22 [==============================] - 3s 144ms/step - loss: 35.3362 - accuracy: 0.5844 - val_loss: 789.1791 - val_accuracy: 0.7093
Epoch 7/150
21/22 [===========================>..] - ETA: 0s - loss: 34.6314 - accuracy: 0.5936
Epoch 00007: val_loss did not improve from 789.17914
22/22 [==============================] - 2s 91ms/step - loss: 34.8018 - accuracy: 0.5954 - val_loss: 812.9343 - val_accuracy: 0.7093
Epoch 8/150
21/22 [===========================>..] - ETA: 0s - loss: 37.7715 - accuracy: 0.6023
Epoch 00008: val_loss improved from 789.17914 to 741.93884, saving model to model.h5
22/22 [==============================] - 3s 145ms/step - loss: 37.6034 - accuracy: 0.6028 - val_loss: 741.9388 - val_accuracy: 0.7093
Epoch 9/150
21/22 [===========================>..] - ETA: 0s - loss: 27.8083 - accuracy: 0.6006
Epoch 00009: val_loss improved from 741.93884 to 507.27130, saving model to model.h5
22/22 [==============================] - 3s 145ms/step - loss: 27.8519 - accuracy: 0.6003 - val_loss: 507.2713 - val_accuracy: 0.7093
Epoch 10/150
21/22 [===========================>..] - ETA: 0s - loss: 35.7137 - accuracy: 0.6170
Epoch 00010: val_loss did not improve from 507.27130
22/22 [==============================] - 2s 92ms/step - loss: 35.8450 - accuracy: 0.6172 - val_loss: 527.4161 - val_accuracy: 0.7093
Epoch 11/150
21/22 [===========================>..] - ETA: 0s - loss: 28.8957 - accuracy: 0.6099
Epoch 00011: val_loss improved from 507.27130 to 448.79358, saving model to model.h5
22/22 [==============================] - 3s 145ms/step - loss: 28.7702 - accuracy: 0.6114 - val_loss: 448.7936 - val_accuracy: 0.7093
Epoch 12/150
21/22 [===========================>..] - ETA: 0s - loss: 20.8968 - accuracy: 0.6239
Epoch 00012: val_loss improved from 448.79358 to 388.85056, saving model to model.h5
22/22 [==============================] - 3s 145ms/step - loss: 20.8974 - accuracy: 0.6238 - val_loss: 388.8506 - val_accuracy: 0.7093
Epoch 13/150
21/22 [===========================>..] - ETA: 0s - loss: 19.5484 - accuracy: 0.6280
Epoch 00013: val_loss improved from 388.85056 to 339.96603, saving model to model.h5
22/22 [==============================] - 3s 145ms/step - loss: 19.5149 - accuracy: 0.6282 - val_loss: 339.9660 - val_accuracy: 0.7093
Epoch 14/150
21/22 [===========================>..] - ETA: 0s - loss: 19.0366 - accuracy: 0.6300
Epoch 00014: val_loss did not improve from 339.96603
22/22 [==============================] - 2s 92ms/step - loss: 18.9737 - accuracy: 0.6276 - val_loss: 386.5602 - val_accuracy: 0.7093
Epoch 15/150
21/22 [===========================>..] - ETA: 0s - loss: 20.0185 - accuracy: 0.6166
Epoch 00015: val_loss improved from 339.96603 to 275.77808, saving model to model.h5
22/22 [==============================] - 3s 145ms/step - loss: 19.8756 - accuracy: 0.6180 - val_loss: 275.7781 - val_accuracy: 0.7093
Epoch 16/150
21/22 [===========================>..] - ETA: 0s - loss: 18.2180 - accuracy: 0.6272
Epoch 00016: val_loss did not improve from 275.77808
22/22 [==============================] - 2s 92ms/step - loss: 18.2335 - accuracy: 0.6293 - val_loss: 305.0370 - val_accuracy: 0.7093
Epoch 17/150
21/22 [===========================>..] - ETA: 0s - loss: 14.8096 - accuracy: 0.6469
Epoch 00017: val_loss improved from 275.77808 to 245.33682, saving model to model.h5
22/22 [==============================] - 3s 145ms/step - loss: 14.9583 - accuracy: 0.6480 - val_loss: 245.3368 - val_accuracy: 0.7093
Epoch 18/150
21/22 [===========================>..] - ETA: 0s - loss: 14.7141 - accuracy: 0.6536
Epoch 00018: val_loss improved from 245.33682 to 238.95628, saving model to model.h5
22/22 [==============================] - 3s 145ms/step - loss: 14.7201 - accuracy: 0.6526 - val_loss: 238.9563 - val_accuracy: 0.7093
Epoch 19/150
21/22 [===========================>..] - ETA: 0s - loss: 16.1355 - accuracy: 0.6496
Epoch 00019: val_loss improved from 238.95628 to 194.00211, saving model to model.h5
22/22 [==============================] - 3s 146ms/step - loss: 16.1378 - accuracy: 0.6509 - val_loss: 194.0021 - val_accuracy: 0.7059
Epoch 20/150
21/22 [===========================>..] - ETA: 0s - loss: 18.4278 - accuracy: 0.6447
Epoch 00020: val_loss improved from 194.00211 to 116.58718, saving model to model.h5
22/22 [==============================] - 3s 146ms/step - loss: 18.4233 - accuracy: 0.6444 - val_loss: 116.5872 - val_accuracy: 0.7059
Epoch 21/150
21/22 [===========================>..] - ETA: 0s - loss: 16.3297 - accuracy: 0.6492
Epoch 00021: val_loss did not improve from 116.58718
22/22 [==============================] - 2s 93ms/step - loss: 16.2436 - accuracy: 0.6486 - val_loss: 136.0849 - val_accuracy: 0.7024
Epoch 22/150
21/22 [===========================>..] - ETA: 0s - loss: 14.1586 - accuracy: 0.6600
Epoch 00022: val_loss did not improve from 116.58718
22/22 [==============================] - 2s 93ms/step - loss: 14.1234 - accuracy: 0.6611 - val_loss: 121.8176 - val_accuracy: 0.7093
Epoch 23/150
21/22 [===========================>..] - ETA: 0s - loss: 11.4602 - accuracy: 0.6628
Epoch 00023: val_loss did not improve from 116.58718
22/22 [==============================] - 2s 93ms/step - loss: 11.4682 - accuracy: 0.6644 - val_loss: 158.5277 - val_accuracy: 0.7093
Epoch 24/150
21/22 [===========================>..] - ETA: 0s - loss: 13.7885 - accuracy: 0.6702
Epoch 00024: val_loss improved from 116.58718 to 114.72395, saving model to model.h5
22/22 [==============================] - 3s 146ms/step - loss: 13.8257 - accuracy: 0.6715 - val_loss: 114.7240 - val_accuracy: 0.7128
Epoch 25/150
21/22 [===========================>..] - ETA: 0s - loss: 20.2971 - accuracy: 0.6721
Epoch 00025: val_loss improved from 114.72395 to 90.22777, saving model to model.h5
22/22 [==============================] - 3s 146ms/step - loss: 20.2502 - accuracy: 0.6723 - val_loss: 90.2278 - val_accuracy: 0.6990
Epoch 26/150
21/22 [===========================>..] - ETA: 0s - loss: 16.0479 - accuracy: 0.6559
Epoch 00026: val_loss did not improve from 90.22777
22/22 [==============================] - 2s 93ms/step - loss: 16.0346 - accuracy: 0.6562 - val_loss: 101.0456 - val_accuracy: 0.7093
Epoch 27/150
21/22 [===========================>..] - ETA: 0s - loss: 20.5429 - accuracy: 0.6721
Epoch 00027: val_loss did not improve from 90.22777
22/22 [==============================] - 2s 93ms/step - loss: 20.4344 - accuracy: 0.6713 - val_loss: 96.2486 - val_accuracy: 0.7093
Epoch 28/150
21/22 [===========================>..] - ETA: 0s - loss: 17.0525 - accuracy: 0.6702
Epoch 00028: val_loss improved from 90.22777 to 83.16257, saving model to model.h5
22/22 [==============================] - 3s 147ms/step - loss: 17.0604 - accuracy: 0.6681 - val_loss: 83.1626 - val_accuracy: 0.6747
Epoch 29/150
21/22 [===========================>..] - ETA: 0s - loss: 17.7485 - accuracy: 0.6685
Epoch 00029: val_loss improved from 83.16257 to 71.55456, saving model to model.h5
22/22 [==============================] - 3s 147ms/step - loss: 17.6919 - accuracy: 0.6688 - val_loss: 71.5546 - val_accuracy: 0.7059
Epoch 30/150
21/22 [===========================>..] - ETA: 0s - loss: 14.2828 - accuracy: 0.6680
Epoch 00030: val_loss improved from 71.55456 to 61.61396, saving model to model.h5
22/22 [==============================] - 3s 146ms/step - loss: 14.1945 - accuracy: 0.6692 - val_loss: 61.6140 - val_accuracy: 0.6678
Epoch 31/150
21/22 [===========================>..] - ETA: 0s - loss: 13.1194 - accuracy: 0.6830
Epoch 00031: val_loss did not improve from 61.61396
22/22 [==============================] - 2s 94ms/step - loss: 13.0309 - accuracy: 0.6835 - val_loss: 63.0221 - val_accuracy: 0.7301
Epoch 32/150
21/22 [===========================>..] - ETA: 0s - loss: 15.8610 - accuracy: 0.6860
Epoch 00032: val_loss did not improve from 61.61396
22/22 [==============================] - 2s 94ms/step - loss: 15.8934 - accuracy: 0.6872 - val_loss: 74.5240 - val_accuracy: 0.7439
Epoch 33/150
21/22 [===========================>..] - ETA: 0s - loss: 11.3318 - accuracy: 0.7013
Epoch 00033: val_loss improved from 61.61396 to 54.48329, saving model to model.h5
22/22 [==============================] - 3s 147ms/step - loss: 11.3006 - accuracy: 0.6972 - val_loss: 54.4833 - val_accuracy: 0.7093
Epoch 34/150
21/22 [===========================>..] - ETA: 0s - loss: 10.1293 - accuracy: 0.6935
Epoch 00034: val_loss did not improve from 54.48329
22/22 [==============================] - 2s 94ms/step - loss: 10.2136 - accuracy: 0.6943 - val_loss: 61.0518 - val_accuracy: 0.7163
Epoch 35/150
21/22 [===========================>..] - ETA: 0s - loss: 9.7849 - accuracy: 0.6961
Epoch 00035: val_loss did not improve from 54.48329
22/22 [==============================] - 2s 94ms/step - loss: 9.8074 - accuracy: 0.6941 - val_loss: 59.0234 - val_accuracy: 0.7405
Epoch 36/150
21/22 [===========================>..] - ETA: 0s - loss: 9.8973 - accuracy: 0.6927
Epoch 00036: val_loss did not improve from 54.48329
22/22 [==============================] - 2s 94ms/step - loss: 9.8693 - accuracy: 0.6921 - val_loss: 65.3090 - val_accuracy: 0.6782
Epoch 37/150
21/22 [===========================>..] - ETA: 0s - loss: 9.2150 - accuracy: 0.6948
Epoch 00037: val_loss did not improve from 54.48329
22/22 [==============================] - 2s 94ms/step - loss: 9.2082 - accuracy: 0.6917 - val_loss: 62.3670 - val_accuracy: 0.7336
Epoch 38/150
21/22 [===========================>..] - ETA: 0s - loss: 9.2180 - accuracy: 0.6981
Epoch 00038: val_loss did not improve from 54.48329
22/22 [==============================] - 2s 94ms/step - loss: 9.1676 - accuracy: 0.6992 - val_loss: 65.1851 - val_accuracy: 0.7439
Epoch 39/150
21/22 [===========================>..] - ETA: 0s - loss: 8.3192 - accuracy: 0.7111
Epoch 00039: val_loss did not improve from 54.48329
22/22 [==============================] - 2s 94ms/step - loss: 8.3106 - accuracy: 0.7112 - val_loss: 70.3526 - val_accuracy: 0.7439
Epoch 40/150
21/22 [===========================>..] - ETA: 0s - loss: 8.8928 - accuracy: 0.7091
Epoch 00040: val_loss did not improve from 54.48329
22/22 [==============================] - 2s 94ms/step - loss: 8.9836 - accuracy: 0.7087 - val_loss: 71.4592 - val_accuracy: 0.6955
Epoch 41/150
21/22 [===========================>..] - ETA: 0s - loss: 8.1554 - accuracy: 0.6987
Epoch 00041: val_loss did not improve from 54.48329
22/22 [==============================] - 2s 95ms/step - loss: 8.1551 - accuracy: 0.6996 - val_loss: 57.6787 - val_accuracy: 0.7439
Epoch 42/150
21/22 [===========================>..] - ETA: 0s - loss: 7.5816 - accuracy: 0.7158
Epoch 00042: val_loss did not improve from 54.48329
22/22 [==============================] - 2s 95ms/step - loss: 7.6089 - accuracy: 0.7156 - val_loss: 61.9597 - val_accuracy: 0.7439
Epoch 43/150
21/22 [===========================>..] - ETA: 0s - loss: 11.2603 - accuracy: 0.7080
Epoch 00043: val_loss did not improve from 54.48329
22/22 [==============================] - 2s 95ms/step - loss: 11.1758 - accuracy: 0.7080 - val_loss: 60.9723 - val_accuracy: 0.7405
Epoch 44/150
21/22 [===========================>..] - ETA: 0s - loss: 9.5604 - accuracy: 0.7161
Epoch 00044: val_loss did not improve from 54.48329
22/22 [==============================] - 2s 95ms/step - loss: 9.5782 - accuracy: 0.7156 - val_loss: 58.8260 - val_accuracy: 0.7439
Epoch 45/150
21/22 [===========================>..] - ETA: 0s - loss: 8.4464 - accuracy: 0.7119
Epoch 00045: val_loss did not improve from 54.48329
22/22 [==============================] - 2s 95ms/step - loss: 8.5305 - accuracy: 0.7131 - val_loss: 72.2603 - val_accuracy: 0.6886
Epoch 46/150
21/22 [===========================>..] - ETA: 0s - loss: 7.9444 - accuracy: 0.7128
Epoch 00046: val_loss did not improve from 54.48329
22/22 [==============================] - 2s 95ms/step - loss: 8.0336 - accuracy: 0.7122 - val_loss: 66.6273 - val_accuracy: 0.6955
Epoch 47/150
21/22 [===========================>..] - ETA: 0s - loss: 9.1367 - accuracy: 0.7046
Epoch 00047: val_loss did not improve from 54.48329
22/22 [==============================] - 2s 95ms/step - loss: 9.2338 - accuracy: 0.7049 - val_loss: 70.3994 - val_accuracy: 0.7059
Epoch 48/150
21/22 [===========================>..] - ETA: 0s - loss: 9.3293 - accuracy: 0.7141
Epoch 00048: val_loss improved from 54.48329 to 51.75412, saving model to model.h5
22/22 [==============================] - 3s 149ms/step - loss: 9.3018 - accuracy: 0.7134 - val_loss: 51.7541 - val_accuracy: 0.7612
Epoch 49/150
21/22 [===========================>..] - ETA: 0s - loss: 7.8363 - accuracy: 0.7260
Epoch 00049: val_loss did not improve from 51.75412
22/22 [==============================] - 2s 95ms/step - loss: 7.9505 - accuracy: 0.7262 - val_loss: 61.7648 - val_accuracy: 0.7301
Epoch 50/150
21/22 [===========================>..] - ETA: 0s - loss: 10.5468 - accuracy: 0.7013
Epoch 00050: val_loss did not improve from 51.75412
22/22 [==============================] - 2s 95ms/step - loss: 10.6065 - accuracy: 0.7014 - val_loss: 102.8455 - val_accuracy: 0.6955
Epoch 51/150
21/22 [===========================>..] - ETA: 0s - loss: 15.5953 - accuracy: 0.7150
Epoch 00051: val_loss did not improve from 51.75412
22/22 [==============================] - 2s 96ms/step - loss: 15.5749 - accuracy: 0.7149 - val_loss: 84.7554 - val_accuracy: 0.7059
Epoch 52/150
21/22 [===========================>..] - ETA: 0s - loss: 14.0613 - accuracy: 0.6968
Epoch 00052: val_loss did not improve from 51.75412
22/22 [==============================] - 2s 96ms/step - loss: 13.9979 - accuracy: 0.6954 - val_loss: 90.2912 - val_accuracy: 0.7163
Epoch 53/150
21/22 [===========================>..] - ETA: 0s - loss: 10.0535 - accuracy: 0.7035
Epoch 00053: val_loss did not improve from 51.75412
22/22 [==============================] - 2s 96ms/step - loss: 10.0492 - accuracy: 0.7032 - val_loss: 93.0329 - val_accuracy: 0.7128
Epoch 54/150
21/22 [===========================>..] - ETA: 0s - loss: 12.2099 - accuracy: 0.7150
Epoch 00054: val_loss did not improve from 51.75412
22/22 [==============================] - 2s 96ms/step - loss: 12.1370 - accuracy: 0.7149 - val_loss: 108.5787 - val_accuracy: 0.7578
Epoch 55/150
21/22 [===========================>..] - ETA: 0s - loss: 17.9728 - accuracy: 0.6966
Epoch 00055: val_loss did not improve from 51.75412
22/22 [==============================] - 2s 96ms/step - loss: 18.0491 - accuracy: 0.6974 - val_loss: 52.4301 - val_accuracy: 0.7059
Epoch 56/150
21/22 [===========================>..] - ETA: 0s - loss: 11.1632 - accuracy: 0.7089
Epoch 00056: val_loss did not improve from 51.75412
22/22 [==============================] - 2s 96ms/step - loss: 11.1645 - accuracy: 0.7103 - val_loss: 67.9459 - val_accuracy: 0.6886
Epoch 57/150
21/22 [===========================>..] - ETA: 0s - loss: 11.4941 - accuracy: 0.7093
Epoch 00057: val_loss improved from 51.75412 to 45.90251, saving model to model.h5
22/22 [==============================] - 3s 150ms/step - loss: 11.5879 - accuracy: 0.7094 - val_loss: 45.9025 - val_accuracy: 0.6367
Epoch 58/150
21/22 [===========================>..] - ETA: 0s - loss: 13.5404 - accuracy: 0.7072
Epoch 00058: val_loss did not improve from 45.90251
22/22 [==============================] - 2s 97ms/step - loss: 13.4764 - accuracy: 0.7080 - val_loss: 103.4098 - val_accuracy: 0.7163
Epoch 59/150
21/22 [===========================>..] - ETA: 0s - loss: 11.1576 - accuracy: 0.7158
Epoch 00059: val_loss did not improve from 45.90251
22/22 [==============================] - 2s 96ms/step - loss: 11.0806 - accuracy: 0.7149 - val_loss: 59.6483 - val_accuracy: 0.6817
Epoch 60/150
21/22 [===========================>..] - ETA: 0s - loss: 8.9176 - accuracy: 0.7173
Epoch 00060: val_loss did not improve from 45.90251
22/22 [==============================] - 2s 97ms/step - loss: 8.9540 - accuracy: 0.7165 - val_loss: 49.2962 - val_accuracy: 0.7509
Epoch 61/150
21/22 [===========================>..] - ETA: 0s - loss: 9.0623 - accuracy: 0.7150
Epoch 00061: val_loss did not improve from 45.90251
22/22 [==============================] - 2s 97ms/step - loss: 9.0307 - accuracy: 0.7154 - val_loss: 46.3530 - val_accuracy: 0.7336
Epoch 62/150
21/22 [===========================>..] - ETA: 0s - loss: 8.3265 - accuracy: 0.7260
Epoch 00062: val_loss did not improve from 45.90251
22/22 [==============================] - 2s 97ms/step - loss: 8.3269 - accuracy: 0.7247 - val_loss: 54.0798 - val_accuracy: 0.7439
Epoch 63/150
21/22 [===========================>..] - ETA: 0s - loss: 7.4450 - accuracy: 0.7279
Epoch 00063: val_loss did not improve from 45.90251
22/22 [==============================] - 2s 97ms/step - loss: 7.4269 - accuracy: 0.7282 - val_loss: 53.5259 - val_accuracy: 0.7578
Epoch 64/150
21/22 [===========================>..] - ETA: 0s - loss: 9.0552 - accuracy: 0.7407
Epoch 00064: val_loss improved from 45.90251 to 45.57641, saving model to model.h5
22/22 [==============================] - 3s 150ms/step - loss: 9.0093 - accuracy: 0.7408 - val_loss: 45.5764 - val_accuracy: 0.7232
Epoch 65/150
21/22 [===========================>..] - ETA: 0s - loss: 6.3530 - accuracy: 0.7308
Epoch 00065: val_loss improved from 45.57641 to 40.04814, saving model to model.h5
22/22 [==============================] - 3s 151ms/step - loss: 6.3394 - accuracy: 0.7302 - val_loss: 40.0481 - val_accuracy: 0.7336
Epoch 66/150
21/22 [===========================>..] - ETA: 0s - loss: 6.6039 - accuracy: 0.7342
Epoch 00066: val_loss improved from 40.04814 to 39.34131, saving model to model.h5
22/22 [==============================] - 3s 151ms/step - loss: 6.5988 - accuracy: 0.7342 - val_loss: 39.3413 - val_accuracy: 0.7578
Epoch 67/150
21/22 [===========================>..] - ETA: 0s - loss: 6.0320 - accuracy: 0.7427
Epoch 00067: val_loss did not improve from 39.34131
22/22 [==============================] - 2s 98ms/step - loss: 6.0273 - accuracy: 0.7439 - val_loss: 39.7711 - val_accuracy: 0.7336
Epoch 68/150
21/22 [===========================>..] - ETA: 0s - loss: 6.5923 - accuracy: 0.7414
Epoch 00068: val_loss did not improve from 39.34131
22/22 [==============================] - 2s 98ms/step - loss: 6.5701 - accuracy: 0.7409 - val_loss: 40.5805 - val_accuracy: 0.7509
Epoch 69/150
21/22 [===========================>..] - ETA: 0s - loss: 7.0616 - accuracy: 0.7453
Epoch 00069: val_loss did not improve from 39.34131
22/22 [==============================] - 2s 97ms/step - loss: 7.0622 - accuracy: 0.7446 - val_loss: 39.7283 - val_accuracy: 0.7474
Epoch 70/150
21/22 [===========================>..] - ETA: 0s - loss: 6.4951 - accuracy: 0.7442
Epoch 00070: val_loss did not improve from 39.34131
22/22 [==============================] - 2s 98ms/step - loss: 6.4652 - accuracy: 0.7457 - val_loss: 39.8445 - val_accuracy: 0.7439
Epoch 71/150
21/22 [===========================>..] - ETA: 0s - loss: 5.8252 - accuracy: 0.7377
Epoch 00071: val_loss did not improve from 39.34131
22/22 [==============================] - 2s 98ms/step - loss: 5.8190 - accuracy: 0.7378 - val_loss: 43.1530 - val_accuracy: 0.7336
Epoch 72/150
21/22 [===========================>..] - ETA: 0s - loss: 5.9116 - accuracy: 0.7468
Epoch 00072: val_loss improved from 39.34131 to 38.55899, saving model to model.h5
22/22 [==============================] - 3s 150ms/step - loss: 5.8961 - accuracy: 0.7473 - val_loss: 38.5590 - val_accuracy: 0.7439
Epoch 73/150
21/22 [===========================>..] - ETA: 0s - loss: 6.0450 - accuracy: 0.7517
Epoch 00073: val_loss did not improve from 38.55899
22/22 [==============================] - 2s 99ms/step - loss: 6.0485 - accuracy: 0.7519 - val_loss: 38.6589 - val_accuracy: 0.7336
Epoch 74/150
21/22 [===========================>..] - ETA: 0s - loss: 7.1339 - accuracy: 0.7537
Epoch 00074: val_loss improved from 38.55899 to 37.90301, saving model to model.h5
22/22 [==============================] - 3s 151ms/step - loss: 7.1083 - accuracy: 0.7539 - val_loss: 37.9030 - val_accuracy: 0.7474
Epoch 75/150
21/22 [===========================>..] - ETA: 0s - loss: 5.4966 - accuracy: 0.7574
Epoch 00075: val_loss improved from 37.90301 to 37.85571, saving model to model.h5
22/22 [==============================] - 3s 150ms/step - loss: 5.4905 - accuracy: 0.7584 - val_loss: 37.8557 - val_accuracy: 0.7128
Epoch 76/150
21/22 [===========================>..] - ETA: 0s - loss: 5.2525 - accuracy: 0.7584
Epoch 00076: val_loss did not improve from 37.85571
22/22 [==============================] - 2s 99ms/step - loss: 5.2442 - accuracy: 0.7572 - val_loss: 37.8795 - val_accuracy: 0.7474
Epoch 77/150
21/22 [===========================>..] - ETA: 0s - loss: 7.0426 - accuracy: 0.7604
Epoch 00077: val_loss improved from 37.85571 to 37.25816, saving model to model.h5
22/22 [==============================] - 3s 152ms/step - loss: 7.0099 - accuracy: 0.7602 - val_loss: 37.2582 - val_accuracy: 0.7474
Epoch 78/150
21/22 [===========================>..] - ETA: 0s - loss: 5.6777 - accuracy: 0.7597
Epoch 00078: val_loss did not improve from 37.25816
22/22 [==============================] - 2s 100ms/step - loss: 5.7570 - accuracy: 0.7579 - val_loss: 40.3644 - val_accuracy: 0.7370
Epoch 79/150
21/22 [===========================>..] - ETA: 0s - loss: 8.8845 - accuracy: 0.7589
Epoch 00079: val_loss did not improve from 37.25816
22/22 [==============================] - 2s 99ms/step - loss: 8.9188 - accuracy: 0.7604 - val_loss: 38.0262 - val_accuracy: 0.7336
Epoch 80/150
21/22 [===========================>..] - ETA: 0s - loss: 7.0002 - accuracy: 0.7602
Epoch 00080: val_loss improved from 37.25816 to 34.09328, saving model to model.h5
22/22 [==============================] - 3s 152ms/step - loss: 6.9949 - accuracy: 0.7601 - val_loss: 34.0933 - val_accuracy: 0.7266
Epoch 81/150
22/22 [==============================] - ETA: 0s - loss: 6.6883 - accuracy: 0.7573
Epoch 00081: val_loss improved from 34.09328 to 32.28912, saving model to model.h5
22/22 [==============================] - 3s 153ms/step - loss: 6.6883 - accuracy: 0.7573 - val_loss: 32.2891 - val_accuracy: 0.7024
Epoch 82/150
21/22 [===========================>..] - ETA: 0s - loss: 5.0980 - accuracy: 0.7615
Epoch 00082: val_loss did not improve from 32.28912
22/22 [==============================] - 2s 99ms/step - loss: 5.1124 - accuracy: 0.7621 - val_loss: 33.6253 - val_accuracy: 0.7128
Epoch 83/150
21/22 [===========================>..] - ETA: 0s - loss: 4.9478 - accuracy: 0.7621
Epoch 00083: val_loss did not improve from 32.28912
22/22 [==============================] - 2s 100ms/step - loss: 4.9227 - accuracy: 0.7626 - val_loss: 32.9145 - val_accuracy: 0.7197
Epoch 84/150
21/22 [===========================>..] - ETA: 0s - loss: 4.9611 - accuracy: 0.7612
Epoch 00084: val_loss did not improve from 32.28912
22/22 [==============================] - 2s 99ms/step - loss: 5.0305 - accuracy: 0.7613 - val_loss: 35.0501 - val_accuracy: 0.7197
Epoch 85/150
22/22 [==============================] - ETA: 0s - loss: 5.5805 - accuracy: 0.7646
Epoch 00085: val_loss did not improve from 32.28912
22/22 [==============================] - 2s 100ms/step - loss: 5.5805 - accuracy: 0.7646 - val_loss: 33.2898 - val_accuracy: 0.7439
Epoch 86/150
21/22 [===========================>..] - ETA: 0s - loss: 4.9042 - accuracy: 0.7654
Epoch 00086: val_loss improved from 32.28912 to 31.88445, saving model to model.h5
22/22 [==============================] - 3s 153ms/step - loss: 4.9090 - accuracy: 0.7650 - val_loss: 31.8845 - val_accuracy: 0.7370
Epoch 87/150
21/22 [===========================>..] - ETA: 0s - loss: 5.1412 - accuracy: 0.7755
Epoch 00087: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 100ms/step - loss: 5.1727 - accuracy: 0.7754 - val_loss: 34.5472 - val_accuracy: 0.7232
Epoch 88/150
21/22 [===========================>..] - ETA: 0s - loss: 5.3665 - accuracy: 0.7755
Epoch 00088: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 100ms/step - loss: 5.3630 - accuracy: 0.7756 - val_loss: 35.4939 - val_accuracy: 0.7336
Epoch 89/150
21/22 [===========================>..] - ETA: 0s - loss: 6.5040 - accuracy: 0.7688
Epoch 00089: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 100ms/step - loss: 6.4596 - accuracy: 0.7692 - val_loss: 44.5236 - val_accuracy: 0.7474
Epoch 90/150
22/22 [==============================] - ETA: 0s - loss: 5.6431 - accuracy: 0.7695
Epoch 00090: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 100ms/step - loss: 5.6431 - accuracy: 0.7695 - val_loss: 37.2471 - val_accuracy: 0.7405
Epoch 91/150
21/22 [===========================>..] - ETA: 0s - loss: 4.6481 - accuracy: 0.7755
Epoch 00091: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 100ms/step - loss: 4.6531 - accuracy: 0.7750 - val_loss: 37.3738 - val_accuracy: 0.7405
Epoch 92/150
22/22 [==============================] - ETA: 0s - loss: 4.9251 - accuracy: 0.7708
Epoch 00092: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 101ms/step - loss: 4.9251 - accuracy: 0.7708 - val_loss: 36.5112 - val_accuracy: 0.7509
Epoch 93/150
21/22 [===========================>..] - ETA: 0s - loss: 5.6321 - accuracy: 0.7816
Epoch 00093: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 100ms/step - loss: 5.6450 - accuracy: 0.7805 - val_loss: 34.5860 - val_accuracy: 0.7578
Epoch 94/150
21/22 [===========================>..] - ETA: 0s - loss: 5.0143 - accuracy: 0.7844
Epoch 00094: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 101ms/step - loss: 4.9902 - accuracy: 0.7839 - val_loss: 33.6741 - val_accuracy: 0.7405
Epoch 95/150
22/22 [==============================] - ETA: 0s - loss: 5.5574 - accuracy: 0.7816
Epoch 00095: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 101ms/step - loss: 5.5574 - accuracy: 0.7816 - val_loss: 33.8081 - val_accuracy: 0.7439
Epoch 96/150
21/22 [===========================>..] - ETA: 0s - loss: 7.3038 - accuracy: 0.7751
Epoch 00096: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 101ms/step - loss: 7.3634 - accuracy: 0.7750 - val_loss: 33.5506 - val_accuracy: 0.7370
Epoch 97/150
22/22 [==============================] - ETA: 0s - loss: 5.8149 - accuracy: 0.7761
Epoch 00097: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 102ms/step - loss: 5.8149 - accuracy: 0.7761 - val_loss: 37.3558 - val_accuracy: 0.7578
Epoch 98/150
22/22 [==============================] - ETA: 0s - loss: 4.3406 - accuracy: 0.7845
Epoch 00098: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 102ms/step - loss: 4.3406 - accuracy: 0.7845 - val_loss: 34.0274 - val_accuracy: 0.7336
Epoch 99/150
22/22 [==============================] - ETA: 0s - loss: 4.8974 - accuracy: 0.7821
Epoch 00099: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 102ms/step - loss: 4.8974 - accuracy: 0.7821 - val_loss: 35.1478 - val_accuracy: 0.7405
Epoch 100/150
22/22 [==============================] - ETA: 0s - loss: 4.6562 - accuracy: 0.7812
Epoch 00100: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 102ms/step - loss: 4.6562 - accuracy: 0.7812 - val_loss: 34.8458 - val_accuracy: 0.7543
Epoch 101/150
21/22 [===========================>..] - ETA: 0s - loss: 4.2212 - accuracy: 0.7779
Epoch 00101: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 102ms/step - loss: 4.2126 - accuracy: 0.7781 - val_loss: 34.1711 - val_accuracy: 0.7266
Epoch 102/150
22/22 [==============================] - ETA: 0s - loss: 5.8807 - accuracy: 0.7898
Epoch 00102: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 5.8807 - accuracy: 0.7898 - val_loss: 34.8065 - val_accuracy: 0.7336
Epoch 103/150
22/22 [==============================] - ETA: 0s - loss: 5.0724 - accuracy: 0.7896
Epoch 00103: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 5.0724 - accuracy: 0.7896 - val_loss: 35.2939 - val_accuracy: 0.7474
Epoch 104/150
22/22 [==============================] - ETA: 0s - loss: 4.7763 - accuracy: 0.7927
Epoch 00104: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 4.7763 - accuracy: 0.7927 - val_loss: 35.4845 - val_accuracy: 0.7716
Epoch 105/150
22/22 [==============================] - ETA: 0s - loss: 4.1106 - accuracy: 0.7901
Epoch 00105: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 4.1106 - accuracy: 0.7901 - val_loss: 34.0769 - val_accuracy: 0.7474
Epoch 106/150
22/22 [==============================] - ETA: 0s - loss: 4.2416 - accuracy: 0.7949
Epoch 00106: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 4.2416 - accuracy: 0.7949 - val_loss: 34.1470 - val_accuracy: 0.7405
Epoch 107/150
22/22 [==============================] - ETA: 0s - loss: 4.4800 - accuracy: 0.8023
Epoch 00107: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 4.4800 - accuracy: 0.8023 - val_loss: 35.7410 - val_accuracy: 0.7612
Epoch 108/150
21/22 [===========================>..] - ETA: 0s - loss: 8.1238 - accuracy: 0.7889
Epoch 00108: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 8.3062 - accuracy: 0.7879 - val_loss: 42.5295 - val_accuracy: 0.7439
Epoch 109/150
22/22 [==============================] - ETA: 0s - loss: 7.4467 - accuracy: 0.7890
Epoch 00109: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 7.4467 - accuracy: 0.7890 - val_loss: 33.0776 - val_accuracy: 0.7301
Epoch 110/150
21/22 [===========================>..] - ETA: 0s - loss: 4.1594 - accuracy: 0.7924
Epoch 00110: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 4.1589 - accuracy: 0.7936 - val_loss: 33.7991 - val_accuracy: 0.7439
Epoch 111/150
21/22 [===========================>..] - ETA: 0s - loss: 4.0525 - accuracy: 0.7987
Epoch 00111: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 102ms/step - loss: 4.0599 - accuracy: 0.7994 - val_loss: 34.1332 - val_accuracy: 0.7751
Epoch 112/150
22/22 [==============================] - ETA: 0s - loss: 4.1273 - accuracy: 0.8027
Epoch 00112: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 4.1273 - accuracy: 0.8027 - val_loss: 34.0261 - val_accuracy: 0.7682
Epoch 113/150
21/22 [===========================>..] - ETA: 0s - loss: 4.8814 - accuracy: 0.7995
Epoch 00113: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 102ms/step - loss: 4.8739 - accuracy: 0.7985 - val_loss: 33.7386 - val_accuracy: 0.7612
Epoch 114/150
22/22 [==============================] - ETA: 0s - loss: 4.0820 - accuracy: 0.7952
Epoch 00114: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 102ms/step - loss: 4.0820 - accuracy: 0.7952 - val_loss: 34.4375 - val_accuracy: 0.7855
Epoch 115/150
22/22 [==============================] - ETA: 0s - loss: 4.0634 - accuracy: 0.7972
Epoch 00115: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 102ms/step - loss: 4.0634 - accuracy: 0.7972 - val_loss: 35.5764 - val_accuracy: 0.7647
Epoch 116/150
22/22 [==============================] - ETA: 0s - loss: 4.3855 - accuracy: 0.8016
Epoch 00116: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 102ms/step - loss: 4.3855 - accuracy: 0.8016 - val_loss: 35.0817 - val_accuracy: 0.7578
Epoch 117/150
22/22 [==============================] - ETA: 0s - loss: 4.0714 - accuracy: 0.8025
Epoch 00117: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 102ms/step - loss: 4.0714 - accuracy: 0.8025 - val_loss: 34.7676 - val_accuracy: 0.7578
Epoch 118/150
22/22 [==============================] - ETA: 0s - loss: 4.0488 - accuracy: 0.8011
Epoch 00118: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 101ms/step - loss: 4.0488 - accuracy: 0.8011 - val_loss: 34.8614 - val_accuracy: 0.7474
Epoch 119/150
22/22 [==============================] - ETA: 0s - loss: 4.5672 - accuracy: 0.8012
Epoch 00119: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 102ms/step - loss: 4.5672 - accuracy: 0.8012 - val_loss: 37.8399 - val_accuracy: 0.7509
Epoch 120/150
22/22 [==============================] - ETA: 0s - loss: 4.3775 - accuracy: 0.8085
Epoch 00120: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 102ms/step - loss: 4.3775 - accuracy: 0.8085 - val_loss: 35.0179 - val_accuracy: 0.7543
Epoch 121/150
22/22 [==============================] - ETA: 0s - loss: 3.9783 - accuracy: 0.8098
Epoch 00121: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 102ms/step - loss: 3.9783 - accuracy: 0.8098 - val_loss: 38.1927 - val_accuracy: 0.7578
Epoch 122/150
22/22 [==============================] - ETA: 0s - loss: 4.4661 - accuracy: 0.8098
Epoch 00122: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 102ms/step - loss: 4.4661 - accuracy: 0.8098 - val_loss: 35.5008 - val_accuracy: 0.7612
Epoch 123/150
22/22 [==============================] - ETA: 0s - loss: 5.5195 - accuracy: 0.7994
Epoch 00123: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 102ms/step - loss: 5.5195 - accuracy: 0.7994 - val_loss: 35.1165 - val_accuracy: 0.7820
Epoch 124/150
22/22 [==============================] - ETA: 0s - loss: 4.6628 - accuracy: 0.8083
Epoch 00124: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 4.6628 - accuracy: 0.8083 - val_loss: 36.9616 - val_accuracy: 0.7370
Epoch 125/150
22/22 [==============================] - ETA: 0s - loss: 5.2181 - accuracy: 0.8127
Epoch 00125: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 102ms/step - loss: 5.2181 - accuracy: 0.8127 - val_loss: 33.7105 - val_accuracy: 0.7612
Epoch 126/150
22/22 [==============================] - ETA: 0s - loss: 4.2685 - accuracy: 0.8109
Epoch 00126: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 4.2685 - accuracy: 0.8109 - val_loss: 32.8922 - val_accuracy: 0.7612
Epoch 127/150
21/22 [===========================>..] - ETA: 0s - loss: 6.8652 - accuracy: 0.8093
Epoch 00127: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 6.9128 - accuracy: 0.8103 - val_loss: 35.1078 - val_accuracy: 0.7647
Epoch 128/150
22/22 [==============================] - ETA: 0s - loss: 5.0185 - accuracy: 0.8087
Epoch 00128: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 5.0185 - accuracy: 0.8087 - val_loss: 33.5624 - val_accuracy: 0.7578
Epoch 129/150
22/22 [==============================] - ETA: 0s - loss: 4.0370 - accuracy: 0.8140
Epoch 00129: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 4.0370 - accuracy: 0.8140 - val_loss: 35.4952 - val_accuracy: 0.7543
Epoch 130/150
22/22 [==============================] - ETA: 0s - loss: 3.6958 - accuracy: 0.8114
Epoch 00130: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 3.6958 - accuracy: 0.8114 - val_loss: 36.3771 - val_accuracy: 0.7543
Epoch 131/150
22/22 [==============================] - ETA: 0s - loss: 3.7616 - accuracy: 0.8173
Epoch 00131: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 3.7616 - accuracy: 0.8173 - val_loss: 35.1223 - val_accuracy: 0.7820
Epoch 132/150
22/22 [==============================] - ETA: 0s - loss: 4.9449 - accuracy: 0.8122
Epoch 00132: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 4.9449 - accuracy: 0.8122 - val_loss: 41.3540 - val_accuracy: 0.7785
Epoch 133/150
21/22 [===========================>..] - ETA: 0s - loss: 5.8554 - accuracy: 0.8138
Epoch 00133: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 5.7997 - accuracy: 0.8151 - val_loss: 39.1800 - val_accuracy: 0.7751
Epoch 134/150
22/22 [==============================] - ETA: 0s - loss: 4.7116 - accuracy: 0.8176
Epoch 00134: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 4.7116 - accuracy: 0.8176 - val_loss: 38.5128 - val_accuracy: 0.7612
Epoch 135/150
22/22 [==============================] - ETA: 0s - loss: 3.7435 - accuracy: 0.8107
Epoch 00135: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 3.7435 - accuracy: 0.8107 - val_loss: 33.4571 - val_accuracy: 0.7612
Epoch 136/150
22/22 [==============================] - ETA: 0s - loss: 3.7809 - accuracy: 0.8142
Epoch 00136: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 3.7809 - accuracy: 0.8142 - val_loss: 34.4994 - val_accuracy: 0.7855
Epoch 137/150
22/22 [==============================] - ETA: 0s - loss: 3.8252 - accuracy: 0.8227
Epoch 00137: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 3.8252 - accuracy: 0.8227 - val_loss: 35.1754 - val_accuracy: 0.7785
Epoch 138/150
22/22 [==============================] - ETA: 0s - loss: 3.7222 - accuracy: 0.8211
Epoch 00138: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 3.7222 - accuracy: 0.8211 - val_loss: 35.6264 - val_accuracy: 0.7647
Epoch 139/150
22/22 [==============================] - ETA: 0s - loss: 5.3597 - accuracy: 0.8093
Epoch 00139: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 5.3597 - accuracy: 0.8093 - val_loss: 42.4477 - val_accuracy: 0.8097
Epoch 140/150
22/22 [==============================] - ETA: 0s - loss: 4.1812 - accuracy: 0.8317
Epoch 00140: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 4.1812 - accuracy: 0.8317 - val_loss: 38.0235 - val_accuracy: 0.7716
Epoch 141/150
22/22 [==============================] - ETA: 0s - loss: 3.5505 - accuracy: 0.8233
Epoch 00141: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 3.5505 - accuracy: 0.8233 - val_loss: 33.8001 - val_accuracy: 0.7682
Epoch 142/150
22/22 [==============================] - ETA: 0s - loss: 3.6546 - accuracy: 0.8189
Epoch 00142: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 3.6546 - accuracy: 0.8189 - val_loss: 36.6405 - val_accuracy: 0.7716
Epoch 143/150
22/22 [==============================] - ETA: 0s - loss: 3.6080 - accuracy: 0.8193
Epoch 00143: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 3.6080 - accuracy: 0.8193 - val_loss: 34.8332 - val_accuracy: 0.7785
Epoch 144/150
22/22 [==============================] - ETA: 0s - loss: 3.7217 - accuracy: 0.8266
Epoch 00144: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 3.7217 - accuracy: 0.8266 - val_loss: 34.6043 - val_accuracy: 0.7785
Epoch 145/150
22/22 [==============================] - ETA: 0s - loss: 3.4550 - accuracy: 0.8251
Epoch 00145: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 3.4550 - accuracy: 0.8251 - val_loss: 37.2645 - val_accuracy: 0.7612
Epoch 146/150
21/22 [===========================>..] - ETA: 0s - loss: 3.9242 - accuracy: 0.8305
Epoch 00146: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 3.9155 - accuracy: 0.8300 - val_loss: 34.9711 - val_accuracy: 0.7855
Epoch 147/150
22/22 [==============================] - ETA: 0s - loss: 3.8879 - accuracy: 0.8286
Epoch 00147: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 3.8879 - accuracy: 0.8286 - val_loss: 39.6485 - val_accuracy: 0.7820
Epoch 148/150
22/22 [==============================] - ETA: 0s - loss: 5.0689 - accuracy: 0.8249
Epoch 00148: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 5.0689 - accuracy: 0.8249 - val_loss: 34.0616 - val_accuracy: 0.7855
Epoch 149/150
22/22 [==============================] - ETA: 0s - loss: 4.4857 - accuracy: 0.8182
Epoch 00149: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 4.4857 - accuracy: 0.8182 - val_loss: 35.3052 - val_accuracy: 0.7993
Epoch 150/150
22/22 [==============================] - ETA: 0s - loss: 3.9612 - accuracy: 0.8280
Epoch 00150: val_loss did not improve from 31.88445
22/22 [==============================] - 2s 103ms/step - loss: 3.9612 - accuracy: 0.8280 - val_loss: 35.0453 - val_accuracy: 0.7785

ASSESSING TRAINED MODEL PERFORMANCE

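Note that training ran for all 150 epochs, but the ModelCheckpoint callback kept only the weights with the lowest validation loss (31.88, reached at epoch 86) in model.h5, so the file reloaded below holds the best checkpoint rather than the final-epoch weights.
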
In [35]:
# Loading trained model

new_model = tf.keras.models.load_model('model.h5')

new_model.summary()
Model: "functional_1"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(None, 96, 96, 1)]  0                                            
__________________________________________________________________________________________________
zero_padding2d (ZeroPadding2D)  (None, 102, 102, 1)  0           input_1[0][0]                    
__________________________________________________________________________________________________
conv1 (Conv2D)                  (None, 48, 48, 64)   3200        zero_padding2d[0][0]             
__________________________________________________________________________________________________
bn_conv1 (BatchNormalization)   (None, 48, 48, 64)   256         conv1[0][0]                      
__________________________________________________________________________________________________
activation (Activation)         (None, 48, 48, 64)   0           bn_conv1[0][0]                   
__________________________________________________________________________________________________
max_pooling2d (MaxPooling2D)    (None, 23, 23, 64)   0           activation[0][0]                 
__________________________________________________________________________________________________
res_2_conv_a (Conv2D)           (None, 23, 23, 64)   4160        max_pooling2d[0][0]              
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D)  (None, 11, 11, 64)   0           res_2_conv_a[0][0]               
__________________________________________________________________________________________________
bn_2_conv_a (BatchNormalization (None, 11, 11, 64)   256         max_pooling2d_1[0][0]            
__________________________________________________________________________________________________
activation_1 (Activation)       (None, 11, 11, 64)   0           bn_2_conv_a[0][0]                
__________________________________________________________________________________________________
res_2_conv_b (Conv2D)           (None, 11, 11, 64)   36928       activation_1[0][0]               
__________________________________________________________________________________________________
bn_2_conv_b (BatchNormalization (None, 11, 11, 64)   256         res_2_conv_b[0][0]               
__________________________________________________________________________________________________
activation_2 (Activation)       (None, 11, 11, 64)   0           bn_2_conv_b[0][0]                
__________________________________________________________________________________________________
res_2_conv_copy (Conv2D)        (None, 23, 23, 256)  16640       max_pooling2d[0][0]              
__________________________________________________________________________________________________
res_2_conv_c (Conv2D)           (None, 11, 11, 256)  16640       activation_2[0][0]               
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D)  (None, 11, 11, 256)  0           res_2_conv_copy[0][0]            
__________________________________________________________________________________________________
bn_2_conv_c (BatchNormalization (None, 11, 11, 256)  1024        res_2_conv_c[0][0]               
__________________________________________________________________________________________________
bn_2_conv_copy (BatchNormalizat (None, 11, 11, 256)  1024        max_pooling2d_2[0][0]            
__________________________________________________________________________________________________
add (Add)                       (None, 11, 11, 256)  0           bn_2_conv_c[0][0]                
                                                                 bn_2_conv_copy[0][0]             
__________________________________________________________________________________________________
activation_3 (Activation)       (None, 11, 11, 256)  0           add[0][0]                        
__________________________________________________________________________________________________
res_2_identity_1_a (Conv2D)     (None, 11, 11, 64)   16448       activation_3[0][0]               
__________________________________________________________________________________________________
bn_2_identity_1_a (BatchNormali (None, 11, 11, 64)   256         res_2_identity_1_a[0][0]         
__________________________________________________________________________________________________
activation_4 (Activation)       (None, 11, 11, 64)   0           bn_2_identity_1_a[0][0]          
__________________________________________________________________________________________________
res_2_identity_1_b (Conv2D)     (None, 11, 11, 64)   36928       activation_4[0][0]               
__________________________________________________________________________________________________
bn_2_identity_1_b (BatchNormali (None, 11, 11, 64)   256         res_2_identity_1_b[0][0]         
__________________________________________________________________________________________________
activation_5 (Activation)       (None, 11, 11, 64)   0           bn_2_identity_1_b[0][0]          
__________________________________________________________________________________________________
res_2_identity_1_c (Conv2D)     (None, 11, 11, 256)  16640       activation_5[0][0]               
__________________________________________________________________________________________________
bn_2_identity_1_c (BatchNormali (None, 11, 11, 256)  1024        res_2_identity_1_c[0][0]         
__________________________________________________________________________________________________
add_1 (Add)                     (None, 11, 11, 256)  0           bn_2_identity_1_c[0][0]          
                                                                 activation_3[0][0]               
__________________________________________________________________________________________________
activation_6 (Activation)       (None, 11, 11, 256)  0           add_1[0][0]                      
__________________________________________________________________________________________________
res_2_identity_2_a (Conv2D)     (None, 11, 11, 64)   16448       activation_6[0][0]               
__________________________________________________________________________________________________
bn_2_identity_2_a (BatchNormali (None, 11, 11, 64)   256         res_2_identity_2_a[0][0]         
__________________________________________________________________________________________________
activation_7 (Activation)       (None, 11, 11, 64)   0           bn_2_identity_2_a[0][0]          
__________________________________________________________________________________________________
res_2_identity_2_b (Conv2D)     (None, 11, 11, 64)   36928       activation_7[0][0]               
__________________________________________________________________________________________________
bn_2_identity_2_b (BatchNormali (None, 11, 11, 64)   256         res_2_identity_2_b[0][0]         
__________________________________________________________________________________________________
activation_8 (Activation)       (None, 11, 11, 64)   0           bn_2_identity_2_b[0][0]          
__________________________________________________________________________________________________
res_2_identity_2_c (Conv2D)     (None, 11, 11, 256)  16640       activation_8[0][0]               
__________________________________________________________________________________________________
bn_2_identity_2_c (BatchNormali (None, 11, 11, 256)  1024        res_2_identity_2_c[0][0]         
__________________________________________________________________________________________________
add_2 (Add)                     (None, 11, 11, 256)  0           bn_2_identity_2_c[0][0]          
                                                                 activation_6[0][0]               
__________________________________________________________________________________________________
activation_9 (Activation)       (None, 11, 11, 256)  0           add_2[0][0]                      
__________________________________________________________________________________________________
res_3_conv_a (Conv2D)           (None, 11, 11, 128)  32896       activation_9[0][0]               
__________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D)  (None, 5, 5, 128)    0           res_3_conv_a[0][0]               
__________________________________________________________________________________________________
bn_3_conv_a (BatchNormalization (None, 5, 5, 128)    512         max_pooling2d_3[0][0]            
__________________________________________________________________________________________________
activation_10 (Activation)      (None, 5, 5, 128)    0           bn_3_conv_a[0][0]                
__________________________________________________________________________________________________
res_3_conv_b (Conv2D)           (None, 5, 5, 128)    147584      activation_10[0][0]              
__________________________________________________________________________________________________
bn_3_conv_b (BatchNormalization (None, 5, 5, 128)    512         res_3_conv_b[0][0]               
__________________________________________________________________________________________________
activation_11 (Activation)      (None, 5, 5, 128)    0           bn_3_conv_b[0][0]                
__________________________________________________________________________________________________
res_3_conv_copy (Conv2D)        (None, 11, 11, 512)  131584      activation_9[0][0]               
__________________________________________________________________________________________________
res_3_conv_c (Conv2D)           (None, 5, 5, 512)    66048       activation_11[0][0]              
__________________________________________________________________________________________________
max_pooling2d_4 (MaxPooling2D)  (None, 5, 5, 512)    0           res_3_conv_copy[0][0]            
__________________________________________________________________________________________________
bn_3_conv_c (BatchNormalization (None, 5, 5, 512)    2048        res_3_conv_c[0][0]               
__________________________________________________________________________________________________
bn_3_conv_copy (BatchNormalizat (None, 5, 5, 512)    2048        max_pooling2d_4[0][0]            
__________________________________________________________________________________________________
add_3 (Add)                     (None, 5, 5, 512)    0           bn_3_conv_c[0][0]                
                                                                 bn_3_conv_copy[0][0]             
__________________________________________________________________________________________________
activation_12 (Activation)      (None, 5, 5, 512)    0           add_3[0][0]                      
__________________________________________________________________________________________________
res_3_identity_1_a (Conv2D)     (None, 5, 5, 128)    65664       activation_12[0][0]              
__________________________________________________________________________________________________
bn_3_identity_1_a (BatchNormali (None, 5, 5, 128)    512         res_3_identity_1_a[0][0]         
__________________________________________________________________________________________________
activation_13 (Activation)      (None, 5, 5, 128)    0           bn_3_identity_1_a[0][0]          
__________________________________________________________________________________________________
res_3_identity_1_b (Conv2D)     (None, 5, 5, 128)    147584      activation_13[0][0]              
__________________________________________________________________________________________________
bn_3_identity_1_b (BatchNormali (None, 5, 5, 128)    512         res_3_identity_1_b[0][0]         
__________________________________________________________________________________________________
activation_14 (Activation)      (None, 5, 5, 128)    0           bn_3_identity_1_b[0][0]          
__________________________________________________________________________________________________
res_3_identity_1_c (Conv2D)     (None, 5, 5, 512)    66048       activation_14[0][0]              
__________________________________________________________________________________________________
bn_3_identity_1_c (BatchNormali (None, 5, 5, 512)    2048        res_3_identity_1_c[0][0]         
__________________________________________________________________________________________________
add_4 (Add)                     (None, 5, 5, 512)    0           bn_3_identity_1_c[0][0]          
                                                                 activation_12[0][0]              
__________________________________________________________________________________________________
activation_15 (Activation)      (None, 5, 5, 512)    0           add_4[0][0]                      
__________________________________________________________________________________________________
res_3_identity_2_a (Conv2D)     (None, 5, 5, 128)    65664       activation_15[0][0]              
__________________________________________________________________________________________________
bn_3_identity_2_a (BatchNormali (None, 5, 5, 128)    512         res_3_identity_2_a[0][0]         
__________________________________________________________________________________________________
activation_16 (Activation)      (None, 5, 5, 128)    0           bn_3_identity_2_a[0][0]          
__________________________________________________________________________________________________
res_3_identity_2_b (Conv2D)     (None, 5, 5, 128)    147584      activation_16[0][0]              
__________________________________________________________________________________________________
bn_3_identity_2_b (BatchNormali (None, 5, 5, 128)    512         res_3_identity_2_b[0][0]         
__________________________________________________________________________________________________
activation_17 (Activation)      (None, 5, 5, 128)    0           bn_3_identity_2_b[0][0]          
__________________________________________________________________________________________________
res_3_identity_2_c (Conv2D)     (None, 5, 5, 512)    66048       activation_17[0][0]              
__________________________________________________________________________________________________
bn_3_identity_2_c (BatchNormali (None, 5, 5, 512)    2048        res_3_identity_2_c[0][0]         
__________________________________________________________________________________________________
add_5 (Add)                     (None, 5, 5, 512)    0           bn_3_identity_2_c[0][0]          
                                                                 activation_15[0][0]              
__________________________________________________________________________________________________
activation_18 (Activation)      (None, 5, 5, 512)    0           add_5[0][0]                      
__________________________________________________________________________________________________
Averagea_Pooling (AveragePoolin (None, 2, 2, 512)    0           activation_18[0][0]              
__________________________________________________________________________________________________
flatten (Flatten)               (None, 2048)         0           Averagea_Pooling[0][0]           
__________________________________________________________________________________________________
dense (Dense)                   (None, 4096)         8392704     flatten[0][0]                    
__________________________________________________________________________________________________
dropout (Dropout)               (None, 4096)         0           dense[0][0]                      
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 2048)         8390656     dropout[0][0]                    
__________________________________________________________________________________________________
dropout_1 (Dropout)             (None, 2048)         0           dense_1[0][0]                    
__________________________________________________________________________________________________
dense_2 (Dense)                 (None, 30)           61470       dropout_1[0][0]                  
==================================================================================================
Total params: 18,016,286
Trainable params: 18,007,710
Non-trainable params: 8,576
__________________________________________________________________________________________________
In [36]:
# Getting the model history keys 
history.history.keys()
Out[36]:
dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
In [37]:
# plot the training artifacts

plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train_loss','val_loss'], loc = 'upper right')
plt.show()
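Since history also tracks the accuracy metric (see the keys above), the same recipe plots the accuracy curves; a minimal sketch, assuming the history object from the training cell is still in scope:

# Plot training vs. validation accuracy from the same History object
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train_accuracy', 'val_accuracy'], loc='lower right')
plt.show()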
In [38]:
# Make predictions on the test set
# (note: new_model, the reloaded best checkpoint, could equally be used here)
df_predict = model.predict(X_test)
In [39]:
# Compute and print the RMSE between predicted and actual keypoint coordinates

from sklearn.metrics import mean_squared_error
from math import sqrt

rms = sqrt(mean_squared_error(y_test, df_predict))
print("RMSE value : {}".format(rms))
RMSE value : 6.322913314927379
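The single RMSE figure averages over all 30 coordinates. A minimal sketch for a per-coordinate breakdown, assuming y_test and df_predict are aligned arrays and columns holds the 30 keypoint column names used in the next cell:

# Per-coordinate RMSE: which keypoints does the model localize worst?
errors = np.asarray(y_test) - np.asarray(df_predict)
per_point_rmse = np.sqrt(np.mean(errors ** 2, axis=0))
# Show the five coordinates with the largest error (in pixels)
for name, err in sorted(zip(columns, per_point_rmse), key=lambda p: p[1], reverse=True)[:5]:
    print("{:<30s} RMSE = {:.2f}".format(name, err))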
In [40]:
# Convert the predicted values into a dataframe

df_predict = pd.DataFrame(df_predict, columns=columns)
df_predict.head()
Out[40]:
left_eye_center_x left_eye_center_y right_eye_center_x right_eye_center_y left_eye_inner_corner_x left_eye_inner_corner_y left_eye_outer_corner_x left_eye_outer_corner_y right_eye_inner_corner_x right_eye_inner_corner_y ... nose_tip_x nose_tip_y mouth_left_corner_x mouth_left_corner_y mouth_right_corner_x mouth_right_corner_y mouth_center_top_lip_x mouth_center_top_lip_y mouth_center_bottom_lip_x mouth_center_bottom_lip_y
0 65.154404 36.612583 31.130142 39.964916 59.072594 37.855476 71.778587 36.915302 37.613567 40.037167 ... 51.647491 54.010353 66.692085 71.180481 37.524170 74.002769 52.150669 68.108337 52.914207 78.149536
1 60.114082 39.118500 32.973644 36.810299 55.387520 39.898491 65.595970 40.300369 39.656254 38.642666 ... 56.804279 56.438549 54.328701 75.653290 32.783192 74.189224 47.764057 73.447296 47.329479 73.812553
2 69.142952 38.618374 30.264132 35.572491 60.520973 39.552452 78.610367 39.905857 38.856220 37.723526 ... 47.938633 60.523029 68.170990 72.545197 26.584059 70.394226 47.154415 73.249489 46.840172 81.142830
3 65.802940 37.181976 28.217066 40.493561 59.020416 38.082733 73.153755 38.074074 35.435806 40.185780 ... 49.096264 50.730274 67.086937 74.375237 34.260624 77.349609 50.059605 66.967087 50.999660 83.270958
4 68.812233 26.664133 26.540789 27.582954 59.161774 28.879942 78.780045 28.023087 36.416523 28.938408 ... 47.266449 50.809135 64.781044 72.603157 32.466076 72.701363 48.005661 68.559364 48.880390 76.719803

5 rows × 30 columns

VISUALIZING THE RESULTS

In [41]:
# Plot the test images and their predicted keypoints

fig = plt.figure(figsize=(20, 20))

for i in range(8):
    ax = fig.add_subplot(4, 2, i + 1)
    # Using squeeze to convert the image shape from (96,96,1) to (96,96)
    plt.imshow(X_test[i].squeeze(), cmap='gray')
    for j in range(1, 31, 2):
        plt.plot(df_predict.loc[i][j-1], df_predict.loc[i][j], 'r.')
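For a visual sanity check it also helps to overlay the ground truth. A minimal sketch under the same assumptions (predictions in df_predict as above, and y_test an array holding the true coordinates in the same x/y column order):

# Overlay predicted keypoints (red) and ground-truth keypoints (green)
fig = plt.figure(figsize=(20, 20))
for i in range(8):
    fig.add_subplot(4, 2, i + 1)
    plt.imshow(X_test[i].squeeze(), cmap='gray')
    for j in range(1, 31, 2):
        plt.plot(df_predict.loc[i][j-1], df_predict.loc[i][j], 'r.')
        plt.plot(y_test[i][j-1], y_test[i][j], 'g.')
plt.show()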

COMPILING THE MODEL USING DEEPC

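As the log below shows, deepCC converts the saved Keras model to ONNX, generates equivalent C++ source, and compiles it with g++ into a standalone executable, which allows the network to run on edge devices without a Python runtime.
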
In [42]:
!deepCC model.h5
reading [keras model] from 'model.h5'
Saved 'model.onnx'
reading onnx model from file  model.onnx
Model info:
  ir_vesion :  5 
  doc       : 
WARN (ONNX): graph-node conv1's attribute auto_pad has no meaningful data.
WARN (ONNX): graph-node res_2_conv_b's attribute auto_pad has no meaningful data.
WARN (ONNX): graph-node res_2_identity_1_b's attribute auto_pad has no meaningful data.
WARN (ONNX): graph-node res_2_identity_2_b's attribute auto_pad has no meaningful data.
WARN (ONNX): graph-node res_3_conv_b's attribute auto_pad has no meaningful data.
WARN (ONNX): graph-node res_3_identity_1_b's attribute auto_pad has no meaningful data.
WARN (ONNX): graph-node res_3_identity_2_b's attribute auto_pad has no meaningful data.
WARN (ONNX): terminal (input/output) input_1's shape is less than 1.
             changing it to 1.
WARN (ONNX): terminal (input/output) dense_2's shape is less than 1.
             changing it to 1.
WARN (GRAPH): found operator node with the same name (dense_2) as io node.
running DNNC graph sanity check ... passed.
Writing C++ file  model_deepC/model.cpp
INFO (ONNX): model files are ready in dir model_deepC
g++ -std=c++11 -O3 -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -isystem /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 model_deepC/model.cpp -o model_deepC/model.exe
Model executable  model_deepC/model.exe

FACE RECOGNITION MODEL USING DLIB LIBRARY

DLIB is a library that contains models pre-trained to find key facial points on a very large dataset, with a more complex neural network architecture than the one built above. Because these models have already been trained extensively, they can be reused directly for a wide range of face applications; the cells below apply them to face recognition.

In [43]:
#Import dlib library
import dlib
dlib.__version__
Out[43]:
'19.21.0'
In [44]:
!wget -N "https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/dlib_face_recognition.zip"
!unzip -o dlib_face_recognition.zip
!rm dlib_face_recognition.zip
--2020-11-29 08:55:00--  https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/dlib_face_recognition.zip
Resolving cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)... 52.219.64.8
Connecting to cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)|52.219.64.8|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 27755406 (26M) [application/zip]
Saving to: ‘dlib_face_recognition.zip’

dlib_face_recogniti 100%[===================>]  26.47M  80.2MB/s    in 0.3s    

2020-11-29 08:55:01 (80.2 MB/s) - ‘dlib_face_recognition.zip’ saved [27755406/27755406]

Archive:  dlib_face_recognition.zip
  inflating: dlib_face_recognition/shape_predictor_5_face_landmarks.dat  
  inflating: dlib_face_recognition/downey2.jpg  
  inflating: dlib_face_recognition/dlib_face_recognition_resnet_model_v1.dat  
  inflating: dlib_face_recognition/downey1.jpg  
  inflating: dlib_face_recognition/chris.jpg  
In [45]:
# Initializing the dlib components used for face recognition:
# a frontal face detector, a 5-point landmark predictor, and the ResNet face descriptor model
detector = dlib.get_frontal_face_detector()
sp = dlib.shape_predictor("dlib_face_recognition/shape_predictor_5_face_landmarks.dat")
detec_model = dlib.face_recognition_model_v1("dlib_face_recognition/dlib_face_recognition_resnet_model_v1.dat")
In [46]:
#Loading images of Celebs
downey1_img = dlib.load_rgb_image("dlib_face_recognition/downey1.jpg")
downey2_img = dlib.load_rgb_image("dlib_face_recognition/downey2.jpg")
chris_img = dlib.load_rgb_image("dlib_face_recognition/chris.jpg")
In [47]:
#Detecting face using Dlib
img1_detected = detector(downey1_img)
img2_detected = detector(downey2_img)
img3_detected = detector(chris_img)
In [48]:
# Determining the key facial landmarks for each detected face
img1_shape = sp(downey1_img, img1_detected[0])
img2_shape = sp(downey2_img, img2_detected[0])
img3_shape = sp(chris_img, img3_detected[0])
In [49]:
#Aligning the face for better results using Dlib
img1_aligned = dlib.get_face_chip(downey1_img, img1_shape)
img2_aligned = dlib.get_face_chip(downey2_img, img2_shape)
img3_aligned = dlib.get_face_chip(chris_img, img3_shape)
In [50]:
# Converting each aligned face into a descriptor vector
img1_rep = detec_model.compute_face_descriptor(img1_aligned)
img2_rep = detec_model.compute_face_descriptor(img2_aligned)
img3_rep = detec_model.compute_face_descriptor(img3_aligned)
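Each call to compute_face_descriptor returns a 128-dimensional embedding in which faces of the same person map to nearby points; this is the property the distance check below exploits. A quick sanity check (converting the dlib vector to NumPy, as the later cells also do):

# The descriptor is a 128-dimensional vector
print(np.array(img1_rep).shape)   # expected: (128,)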
In [51]:
# Function to find the euclidean distance between a source and a target descriptor,
# used below to compare different faces
def findeuclidean(source_rep, test_rep):
    dist = source_rep - test_rep
    dist = np.sum(np.multiply(dist, dist))
    dist = np.sqrt(dist)
    return dist
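This is simply the L2 norm of the difference vector; assuming both descriptors have already been converted to NumPy arrays (as they are before the comparisons below), the same value comes from a one-liner:

# Equivalent built-in computation of the euclidean distance
dist = np.linalg.norm(img1_rep - img2_rep)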
In [52]:
# Function to check whether the target image matches the source image.
# We compare the two face descriptors by computing their euclidean distance:
# if the distance is below a threshold, both images show the same person;
# otherwise they show different people.
def check_result(img1_rep, img2_rep):
    threshold = 0.6
    distance = findeuclidean(img1_rep, img2_rep)
    if(distance < threshold):
        print("Both personalities are the same!")
    else:
        print("Both personalities are different!")

ASSESSING THE PERFORMANCE OF THE MODEL

In [53]:
# Display the two aligned faces being compared (downey1 vs. downey2)
plt.imshow(img1_aligned)
plt.show()
plt.imshow(img2_aligned)
plt.show()
# Convert the dlib descriptors to NumPy arrays for the distance computation
img1_rep = np.array(img1_rep)
img2_rep = np.array(img2_rep)
In [54]:
check_result(img1_rep, img2_rep)
Both personalities are the same!
In [55]:
# Display the aligned faces for the cross-person comparison (downey1 vs. chris)
plt.imshow(img1_aligned)
plt.show()
plt.imshow(img3_aligned)
plt.show()
img1_rep = np.array(img1_rep)
img3_rep = np.array(img3_rep)
In [56]:
check_result(img1_rep, img3_rep)
Both personalities are different!