Cainvas

Model Files:
- ageModel.h5 (Keras model)
- GenderModel.h5 (Keras model)

deepSea Compiled Models:
- ageModel.exe (deepSea, Ubuntu)
- GenderModel.exe (deepSea, Ubuntu)

Age-Gender Prediction

Credit: AITS Cainvas Community

Photo by Zhu Eason on Dribbble

Age and gender have always been important parts of our identity and play a significant role in our social lives. AI-based age and gender prediction can be applied in many areas, such as intelligent human-machine interfaces, security, cosmetics, and e-commerce.

Import the Dataset

In [1]:
!wget -N "https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/age_gender.zip"
!unzip -qo age_gender.zip 
!rm age_gender.zip
--2020-12-14 08:58:20--  https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/age_gender.zip
Resolving cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)... 52.219.66.120
Connecting to cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)|52.219.66.120|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 66303838 (63M) [application/zip]
Saving to: ‘age_gender.zip’

age_gender.zip      100%[===================>]  63.23M   108MB/s    in 0.6s    

2020-12-14 08:58:21 (108 MB/s) - ‘age_gender.zip’ saved [66303838/66303838]

Import necessary Libraries

In [2]:
import numpy as np 
import pandas as pd 
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
import tensorflow as tf
from tensorflow.keras.layers import Conv2D,InputLayer, Dropout, BatchNormalization, Flatten, Dense, MaxPooling2D
from tensorflow.keras import utils
from tensorflow.keras.models import Sequential
from tensorflow.keras.callbacks import ModelCheckpoint

Data Analysis

In [3]:
Dataset = pd.read_csv('age_gender.csv')
Dataset.head(5)
Out[3]:
age ethnicity gender img_name pixels
0 1 2 0 20161219203650636.jpg.chip.jpg 129 128 128 126 127 130 133 135 139 142 145 14...
1 1 2 0 20161219222752047.jpg.chip.jpg 164 74 111 168 169 171 175 182 184 188 193 199...
2 1 2 0 20161219222832191.jpg.chip.jpg 67 70 71 70 69 67 70 79 90 103 116 132 145 155...
3 1 2 0 20161220144911423.jpg.chip.jpg 193 197 198 200 199 200 202 203 204 205 208 21...
4 1 2 0 20161220144914327.jpg.chip.jpg 202 205 209 210 209 209 210 211 212 214 218 21...
In [4]:
Dataset.describe()
Out[4]:
age ethnicity gender
count 23705.000000 23705.000000 23705.000000
mean 33.300907 1.269226 0.477283
std 19.885708 1.345638 0.499494
min 1.000000 0.000000 0.000000
25% 23.000000 0.000000 0.000000
50% 29.000000 1.000000 0.000000
75% 45.000000 2.000000 1.000000
max 116.000000 4.000000 1.000000
In [5]:
# Transform the 'pixels' column from space-separated strings into 48x48 NumPy arrays
Dataset['pixels'] = Dataset['pixels'].map(lambda x: np.array(x.split(' '), dtype=np.float32).reshape(48, 48))
In [6]:
# Plot the age distribution
Dataset['age'].hist()
Out[6]:
<AxesSubplot:>
In [7]:
# Bin ages into five categories for stratified sampling
Dataset["age_cat"] = pd.cut(Dataset["age"],
                               bins=[0., 20., 40.0, 60., 80., np.inf],
                               labels=[1, 2, 3, 4, 5])
In [8]:
# Count the samples in each age category
Dataset["age_cat"].value_counts()
Out[8]:
2    12122
1     4877
3     4311
4     1855
5      540
Name: age_cat, dtype: int64
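As a quick sanity check of the binning, `pd.cut` with the bins above maps each age to its bracket. A minimal sketch on a few toy ages (the values 5, 25, 45, and 90 are illustrative, not rows from the dataset):

```python
import numpy as np
import pandas as pd

# pd.cut uses right-inclusive intervals by default: (0, 20], (20, 40], ...
ages = pd.Series([5, 25, 45, 90])
cats = pd.cut(ages, bins=[0., 20., 40., 60., 80., np.inf], labels=[1, 2, 3, 4, 5])
print(cats.tolist())  # [1, 2, 3, 5]
```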

Creating Stratified Train/Test Sets for the AgeModel and GenderModel

In [9]:
from sklearn.model_selection import StratifiedShuffleSplit

split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(Dataset, Dataset["age_cat"]):
    strat_train_set = Dataset.loc[train_index]
    strat_test_set = Dataset.loc[test_index]
In [10]:
def age_cat_proportions(data):
    return data["age_cat"].value_counts() / len(data)

train_set, test_set = train_test_split(Dataset, test_size=0.2, random_state=42)

compare_props = pd.DataFrame({
    "Overall": age_cat_proportions(Dataset),
    "Stratified": age_cat_proportions(strat_test_set),
    "Random": age_cat_proportions(test_set),
}).sort_index()
compare_props["Rand. %error"] = 100 * compare_props["Random"] / compare_props["Overall"] - 100
compare_props["Strat. %error"] = 100 * compare_props["Stratified"] / compare_props["Overall"] - 100
In [11]:
compare_props
Out[11]:
Overall Stratified Random Rand. %error Strat. %error
1 0.205737 0.205864 0.201223 -2.193972 0.061513
2 0.511369 0.511285 0.517401 1.179673 -0.016499
3 0.181860 0.181818 0.178865 -1.646950 -0.023196
4 0.078254 0.078254 0.079730 1.886792 0.000000
5 0.022780 0.022780 0.022780 0.000000 0.000000
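Stratification is what keeps those error percentages near zero. A minimal sketch on hypothetical data with an 80/20 label imbalance shows `StratifiedShuffleSplit` preserving the minority fraction in the test fold:

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

# Toy data: 100 samples, 20% belong to class 1.
y = np.array([0] * 80 + [1] * 20)
X = np.arange(100).reshape(-1, 1)

sss = StratifiedShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(sss.split(X, y))

# The test fold keeps the same 20% minority fraction as the full data.
print(y[test_idx].mean())  # 0.2
```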
In [12]:
for set_ in (strat_train_set, strat_test_set):
    set_.drop("age_cat", axis=1, inplace=True)

Data Visualization

In [13]:
strat_train_set
Out[13]:
age ethnicity gender img_name pixels
13327 35 1 1 20170117154137390.jpg.chip.jpg [[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,...
1216 10 4 1 20170104005649671.jpg.chip.jpg [[16.0, 13.0, 12.0, 10.0, 10.0, 14.0, 21.0, 29...
7056 26 3 1 20170117153041485.jpg.chip.jpg [[37.0, 23.0, 14.0, 16.0, 18.0, 15.0, 16.0, 28...
18424 50 0 0 20170104181517653.jpg.chip.jpg [[95.0, 93.0, 93.0, 106.0, 160.0, 180.0, 184.0...
5273 24 2 1 20170116173419546.jpg.chip.jpg [[19.0, 22.0, 33.0, 50.0, 74.0, 102.0, 123.0, ...
... ... ... ... ... ...
14209 36 0 0 20170105163417082.jpg.chip.jpg [[52.0, 46.0, 32.0, 35.0, 46.0, 94.0, 124.0, 1...
3291 20 3 1 20170104222031311.jpg.chip.jpg [[128.0, 16.0, 30.0, 29.0, 15.0, 12.0, 12.0, 1...
6976 26 3 1 20170117154857092.jpg.chip.jpg [[1.0, 1.0, 1.0, 2.0, 2.0, 6.0, 11.0, 30.0, 64...
21892 7 0 0 20170110215534588.jpg.chip.jpg [[100.0, 114.0, 130.0, 125.0, 141.0, 129.0, 13...
22411 75 0 1 20170110182543266.jpg.chip.jpg [[140.0, 140.0, 141.0, 142.0, 136.0, 128.0, 12...

18964 rows × 5 columns

In [14]:
full_dataset = pd.concat([strat_train_set, strat_test_set])  # DataFrame.append was removed in pandas 2.0
In [15]:
full_dataset.head()
Out[15]:
age ethnicity gender img_name pixels
13327 35 1 1 20170117154137390.jpg.chip.jpg [[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,...
1216 10 4 1 20170104005649671.jpg.chip.jpg [[16.0, 13.0, 12.0, 10.0, 10.0, 14.0, 21.0, 29...
7056 26 3 1 20170117153041485.jpg.chip.jpg [[37.0, 23.0, 14.0, 16.0, 18.0, 15.0, 16.0, 28...
18424 50 0 0 20170104181517653.jpg.chip.jpg [[95.0, 93.0, 93.0, 106.0, 160.0, 180.0, 184.0...
5273 24 2 1 20170116173419546.jpg.chip.jpg [[19.0, 22.0, 33.0, 50.0, 74.0, 102.0, 123.0, ...
In [16]:
strat_test_set
Out[16]:
age ethnicity gender img_name pixels
20402 6 3 1 20161220223138171.jpg.chip.jpg [[34.0, 39.0, 43.0, 36.0, 48.0, 66.0, 89.0, 11...
17678 48 3 0 20170119181346742.jpg.chip.jpg [[92.0, 85.0, 92.0, 106.0, 135.0, 148.0, 150.0...
7736 26 0 0 20170117195743604.jpg.chip.jpg [[36.0, 42.0, 55.0, 49.0, 77.0, 94.0, 88.0, 88...
10200 29 1 0 20170113145429766.jpg.chip.jpg [[254.0, 254.0, 255.0, 255.0, 255.0, 252.0, 24...
5019 24 0 0 20170114030346768.jpg.chip.jpg [[56.0, 20.0, 17.0, 21.0, 18.0, 18.0, 18.0, 24...
... ... ... ... ... ...
19765 56 0 1 20170109221138733.jpg.chip.jpg [[18.0, 19.0, 35.0, 16.0, 22.0, 45.0, 72.0, 10...
3063 2 2 1 20161219141208216.jpg.chip.jpg [[243.0, 164.0, 66.0, 23.0, 49.0, 80.0, 71.0, ...
15364 4 2 0 20161219200024691.jpg.chip.jpg [[23.0, 25.0, 39.0, 54.0, 70.0, 81.0, 84.0, 89...
7990 26 0 1 20170117175317465.jpg.chip.jpg [[172.0, 172.0, 171.0, 173.0, 170.0, 170.0, 17...
18600 52 0 0 20170113183829983.jpg.chip.jpg [[101.0, 141.0, 198.0, 232.0, 238.0, 235.0, 22...

4741 rows × 5 columns

In [17]:
full_dataset['pixels'] = full_dataset['pixels'].apply(lambda x: x/255)

age_dist = full_dataset['age'].value_counts()
ethnicity_dist = full_dataset['ethnicity'].value_counts()
gender_dist = full_dataset['gender'].value_counts().rename(index={0:'Male',1:'Female'})
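The division by 255 rescales the 8-bit pixel intensities into [0, 1] before the images are fed to the networks. A tiny sketch with three illustrative pixel values:

```python
import numpy as np

# Three hypothetical 8-bit intensities spanning the full range.
img = np.array([0, 128, 255], dtype=np.float32)
scaled = img / 255
print(scaled.min(), scaled.max())  # 0.0 1.0
```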
In [18]:
X = np.array(full_dataset['pixels'].tolist())

## Adding a channel dimension: (N, 48, 48) -> (N, 48, 48, 1)
X = X.reshape(X.shape[0],48,48,1)
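The same string-to-tensor pipeline can be traced end to end on a hypothetical 4-pixel "image" (the string below is illustrative, not a dataset row):

```python
import numpy as np

# A space-separated pixel string, as stored in the CSV's 'pixels' column.
row = "129 128 128 126"

# Parse to floats and reshape to a square image (2x2 here, 48x48 in the dataset).
img = np.array(row.split(' '), dtype=np.float32).reshape(2, 2)

# Add batch and channel axes, matching the (N, H, W, 1) layout used above.
batch = img[np.newaxis, ..., np.newaxis]
print(batch.shape)  # (1, 2, 2, 1)
```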

Train-Test Split

In [19]:
# Split the data into train and test sets
np.random.seed(42)
y_age = np.array(full_dataset['age'])
y_gender = np.array(full_dataset['gender'])
print('X',X.shape)
print('y_age',y_age.shape)
print('y_gender',y_gender.shape)

X_train, X_test, y_age_train, y_age_test, y_gender_train, y_gender_test = train_test_split(X,y_age, y_gender, test_size=0.2, random_state=42)
X (23705, 48, 48, 1)
y_age (23705,)
y_gender (23705,)
In [20]:
def plot(X, y):
    plt.title(y)
    plt.imshow(X.reshape(48, 48))
    plt.show()
In [21]:
plot(full_dataset['pixels'][50], full_dataset['gender'][50])

AgeModel Architecture

In [22]:
import tensorflow.keras.layers as L

tf.keras.backend.clear_session()

AgeModel = tf.keras.Sequential([
    L.InputLayer(input_shape=(48,48,1)),
    L.Conv2D(32, (3, 3), activation='relu'),
    L.BatchNormalization(),
    L.MaxPooling2D((2, 2)),
    L.Conv2D(64, (3, 3), activation='relu'),
    L.MaxPooling2D((2, 2)),
    L.Flatten(),
    L.Dense(64, activation='relu'),
    L.Dropout(rate=0.5),
    L.Dense(1)
])

AgeModel.compile(optimizer='adam',
              loss='mean_squared_error')
In [23]:
AgeModel.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 46, 46, 32)        320       
_________________________________________________________________
batch_normalization (BatchNo (None, 46, 46, 32)        128       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 23, 23, 32)        0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 21, 21, 64)        18496     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 10, 10, 64)        0         
_________________________________________________________________
flatten (Flatten)            (None, 6400)              0         
_________________________________________________________________
dense (Dense)                (None, 64)                409664    
_________________________________________________________________
dropout (Dropout)            (None, 64)                0         
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 65        
=================================================================
Total params: 428,673
Trainable params: 428,609
Non-trainable params: 64
_________________________________________________________________
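The output shapes in the summary above follow from standard shape arithmetic: a valid 3x3 convolution shrinks each spatial side by 2, and 2x2 max pooling halves it (integer division). A quick check that this reproduces the flattened size of 6400:

```python
# Valid convolution: n -> n - k + 1; 2x2 max pooling: n -> n // 2.
n = 48
n = n - 3 + 1   # Conv2D 3x3 -> 46
n = n // 2      # MaxPooling2D -> 23
n = n - 3 + 1   # Conv2D 3x3 -> 21
n = n // 2      # MaxPooling2D -> 10
flat = n * n * 64  # 64 filters in the last conv layer
print(flat)  # 6400
```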

AgeModel Training

In [24]:
checkpointer = ModelCheckpoint('ageModel.h5', monitor='val_loss', mode='min', verbose=2, save_best_only=True)
In [25]:
history = AgeModel.fit(X_train, y_age_train, epochs=50, validation_split=0.2, batch_size=64,callbacks=[checkpointer])
Epoch 1/50
234/238 [============================>.] - ETA: 0s - loss: 286.2662
Epoch 00001: val_loss improved from inf to 1066.42444, saving model to ageModel.h5
238/238 [==============================] - 1s 6ms/step - loss: 285.5999 - val_loss: 1066.4244
Epoch 2/50
232/238 [============================>.] - ETA: 0s - loss: 185.9249
Epoch 00002: val_loss improved from 1066.42444 to 540.01013, saving model to ageModel.h5
238/238 [==============================] - 1s 5ms/step - loss: 185.5370 - val_loss: 540.0101
Epoch 3/50
233/238 [============================>.] - ETA: 0s - loss: 164.4420
Epoch 00003: val_loss improved from 540.01013 to 213.64314, saving model to ageModel.h5
238/238 [==============================] - 1s 5ms/step - loss: 163.9943 - val_loss: 213.6431
Epoch 4/50
233/238 [============================>.] - ETA: 0s - loss: 151.6891
Epoch 00004: val_loss improved from 213.64314 to 196.80907, saving model to ageModel.h5
238/238 [==============================] - 1s 5ms/step - loss: 151.3881 - val_loss: 196.8091
Epoch 5/50
233/238 [============================>.] - ETA: 0s - loss: 143.1645
Epoch 00005: val_loss improved from 196.80907 to 114.22736, saving model to ageModel.h5
238/238 [==============================] - 1s 5ms/step - loss: 143.3918 - val_loss: 114.2274
Epoch 6/50
232/238 [============================>.] - ETA: 0s - loss: 134.3705
Epoch 00006: val_loss did not improve from 114.22736
238/238 [==============================] - 1s 5ms/step - loss: 134.1076 - val_loss: 297.4822
Epoch 7/50
232/238 [============================>.] - ETA: 0s - loss: 128.2115
Epoch 00007: val_loss did not improve from 114.22736
238/238 [==============================] - 1s 5ms/step - loss: 127.8800 - val_loss: 140.7396
Epoch 8/50
232/238 [============================>.] - ETA: 0s - loss: 123.9512
Epoch 00008: val_loss did not improve from 114.22736
238/238 [==============================] - 1s 5ms/step - loss: 124.0468 - val_loss: 119.5395
Epoch 9/50
232/238 [============================>.] - ETA: 0s - loss: 121.6847
Epoch 00009: val_loss improved from 114.22736 to 91.66720, saving model to ageModel.h5
238/238 [==============================] - 1s 5ms/step - loss: 121.3594 - val_loss: 91.6672
Epoch 10/50
232/238 [============================>.] - ETA: 0s - loss: 117.3604
Epoch 00010: val_loss did not improve from 91.66720
238/238 [==============================] - 1s 5ms/step - loss: 117.3798 - val_loss: 177.7045
Epoch 11/50
232/238 [============================>.] - ETA: 0s - loss: 112.4703
Epoch 00011: val_loss did not improve from 91.66720
238/238 [==============================] - 1s 5ms/step - loss: 112.2663 - val_loss: 102.4054
Epoch 12/50
232/238 [============================>.] - ETA: 0s - loss: 113.0116
Epoch 00012: val_loss did not improve from 91.66720
238/238 [==============================] - 1s 5ms/step - loss: 112.8832 - val_loss: 206.0475
Epoch 13/50
232/238 [============================>.] - ETA: 0s - loss: 108.5813
Epoch 00013: val_loss did not improve from 91.66720
238/238 [==============================] - 1s 5ms/step - loss: 108.2748 - val_loss: 194.7007
Epoch 14/50
233/238 [============================>.] - ETA: 0s - loss: 108.8550
Epoch 00014: val_loss did not improve from 91.66720
238/238 [==============================] - 1s 5ms/step - loss: 108.5674 - val_loss: 113.1224
Epoch 15/50
232/238 [============================>.] - ETA: 0s - loss: 102.2055
Epoch 00015: val_loss did not improve from 91.66720
238/238 [==============================] - 1s 5ms/step - loss: 102.0137 - val_loss: 100.8581
Epoch 16/50
232/238 [============================>.] - ETA: 0s - loss: 101.6277
Epoch 00016: val_loss did not improve from 91.66720
238/238 [==============================] - 1s 5ms/step - loss: 101.6285 - val_loss: 94.6730
Epoch 17/50
233/238 [============================>.] - ETA: 0s - loss: 99.8282
Epoch 00017: val_loss did not improve from 91.66720
238/238 [==============================] - 1s 5ms/step - loss: 99.7062 - val_loss: 105.9717
Epoch 18/50
233/238 [============================>.] - ETA: 0s - loss: 100.0202
Epoch 00018: val_loss improved from 91.66720 to 86.29923, saving model to ageModel.h5
238/238 [==============================] - 1s 5ms/step - loss: 100.5353 - val_loss: 86.2992
Epoch 19/50
232/238 [============================>.] - ETA: 0s - loss: 95.1813
Epoch 00019: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 95.4503 - val_loss: 144.0168
Epoch 20/50
232/238 [============================>.] - ETA: 0s - loss: 97.6045
Epoch 00020: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 97.8340 - val_loss: 90.2909
Epoch 21/50
232/238 [============================>.] - ETA: 0s - loss: 90.5480
Epoch 00021: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 90.4660 - val_loss: 149.7598
Epoch 22/50
232/238 [============================>.] - ETA: 0s - loss: 90.0907
Epoch 00022: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 90.1268 - val_loss: 94.0419
Epoch 23/50
232/238 [============================>.] - ETA: 0s - loss: 90.7165
Epoch 00023: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 90.7972 - val_loss: 155.4871
Epoch 24/50
232/238 [============================>.] - ETA: 0s - loss: 89.8452
Epoch 00024: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 89.6999 - val_loss: 131.3130
Epoch 25/50
231/238 [============================>.] - ETA: 0s - loss: 87.8657
Epoch 00025: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 88.0912 - val_loss: 146.1218
Epoch 26/50
232/238 [============================>.] - ETA: 0s - loss: 84.8350
Epoch 00026: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 85.0894 - val_loss: 100.7113
Epoch 27/50
232/238 [============================>.] - ETA: 0s - loss: 82.9415
Epoch 00027: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 82.6979 - val_loss: 95.1299
Epoch 28/50
232/238 [============================>.] - ETA: 0s - loss: 84.9293
Epoch 00028: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 84.8110 - val_loss: 112.3625
Epoch 29/50
232/238 [============================>.] - ETA: 0s - loss: 86.7754
Epoch 00029: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 86.6460 - val_loss: 97.8881
Epoch 30/50
233/238 [============================>.] - ETA: 0s - loss: 82.4508
Epoch 00030: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 82.6687 - val_loss: 101.1854
Epoch 31/50
232/238 [============================>.] - ETA: 0s - loss: 80.2478
Epoch 00031: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 80.3190 - val_loss: 103.0450
Epoch 32/50
232/238 [============================>.] - ETA: 0s - loss: 78.3151
Epoch 00032: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 78.2053 - val_loss: 95.6181
Epoch 33/50
232/238 [============================>.] - ETA: 0s - loss: 76.2125
Epoch 00033: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 76.1204 - val_loss: 133.4474
Epoch 34/50
232/238 [============================>.] - ETA: 0s - loss: 79.7978
Epoch 00034: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 79.8340 - val_loss: 99.6469
Epoch 35/50
232/238 [============================>.] - ETA: 0s - loss: 78.1433
Epoch 00035: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 78.5447 - val_loss: 120.9925
Epoch 36/50
232/238 [============================>.] - ETA: 0s - loss: 79.2853
Epoch 00036: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 79.0875 - val_loss: 109.8794
Epoch 37/50
232/238 [============================>.] - ETA: 0s - loss: 80.2814
Epoch 00037: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 80.0441 - val_loss: 143.4534
Epoch 38/50
233/238 [============================>.] - ETA: 0s - loss: 75.6576
Epoch 00038: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 75.5048 - val_loss: 92.3368
Epoch 39/50
232/238 [============================>.] - ETA: 0s - loss: 75.7996
Epoch 00039: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 75.5266 - val_loss: 107.0028
Epoch 40/50
232/238 [============================>.] - ETA: 0s - loss: 73.2651
Epoch 00040: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 73.3241 - val_loss: 95.4426
Epoch 41/50
232/238 [============================>.] - ETA: 0s - loss: 73.3477
Epoch 00041: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 73.1804 - val_loss: 102.3233
Epoch 42/50
232/238 [============================>.] - ETA: 0s - loss: 73.6182
Epoch 00042: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 73.6006 - val_loss: 103.4750
Epoch 43/50
232/238 [============================>.] - ETA: 0s - loss: 75.9336
Epoch 00043: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 76.0361 - val_loss: 127.1263
Epoch 44/50
232/238 [============================>.] - ETA: 0s - loss: 71.5623
Epoch 00044: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 71.8548 - val_loss: 130.1533
Epoch 45/50
232/238 [============================>.] - ETA: 0s - loss: 70.1876
Epoch 00045: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 69.8612 - val_loss: 116.8164
Epoch 46/50
232/238 [============================>.] - ETA: 0s - loss: 69.4999
Epoch 00046: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 69.6164 - val_loss: 101.5540
Epoch 47/50
232/238 [============================>.] - ETA: 0s - loss: 71.9057
Epoch 00047: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 71.7430 - val_loss: 98.4158
Epoch 48/50
232/238 [============================>.] - ETA: 0s - loss: 71.3997
Epoch 00048: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 71.3280 - val_loss: 104.6958
Epoch 49/50
231/238 [============================>.] - ETA: 0s - loss: 69.8919
Epoch 00049: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 69.7740 - val_loss: 101.4516
Epoch 50/50
232/238 [============================>.] - ETA: 0s - loss: 66.0722
Epoch 00050: val_loss did not improve from 86.29923
238/238 [==============================] - 1s 5ms/step - loss: 66.1853 - val_loss: 101.6719

Training Plot

In [26]:
pd.DataFrame(history.history).plot(figsize=(8, 5))
plt.grid(True)
plt.show()

Evaluating AgeModel

In [27]:
AgeModel.evaluate(X_test,y_age_test)
149/149 [==============================] - 0s 2ms/step - loss: 99.3864
Out[27]:
99.38643646240234
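Since the model is trained with mean squared error, the test loss above is in squared years; its square root gives a more interpretable typical error:

```python
import numpy as np

# Test MSE reported above, in squared years.
mse = 99.3864

# RMSE: the model's predictions are typically off by about this many years.
rmse = np.sqrt(mse)
print(round(rmse, 1))  # 10.0
```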
In [28]:
y_age_test[:10]
Out[28]:
array([90, 37, 35, 24, 40, 24, 21, 40, 54, 57])
In [29]:
y_age_pred = AgeModel.predict(X_test[:10])
np.round(y_age_pred)
Out[29]:
array([[105.],
       [ 26.],
       [ 37.],
       [ 29.],
       [ 29.],
       [ 36.],
       [ 24.],
       [ 27.],
       [ 54.],
       [ 46.]], dtype=float32)

GenderModel Architecture

In [30]:
## Gender Model
tf.keras.backend.clear_session()
GenderModel = tf.keras.Sequential([
    L.InputLayer(input_shape=(48,48,1)),
    L.Conv2D(32, (3, 3), activation='relu'),
    L.BatchNormalization(),
    L.MaxPooling2D((2, 2)),
    L.Conv2D(64, (3, 3), activation='relu'),
    L.MaxPooling2D((2, 2)),
    L.Flatten(),
    L.Dense(64, activation='relu'),
    L.Dropout(rate=0.5),
    L.Dense(1, activation='sigmoid')
])

GenderModel.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=['accuracy'])
In [31]:
GenderModel.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 46, 46, 32)        320       
_________________________________________________________________
batch_normalization (BatchNo (None, 46, 46, 32)        128       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 23, 23, 32)        0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 21, 21, 64)        18496     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 10, 10, 64)        0         
_________________________________________________________________
flatten (Flatten)            (None, 6400)              0         
_________________________________________________________________
dense (Dense)                (None, 64)                409664    
_________________________________________________________________
dropout (Dropout)            (None, 64)                0         
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 65        
=================================================================
Total params: 428,673
Trainable params: 428,609
Non-trainable params: 64
_________________________________________________________________

GenderModel Training

In [32]:
Gender_history = GenderModel.fit(
    X_train, y_gender_train, epochs=18, validation_split=0.2, batch_size=64)
Epoch 1/18
238/238 [==============================] - 1s 6ms/step - loss: 0.4478 - accuracy: 0.7911 - val_loss: 0.5666 - val_accuracy: 0.8526
Epoch 2/18
238/238 [==============================] - 1s 5ms/step - loss: 0.3232 - accuracy: 0.8583 - val_loss: 0.3647 - val_accuracy: 0.8808
Epoch 3/18
238/238 [==============================] - 1s 5ms/step - loss: 0.2820 - accuracy: 0.8774 - val_loss: 0.2698 - val_accuracy: 0.8927
Epoch 4/18
238/238 [==============================] - 1s 5ms/step - loss: 0.2651 - accuracy: 0.8854 - val_loss: 0.2526 - val_accuracy: 0.8814
Epoch 5/18
238/238 [==============================] - 1s 5ms/step - loss: 0.2469 - accuracy: 0.8924 - val_loss: 0.2601 - val_accuracy: 0.8782
Epoch 6/18
238/238 [==============================] - 1s 5ms/step - loss: 0.2292 - accuracy: 0.9010 - val_loss: 0.2490 - val_accuracy: 0.8951
Epoch 7/18
238/238 [==============================] - 1s 5ms/step - loss: 0.2184 - accuracy: 0.9056 - val_loss: 0.2472 - val_accuracy: 0.8914
Epoch 8/18
238/238 [==============================] - 1s 5ms/step - loss: 0.2039 - accuracy: 0.9127 - val_loss: 0.2730 - val_accuracy: 0.8998
Epoch 9/18
238/238 [==============================] - 1s 5ms/step - loss: 0.1887 - accuracy: 0.9179 - val_loss: 0.2633 - val_accuracy: 0.8877
Epoch 10/18
238/238 [==============================] - 1s 5ms/step - loss: 0.2071 - accuracy: 0.9080 - val_loss: 0.2493 - val_accuracy: 0.8909
Epoch 11/18
238/238 [==============================] - 1s 5ms/step - loss: 0.1855 - accuracy: 0.9206 - val_loss: 0.2445 - val_accuracy: 0.8980
Epoch 12/18
238/238 [==============================] - 1s 5ms/step - loss: 0.1646 - accuracy: 0.9293 - val_loss: 0.2803 - val_accuracy: 0.9014
Epoch 13/18
238/238 [==============================] - 1s 5ms/step - loss: 0.1590 - accuracy: 0.9320 - val_loss: 0.2461 - val_accuracy: 0.9043
Epoch 14/18
238/238 [==============================] - 1s 5ms/step - loss: 0.1408 - accuracy: 0.9411 - val_loss: 0.2740 - val_accuracy: 0.8998
Epoch 15/18
238/238 [==============================] - 1s 5ms/step - loss: 0.1360 - accuracy: 0.9410 - val_loss: 0.2962 - val_accuracy: 0.9006
Epoch 16/18
238/238 [==============================] - 1s 6ms/step - loss: 0.1335 - accuracy: 0.9409 - val_loss: 0.3129 - val_accuracy: 0.9011
Epoch 17/18
238/238 [==============================] - 1s 6ms/step - loss: 0.1282 - accuracy: 0.9434 - val_loss: 0.3634 - val_accuracy: 0.8932
Epoch 18/18
238/238 [==============================] - 1s 6ms/step - loss: 0.1178 - accuracy: 0.9490 - val_loss: 0.3291 - val_accuracy: 0.8961

Training Plot

In [33]:
GenderModel.save("GenderModel.h5")
pd.DataFrame(Gender_history.history).plot(figsize=(8, 5))
plt.grid(True)
plt.show()

Evaluating the Performance of the GenderModel

In [34]:
loss, acc = GenderModel.evaluate(X_test,y_gender_test,verbose=0)
print('Test loss: {}'.format(loss))
print('Test Accuracy: {}'.format(acc))
Test loss: 0.3629770576953888
Test Accuracy: 0.8873655200004578
In [35]:
y_gender_test[:10]
Out[35]:
array([1, 1, 0, 1, 1, 1, 1, 1, 0, 0])
In [36]:
y_gender_pred = GenderModel.predict(X_test)
np.transpose(np.round(y_gender_pred))
Out[36]:
array([[1., 1., 0., ..., 0., 0., 1.]], dtype=float32)
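The sigmoid outputs are probabilities of the "Female" class (label 1); rounding at 0.5 turns them into hard labels, which is what the accuracy metric compares against the ground truth. A minimal sketch with hypothetical probabilities:

```python
import numpy as np

# Hypothetical sigmoid outputs and ground-truth gender labels.
probs = np.array([0.9, 0.2, 0.6, 0.4])
true = np.array([1, 0, 0, 0])

# Threshold at 0.5, then compare to the labels.
pred = np.round(probs)
print((pred == true).mean())  # 0.75
```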

Assessing the Performance of the Model

In [37]:
def plot(X,y_age,y_gender):
    if y_gender<=0.5:
        plt.title('Gender is Male and Age is around ' +str(y_age))
        
    else:
        plt.title('Gender is Female and Age is around ' +str(y_age))
    plt.imshow(X.reshape(48,48))
    plt.show()
In [38]:
n=2
plot(X_test[n],int(y_age_pred[n]),y_gender_pred[n])
In [39]:
n=4
plot(X_test[n],int(y_age_pred[n]),y_gender_pred[n])
In [40]:
n=9
plot(X_test[n],int(y_age_pred[n]),y_gender_pred[n])

Compiling Model with DeepC

In [41]:
!deepCC ageModel.h5
[INFO]
Reading [keras model] 'ageModel.h5'
[SUCCESS]
Saved 'ageModel.onnx'
[INFO]
Reading [onnx model] 'ageModel.onnx'
[INFO]
Model info:
  ir_vesion : 5
  doc       : 
[WARNING]
[ONNX]: terminal (input/output) input_1's shape is less than 1. Changing it to 1.
[WARNING]
[ONNX]: terminal (input/output) dense_1's shape is less than 1. Changing it to 1.
WARN (GRAPH): found operator node with the same name (dense_1) as io node.
[INFO]
Running DNNC graph sanity check ...
[SUCCESS]
Passed sanity check.
[INFO]
Writing C++ file 'ageModel_deepC/ageModel.cpp'
[INFO]
deepSea model files are ready in 'ageModel_deepC/' 
[RUNNING COMMAND]
g++ -std=c++11 -O3 -fno-rtti -fno-exceptions -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -isystem /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 ageModel_deepC/ageModel.cpp -o ageModel_deepC/ageModel.exe
[RUNNING COMMAND]
size "ageModel_deepC/ageModel.exe"
   text	   data	    bss	    dec	    hex	filename
 165740	1718440	    888	1885068	 1cc38c	ageModel_deepC/ageModel.exe
[SUCCESS]
Saved model as executable "ageModel_deepC/ageModel.exe"
In [42]:
!deepCC GenderModel.h5
[INFO]
Reading [keras model] 'GenderModel.h5'
[SUCCESS]
Saved 'GenderModel.onnx'
[INFO]
Reading [onnx model] 'GenderModel.onnx'
[INFO]
Model info:
  ir_vesion : 5
  doc       : 
[WARNING]
[ONNX]: terminal (input/output) input_1's shape is less than 1. Changing it to 1.
[WARNING]
[ONNX]: terminal (input/output) dense_1's shape is less than 1. Changing it to 1.
WARN (GRAPH): found operator node with the same name (dense_1) as io node.
[INFO]
Running DNNC graph sanity check ...
[SUCCESS]
Passed sanity check.
[INFO]
Writing C++ file 'GenderModel_deepC/GenderModel.cpp'
[INFO]
deepSea model files are ready in 'GenderModel_deepC/' 
[RUNNING COMMAND]
g++ -std=c++11 -O3 -fno-rtti -fno-exceptions -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -isystem /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 GenderModel_deepC/GenderModel.cpp -o GenderModel_deepC/GenderModel.exe
[RUNNING COMMAND]
size "GenderModel_deepC/GenderModel.exe"
   text	   data	    bss	    dec	    hex	filename
 167674	1718632	    888	1887194	 1ccbda	GenderModel_deepC/GenderModel.exe
[SUCCESS]
Saved model as executable "GenderModel_deepC/GenderModel.exe"