
Fetal health detection

Credit: AITS Cainvas Community

Photo by Vivi Garleone on Dribbble

Classifying fetal health in order to prevent child and maternal mortality.

The United Nations' Sustainable Development Goals reflect that the reduction of child mortality is a key indicator of human progress. The same goal also covers maternal mortality.

Most of these recorded deaths occurred in low-resource regions and could have been prevented.

Cardiotocography (CTG) is a means of measuring the fetal heart rate, fetal movements and uterine contractions, thus continuously monitoring the health of the mother and child. The equipment used to perform the monitoring, called a cardiotocograph, works using ultrasound pulses. This is a simple and cost-effective way to assess fetal health, allowing professionals to take necessary action.

In [1]:
import pandas as pd
import numpy as np
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import random
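
Note: this notebook does not fix any random seeds, so the resampling, the train-test split and the training results will vary from run to run. A minimal sketch for reproducible runs (the seed value 42 is an arbitrary choice, not part of the original notebook):

# a sketch: fix the seeds used by random, numpy and tensorflow for repeatable results
import tensorflow as tf

random.seed(42)
np.random.seed(42)
tf.random.set_seed(42)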

Dataset

Citation: Ayres-de-Campos et al. (2000). SisPorto 2.0: A Program for Automated Analysis of Cardiotocograms. J Matern Fetal Med 9:311-318.

The dataset has 2126 samples containing features extracted from cardiotocogram exams.

The data was labelled by expert obstetricians into 3 classes.

In [2]:
df = pd.read_csv('https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/fetal_health.csv')
df
Out[2]:
baseline value accelerations fetal_movement uterine_contractions light_decelerations severe_decelerations prolongued_decelerations abnormal_short_term_variability mean_value_of_short_term_variability percentage_of_time_with_abnormal_long_term_variability ... histogram_min histogram_max histogram_number_of_peaks histogram_number_of_zeroes histogram_mode histogram_mean histogram_median histogram_variance histogram_tendency fetal_health
0 120.0 0.000 0.000 0.000 0.000 0.0 0.0 73.0 0.5 43.0 ... 62.0 126.0 2.0 0.0 120.0 137.0 121.0 73.0 1.0 2.0
1 132.0 0.006 0.000 0.006 0.003 0.0 0.0 17.0 2.1 0.0 ... 68.0 198.0 6.0 1.0 141.0 136.0 140.0 12.0 0.0 1.0
2 133.0 0.003 0.000 0.008 0.003 0.0 0.0 16.0 2.1 0.0 ... 68.0 198.0 5.0 1.0 141.0 135.0 138.0 13.0 0.0 1.0
3 134.0 0.003 0.000 0.008 0.003 0.0 0.0 16.0 2.4 0.0 ... 53.0 170.0 11.0 0.0 137.0 134.0 137.0 13.0 1.0 1.0
4 132.0 0.007 0.000 0.008 0.000 0.0 0.0 16.0 2.4 0.0 ... 53.0 170.0 9.0 0.0 137.0 136.0 138.0 11.0 1.0 1.0
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
2121 140.0 0.000 0.000 0.007 0.000 0.0 0.0 79.0 0.2 25.0 ... 137.0 177.0 4.0 0.0 153.0 150.0 152.0 2.0 0.0 2.0
2122 140.0 0.001 0.000 0.007 0.000 0.0 0.0 78.0 0.4 22.0 ... 103.0 169.0 6.0 0.0 152.0 148.0 151.0 3.0 1.0 2.0
2123 140.0 0.001 0.000 0.007 0.000 0.0 0.0 79.0 0.4 20.0 ... 103.0 170.0 5.0 0.0 153.0 148.0 152.0 4.0 1.0 2.0
2124 140.0 0.001 0.000 0.006 0.000 0.0 0.0 78.0 0.4 27.0 ... 103.0 169.0 6.0 0.0 152.0 147.0 151.0 4.0 1.0 2.0
2125 142.0 0.002 0.002 0.008 0.000 0.0 0.0 74.0 0.4 36.0 ... 117.0 159.0 2.0 1.0 145.0 143.0 145.0 1.0 0.0 1.0

2126 rows × 22 columns

In [3]:
# Here are the class labels according to the metadata

class_names = ['Normal', 'Suspect', 'Pathological']
In [4]:
# Let's see the spread of values across classes

df['fetal_health'].value_counts()
Out[4]:
1.0    1655
2.0     295
3.0     176
Name: fetal_health, dtype: int64

This dataset is heavily unbalanced.

In order to balance the dataset, there are two options:

  • upsampling - resample the minority classes so that each count matches the class with the highest count (here, 1655).
  • downsampling - pick n samples from each class, where n = number of samples in the class with the lowest count (here, 176).

Here, we will be upsampling.

In [5]:
# separating into 3 dataframes, one for each class 

df1 = df[df['fetal_health'] == 1.0]
df2 = df[df['fetal_health'] == 2.0]
df3 = df[df['fetal_health'] == 3.0]
In [6]:
print("Number of samples in:")
print("Class label 1 - ", len(df1))
print("Class label 2 - ", len(df2))
print("Class label 3 - ", len(df3))

# Upsampling 

df2 = df2.sample(len(df1), replace = True)    # replace = True enables sampling with replacement
df3 = df3.sample(len(df1), replace = True)

print('\nAfter resampling - ')

print("Number of samples in:")
print("Class label 1 - ", len(df1))
print("Class label 2 - ", len(df2))
print("Class label 3 - ", len(df3))
Number of samples in:
Class label 1 -  1655
Class label 2 -  295
Class label 3 -  176

After resampling - 
Number of samples in:
Class label 1 -  1655
Class label 2 -  1655
Class label 3 -  1655
In [7]:
# concatenate to form a single dataframe
# (DataFrame.append was removed in pandas 2.0; pd.concat is the portable equivalent)

dfx = pd.concat([df1, df2, df3])

print('Total number of samples - ', len(dfx))
Total number of samples -  4965
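
As a quick sanity check (a sketch, not one of the original cells), the class counts of the combined dataframe can be verified to be equal:

# each class should now have 1655 samples
dfx['fetal_health'].value_counts()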

Preprocessing

One hot encoding

In [8]:
# Defining input and output columns

inputc = dfx.columns[:-1]
outputc = [1, 2, 3]    # to be used after one hot encoding

print("Input columns - ", list(inputc))
print("\nOutput columns - ", outputc)
Input columns -  ['baseline value', 'accelerations', 'fetal_movement', 'uterine_contractions', 'light_decelerations', 'severe_decelerations', 'prolongued_decelerations', 'abnormal_short_term_variability', 'mean_value_of_short_term_variability', 'percentage_of_time_with_abnormal_long_term_variability', 'mean_value_of_long_term_variability', 'histogram_width', 'histogram_min', 'histogram_max', 'histogram_number_of_peaks', 'histogram_number_of_zeroes', 'histogram_mode', 'histogram_mean', 'histogram_median', 'histogram_variance', 'histogram_tendency']

Output columns -  [1, 2, 3]

Since this is a classification problem, the model's output, currently an integer class label, should be one-hot encoded.

In [9]:
y = pd.get_dummies(dfx.fetal_health)

y
Out[9]:
1.0 2.0 3.0
1 1 0 0
2 1 0 0
3 1 0 0
4 1 0 0
12 1 0 0
... ... ... ...
335 0 0 1
700 0 0 1
363 0 0 1
2034 0 0 1
1752 0 0 1

4965 rows × 3 columns

In [10]:
# adding as columns to the dataframe

for x in outputc:
    dfx[x] = y[x]
    
dfx    

# as said before, the output columns are labelled 1, 2, 3
Out[10]:
baseline value accelerations fetal_movement uterine_contractions light_decelerations severe_decelerations prolongued_decelerations abnormal_short_term_variability mean_value_of_short_term_variability percentage_of_time_with_abnormal_long_term_variability ... histogram_number_of_zeroes histogram_mode histogram_mean histogram_median histogram_variance histogram_tendency fetal_health 1 2 3
1 132.0 0.006 0.000 0.006 0.003 0.0 0.000 17.0 2.1 0.0 ... 1.0 141.0 136.0 140.0 12.0 0.0 1.0 1 0 0
2 133.0 0.003 0.000 0.008 0.003 0.0 0.000 16.0 2.1 0.0 ... 1.0 141.0 135.0 138.0 13.0 0.0 1.0 1 0 0
3 134.0 0.003 0.000 0.008 0.003 0.0 0.000 16.0 2.4 0.0 ... 0.0 137.0 134.0 137.0 13.0 1.0 1.0 1 0 0
4 132.0 0.007 0.000 0.008 0.000 0.0 0.000 16.0 2.4 0.0 ... 0.0 137.0 136.0 138.0 11.0 1.0 1.0 1 0 0
12 131.0 0.005 0.072 0.008 0.003 0.0 0.000 28.0 1.4 0.0 ... 0.0 135.0 134.0 137.0 7.0 1.0 1.0 1 0 0
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
335 146.0 0.000 0.003 0.000 0.000 0.0 0.000 81.0 0.2 67.0 ... 0.0 146.0 144.0 146.0 1.0 0.0 3.0 0 0 1
700 130.0 0.000 0.346 0.003 0.003 0.0 0.003 29.0 2.2 0.0 ... 0.0 129.0 106.0 122.0 129.0 0.0 3.0 0 0 1
363 135.0 0.000 0.013 0.001 0.000 0.0 0.000 69.0 0.3 78.0 ... 0.0 136.0 136.0 137.0 0.0 0.0 3.0 0 0 1
2034 129.0 0.000 0.001 0.006 0.005 0.0 0.002 67.0 3.3 0.0 ... 0.0 105.0 80.0 107.0 14.0 -1.0 3.0 0 0 1
1752 134.0 0.004 0.001 0.001 0.003 0.0 0.003 61.0 1.8 0.0 ... 0.0 88.0 112.0 111.0 182.0 -1.0 3.0 0 0 1

4965 rows × 25 columns

Train test split

In [11]:
# Splitting into train and test using 80-20 split

traindf, testdf = train_test_split(dfx.sample(frac=1), test_size = 0.2)    # shuffling the dataframe before splitting

print('Number of samples in:')
print('Train set - ' , len(traindf))
print('Test set - ', len(testdf))
Number of samples in:
Train set -  3972
Test set -  993
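
Since the split is random, the class proportions in train and test can drift slightly between runs. A hedged sketch of a stratified alternative (not what this notebook runs): train_test_split shuffles by default, and stratify keeps the three classes in equal proportion in both sets.

# a sketch: stratified 80-20 split; random_state = 42 is an arbitrary choice
traindf, testdf = train_test_split(
    dfx, test_size = 0.2, stratify = dfx['fetal_health'], random_state = 42
)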
In [12]:
# Splitting into X and y arrays for preprocessing purposes

Xtrain, ytrain = traindf[inputc], traindf[outputc]
Xtest, ytest = testdf[inputc], testdf[outputc]

Scaling the values

In [13]:
# Each feature has a different range. 
# Using min_max_scaler to scale them to values in the range [0,1].

min_max_scaler = MinMaxScaler()

# Fit on training set alone
Xtrain = min_max_scaler.fit_transform(Xtrain)

# Use it to transform the test input
Xtest = min_max_scaler.transform(Xtest)
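
The fitted scaler is part of the model's preprocessing contract: any new sample scored later (including through the compiled executable below) must be transformed with the same minimum and range learned here. A sketch of persisting it, assuming joblib is available in the environment:

# a sketch: save the fitted scaler next to the model for use at inference time
import joblib

joblib.dump(min_max_scaler, 'min_max_scaler.joblib')
# later: scaler = joblib.load('min_max_scaler.joblib'); x = scaler.transform(x_raw)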

The model

In [14]:
model = Sequential([
    Dense(128, activation = 'relu'),
    Dense(64, activation = 'relu'),
    Dense(3, activation = 'softmax'),
])
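
The input dimension is left implicit above and is inferred on the first call to fit(). A sketch of the same architecture with an explicit input shape (the 21 input features), which also lets model.summary() run before training:

# a sketch: identical architecture, built eagerly with an explicit input shape
model = Sequential([
    Dense(128, activation = 'relu', input_shape = (len(inputc),)),
    Dense(64, activation = 'relu'),
    Dense(3, activation = 'softmax'),
])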
In [15]:
# training with a learning rate of 0.01

model.compile(optimizer = Adam(0.01), loss = 'categorical_crossentropy', metrics = ['accuracy'])
history1 = model.fit(Xtrain, ytrain, validation_data= (Xtest, ytest), epochs = 64)
Epoch 1/64
125/125 [==============================] - 0s 3ms/step - loss: 0.4545 - accuracy: 0.8129 - val_loss: 0.3606 - val_accuracy: 0.8540
Epoch 2/64
125/125 [==============================] - 0s 2ms/step - loss: 0.3369 - accuracy: 0.8560 - val_loss: 0.3101 - val_accuracy: 0.8701
Epoch 3/64
125/125 [==============================] - 0s 2ms/step - loss: 0.2740 - accuracy: 0.8865 - val_loss: 0.2333 - val_accuracy: 0.9215
Epoch 4/64
125/125 [==============================] - 0s 2ms/step - loss: 0.2523 - accuracy: 0.8955 - val_loss: 0.2899 - val_accuracy: 0.8691
Epoch 5/64
125/125 [==============================] - 0s 2ms/step - loss: 0.2469 - accuracy: 0.9028 - val_loss: 0.2198 - val_accuracy: 0.9285
Epoch 6/64
125/125 [==============================] - 0s 2ms/step - loss: 0.2185 - accuracy: 0.9136 - val_loss: 0.1935 - val_accuracy: 0.9366
Epoch 7/64
125/125 [==============================] - 0s 2ms/step - loss: 0.2187 - accuracy: 0.9099 - val_loss: 0.2227 - val_accuracy: 0.9154
Epoch 8/64
125/125 [==============================] - 0s 2ms/step - loss: 0.2244 - accuracy: 0.9141 - val_loss: 0.3530 - val_accuracy: 0.8530
Epoch 9/64
125/125 [==============================] - 0s 2ms/step - loss: 0.2007 - accuracy: 0.9182 - val_loss: 0.1775 - val_accuracy: 0.9305
Epoch 10/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1766 - accuracy: 0.9310 - val_loss: 0.1785 - val_accuracy: 0.9305
Epoch 11/64
125/125 [==============================] - 0s 2ms/step - loss: 0.2017 - accuracy: 0.9154 - val_loss: 0.1932 - val_accuracy: 0.9225
Epoch 12/64
125/125 [==============================] - 0s 2ms/step - loss: 0.2276 - accuracy: 0.9106 - val_loss: 0.2248 - val_accuracy: 0.9134
Epoch 13/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1678 - accuracy: 0.9350 - val_loss: 0.1666 - val_accuracy: 0.9315
Epoch 14/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1692 - accuracy: 0.9310 - val_loss: 0.2029 - val_accuracy: 0.9084
Epoch 15/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1667 - accuracy: 0.9376 - val_loss: 0.1680 - val_accuracy: 0.9386
Epoch 16/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1577 - accuracy: 0.9398 - val_loss: 0.1536 - val_accuracy: 0.9446
Epoch 17/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1707 - accuracy: 0.9313 - val_loss: 0.1374 - val_accuracy: 0.9446
Epoch 18/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1377 - accuracy: 0.9451 - val_loss: 0.1400 - val_accuracy: 0.9476
Epoch 19/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1266 - accuracy: 0.9494 - val_loss: 0.1152 - val_accuracy: 0.9537
Epoch 20/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1491 - accuracy: 0.9403 - val_loss: 0.1179 - val_accuracy: 0.9617
Epoch 21/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1814 - accuracy: 0.9270 - val_loss: 0.1715 - val_accuracy: 0.9325
Epoch 22/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1479 - accuracy: 0.9456 - val_loss: 0.1345 - val_accuracy: 0.9577
Epoch 23/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1237 - accuracy: 0.9524 - val_loss: 0.1319 - val_accuracy: 0.9547
Epoch 24/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1231 - accuracy: 0.9532 - val_loss: 0.1285 - val_accuracy: 0.9537
Epoch 25/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1438 - accuracy: 0.9416 - val_loss: 0.1608 - val_accuracy: 0.9527
Epoch 26/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1283 - accuracy: 0.9507 - val_loss: 0.1381 - val_accuracy: 0.9547
Epoch 27/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1328 - accuracy: 0.9502 - val_loss: 0.2465 - val_accuracy: 0.9124
Epoch 28/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1247 - accuracy: 0.9519 - val_loss: 0.1199 - val_accuracy: 0.9607
Epoch 29/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1205 - accuracy: 0.9564 - val_loss: 0.1491 - val_accuracy: 0.9476
Epoch 30/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1106 - accuracy: 0.9582 - val_loss: 0.1310 - val_accuracy: 0.9476
Epoch 31/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1196 - accuracy: 0.9534 - val_loss: 0.1131 - val_accuracy: 0.9557
Epoch 32/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0984 - accuracy: 0.9615 - val_loss: 0.1019 - val_accuracy: 0.9597
Epoch 33/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1148 - accuracy: 0.9559 - val_loss: 0.1099 - val_accuracy: 0.9577
Epoch 34/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0978 - accuracy: 0.9625 - val_loss: 0.1254 - val_accuracy: 0.9597
Epoch 35/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1038 - accuracy: 0.9587 - val_loss: 0.1199 - val_accuracy: 0.9567
Epoch 36/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1312 - accuracy: 0.9504 - val_loss: 0.1004 - val_accuracy: 0.9637
Epoch 37/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1055 - accuracy: 0.9585 - val_loss: 0.0962 - val_accuracy: 0.9627
Epoch 38/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1045 - accuracy: 0.9605 - val_loss: 0.1090 - val_accuracy: 0.9527
Epoch 39/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1132 - accuracy: 0.9539 - val_loss: 0.1068 - val_accuracy: 0.9678
Epoch 40/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1000 - accuracy: 0.9605 - val_loss: 0.1088 - val_accuracy: 0.9668
Epoch 41/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0784 - accuracy: 0.9700 - val_loss: 0.0961 - val_accuracy: 0.9698
Epoch 42/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0828 - accuracy: 0.9673 - val_loss: 0.0922 - val_accuracy: 0.9648
Epoch 43/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0818 - accuracy: 0.9663 - val_loss: 0.1177 - val_accuracy: 0.9627
Epoch 44/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1093 - accuracy: 0.9620 - val_loss: 0.2127 - val_accuracy: 0.9325
Epoch 45/64
125/125 [==============================] - 0s 2ms/step - loss: 0.1043 - accuracy: 0.9600 - val_loss: 0.0935 - val_accuracy: 0.9698
Epoch 46/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0903 - accuracy: 0.9622 - val_loss: 0.1095 - val_accuracy: 0.9678
Epoch 47/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0885 - accuracy: 0.9668 - val_loss: 0.1039 - val_accuracy: 0.9698
Epoch 48/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0873 - accuracy: 0.9668 - val_loss: 0.1062 - val_accuracy: 0.9648
Epoch 49/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0764 - accuracy: 0.9710 - val_loss: 0.0919 - val_accuracy: 0.9738
Epoch 50/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0644 - accuracy: 0.9748 - val_loss: 0.0750 - val_accuracy: 0.9758
Epoch 51/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0823 - accuracy: 0.9698 - val_loss: 0.0886 - val_accuracy: 0.9678
Epoch 52/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0956 - accuracy: 0.9683 - val_loss: 0.1218 - val_accuracy: 0.9617
Epoch 53/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0734 - accuracy: 0.9716 - val_loss: 0.0879 - val_accuracy: 0.9738
Epoch 54/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0940 - accuracy: 0.9658 - val_loss: 0.0970 - val_accuracy: 0.9668
Epoch 55/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0748 - accuracy: 0.9703 - val_loss: 0.0792 - val_accuracy: 0.9778
Epoch 56/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0979 - accuracy: 0.9655 - val_loss: 0.0811 - val_accuracy: 0.9738
Epoch 57/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0746 - accuracy: 0.9741 - val_loss: 0.0909 - val_accuracy: 0.9658
Epoch 58/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0605 - accuracy: 0.9768 - val_loss: 0.1063 - val_accuracy: 0.9708
Epoch 59/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0853 - accuracy: 0.9680 - val_loss: 0.1078 - val_accuracy: 0.9648
Epoch 60/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0773 - accuracy: 0.9716 - val_loss: 0.1148 - val_accuracy: 0.9648
Epoch 61/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0725 - accuracy: 0.9746 - val_loss: 0.0854 - val_accuracy: 0.9789
Epoch 62/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0799 - accuracy: 0.9713 - val_loss: 0.0758 - val_accuracy: 0.9819
Epoch 63/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0615 - accuracy: 0.9766 - val_loss: 0.0962 - val_accuracy: 0.9688
Epoch 64/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0613 - accuracy: 0.9741 - val_loss: 0.0838 - val_accuracy: 0.9768
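
Re-compiling with a smaller learning rate and calling fit() again, as done in the next cell, is a manual form of learning-rate scheduling. A sketch of the same idea using built-in Keras callbacks (not what this notebook runs):

# a sketch: let callbacks lower the learning rate and stop training automatically
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping

callbacks = [
    ReduceLROnPlateau(monitor = 'val_loss', factor = 0.1, patience = 5),
    EarlyStopping(monitor = 'val_loss', patience = 10, restore_best_weights = True),
]
# model.fit(Xtrain, ytrain, validation_data = (Xtest, ytest), epochs = 128, callbacks = callbacks)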
In [16]:
# training with a learning rate of 0.001

model.compile(optimizer = Adam(0.001), loss = 'categorical_crossentropy', metrics = ['accuracy'])
history2 = model.fit(Xtrain, ytrain, validation_data= (Xtest, ytest), epochs = 64)
Epoch 1/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0394 - accuracy: 0.9854 - val_loss: 0.0649 - val_accuracy: 0.9819
Epoch 2/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0339 - accuracy: 0.9877 - val_loss: 0.0646 - val_accuracy: 0.9849
Epoch 3/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0321 - accuracy: 0.9887 - val_loss: 0.0601 - val_accuracy: 0.9859
Epoch 4/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0315 - accuracy: 0.9884 - val_loss: 0.0687 - val_accuracy: 0.9819
Epoch 5/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0303 - accuracy: 0.9894 - val_loss: 0.0599 - val_accuracy: 0.9869
Epoch 6/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0286 - accuracy: 0.9897 - val_loss: 0.0616 - val_accuracy: 0.9859
Epoch 7/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0286 - accuracy: 0.9904 - val_loss: 0.0627 - val_accuracy: 0.9849
Epoch 8/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0279 - accuracy: 0.9907 - val_loss: 0.0614 - val_accuracy: 0.9849
Epoch 9/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0272 - accuracy: 0.9897 - val_loss: 0.0662 - val_accuracy: 0.9849
Epoch 10/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0264 - accuracy: 0.9907 - val_loss: 0.0669 - val_accuracy: 0.9859
Epoch 11/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0269 - accuracy: 0.9899 - val_loss: 0.0705 - val_accuracy: 0.9839
Epoch 12/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0259 - accuracy: 0.9894 - val_loss: 0.0626 - val_accuracy: 0.9879
Epoch 13/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0251 - accuracy: 0.9912 - val_loss: 0.0648 - val_accuracy: 0.9859
Epoch 14/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0250 - accuracy: 0.9904 - val_loss: 0.0665 - val_accuracy: 0.9859
Epoch 15/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0240 - accuracy: 0.9917 - val_loss: 0.0692 - val_accuracy: 0.9859
Epoch 16/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0234 - accuracy: 0.9909 - val_loss: 0.0666 - val_accuracy: 0.9859
Epoch 17/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0227 - accuracy: 0.9924 - val_loss: 0.0662 - val_accuracy: 0.9859
Epoch 18/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0230 - accuracy: 0.9907 - val_loss: 0.0722 - val_accuracy: 0.9849
Epoch 19/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0234 - accuracy: 0.9904 - val_loss: 0.0665 - val_accuracy: 0.9849
Epoch 20/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0214 - accuracy: 0.9922 - val_loss: 0.0784 - val_accuracy: 0.9849
Epoch 21/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0223 - accuracy: 0.9912 - val_loss: 0.0733 - val_accuracy: 0.9849
Epoch 22/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0223 - accuracy: 0.9924 - val_loss: 0.0663 - val_accuracy: 0.9859
Epoch 23/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0208 - accuracy: 0.9919 - val_loss: 0.0794 - val_accuracy: 0.9839
Epoch 24/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0209 - accuracy: 0.9932 - val_loss: 0.0722 - val_accuracy: 0.9859
Epoch 25/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0207 - accuracy: 0.9924 - val_loss: 0.0754 - val_accuracy: 0.9849
Epoch 26/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0202 - accuracy: 0.9932 - val_loss: 0.0742 - val_accuracy: 0.9859
Epoch 27/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0196 - accuracy: 0.9935 - val_loss: 0.0878 - val_accuracy: 0.9839
Epoch 28/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0204 - accuracy: 0.9930 - val_loss: 0.0794 - val_accuracy: 0.9849
Epoch 29/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0192 - accuracy: 0.9935 - val_loss: 0.0776 - val_accuracy: 0.9859
Epoch 30/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0197 - accuracy: 0.9932 - val_loss: 0.0717 - val_accuracy: 0.9859
Epoch 31/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0191 - accuracy: 0.9927 - val_loss: 0.0719 - val_accuracy: 0.9859
Epoch 32/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0186 - accuracy: 0.9935 - val_loss: 0.0736 - val_accuracy: 0.9849
Epoch 33/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0194 - accuracy: 0.9935 - val_loss: 0.0754 - val_accuracy: 0.9859
Epoch 34/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0176 - accuracy: 0.9935 - val_loss: 0.0822 - val_accuracy: 0.9859
Epoch 35/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0170 - accuracy: 0.9940 - val_loss: 0.0837 - val_accuracy: 0.9849
Epoch 36/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0172 - accuracy: 0.9935 - val_loss: 0.0772 - val_accuracy: 0.9849
Epoch 37/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0187 - accuracy: 0.9932 - val_loss: 0.0822 - val_accuracy: 0.9839
Epoch 38/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0182 - accuracy: 0.9927 - val_loss: 0.0793 - val_accuracy: 0.9859
Epoch 39/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0163 - accuracy: 0.9940 - val_loss: 0.0946 - val_accuracy: 0.9829
Epoch 40/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0166 - accuracy: 0.9930 - val_loss: 0.0880 - val_accuracy: 0.9859
Epoch 41/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0175 - accuracy: 0.9932 - val_loss: 0.0786 - val_accuracy: 0.9859
Epoch 42/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0163 - accuracy: 0.9942 - val_loss: 0.0896 - val_accuracy: 0.9849
Epoch 43/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0156 - accuracy: 0.9924 - val_loss: 0.0833 - val_accuracy: 0.9859
Epoch 44/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0172 - accuracy: 0.9922 - val_loss: 0.0912 - val_accuracy: 0.9849
Epoch 45/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0153 - accuracy: 0.9945 - val_loss: 0.0875 - val_accuracy: 0.9859
Epoch 46/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0150 - accuracy: 0.9940 - val_loss: 0.0878 - val_accuracy: 0.9859
Epoch 47/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0149 - accuracy: 0.9947 - val_loss: 0.0883 - val_accuracy: 0.9849
Epoch 48/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0154 - accuracy: 0.9940 - val_loss: 0.0839 - val_accuracy: 0.9819
Epoch 49/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0156 - accuracy: 0.9945 - val_loss: 0.0840 - val_accuracy: 0.9849
Epoch 50/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0141 - accuracy: 0.9935 - val_loss: 0.0921 - val_accuracy: 0.9869
Epoch 51/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0150 - accuracy: 0.9945 - val_loss: 0.0860 - val_accuracy: 0.9859
Epoch 52/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0144 - accuracy: 0.9947 - val_loss: 0.0883 - val_accuracy: 0.9859
Epoch 53/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0145 - accuracy: 0.9947 - val_loss: 0.0810 - val_accuracy: 0.9869
Epoch 54/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0144 - accuracy: 0.9947 - val_loss: 0.0933 - val_accuracy: 0.9849
Epoch 55/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0139 - accuracy: 0.9930 - val_loss: 0.0939 - val_accuracy: 0.9849
Epoch 56/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0138 - accuracy: 0.9952 - val_loss: 0.0925 - val_accuracy: 0.9839
Epoch 57/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0139 - accuracy: 0.9940 - val_loss: 0.0844 - val_accuracy: 0.9859
Epoch 58/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0132 - accuracy: 0.9952 - val_loss: 0.0914 - val_accuracy: 0.9829
Epoch 59/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0145 - accuracy: 0.9940 - val_loss: 0.1001 - val_accuracy: 0.9849
Epoch 60/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0137 - accuracy: 0.9947 - val_loss: 0.0884 - val_accuracy: 0.9869
Epoch 61/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0132 - accuracy: 0.9940 - val_loss: 0.0845 - val_accuracy: 0.9849
Epoch 62/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0141 - accuracy: 0.9942 - val_loss: 0.0952 - val_accuracy: 0.9859
Epoch 63/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0130 - accuracy: 0.9950 - val_loss: 0.0989 - val_accuracy: 0.9849
Epoch 64/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0132 - accuracy: 0.9950 - val_loss: 0.0877 - val_accuracy: 0.9849
In [17]:
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 128)               2816      
_________________________________________________________________
dense_1 (Dense)              (None, 64)                8256      
_________________________________________________________________
dense_2 (Dense)              (None, 3)                 195       
=================================================================
Total params: 11,267
Trainable params: 11,267
Non-trainable params: 0
_________________________________________________________________
In [18]:
model.evaluate(Xtest, ytest)
32/32 [==============================] - 0s 1ms/step - loss: 0.0877 - accuracy: 0.9849
Out[18]:
[0.08774606883525848, 0.9848942756652832]
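
Because the test set was drawn from the upsampled data, overall accuracy can hide per-class behaviour. A sketch of a per-class breakdown using sklearn's classification_report (class_names was defined in cell [3]; not part of the original notebook):

# a sketch: per-class precision/recall/F1 on the test set
from sklearn.metrics import classification_report

ypred = np.argmax(model.predict(Xtest), axis = 1)
ytrue = np.argmax(np.array(ytest), axis = 1)
print(classification_report(ytrue, ypred, target_names = class_names))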

Plotting the metrics

In [19]:
def plot(history1, history2, variable1, variable2):
    # combining metrics from both trainings
    # (copy the lists first so the history dicts are not mutated in place)
    var1_history = list(history1[variable1]) + list(history2[variable1])
    var2_history = list(history1[variable2]) + list(history2[variable2])
    
    # plotting them against the epoch index
    plt.plot(range(len(var1_history)), var1_history)
    plt.plot(range(len(var2_history)), var2_history)
    plt.legend([variable1, variable2])
    plt.title(variable1)
In [20]:
plot(history1.history, history2.history, "accuracy", 'val_accuracy')
In [21]:
plot(history1.history, history2.history, "loss", 'val_loss')

Prediction

In [22]:
# pick a random test data sample
x = random.randint(0, len(Xtest) - 1)

output = model.predict(Xtest[x].reshape(1, -1))    # reshape the 21-feature vector (21,) --> (1, 21) to form a batch of one
pred = np.argmax(output[0])    # index of the highest probability
print("Predicted: ", class_names[pred])    # picking the label from class_names based on the model output

output_true = np.array(ytest)[x]

print("True: ", class_names[np.argmax(output_true)])
print("Probability: ", output[0][pred])
Predicted:  Normal
True:  Normal
Probability:  0.9964545

deepC

In [23]:
model.save('fetal_health.h5')

!deepCC fetal_health.h5
[INFO]
Reading [keras model] 'fetal_health.h5'
[SUCCESS]
Saved 'fetal_health_deepC/fetal_health.onnx'
[INFO]
Reading [onnx model] 'fetal_health_deepC/fetal_health.onnx'
[INFO]
Model info:
  ir_vesion : 4
  doc       : 
[WARNING]
[ONNX]: terminal (input/output) dense_input's shape is less than 1. Changing it to 1.
[WARNING]
[ONNX]: terminal (input/output) dense_2's shape is less than 1. Changing it to 1.
WARN (GRAPH): found operator node with the same name (dense_2) as io node.
[INFO]
Running DNNC graph sanity check ...
[SUCCESS]
Passed sanity check.
[INFO]
Writing C++ file 'fetal_health_deepC/fetal_health.cpp'
[INFO]
deepSea model files are ready in 'fetal_health_deepC/' 
[RUNNING COMMAND]
g++ -std=c++11 -O3 -fno-rtti -fno-exceptions -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -isystem /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 "fetal_health_deepC/fetal_health.cpp" -D_AITS_MAIN -o "fetal_health_deepC/fetal_health.exe"
[RUNNING COMMAND]
size "fetal_health_deepC/fetal_health.exe"
   text	   data	    bss	    dec	    hex	filename
 168293	   2984	    760	 172037	  2a005	fetal_health_deepC/fetal_health.exe
[SUCCESS]
Saved model as executable "fetal_health_deepC/fetal_health.exe"
In [27]:
# pick a random test data sample
x = random.randint(0, len(Xtest) - 1)

np.savetxt('sample.data', Xtest[x])    # write the xth sample to a text file

# run exe with input
!fetal_health_deepC/fetal_health.exe sample.data

# show predicted output
nn_out = np.loadtxt('deepSea_result_1.out')

pred = np.argmax(nn_out)    # index of the highest probability
print("Predicted: ", class_names[pred])    # picking the label from class_names based on the model output

output_true = np.array(ytest)[x]

print("True: ", class_names[np.argmax(output_true)])
print("Probability: ", nn_out[pred])
writing file deepSea_result_1.out.
Predicted:  Pathological
True:  Pathological
Probability:  1.0