
Fetal health detection

Credit: AITS Cainvas Community

Photo by Vivi Garleone on Dribbble

Classifying fetal health in order to prevent child and maternal mortality.

The United Nations' Sustainable Development Goals identify the reduction of child mortality as an indicator of human progress; the goal also covers maternal mortality.

Most of the recorded losses occurred in low-resource regions and could have been prevented.

Cardiotocography (CTG) measures the fetal heart rate, fetal movements and uterine contractions, continuously monitoring the health of mother and child. The equipment used, called a cardiotocograph, works using ultrasound pulses. It is a simple and cost-effective way to assess fetal health, allowing professionals to take necessary action.

In [1]:
import pandas as pd
import numpy as np
from keras.optimizers import Adam
from keras.models import Sequential
from keras.layers import Dense
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import random

Dataset

Citation: Ayres-de-Campos et al. (2000). SisPorto 2.0: A Program for Automated Analysis of Cardiotocograms. J Matern Fetal Med 9:311-318.

The dataset has 2126 samples containing features extracted from cardiotocogram exams.

The data was labelled by expert obstetricians into 3 classes.

In [2]:
df = pd.read_csv('https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/fetal_health.csv')
df
Out[2]:
baseline value accelerations fetal_movement uterine_contractions light_decelerations severe_decelerations prolongued_decelerations abnormal_short_term_variability mean_value_of_short_term_variability percentage_of_time_with_abnormal_long_term_variability ... histogram_min histogram_max histogram_number_of_peaks histogram_number_of_zeroes histogram_mode histogram_mean histogram_median histogram_variance histogram_tendency fetal_health
0 120.0 0.000 0.000 0.000 0.000 0.0 0.0 73.0 0.5 43.0 ... 62.0 126.0 2.0 0.0 120.0 137.0 121.0 73.0 1.0 2.0
1 132.0 0.006 0.000 0.006 0.003 0.0 0.0 17.0 2.1 0.0 ... 68.0 198.0 6.0 1.0 141.0 136.0 140.0 12.0 0.0 1.0
2 133.0 0.003 0.000 0.008 0.003 0.0 0.0 16.0 2.1 0.0 ... 68.0 198.0 5.0 1.0 141.0 135.0 138.0 13.0 0.0 1.0
3 134.0 0.003 0.000 0.008 0.003 0.0 0.0 16.0 2.4 0.0 ... 53.0 170.0 11.0 0.0 137.0 134.0 137.0 13.0 1.0 1.0
4 132.0 0.007 0.000 0.008 0.000 0.0 0.0 16.0 2.4 0.0 ... 53.0 170.0 9.0 0.0 137.0 136.0 138.0 11.0 1.0 1.0
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
2121 140.0 0.000 0.000 0.007 0.000 0.0 0.0 79.0 0.2 25.0 ... 137.0 177.0 4.0 0.0 153.0 150.0 152.0 2.0 0.0 2.0
2122 140.0 0.001 0.000 0.007 0.000 0.0 0.0 78.0 0.4 22.0 ... 103.0 169.0 6.0 0.0 152.0 148.0 151.0 3.0 1.0 2.0
2123 140.0 0.001 0.000 0.007 0.000 0.0 0.0 79.0 0.4 20.0 ... 103.0 170.0 5.0 0.0 153.0 148.0 152.0 4.0 1.0 2.0
2124 140.0 0.001 0.000 0.006 0.000 0.0 0.0 78.0 0.4 27.0 ... 103.0 169.0 6.0 0.0 152.0 147.0 151.0 4.0 1.0 2.0
2125 142.0 0.002 0.002 0.008 0.000 0.0 0.0 74.0 0.4 36.0 ... 117.0 159.0 2.0 1.0 145.0 143.0 145.0 1.0 0.0 1.0

2126 rows × 22 columns

In [3]:
# Here are the class labels according to the metadata

class_names = ['Normal', 'Suspect', 'Pathological']
In [4]:
# Let's see the spread of values across classes

df['fetal_health'].value_counts()
Out[4]:
1.0    1655
2.0     295
3.0     176
Name: fetal_health, dtype: int64

This dataset is heavily unbalanced.

In order to balance the dataset, there are two options:

  • upsampling - resample the minority classes until their counts equal that of the class with the highest count (here, 1655).
  • downsampling - pick n samples from each class, where n is the number of samples in the class with the lowest count (here, 176).

Here, we will be upsampling; a minimal downsampling sketch follows for comparison.
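For reference, this is roughly what the downsampling alternative would look like (a sketch only, not run here; random_state = 42 is an arbitrary choice):

# A minimal downsampling sketch: n = size of the smallest class (176 here)
n = int(df['fetal_health'].value_counts().min())
df_down = pd.concat([
    df[df['fetal_health'] == c].sample(n, random_state = 42)
    for c in (1.0, 2.0, 3.0)
])
print('Downsampled total - ', len(df_down))    # 3 * 176 = 528 samples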

In [5]:
# separating into 3 dataframes, one for each class 

df1 = df[df['fetal_health'] == 1.0]
df2 = df[df['fetal_health'] == 2.0]
df3 = df[df['fetal_health'] == 3.0]
In [6]:
print("Number of samples in:")
print("Class label 1 - ", len(df1))
print("Class label 2 - ", len(df2))
print("Class label 3 - ", len(df3))

# Upsampling 

df2 = df2.sample(len(df1), replace = True)    # replace = True samples with replacement, needed to draw more samples than the class contains
df3 = df3.sample(len(df1), replace = True)

print('\nAfter resampling - ')

print("Number of samples in:")
print("Class label 1 - ", len(df1))
print("Class label 2 - ", len(df2))
print("Class label 3 - ", len(df3))
Number of samples in:
Class label 1 -  1655
Class label 2 -  295
Class label 3 -  176

After resampling - 
Number of samples in:
Class label 1 -  1655
Class label 2 -  1655
Class label 3 -  1655
In [7]:
# concatenate to form a single dataframe

dfx = pd.concat([df1, df2, df3])    # DataFrame.append is deprecated in newer pandas

print('Total number of samples - ', len(dfx))
Total number of samples -  4965

Preprocessing

One hot encoding

In [8]:
# Defining input and output columns

inputc = dfx.columns[:-1]
outputc = [1, 2, 3]    # to be used after one hot encoding

print("Input columns - ", list(inputc))
print("\nOutput columns - ", outputc)
Input columns -  ['baseline value', 'accelerations', 'fetal_movement', 'uterine_contractions', 'light_decelerations', 'severe_decelerations', 'prolongued_decelerations', 'abnormal_short_term_variability', 'mean_value_of_short_term_variability', 'percentage_of_time_with_abnormal_long_term_variability', 'mean_value_of_long_term_variability', 'histogram_width', 'histogram_min', 'histogram_max', 'histogram_number_of_peaks', 'histogram_number_of_zeroes', 'histogram_mode', 'histogram_mean', 'histogram_median', 'histogram_variance', 'histogram_tendency']

Output columns -  [1, 2, 3]

Since this is a classification problem, the model's output, currently an integer class label, should be one-hot encoded.

In [9]:
y = pd.get_dummies(dfx.fetal_health)

y
Out[9]:
1.0 2.0 3.0
1 1 0 0
2 1 0 0
3 1 0 0
4 1 0 0
12 1 0 0
... ... ... ...
469 0 0 1
697 0 0 1
468 0 0 1
654 0 0 1
24 0 0 1

4965 rows × 3 columns

In [10]:
# adding as columns to the dataframe

for x in outputc:
    dfx[x] = y[x]
    
dfx    

# as noted above, the output columns are labelled 1, 2, 3
Out[10]:
baseline value accelerations fetal_movement uterine_contractions light_decelerations severe_decelerations prolongued_decelerations abnormal_short_term_variability mean_value_of_short_term_variability percentage_of_time_with_abnormal_long_term_variability ... histogram_number_of_zeroes histogram_mode histogram_mean histogram_median histogram_variance histogram_tendency fetal_health 1 2 3
1 132.0 0.006 0.000 0.006 0.003 0.0 0.000 17.0 2.1 0.0 ... 1.0 141.0 136.0 140.0 12.0 0.0 1.0 1 0 0
2 133.0 0.003 0.000 0.008 0.003 0.0 0.000 16.0 2.1 0.0 ... 1.0 141.0 135.0 138.0 13.0 0.0 1.0 1 0 0
3 134.0 0.003 0.000 0.008 0.003 0.0 0.000 16.0 2.4 0.0 ... 0.0 137.0 134.0 137.0 13.0 1.0 1.0 1 0 0
4 132.0 0.007 0.000 0.008 0.000 0.0 0.000 16.0 2.4 0.0 ... 0.0 137.0 136.0 138.0 11.0 1.0 1.0 1 0 0
12 131.0 0.005 0.072 0.008 0.003 0.0 0.000 28.0 1.4 0.0 ... 0.0 135.0 134.0 137.0 7.0 1.0 1.0 1 0 0
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
469 151.0 0.000 0.002 0.000 0.000 0.0 0.000 81.0 0.4 28.0 ... 0.0 152.0 152.0 153.0 1.0 0.0 3.0 0 0 1
697 131.0 0.001 0.369 0.003 0.003 0.0 0.002 29.0 2.0 0.0 ... 0.0 129.0 112.0 128.0 103.0 1.0 3.0 0 0 1
468 144.0 0.000 0.002 0.002 0.000 0.0 0.000 84.0 0.3 34.0 ... 0.0 144.0 143.0 145.0 0.0 -1.0 3.0 0 0 1
654 123.0 0.000 0.000 0.000 0.000 0.0 0.000 71.0 0.3 77.0 ... 0.0 123.0 123.0 124.0 0.0 1.0 3.0 0 0 1
24 128.0 0.000 0.000 0.003 0.000 0.0 0.000 86.0 0.3 79.0 ... 0.0 128.0 126.0 129.0 0.0 1.0 3.0 0 0 1

4965 rows × 25 columns

Train test split

In [11]:
# Splitting into train and test using 80-20 split

traindf, testdf = train_test_split(dfx.sample(frac=1), test_size = 0.2)    # dfx.sample(frac=1) shuffles the dataframe first; train_test_split shuffles again by default

print('Number of samples in:')
print('Train set - ' , len(traindf))
print('Test set - ', len(testdf))
Number of samples in:
Train set -  3972
Test set -  993
In [12]:
# Splitting into X and y arrays for preprocessing purposes

Xtrain, ytrain = traindf[inputc], traindf[outputc]
Xtest, ytest = testdf[inputc], testdf[outputc]

Scaling the values

In [13]:
# Each feature has a different range. 
# Using min_max_scaler to scale them to values in the range [0,1].

min_max_scaler = MinMaxScaler()

# Fit on training set alone
Xtrain = min_max_scaler.fit_transform(Xtrain)

# Use it to transform val and test input
Xtest = min_max_scaler.transform(Xtest)
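For reference, MinMaxScaler computes x' = (x - min) / (max - min) per feature, with min and max taken from the training set alone; a minimal hand-written equivalent (sketch only, not run here):

# Sketch of what MinMaxScaler does, written out by hand
mins = traindf[inputc].min()
maxs = traindf[inputc].max()
Xtrain_manual = (traindf[inputc] - mins) / (maxs - mins)
Xtest_manual = (testdf[inputc] - mins) / (maxs - mins)    # test values can fall slightly outside [0, 1]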

The model

In [14]:
model = Sequential([
    Dense(128, activation = 'relu'),    # input dimension (21 features) is inferred on the first fit
    Dense(64, activation = 'relu'),
    Dense(3, activation = 'softmax'),    # one output probability per class
])
In [15]:
# training with a learning rate of 0.01

model.compile(optimizer = Adam(0.01), loss = 'categorical_crossentropy', metrics = ['accuracy'])
history1 = model.fit(Xtrain, ytrain, validation_data= (Xtest, ytest), epochs = 64)
Epoch 1/64
125/125 [==============================] - 0s 3ms/step - loss: 0.4736 - accuracy: 0.8122 - val_loss: 0.3116 - val_accuracy: 0.8771
Epoch 2/64
125/125 [==============================] - 0s 1ms/step - loss: 0.3358 - accuracy: 0.8615 - val_loss: 0.3877 - val_accuracy: 0.8338
Epoch 3/64
125/125 [==============================] - 0s 1ms/step - loss: 0.3191 - accuracy: 0.8698 - val_loss: 0.2946 - val_accuracy: 0.8560
Epoch 4/64
125/125 [==============================] - 0s 1ms/step - loss: 0.3002 - accuracy: 0.8779 - val_loss: 0.2765 - val_accuracy: 0.8802
Epoch 5/64
125/125 [==============================] - 0s 1ms/step - loss: 0.2628 - accuracy: 0.8950 - val_loss: 0.2238 - val_accuracy: 0.9053
Epoch 6/64
125/125 [==============================] - 0s 1ms/step - loss: 0.2531 - accuracy: 0.8965 - val_loss: 0.2138 - val_accuracy: 0.9154
Epoch 7/64
125/125 [==============================] - 0s 1ms/step - loss: 0.2389 - accuracy: 0.9084 - val_loss: 0.1795 - val_accuracy: 0.9265
Epoch 8/64
125/125 [==============================] - 0s 1ms/step - loss: 0.2272 - accuracy: 0.9119 - val_loss: 0.1674 - val_accuracy: 0.9376
Epoch 9/64
125/125 [==============================] - 0s 1ms/step - loss: 0.2168 - accuracy: 0.9152 - val_loss: 0.2117 - val_accuracy: 0.9074
Epoch 10/64
125/125 [==============================] - 0s 1ms/step - loss: 0.2034 - accuracy: 0.9245 - val_loss: 0.1604 - val_accuracy: 0.9406
Epoch 11/64
125/125 [==============================] - 0s 1ms/step - loss: 0.2080 - accuracy: 0.9204 - val_loss: 0.2478 - val_accuracy: 0.9043
Epoch 12/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1853 - accuracy: 0.9265 - val_loss: 0.1471 - val_accuracy: 0.9396
Epoch 13/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1732 - accuracy: 0.9325 - val_loss: 0.1803 - val_accuracy: 0.9345
Epoch 14/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1626 - accuracy: 0.9361 - val_loss: 0.1488 - val_accuracy: 0.9466
Epoch 15/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1727 - accuracy: 0.9338 - val_loss: 0.1750 - val_accuracy: 0.9345
Epoch 16/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1661 - accuracy: 0.9368 - val_loss: 0.2552 - val_accuracy: 0.8983
Epoch 17/64
125/125 [==============================] - 0s 1ms/step - loss: 0.2236 - accuracy: 0.9149 - val_loss: 0.1693 - val_accuracy: 0.9416
Epoch 18/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1672 - accuracy: 0.9401 - val_loss: 0.3445 - val_accuracy: 0.9013
Epoch 19/64
125/125 [==============================] - 0s 1ms/step - loss: 0.2194 - accuracy: 0.9189 - val_loss: 0.1558 - val_accuracy: 0.9366
Epoch 20/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1520 - accuracy: 0.9408 - val_loss: 0.1513 - val_accuracy: 0.9456
Epoch 21/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1366 - accuracy: 0.9476 - val_loss: 0.1211 - val_accuracy: 0.9678
Epoch 22/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1653 - accuracy: 0.9335 - val_loss: 0.1075 - val_accuracy: 0.9617
Epoch 23/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1348 - accuracy: 0.9461 - val_loss: 0.1204 - val_accuracy: 0.9537
Epoch 24/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1468 - accuracy: 0.9476 - val_loss: 0.0973 - val_accuracy: 0.9718
Epoch 25/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1881 - accuracy: 0.9293 - val_loss: 0.1341 - val_accuracy: 0.9547
Epoch 26/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1380 - accuracy: 0.9464 - val_loss: 0.1273 - val_accuracy: 0.9597
Epoch 27/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1326 - accuracy: 0.9496 - val_loss: 0.0868 - val_accuracy: 0.9728
Epoch 28/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1495 - accuracy: 0.9423 - val_loss: 0.1077 - val_accuracy: 0.9678
Epoch 29/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1215 - accuracy: 0.9562 - val_loss: 0.0786 - val_accuracy: 0.9748
Epoch 30/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1218 - accuracy: 0.9529 - val_loss: 0.0884 - val_accuracy: 0.9668
Epoch 31/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1222 - accuracy: 0.9524 - val_loss: 0.1154 - val_accuracy: 0.9587
Epoch 32/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1185 - accuracy: 0.9532 - val_loss: 0.0855 - val_accuracy: 0.9728
Epoch 33/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1046 - accuracy: 0.9615 - val_loss: 0.0872 - val_accuracy: 0.9668
Epoch 34/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1216 - accuracy: 0.9529 - val_loss: 0.0898 - val_accuracy: 0.9678
Epoch 35/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1039 - accuracy: 0.9595 - val_loss: 0.1140 - val_accuracy: 0.9547
Epoch 36/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1004 - accuracy: 0.9600 - val_loss: 0.0828 - val_accuracy: 0.9728
Epoch 37/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1812 - accuracy: 0.9386 - val_loss: 0.1206 - val_accuracy: 0.9567
Epoch 38/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1161 - accuracy: 0.9542 - val_loss: 0.0952 - val_accuracy: 0.9668
Epoch 39/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0979 - accuracy: 0.9627 - val_loss: 0.0855 - val_accuracy: 0.9748
Epoch 40/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1214 - accuracy: 0.9532 - val_loss: 0.1461 - val_accuracy: 0.9305
Epoch 41/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1128 - accuracy: 0.9585 - val_loss: 0.0707 - val_accuracy: 0.9748
Epoch 42/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1083 - accuracy: 0.9577 - val_loss: 0.1169 - val_accuracy: 0.9627
Epoch 43/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1031 - accuracy: 0.9602 - val_loss: 0.0949 - val_accuracy: 0.9698
Epoch 44/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0850 - accuracy: 0.9655 - val_loss: 0.0683 - val_accuracy: 0.9768
Epoch 45/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0920 - accuracy: 0.9640 - val_loss: 0.0708 - val_accuracy: 0.9799
Epoch 46/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0907 - accuracy: 0.9625 - val_loss: 0.0794 - val_accuracy: 0.9758
Epoch 47/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0834 - accuracy: 0.9660 - val_loss: 0.0832 - val_accuracy: 0.9738
Epoch 48/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0911 - accuracy: 0.9605 - val_loss: 0.1339 - val_accuracy: 0.9658
Epoch 49/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0971 - accuracy: 0.9622 - val_loss: 0.1002 - val_accuracy: 0.9658
Epoch 50/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1299 - accuracy: 0.9569 - val_loss: 0.1041 - val_accuracy: 0.9648
Epoch 51/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1138 - accuracy: 0.9554 - val_loss: 0.0717 - val_accuracy: 0.9708
Epoch 52/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0851 - accuracy: 0.9655 - val_loss: 0.0799 - val_accuracy: 0.9768
Epoch 53/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0860 - accuracy: 0.9685 - val_loss: 0.0697 - val_accuracy: 0.9799
Epoch 54/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0761 - accuracy: 0.9718 - val_loss: 0.0881 - val_accuracy: 0.9708
Epoch 55/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1388 - accuracy: 0.9489 - val_loss: 0.0763 - val_accuracy: 0.9748
Epoch 56/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1039 - accuracy: 0.9587 - val_loss: 0.0788 - val_accuracy: 0.9668
Epoch 57/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0885 - accuracy: 0.9675 - val_loss: 0.0849 - val_accuracy: 0.9648
Epoch 58/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0692 - accuracy: 0.9758 - val_loss: 0.0505 - val_accuracy: 0.9789
Epoch 59/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0715 - accuracy: 0.9731 - val_loss: 0.0588 - val_accuracy: 0.9789
Epoch 60/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0635 - accuracy: 0.9758 - val_loss: 0.1150 - val_accuracy: 0.9577
Epoch 61/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0839 - accuracy: 0.9668 - val_loss: 0.0995 - val_accuracy: 0.9637
Epoch 62/64
125/125 [==============================] - 0s 1ms/step - loss: 0.1002 - accuracy: 0.9637 - val_loss: 0.0667 - val_accuracy: 0.9748
Epoch 63/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0828 - accuracy: 0.9693 - val_loss: 0.0890 - val_accuracy: 0.9748
Epoch 64/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0817 - accuracy: 0.9690 - val_loss: 0.0536 - val_accuracy: 0.9829
In [16]:
# continuing training of the same model with a lower learning rate of 0.001

model.compile(optimizer = Adam(0.001), loss = 'categorical_crossentropy', metrics = ['accuracy'])
history2 = model.fit(Xtrain, ytrain, validation_data= (Xtest, ytest), epochs = 64)
Epoch 1/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0506 - accuracy: 0.9794 - val_loss: 0.0491 - val_accuracy: 0.9869
Epoch 2/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0454 - accuracy: 0.9814 - val_loss: 0.0453 - val_accuracy: 0.9919
Epoch 3/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0431 - accuracy: 0.9831 - val_loss: 0.0446 - val_accuracy: 0.9919
Epoch 4/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0408 - accuracy: 0.9839 - val_loss: 0.0450 - val_accuracy: 0.9899
Epoch 5/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0417 - accuracy: 0.9839 - val_loss: 0.0454 - val_accuracy: 0.9909
Epoch 6/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0391 - accuracy: 0.9854 - val_loss: 0.0424 - val_accuracy: 0.9909
Epoch 7/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0390 - accuracy: 0.9834 - val_loss: 0.0431 - val_accuracy: 0.9899
Epoch 8/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0375 - accuracy: 0.9854 - val_loss: 0.0456 - val_accuracy: 0.9909
Epoch 9/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0376 - accuracy: 0.9856 - val_loss: 0.0384 - val_accuracy: 0.9889
Epoch 10/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0362 - accuracy: 0.9864 - val_loss: 0.0435 - val_accuracy: 0.9919
Epoch 11/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0377 - accuracy: 0.9846 - val_loss: 0.0395 - val_accuracy: 0.9899
Epoch 12/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0345 - accuracy: 0.9856 - val_loss: 0.0424 - val_accuracy: 0.9899
Epoch 13/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0361 - accuracy: 0.9844 - val_loss: 0.0373 - val_accuracy: 0.9909
Epoch 14/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0345 - accuracy: 0.9864 - val_loss: 0.0422 - val_accuracy: 0.9909
Epoch 15/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0328 - accuracy: 0.9869 - val_loss: 0.0416 - val_accuracy: 0.9909
Epoch 16/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0324 - accuracy: 0.9892 - val_loss: 0.0518 - val_accuracy: 0.9879
Epoch 17/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0319 - accuracy: 0.9887 - val_loss: 0.0420 - val_accuracy: 0.9899
Epoch 18/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0311 - accuracy: 0.9869 - val_loss: 0.0429 - val_accuracy: 0.9909
Epoch 19/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0317 - accuracy: 0.9877 - val_loss: 0.0428 - val_accuracy: 0.9919
Epoch 20/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0301 - accuracy: 0.9877 - val_loss: 0.0444 - val_accuracy: 0.9899
Epoch 21/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0300 - accuracy: 0.9884 - val_loss: 0.0488 - val_accuracy: 0.9899
Epoch 22/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0306 - accuracy: 0.9887 - val_loss: 0.0437 - val_accuracy: 0.9919
Epoch 23/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0298 - accuracy: 0.9874 - val_loss: 0.0419 - val_accuracy: 0.9919
Epoch 24/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0294 - accuracy: 0.9874 - val_loss: 0.0414 - val_accuracy: 0.9909
Epoch 25/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0275 - accuracy: 0.9887 - val_loss: 0.0439 - val_accuracy: 0.9909
Epoch 26/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0285 - accuracy: 0.9884 - val_loss: 0.0456 - val_accuracy: 0.9899
Epoch 27/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0278 - accuracy: 0.9889 - val_loss: 0.0466 - val_accuracy: 0.9899
Epoch 28/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0273 - accuracy: 0.9892 - val_loss: 0.0441 - val_accuracy: 0.9919
Epoch 29/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0255 - accuracy: 0.9897 - val_loss: 0.0463 - val_accuracy: 0.9909
Epoch 30/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0253 - accuracy: 0.9907 - val_loss: 0.0532 - val_accuracy: 0.9899
Epoch 31/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0265 - accuracy: 0.9899 - val_loss: 0.0423 - val_accuracy: 0.9919
Epoch 32/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0250 - accuracy: 0.9894 - val_loss: 0.0473 - val_accuracy: 0.9909
Epoch 33/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0241 - accuracy: 0.9889 - val_loss: 0.0418 - val_accuracy: 0.9930
Epoch 34/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0240 - accuracy: 0.9909 - val_loss: 0.0399 - val_accuracy: 0.9909
Epoch 35/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0236 - accuracy: 0.9907 - val_loss: 0.0444 - val_accuracy: 0.9909
Epoch 36/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0231 - accuracy: 0.9897 - val_loss: 0.0505 - val_accuracy: 0.9919
Epoch 37/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0228 - accuracy: 0.9919 - val_loss: 0.0507 - val_accuracy: 0.9919
Epoch 38/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0229 - accuracy: 0.9904 - val_loss: 0.0627 - val_accuracy: 0.9869
Epoch 39/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0212 - accuracy: 0.9919 - val_loss: 0.0512 - val_accuracy: 0.9889
Epoch 40/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0213 - accuracy: 0.9904 - val_loss: 0.0476 - val_accuracy: 0.9899
Epoch 41/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0222 - accuracy: 0.9902 - val_loss: 0.0446 - val_accuracy: 0.9930
Epoch 42/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0217 - accuracy: 0.9914 - val_loss: 0.0600 - val_accuracy: 0.9899
Epoch 43/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0239 - accuracy: 0.9909 - val_loss: 0.0512 - val_accuracy: 0.9899
Epoch 44/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0213 - accuracy: 0.9904 - val_loss: 0.0462 - val_accuracy: 0.9919
Epoch 45/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0201 - accuracy: 0.9899 - val_loss: 0.0489 - val_accuracy: 0.9919
Epoch 46/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0184 - accuracy: 0.9930 - val_loss: 0.0521 - val_accuracy: 0.9899
Epoch 47/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0199 - accuracy: 0.9917 - val_loss: 0.0438 - val_accuracy: 0.9909
Epoch 48/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0190 - accuracy: 0.9919 - val_loss: 0.0525 - val_accuracy: 0.9889
Epoch 49/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0186 - accuracy: 0.9917 - val_loss: 0.0504 - val_accuracy: 0.9899
Epoch 50/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0206 - accuracy: 0.9912 - val_loss: 0.0528 - val_accuracy: 0.9930
Epoch 51/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0184 - accuracy: 0.9924 - val_loss: 0.0453 - val_accuracy: 0.9930
Epoch 52/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0181 - accuracy: 0.9922 - val_loss: 0.0573 - val_accuracy: 0.9879
Epoch 53/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0205 - accuracy: 0.9914 - val_loss: 0.0510 - val_accuracy: 0.9899
Epoch 54/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0181 - accuracy: 0.9930 - val_loss: 0.0489 - val_accuracy: 0.9909
Epoch 55/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0171 - accuracy: 0.9930 - val_loss: 0.0508 - val_accuracy: 0.9919
Epoch 56/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0168 - accuracy: 0.9919 - val_loss: 0.0506 - val_accuracy: 0.9919
Epoch 57/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0181 - accuracy: 0.9922 - val_loss: 0.0490 - val_accuracy: 0.9919
Epoch 58/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0160 - accuracy: 0.9940 - val_loss: 0.0491 - val_accuracy: 0.9909
Epoch 59/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0175 - accuracy: 0.9922 - val_loss: 0.0568 - val_accuracy: 0.9909
Epoch 60/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0168 - accuracy: 0.9922 - val_loss: 0.0492 - val_accuracy: 0.9909
Epoch 61/64
125/125 [==============================] - 0s 2ms/step - loss: 0.0157 - accuracy: 0.9924 - val_loss: 0.0497 - val_accuracy: 0.9919
Epoch 62/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0161 - accuracy: 0.9947 - val_loss: 0.0474 - val_accuracy: 0.9930
Epoch 63/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0159 - accuracy: 0.9927 - val_loss: 0.0516 - val_accuracy: 0.9919
Epoch 64/64
125/125 [==============================] - 0s 1ms/step - loss: 0.0162 - accuracy: 0.9922 - val_loss: 0.0693 - val_accuracy: 0.9859
In [17]:
model.evaluate(Xtest, ytest)
32/32 [==============================] - 0s 758us/step - loss: 0.0693 - accuracy: 0.9859
Out[17]:
[0.0693005695939064, 0.9859012961387634]
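A single accuracy number can hide per-class errors, especially given the original class imbalance. A quick per-class sanity check one could add here (a sketch, not part of the original run), using sklearn's confusion_matrix:

from sklearn.metrics import confusion_matrix

# rows = true class, columns = predicted class (order: Normal, Suspect, Pathological)
y_pred = np.argmax(model.predict(Xtest), axis = 1)
y_true = np.argmax(np.array(ytest), axis = 1)
print(confusion_matrix(y_true, y_pred))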

Plotting the metrics

In [18]:
def plot(history1, history2, variable1, variable2):
    # combining metrics from both trainings    
    var1_history = history1[variable1]
    var1_history.extend(history2[variable1])
    
    var2_history = history1[variable2]
    var2_history.extend(history2[variable2])
    
    # plotting them
    plt.plot(range(len(var1_history)), var1_history)
    plt.plot(range(len(var2_history)), var2_history)
    plt.legend([variable1, variable2])
    plt.title(variable1)
In [19]:
plot(history1.history, history2.history, "accuracy", 'val_accuracy')
In [20]:
plot(history1.history, history2.history, "loss", 'val_loss')

Prediction

In [21]:
# pick a random sample from the test set
x = random.randint(0, len(Xtest) - 1)

output = model.predict(Xtest[x].reshape(1, -1))    # reshape (21,) --> (1, 21) to form a batch of one
pred = np.argmax(output[0])    # index of the highest probability
print("Predicted: ", class_names[pred])    # picking the label from class_names based on the model output

output_true = np.array(ytest)[x]

print("True: ", class_names[np.argmax(output_true)])
print("Probability: ", output[0][pred])
Predicted:  Pathological
True:  Pathological
Probability:  1.0

deepC

In [22]:
model.save('fetal_health.h5')

!deepCC fetal_health.h5
reading [keras model] from 'fetal_health.h5'
Saved 'fetal_health.onnx'
reading onnx model from file  fetal_health.onnx
Model info:
  ir_vesion :  4 
  doc       : 
WARN (ONNX): terminal (input/output) dense_input's shape is less than 1.
             changing it to 1.
WARN (ONNX): terminal (input/output) dense_2's shape is less than 1.
             changing it to 1.
WARN (GRAPH): found operator node with the same name (dense_2) as io node.
running DNNC graph sanity check ... passed.
Writing C++ file  fetal_health_deepC/fetal_health.cpp
INFO (ONNX): model files are ready in dir fetal_health_deepC
g++ -std=c++11 -O3 -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -isystem /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 fetal_health_deepC/fetal_health.cpp -o fetal_health_deepC/fetal_health.exe
Model executable  fetal_health_deepC/fetal_health.exe
In [23]:
# pick a random sample from the test set
x = random.randint(0, len(Xtest) - 1)

np.savetxt('sample.data', Xtest[x])    # xth sample into text file

# run exe with input
!fetal_health_deepC/fetal_health.exe sample.data

# show predicted output
nn_out = np.loadtxt('dense_2.out')

pred = np.argmax(nn_out)    # index of the highest probability
print("Predicted: ", class_names[pred])    # picking the label from class_names based on the model output

output_true = np.array(ytest)[x]

print("True: ", class_names[np.argmax(output_true)])
print("Probability: ", nn_out[pred])
reading file sample.data.
writing file dense_2.out.
Predicted:  Pathological
True:  Pathological
Probability:  1.0