Cainvas

Breast Cancer Detection using Deep Learning

Credit: AITS Cainvas Community

Photo by Fatimah on Dribbble

In [1]:
# Import all the necessary libraries

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import Sequential
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import os
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import confusion_matrix

Download and unzip the dataset

In [2]:
!wget 'https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/data_1lZhcJn.zip'

!unzip -qo data_1lZhcJn.zip 
!rm data_1lZhcJn.zip
--2021-08-25 13:33:34--  https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/data_1lZhcJn.zip
Resolving cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)... 52.219.158.7
Connecting to cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)|52.219.158.7|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 49796 (49K) [application/x-zip-compressed]
Saving to: ‘data_1lZhcJn.zip’

data_1lZhcJn.zip    100%[===================>]  48.63K  --.-KB/s    in 0.001s  

2021-08-25 13:33:34 (57.4 MB/s) - ‘data_1lZhcJn.zip’ saved [49796/49796]

Loading and displaying the data

In [3]:
# Loading the data file using the pandas library

data = pd.read_csv('data.csv', sep = ",")
data.head(3)
Out[3]:
id diagnosis radius_mean texture_mean perimeter_mean area_mean smoothness_mean compactness_mean concavity_mean concave points_mean ... texture_worst perimeter_worst area_worst smoothness_worst compactness_worst concavity_worst concave points_worst symmetry_worst fractal_dimension_worst Unnamed: 32
0 842302 M 17.99 10.38 122.8 1001.0 0.11840 0.27760 0.3001 0.14710 ... 17.33 184.6 2019.0 0.1622 0.6656 0.7119 0.2654 0.4601 0.11890 NaN
1 842517 M 20.57 17.77 132.9 1326.0 0.08474 0.07864 0.0869 0.07017 ... 23.41 158.8 1956.0 0.1238 0.1866 0.2416 0.1860 0.2750 0.08902 NaN
2 84300903 M 19.69 21.25 130.0 1203.0 0.10960 0.15990 0.1974 0.12790 ... 25.53 152.5 1709.0 0.1444 0.4245 0.4504 0.2430 0.3613 0.08758 NaN

3 rows × 33 columns

In [4]:
print(data.isna().sum())
data = data.dropna(axis = 1)
id                           0
diagnosis                    0
radius_mean                  0
texture_mean                 0
perimeter_mean               0
area_mean                    0
smoothness_mean              0
compactness_mean             0
concavity_mean               0
concave points_mean          0
symmetry_mean                0
fractal_dimension_mean       0
radius_se                    0
texture_se                   0
perimeter_se                 0
area_se                      0
smoothness_se                0
compactness_se               0
concavity_se                 0
concave points_se            0
symmetry_se                  0
fractal_dimension_se         0
radius_worst                 0
texture_worst                0
perimeter_worst              0
area_worst                   0
smoothness_worst             0
compactness_worst            0
concavity_worst              0
concave points_worst         0
symmetry_worst               0
fractal_dimension_worst      0
Unnamed: 32                569
dtype: int64
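The all-NaN `Unnamed: 32` column is a common artifact of a trailing comma at the end of each CSV row, and `dropna(axis=1)` removes it. A minimal sketch with a hypothetical miniature CSV:

```python
import pandas as pd
from io import StringIO

# A trailing comma in each row produces an extra, all-NaN "Unnamed" column
csv = StringIO("id,diagnosis,radius_mean,\n1,M,17.99,\n2,B,12.45,\n")
df = pd.read_csv(csv)
print(df.columns.tolist())  # ['id', 'diagnosis', 'radius_mean', 'Unnamed: 3']

# dropna(axis=1) drops any column containing NaN, removing the artifact
df = df.dropna(axis=1)
print(df.columns.tolist())  # ['id', 'diagnosis', 'radius_mean']
```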
In [5]:
data.head(2)
Out[5]:
id diagnosis radius_mean texture_mean perimeter_mean area_mean smoothness_mean compactness_mean concavity_mean concave points_mean ... radius_worst texture_worst perimeter_worst area_worst smoothness_worst compactness_worst concavity_worst concave points_worst symmetry_worst fractal_dimension_worst
0 842302 M 17.99 10.38 122.8 1001.0 0.11840 0.27760 0.3001 0.14710 ... 25.38 17.33 184.6 2019.0 0.1622 0.6656 0.7119 0.2654 0.4601 0.11890
1 842517 M 20.57 17.77 132.9 1326.0 0.08474 0.07864 0.0869 0.07017 ... 24.99 23.41 158.8 1956.0 0.1238 0.1866 0.2416 0.1860 0.2750 0.08902

2 rows × 32 columns

Visualising the Features of the Data

In [6]:
data.hist(figsize = (18,18))
Out[6]:
array([[<AxesSubplot:title={'center':'id'}>,
        <AxesSubplot:title={'center':'radius_mean'}>,
        <AxesSubplot:title={'center':'texture_mean'}>,
        <AxesSubplot:title={'center':'perimeter_mean'}>,
        <AxesSubplot:title={'center':'area_mean'}>,
        <AxesSubplot:title={'center':'smoothness_mean'}>],
       [<AxesSubplot:title={'center':'compactness_mean'}>,
        <AxesSubplot:title={'center':'concavity_mean'}>,
        <AxesSubplot:title={'center':'concave points_mean'}>,
        <AxesSubplot:title={'center':'symmetry_mean'}>,
        <AxesSubplot:title={'center':'fractal_dimension_mean'}>,
        <AxesSubplot:title={'center':'radius_se'}>],
       [<AxesSubplot:title={'center':'texture_se'}>,
        <AxesSubplot:title={'center':'perimeter_se'}>,
        <AxesSubplot:title={'center':'area_se'}>,
        <AxesSubplot:title={'center':'smoothness_se'}>,
        <AxesSubplot:title={'center':'compactness_se'}>,
        <AxesSubplot:title={'center':'concavity_se'}>],
       [<AxesSubplot:title={'center':'concave points_se'}>,
        <AxesSubplot:title={'center':'symmetry_se'}>,
        <AxesSubplot:title={'center':'fractal_dimension_se'}>,
        <AxesSubplot:title={'center':'radius_worst'}>,
        <AxesSubplot:title={'center':'texture_worst'}>,
        <AxesSubplot:title={'center':'perimeter_worst'}>],
       [<AxesSubplot:title={'center':'area_worst'}>,
        <AxesSubplot:title={'center':'smoothness_worst'}>,
        <AxesSubplot:title={'center':'compactness_worst'}>,
        <AxesSubplot:title={'center':'concavity_worst'}>,
        <AxesSubplot:title={'center':'concave points_worst'}>,
        <AxesSubplot:title={'center':'symmetry_worst'}>],
       [<AxesSubplot:title={'center':'fractal_dimension_worst'}>,
        <AxesSubplot:>, <AxesSubplot:>, <AxesSubplot:>, <AxesSubplot:>,
        <AxesSubplot:>]], dtype=object)
In [7]:
# Plotting a heatmap/correlation plot to see how different values are related to each other
plt.figure(figsize=(27,24))
sns.heatmap(data.corr(),annot=False,linewidths=2)
plt.show()

Preprocessing the Data

In [8]:
# Encoding the diagnosis labels (M = malignant, B = benign) using LabelEncoder

le = LabelEncoder()
data.iloc[:,1] = le.fit_transform(data.iloc[:,1].values)
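`LabelEncoder` assigns integer codes in sorted (alphabetical) order, so here benign `B` becomes 0 and malignant `M` becomes 1. A quick sketch:

```python
from sklearn.preprocessing import LabelEncoder

# fit_transform learns the sorted class order, then maps labels to codes
le = LabelEncoder()
codes = le.fit_transform(['M', 'B', 'B', 'M'])
print(list(le.classes_))  # ['B', 'M']
print(list(codes))        # [1, 0, 0, 1]
```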
In [9]:
print(data.shape)

# Note: iloc[:, 2:31] selects 29 of the 30 feature columns
# (fractal_dimension_worst is left out); use 2:32 to include all 30
X = data.iloc[:, 2:31].values
y = data.iloc[:,1].values

# Splitting our dataset into train-test split
X_train, X_test, Y_train, Y_test = train_test_split(X, y,test_size = 0.3,random_state = 0, stratify = y)
(569, 32)
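The `stratify = y` argument keeps the benign/malignant ratio identical in the train and test splits, which matters for an imbalanced medical dataset. A toy illustration with made-up labels:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# 70 negatives and 30 positives: class ratio is 0.3
y = np.array([0] * 70 + [1] * 30)
X = np.arange(100).reshape(-1, 1)

# stratify=y preserves the 0.3 positive rate in both splits
_, _, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                    random_state=0, stratify=y)
print(y_tr.mean(), y_te.mean())  # 0.3 0.3
```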
In [10]:
#Feature Scaling

sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
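Note that the scaler is fit on the training split only (`fit_transform`) and then applied to the test split (`transform`), so the test data is standardised with the training mean and standard deviation and no information leaks from the test set. A minimal sketch with toy numbers:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

sc = StandardScaler()
train = np.array([[1.0], [3.0]])  # mean 2, std 1
test = np.array([[2.0], [4.0]])

# fit_transform: learn mean/std from train, then standardise train
print(sc.fit_transform(train).ravel())  # [-1.  1.]
# transform: reuse the *training* mean/std on test
print(sc.transform(test).ravel())       # [0. 2.]
```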
In [11]:
# Convert the integer labels to one-hot (categorical) form

from tensorflow.keras.utils import to_categorical
Y_train = to_categorical(Y_train, num_classes=None)
Y_test = to_categorical(Y_test, num_classes=None)
print ("Y = ",Y_train.shape)
print ("X = ",X_train.shape)
Y =  (398, 2)
X =  (398, 29)
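`to_categorical` one-hot encodes the 0/1 diagnosis codes into two-column vectors, which is why `Y_train` has shape (398, 2) to match the two-unit softmax output. A numpy equivalent (my illustration, not the original code) makes the encoding explicit:

```python
import numpy as np

# One-hot encoding: row i of the identity matrix encodes label i
y = np.array([0, 1, 1, 0])
one_hot = np.eye(2)[y]
print(one_hot.shape)  # (4, 2)
print(one_hot[1])     # [0. 1.]  -- label 1 -> second column set
```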

Defining our DL Model

In [12]:
# Defining the architecture of our deep learning model

model = Sequential()

model.add(Dense(100, activation = "relu", input_dim = 29))
model.add(Dropout(0.2))
model.add(Dense(100, activation = "relu"))
model.add(Dense(2, activation = "softmax"))

model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 100)               3000      
_________________________________________________________________
dropout (Dropout)            (None, 100)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 100)               10100     
_________________________________________________________________
dense_2 (Dense)              (None, 2)                 202       
=================================================================
Total params: 13,302
Trainable params: 13,302
Non-trainable params: 0
_________________________________________________________________
In [13]:
# Compiling the model
model.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
In [14]:
# Train the model with a batch size of 35 for 100 epochs
history = model.fit(X_train, 
                    Y_train, 
                    validation_data = (X_test, Y_test),
                    batch_size = 35,
                    epochs = 100
                   )
Epoch 1/100
12/12 [==============================] - 0s 12ms/step - loss: 0.4426 - accuracy: 0.7889 - val_loss: 0.2544 - val_accuracy: 0.9123
Epoch 2/100
12/12 [==============================] - 0s 3ms/step - loss: 0.1750 - accuracy: 0.9472 - val_loss: 0.1646 - val_accuracy: 0.9298
Epoch 3/100
12/12 [==============================] - 0s 3ms/step - loss: 0.1234 - accuracy: 0.9573 - val_loss: 0.1376 - val_accuracy: 0.9474
Epoch 4/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0960 - accuracy: 0.9724 - val_loss: 0.1234 - val_accuracy: 0.9474
Epoch 5/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0790 - accuracy: 0.9824 - val_loss: 0.1182 - val_accuracy: 0.9415
Epoch 6/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0676 - accuracy: 0.9824 - val_loss: 0.1121 - val_accuracy: 0.9474
Epoch 7/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0602 - accuracy: 0.9824 - val_loss: 0.1103 - val_accuracy: 0.9357
Epoch 8/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0537 - accuracy: 0.9899 - val_loss: 0.1131 - val_accuracy: 0.9357
Epoch 9/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0463 - accuracy: 0.9874 - val_loss: 0.1163 - val_accuracy: 0.9415
Epoch 10/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0468 - accuracy: 0.9899 - val_loss: 0.1196 - val_accuracy: 0.9415
Epoch 11/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0361 - accuracy: 0.9975 - val_loss: 0.1217 - val_accuracy: 0.9532
Epoch 12/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0374 - accuracy: 0.9925 - val_loss: 0.1216 - val_accuracy: 0.9532
Epoch 13/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0286 - accuracy: 0.9950 - val_loss: 0.1291 - val_accuracy: 0.9532
Epoch 14/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0330 - accuracy: 0.9950 - val_loss: 0.1357 - val_accuracy: 0.9415
Epoch 15/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0289 - accuracy: 0.9950 - val_loss: 0.1406 - val_accuracy: 0.9357
Epoch 16/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0222 - accuracy: 0.9975 - val_loss: 0.1420 - val_accuracy: 0.9415
Epoch 17/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0184 - accuracy: 0.9975 - val_loss: 0.1465 - val_accuracy: 0.9357
Epoch 18/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0223 - accuracy: 0.9950 - val_loss: 0.1397 - val_accuracy: 0.9415
Epoch 19/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0256 - accuracy: 0.9950 - val_loss: 0.1400 - val_accuracy: 0.9474
Epoch 20/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0170 - accuracy: 0.9975 - val_loss: 0.1515 - val_accuracy: 0.9532
Epoch 21/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0140 - accuracy: 0.9950 - val_loss: 0.1624 - val_accuracy: 0.9474
Epoch 22/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0153 - accuracy: 0.9975 - val_loss: 0.1568 - val_accuracy: 0.9474
Epoch 23/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0158 - accuracy: 0.9925 - val_loss: 0.1528 - val_accuracy: 0.9415
Epoch 24/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0123 - accuracy: 0.9950 - val_loss: 0.1560 - val_accuracy: 0.9415
Epoch 25/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0132 - accuracy: 0.9975 - val_loss: 0.1674 - val_accuracy: 0.9357
Epoch 26/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0090 - accuracy: 0.9975 - val_loss: 0.1648 - val_accuracy: 0.9357
Epoch 27/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0113 - accuracy: 0.9950 - val_loss: 0.1596 - val_accuracy: 0.9357
Epoch 28/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0070 - accuracy: 1.0000 - val_loss: 0.1737 - val_accuracy: 0.9474
Epoch 29/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0083 - accuracy: 0.9975 - val_loss: 0.1763 - val_accuracy: 0.9415
Epoch 30/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0090 - accuracy: 0.9950 - val_loss: 0.1581 - val_accuracy: 0.9357
Epoch 31/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0059 - accuracy: 1.0000 - val_loss: 0.1615 - val_accuracy: 0.9357
Epoch 32/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0049 - accuracy: 1.0000 - val_loss: 0.1767 - val_accuracy: 0.9357
Epoch 33/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0060 - accuracy: 1.0000 - val_loss: 0.1871 - val_accuracy: 0.9357
Epoch 34/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0074 - accuracy: 0.9975 - val_loss: 0.1920 - val_accuracy: 0.9357
Epoch 35/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0041 - accuracy: 1.0000 - val_loss: 0.1948 - val_accuracy: 0.9357
Epoch 36/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0043 - accuracy: 1.0000 - val_loss: 0.2022 - val_accuracy: 0.9357
Epoch 37/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0055 - accuracy: 1.0000 - val_loss: 0.1978 - val_accuracy: 0.9357
Epoch 38/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0041 - accuracy: 1.0000 - val_loss: 0.2003 - val_accuracy: 0.9415
Epoch 39/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0032 - accuracy: 1.0000 - val_loss: 0.2011 - val_accuracy: 0.9415
Epoch 40/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0035 - accuracy: 0.9975 - val_loss: 0.1957 - val_accuracy: 0.9474
Epoch 41/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0038 - accuracy: 1.0000 - val_loss: 0.1943 - val_accuracy: 0.9357
Epoch 42/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0025 - accuracy: 1.0000 - val_loss: 0.2107 - val_accuracy: 0.9357
Epoch 43/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0026 - accuracy: 1.0000 - val_loss: 0.2135 - val_accuracy: 0.9357
Epoch 44/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0040 - accuracy: 1.0000 - val_loss: 0.1962 - val_accuracy: 0.9357
Epoch 45/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0018 - accuracy: 1.0000 - val_loss: 0.2057 - val_accuracy: 0.9474
Epoch 46/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0050 - accuracy: 0.9975 - val_loss: 0.2110 - val_accuracy: 0.9415
Epoch 47/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0022 - accuracy: 1.0000 - val_loss: 0.2226 - val_accuracy: 0.9415
Epoch 48/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0031 - accuracy: 1.0000 - val_loss: 0.2224 - val_accuracy: 0.9415
Epoch 49/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0062 - accuracy: 0.9975 - val_loss: 0.2198 - val_accuracy: 0.9474
Epoch 50/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0034 - accuracy: 1.0000 - val_loss: 0.2451 - val_accuracy: 0.9532
Epoch 51/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0016 - accuracy: 1.0000 - val_loss: 0.2588 - val_accuracy: 0.9415
Epoch 52/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0023 - accuracy: 1.0000 - val_loss: 0.2591 - val_accuracy: 0.9415
Epoch 53/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0016 - accuracy: 1.0000 - val_loss: 0.2617 - val_accuracy: 0.9415
Epoch 54/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0012 - accuracy: 1.0000 - val_loss: 0.2605 - val_accuracy: 0.9415
Epoch 55/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0023 - accuracy: 1.0000 - val_loss: 0.2610 - val_accuracy: 0.9357
Epoch 56/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0014 - accuracy: 1.0000 - val_loss: 0.2680 - val_accuracy: 0.9357
Epoch 57/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0035 - accuracy: 1.0000 - val_loss: 0.2502 - val_accuracy: 0.9415
Epoch 58/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0021 - accuracy: 1.0000 - val_loss: 0.2391 - val_accuracy: 0.9415
Epoch 59/100
12/12 [==============================] - 0s 3ms/step - loss: 8.7868e-04 - accuracy: 1.0000 - val_loss: 0.2353 - val_accuracy: 0.9415
Epoch 60/100
12/12 [==============================] - 0s 3ms/step - loss: 8.4733e-04 - accuracy: 1.0000 - val_loss: 0.2338 - val_accuracy: 0.9415
Epoch 61/100
12/12 [==============================] - 0s 3ms/step - loss: 9.5379e-04 - accuracy: 1.0000 - val_loss: 0.2417 - val_accuracy: 0.9415
Epoch 62/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0014 - accuracy: 1.0000 - val_loss: 0.2395 - val_accuracy: 0.9415
Epoch 63/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0020 - accuracy: 1.0000 - val_loss: 0.2438 - val_accuracy: 0.9415
Epoch 64/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0020 - accuracy: 1.0000 - val_loss: 0.2705 - val_accuracy: 0.9298
Epoch 65/100
12/12 [==============================] - 0s 3ms/step - loss: 7.6176e-04 - accuracy: 1.0000 - val_loss: 0.2792 - val_accuracy: 0.9298
Epoch 66/100
12/12 [==============================] - 0s 3ms/step - loss: 9.4545e-04 - accuracy: 1.0000 - val_loss: 0.2819 - val_accuracy: 0.9357
Epoch 67/100
12/12 [==============================] - 0s 3ms/step - loss: 9.9722e-04 - accuracy: 1.0000 - val_loss: 0.2822 - val_accuracy: 0.9357
Epoch 68/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0021 - accuracy: 1.0000 - val_loss: 0.2729 - val_accuracy: 0.9298
Epoch 69/100
12/12 [==============================] - 0s 3ms/step - loss: 7.2909e-04 - accuracy: 1.0000 - val_loss: 0.2694 - val_accuracy: 0.9357
Epoch 70/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0010 - accuracy: 1.0000 - val_loss: 0.2750 - val_accuracy: 0.9415
Epoch 71/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0010 - accuracy: 1.0000 - val_loss: 0.2712 - val_accuracy: 0.9415
Epoch 72/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0011 - accuracy: 1.0000 - val_loss: 0.2644 - val_accuracy: 0.9415
Epoch 73/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0020 - accuracy: 1.0000 - val_loss: 0.2604 - val_accuracy: 0.9298
Epoch 74/100
12/12 [==============================] - 0s 3ms/step - loss: 6.6001e-04 - accuracy: 1.0000 - val_loss: 0.2669 - val_accuracy: 0.9357
Epoch 75/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0011 - accuracy: 1.0000 - val_loss: 0.2647 - val_accuracy: 0.9357
Epoch 76/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0023 - accuracy: 0.9975 - val_loss: 0.2642 - val_accuracy: 0.9357
Epoch 77/100
12/12 [==============================] - 0s 3ms/step - loss: 8.1857e-04 - accuracy: 1.0000 - val_loss: 0.2699 - val_accuracy: 0.9357
Epoch 78/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0014 - accuracy: 1.0000 - val_loss: 0.2722 - val_accuracy: 0.9357
Epoch 79/100
12/12 [==============================] - 0s 3ms/step - loss: 9.4532e-04 - accuracy: 1.0000 - val_loss: 0.2707 - val_accuracy: 0.9415
Epoch 80/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0016 - accuracy: 1.0000 - val_loss: 0.2705 - val_accuracy: 0.9357
Epoch 81/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0010 - accuracy: 1.0000 - val_loss: 0.2828 - val_accuracy: 0.9357
Epoch 82/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0014 - accuracy: 1.0000 - val_loss: 0.2692 - val_accuracy: 0.9298
Epoch 83/100
12/12 [==============================] - 0s 3ms/step - loss: 9.7523e-04 - accuracy: 1.0000 - val_loss: 0.2521 - val_accuracy: 0.9240
Epoch 84/100
12/12 [==============================] - 0s 3ms/step - loss: 8.5023e-04 - accuracy: 1.0000 - val_loss: 0.2464 - val_accuracy: 0.9357
Epoch 85/100
12/12 [==============================] - 0s 3ms/step - loss: 8.0846e-04 - accuracy: 1.0000 - val_loss: 0.2466 - val_accuracy: 0.9357
Epoch 86/100
12/12 [==============================] - 0s 3ms/step - loss: 7.7615e-04 - accuracy: 1.0000 - val_loss: 0.2497 - val_accuracy: 0.9357
Epoch 87/100
12/12 [==============================] - 0s 3ms/step - loss: 4.4013e-04 - accuracy: 1.0000 - val_loss: 0.2501 - val_accuracy: 0.9357
Epoch 88/100
12/12 [==============================] - 0s 3ms/step - loss: 0.0011 - accuracy: 1.0000 - val_loss: 0.2591 - val_accuracy: 0.9474
Epoch 89/100
12/12 [==============================] - 0s 3ms/step - loss: 8.0967e-04 - accuracy: 1.0000 - val_loss: 0.2743 - val_accuracy: 0.9474
Epoch 90/100
12/12 [==============================] - 0s 3ms/step - loss: 6.0420e-04 - accuracy: 1.0000 - val_loss: 0.2847 - val_accuracy: 0.9474
Epoch 91/100
12/12 [==============================] - 0s 3ms/step - loss: 9.3105e-04 - accuracy: 1.0000 - val_loss: 0.2959 - val_accuracy: 0.9357
Epoch 92/100
12/12 [==============================] - 0s 3ms/step - loss: 3.7003e-04 - accuracy: 1.0000 - val_loss: 0.2950 - val_accuracy: 0.9415
Epoch 93/100
12/12 [==============================] - 0s 3ms/step - loss: 2.2449e-04 - accuracy: 1.0000 - val_loss: 0.2955 - val_accuracy: 0.9474
Epoch 94/100
12/12 [==============================] - 0s 3ms/step - loss: 7.2264e-04 - accuracy: 1.0000 - val_loss: 0.2969 - val_accuracy: 0.9415
Epoch 95/100
12/12 [==============================] - 0s 3ms/step - loss: 8.4027e-04 - accuracy: 1.0000 - val_loss: 0.2887 - val_accuracy: 0.9415
Epoch 96/100
12/12 [==============================] - 0s 3ms/step - loss: 3.6854e-04 - accuracy: 1.0000 - val_loss: 0.2853 - val_accuracy: 0.9415
Epoch 97/100
12/12 [==============================] - 0s 3ms/step - loss: 4.7667e-04 - accuracy: 1.0000 - val_loss: 0.2839 - val_accuracy: 0.9474
Epoch 98/100
12/12 [==============================] - 0s 3ms/step - loss: 2.8791e-04 - accuracy: 1.0000 - val_loss: 0.2822 - val_accuracy: 0.9474
Epoch 99/100
12/12 [==============================] - 0s 3ms/step - loss: 6.4904e-04 - accuracy: 1.0000 - val_loss: 0.2910 - val_accuracy: 0.9357
Epoch 100/100
12/12 [==============================] - 0s 3ms/step - loss: 3.6132e-04 - accuracy: 1.0000 - val_loss: 0.3011 - val_accuracy: 0.9357
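The logs show the training loss falling towards zero while the validation loss climbs after roughly epoch 7, a classic sign of overfitting. A hypothetical variant (not part of the original notebook) would add an `EarlyStopping` callback so training halts when validation loss stops improving and the best weights are kept:

```python
from tensorflow.keras.callbacks import EarlyStopping

# Stop once val_loss has not improved for 10 epochs and restore
# the weights from the best epoch seen so far
early_stop = EarlyStopping(monitor='val_loss', patience=10,
                           restore_best_weights=True)

# history = model.fit(X_train, Y_train,
#                     validation_data=(X_test, Y_test),
#                     batch_size=35, epochs=100,
#                     callbacks=[early_stop])
```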
In [15]:
# Function to plot "accuracy vs epoch" graphs and "loss vs epoch" graphs for training and validation data
def plot_metrics(model_name, metric = 'accuracy'):
    if metric == 'loss':
        plt.title("Loss Values")
        plt.plot(model_name.history['loss'], label = 'train')
        plt.plot(model_name.history['val_loss'], label = 'test')
        plt.legend()
        plt.show()
    else:
        plt.title("Accuracy Values")
        plt.plot(model_name.history['accuracy'], label='train') 
        plt.plot(model_name.history['val_accuracy'], label='test') 
        plt.legend()
        plt.show()
In [16]:
plot_metrics(history, 'accuracy')
plot_metrics(history, 'loss')

Saving the Model File

In [17]:
# Saving our trained model
from tensorflow.keras.models import save_model
if not os.path.isfile('best_model.h5'):
    model.save('best_model.h5')

Checking the Accuracy of the Model on the Test Set

In [18]:
# Plotting a confusion matrix to check the performance of our model
Y_pred = np.argmax(model.predict(X_test), axis = 1)
cnf = confusion_matrix(Y_test.argmax(axis = 1), Y_pred)


df_cnf = pd.DataFrame(cnf, range(2), range(2))
sns.set(font_scale = 2)
sns.heatmap(df_cnf, annot = True)
plt.title("Confusion Matrix")
plt.xlabel("Predicted Values")
plt.ylabel("True Values")
plt.show()
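For a two-class problem, `confusion_matrix` returns `[[TN, FP], [FN, TP]]` (rows are true labels, columns are predictions), from which the usual summary metrics follow directly. A sketch with hypothetical counts, not the actual outputs of this run:

```python
import numpy as np

# Hypothetical 2x2 confusion matrix: [[TN, FP], [FN, TP]]
cnf = np.array([[105, 2], [9, 55]])
tn, fp, fn, tp = cnf.ravel()

accuracy  = (tp + tn) / cnf.sum()  # overall fraction correct
precision = tp / (tp + fp)         # of predicted malignant, how many were
recall    = tp / (tp + fn)         # sensitivity: malignant cases caught

print(round(accuracy, 3), round(precision, 3), round(recall, 3))
# 0.936 0.965 0.859
```

For a screening task like this, recall on the malignant class is usually the metric to watch, since a false negative is far more costly than a false positive.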

IoT Application of This Project

Utilising this machine learning model to develop equipment that detects breast cancer can go a long way in helping patients catch the disease at an early stage, which may lead to an early recovery. Data collected from such equipment will help physicians devise the right course of action for their patients, easing the transition back to normal life after recovery. There is no doubt that applying machine learning and deep learning models to the design of such IoT equipment has had a huge impact on the quality of treatment given to patients.

In [19]:
from tensorflow.keras import models
model = models.load_model('best_model.h5')
In [20]:
!deepCC best_model.h5
[INFO]
Reading [keras model] 'best_model.h5'
[SUCCESS]
Saved 'best_model_deepC/best_model.onnx'
[INFO]
Reading [onnx model] 'best_model_deepC/best_model.onnx'
[INFO]
Model info:
  ir_vesion : 4
  doc       : 
[WARNING]
[ONNX]: terminal (input/output) dense_input's shape is less than 1. Changing it to 1.
[WARNING]
[ONNX]: terminal (input/output) dense_2's shape is less than 1. Changing it to 1.
WARN (GRAPH): found operator node with the same name (dense_2) as io node.
[INFO]
Running DNNC graph sanity check ...
[SUCCESS]
Passed sanity check.
[INFO]
Writing C++ file 'best_model_deepC/best_model.cpp'
[INFO]
deepSea model files are ready in 'best_model_deepC/' 
[RUNNING COMMAND]
g++ -std=c++11 -O3 -fno-rtti -fno-exceptions -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -isystem /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 "best_model_deepC/best_model.cpp" -D_AITS_MAIN -o "best_model_deepC/best_model.exe"
[RUNNING COMMAND]
size "best_model_deepC/best_model.exe"
   text	   data	    bss	    dec	    hex	filename
 176485	   2984	    760	 180229	  2c005	best_model_deepC/best_model.exe
[SUCCESS]
Saved model as executable "best_model_deepC/best_model.exe"
In [ ]: