Breast Cancer Detection Using Deep Learning

Credit: AITS Cainvas Community

Photo by Shreya Damle on Dribbble

In this notebook, we will build a CNN model in order to detect breast cancer using the Breast Cancer Wisconsin (Diagnostic) Data Set.

In [1]:
!wget https://cainvas-static.s3.amazonaws.com/media/user_data/jayc/data.csv
--2021-07-05 10:06:16--  https://cainvas-static.s3.amazonaws.com/media/user_data/jayc/data.csv
Resolving cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)... 52.219.62.16
Connecting to cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)|52.219.62.16|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 125204 (122K) [application/octet-stream]
Saving to: ‘data.csv’

data.csv            100%[===================>] 122.27K  --.-KB/s    in 0.002s  

2021-07-05 10:06:16 (73.1 MB/s) - ‘data.csv’ saved [125204/125204]

Importing necessary libraries

Let's import the libraries that will be used for building the model in this project.

In [2]:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Flatten, Dense, Dropout, BatchNormalization
from tensorflow.keras.layers import Conv1D, MaxPool1D
from tensorflow.keras.optimizers import Adam
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import datasets, metrics
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

Loading the data and looking into some insights

In this section we will load the Breast Cancer Wisconsin dataset and extract some basic information from it.

In [3]:
cancer = datasets.load_breast_cancer()

We will use a pandas DataFrame to present all our data.

In [4]:
df = pd.DataFrame(data = cancer.data, columns=cancer.feature_names)
df.head()
Out[4]:
mean radius mean texture mean perimeter mean area mean smoothness mean compactness mean concavity mean concave points mean symmetry mean fractal dimension ... worst radius worst texture worst perimeter worst area worst smoothness worst compactness worst concavity worst concave points worst symmetry worst fractal dimension
0 17.99 10.38 122.80 1001.0 0.11840 0.27760 0.3001 0.14710 0.2419 0.07871 ... 25.38 17.33 184.60 2019.0 0.1622 0.6656 0.7119 0.2654 0.4601 0.11890
1 20.57 17.77 132.90 1326.0 0.08474 0.07864 0.0869 0.07017 0.1812 0.05667 ... 24.99 23.41 158.80 1956.0 0.1238 0.1866 0.2416 0.1860 0.2750 0.08902
2 19.69 21.25 130.00 1203.0 0.10960 0.15990 0.1974 0.12790 0.2069 0.05999 ... 23.57 25.53 152.50 1709.0 0.1444 0.4245 0.4504 0.2430 0.3613 0.08758
3 11.42 20.38 77.58 386.1 0.14250 0.28390 0.2414 0.10520 0.2597 0.09744 ... 14.91 26.50 98.87 567.7 0.2098 0.8663 0.6869 0.2575 0.6638 0.17300
4 20.29 14.34 135.10 1297.0 0.10030 0.13280 0.1980 0.10430 0.1809 0.05883 ... 22.54 16.67 152.20 1575.0 0.1374 0.2050 0.4000 0.1625 0.2364 0.07678

5 rows × 30 columns

Let's find the correlation between some columns

We use a heatmap to visualize the correlation between ten of the feature columns.

In [5]:
featureMeans = list(df.columns[1:11])  # ten feature columns to compare
plt.figure(figsize=(10,10))
sns.heatmap(df[featureMeans].corr(), annot=True, square=True, cmap='coolwarm')
plt.show()
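
As a complement to the heatmap, we can also rank feature pairs by correlation strength. The snippet below is an optional sketch (reusing df, featureMeans and np from above), not part of the original notebook:

# Rank feature pairs by absolute correlation, keeping only the upper
# triangle so each pair is counted once.
corr = df[featureMeans].corr().abs()
mask = np.triu(np.ones(corr.shape, dtype=bool), k=1)
pairs = corr.where(mask).stack().sort_values(ascending=False)
print(pairs.head(5))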

Description of data

The describe() method can be used to extract a statistical summary of the various fields in the dataset.

In [6]:
df.describe()
Out[6]:
mean radius mean texture mean perimeter mean area mean smoothness mean compactness mean concavity mean concave points mean symmetry mean fractal dimension ... worst radius worst texture worst perimeter worst area worst smoothness worst compactness worst concavity worst concave points worst symmetry worst fractal dimension
count 569.000000 569.000000 569.000000 569.000000 569.000000 569.000000 569.000000 569.000000 569.000000 569.000000 ... 569.000000 569.000000 569.000000 569.000000 569.000000 569.000000 569.000000 569.000000 569.000000 569.000000
mean 14.127292 19.289649 91.969033 654.889104 0.096360 0.104341 0.088799 0.048919 0.181162 0.062798 ... 16.269190 25.677223 107.261213 880.583128 0.132369 0.254265 0.272188 0.114606 0.290076 0.083946
std 3.524049 4.301036 24.298981 351.914129 0.014064 0.052813 0.079720 0.038803 0.027414 0.007060 ... 4.833242 6.146258 33.602542 569.356993 0.022832 0.157336 0.208624 0.065732 0.061867 0.018061
min 6.981000 9.710000 43.790000 143.500000 0.052630 0.019380 0.000000 0.000000 0.106000 0.049960 ... 7.930000 12.020000 50.410000 185.200000 0.071170 0.027290 0.000000 0.000000 0.156500 0.055040
25% 11.700000 16.170000 75.170000 420.300000 0.086370 0.064920 0.029560 0.020310 0.161900 0.057700 ... 13.010000 21.080000 84.110000 515.300000 0.116600 0.147200 0.114500 0.064930 0.250400 0.071460
50% 13.370000 18.840000 86.240000 551.100000 0.095870 0.092630 0.061540 0.033500 0.179200 0.061540 ... 14.970000 25.410000 97.660000 686.500000 0.131300 0.211900 0.226700 0.099930 0.282200 0.080040
75% 15.780000 21.800000 104.100000 782.700000 0.105300 0.130400 0.130700 0.074000 0.195700 0.066120 ... 18.790000 29.720000 125.400000 1084.000000 0.146000 0.339100 0.382900 0.161400 0.317900 0.092080
max 28.110000 39.280000 188.500000 2501.000000 0.163400 0.345400 0.426800 0.201200 0.304000 0.097440 ... 36.040000 49.540000 251.200000 4254.000000 0.222600 1.058000 1.252000 0.291000 0.663800 0.207500

8 rows × 30 columns

Data Splitting and Standardization

The data needs to be split into training and testing sets. Furthermore, we need to standardize the inputs before fitting the model.

In [7]:
x = df
x.shape
Out[7]:
(569, 30)
In [8]:
y = cancer.target
y.shape
Out[8]:
(569,)
In [9]:
cancer.target_names
Out[9]:
array(['malignant', 'benign'], dtype='<U9')
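
Before splitting the data it is worth checking the class balance. The snippet below is a small optional sketch using the arrays loaded above:

# Count how many samples fall into each class (0 = malignant, 1 = benign).
labels, counts = np.unique(y, return_counts=True)
print(dict(zip(cancer.target_names[labels], counts)))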

We will be using 80% of our dataset for training purposes and 20% for testing.

In [10]:
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.2, random_state = 0, stratify = y)
In [11]:
x_train.shape
Out[11]:
(455, 30)
In [12]:
x_test.shape
Out[12]:
(114, 30)

StandardScaler removes the mean and scales each feature to unit variance, i.e. z = (x - mean) / std.

In [13]:
scaler = StandardScaler()
x_train = scaler.fit_transform(x_train)
x_test = scaler.transform(x_test)
x_train = x_train.reshape(x_train.shape[0], 30, 1)
x_test = x_test.reshape(x_test.shape[0], 30, 1)
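
As a quick sanity check (an optional sketch, not part of the original run), the standardized training features should now have approximately zero mean and unit variance per column:

# Flatten the channel dimension back out and verify the scaling.
flat = x_train.reshape(-1, 30)
print(np.allclose(flat.mean(axis=0), 0, atol=1e-7))  # expect True
print(np.allclose(flat.std(axis=0), 1))              # expect True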

Building the CNN Model

In [14]:
epochs = 50
model = Sequential()
model.add(Conv1D(filters=32, kernel_size=2, activation='relu', input_shape = (30,1)))
model.add(BatchNormalization())
model.add(Dropout(0.2))
model.add(Conv1D(filters=64, kernel_size=2, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))

Model Summary

In [15]:
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv1d (Conv1D)              (None, 29, 32)            96        
_________________________________________________________________
batch_normalization (BatchNo (None, 29, 32)            128       
_________________________________________________________________
dropout (Dropout)            (None, 29, 32)            0         
_________________________________________________________________
conv1d_1 (Conv1D)            (None, 28, 64)            4160      
_________________________________________________________________
batch_normalization_1 (Batch (None, 28, 64)            256       
_________________________________________________________________
dropout_1 (Dropout)          (None, 28, 64)            0         
_________________________________________________________________
flatten (Flatten)            (None, 1792)              0         
_________________________________________________________________
dense (Dense)                (None, 64)                114752    
_________________________________________________________________
dropout_2 (Dropout)          (None, 64)                0         
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 65        
=================================================================
Total params: 119,457
Trainable params: 119,265
Non-trainable params: 192
_________________________________________________________________

compile() defines the loss function, the optimizer, and the metrics used during training and evaluation.

In [16]:
model.compile(optimizer=Adam(learning_rate=0.00005), loss='binary_crossentropy', metrics=['accuracy'])
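
For intuition, since the last layer is a sigmoid, binary cross-entropy compares the predicted probability p with the true label y as loss = -(y*log(p) + (1-y)*log(1-p)). A tiny worked sketch (optional):

# Binary cross-entropy for a single confident, correct prediction.
p, y_true = 0.9, 1
bce = -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))
print(bce)  # ~0.105; the loss grows quickly as p moves away from the true label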

Now, let's fit the model.

In [17]:
history = model.fit(x_train, y_train, epochs=epochs, validation_data=(x_test, y_test), verbose=1)
Epoch 1/50
15/15 [==============================] - 0s 16ms/step - loss: 1.0654 - accuracy: 0.5253 - val_loss: 0.6604 - val_accuracy: 0.6930
Epoch 2/50
15/15 [==============================] - 0s 3ms/step - loss: 0.7615 - accuracy: 0.6484 - val_loss: 0.6319 - val_accuracy: 0.7719
Epoch 3/50
15/15 [==============================] - 0s 7ms/step - loss: 0.5839 - accuracy: 0.7363 - val_loss: 0.6007 - val_accuracy: 0.8421
Epoch 4/50
15/15 [==============================] - 0s 4ms/step - loss: 0.4916 - accuracy: 0.7626 - val_loss: 0.5699 - val_accuracy: 0.8947
Epoch 5/50
15/15 [==============================] - 0s 4ms/step - loss: 0.4025 - accuracy: 0.8110 - val_loss: 0.5365 - val_accuracy: 0.9035
Epoch 6/50
15/15 [==============================] - 0s 4ms/step - loss: 0.4285 - accuracy: 0.8352 - val_loss: 0.5058 - val_accuracy: 0.9035
Epoch 7/50
15/15 [==============================] - 0s 4ms/step - loss: 0.3355 - accuracy: 0.8593 - val_loss: 0.4754 - val_accuracy: 0.9123
Epoch 8/50
15/15 [==============================] - 0s 3ms/step - loss: 0.3315 - accuracy: 0.8637 - val_loss: 0.4458 - val_accuracy: 0.9211
Epoch 9/50
15/15 [==============================] - 0s 4ms/step - loss: 0.3018 - accuracy: 0.8747 - val_loss: 0.4130 - val_accuracy: 0.9211
Epoch 10/50
15/15 [==============================] - 0s 4ms/step - loss: 0.2236 - accuracy: 0.9143 - val_loss: 0.3827 - val_accuracy: 0.9298
Epoch 11/50
15/15 [==============================] - 0s 3ms/step - loss: 0.2560 - accuracy: 0.8879 - val_loss: 0.3528 - val_accuracy: 0.9386
Epoch 12/50
15/15 [==============================] - 0s 3ms/step - loss: 0.2542 - accuracy: 0.9033 - val_loss: 0.3265 - val_accuracy: 0.9474
Epoch 13/50
15/15 [==============================] - 0s 5ms/step - loss: 0.2753 - accuracy: 0.9011 - val_loss: 0.2988 - val_accuracy: 0.9474
Epoch 14/50
15/15 [==============================] - 0s 5ms/step - loss: 0.2538 - accuracy: 0.8857 - val_loss: 0.2722 - val_accuracy: 0.9386
Epoch 15/50
15/15 [==============================] - 0s 4ms/step - loss: 0.1819 - accuracy: 0.9253 - val_loss: 0.2505 - val_accuracy: 0.9386
Epoch 16/50
15/15 [==============================] - 0s 4ms/step - loss: 0.2076 - accuracy: 0.9209 - val_loss: 0.2315 - val_accuracy: 0.9386
Epoch 17/50
15/15 [==============================] - 0s 3ms/step - loss: 0.2074 - accuracy: 0.9143 - val_loss: 0.2121 - val_accuracy: 0.9474
Epoch 18/50
15/15 [==============================] - 0s 4ms/step - loss: 0.1872 - accuracy: 0.9165 - val_loss: 0.1941 - val_accuracy: 0.9474
Epoch 19/50
15/15 [==============================] - 0s 3ms/step - loss: 0.1796 - accuracy: 0.9253 - val_loss: 0.1783 - val_accuracy: 0.9386
Epoch 20/50
15/15 [==============================] - 0s 4ms/step - loss: 0.1883 - accuracy: 0.9099 - val_loss: 0.1649 - val_accuracy: 0.9386
Epoch 21/50
15/15 [==============================] - 0s 3ms/step - loss: 0.1976 - accuracy: 0.9231 - val_loss: 0.1542 - val_accuracy: 0.9386
Epoch 22/50
15/15 [==============================] - 0s 3ms/step - loss: 0.2000 - accuracy: 0.9209 - val_loss: 0.1430 - val_accuracy: 0.9474
Epoch 23/50
15/15 [==============================] - 0s 3ms/step - loss: 0.1509 - accuracy: 0.9385 - val_loss: 0.1333 - val_accuracy: 0.9561
Epoch 24/50
15/15 [==============================] - 0s 6ms/step - loss: 0.1688 - accuracy: 0.9363 - val_loss: 0.1265 - val_accuracy: 0.9561
Epoch 25/50
15/15 [==============================] - 0s 4ms/step - loss: 0.1685 - accuracy: 0.9341 - val_loss: 0.1207 - val_accuracy: 0.9561
Epoch 26/50
15/15 [==============================] - 0s 3ms/step - loss: 0.1505 - accuracy: 0.9407 - val_loss: 0.1165 - val_accuracy: 0.9649
Epoch 27/50
15/15 [==============================] - 0s 3ms/step - loss: 0.1399 - accuracy: 0.9319 - val_loss: 0.1128 - val_accuracy: 0.9649
Epoch 28/50
15/15 [==============================] - 0s 3ms/step - loss: 0.1554 - accuracy: 0.9473 - val_loss: 0.1104 - val_accuracy: 0.9649
Epoch 29/50
15/15 [==============================] - 0s 3ms/step - loss: 0.1428 - accuracy: 0.9363 - val_loss: 0.1082 - val_accuracy: 0.9649
Epoch 30/50
15/15 [==============================] - 0s 4ms/step - loss: 0.1511 - accuracy: 0.9451 - val_loss: 0.1061 - val_accuracy: 0.9649
Epoch 31/50
15/15 [==============================] - 0s 3ms/step - loss: 0.1304 - accuracy: 0.9516 - val_loss: 0.1044 - val_accuracy: 0.9737
Epoch 32/50
15/15 [==============================] - 0s 4ms/step - loss: 0.1401 - accuracy: 0.9516 - val_loss: 0.1033 - val_accuracy: 0.9649
Epoch 33/50
15/15 [==============================] - 0s 4ms/step - loss: 0.0949 - accuracy: 0.9736 - val_loss: 0.1023 - val_accuracy: 0.9649
Epoch 34/50
15/15 [==============================] - 0s 3ms/step - loss: 0.1319 - accuracy: 0.9604 - val_loss: 0.1015 - val_accuracy: 0.9649
Epoch 35/50
15/15 [==============================] - 0s 6ms/step - loss: 0.1543 - accuracy: 0.9407 - val_loss: 0.1010 - val_accuracy: 0.9737
Epoch 36/50
15/15 [==============================] - 0s 4ms/step - loss: 0.1479 - accuracy: 0.9429 - val_loss: 0.1003 - val_accuracy: 0.9649
Epoch 37/50
15/15 [==============================] - 0s 4ms/step - loss: 0.1472 - accuracy: 0.9429 - val_loss: 0.0996 - val_accuracy: 0.9737
Epoch 38/50
15/15 [==============================] - 0s 3ms/step - loss: 0.1344 - accuracy: 0.9495 - val_loss: 0.0980 - val_accuracy: 0.9737
Epoch 39/50
15/15 [==============================] - 0s 3ms/step - loss: 0.1165 - accuracy: 0.9538 - val_loss: 0.0977 - val_accuracy: 0.9737
Epoch 40/50
15/15 [==============================] - 0s 3ms/step - loss: 0.1068 - accuracy: 0.9538 - val_loss: 0.0977 - val_accuracy: 0.9737
Epoch 41/50
15/15 [==============================] - 0s 3ms/step - loss: 0.1032 - accuracy: 0.9626 - val_loss: 0.0975 - val_accuracy: 0.9737
Epoch 42/50
15/15 [==============================] - 0s 3ms/step - loss: 0.1268 - accuracy: 0.9582 - val_loss: 0.0978 - val_accuracy: 0.9737
Epoch 43/50
15/15 [==============================] - 0s 3ms/step - loss: 0.0905 - accuracy: 0.9714 - val_loss: 0.0981 - val_accuracy: 0.9737
Epoch 44/50
15/15 [==============================] - 0s 3ms/step - loss: 0.1293 - accuracy: 0.9473 - val_loss: 0.0979 - val_accuracy: 0.9737
Epoch 45/50
15/15 [==============================] - 0s 3ms/step - loss: 0.1103 - accuracy: 0.9670 - val_loss: 0.0979 - val_accuracy: 0.9737
Epoch 46/50
15/15 [==============================] - 0s 6ms/step - loss: 0.1227 - accuracy: 0.9516 - val_loss: 0.0974 - val_accuracy: 0.9737
Epoch 47/50
15/15 [==============================] - 0s 4ms/step - loss: 0.1067 - accuracy: 0.9560 - val_loss: 0.0956 - val_accuracy: 0.9737
Epoch 48/50
15/15 [==============================] - 0s 4ms/step - loss: 0.1020 - accuracy: 0.9582 - val_loss: 0.0949 - val_accuracy: 0.9737
Epoch 49/50
15/15 [==============================] - 0s 4ms/step - loss: 0.0995 - accuracy: 0.9692 - val_loss: 0.0963 - val_accuracy: 0.9737
Epoch 50/50
15/15 [==============================] - 0s 4ms/step - loss: 0.1104 - accuracy: 0.9560 - val_loss: 0.0964 - val_accuracy: 0.9737
In [18]:
def plot_learningCurve(history, epoch):
  # Plot training & validation accuracy values
  epoch_range = range(1, epoch+1)
  plt.plot(epoch_range, history.history['accuracy'])
  plt.plot(epoch_range, history.history['val_accuracy'])
  plt.title('Model accuracy')
  plt.ylabel('Accuracy')
  plt.xlabel('Epoch')
  plt.legend(['Train', 'Val'], loc='upper left')
  plt.show()

  # Plot training & validation loss values
  plt.plot(epoch_range, history.history['loss'])
  plt.plot(epoch_range, history.history['val_loss'])
  plt.title('Model loss')
  plt.ylabel('Loss')
  plt.xlabel('Epoch')
  plt.legend(['Train', 'Val'], loc='upper left')
  plt.show()

fit() returns a History object that contains all the information collected during training.

In [19]:
history.history
Out[19]:
{'loss': [1.0653538703918457,
  0.7614781856536865,
  0.5839028358459473,
  0.4915880858898163,
  0.40253692865371704,
  0.4284686744213104,
  0.3354661166667938,
  0.33149152994155884,
  0.3017803132534027,
  0.22362302243709564,
  0.2560194134712219,
  0.25424373149871826,
  0.2752891480922699,
  0.25380659103393555,
  0.18185760080814362,
  0.2075776606798172,
  0.20735985040664673,
  0.187189981341362,
  0.17960397899150848,
  0.18832442164421082,
  0.19755275547504425,
  0.2000408172607422,
  0.1509152352809906,
  0.16881616413593292,
  0.16852399706840515,
  0.1505013108253479,
  0.13994963467121124,
  0.1554204374551773,
  0.14281918108463287,
  0.15110445022583008,
  0.13042815029621124,
  0.14008866250514984,
  0.094886913895607,
  0.13188642263412476,
  0.1542927622795105,
  0.1479421705007553,
  0.14723902940750122,
  0.13444142043590546,
  0.11652465164661407,
  0.10684654116630554,
  0.10316459089517593,
  0.12677106261253357,
  0.09045010805130005,
  0.12932178378105164,
  0.11030561476945877,
  0.12265732139348984,
  0.10665298253297806,
  0.10201983898878098,
  0.09948378056287766,
  0.11042127013206482],
 'accuracy': [0.5252747535705566,
  0.6483516693115234,
  0.7362637519836426,
  0.7626373767852783,
  0.8109890222549438,
  0.8351648449897766,
  0.8593406677246094,
  0.8637362718582153,
  0.8747252821922302,
  0.9142857193946838,
  0.8879120945930481,
  0.903296709060669,
  0.901098906993866,
  0.8857142925262451,
  0.9252747297286987,
  0.9208791255950928,
  0.9142857193946838,
  0.9164835214614868,
  0.9252747297286987,
  0.9098901152610779,
  0.9230769276618958,
  0.9208791255950928,
  0.9384615421295166,
  0.9362637400627136,
  0.9340659379959106,
  0.9406593441963196,
  0.9318681359291077,
  0.9472527503967285,
  0.9362637400627136,
  0.9450549483299255,
  0.9516483545303345,
  0.9516483545303345,
  0.9736263751983643,
  0.9604395627975464,
  0.9406593441963196,
  0.9428571462631226,
  0.9428571462631226,
  0.9494505524635315,
  0.9538461565971375,
  0.9538461565971375,
  0.9626373648643494,
  0.9582417607307434,
  0.9714285731315613,
  0.9472527503967285,
  0.9670329689979553,
  0.9516483545303345,
  0.9560439586639404,
  0.9582417607307434,
  0.9692307710647583,
  0.9560439586639404],
 'val_loss': [0.6603986024856567,
  0.6318624019622803,
  0.6006906032562256,
  0.5699207782745361,
  0.5365297794342041,
  0.5057849287986755,
  0.4754408001899719,
  0.4458489716053009,
  0.4130277931690216,
  0.3826737403869629,
  0.3528382480144501,
  0.32651203870773315,
  0.2988233268260956,
  0.272162526845932,
  0.2505013048648834,
  0.2314673364162445,
  0.21206560730934143,
  0.1940762996673584,
  0.17832323908805847,
  0.16485385596752167,
  0.15424852073192596,
  0.14300909638404846,
  0.13327115774154663,
  0.1265336126089096,
  0.1207365170121193,
  0.11651040613651276,
  0.11281606554985046,
  0.11042694747447968,
  0.1082240417599678,
  0.10607068240642548,
  0.10440341383218765,
  0.10326424241065979,
  0.10231874138116837,
  0.10150882601737976,
  0.10100582242012024,
  0.10033512860536575,
  0.09959465265274048,
  0.09799809008836746,
  0.09771120548248291,
  0.09765501320362091,
  0.09754190593957901,
  0.09783197939395905,
  0.0980534702539444,
  0.09790488332509995,
  0.09793185442686081,
  0.09738163650035858,
  0.09562940895557404,
  0.09492193907499313,
  0.09629010409116745,
  0.09643898159265518],
 'val_accuracy': [0.6929824352264404,
  0.7719298005104065,
  0.8421052694320679,
  0.8947368264198303,
  0.9035087823867798,
  0.9035087823867798,
  0.9122806787490845,
  0.9210526347160339,
  0.9210526347160339,
  0.9298245906829834,
  0.9385964870452881,
  0.9473684430122375,
  0.9473684430122375,
  0.9385964870452881,
  0.9385964870452881,
  0.9385964870452881,
  0.9473684430122375,
  0.9473684430122375,
  0.9385964870452881,
  0.9385964870452881,
  0.9385964870452881,
  0.9473684430122375,
  0.9561403393745422,
  0.9561403393745422,
  0.9561403393745422,
  0.9649122953414917,
  0.9649122953414917,
  0.9649122953414917,
  0.9649122953414917,
  0.9649122953414917,
  0.9736841917037964,
  0.9649122953414917,
  0.9649122953414917,
  0.9649122953414917,
  0.9736841917037964,
  0.9649122953414917,
  0.9736841917037964,
  0.9736841917037964,
  0.9736841917037964,
  0.9736841917037964,
  0.9736841917037964,
  0.9736841917037964,
  0.9736841917037964,
  0.9736841917037964,
  0.9736841917037964,
  0.9736841917037964,
  0.9736841917037964,
  0.9736841917037964,
  0.9736841917037964,
  0.9736841917037964]}

Plotting the curves using the function defined above

In [20]:
plot_learningCurve(history, epochs)
  • In the model accuracy graph, the validation accuracy is consistently higher than the training accuracy, which suggests our model is not overfitting.
  • In the model loss graph, the validation loss also stays well below the training loss; as long as the validation loss does not rise above the training loss, we can keep training the model (a sketch of an automated stopping rule follows).
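
If we wanted to automate that stopping rule, Keras provides an EarlyStopping callback. The following is a sketch only; it was not used in the training run above:

from tensorflow.keras.callbacks import EarlyStopping

# Stop once val_loss has not improved for 5 consecutive epochs, and
# restore the weights from the best epoch seen.
early_stop = EarlyStopping(monitor='val_loss', patience=5,
                           restore_best_weights=True)
# history = model.fit(x_train, y_train, epochs=epochs,
#                     validation_data=(x_test, y_test),
#                     callbacks=[early_stop], verbose=1)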

Making predictions on some values

In [21]:
test_predictions = (model.predict(x_test) > 0.5).astype("int32")
print(test_predictions[:10])
[[0]
 [0]
 [0]
 [1]
 [0]
 [1]
 [0]
 [1]
 [1]
 [0]]
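
Beyond spot-checking a few predictions, the sklearn metrics module imported at the top can summarize performance on the whole test set. A minimal optional sketch:

# Per-class precision/recall/F1 plus the confusion matrix.
y_pred = test_predictions.ravel()
print(metrics.classification_report(y_test, y_pred, target_names=cancer.target_names))
print(metrics.confusion_matrix(y_test, y_pred))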
In [23]:
#saving the model
model.save('Breast_Cancer_Detection.h5')
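
As an optional sanity check, the saved file can be reloaded to confirm it round-trips correctly (a small sketch, not part of the original notebook):

# Reload the model from disk and confirm the architecture survives.
reloaded = keras.models.load_model('Breast_Cancer_Detection.h5')
reloaded.summary()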

deepCC

In [ ]:
!deepCC 'Breast_Cancer_Detection.h5'
[INFO]
Reading [keras model] 'Breast_Cancer_Detection.h5'
[SUCCESS]
Saved 'Breast_Cancer_Detection_deepC/Breast_Cancer_Detection.onnx'
[INFO]
Reading [onnx model] 'Breast_Cancer_Detection_deepC/Breast_Cancer_Detection.onnx'
[INFO]
Model info:
  ir_vesion : 4
  doc       : 
[WARNING]
[ONNX]: terminal (input/output) conv1d_input's shape is less than 1. Changing it to 1.
[WARNING]
[ONNX]: terminal (input/output) dense_1's shape is less than 1. Changing it to 1.
WARN (GRAPH): found operator node with the same name (dense_1) as io node.
[INFO]
Running DNNC graph sanity check ...
[SUCCESS]
Passed sanity check.
[INFO]
Writing C++ file 'Breast_Cancer_Detection_deepC/Breast_Cancer_Detection.cpp'
[INFO]
deepSea model files are ready in 'Breast_Cancer_Detection_deepC/' 
[RUNNING COMMAND]
g++ -std=c++11 -O3 -fno-rtti -fno-exceptions -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -isystem /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 "Breast_Cancer_Detection_deepC/Breast_Cancer_Detection.cpp" -D_AITS_MAIN -o "Breast_Cancer_Detection_deepC/Breast_Cancer_Detection.exe"