Cainvas

Pneumothorax Classification

Credit: AITS Cainvas Community

Photo by Vladimir Marchukov on Dribbble

Obtaining the dataset

In [1]:
!wget -N https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/Pneumothoraxdataset.zip
!unzip -qo Pneumothoraxdataset.zip
data_dir = 'small_train_data_set/small_train_data_set'  # renamed to avoid shadowing the builtin dir()
--2021-10-26 04:01:41--  https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/Pneumothoraxdataset.zip
Resolving cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)... 52.219.66.24
Connecting to cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)|52.219.66.24|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 50669044 (48M) [application/x-zip-compressed]
Saving to: ‘Pneumothoraxdataset.zip’

Pneumothoraxdataset 100%[===================>]  48.32M  95.6MB/s    in 0.5s    

2021-10-26 04:01:42 (95.6 MB/s) - ‘Pneumothoraxdataset.zip’ saved [50669044/50669044]

Importing the required libraries

In [2]:
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import random
import cv2
from sklearn.model_selection import train_test_split
import sklearn
import warnings
warnings.filterwarnings('ignore')
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
In [3]:
csv_path = 'small_train_data_set/small_train_data_set/train_data.csv'
df = pd.read_csv(csv_path)
#Randomly shuffling the dataset
df = df.sample(frac=1)
#Checking what our dataframe looks like
df.head()
Out[3]:
Unnamed: 0 Unnamed: 0.1 file_name target
8 7056 7056 1.2.276.0.7230010.3.1.4.8323329.4811.151787518... 1
1446 1820 1820 1.2.276.0.7230010.3.1.4.8323329.32603.15178751... 0
1476 863 863 1.2.276.0.7230010.3.1.4.8323329.2859.151787517... 1
1143 6084 6084 1.2.276.0.7230010.3.1.4.8323329.32380.15178751... 1
1376 3072 3072 1.2.276.0.7230010.3.1.4.8323329.13633.15178752... 1
In [4]:
images = []
labels = []
# Reading the images and labels together so they stay aligned
for fname, target in zip(df['file_name'], df['target']):
    path = os.path.join(data_dir, fname)
    img = cv2.imread(path)
    if img is None:  # skip any file that fails to load
        continue
    img = cv2.resize(img, (96, 96))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    images.append(img)
    labels.append(target)
# Scaling pixel values to [0, 1] and converting to numpy arrays
images = [i / 255 for i in images]
X = np.array(images)
y = np.array(labels)

Plotting a few random samples to check what the dataset looks like

In [5]:
d = {0:'NO PNEUMOTHORAX', 1:'PNEUMOTHORAX'}
fig,axes = plt.subplots(3,3) 
fig.subplots_adjust(0,0,2,2)
for i in range(3):
    for j in range(3):
        num = random.randint(0, len(images) - 1)  # randint is inclusive on both ends
        axes[i,j].imshow(images[num])
        axes[i,j].set_title("CLASS: "+str(labels[num]) +"\n" +  "LABEL:" + str(d[labels[num]]))
        axes[i,j].axis('off')
In [6]:
# Checking the balance of the dataset
neg, pos = np.bincount(y)
total = neg + pos
print('Examples:\n Total: {}\n Positive: {} ({:.2f}% of total)\n Negative: {} ({:.2f}% of total)'.format(
    total, pos, 100 * pos / total, neg, 100 * neg / total))
sns.countplot(x=y)
plt.show()
Examples:
 Total: 2027
 Positive: 1597 (78.79% of total)
 Negative: 430 (21.21% of total)

The dataset is clearly imbalanced and needs to be handled. Oversampling the minority class is one way to address this; another option, sketched below, is to weight the loss per class.
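
A minimal sketch of the class-weight alternative, using the standard Keras class_weight argument (the inverse-frequency weight formula here is illustrative, not part of this notebook):

# Hypothetical alternative: weight the binary cross-entropy per class
# so the minority class contributes as much to the loss as the majority.
neg, pos = np.bincount(y)
total = neg + pos
class_weight = {0: total / (2 * neg), 1: total / (2 * pos)}
# model.fit(X_train, y_train, class_weight=class_weight, ...)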

In [7]:
from imblearn.over_sampling import RandomOverSampler
# RandomOverSampler expects 2-D input, so flatten each image first
reshaped_X = X.reshape(X.shape[0], -1)
# Oversampling the minority class by duplicating random samples
oversample = RandomOverSampler()
oversampled_X, oversampled_y  = oversample.fit_resample(reshaped_X , y)
new_X = oversampled_X.reshape(-1,96,96,3)
In [9]:
# Splitting the oversampled data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(new_X, oversampled_y, shuffle=True, test_size=0.1,stratify=oversampled_y)
fig, axes = plt.subplots(1, 2, figsize=(15, 5))
sns.countplot(ax=axes[0], x = y_train)
axes[0].set_title('Training data labels', fontsize = 14)
sns.countplot(ax=axes[1], x = y_test)
axes[1].set_title('Testing data labels', fontsize = 14)
plt.show()

Now the training and test labels are perfectly balanced and we can proceed to building a model. One caveat: oversampling before the split means duplicated minority samples can appear in both train and test sets, which tends to inflate test metrics; a leakage-free variant is sketched below.
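
A sketch of the leakage-free variant, which splits first and oversamples only the training portion (variable names are illustrative):

# Split the original data first, then balance only the training set,
# so no duplicated minority sample can land in both train and test.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=True, test_size=0.1, stratify=y)
ros = RandomOverSampler()
X_tr_flat, y_tr_bal = ros.fit_resample(X_tr.reshape(X_tr.shape[0], -1), y_tr)
X_tr_bal = X_tr_flat.reshape(-1, 96, 96, 3)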

In [10]:
# Training a simple convolutional model
model = keras.Sequential(
    [
        layers.Conv2D(8, input_shape=(96,96,3),padding="same",kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(16, kernel_size=(3, 3), padding="same",activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(32, kernel_size=(3, 3), padding="same",activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(32, kernel_size=(3, 3),padding="same",activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(64, kernel_size=(3, 3),padding="same",activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(64, kernel_size=(3, 3),padding="same",activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
#         layers.Conv2D(128, kernel_size=(3, 3),padding="same",activation="relu"),
#         layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Flatten(),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),
    ]
)
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 96, 96, 8)         224       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 48, 48, 8)         0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 48, 48, 16)        1168      
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 24, 24, 16)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 24, 24, 32)        4640      
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 12, 12, 32)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 12, 12, 32)        9248      
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 6, 6, 32)          0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 6, 6, 64)          18496     
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 3, 3, 64)          0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 3, 3, 64)          36928     
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 1, 1, 64)          0         
_________________________________________________________________
flatten (Flatten)            (None, 64)                0         
_________________________________________________________________
dropout (Dropout)            (None, 64)                0         
_________________________________________________________________
dense (Dense)                (None, 1)                 65        
=================================================================
Total params: 70,769
Trainable params: 70,769
Non-trainable params: 0
_________________________________________________________________
In [11]:
# Defining the hyperparameters
LOSS = keras.losses.BinaryCrossentropy()
LEARNING_RATE = 1e-4 #Choosing a smaller learning rate for a smoother curve
OPTIMIZER = keras.optimizers.Adam(LEARNING_RATE)
BATCH_SIZE = 64
EPOCHS = 100
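
The training log below shows the validation loss fluctuating in later epochs; if desired, a standard Keras early-stopping callback (not used in this run) can halt training once val_loss stops improving:

# Optional sketch: stop when validation loss plateaus and keep the best weights.
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)
# model.fit(..., callbacks=[early_stop])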
In [12]:
model.compile(loss=LOSS, optimizer=OPTIMIZER, metrics=['AUC'])
history=model.fit(x=X_train, y=y_train, batch_size=BATCH_SIZE, epochs=EPOCHS, validation_split=0.2)
Epoch 1/100
36/36 [==============================] - 1s 23ms/step - loss: 0.6924 - auc: 0.5216 - val_loss: 0.6924 - val_auc: 0.6012
Epoch 2/100
36/36 [==============================] - 0s 9ms/step - loss: 0.6901 - auc: 0.5787 - val_loss: 0.6910 - val_auc: 0.6018
Epoch 3/100
36/36 [==============================] - 0s 9ms/step - loss: 0.6880 - auc: 0.5876 - val_loss: 0.6883 - val_auc: 0.6061
Epoch 4/100
36/36 [==============================] - 0s 9ms/step - loss: 0.6857 - auc: 0.5832 - val_loss: 0.6887 - val_auc: 0.6122
Epoch 5/100
36/36 [==============================] - 0s 9ms/step - loss: 0.6798 - auc: 0.6133 - val_loss: 0.6815 - val_auc: 0.6072
Epoch 6/100
36/36 [==============================] - 0s 9ms/step - loss: 0.6770 - auc: 0.6133 - val_loss: 0.6764 - val_auc: 0.6161
Epoch 7/100
36/36 [==============================] - 0s 9ms/step - loss: 0.6714 - auc: 0.6188 - val_loss: 0.6719 - val_auc: 0.6256
Epoch 8/100
36/36 [==============================] - 0s 9ms/step - loss: 0.6610 - auc: 0.6464 - val_loss: 0.6680 - val_auc: 0.6290
Epoch 9/100
36/36 [==============================] - 0s 9ms/step - loss: 0.6618 - auc: 0.6401 - val_loss: 0.6684 - val_auc: 0.6410
Epoch 10/100
36/36 [==============================] - 0s 9ms/step - loss: 0.6568 - auc: 0.6543 - val_loss: 0.6629 - val_auc: 0.6456
Epoch 11/100
36/36 [==============================] - 0s 9ms/step - loss: 0.6525 - auc: 0.6630 - val_loss: 0.6596 - val_auc: 0.6522
Epoch 12/100
36/36 [==============================] - 0s 9ms/step - loss: 0.6506 - auc: 0.6658 - val_loss: 0.6536 - val_auc: 0.6594
Epoch 13/100
36/36 [==============================] - 0s 9ms/step - loss: 0.6414 - auc: 0.6824 - val_loss: 0.6507 - val_auc: 0.6639
Epoch 14/100
36/36 [==============================] - 0s 9ms/step - loss: 0.6394 - auc: 0.6848 - val_loss: 0.6460 - val_auc: 0.6720
Epoch 15/100
36/36 [==============================] - 0s 9ms/step - loss: 0.6308 - auc: 0.7012 - val_loss: 0.6448 - val_auc: 0.6778
Epoch 16/100
36/36 [==============================] - 0s 9ms/step - loss: 0.6306 - auc: 0.6960 - val_loss: 0.6362 - val_auc: 0.6875
Epoch 17/100
36/36 [==============================] - 0s 9ms/step - loss: 0.6283 - auc: 0.7011 - val_loss: 0.6364 - val_auc: 0.6946
Epoch 18/100
36/36 [==============================] - 0s 9ms/step - loss: 0.6147 - auc: 0.7236 - val_loss: 0.6418 - val_auc: 0.6986
Epoch 19/100
36/36 [==============================] - 0s 9ms/step - loss: 0.6116 - auc: 0.7241 - val_loss: 0.6224 - val_auc: 0.7085
Epoch 20/100
36/36 [==============================] - 0s 9ms/step - loss: 0.6034 - auc: 0.7379 - val_loss: 0.6159 - val_auc: 0.7161
Epoch 21/100
36/36 [==============================] - 0s 9ms/step - loss: 0.5968 - auc: 0.7444 - val_loss: 0.6057 - val_auc: 0.7337
Epoch 22/100
36/36 [==============================] - 0s 9ms/step - loss: 0.5886 - auc: 0.7543 - val_loss: 0.6004 - val_auc: 0.7382
Epoch 23/100
36/36 [==============================] - 0s 9ms/step - loss: 0.5773 - auc: 0.7696 - val_loss: 0.5913 - val_auc: 0.7491
Epoch 24/100
36/36 [==============================] - 0s 9ms/step - loss: 0.5761 - auc: 0.7685 - val_loss: 0.6023 - val_auc: 0.7609
Epoch 25/100
36/36 [==============================] - 0s 9ms/step - loss: 0.5759 - auc: 0.7679 - val_loss: 0.5785 - val_auc: 0.7699
Epoch 26/100
36/36 [==============================] - 0s 9ms/step - loss: 0.5609 - auc: 0.7868 - val_loss: 0.5785 - val_auc: 0.7720
Epoch 27/100
36/36 [==============================] - 0s 9ms/step - loss: 0.5478 - auc: 0.8016 - val_loss: 0.5635 - val_auc: 0.7826
Epoch 28/100
36/36 [==============================] - 0s 9ms/step - loss: 0.5401 - auc: 0.8059 - val_loss: 0.5560 - val_auc: 0.7914
Epoch 29/100
36/36 [==============================] - 0s 9ms/step - loss: 0.5325 - auc: 0.8150 - val_loss: 0.5510 - val_auc: 0.7907
Epoch 30/100
36/36 [==============================] - 0s 9ms/step - loss: 0.5207 - auc: 0.8257 - val_loss: 0.5408 - val_auc: 0.8009
Epoch 31/100
36/36 [==============================] - 0s 9ms/step - loss: 0.5118 - auc: 0.8306 - val_loss: 0.5382 - val_auc: 0.8050
Epoch 32/100
36/36 [==============================] - 0s 9ms/step - loss: 0.5049 - auc: 0.8370 - val_loss: 0.5319 - val_auc: 0.8099
Epoch 33/100
36/36 [==============================] - 0s 9ms/step - loss: 0.4956 - auc: 0.8477 - val_loss: 0.5320 - val_auc: 0.8066
Epoch 34/100
36/36 [==============================] - 0s 9ms/step - loss: 0.4925 - auc: 0.8467 - val_loss: 0.5282 - val_auc: 0.8102
Epoch 35/100
36/36 [==============================] - 0s 9ms/step - loss: 0.4810 - auc: 0.8545 - val_loss: 0.5314 - val_auc: 0.8189
Epoch 36/100
36/36 [==============================] - 0s 9ms/step - loss: 0.4909 - auc: 0.8443 - val_loss: 0.5373 - val_auc: 0.8214
Epoch 37/100
36/36 [==============================] - 0s 9ms/step - loss: 0.4686 - auc: 0.8624 - val_loss: 0.5073 - val_auc: 0.8286
Epoch 38/100
36/36 [==============================] - 0s 9ms/step - loss: 0.4576 - auc: 0.8719 - val_loss: 0.5015 - val_auc: 0.8373
Epoch 39/100
36/36 [==============================] - 0s 9ms/step - loss: 0.4532 - auc: 0.8708 - val_loss: 0.5037 - val_auc: 0.8360
Epoch 40/100
36/36 [==============================] - 0s 9ms/step - loss: 0.4423 - auc: 0.8813 - val_loss: 0.4962 - val_auc: 0.8417
Epoch 41/100
36/36 [==============================] - 0s 9ms/step - loss: 0.4403 - auc: 0.8812 - val_loss: 0.4905 - val_auc: 0.8433
Epoch 42/100
36/36 [==============================] - 0s 9ms/step - loss: 0.4423 - auc: 0.8778 - val_loss: 0.5301 - val_auc: 0.8487
Epoch 43/100
36/36 [==============================] - 0s 9ms/step - loss: 0.4367 - auc: 0.8831 - val_loss: 0.4860 - val_auc: 0.8461
Epoch 44/100
36/36 [==============================] - 0s 9ms/step - loss: 0.4176 - auc: 0.8944 - val_loss: 0.4823 - val_auc: 0.8488
Epoch 45/100
36/36 [==============================] - 0s 9ms/step - loss: 0.4146 - auc: 0.8963 - val_loss: 0.4770 - val_auc: 0.8526
Epoch 46/100
36/36 [==============================] - 0s 9ms/step - loss: 0.4063 - auc: 0.9024 - val_loss: 0.4746 - val_auc: 0.8545
Epoch 47/100
36/36 [==============================] - 0s 9ms/step - loss: 0.3976 - auc: 0.9055 - val_loss: 0.4650 - val_auc: 0.8593
Epoch 48/100
36/36 [==============================] - 0s 9ms/step - loss: 0.3971 - auc: 0.9060 - val_loss: 0.4584 - val_auc: 0.8629
Epoch 49/100
36/36 [==============================] - 0s 9ms/step - loss: 0.3884 - auc: 0.9115 - val_loss: 0.4662 - val_auc: 0.8628
Epoch 50/100
36/36 [==============================] - 0s 9ms/step - loss: 0.3847 - auc: 0.9115 - val_loss: 0.4564 - val_auc: 0.8637
Epoch 51/100
36/36 [==============================] - 0s 9ms/step - loss: 0.3710 - auc: 0.9197 - val_loss: 0.4477 - val_auc: 0.8728
Epoch 52/100
36/36 [==============================] - 0s 9ms/step - loss: 0.3691 - auc: 0.9200 - val_loss: 0.4467 - val_auc: 0.8704
Epoch 53/100
36/36 [==============================] - 0s 9ms/step - loss: 0.3741 - auc: 0.9152 - val_loss: 0.4447 - val_auc: 0.8723
Epoch 54/100
36/36 [==============================] - 0s 9ms/step - loss: 0.3523 - auc: 0.9285 - val_loss: 0.4862 - val_auc: 0.8716
Epoch 55/100
36/36 [==============================] - 0s 9ms/step - loss: 0.3594 - auc: 0.9242 - val_loss: 0.4336 - val_auc: 0.8799
Epoch 56/100
36/36 [==============================] - 0s 9ms/step - loss: 0.3399 - auc: 0.9340 - val_loss: 0.4355 - val_auc: 0.8806
Epoch 57/100
36/36 [==============================] - 0s 9ms/step - loss: 0.3404 - auc: 0.9342 - val_loss: 0.4226 - val_auc: 0.8843
Epoch 58/100
36/36 [==============================] - 0s 9ms/step - loss: 0.3385 - auc: 0.9338 - val_loss: 0.4259 - val_auc: 0.8840
Epoch 59/100
36/36 [==============================] - 0s 9ms/step - loss: 0.3357 - auc: 0.9349 - val_loss: 0.4227 - val_auc: 0.8849
Epoch 60/100
36/36 [==============================] - 0s 9ms/step - loss: 0.3333 - auc: 0.9346 - val_loss: 0.4220 - val_auc: 0.8851
Epoch 61/100
36/36 [==============================] - 0s 9ms/step - loss: 0.3230 - auc: 0.9410 - val_loss: 0.4163 - val_auc: 0.8902
Epoch 62/100
36/36 [==============================] - 0s 9ms/step - loss: 0.3152 - auc: 0.9437 - val_loss: 0.4087 - val_auc: 0.8930
Epoch 63/100
36/36 [==============================] - 0s 9ms/step - loss: 0.2986 - auc: 0.9504 - val_loss: 0.4110 - val_auc: 0.8956
Epoch 64/100
36/36 [==============================] - 0s 9ms/step - loss: 0.2985 - auc: 0.9486 - val_loss: 0.4766 - val_auc: 0.8953
Epoch 65/100
36/36 [==============================] - 0s 9ms/step - loss: 0.3086 - auc: 0.9459 - val_loss: 0.4079 - val_auc: 0.8947
Epoch 66/100
36/36 [==============================] - 0s 9ms/step - loss: 0.3005 - auc: 0.9483 - val_loss: 0.4312 - val_auc: 0.8904
Epoch 67/100
36/36 [==============================] - 0s 9ms/step - loss: 0.2944 - auc: 0.9511 - val_loss: 0.4028 - val_auc: 0.8999
Epoch 68/100
36/36 [==============================] - 0s 9ms/step - loss: 0.2805 - auc: 0.9558 - val_loss: 0.4499 - val_auc: 0.8995
Epoch 69/100
36/36 [==============================] - 0s 9ms/step - loss: 0.2783 - auc: 0.9575 - val_loss: 0.3992 - val_auc: 0.8997
Epoch 70/100
36/36 [==============================] - 0s 9ms/step - loss: 0.2665 - auc: 0.9604 - val_loss: 0.4049 - val_auc: 0.9021
Epoch 71/100
36/36 [==============================] - 0s 9ms/step - loss: 0.2527 - auc: 0.9664 - val_loss: 0.4088 - val_auc: 0.9044
Epoch 72/100
36/36 [==============================] - 0s 9ms/step - loss: 0.2635 - auc: 0.9614 - val_loss: 0.3852 - val_auc: 0.9051
Epoch 73/100
36/36 [==============================] - 0s 9ms/step - loss: 0.2531 - auc: 0.9637 - val_loss: 0.3823 - val_auc: 0.9093
Epoch 74/100
36/36 [==============================] - 0s 9ms/step - loss: 0.2460 - auc: 0.9678 - val_loss: 0.3739 - val_auc: 0.9137
Epoch 75/100
36/36 [==============================] - 0s 9ms/step - loss: 0.2392 - auc: 0.9695 - val_loss: 0.3820 - val_auc: 0.9118
Epoch 76/100
36/36 [==============================] - 0s 9ms/step - loss: 0.2450 - auc: 0.9673 - val_loss: 0.3829 - val_auc: 0.9136
Epoch 77/100
36/36 [==============================] - 0s 9ms/step - loss: 0.2365 - auc: 0.9694 - val_loss: 0.3675 - val_auc: 0.9140
Epoch 78/100
36/36 [==============================] - 0s 9ms/step - loss: 0.2291 - auc: 0.9717 - val_loss: 0.3674 - val_auc: 0.9156
Epoch 79/100
36/36 [==============================] - 0s 9ms/step - loss: 0.2161 - auc: 0.9766 - val_loss: 0.3654 - val_auc: 0.9188
Epoch 80/100
36/36 [==============================] - 0s 9ms/step - loss: 0.2182 - auc: 0.9744 - val_loss: 0.3780 - val_auc: 0.9211
Epoch 81/100
36/36 [==============================] - 0s 9ms/step - loss: 0.2163 - auc: 0.9746 - val_loss: 0.3521 - val_auc: 0.9225
Epoch 82/100
36/36 [==============================] - 0s 9ms/step - loss: 0.2137 - auc: 0.9755 - val_loss: 0.3567 - val_auc: 0.9231
Epoch 83/100
36/36 [==============================] - 0s 9ms/step - loss: 0.2037 - auc: 0.9788 - val_loss: 0.3704 - val_auc: 0.9143
Epoch 84/100
36/36 [==============================] - 0s 9ms/step - loss: 0.2155 - auc: 0.9742 - val_loss: 0.3620 - val_auc: 0.9230
Epoch 85/100
36/36 [==============================] - 0s 9ms/step - loss: 0.2123 - auc: 0.9748 - val_loss: 0.3561 - val_auc: 0.9237
Epoch 86/100
36/36 [==============================] - 0s 9ms/step - loss: 0.2009 - auc: 0.9787 - val_loss: 0.3636 - val_auc: 0.9205
Epoch 87/100
36/36 [==============================] - 0s 9ms/step - loss: 0.2199 - auc: 0.9728 - val_loss: 0.3297 - val_auc: 0.9332
Epoch 88/100
36/36 [==============================] - 0s 9ms/step - loss: 0.1936 - auc: 0.9803 - val_loss: 0.3509 - val_auc: 0.9247
Epoch 89/100
36/36 [==============================] - 0s 9ms/step - loss: 0.1804 - auc: 0.9837 - val_loss: 0.3456 - val_auc: 0.9268
Epoch 90/100
36/36 [==============================] - 0s 9ms/step - loss: 0.1719 - auc: 0.9856 - val_loss: 0.3437 - val_auc: 0.9269
Epoch 91/100
36/36 [==============================] - 0s 9ms/step - loss: 0.1738 - auc: 0.9854 - val_loss: 0.3490 - val_auc: 0.9252
Epoch 92/100
36/36 [==============================] - 0s 9ms/step - loss: 0.1816 - auc: 0.9822 - val_loss: 0.4085 - val_auc: 0.9270
Epoch 93/100
36/36 [==============================] - 0s 9ms/step - loss: 0.1719 - auc: 0.9851 - val_loss: 0.3339 - val_auc: 0.9318
Epoch 94/100
36/36 [==============================] - 0s 9ms/step - loss: 0.1689 - auc: 0.9850 - val_loss: 0.3739 - val_auc: 0.9231
Epoch 95/100
36/36 [==============================] - 0s 9ms/step - loss: 0.1587 - auc: 0.9874 - val_loss: 0.3726 - val_auc: 0.9302
Epoch 96/100
36/36 [==============================] - 0s 9ms/step - loss: 0.1742 - auc: 0.9832 - val_loss: 0.3292 - val_auc: 0.9329
Epoch 97/100
36/36 [==============================] - 0s 9ms/step - loss: 0.1624 - auc: 0.9869 - val_loss: 0.3516 - val_auc: 0.9300
Epoch 98/100
36/36 [==============================] - 0s 9ms/step - loss: 0.1496 - auc: 0.9898 - val_loss: 0.3307 - val_auc: 0.9361
Epoch 99/100
36/36 [==============================] - 0s 9ms/step - loss: 0.1469 - auc: 0.9900 - val_loss: 0.3344 - val_auc: 0.9322
Epoch 100/100
36/36 [==============================] - 0s 9ms/step - loss: 0.1417 - auc: 0.9907 - val_loss: 0.4171 - val_auc: 0.9291
In [13]:
#AUC Plot
plt.plot(history.history['auc'])
plt.plot(history.history['val_auc'])
plt.title('Model AUC')
plt.ylabel('AUC')
plt.xlabel('Epoch')
plt.legend(['train', 'val'], loc='center right')
plt.show()
#Loss Plot
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['train', 'val'], loc='upper right')
plt.show()
In [14]:
score = model.evaluate(X_test, y_test, verbose=0)
print(f'Test loss: {score[0]}\nTest AUC: {score[1] * 100}')
Test loss: 0.32383885979652405
Test AUC: 95.2695369720459
In [15]:
#Predictions on the test set
test = X_test.reshape(-1, 96, 96, 3)
# predict_classes is deprecated for sigmoid outputs; threshold at 0.5 instead
predictions = (model.predict(test) > 0.5).astype("int32")
In [16]:
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
matrix = confusion_matrix(y_test,predictions, labels=[0,1])
print('Confusion matrix : \n',matrix)
matrix = classification_report(y_test,predictions,labels=[0,1])
print('Classification report : \n',matrix)
Confusion matrix : 
 [[153   7]
 [ 31 129]]
Classification report : 
               precision    recall  f1-score   support

           0       0.83      0.96      0.89       160
           1       0.95      0.81      0.87       160

    accuracy                           0.88       320
   macro avg       0.89      0.88      0.88       320
weighted avg       0.89      0.88      0.88       320

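The confusion matrix and report above use a fixed 0.5 threshold. For a threshold-independent check on the same test set, scikit-learn's ROC AUC can be computed from the raw sigmoid probabilities (a sketch; `probs` is a name introduced here):

from sklearn.metrics import roc_auc_score
# Use predicted probabilities, not hard 0/1 labels, for AUC.
probs = model.predict(test).ravel()
print('Test ROC AUC:', roc_auc_score(y_test, probs))
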
In [17]:
y_pred = predictions  # predictions are already thresholded to 0/1 above
arr = sklearn.metrics.confusion_matrix(y_test, y_pred)
df_cm = pd.DataFrame(arr, ['absent','present'], ['absent','present'])
# plt.figure(figsize=(10,7))
sns.set(font_scale=1) # for label size
sns.heatmap(df_cm, annot=True, annot_kws={"size": 16})
Out[17]:
<AxesSubplot:>
In [18]:
#Saving the model
model.save('pneumo.h5')
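
A quick sanity check that the saved file reloads correctly (standard Keras API; the sample and variable names are illustrative):

# Reload the saved model and compare one prediction with the in-memory model.
reloaded = keras.models.load_model('pneumo.h5')
sample = X_test[:1]
print(model.predict(sample), reloaded.predict(sample))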
In [19]:
!deepCC pneumo.h5
[INFO]
Reading [keras model] 'pneumo.h5'
[SUCCESS]
Saved 'pneumo_deepC/pneumo.onnx'
[INFO]
Reading [onnx model] 'pneumo_deepC/pneumo.onnx'
[INFO]
Model info:
  ir_vesion : 5
  doc       : 
[WARNING]
[ONNX]: graph-node conv2d's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: graph-node conv2d_1's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: graph-node conv2d_2's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: graph-node conv2d_3's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: graph-node conv2d_4's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: graph-node conv2d_5's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: terminal (input/output) conv2d_input's shape is less than 1. Changing it to 1.
[WARNING]
[ONNX]: terminal (input/output) dense's shape is less than 1. Changing it to 1.
WARN (GRAPH): found operator node with the same name (dense) as io node.
[INFO]
Running DNNC graph sanity check ...
[SUCCESS]
Passed sanity check.
[INFO]
Writing C++ file 'pneumo_deepC/pneumo.cpp'
[INFO]
deepSea model files are ready in 'pneumo_deepC/' 
[RUNNING COMMAND]
g++ -std=c++11 -O3 -fno-rtti -fno-exceptions -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -isystem /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 "pneumo_deepC/pneumo.cpp" -D_AITS_MAIN -o "pneumo_deepC/pneumo.exe"
[RUNNING COMMAND]
size "pneumo_deepC/pneumo.exe"
   text	   data	    bss	    dec	    hex	filename
 468683	   3768	    760	 473211	  7387b	pneumo_deepC/pneumo.exe
[SUCCESS]
Saved model as executable "pneumo_deepC/pneumo.exe"