
Covid-19 Detection

Credit: AITS Cainvas Community

Photo by Hilmy Fawwazy on Dribbble

We are all aware of the pandemic caused by the outbreak of the novel coronavirus and the large number of deaths it has caused worldwide. There is therefore an urgent need for a deep learning model that can help identify COVID-19 patients.

Importing the Dataset

The dataset contains spectrograms of cough sounds from normal and COVID-19 patients.

In [1]:
# This will load the dataset. You will see a folder called CovidDetec in your workspace.
!wget -N "https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/CovidDetec.zip"
!unzip -qo CovidDetec.zip 
!rm CovidDetec.zip
--2020-12-18 18:03:05--  https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/CovidDetec.zip
Resolving cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)... 52.219.66.8
Connecting to cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)|52.219.66.8|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3479943 (3.3M) [application/zip]
Saving to: ‘CovidDetec.zip’

CovidDetec.zip      100%[===================>]   3.32M  --.-KB/s    in 0.06s   

2020-12-18 18:03:05 (59.2 MB/s) - ‘CovidDetec.zip’ saved [3479943/3479943]
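After extraction, it can help to confirm the folder layout before moving on. Below is a minimal sketch (assuming the archive unpacks into a CovidDetec/ directory, as used throughout this notebook) that walks the extracted tree and counts the files in each folder.

import os

# Count the files in each folder of the extracted dataset
# (assumes the zip unpacked into 'CovidDetec/').
for root, dirs, files in os.walk("CovidDetec"):
    if files:
        print(root, ":", len(files), "files")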


The following code was used to convert the audio files to spectrograms. You don't have to worry about this step, as it has already been done for you.

import librosa
import librosa.display
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from glob import glob

data_dir = 'CovidDetec/Covid'
audio_files = glob(data_dir + '/*.wav')
print(len(audio_files))

# Load the first recording and start a (num_files, num_samples) array.
# Note: stacking with np.append assumes every clip has the same length.
samples, sample_rate = librosa.load(audio_files[1], sr=44100)
sample = np.array(samples)
sample = sample.reshape(1, sample.shape[0])
for i in range(2, 8):
    # Load each remaining recording and stack it onto the array
    samples, sample_rate = librosa.load(audio_files[i], sr=44100)
    temp = np.array(samples)
    temp = temp.reshape(1, temp.shape[0])
    sample = np.append(sample, temp, axis=0)
    print(type(sample))
    print(sample.shape)

# Render a mel spectrogram for each recording and save it as an image
for i in range(1, 8):
    fig = plt.figure(figsize=[4, 4])
    ax = fig.add_subplot(111)
    ax.axes.get_xaxis().set_visible(False)
    ax.axes.get_yaxis().set_visible(False)
    ax.set_frame_on(False)
    S = librosa.feature.melspectrogram(y=sample[i - 1], sr=sample_rate)
    librosa.display.specshow(librosa.power_to_db(S, ref=np.max))
    direc = 'CovidDetec/MS/CovidMS/Covid10' + str(i)
    plt.savefig(direc)
    plt.close(fig)

Importing important libraries

In [2]:
# import libraries

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import cv2
from keras.layers import Dense, Flatten, AveragePooling2D, Dropout
from keras.models import Model
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator
from keras.optimizers import Adam

Data Visualization

In [3]:
data_path = "CovidDetec/MS/"
In [4]:
# Check images
img = cv2.imread("CovidDetec/MS/Train/Covid/Covid10.png")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.title("Covid Cough Spectogram")
plt.imshow(img)
Out[4]:
<matplotlib.image.AxesImage at 0x7fa6e0712b70>
In [5]:
# Check images
img = cv2.imread("CovidDetec/MS/Train/Normal/Normal11.png")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.title("Normal Cough Spectogram")
plt.imshow(img)
Out[5]:
<matplotlib.image.AxesImage at 0x7fa6de607ef0>

Creating TrainSet and TestSet

In [6]:
# Data augmentation for the train and test sets

train_datagen = ImageDataGenerator(rescale = 1./255,
                                   zoom_range = 0.2,
                                   rotation_range=15,
                                   horizontal_flip = True)

test_datagen = ImageDataGenerator(rescale = 1./255)
In [7]:
# create dataset train
training_set = train_datagen.flow_from_directory(data_path + 'Train',
                                                 target_size = (224, 224),
                                                 batch_size = 16,
                                                 class_mode = 'categorical',
                                                 shuffle=True)
Found 60 images belonging to 2 classes.
In [8]:
# Create test data set
test_set = test_datagen.flow_from_directory(data_path + 'Test',
                                            target_size = (224, 224),
                                            batch_size = 16,
                                            class_mode = 'categorical',
                                            shuffle = False)
Found 10 images belonging to 2 classes.
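Before building the model, it can be useful to check how flow_from_directory mapped the class folders to label indices. A quick check, assuming the Covid and Normal subfolders shown above:

# Folder-name-to-label mapping assigned by the generators
print(training_set.class_indices)
print(test_set.class_indices)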

Model Architecture

In [9]:
# Model creation: frozen VGG16 base with a new classification head

model = VGG16(input_shape=(224,224,3),include_top=False)

for layer in model.layers:
    layer.trainable = False

newModel = model.output
newModel = AveragePooling2D()(newModel)
newModel = Flatten()(newModel)
newModel = Dense(128, activation="relu")(newModel)
newModel = Dropout(0.5)(newModel)
newModel = Dense(2, activation='softmax')(newModel)

model = Model(inputs=model.input, outputs=newModel)
model.summary()
Model: "functional_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         [(None, 224, 224, 3)]     0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0         
_________________________________________________________________
average_pooling2d (AveragePo (None, 3, 3, 512)         0         
_________________________________________________________________
flatten (Flatten)            (None, 4608)              0         
_________________________________________________________________
dense (Dense)                (None, 128)               589952    
_________________________________________________________________
dropout (Dropout)            (None, 128)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 2)                 258       
=================================================================
Total params: 15,304,898
Trainable params: 590,210
Non-trainable params: 14,714,688
_________________________________________________________________

Model Training

In [10]:
opt=Adam(learning_rate=0.0001)
model.compile(optimizer=opt, loss='binary_crossentropy', metrics=['accuracy'])
In [11]:
history = model.fit(training_set,
                              validation_data=test_set,
                              epochs=8)
Epoch 1/8
4/4 [==============================] - 2s 467ms/step - loss: 0.6665 - accuracy: 0.6000 - val_loss: 0.7292 - val_accuracy: 0.5000
Epoch 2/8
4/4 [==============================] - 1s 131ms/step - loss: 0.6458 - accuracy: 0.6500 - val_loss: 0.6467 - val_accuracy: 0.5000
Epoch 3/8
4/4 [==============================] - 1s 130ms/step - loss: 0.7042 - accuracy: 0.6167 - val_loss: 0.6092 - val_accuracy: 0.5000
Epoch 4/8
4/4 [==============================] - 1s 128ms/step - loss: 0.5791 - accuracy: 0.6833 - val_loss: 0.5895 - val_accuracy: 0.5000
Epoch 5/8
4/4 [==============================] - 1s 128ms/step - loss: 0.4159 - accuracy: 0.8500 - val_loss: 0.5570 - val_accuracy: 0.6000
Epoch 6/8
4/4 [==============================] - 1s 127ms/step - loss: 0.4948 - accuracy: 0.7500 - val_loss: 0.5026 - val_accuracy: 0.7000
Epoch 7/8
4/4 [==============================] - 1s 130ms/step - loss: 0.4110 - accuracy: 0.8167 - val_loss: 0.4573 - val_accuracy: 0.9000
Epoch 8/8
4/4 [==============================] - 1s 128ms/step - loss: 0.3933 - accuracy: 0.8667 - val_loss: 0.4293 - val_accuracy: 1.0000

Assessing the performance of the model

In [12]:
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs=range(len(acc))
plt.plot(epochs,acc,label='Training_acc',color='blue')
plt.plot(epochs,val_acc,label='Validation_acc',color='red')
plt.legend()
plt.title("Training and Validation Accuracy")
Out[12]:
Text(0.5, 1.0, 'Training and Validation Accuracy')
In [13]:
plt.plot(epochs,loss,label='Training_loss',color='blue')
plt.plot(epochs,val_loss,label='Validation_loss',color='red')
plt.legend()
plt.title("Training and Validation loss")
Out[13]:
Text(0.5, 1.0, 'Training and Validation loss')
In [14]:
print("Accuracy of our model on test data : " , model.evaluate(test_set)[1]*100 , "%")
1/1 [==============================] - 0s 1ms/step - loss: 0.4293 - accuracy: 1.0000
Accuracy of our model on test data :  100.0 %

Saving the model

In [15]:
model.save("CovidTest.h5")

Compiling the model with the DeepC Compiler

In [ ]:
!deepCC CovidTest.h5
[INFO]
Reading [keras model] 'CovidTest.h5'
[SUCCESS]
Saved 'CovidTest.onnx'
[INFO]
Reading [onnx model] 'CovidTest.onnx'
[INFO]
Model info:
  ir_vesion : 5
  doc       : 
[WARNING]
[ONNX]: graph-node block1_conv1's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: graph-node block1_conv2's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: graph-node block2_conv1's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: graph-node block2_conv2's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: graph-node block3_conv1's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: graph-node block3_conv2's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: graph-node block3_conv3's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: graph-node block4_conv1's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: graph-node block4_conv2's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: graph-node block4_conv3's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: graph-node block5_conv1's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: graph-node block5_conv2's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: graph-node block5_conv3's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: terminal (input/output) input_1's shape is less than 1. Changing it to 1.
[WARNING]
[ONNX]: terminal (input/output) dense_1's shape is less than 1. Changing it to 1.
WARN (GRAPH): found operator node with the same name (dense_1) as io node.
[INFO]
Running DNNC graph sanity check ...
[SUCCESS]
Passed sanity check.
[INFO]
Writing C++ file 'CovidTest_deepC/CovidTest.cpp'
[INFO]
deepSea model files are ready in 'CovidTest_deepC/' 
[RUNNING COMMAND]
g++ -std=c++11 -O3 -fno-rtti -fno-exceptions -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -isystem /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 CovidTest_deepC/CovidTest.cpp -o CovidTest_deepC/CovidTest.exe