
Alzheimer's Detection Using a CNN

Credit: AITS Cainvas Community

Photo by Killian López on Dribbble

Downloading the Dataset

In [1]:
!wget -N "https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/Alz_data.zip"
!unzip -qo Alz_data.zip 
!rm Alz_data.zip
--2021-07-14 09:43:00--  https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/Alz_data.zip
Resolving cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)... 52.219.156.63
Connecting to cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)|52.219.156.63|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 35524854 (34M) [application/x-zip-compressed]
Saving to: ‘Alz_data.zip’

Alz_data.zip        100%[===================>]  33.88M  84.2MB/s    in 0.4s    

2021-07-14 09:43:00 (84.2 MB/s) - ‘Alz_data.zip’ saved [35524854/35524854]
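Before loading anything, it is worth confirming what the archive unpacked. A quick peek, assuming the zip expands to an 'Alzheimer_s Dataset/' folder with train/ and test/ splits (the paths used below):

import os

# One subfolder per class is expected, since image_dataset_from_directory
# (used below) infers labels from folder names.
for split in ('Alzheimer_s Dataset/train', 'Alzheimer_s Dataset/test'):
    print(split, '->', sorted(os.listdir(split)))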

Importing Necessary Libraries

In [2]:
import tensorflow as tf
from tensorflow import keras 
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as img
%matplotlib inline
import tensorflow.keras.backend as K
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.preprocessing import image
from pylab import imread,subplot,imshow,show
import cv2
import os

Rescaling and Data Loading

In [3]:
# Note: these rescaling generators are defined but never used below;
# the tf.data pipeline built with image_dataset_from_directory loads
# raw pixel values in [0, 255].
train_gen = ImageDataGenerator(rescale=1./255)
test_gen = ImageDataGenerator(rescale=1./255)
val_gen = ImageDataGenerator(rescale=1./255)
In [4]:
train_dir = 'Alzheimer_s Dataset/train/'
In [5]:
train_data = tf.keras.preprocessing.image_dataset_from_directory(
    train_dir,
    validation_split=0.2,
    image_size=(224,224),
    batch_size=32,
    subset='training',
    seed=1000 )
Found 5121 files belonging to 4 classes.
Using 4097 files for training.
In [6]:
val_dir = 'Alzheimer_s Dataset/train/'  # same folder; validation_split carves out the 20%
In [7]:
val_data = tf.keras.preprocessing.image_dataset_from_directory(
    val_dir,
    validation_split=0.2,
    image_size=(224,224),
    batch_size=32,
    subset='validation',
    seed=1000
    )
Found 5121 files belonging to 4 classes.
Using 1024 files for validation.
In [8]:
test_dir = 'Alzheimer_s Dataset/test/'
In [9]:
test_data=tf.keras.preprocessing.image_dataset_from_directory(
    test_dir,
    image_size=(224,224),
    batch_size=32,
    seed=1000
    )
Found 1279 files belonging to 4 classes.
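These pipelines deliver raw pixels in [0, 255]; the rescaling generators from In [3] are never applied to them. If normalized inputs are wanted, one option is to map a Rescaling layer over each dataset. This is only a sketch (the *_scaled names are hypothetical, and the display and prediction cells later in this notebook assume raw pixels):

normalize = tf.keras.layers.experimental.preprocessing.Rescaling(1./255)
# Note: .map() returns plain datasets without the class_names attribute,
# so capture class names first if you need them afterwards.
train_scaled = train_data.map(lambda x, y: (normalize(x), y))
val_scaled = val_data.map(lambda x, y: (normalize(x), y))
test_scaled = test_data.map(lambda x, y: (normalize(x), y))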
In [10]:
class_names = ['MildDementia', 'ModerateDementia', 'NonDementia', 'VeryMildDementia']
In [11]:
# Relabel the classes; the order must match the sorted folder names.
train_data.class_names = class_names
val_data.class_names = class_names
In [12]:
print(val_data)
<BatchDataset shapes: ((None, 224, 224, 3), (None,)), types: (tf.float32, tf.int32)>
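The printout confirms batches of (images, labels): float32 image tensors of shape (batch, 224, 224, 3) and int32 integer labels of shape (batch,). The same information is available via the dataset's element_spec:

print(val_data.element_spec)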
In [13]:
plt.figure(figsize=(10, 10))
for images, labels in train_data.take(1):
    for i in range(6):
        ax = plt.subplot(3, 3, i + 1)
        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(train_data.class_names[labels[i]])
        plt.axis("off")
In [14]:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D, Input
In [15]:
model=Sequential()
In [16]:
model.add(Conv2D(16,(3,3), activation='relu', input_shape=(224,224,3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32,(3,3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32,(3,3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(10,activation='relu'))
model.add(Dense(5,activation='relu'))
model.add(Dense(12,activation='relu'))
model.add(Dense(30,activation='relu'))
model.add(Dense(10,activation='relu'))
model.add(Dense(100,activation='relu'))
model.add(Dense(133,activation='relu'))
model.add(Dense(4,activation='softmax'))
In [17]:
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 222, 222, 16)      448       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 111, 111, 16)      0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 109, 109, 32)      4640      
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 54, 54, 32)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 52, 52, 32)        9248      
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 26, 26, 32)        0         
_________________________________________________________________
flatten (Flatten)            (None, 21632)             0         
_________________________________________________________________
dense (Dense)                (None, 10)                216330    
_________________________________________________________________
dense_1 (Dense)              (None, 5)                 55        
_________________________________________________________________
dense_2 (Dense)              (None, 12)                72        
_________________________________________________________________
dense_3 (Dense)              (None, 30)                390       
_________________________________________________________________
dense_4 (Dense)              (None, 10)                310       
_________________________________________________________________
dense_5 (Dense)              (None, 100)               1100      
_________________________________________________________________
dense_6 (Dense)              (None, 133)               13433     
_________________________________________________________________
dense_7 (Dense)              (None, 4)                 536       
=================================================================
Total params: 246,562
Trainable params: 246,562
Non-trainable params: 0
_________________________________________________________________
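The counts in the summary can be verified by hand: a Conv2D layer has (kernel_height × kernel_width × in_channels + 1) × filters parameters, and a Dense layer has inputs × units + units. For example, for the first convolution and the first dense layer:

conv1_params = (3 * 3 * 3 + 1) * 16    # 448, as reported for conv2d
dense_params = 21632 * 10 + 10         # 216330, as reported for dense
print(conv1_params, dense_params)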
In [18]:
model.compile(optimizer = tf.keras.optimizers.Adam(1e-4), loss="sparse_categorical_crossentropy", metrics=["accuracy"])
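sparse_categorical_crossentropy fits this pipeline because image_dataset_from_directory yields integer class indices (the int32 labels of shape (None,) printed earlier), not one-hot vectors. A minimal illustration of the two label formats, using a made-up prediction:

y_int = tf.constant([2])                        # integer label, as the pipeline yields
y_hot = tf.one_hot(y_int, depth=4)              # one-hot form that categorical CE expects
probs = tf.constant([[0.1, 0.2, 0.5, 0.2]])     # a made-up predicted distribution
print(tf.keras.losses.sparse_categorical_crossentropy(y_int, probs))  # -log(0.5)
print(tf.keras.losses.categorical_crossentropy(y_hot, probs))         # same value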
In [19]:
history = model.fit(train_data, validation_data=val_data, epochs=10)
Epoch 1/10
129/129 [==============================] - 7s 51ms/step - loss: 1.0227 - accuracy: 0.5116 - val_loss: 0.8785 - val_accuracy: 0.5742
Epoch 2/10
129/129 [==============================] - 6s 49ms/step - loss: 0.8658 - accuracy: 0.5880 - val_loss: 1.0934 - val_accuracy: 0.5332
Epoch 3/10
129/129 [==============================] - 6s 49ms/step - loss: 0.8157 - accuracy: 0.6117 - val_loss: 0.7731 - val_accuracy: 0.6182
Epoch 4/10
129/129 [==============================] - 6s 49ms/step - loss: 0.7660 - accuracy: 0.6412 - val_loss: 0.7004 - val_accuracy: 0.6729
Epoch 5/10
129/129 [==============================] - 6s 48ms/step - loss: 0.6692 - accuracy: 0.6966 - val_loss: 0.6423 - val_accuracy: 0.7129
Epoch 6/10
129/129 [==============================] - 6s 48ms/step - loss: 0.5324 - accuracy: 0.7718 - val_loss: 0.8265 - val_accuracy: 0.6572
Epoch 7/10
129/129 [==============================] - 6s 49ms/step - loss: 0.4833 - accuracy: 0.8018 - val_loss: 0.4976 - val_accuracy: 0.7930
Epoch 8/10
129/129 [==============================] - 6s 48ms/step - loss: 0.3579 - accuracy: 0.8619 - val_loss: 0.4487 - val_accuracy: 0.8203
Epoch 9/10
129/129 [==============================] - 6s 48ms/step - loss: 0.2537 - accuracy: 0.9112 - val_loss: 0.3463 - val_accuracy: 0.8662
Epoch 10/10
129/129 [==============================] - 6s 49ms/step - loss: 0.2310 - accuracy: 0.9151 - val_loss: 0.3526 - val_accuracy: 0.8584

Model Saving

In [20]:
model.save("alz_model1.h5")
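A quick round-trip check (a sketch; `reloaded` is a hypothetical name): loading the HDF5 file back should reproduce the evaluation below.

reloaded = tf.keras.models.load_model("alz_model1.h5")
reloaded.evaluate(val_data)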
In [21]:
model.evaluate(val_data)
32/32 [==============================] - 1s 25ms/step - loss: 0.3526 - accuracy: 0.8584
Out[21]:
[0.35255926847457886, 0.8583984375]

Graph Plotting

In [22]:
plt.plot(history.history['accuracy'])
plt.title('Training Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.show()
In [23]:
plt.plot(history.history['loss'])
plt.title('Training Loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.show()
In [24]:
plt.plot(history.history['val_accuracy'])
plt.title('Validation Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.show()
In [25]:
plt.plot(history.history['val_loss'])
plt.title('Validation Loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.show()
In [26]:
loss_train = history.history['loss']
loss_val = history.history['val_loss']
plt.plot(loss_train, 'g', label='Training loss')
plt.plot(loss_val, 'b', label='Validation loss')
plt.title('Training and Validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
In [27]:
accuracy_train = history.history['accuracy']
accuracy_val = history.history['val_accuracy']
plt.plot(accuracy_train, 'g', label='Training accuracy')
plt.plot(accuracy_val, 'b', label='Validation accuracy')
plt.title('Training and Validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()

Prediction

In [28]:
label_map = {0: "Mild Dementia", 1: "Moderate Dementia", 2: "Non Dementia", 3: "Very Mild Dementia"}
In [29]:
for images, labels in val_data.take(1):
    for i in range(6):
        print("True_class:", val_data.class_names[labels[i]])
        x = image.img_to_array(images[i])
        x = np.expand_dims(x, axis=0)
        p = np.argmax(model.predict(x))
        print("Predicted Image:", label_map[p])
        print("Predicted class:", p)
True_class: VeryMildDementia
Predicted Image: Very Mild Dementia
Predicted class: 3
True_class: VeryMildDementia
Predicted Image: Non Dementia
Predicted class: 2
True_class: NonDementia
Predicted Image: Non Dementia
Predicted class: 2
True_class: VeryMildDementia
Predicted Image: Very Mild Dementia
Predicted class: 3
True_class: NonDementia
Predicted Image: Non Dementia
Predicted class: 2
True_class: VeryMildDementia
Predicted Image: Very Mild Dementia
Predicted class: 3
Five of the six sampled predictions match the true class; one very-mild case is misread as non-demented.
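Eyeballing six samples is a weak check. A fuller follow-up (a sketch, not run in this notebook) scores the whole held-out test split; model.evaluate(test_data) reports the same accuracy more directly.

y_true, y_pred = [], []
for images, labels in test_data:
    preds = np.argmax(model.predict(images), axis=1)   # predicted class per image
    y_true.extend(labels.numpy())
    y_pred.extend(preds)
print("Test accuracy:", np.mean(np.array(y_true) == np.array(y_pred)))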

deepCC

In [30]:
!deepCC alz_model1.h5
[INFO]
Reading [keras model] 'alz_model1.h5'
[SUCCESS]
Saved 'alz_model1_deepC/alz_model1.onnx'
[INFO]
Reading [onnx model] 'alz_model1_deepC/alz_model1.onnx'
[INFO]
Model info:
  ir_vesion : 5
  doc       : 
[WARNING]
[ONNX]: terminal (input/output) conv2d_input's shape is less than 1. Changing it to 1.
[WARNING]
[ONNX]: terminal (input/output) dense_7's shape is less than 1. Changing it to 1.
WARN (GRAPH): found operator node with the same name (dense_7) as io node.
[INFO]
Running DNNC graph sanity check ...
[SUCCESS]
Passed sanity check.
[INFO]
Writing C++ file 'alz_model1_deepC/alz_model1.cpp'
[INFO]
deepSea model files are ready in 'alz_model1_deepC/' 
[RUNNING COMMAND]
g++ -std=c++11 -O3 -fno-rtti -fno-exceptions -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -isystem /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 "alz_model1_deepC/alz_model1.cpp" -D_AITS_MAIN -o "alz_model1_deepC/alz_model1.exe"
[RUNNING COMMAND]
size "alz_model1_deepC/alz_model1.exe"
   text	   data	    bss	    dec	    hex	filename
1171741	   3784	    760	1176285	 11f2dd	alz_model1_deepC/alz_model1.exe
[SUCCESS]
Saved model as executable "alz_model1_deepC/alz_model1.exe"