Cainvas

Gemstone Classification using Deep Learning

Credit: AITS Cainvas Community

Photo by Ivan Mesaros on Dribbble

A gemstone (gem, fine gem, jewel, precious stone, or semi-precious stone) is a piece of mineral crystal which, in cut and polished form, is used to make jewelry or other adornments. In this notebook, we will classify the type of gemstone from a given image.

Import all the required libraries

In [1]:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import wget
import os
import cv2
from random import randint

from tensorflow.keras.utils import to_categorical
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from sklearn.metrics import classification_report, log_loss, accuracy_score
from sklearn.model_selection import train_test_split
In [2]:
!pip install wget
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: wget in ./.local/lib/python3.7/site-packages (3.2)
WARNING: You are using pip version 20.3.1; however, version 21.1.3 is available.
You should consider upgrading via the '/opt/tljh/user/bin/python -m pip install --upgrade pip' command.

Download the dataset and unzip it for use in our notebook.

In [3]:
!wget -N "https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/gemstones.zip"
!unzip -qo gemstones.zip
--2021-07-21 05:00:53--  https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/gemstones.zip
Resolving cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)... 52.219.160.15
Connecting to cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)|52.219.160.15|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 24678423 (24M) [application/x-zip-compressed]
Saving to: ‘gemstones.zip’

gemstones.zip       100%[===================>]  23.54M  98.0MB/s    in 0.2s    

2021-07-21 05:00:54 (98.0 MB/s) - ‘gemstones.zip’ saved [24678423/24678423]

In [4]:
directory = 'gemstones/train/'
In [5]:
# List all the gemstone categories present in our dataset
Name = []
for file in os.listdir(directory):
    Name.append(file)
print(Name)
print(len(Name))
['Morganite', 'Chrysoberyl', 'Zoisite', 'Serpentine', 'Spodumene', 'Spinel', 'Onyx Red', 'Iolite', 'Larimar', 'Chrome Diopside', 'Carnelian', 'Prehnite', 'Andradite', 'Sapphire Pink', 'Aventurine Green', 'Sapphire Yellow', 'Quartz Beer', 'Andalusite', 'Rhodochrosite', 'Alexandrite', 'Quartz Smoky', 'Cats Eye', 'Danburite', 'Tigers Eye', 'Topaz', 'Peridot', 'Variscite', 'Lapis Lazuli', 'Quartz Rose', 'Blue Lace Agate', 'Chalcedony', 'Hessonite', 'Ametrine', 'Sunstone', 'Emerald', 'Ruby', 'Diamond', 'Aventurine Yellow', 'Dumortierite', 'Chrysoprase']
40

Map each gemstone category to a numeric index and print the mapping. There are a total of 40 different kinds of gemstones in the training set.

In [6]:
gems_map = {name: idx for idx, name in enumerate(Name)}   # name -> index
print(gems_map)
r_gems_map = {idx: name for idx, name in enumerate(Name)} # reverse lookup: index -> name
{'Morganite': 0, 'Chrysoberyl': 1, 'Zoisite': 2, 'Serpentine': 3, 'Spodumene': 4, 'Spinel': 5, 'Onyx Red': 6, 'Iolite': 7, 'Larimar': 8, 'Chrome Diopside': 9, 'Carnelian': 10, 'Prehnite': 11, 'Andradite': 12, 'Sapphire Pink': 13, 'Aventurine Green': 14, 'Sapphire Yellow': 15, 'Quartz Beer': 16, 'Andalusite': 17, 'Rhodochrosite': 18, 'Alexandrite': 19, 'Quartz Smoky': 20, 'Cats Eye': 21, 'Danburite': 22, 'Tigers Eye': 23, 'Topaz': 24, 'Peridot': 25, 'Variscite': 26, 'Lapis Lazuli': 27, 'Quartz Rose': 28, 'Blue Lace Agate': 29, 'Chalcedony': 30, 'Hessonite': 31, 'Ametrine': 32, 'Sunstone': 33, 'Emerald': 34, 'Ruby': 35, 'Diamond': 36, 'Aventurine Yellow': 37, 'Dumortierite': 38, 'Chrysoprase': 39}
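
For example (an illustrative snippet, not a cell from the original notebook), the two dictionaries invert one another:

print(gems_map['Morganite'])   # 0
print(r_gems_map[0])           # 'Morganite'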
In [7]:
img_w, img_h = 100, 100

Create functions to read images and labels of gemstones from the training dataset.

In [8]:
# Function which reads images and their class names from the training folders
def read_images():
    Images, Labels = [], []
    for root, dirs, files in os.walk('gemstones/train/'):
        f = os.path.basename(root)                          # folder name is the class label
        for file in files:
            try:
                image = cv2.imread(os.path.join(root, file))    # read the image (OpenCV)
                image = cv2.resize(image, (img_w, img_h))       # resize (images come in different sizes)
                image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # convert from BGR to RGB color space
                Images.append(image)
                Labels.append(f)    # append the label only if the image loaded successfully
            except Exception as e:
                print(e)
    Images = np.array(Images)
    return (Images, Labels)
In [9]:
# Function which converts string labels to numeric class indices
def get_class_index(Labels):
    # look each label up in gems_map instead of scanning Name for every item
    Labels = np.array([gems_map[l] for l in Labels])
    return Labels

Read images and labels from the training set.

In [10]:
Train_Imgs, Train_Lbls = read_images()
Train_Lbls = get_class_index(Train_Lbls)
print('Shape of train images: {}'.format(Train_Imgs.shape))
print('Shape of train labels: {}'.format(Train_Lbls.shape))
Shape of train images: (1303, 100, 100, 3)
Shape of train labels: (1303,)

Display some random images from the gemstone training set.

In [11]:
dim = 5

f,ax = plt.subplots(dim,dim) 
f.subplots_adjust(0,0,2,2)
for i in range(0,dim):
    for j in range(0,dim):
        rnd_number = randint(0, len(Train_Imgs) - 1)   # randint is inclusive at both ends
        cl = Train_Lbls[rnd_number]
        ax[i,j].imshow(Train_Imgs[rnd_number])
        ax[i,j].set_title(Name[cl]+': ' + str(cl))
        ax[i,j].axis('off')

Split the training dataset into train and validation sets.

In [12]:
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(Train_Imgs, Train_Lbls, shuffle = True, test_size = 0.2, random_state = 42)
print('Shape of X_train: {}, y_train: {} '.format(X_train.shape, y_train.shape))
print('Shape of X_val: {}, y_val: {} '.format(X_val.shape, y_val.shape))
Shape of X_train: (1042, 100, 100, 3), y_train: (1042,) 
Shape of X_val: (261, 100, 100, 3), y_val: (261,) 
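
With 40 classes and roughly 1,300 images, a stratified split would keep per-class proportions similar between the two sets. A minimal variant of the call above (a sketch only; the run here used a plain random split):

X_train, X_val, y_train, y_val = train_test_split(
    Train_Imgs, Train_Lbls, shuffle=True, test_size=0.2,
    random_state=42, stratify=Train_Lbls)   # stratify keeps per-class ratios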
In [13]:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense,Conv2D,MaxPooling2D,Dropout,Flatten,Activation,BatchNormalization, AveragePooling2D
from tensorflow.keras.models import model_from_json
from tensorflow.keras.models import load_model

Create a sequential model.

In [14]:
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(3,3),input_shape=(100,100,3), activation='relu', padding = 'same'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding = 'same'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding = 'same'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding = 'same'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding = 'same'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding = 'same'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())

model.add(Dense(256))
model.add(Activation('relu'))
model.add(Dropout(0.2))

model.add(Dense(len(gems_map)))
model.add(Activation('softmax'))

model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 100, 100, 32)      896       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 50, 50, 32)        0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 50, 50, 64)        18496     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 25, 25, 64)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 25, 25, 64)        36928     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 12, 12, 64)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 12, 12, 64)        36928     
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 6, 6, 64)          0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 6, 6, 64)          36928     
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 3, 3, 64)          0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 3, 3, 64)          36928     
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 1, 1, 64)          0         
_________________________________________________________________
flatten (Flatten)            (None, 64)                0         
_________________________________________________________________
dense (Dense)                (None, 256)               16640     
_________________________________________________________________
activation (Activation)      (None, 256)               0         
_________________________________________________________________
dropout (Dropout)            (None, 256)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 40)                10280     
_________________________________________________________________
activation_1 (Activation)    (None, 40)                0         
=================================================================
Total params: 194,024
Trainable params: 194,024
Non-trainable params: 0
_________________________________________________________________
In [15]:
model.compile(optimizer='adam',loss='sparse_categorical_crossentropy',metrics=['accuracy'])

Perform data augmentation on the images so that the model sees a larger variety of training samples.

In [16]:
train_datagen = ImageDataGenerator(vertical_flip=True,
                                   horizontal_flip=True,
                                   rotation_range=40,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   zoom_range=0.1)

val_datagen = ImageDataGenerator()   # no augmentation for validation images
In [17]:
batch_size = 32
In [18]:
n = randint(0, len(X_train) - 1)
samples = np.expand_dims(X_train[n], 0)
it = train_datagen.flow(samples, batch_size=batch_size)
cols = 7

fig, ax = plt.subplots(nrows=1, ncols=cols, figsize=(15, 10))
ax[0].imshow(X_train[n])
ax[0].set_title('Original', fontsize=10)

for i in range(1, cols):
    batch = next(it)                    # generate a batch of augmented images
    image = batch[0].astype('uint8')    # convert to uint8 so matplotlib can display it
    ax[i].set_title('augmented {}'.format(i), fontsize=10)
    ax[i].imshow(image)
In [19]:
train_gen = train_datagen.flow(X_train, y_train, batch_size=batch_size)
val_gen = val_datagen.flow(X_val, y_val, batch_size=batch_size)
In [20]:
EPOCHS = 80                           
iter_per_epoch = len(X_train) // batch_size  
val_per_epoch = len(X_val) // batch_size     
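
Training below runs for a fixed 80 epochs. As an optional refinement (a sketch only, not used in the original run; the checkpoint filename is hypothetical), Keras callbacks could stop training early and keep the best weights:

from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

# Hypothetical callbacks -- not part of the original training run
callbacks = [
    EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True),
    ModelCheckpoint('gemstone_best.h5', monitor='val_accuracy', save_best_only=True),
]
# pass callbacks=callbacks to model.fit(...) below to enable them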

Train the sequential model.

In [21]:
m = model.fit(
       train_gen,
       steps_per_epoch= iter_per_epoch,
       epochs=EPOCHS, 
       validation_data = val_gen,
       validation_steps = val_per_epoch,
       verbose = 1 
       )
Epoch 1/80
32/32 [==============================] - 2s 66ms/step - loss: 4.2079 - accuracy: 0.0376 - val_loss: 3.6245 - val_accuracy: 0.0664
Epoch 2/80
32/32 [==============================] - 2s 60ms/step - loss: 3.4284 - accuracy: 0.0950 - val_loss: 3.2442 - val_accuracy: 0.0820
Epoch 3/80
32/32 [==============================] - 2s 60ms/step - loss: 2.9282 - accuracy: 0.1505 - val_loss: 2.5768 - val_accuracy: 0.1797
Epoch 4/80
32/32 [==============================] - 2s 60ms/step - loss: 2.4849 - accuracy: 0.2485 - val_loss: 2.1883 - val_accuracy: 0.2930
Epoch 5/80
32/32 [==============================] - 2s 61ms/step - loss: 2.1910 - accuracy: 0.3198 - val_loss: 1.7890 - val_accuracy: 0.3750
Epoch 6/80
32/32 [==============================] - 2s 61ms/step - loss: 1.8113 - accuracy: 0.4141 - val_loss: 1.6815 - val_accuracy: 0.4688
Epoch 7/80
32/32 [==============================] - 2s 60ms/step - loss: 1.7632 - accuracy: 0.4406 - val_loss: 1.6135 - val_accuracy: 0.4727
Epoch 8/80
32/32 [==============================] - 2s 61ms/step - loss: 1.5235 - accuracy: 0.5000 - val_loss: 1.6377 - val_accuracy: 0.4766
Epoch 9/80
32/32 [==============================] - 2s 60ms/step - loss: 1.4484 - accuracy: 0.5050 - val_loss: 1.3522 - val_accuracy: 0.5625
Epoch 10/80
32/32 [==============================] - 2s 60ms/step - loss: 1.3718 - accuracy: 0.5396 - val_loss: 1.3716 - val_accuracy: 0.5312
Epoch 11/80
32/32 [==============================] - 2s 61ms/step - loss: 1.3440 - accuracy: 0.5547 - val_loss: 1.3816 - val_accuracy: 0.5625
Epoch 12/80
32/32 [==============================] - 2s 60ms/step - loss: 1.2788 - accuracy: 0.5941 - val_loss: 1.5098 - val_accuracy: 0.5312
Epoch 13/80
32/32 [==============================] - 2s 60ms/step - loss: 1.3280 - accuracy: 0.5772 - val_loss: 1.3070 - val_accuracy: 0.5977
Epoch 14/80
32/32 [==============================] - 2s 59ms/step - loss: 1.2259 - accuracy: 0.5950 - val_loss: 1.1434 - val_accuracy: 0.6328
Epoch 15/80
32/32 [==============================] - 2s 60ms/step - loss: 1.1127 - accuracy: 0.6386 - val_loss: 1.1892 - val_accuracy: 0.6523
Epoch 16/80
32/32 [==============================] - 2s 59ms/step - loss: 1.0289 - accuracy: 0.6554 - val_loss: 1.1051 - val_accuracy: 0.6523
Epoch 17/80
32/32 [==============================] - 2s 59ms/step - loss: 1.0216 - accuracy: 0.6416 - val_loss: 0.9111 - val_accuracy: 0.7070
Epoch 18/80
32/32 [==============================] - 2s 59ms/step - loss: 0.9730 - accuracy: 0.6733 - val_loss: 1.1452 - val_accuracy: 0.6250
Epoch 19/80
32/32 [==============================] - 2s 59ms/step - loss: 0.9331 - accuracy: 0.6911 - val_loss: 1.1686 - val_accuracy: 0.6016
Epoch 20/80
32/32 [==============================] - 2s 60ms/step - loss: 0.8527 - accuracy: 0.7061 - val_loss: 1.0939 - val_accuracy: 0.6172
Epoch 21/80
32/32 [==============================] - 2s 59ms/step - loss: 0.8761 - accuracy: 0.7158 - val_loss: 0.9339 - val_accuracy: 0.6797
Epoch 22/80
32/32 [==============================] - 2s 60ms/step - loss: 0.7970 - accuracy: 0.7287 - val_loss: 1.0589 - val_accuracy: 0.6914
Epoch 23/80
32/32 [==============================] - 2s 60ms/step - loss: 0.8455 - accuracy: 0.7277 - val_loss: 1.0644 - val_accuracy: 0.6602
Epoch 24/80
32/32 [==============================] - 2s 60ms/step - loss: 0.8152 - accuracy: 0.7238 - val_loss: 1.1862 - val_accuracy: 0.6641
Epoch 25/80
32/32 [==============================] - 2s 61ms/step - loss: 0.7785 - accuracy: 0.7208 - val_loss: 0.9433 - val_accuracy: 0.7031
Epoch 26/80
32/32 [==============================] - 2s 60ms/step - loss: 0.7563 - accuracy: 0.7475 - val_loss: 1.0235 - val_accuracy: 0.7031
Epoch 27/80
32/32 [==============================] - 2s 60ms/step - loss: 0.7988 - accuracy: 0.7426 - val_loss: 0.9344 - val_accuracy: 0.7070
Epoch 28/80
32/32 [==============================] - 2s 60ms/step - loss: 0.6943 - accuracy: 0.7594 - val_loss: 0.9738 - val_accuracy: 0.6953
Epoch 29/80
32/32 [==============================] - 2s 61ms/step - loss: 0.7155 - accuracy: 0.7594 - val_loss: 0.8775 - val_accuracy: 0.7266
Epoch 30/80
32/32 [==============================] - 2s 60ms/step - loss: 0.8643 - accuracy: 0.7158 - val_loss: 1.0294 - val_accuracy: 0.6836
Epoch 31/80
32/32 [==============================] - 2s 60ms/step - loss: 0.6180 - accuracy: 0.8010 - val_loss: 1.3800 - val_accuracy: 0.6172
Epoch 32/80
32/32 [==============================] - 2s 61ms/step - loss: 0.7554 - accuracy: 0.7515 - val_loss: 0.9653 - val_accuracy: 0.7070
Epoch 33/80
32/32 [==============================] - 2s 60ms/step - loss: 0.5859 - accuracy: 0.7950 - val_loss: 0.9284 - val_accuracy: 0.7109
Epoch 34/80
32/32 [==============================] - 2s 60ms/step - loss: 0.5933 - accuracy: 0.7921 - val_loss: 1.1602 - val_accuracy: 0.6289
Epoch 35/80
32/32 [==============================] - 2s 60ms/step - loss: 0.7204 - accuracy: 0.7634 - val_loss: 1.0010 - val_accuracy: 0.6992
Epoch 36/80
32/32 [==============================] - 2s 60ms/step - loss: 0.7447 - accuracy: 0.7307 - val_loss: 0.9270 - val_accuracy: 0.7227
Epoch 37/80
32/32 [==============================] - 2s 60ms/step - loss: 0.6840 - accuracy: 0.7683 - val_loss: 0.9998 - val_accuracy: 0.6992
Epoch 38/80
32/32 [==============================] - 2s 60ms/step - loss: 0.6541 - accuracy: 0.7782 - val_loss: 0.9047 - val_accuracy: 0.7109
Epoch 39/80
32/32 [==============================] - 2s 60ms/step - loss: 0.5999 - accuracy: 0.8010 - val_loss: 0.9474 - val_accuracy: 0.7383
Epoch 40/80
32/32 [==============================] - 2s 60ms/step - loss: 0.6444 - accuracy: 0.7861 - val_loss: 0.9138 - val_accuracy: 0.7305
Epoch 41/80
32/32 [==============================] - 2s 60ms/step - loss: 0.5287 - accuracy: 0.8139 - val_loss: 0.9966 - val_accuracy: 0.7305
Epoch 42/80
32/32 [==============================] - 2s 62ms/step - loss: 0.5754 - accuracy: 0.7911 - val_loss: 0.9971 - val_accuracy: 0.7188
Epoch 43/80
32/32 [==============================] - 2s 60ms/step - loss: 0.5660 - accuracy: 0.8099 - val_loss: 1.0197 - val_accuracy: 0.7148
Epoch 44/80
32/32 [==============================] - 2s 61ms/step - loss: 0.5659 - accuracy: 0.8089 - val_loss: 0.7458 - val_accuracy: 0.7383
Epoch 45/80
32/32 [==============================] - 2s 60ms/step - loss: 0.5490 - accuracy: 0.8109 - val_loss: 0.7259 - val_accuracy: 0.7695
Epoch 46/80
32/32 [==============================] - 2s 60ms/step - loss: 0.5213 - accuracy: 0.8099 - val_loss: 0.8678 - val_accuracy: 0.7383
Epoch 47/80
32/32 [==============================] - 2s 60ms/step - loss: 0.5883 - accuracy: 0.8010 - val_loss: 1.0750 - val_accuracy: 0.6953
Epoch 48/80
32/32 [==============================] - 2s 60ms/step - loss: 0.5563 - accuracy: 0.8119 - val_loss: 0.7804 - val_accuracy: 0.7656
Epoch 49/80
32/32 [==============================] - 2s 61ms/step - loss: 0.4934 - accuracy: 0.8297 - val_loss: 0.8755 - val_accuracy: 0.7422
Epoch 50/80
32/32 [==============================] - 2s 60ms/step - loss: 0.4670 - accuracy: 0.8426 - val_loss: 0.9172 - val_accuracy: 0.7188
Epoch 51/80
32/32 [==============================] - 2s 59ms/step - loss: 0.4367 - accuracy: 0.8545 - val_loss: 0.8850 - val_accuracy: 0.7422
Epoch 52/80
32/32 [==============================] - 2s 59ms/step - loss: 0.5342 - accuracy: 0.7950 - val_loss: 0.9756 - val_accuracy: 0.7305
Epoch 53/80
32/32 [==============================] - 2s 59ms/step - loss: 0.6626 - accuracy: 0.7832 - val_loss: 0.9179 - val_accuracy: 0.7383
Epoch 54/80
32/32 [==============================] - 2s 59ms/step - loss: 0.6019 - accuracy: 0.7960 - val_loss: 0.8499 - val_accuracy: 0.7383
Epoch 55/80
32/32 [==============================] - 2s 61ms/step - loss: 0.5578 - accuracy: 0.8050 - val_loss: 0.9855 - val_accuracy: 0.7109
Epoch 56/80
32/32 [==============================] - 2s 60ms/step - loss: 0.5674 - accuracy: 0.8059 - val_loss: 0.8304 - val_accuracy: 0.7773
Epoch 57/80
32/32 [==============================] - 2s 61ms/step - loss: 0.5897 - accuracy: 0.8089 - val_loss: 0.7696 - val_accuracy: 0.7539
Epoch 58/80
32/32 [==============================] - 2s 61ms/step - loss: 0.4549 - accuracy: 0.8386 - val_loss: 0.6650 - val_accuracy: 0.7891
Epoch 59/80
32/32 [==============================] - 2s 60ms/step - loss: 0.4637 - accuracy: 0.8505 - val_loss: 0.8659 - val_accuracy: 0.7188
Epoch 60/80
32/32 [==============================] - 2s 61ms/step - loss: 0.4961 - accuracy: 0.8366 - val_loss: 0.9199 - val_accuracy: 0.7188
Epoch 61/80
32/32 [==============================] - 2s 61ms/step - loss: 0.5178 - accuracy: 0.8188 - val_loss: 0.8502 - val_accuracy: 0.7383
Epoch 62/80
32/32 [==============================] - 2s 60ms/step - loss: 0.4216 - accuracy: 0.8485 - val_loss: 0.8488 - val_accuracy: 0.7773
Epoch 63/80
32/32 [==============================] - 2s 60ms/step - loss: 0.5886 - accuracy: 0.8149 - val_loss: 1.0456 - val_accuracy: 0.6680
Epoch 64/80
32/32 [==============================] - 2s 61ms/step - loss: 0.5585 - accuracy: 0.8099 - val_loss: 0.8201 - val_accuracy: 0.7539
Epoch 65/80
32/32 [==============================] - 2s 60ms/step - loss: 0.4093 - accuracy: 0.8634 - val_loss: 0.7294 - val_accuracy: 0.7500
Epoch 66/80
32/32 [==============================] - 2s 60ms/step - loss: 0.3875 - accuracy: 0.8545 - val_loss: 0.8637 - val_accuracy: 0.7344
Epoch 67/80
32/32 [==============================] - 2s 61ms/step - loss: 0.4394 - accuracy: 0.8485 - val_loss: 0.8763 - val_accuracy: 0.7695
Epoch 68/80
32/32 [==============================] - 2s 60ms/step - loss: 0.4396 - accuracy: 0.8416 - val_loss: 0.7573 - val_accuracy: 0.7734
Epoch 69/80
32/32 [==============================] - 2s 60ms/step - loss: 0.3869 - accuracy: 0.8653 - val_loss: 0.7817 - val_accuracy: 0.7266
Epoch 70/80
32/32 [==============================] - 2s 61ms/step - loss: 0.3788 - accuracy: 0.8703 - val_loss: 0.7880 - val_accuracy: 0.7969
Epoch 71/80
32/32 [==============================] - 2s 61ms/step - loss: 0.3370 - accuracy: 0.8851 - val_loss: 0.8404 - val_accuracy: 0.7305
Epoch 72/80
32/32 [==============================] - 2s 60ms/step - loss: 0.4235 - accuracy: 0.8545 - val_loss: 0.9113 - val_accuracy: 0.7227
Epoch 73/80
32/32 [==============================] - 2s 60ms/step - loss: 0.4061 - accuracy: 0.8455 - val_loss: 1.2877 - val_accuracy: 0.6758
Epoch 74/80
32/32 [==============================] - 2s 60ms/step - loss: 0.4463 - accuracy: 0.8574 - val_loss: 0.8312 - val_accuracy: 0.7930
Epoch 75/80
32/32 [==============================] - 2s 60ms/step - loss: 0.3844 - accuracy: 0.8663 - val_loss: 0.8555 - val_accuracy: 0.7734
Epoch 76/80
32/32 [==============================] - 2s 61ms/step - loss: 0.5735 - accuracy: 0.8168 - val_loss: 0.8001 - val_accuracy: 0.7891
Epoch 77/80
32/32 [==============================] - 2s 60ms/step - loss: 0.3745 - accuracy: 0.8733 - val_loss: 0.8405 - val_accuracy: 0.7617
Epoch 78/80
32/32 [==============================] - 2s 59ms/step - loss: 0.3947 - accuracy: 0.8634 - val_loss: 0.9272 - val_accuracy: 0.7695
Epoch 79/80
32/32 [==============================] - 2s 60ms/step - loss: 0.4353 - accuracy: 0.8733 - val_loss: 0.8038 - val_accuracy: 0.7578
Epoch 80/80
32/32 [==============================] - 2s 60ms/step - loss: 0.3395 - accuracy: 0.8832 - val_loss: 0.7148 - val_accuracy: 0.7930

Plot accuracy and loss graphs to evaluate the performance of our model.

In [22]:
plt.plot(m.history['loss'])
plt.plot(m.history['val_loss'])
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.title('Model loss')
plt.legend(['train', 'validation'])
plt.show()

plt.plot(m.history['accuracy'])
plt.plot(m.history['val_accuracy'])
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.title('Model accuracy')
plt.legend(['train', 'validation'])
plt.show()

Read the images and labels from the test set.

In [23]:
# Same reader as before, but walking the test folders
def read_test_images():
    Images, Labels = [], []
    for root, dirs, files in os.walk('gemstones/test/'):
        f = os.path.basename(root)                          # folder name is the class label
        for file in files:
            try:
                image = cv2.imread(os.path.join(root, file))    # read the image (OpenCV)
                image = cv2.resize(image, (img_w, img_h))       # resize (images come in different sizes)
                image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # convert from BGR to RGB color space
                Images.append(image)
                Labels.append(f)    # append the label only if the image loaded successfully
            except Exception as e:
                print(e)
    Images = np.array(Images)
    return (Images, Labels)
In [24]:
Test_Imgs, Test_Lbls = read_test_images()
Test_Lbls = get_class_index(Test_Lbls)

Make predictions on random images from the test set.

In [25]:
f, ax = plt.subplots(5, 5)
f.subplots_adjust(0, 0, 2, 2)
for i in range(5):
    for j in range(5):
        rnd_number = randint(0, len(Test_Imgs) - 1)
        pred_image = np.array([Test_Imgs[rnd_number]])
        # predict_classes is deprecated; take the argmax of the softmax output instead
        pred_class = np.argmax(model.predict(pred_image), axis=-1)[0]
        actual = Name[Test_Lbls[rnd_number]]
        ax[i, j].imshow(pred_image[0])
        if Name[pred_class] != actual:
            t = '{} [{}]'.format(Name[pred_class], actual)
            ax[i, j].set_title(t, fontdict={'color': 'darkred'})
        else:
            t = '[OK] {}'.format(Name[pred_class])
            ax[i, j].set_title(t)
        ax[i, j].axis('off')

Images marked [OK] were classified correctly; for misclassifications, the predicted label is shown with the true label in brackets.
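
Beyond this visual spot check, the sklearn metrics imported at the start could score the whole test set (a minimal sketch, not part of the original run):

pred_classes = np.argmax(model.predict(Test_Imgs), axis=-1)
print('Test accuracy: {:.3f}'.format(accuracy_score(Test_Lbls, pred_classes)))
print(classification_report(Test_Lbls, pred_classes,
                            labels=list(range(len(Name))), target_names=Name,
                            zero_division=0))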

In [26]:
model.save('saved_models/gemstone.tf')
WARNING:tensorflow:From /opt/tljh/user/lib/python3.7/site-packages/tensorflow/python/training/tracking/tracking.py:111: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
WARNING:tensorflow:From /opt/tljh/user/lib/python3.7/site-packages/tensorflow/python/training/tracking/tracking.py:111: Layer.updates (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
INFO:tensorflow:Assets written to: saved_models/gemstone.tf/assets
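
As a quick sanity check before compiling with deepCC, the saved model could be reloaded and queried (a minimal sketch, not part of the original notebook):

reloaded = keras.models.load_model('saved_models/gemstone.tf')
probs = reloaded.predict(np.expand_dims(Test_Imgs[0], 0))
print('Predicted:', r_gems_map[int(np.argmax(probs))])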

DeepCC

In [27]:
!deepCC 'saved_models/gemstone.tf'
[INFO]
Reading [tensorflow model] 'saved_models/gemstone.tf'
[SUCCESS]
Saved 'gemstone_deepC/gemstone.tf.onnx'
[INFO]
Reading [onnx model] 'gemstone_deepC/gemstone.tf.onnx'
[INFO]
Model info:
  ir_vesion : 4
  doc       : 
[WARNING]
[ONNX]: terminal (input/output) conv2d_input_0's shape is less than 1. Changing it to 1.
[WARNING]
[ONNX]: terminal (input/output) Identity_0's shape is less than 1. Changing it to 1.
[INFO]
Running DNNC graph sanity check ...
[SUCCESS]
Passed sanity check.
[INFO]
Writing C++ file 'gemstone_deepC/gemstone.cpp'
[INFO]
deepSea model files are ready in 'gemstone_deepC/' 
[RUNNING COMMAND]
g++ -std=c++11 -O3 -fno-rtti -fno-exceptions -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -isystem /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 "gemstone_deepC/gemstone.cpp" -D_AITS_MAIN -o "gemstone_deepC/gemstone.exe"
[RUNNING COMMAND]
size "gemstone_deepC/gemstone.exe"
   text	   data	    bss	    dec	    hex	filename
 975231	   3968	    760	 979959	  ef3f7	gemstone_deepC/gemstone.exe
[SUCCESS]
Saved model as executable "gemstone_deepC/gemstone.exe"
In [ ]: