Cainvas

Marble Defect Classifier

Credit: AITS Cainvas Community

Photo by Beethowen Souza on Dribbble

  • Detecting defects in marble is an example of how deep learning can be applied to industrial businesses such as limestone manufacturers. Fragile material suffers a lot of damage during processing, and electricity spent on already-wasted material is pure waste.
  • The model reduces in-process errors and adds an intermediate layer of quality inspection.
  • Image dimensions: 256x256.
  • There are four classes of deformation.
In [1]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tensorflow import keras as ks
import os
import cv2
In [2]:
!wget https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/marble.zip
--2021-07-14 15:34:46--  https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/marble.zip
Resolving cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)... 52.219.160.51
Connecting to cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)|52.219.160.51|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 38558013 (37M) [application/x-zip-compressed]
Saving to: ‘marble.zip.4’

marble.zip.4        100%[===================>]  36.77M   103MB/s    in 0.4s    

2021-07-14 15:34:47 (103 MB/s) - ‘marble.zip.4’ saved [38558013/38558013]

In [3]:
!unzip -qo marble.zip
In [4]:
train_dir = "dataset/train/"
test_dir = "dataset/test/"
In [5]:
def get_image(path):
    img = cv2.imread(path)                      # OpenCV loads images as BGR
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # convert to RGB for matplotlib
    print(img.shape)
    plt.imshow(img)
In [6]:
image = "dataset/test/crack/_0_0_20210531_17292_0.jpg"
In [7]:
get_image(image)
(256, 256, 3)

Create Image Data Generator

In [8]:
def datapreprocessing(main_dir, bsize):
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    # Note: the same augmentations are applied to both the train and test
    # directories below; ideally a test generator would only rescale.
    train_gen = ImageDataGenerator(rescale=1.0/255,
                                   zoom_range=0.2,
                                   shear_range=0.1,
                                   horizontal_flip=True,
                                   vertical_flip=True,
                                   rotation_range=20,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   fill_mode='nearest',
                                  )

    train_generator = train_gen.flow_from_directory(
        directory=main_dir,
        target_size=(48, 48),   # images are downscaled from 256x256 to 48x48
        batch_size=bsize,
        color_mode="rgb",
        shuffle=True,
        class_mode='categorical')

    return train_generator
In [9]:
traingen = datapreprocessing(train_dir,20)
validgen = datapreprocessing(test_dir,20)
Found 2249 images belonging to 4 classes.
Found 688 images belonging to 4 classes.
In [10]:
labelnames = traingen.class_indices
labelnames
Out[10]:
{'crack': 0, 'dot': 1, 'good': 2, 'joint': 3}
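For decoding predictions later, this mapping can be inverted so a predicted class index maps back to its label. A minimal sketch using the dictionary shown above:

```python
# class_indices as reported by the generator above
labelnames = {'crack': 0, 'dot': 1, 'good': 2, 'joint': 3}

# Invert it: predicted index -> human-readable label
index_to_label = {v: k for k, v in labelnames.items()}
print(index_to_label[2])  # -> good
```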
In [11]:
#Function that builds a dataframe of image paths and labels for a folder.
def getdata(folder_path):
    rows = []
    for label in labelnames:
        label_dir = os.path.join(folder_path, label)
        for image in os.listdir(label_dir):
            # record the image path together with its label
            rows.append({'image_abs_path': os.path.join(label_dir, image),
                         'image_labels': label})
    return pd.DataFrame(rows, columns=['image_abs_path', 'image_labels'])
In [12]:
#Create a validation dataframe as a repository of paths and labels.
valid = getdata(test_dir)
In [13]:
valid
Out[13]:
image_abs_path image_labels
0 dataset/test//crack/_3072_1536_20210531_17292_... crack
1 dataset/test//crack/_512_4096_20210531_17293_0... crack
2 dataset/test//crack/_1792_1280_20210531_10554.jpg crack
3 dataset/test//crack/_512_2816_20210531_11271_2... crack
4 dataset/test//crack/_3072_1962_20210531_17292_... crack
... ... ...
683 dataset/test//joint/_1024_256_20210531_10553.jpg joint
684 dataset/test//joint/_1792_2304_20210531_11274_... joint
685 dataset/test//joint/_768_1024_20210531_10552.jpg joint
686 dataset/test//joint/_1024_4096_20210525_15441_... joint
687 dataset/test//joint/_512_512_20210525_13375_0.jpg joint

688 rows × 2 columns

In [14]:
# Fetch the first n images of a given label from a dataframe
def get_n_images(n, df, label):
    import warnings
    warnings.filterwarnings('ignore')
    train = df[df["image_labels"] == label]
    print(len(train))
    cols = int(np.ceil(n / 2))  # subplot column count must be an integer
    plt.figure(figsize=(12, 6))
    for i, path in enumerate(train['image_abs_path'][0:n]):
        plt.subplot(2, cols, i + 1)
        get_image(path)
    plt.tight_layout()
    plt.show()
In [15]:
def visualize_gen(train_generator):
    #Visualise the first image from ten augmented batches
    plt.figure(figsize=(6, 3))
    for i in range(10):
        plt.subplot(2, 5, i + 1)
        X_batch, Y_batch = next(train_generator)
        image = X_batch[0]
        plt.axis("off")
        plt.imshow((image * 255).astype(np.uint8))
    plt.tight_layout()
    plt.show()
In [16]:
visualize_gen(traingen)
In [17]:
input_shape = traingen.image_shape
input_shape
Out[17]:
(48, 48, 3)

Build Model's Architecture

In [18]:
def imageclf2(input_shape):
    from tensorflow import keras as ks
    #from tensorflow.keras import regularizers
    model = ks.models.Sequential()
    #building architecture
    #Adding layers
    model.add(ks.layers.Conv2D(8,(3,3),
                               strides=1,
                               activation="relu",
                               padding='same',
                               name="layer1",
                               input_shape=input_shape))
    model.add(ks.layers.MaxPooling2D(pool_size=2,strides=2))
    model.add(ks.layers.Dropout(0.2))
    model.add(ks.layers.Conv2D(8,(3,3),strides=1,padding="same",activation="relu",name="layer2"))
    model.add(ks.layers.MaxPooling2D(pool_size=2,strides=2))

    
    model.add(ks.layers.Flatten())
    model.add(ks.layers.Dense(128,activation="relu",
                              name="layer5"))
    model.add(ks.layers.Dropout(0.2))
    
    model.add(ks.layers.Dense(4,activation="softmax",
                              name="output"))
    model.summary()
    
    return model

Build the Compiler

In [19]:
def compiler2(model,train_generator,valid_generator,epchs,bsize=32,lr=0.0001):

    from tensorflow import keras as ks
    callbck = ks.callbacks.EarlyStopping(monitor='val_loss',patience=10,
                                         verbose=2,
                                         restore_best_weights=True,) 
    
    opt = ks.optimizers.Adam(learning_rate=lr)
    
    model.compile(loss="categorical_crossentropy",
                      optimizer=opt,
                      metrics=["accuracy"])
    history = model.fit(train_generator,
                        epochs=epchs,
                        callbacks=[callbck],
                        validation_data=valid_generator,
                        verbose = 1,
                        #steps_per_epoch = train_generator.n // bsize
                       )
    #Visualise curves
    plt.plot(history.history['accuracy'], label='train_acc')
    plt.plot(history.history['val_accuracy'], label='valid_acc')

    plt.title('lrate='+str(lr), pad=-50)
    plt.legend()
    plt.grid(True)
    return model,history

Fit Model and Evaluate

In [20]:
model01 = imageclf2(input_shape)
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
layer1 (Conv2D)              (None, 48, 48, 8)         224       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 24, 24, 8)         0         
_________________________________________________________________
dropout (Dropout)            (None, 24, 24, 8)         0         
_________________________________________________________________
layer2 (Conv2D)              (None, 24, 24, 8)         584       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 12, 12, 8)         0         
_________________________________________________________________
flatten (Flatten)            (None, 1152)              0         
_________________________________________________________________
layer5 (Dense)               (None, 128)               147584    
_________________________________________________________________
dropout_1 (Dropout)          (None, 128)               0         
_________________________________________________________________
output (Dense)               (None, 4)                 516       
=================================================================
Total params: 148,908
Trainable params: 148,908
Non-trainable params: 0
_________________________________________________________________
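The parameter counts in the summary can be verified by hand: a Conv2D layer has `kh * kw * in_channels * filters` weights plus one bias per filter, and a Dense layer has `in_features * units` weights plus one bias per unit. A quick arithmetic sanity check:

```python
# Conv2D params = kh*kw*in_ch*filters + filters (bias)
layer1 = 3*3*3*8 + 8         # 224
layer2 = 3*3*8*8 + 8         # 584
# Dense params = in_features*units + units (bias)
layer5 = 12*12*8 * 128 + 128 # 147584 (flattened 12x12x8 = 1152 inputs)
output = 128*4 + 4           # 516
total = layer1 + layer2 + layer5 + output
print(total)  # 148908, matching the summary
```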
In [21]:
model_com01 = compiler2(model01,traingen,validgen,100)
Epoch 1/100
113/113 [==============================] - 5s 43ms/step - loss: 1.1903 - accuracy: 0.4135 - val_loss: 1.1657 - val_accuracy: 0.3503
Epoch 2/100
113/113 [==============================] - 5s 42ms/step - loss: 1.1690 - accuracy: 0.4211 - val_loss: 1.1670 - val_accuracy: 0.3547
Epoch 3/100
113/113 [==============================] - 5s 42ms/step - loss: 1.1683 - accuracy: 0.4122 - val_loss: 1.1810 - val_accuracy: 0.5480
Epoch 4/100
113/113 [==============================] - 5s 42ms/step - loss: 1.1686 - accuracy: 0.4082 - val_loss: 1.1900 - val_accuracy: 0.3576
Epoch 5/100
113/113 [==============================] - 5s 42ms/step - loss: 1.1596 - accuracy: 0.4073 - val_loss: 1.1641 - val_accuracy: 0.3576
Epoch 6/100
113/113 [==============================] - 5s 42ms/step - loss: 1.1599 - accuracy: 0.4237 - val_loss: 1.1758 - val_accuracy: 0.3576
Epoch 7/100
113/113 [==============================] - 5s 42ms/step - loss: 1.1639 - accuracy: 0.4300 - val_loss: 1.1822 - val_accuracy: 0.3576
Epoch 8/100
113/113 [==============================] - 5s 42ms/step - loss: 1.1558 - accuracy: 0.4415 - val_loss: 1.1757 - val_accuracy: 0.3576
Epoch 9/100
113/113 [==============================] - 5s 42ms/step - loss: 1.1552 - accuracy: 0.4349 - val_loss: 1.1768 - val_accuracy: 0.3576
Epoch 10/100
113/113 [==============================] - 5s 42ms/step - loss: 1.1549 - accuracy: 0.4273 - val_loss: 1.1589 - val_accuracy: 0.3677
Epoch 11/100
113/113 [==============================] - 5s 42ms/step - loss: 1.1485 - accuracy: 0.4384 - val_loss: 1.2189 - val_accuracy: 0.3576
Epoch 12/100
113/113 [==============================] - 5s 42ms/step - loss: 1.1477 - accuracy: 0.4406 - val_loss: 1.1844 - val_accuracy: 0.3299
Epoch 13/100
113/113 [==============================] - 5s 42ms/step - loss: 1.1424 - accuracy: 0.4357 - val_loss: 1.1831 - val_accuracy: 0.3547
Epoch 14/100
113/113 [==============================] - 5s 42ms/step - loss: 1.1342 - accuracy: 0.4620 - val_loss: 1.2019 - val_accuracy: 0.5378
Epoch 15/100
113/113 [==============================] - 5s 42ms/step - loss: 1.1266 - accuracy: 0.4549 - val_loss: 1.1858 - val_accuracy: 0.5334
Epoch 16/100
113/113 [==============================] - 5s 42ms/step - loss: 1.1112 - accuracy: 0.4833 - val_loss: 1.1446 - val_accuracy: 0.5422
Epoch 17/100
113/113 [==============================] - 5s 42ms/step - loss: 1.1152 - accuracy: 0.4580 - val_loss: 1.1508 - val_accuracy: 0.5552
Epoch 18/100
113/113 [==============================] - 5s 42ms/step - loss: 1.0932 - accuracy: 0.4900 - val_loss: 1.2063 - val_accuracy: 0.5407
Epoch 19/100
113/113 [==============================] - 5s 42ms/step - loss: 1.0811 - accuracy: 0.5020 - val_loss: 1.1574 - val_accuracy: 0.5610
Epoch 20/100
113/113 [==============================] - 5s 42ms/step - loss: 1.0641 - accuracy: 0.5171 - val_loss: 1.1624 - val_accuracy: 0.5756
Epoch 21/100
113/113 [==============================] - 5s 42ms/step - loss: 1.0550 - accuracy: 0.5402 - val_loss: 1.1284 - val_accuracy: 0.5581
Epoch 22/100
113/113 [==============================] - 5s 42ms/step - loss: 1.0245 - accuracy: 0.5776 - val_loss: 1.1110 - val_accuracy: 0.5741
Epoch 23/100
113/113 [==============================] - 5s 42ms/step - loss: 0.9946 - accuracy: 0.6069 - val_loss: 1.0381 - val_accuracy: 0.6017
Epoch 24/100
113/113 [==============================] - 5s 42ms/step - loss: 0.9512 - accuracy: 0.6461 - val_loss: 1.0300 - val_accuracy: 0.5887
Epoch 25/100
113/113 [==============================] - 5s 42ms/step - loss: 0.9211 - accuracy: 0.6607 - val_loss: 0.9602 - val_accuracy: 0.6163
Epoch 26/100
113/113 [==============================] - 5s 42ms/step - loss: 0.9114 - accuracy: 0.6514 - val_loss: 0.9349 - val_accuracy: 0.6483
Epoch 27/100
113/113 [==============================] - 5s 42ms/step - loss: 0.9000 - accuracy: 0.6527 - val_loss: 0.9066 - val_accuracy: 0.6555
Epoch 28/100
113/113 [==============================] - 5s 42ms/step - loss: 0.8789 - accuracy: 0.6727 - val_loss: 0.8703 - val_accuracy: 0.6759
Epoch 29/100
113/113 [==============================] - 5s 42ms/step - loss: 0.8767 - accuracy: 0.6754 - val_loss: 0.8853 - val_accuracy: 0.6657
Epoch 30/100
113/113 [==============================] - 5s 42ms/step - loss: 0.8515 - accuracy: 0.6856 - val_loss: 0.8795 - val_accuracy: 0.6628
Epoch 31/100
113/113 [==============================] - 5s 42ms/step - loss: 0.8555 - accuracy: 0.6821 - val_loss: 0.8755 - val_accuracy: 0.6657
Epoch 32/100
113/113 [==============================] - 5s 42ms/step - loss: 0.8488 - accuracy: 0.6874 - val_loss: 0.8566 - val_accuracy: 0.6831
Epoch 33/100
113/113 [==============================] - 5s 42ms/step - loss: 0.8383 - accuracy: 0.6972 - val_loss: 0.8748 - val_accuracy: 0.6628
Epoch 34/100
113/113 [==============================] - 5s 42ms/step - loss: 0.8252 - accuracy: 0.6976 - val_loss: 0.8637 - val_accuracy: 0.6817
Epoch 35/100
113/113 [==============================] - 5s 42ms/step - loss: 0.8281 - accuracy: 0.6870 - val_loss: 0.8178 - val_accuracy: 0.7093
Epoch 36/100
113/113 [==============================] - 5s 42ms/step - loss: 0.8256 - accuracy: 0.6941 - val_loss: 0.8429 - val_accuracy: 0.6846
Epoch 37/100
113/113 [==============================] - 5s 42ms/step - loss: 0.8245 - accuracy: 0.6968 - val_loss: 0.8135 - val_accuracy: 0.7093
Epoch 38/100
113/113 [==============================] - 5s 42ms/step - loss: 0.8177 - accuracy: 0.7030 - val_loss: 0.8379 - val_accuracy: 0.6846
Epoch 39/100
113/113 [==============================] - 5s 42ms/step - loss: 0.8029 - accuracy: 0.7056 - val_loss: 0.8002 - val_accuracy: 0.7209
Epoch 40/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7984 - accuracy: 0.7181 - val_loss: 0.7963 - val_accuracy: 0.7209
Epoch 41/100
113/113 [==============================] - 5s 42ms/step - loss: 0.8039 - accuracy: 0.7123 - val_loss: 0.8056 - val_accuracy: 0.7209
Epoch 42/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7944 - accuracy: 0.7105 - val_loss: 0.7945 - val_accuracy: 0.7093
Epoch 43/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7904 - accuracy: 0.7119 - val_loss: 0.7677 - val_accuracy: 0.7384
Epoch 44/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7848 - accuracy: 0.7225 - val_loss: 0.7877 - val_accuracy: 0.7209
Epoch 45/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7933 - accuracy: 0.7190 - val_loss: 0.7628 - val_accuracy: 0.7369
Epoch 46/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7887 - accuracy: 0.7199 - val_loss: 0.7631 - val_accuracy: 0.7442
Epoch 47/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7830 - accuracy: 0.7199 - val_loss: 0.7732 - val_accuracy: 0.7253
Epoch 48/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7705 - accuracy: 0.7288 - val_loss: 0.7581 - val_accuracy: 0.7297
Epoch 49/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7823 - accuracy: 0.7234 - val_loss: 0.7776 - val_accuracy: 0.7209
Epoch 50/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7719 - accuracy: 0.7248 - val_loss: 0.7721 - val_accuracy: 0.7195
Epoch 51/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7734 - accuracy: 0.7270 - val_loss: 0.7445 - val_accuracy: 0.7485
Epoch 52/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7706 - accuracy: 0.7270 - val_loss: 0.7587 - val_accuracy: 0.7355
Epoch 53/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7650 - accuracy: 0.7270 - val_loss: 0.7605 - val_accuracy: 0.7384
Epoch 54/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7576 - accuracy: 0.7305 - val_loss: 0.7414 - val_accuracy: 0.7558
Epoch 55/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7617 - accuracy: 0.7354 - val_loss: 0.7424 - val_accuracy: 0.7544
Epoch 56/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7648 - accuracy: 0.7274 - val_loss: 0.7426 - val_accuracy: 0.7500
Epoch 57/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7419 - accuracy: 0.7403 - val_loss: 0.7228 - val_accuracy: 0.7456
Epoch 58/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7570 - accuracy: 0.7337 - val_loss: 0.7373 - val_accuracy: 0.7515
Epoch 59/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7518 - accuracy: 0.7350 - val_loss: 0.7481 - val_accuracy: 0.7326
Epoch 60/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7398 - accuracy: 0.7386 - val_loss: 0.7528 - val_accuracy: 0.7282
Epoch 61/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7567 - accuracy: 0.7323 - val_loss: 0.7301 - val_accuracy: 0.7500
Epoch 62/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7511 - accuracy: 0.7328 - val_loss: 0.7276 - val_accuracy: 0.7413
Epoch 63/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7429 - accuracy: 0.7341 - val_loss: 0.7354 - val_accuracy: 0.7297
Epoch 64/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7480 - accuracy: 0.7354 - val_loss: 0.7172 - val_accuracy: 0.7427
Epoch 65/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7367 - accuracy: 0.7381 - val_loss: 0.7179 - val_accuracy: 0.7442
Epoch 66/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7322 - accuracy: 0.7474 - val_loss: 0.7082 - val_accuracy: 0.7558
Epoch 67/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7407 - accuracy: 0.7345 - val_loss: 0.7021 - val_accuracy: 0.7558
Epoch 68/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7242 - accuracy: 0.7474 - val_loss: 0.7146 - val_accuracy: 0.7485
Epoch 69/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7326 - accuracy: 0.7381 - val_loss: 0.7055 - val_accuracy: 0.7544
Epoch 70/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7398 - accuracy: 0.7372 - val_loss: 0.7096 - val_accuracy: 0.7602
Epoch 71/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7327 - accuracy: 0.7417 - val_loss: 0.7164 - val_accuracy: 0.7427
Epoch 72/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7180 - accuracy: 0.7439 - val_loss: 0.7123 - val_accuracy: 0.7587
Epoch 73/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7192 - accuracy: 0.7470 - val_loss: 0.7226 - val_accuracy: 0.7456
Epoch 74/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7207 - accuracy: 0.7434 - val_loss: 0.7195 - val_accuracy: 0.7558
Epoch 75/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7159 - accuracy: 0.7466 - val_loss: 0.7160 - val_accuracy: 0.7573
Epoch 76/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7256 - accuracy: 0.7457 - val_loss: 0.6949 - val_accuracy: 0.7689
Epoch 77/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7232 - accuracy: 0.7412 - val_loss: 0.7100 - val_accuracy: 0.7645
Epoch 78/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7281 - accuracy: 0.7430 - val_loss: 0.7080 - val_accuracy: 0.7529
Epoch 79/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7288 - accuracy: 0.7430 - val_loss: 0.6974 - val_accuracy: 0.7544
Epoch 80/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7129 - accuracy: 0.7461 - val_loss: 0.6991 - val_accuracy: 0.7602
Epoch 81/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7163 - accuracy: 0.7461 - val_loss: 0.7208 - val_accuracy: 0.7587
Epoch 82/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7175 - accuracy: 0.7461 - val_loss: 0.7183 - val_accuracy: 0.7587
Epoch 83/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7230 - accuracy: 0.7452 - val_loss: 0.7064 - val_accuracy: 0.7515
Epoch 84/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7135 - accuracy: 0.7457 - val_loss: 0.7073 - val_accuracy: 0.7529
Epoch 85/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7051 - accuracy: 0.7488 - val_loss: 0.6813 - val_accuracy: 0.7762
Epoch 86/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7148 - accuracy: 0.7479 - val_loss: 0.7047 - val_accuracy: 0.7645
Epoch 87/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7084 - accuracy: 0.7430 - val_loss: 0.6940 - val_accuracy: 0.7645
Epoch 88/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7064 - accuracy: 0.7563 - val_loss: 0.7116 - val_accuracy: 0.7805
Epoch 89/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7166 - accuracy: 0.7452 - val_loss: 0.7027 - val_accuracy: 0.7587
Epoch 90/100
113/113 [==============================] - 5s 42ms/step - loss: 0.6979 - accuracy: 0.7546 - val_loss: 0.6755 - val_accuracy: 0.7791
Epoch 91/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7019 - accuracy: 0.7603 - val_loss: 0.6895 - val_accuracy: 0.7674
Epoch 92/100
113/113 [==============================] - 5s 42ms/step - loss: 0.6821 - accuracy: 0.7594 - val_loss: 0.6728 - val_accuracy: 0.7791
Epoch 93/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7012 - accuracy: 0.7523 - val_loss: 0.6882 - val_accuracy: 0.7747
Epoch 94/100
113/113 [==============================] - 5s 42ms/step - loss: 0.6961 - accuracy: 0.7546 - val_loss: 0.6662 - val_accuracy: 0.7689
Epoch 95/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7140 - accuracy: 0.7412 - val_loss: 0.6831 - val_accuracy: 0.7674
Epoch 96/100
113/113 [==============================] - 5s 42ms/step - loss: 0.6897 - accuracy: 0.7581 - val_loss: 0.6842 - val_accuracy: 0.7631
Epoch 97/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7059 - accuracy: 0.7492 - val_loss: 0.6784 - val_accuracy: 0.7631
Epoch 98/100
113/113 [==============================] - 5s 42ms/step - loss: 0.6954 - accuracy: 0.7590 - val_loss: 0.6960 - val_accuracy: 0.7834
Epoch 99/100
113/113 [==============================] - 5s 42ms/step - loss: 0.7077 - accuracy: 0.7514 - val_loss: 0.6821 - val_accuracy: 0.7747
Epoch 100/100
113/113 [==============================] - 5s 42ms/step - loss: 0.6951 - accuracy: 0.7523 - val_loss: 0.7061 - val_accuracy: 0.7791
In [22]:
#Visualise loss curves
history = model_com01[1]
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.legend()
plt.grid()
plt.show()

Get predictions and visualise the output.

In [23]:
def get_predictions(n):
    image1 = validgen[0][0][n]
    plt.imshow(image1)
    input_arr = ks.preprocessing.image.img_to_array(image1)
    input_arr = np.array([input_arr])  # convert a single image to a batch
    # Note: predict_classes is deprecated; in newer TF versions use
    # np.argmax(model.predict(input_arr), axis=-1) instead.
    predictions = model_com01[0].predict_classes(input_arr)
    return predictions
In [24]:
get_predictions(11)
WARNING:tensorflow:From <ipython-input-23-2b586d755c13>:8: Sequential.predict_classes (from tensorflow.python.keras.engine.sequential) is deprecated and will be removed after 2021-01-01.
Instructions for updating:
Please use instead:* `np.argmax(model.predict(x), axis=-1)`,   if your model does multi-class classification   (e.g. if it uses a `softmax` last-layer activation).* `(model.predict(x) > 0.5).astype("int32")`,   if your model does binary classification   (e.g. if it uses a `sigmoid` last-layer activation).
Out[24]:
array([2])
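As the deprecation warning notes, `predict_classes` can be replaced by `model.predict` plus `np.argmax`. A minimal, hedged sketch of the equivalent helper (the `model` and `batch` names are illustrative):

```python
import numpy as np

# Equivalent of the deprecated Sequential.predict_classes for a softmax
# classifier: take the argmax over the class axis of the probabilities.
def predict_class_indices(model, batch):
    probs = model.predict(batch)
    return np.argmax(probs, axis=-1)
```

Applied to the batch above, this yields the same index 2, which corresponds to 'good' in class_indices.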
In [25]:
# Fetch the first n images of a given label from a dataframe
def get_n_images(n, df, label):
    import warnings
    warnings.filterwarnings('ignore')
    train = df[df["image_labels"] == label]
    print(len(train))
    cols = int(np.ceil(n / 2))  # subplot column count must be an integer
    plt.figure(figsize=(12, 6))
    for i, path in enumerate(train['image_abs_path'][0:n]):
        plt.subplot(2, cols, i + 1)
        get_image(path)
    plt.tight_layout()
    plt.show()
In [26]:
#Visualise sample images labelled "good"
get_n_images(6,valid,"good")
340
(256, 256, 3)
(256, 256, 3)
(256, 256, 3)
(256, 256, 3)
(256, 256, 3)
(256, 256, 3)
  • The model predicted class 2 ("good"), i.e. no deformation.

Save the model!

In [27]:
# save the model to disk
model = model_com01[0]
model.save('saved_models/MarbleModel.tf')
WARNING:tensorflow:From /opt/tljh/user/lib/python3.7/site-packages/tensorflow/python/training/tracking/tracking.py:111: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
WARNING:tensorflow:From /opt/tljh/user/lib/python3.7/site-packages/tensorflow/python/training/tracking/tracking.py:111: Layer.updates (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
INFO:tensorflow:Assets written to: saved_models/MarbleModel.tf/assets

DeepCC

In [28]:
!deepCC 'saved_models/MarbleModel.tf'
[INFO]
Reading [tensorflow model] 'saved_models/MarbleModel.tf'
[SUCCESS]
Saved 'MarbleModel_deepC/MarbleModel.tf.onnx'
[INFO]
Reading [onnx model] 'MarbleModel_deepC/MarbleModel.tf.onnx'
[INFO]
Model info:
  ir_vesion : 4
  doc       : 
[WARNING]
[ONNX]: terminal (input/output) layer1_input_0's shape is less than 1. Changing it to 1.
[WARNING]
[ONNX]: terminal (input/output) Identity_0's shape is less than 1. Changing it to 1.
[INFO]
Running DNNC graph sanity check ...
[SUCCESS]
Passed sanity check.
[INFO]
Writing C++ file 'MarbleModel_deepC/MarbleModel.cpp'
[INFO]
deepSea model files are ready in 'MarbleModel_deepC/' 
[RUNNING COMMAND]
g++ -std=c++11 -O3 -fno-rtti -fno-exceptions -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -isystem /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 "MarbleModel_deepC/MarbleModel.cpp" -D_AITS_MAIN -o "MarbleModel_deepC/MarbleModel.exe"
[RUNNING COMMAND]
size "MarbleModel_deepC/MarbleModel.exe"
   text	   data	    bss	    dec	    hex	filename
 768653	   3976	    760	 773389	  bcd0d	MarbleModel_deepC/MarbleModel.exe
[SUCCESS]
Saved model as executable "MarbleModel_deepC/MarbleModel.exe"
In [ ]: