
Quality inspection of manufactured products

Credit: AITS Cainvas Community

Photo by Yulia Yu on Dribbble

Deep learning can be used to automate the inspection process of manufactured goods.

It is crucial to identify defective products, both to maintain the product's reputation in the market and, in some cases, for safety reasons. Deep learning helps identify defective units with high accuracy.

The output of the model can be used to trigger a mechanism that separates defective products from non-defective ones.
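
As an illustration only, here is a minimal sketch of how such a trigger might look; the reject_part callable, the defect_class argument, and the 0.5 threshold are assumptions, not part of this notebook:

import numpy as np

REJECT_THRESHOLD = 0.5   # assumed probability cut-off for rejecting a part

def route_product(model, image, class_names, defect_class, reject_part):
    # Classify one image and call the (hypothetical) actuator when it looks defective
    probs = model.predict(np.expand_dims(image, 0))[0]
    defect_prob = probs[class_names.index(defect_class)]
    if defect_prob > REJECT_THRESHOLD:
        reject_part()   # e.g. divert the part off the conveyor
    return defect_prob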


The notebook differentiates between defective and non-defective casting products.

A casting defect is an undesired irregularity in the metal casting process.

There are many types of casting defects, such as blow holes, pinholes, burrs, shrinkage defects, mould material defects, pouring metal defects, and metallurgical defects.

In [10]:
import numpy as np
import os
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras import layers
import random
from sklearn.metrics import confusion_matrix

Downloading the dataset

Source of dataset - Kaggle

In [11]:
#!wget -N "https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/casting_data.zip"
#!unzip -q casting_data.zip

The dataset has two folders - train and test, each with subfolders - def_front (defective) and ok_front (non-defective).
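
A quick sanity check of this layout can be done before building the datasets; a minimal sketch, assuming the archive unpacked into casting_data/:

from pathlib import Path

# Count the files in each split/class folder described above
for split in ('train', 'test'):
    for cls in ('def_front', 'ok_front'):
        n = len(list(Path('casting_data', split, cls).glob('*')))
        print(f"{split}/{cls}: {n} files")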

In [12]:
dataset_path = 'casting_data'
batch = 32

# The train and test datasets
print("Train dataset")
train_ds = tf.keras.preprocessing.image_dataset_from_directory(dataset_path+'/train', batch_size=batch)

print("Test dataset")
test_ds = tf.keras.preprocessing.image_dataset_from_directory(dataset_path+'/test', batch_size=batch)
Train dataset
Found 6377 files belonging to 2 classes.
Test dataset
Found 715 files belonging to 2 classes.

Understanding the subfolders

In [13]:
class_names = train_ds.class_names

print("Train class names: ", train_ds.class_names)
print("Test class names: ", test_ds.class_names)
Train class names:  ['def_front', 'ok_front']
Test class names:  ['def_front', 'ok_front']

Visualizing the samples in the dataset

In [14]:
plt.figure(figsize=(10, 10))
for images, labels in train_ds:
    for i in range(9):
        ax = plt.subplot(3, 3, i + 1)
        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(class_names[labels[i]])
        plt.axis("off")
    break
In [15]:
print("Looking into the shape of images and labels in one batch\n")  

for image_batch, labels_batch in train_ds:
    print("Shape of images input for one batch: ", image_batch.shape)
    print("Shape of images labels for one batch: ", labels_batch.shape)
    break
Looking into the shape of images and labels in one batch

Shape of images input for one batch:  (32, 256, 256, 3)
Shape of images labels for one batch:  (32,)
In [16]:
# Normalizing the pixel values

normalization_layer = layers.experimental.preprocessing.Rescaling(1./255)

train_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
test_ds = test_ds.map(lambda x, y: (normalization_layer(x), y))
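
Optionally, the same tf.data pipelines can be cached and prefetched to speed up training; a minimal sketch (not applied in the runs below):

AUTOTUNE = tf.data.experimental.AUTOTUNE   # tf.data.AUTOTUNE on newer TensorFlow versions
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
test_ds = test_ds.cache().prefetch(buffer_size=AUTOTUNE)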

Model

In [17]:
model = tf.keras.Sequential([
  layers.Conv2D(32, 3, activation='relu'),
  layers.MaxPool2D(),
  layers.Conv2D(32, 3, activation='relu'),
  layers.MaxPool2D(),
  layers.Conv2D(32, 3, activation='relu'),
  layers.MaxPool2D(),
  layers.Flatten(),
  layers.Dense(128, activation='relu'),
  layers.Dense(len(class_names), activation = 'softmax')
])

model.compile(optimizer='adam', loss=tf.losses.SparseCategoricalCrossentropy(), metrics=['accuracy'])
In [18]:
history = model.fit(train_ds, epochs=8)
Epoch 1/8
200/200 [==============================] - 14s 71ms/step - loss: 0.5069 - accuracy: 0.7414
Epoch 2/8
200/200 [==============================] - 14s 69ms/step - loss: 0.2529 - accuracy: 0.8927
Epoch 3/8
200/200 [==============================] - 15s 77ms/step - loss: 0.1040 - accuracy: 0.9660
Epoch 4/8
200/200 [==============================] - 17s 85ms/step - loss: 0.0561 - accuracy: 0.9838
Epoch 5/8
200/200 [==============================] - 14s 72ms/step - loss: 0.0642 - accuracy: 0.9787
Epoch 6/8
200/200 [==============================] - 14s 71ms/step - loss: 0.0266 - accuracy: 0.9926
Epoch 7/8
200/200 [==============================] - 14s 69ms/step - loss: 0.0167 - accuracy: 0.9956
Epoch 8/8
200/200 [==============================] - 14s 69ms/step - loss: 0.0154 - accuracy: 0.9951
In [19]:
model.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_3 (Conv2D)            (None, 254, 254, 32)      896       
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 127, 127, 32)      0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 125, 125, 32)      9248      
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 62, 62, 32)        0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 60, 60, 32)        9248      
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 30, 30, 32)        0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 28800)             0         
_________________________________________________________________
dense_2 (Dense)              (None, 128)               3686528   
_________________________________________________________________
dense_3 (Dense)              (None, 2)                 258       
=================================================================
Total params: 3,706,178
Trainable params: 3,706,178
Non-trainable params: 0
_________________________________________________________________
In [20]:
model.evaluate(test_ds)
23/23 [==============================] - 1s 40ms/step - loss: 0.0111 - accuracy: 0.9972
Out[20]:
[0.011101297102868557, 0.9972028136253357]
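
For a per-class breakdown, the confusion_matrix imported earlier can be applied to the test set; a minimal sketch using the batched test_ds defined above:

# Collect true labels and argmax predictions batch by batch
y_true, y_pred = [], []
for images, labels in test_ds:
    preds = model.predict(images)
    y_true.extend(labels.numpy())
    y_pred.extend(np.argmax(preds, axis=1))

# Rows are true classes, columns are predicted classes (def_front, ok_front)
print(confusion_matrix(y_true, y_pred))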

Plotting the metrics

In [21]:
def plot(history, variable):
    # Plot a single training metric (e.g. accuracy or loss) against epoch number
    plt.plot(range(len(history[variable])), history[variable])
    plt.title(variable)
In [22]:
plot(history.history, "accuracy")
In [23]:
plot(history.history, "loss")

Prediction

In [24]:
# pick random test data sample from one batch
x = random.randint(0, batch - 1)

for i in test_ds.as_numpy_iterator():
    img, label = i    
    plt.axis('off')   # remove axes
    plt.imshow(img[x])    # shape from (32, 256, 256, 3) --> (256, 256, 3)
    output = model.predict(np.expand_dims(img[x],0))    # getting output; input shape (256, 256, 3) --> (1, 256, 256, 3)
    pred = np.argmax(output[0])    # finding max
    print("Prdicted: ", class_names[pred])    # Picking the label from class_names base don the model output
    print("True: ", class_names[label[x]])
    print("Probability: ", output[0][pred])
    break
Predicted:  def_front
True:  def_front
Probability:  0.9983612

deepC

In [25]:
model.save('quality_check.h5')
In [26]:
!deepCC quality_check.h5
[INFO]
Reading [keras model] 'quality_check.h5'
[SUCCESS]
Saved 'quality_check_deepC/quality_check.onnx'
[INFO]
Reading [onnx model] 'quality_check_deepC/quality_check.onnx'
[INFO]
Model info:
  ir_vesion : 5
  doc       : 
[WARNING]
[ONNX]: terminal (input/output) conv2d_3_input's shape is less than 1. Changing it to 1.
[WARNING]
[ONNX]: terminal (input/output) dense_3's shape is less than 1. Changing it to 1.
WARN (GRAPH): found operator node with the same name (dense_3) as io node.
[INFO]
Running DNNC graph sanity check ...
[SUCCESS]
Passed sanity check.
[INFO]
Writing C++ file 'quality_check_deepC/quality_check.cpp'
[INFO]
deepSea model files are ready in 'quality_check_deepC/' 
[RUNNING COMMAND]
g++ -std=c++11 -O3 -fno-rtti -fno-exceptions -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -isystem /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 "quality_check_deepC/quality_check.cpp" -D_AITS_MAIN -o "quality_check_deepC/quality_check.exe"
[RUNNING COMMAND]
size "quality_check_deepC/quality_check.exe"
   text	   data	    bss	    dec	    hex	filename
15000557	   3784	    760	15005101	 e4f5ad	quality_check_deepC/quality_check.exe
[SUCCESS]
Saved model as executable "quality_check_deepC/quality_check.exe"
In [27]:
# pick random test data sample from the batch
x = random.randint(0, batch - 1)

for i in test_ds.as_numpy_iterator():    
    img, label = i      # i contains all test samples
    np.savetxt('sample.data', (img[x]).flatten())    # xth sample into text file
    plt.axis('off')
    plt.imshow(img[x])
    print("True: ", class_names[label[x]])
    break

# run exe with input
!quality_check_deepC/quality_check.exe sample.data

# show predicted output
nn_out = np.loadtxt('deepSea_result_1.out')    # deepC reports writing the output to 'deepSea_result_1.out' (see log above)
pred = np.argmax(nn_out)
print("Model predicted the product quality: ", class_names[pred], " with probability ", nn_out[pred])
True:  def_front
Warn: conv2d_3_Relu_0_pooling: auto_pad attribute is deprecated, it'll be ignored.
Warn: conv2d_4_Relu_0_pooling: auto_pad attribute is deprecated, it'll be ignored.
Warn: conv2d_5_Relu_0_pooling: auto_pad attribute is deprecated, it'll be ignored.
writing file deepSea_result_1.out.
In [ ]: