
Classifying Clothing Items from Fashion MNIST Dataset

Credit: AITS Cainvas Community

Photo by Ofspace Digital Agency on Dribbble

This notebook uses a neural network to classify the different types of clothing items in the Fashion MNIST dataset. TensorFlow and Keras are used to build and train the network that assigns each clothing image to its class.

In [1]:
import numpy as np
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense, Dropout
import matplotlib.pyplot as plt
%matplotlib inline
In [2]:
import tensorflow as tf
gpus = tf.config.experimental.list_physical_devices('GPU')

if gpus:
    try:
        # Currently, memory growth needs to be the same across GPUs
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        # Memory growth must be set before GPUs have been initialized
        print(e)
1 Physical GPUs, 1 Logical GPUs

Fashion MNIST Dataset

Fashion-MNIST is a dataset of Zalando's article images, consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from one of 10 classes:

  • Class 0: T-shirt/top
  • Class 1: Trouser
  • Class 2: Pullover
  • Class 3: Dress
  • Class 4: Coat
  • Class 5: Sandal
  • Class 6: Shirt
  • Class 7: Sneaker
  • Class 8: Bag
  • Class 9: Ankle Boot
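For readability when inspecting predictions later, the integer labels can be mapped to these names. A small convenience helper (the class_names list is not part of the dataset itself):

# Human-readable names for the 10 labels, in class-index order
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
print(class_names[9])  # Ankle boot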
In [3]:
from tensorflow.keras.datasets import fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
print(x_train.shape)
print(x_test.shape)
plt.imshow(x_train[5])
plt.title('Class: {}'.format(y_train[5]))
plt.show()
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz
32768/29515 [=================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz
26427392/26421880 [==============================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz
8192/5148 [===============================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz
4423680/4422102 [==============================] - 0s 0us/step
(60000, 28, 28)
(10000, 28, 28)
In [4]:
x_train = keras.utils.normalize(x_train, axis=1)
x_test = keras.utils.normalize(x_test, axis=1)
plt.imshow(x_train[5])
plt.title('Class: {}'.format(y_train[5]))
plt.show()
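Note that keras.utils.normalize applies L2 normalization along the given axis, so with axis=1 each 28-pixel column vector within an image is rescaled to unit norm. A more common preprocessing, not used in this notebook, is simple scaling of the raw [0, 255] intensities into [0, 1]; a sketch:

# Sketch of the usual min-max scaling alternative (assumes raw uint8 images)
def scale_to_unit(images):
    return images.astype('float32') / 255.0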
In [5]:
model = Sequential()
model.add(Flatten(input_shape=(28, 28)))
model.add(Dropout(0.05))
model.add(Dense(128, activation="relu"))
model.add(Dense(64, activation="relu"))
model.add(Dropout(0.05))
model.add(Dense(10, activation="softmax"))
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
flatten (Flatten)            (None, 784)               0         
_________________________________________________________________
dropout (Dropout)            (None, 784)               0         
_________________________________________________________________
dense (Dense)                (None, 128)               100480    
_________________________________________________________________
dense_1 (Dense)              (None, 64)                8256      
_________________________________________________________________
dropout_1 (Dropout)          (None, 64)                0         
_________________________________________________________________
dense_2 (Dense)              (None, 10)                650       
=================================================================
Total params: 109,386
Trainable params: 109,386
Non-trainable params: 0
_________________________________________________________________
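The parameter counts follow directly from the layer sizes: a Dense layer has (inputs × units) weights plus one bias per unit, while Flatten and Dropout add none. A quick sanity check:

print(784 * 128 + 128)  # 100480 -> dense
print(128 * 64 + 64)    # 8256   -> dense_1
print(64 * 10 + 10)     # 650    -> dense_2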
In [6]:
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
history = model.fit(x_train, y_train, validation_split=0.33, epochs=35)
Epoch 1/35
1257/1257 [==============================] - 6s 5ms/step - loss: 0.5556 - accuracy: 0.7964 - val_loss: 0.4092 - val_accuracy: 0.8509
Epoch 2/35
1257/1257 [==============================] - 6s 4ms/step - loss: 0.3974 - accuracy: 0.8537 - val_loss: 0.3681 - val_accuracy: 0.8627
Epoch 3/35
1257/1257 [==============================] - 6s 4ms/step - loss: 0.3561 - accuracy: 0.8681 - val_loss: 0.3637 - val_accuracy: 0.8650
Epoch 4/35
1257/1257 [==============================] - 5s 4ms/step - loss: 0.3332 - accuracy: 0.8761 - val_loss: 0.3497 - val_accuracy: 0.8724
Epoch 5/35
1257/1257 [==============================] - 6s 4ms/step - loss: 0.3120 - accuracy: 0.8826 - val_loss: 0.3343 - val_accuracy: 0.8778
Epoch 6/35
1257/1257 [==============================] - 6s 5ms/step - loss: 0.2985 - accuracy: 0.8870 - val_loss: 0.3392 - val_accuracy: 0.8756
Epoch 7/35
1257/1257 [==============================] - 6s 4ms/step - loss: 0.2884 - accuracy: 0.8917 - val_loss: 0.3266 - val_accuracy: 0.8791
Epoch 8/35
1257/1257 [==============================] - 5s 4ms/step - loss: 0.2735 - accuracy: 0.8966 - val_loss: 0.3335 - val_accuracy: 0.8781
Epoch 9/35
1257/1257 [==============================] - 5s 4ms/step - loss: 0.2646 - accuracy: 0.8985 - val_loss: 0.3123 - val_accuracy: 0.8872
Epoch 10/35
1257/1257 [==============================] - 4s 3ms/step - loss: 0.2560 - accuracy: 0.9042 - val_loss: 0.3141 - val_accuracy: 0.8891
Epoch 11/35
1257/1257 [==============================] - 5s 4ms/step - loss: 0.2435 - accuracy: 0.9069 - val_loss: 0.3319 - val_accuracy: 0.8800
Epoch 12/35
1257/1257 [==============================] - 6s 4ms/step - loss: 0.2414 - accuracy: 0.9087 - val_loss: 0.3326 - val_accuracy: 0.8826
Epoch 13/35
1257/1257 [==============================] - 6s 4ms/step - loss: 0.2334 - accuracy: 0.9101 - val_loss: 0.3105 - val_accuracy: 0.8893
Epoch 14/35
1257/1257 [==============================] - 6s 4ms/step - loss: 0.2253 - accuracy: 0.9131 - val_loss: 0.3406 - val_accuracy: 0.8818
Epoch 15/35
1257/1257 [==============================] - 6s 4ms/step - loss: 0.2205 - accuracy: 0.9157 - val_loss: 0.3209 - val_accuracy: 0.8902
Epoch 16/35
1257/1257 [==============================] - 6s 4ms/step - loss: 0.2119 - accuracy: 0.9186 - val_loss: 0.3333 - val_accuracy: 0.8902
Epoch 17/35
1257/1257 [==============================] - 6s 4ms/step - loss: 0.2091 - accuracy: 0.9202 - val_loss: 0.3324 - val_accuracy: 0.8878
Epoch 18/35
1257/1257 [==============================] - 5s 4ms/step - loss: 0.2039 - accuracy: 0.9197 - val_loss: 0.3329 - val_accuracy: 0.8899
Epoch 19/35
1257/1257 [==============================] - 6s 4ms/step - loss: 0.1977 - accuracy: 0.9231 - val_loss: 0.3292 - val_accuracy: 0.8892
Epoch 20/35
1257/1257 [==============================] - 6s 4ms/step - loss: 0.1937 - accuracy: 0.9244 - val_loss: 0.3335 - val_accuracy: 0.8885
Epoch 21/35
1257/1257 [==============================] - 6s 4ms/step - loss: 0.1889 - accuracy: 0.9262 - val_loss: 0.3345 - val_accuracy: 0.8905
Epoch 22/35
1257/1257 [==============================] - 5s 4ms/step - loss: 0.1819 - accuracy: 0.9297 - val_loss: 0.3326 - val_accuracy: 0.8918
Epoch 23/35
1257/1257 [==============================] - 6s 4ms/step - loss: 0.1801 - accuracy: 0.9312 - val_loss: 0.3545 - val_accuracy: 0.8888
Epoch 24/35
1257/1257 [==============================] - 6s 4ms/step - loss: 0.1758 - accuracy: 0.9328 - val_loss: 0.3398 - val_accuracy: 0.8912
Epoch 25/35
1257/1257 [==============================] - 6s 5ms/step - loss: 0.1721 - accuracy: 0.9330 - val_loss: 0.3347 - val_accuracy: 0.8925
Epoch 26/35
1257/1257 [==============================] - 5s 4ms/step - loss: 0.1682 - accuracy: 0.9345 - val_loss: 0.4208 - val_accuracy: 0.8792
Epoch 27/35
1257/1257 [==============================] - 4s 3ms/step - loss: 0.1682 - accuracy: 0.9358 - val_loss: 0.3563 - val_accuracy: 0.8932
Epoch 28/35
1257/1257 [==============================] - 5s 4ms/step - loss: 0.1647 - accuracy: 0.9358 - val_loss: 0.3424 - val_accuracy: 0.8969
Epoch 29/35
1257/1257 [==============================] - 5s 4ms/step - loss: 0.1634 - accuracy: 0.9373 - val_loss: 0.3689 - val_accuracy: 0.8883
Epoch 30/35
1257/1257 [==============================] - 5s 4ms/step - loss: 0.1562 - accuracy: 0.9402 - val_loss: 0.3518 - val_accuracy: 0.8934
Epoch 31/35
1257/1257 [==============================] - 6s 4ms/step - loss: 0.1574 - accuracy: 0.9382 - val_loss: 0.3637 - val_accuracy: 0.8929
Epoch 32/35
1257/1257 [==============================] - 6s 4ms/step - loss: 0.1525 - accuracy: 0.9409 - val_loss: 0.3658 - val_accuracy: 0.8962
Epoch 33/35
1257/1257 [==============================] - 6s 4ms/step - loss: 0.1512 - accuracy: 0.9409 - val_loss: 0.3618 - val_accuracy: 0.8956
Epoch 34/35
1257/1257 [==============================] - 5s 4ms/step - loss: 0.1469 - accuracy: 0.9445 - val_loss: 0.3763 - val_accuracy: 0.8938
Epoch 35/35
1257/1257 [==============================] - 6s 4ms/step - loss: 0.1465 - accuracy: 0.9441 - val_loss: 0.3744 - val_accuracy: 0.8919
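Training accuracy keeps improving while validation loss bottoms out around epochs 9-13 and then drifts upward, which suggests mild overfitting. One common remedy, not applied in this notebook, is to halt training once the validation loss stops improving; a minimal sketch using Keras's EarlyStopping callback:

from tensorflow.keras.callbacks import EarlyStopping

# Stop after 5 epochs without val_loss improvement and keep the best weights
early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
history = model.fit(x_train, y_train, validation_split=0.33,
                    epochs=35, callbacks=[early_stop])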
In [7]:
model.evaluate(x_test, y_test)
313/313 [==============================] - 1s 3ms/step - loss: 0.4285 - accuracy: 0.8812
Out[7]:
[0.4284818470478058, 0.8812000155448914]
In [8]:
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
In [9]:
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
In [10]:
p = model.predict(x_test[:10])
print(p)
[[1.56451438e-08 5.98775664e-17 3.30352908e-15 5.48490897e-12
  6.42496446e-14 1.17467025e-05 1.32269654e-10 1.03421116e-04
  3.62460867e-10 9.99884844e-01]
 [1.14386363e-04 1.53019038e-11 9.99353588e-01 6.04862514e-12
  3.16385238e-04 3.44458880e-12 2.15644715e-04 2.89657964e-10
  1.58083796e-10 3.22466446e-13]
 [6.72763303e-11 1.00000000e+00 1.06854722e-14 3.72921763e-11
  1.74975169e-17 1.02592045e-20 6.53424390e-14 4.67071329e-25
  5.03721850e-18 9.33302170e-22]
 [5.89728868e-14 1.00000000e+00 3.21249343e-19 3.06759094e-13
  4.33925096e-21 8.15069212e-22 7.28938851e-15 6.05309428e-26
  7.74054731e-19 2.51832926e-22]
 [9.29527164e-01 5.56032271e-13 6.83052582e-04 1.26792543e-07
  6.55551412e-05 1.53529030e-11 6.97241127e-02 2.18209066e-12
  1.30823581e-11 3.80634651e-13]
 [6.55047039e-09 1.00000000e+00 1.06724117e-11 2.25792898e-10
  7.42565257e-14 5.40892834e-19 1.52787505e-11 1.72065797e-20
  7.83342737e-13 5.73261497e-19]
 [1.40669670e-10 4.52502415e-14 4.19008837e-04 2.46905794e-12
  9.99578416e-01 1.24373448e-15 2.63992229e-06 1.46835292e-16
  3.93415252e-15 3.44628026e-14]
 [3.24590133e-07 2.58869311e-14 1.34181819e-05 4.21124832e-06
  8.88979775e-05 5.94732666e-11 9.99893069e-01 1.20031252e-09
  1.38944134e-10 7.60877472e-12]
 [1.66358897e-20 9.18522194e-34 5.98218756e-18 3.31437862e-20
  3.86473572e-17 1.00000000e+00 1.60792341e-18 1.20993077e-15
  2.91599422e-20 2.49624667e-28]
 [1.68890513e-15 3.11470397e-21 1.29099883e-13 1.99629245e-12
  1.06981958e-11 1.69941154e-06 1.55821417e-12 9.99998331e-01
  5.44792889e-13 3.03000358e-09]]
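Each row of p is a softmax distribution over the 10 classes, so its entries sum to 1; a quick check:

print(p.sum(axis=1))  # each row sums to ~1.0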
In [11]:
pred = np.argmax(p, axis = 1)
print(pred)
print(y_test[:10])
[9 2 1 1 0 1 4 6 5 7]
[9 2 1 1 6 1 4 6 5 7]
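The fifth test image (true class 6, Shirt) is predicted as class 0 (T-shirt/top), two visually similar classes. To see which classes are confused across the full test set, a confusion matrix is a natural follow-up; a sketch using tf.math.confusion_matrix:

all_preds = np.argmax(model.predict(x_test), axis=1)
cm = tf.math.confusion_matrix(y_test, all_preds)
print(cm)  # rows are true classes, columns are predicted classes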
In [12]:
plt.figure(figsize=(10, 10))
for i in range(10):
    plt.subplot(5, 5, i+1)
    plt.imshow(x_test[i])
    plt.title('Original: {}, Predicted: {}'.format(y_test[i], pred[i]))
    plt.axis('off')

plt.tight_layout()
plt.show()
In [13]:
model.save("fashion_mnist.h5")
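The saved HDF5 file can be reloaded later for inference without retraining; a minimal sketch:

restored = keras.models.load_model("fashion_mnist.h5")
restored.evaluate(x_test, y_test)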

deepCC

In [14]:
!deepCC fashion_mnist.h5
[INFO]
Reading [keras model] 'fashion_mnist.h5'
[SUCCESS]
Saved 'fashion_mnist_deepC/fashion_mnist.onnx'
[INFO]
Reading [onnx model] 'fashion_mnist_deepC/fashion_mnist.onnx'
[INFO]
Model info:
  ir_vesion : 4
  doc       : 
[WARNING]
[ONNX]: terminal (input/output) flatten_input's shape is less than 1. Changing it to 1.
[WARNING]
[ONNX]: terminal (input/output) dense_2's shape is less than 1. Changing it to 1.
WARN (GRAPH): found operator node with the same name (dense_2) as io node.
[INFO]
Running DNNC graph sanity check ...
[SUCCESS]
Passed sanity check.
[INFO]
Writing C++ file 'fashion_mnist_deepC/fashion_mnist.cpp'
[INFO]
deepSea model files are ready in 'fashion_mnist_deepC/' 
[RUNNING COMMAND]
g++ -std=c++11 -O3 -fno-rtti -fno-exceptions -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -isystem /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 "fashion_mnist_deepC/fashion_mnist.cpp" -D_AITS_MAIN -o "fashion_mnist_deepC/fashion_mnist.exe"
[RUNNING COMMAND]
size "fashion_mnist_deepC/fashion_mnist.exe"
   text	   data	    bss	    dec	    hex	filename
 563781	   3184	    760	 567725	  8a9ad	fashion_mnist_deepC/fashion_mnist.exe
[SUCCESS]
Saved model as executable "fashion_mnist_deepC/fashion_mnist.exe"