
Tomato Disease Detection

Credit: AITS Cainvas Community

Photo by Jonas Mosesson on Dribbble

Importing necessary libraries

In [1]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
import cv2 
import os

Downloading the data

In [2]:
!wget -N "https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/tomato.zip"
!unzip -qo tomato.zip
!rm tomato.zip
--2021-09-09 05:39:06--  https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/tomato.zip
Resolving cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)... 52.219.156.31
Connecting to cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)|52.219.156.31|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 37248042 (36M) [application/x-zip-compressed]
Saving to: ‘tomato.zip’

tomato.zip          100%[===================>]  35.52M   110MB/s    in 0.3s    

2021-09-09 05:39:06 (110 MB/s) - ‘tomato.zip’ saved [37248042/37248042]

Reading and preprocessing images and labels

In [3]:
# The dataset has two top-level folders, train and val; each contains 10 subfolders,
# one per disease plus one for healthy leaves. Folder names look like
# 'Tomato___Bacterial_spot', so the label is the part after '___'.
def load_split(split_path):
    images, labels = [], []
    for disease_folder in os.listdir(split_path):
        disease_path = os.path.join(split_path, disease_folder)
        label = disease_folder.split('___')[1]
        for file in os.listdir(disease_path):
            if file.endswith('jpg'):
                img = cv2.imread(os.path.join(disease_path, file))
                # OpenCV loads images as BGR; convert to RGB and scale to [0, 1]
                img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) / 255
                images.append(img)
                labels.append(label)
    return images, labels

dataset_path = 'tomato'
train_images, train_labels = load_split(os.path.join(dataset_path, 'train'))
test_images, test_labels = load_split(os.path.join(dataset_path, 'val'))
                    
train_images = np.array(train_images)
train_labels = np.array(train_labels)
test_images = np.array(test_images)
test_labels = np.array(test_labels)
print('Shape of the stacked train images:', train_images.shape)
print('Shape of the train labels:', train_labels.shape)
print('Shape of the stacked test images:', test_images.shape)
print('Shape of the test_labels:', test_labels.shape)
Shape of the stacked train images: (10000, 64, 64, 3)
Shape of the train labels: (10000,)
Shape of the stacked test images: (1000, 64, 64, 3)
Shape of the test_labels: (1000,)
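
For reference, the same directory layout can also be loaded lazily with Keras' built-in utility instead of reading every file into memory up front. This is a hedged alternative sketch, not what this notebook uses; note that the inferred labels would be the full folder names (e.g. Tomato___Bacterial_spot) and the pixel values would not be rescaled:

from tensorflow.keras.preprocessing import image_dataset_from_directory

# builds a tf.data pipeline; labels are inferred from the subfolder names
train_ds = image_dataset_from_directory('tomato/train', label_mode='categorical',
                                        image_size=(64, 64), batch_size=64)
val_ds = image_dataset_from_directory('tomato/val', label_mode='categorical',
                                      image_size=(64, 64), batch_size=64)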

Listing all leaf categories

In [4]:
unique_labels = np.unique(train_labels)
unique_labels
Out[4]:
array(['Bacterial_spot', 'Early_blight', 'Late_blight', 'Leaf_Mold',
       'Septoria_leaf_spot', 'Spider_mites Two-spotted_spider_mite',
       'Target_Spot', 'Tomato_Yellow_Leaf_Curl_Virus',
       'Tomato_mosaic_virus', 'healthy'], dtype='<U36')

An encoder function that one-hot encodes the string labels into numerical vectors

In [5]:
def encoder(labels):
    # map each class name to an index, then build one-hot vectors
    dic = {'Bacterial_spot':0, 'Early_blight':1, 'Late_blight':2, 'Leaf_Mold':3, 'Septoria_leaf_spot':4, 'Spider_mites Two-spotted_spider_mite':5,
           'Target_Spot':6, 'Tomato_Yellow_Leaf_Curl_Virus':7, 'Tomato_mosaic_virus':8, 'healthy':9}
    encoded = np.zeros((labels.shape[0], 10))
    for i in range(len(labels)):
        encoded[i, dic[labels[i]]] = 1
    return encoded

A decoder function that converts predicted probability vectors back into string labels

In [6]:
def decoder(labels):
    # take the most probable class for each prediction and map it back to its name
    preds = np.argmax(labels, axis=1)
    dic = {0:'Bacterial_spot', 1:'Early_blight', 2:'Late_blight', 3:'Leaf_Mold', 4:'Septoria_leaf_spot', 5:'Spider_mites Two-spotted_spider_mite',
           6:'Target_Spot', 7:'Tomato_Yellow_Leaf_Curl_Virus', 8:'Tomato_mosaic_virus', 9:'healthy'}
    return np.array([dic[i] for i in preds])
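
As a quick sanity check (a minimal sketch, not part of the original notebook), the two functions should invert each other:

sample = np.array(['healthy', 'Early_blight', 'Leaf_Mold'])
assert (decoder(encoder(sample)) == sample).all()  # round-trip preserves labels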
    

Let's visualize two images from each category with their corresponding labels to get a feel for the data

In [7]:
row = 5
col = 4
fig, axes = plt.subplots(row, col, figsize=(14, 14))
c = 0
for i in range(row):
    for j in range(col):
        axes[i][j].imshow(train_images[c])
        axes[i][j].set_title(train_labels[c])
        c += 500  # images are stored in class order, so this stride picks two per class

plt.tight_layout()
plt.show()

Encoding our labels

In [8]:
train_labels = encoder(train_labels)
test_labels = encoder(test_labels)

Splitting the data into training and validation sets and augmenting the training images

In [9]:
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(train_images, train_labels, random_state=123)
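train_test_split holds out 25% of the samples by default. Since the labels are already one-hot encoded at this point, a stratified split (an assumption, not used above) could keep the class proportions identical in both sets:

X_train, X_val, y_train, y_val = train_test_split(
    train_images, train_labels, test_size=0.25,
    stratify=train_labels.argmax(axis=1),  # class indices recovered from one-hot rows
    random_state=123)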
In [10]:
from tensorflow.keras.preprocessing.image import ImageDataGenerator
datagen_train = ImageDataGenerator(horizontal_flip=True, vertical_flip=True)
train_iter = datagen_train.flow(X_train, y_train, batch_size=64)
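
The generator above only applies random flips. For leaf images, small rotations, shifts, and zooms are also label-preserving; a hedged sketch of a richer generator (the parameter values are assumptions, not tuned here):

datagen_train = ImageDataGenerator(
    horizontal_flip=True,
    vertical_flip=True,
    rotation_range=30,       # random rotations of up to 30 degrees
    width_shift_range=0.1,   # random horizontal shifts of up to 10%
    height_shift_range=0.1,  # random vertical shifts of up to 10%
    zoom_range=0.1)          # random zoom of up to 10%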

Building and training our model

In [11]:
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, Flatten

model = tf.keras.Sequential([
    Conv2D(8, (3, 3), input_shape=X_train.shape[1:], activation='relu', padding='same'),
    MaxPooling2D((2, 2), padding='same'),

    Conv2D(16, (3, 3), activation='relu', padding='same'),
    MaxPooling2D((2, 2), padding='same'),

    Conv2D(32, (3, 3), activation='relu', padding='same'),
    MaxPooling2D((2, 2), padding='same'),

    Flatten(),
    Dense(40, activation='relu'),
    Dense(10, activation='softmax')
])

model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 64, 64, 8)         224       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 32, 32, 8)         0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 32, 32, 16)        1168      
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 16, 16, 16)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 16, 16, 32)        4640      
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 8, 8, 32)          0         
_________________________________________________________________
flatten (Flatten)            (None, 2048)              0         
_________________________________________________________________
dense (Dense)                (None, 40)                81960     
_________________________________________________________________
dense_1 (Dense)              (None, 10)                410       
=================================================================
Total params: 88,402
Trainable params: 88,402
Non-trainable params: 0
_________________________________________________________________
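The parameter counts follow from (kernel_height * kernel_width * input_channels + 1) * filters for the convolutions: (3*3*3 + 1)*8 = 224, (3*3*8 + 1)*16 = 1,168, and (3*3*16 + 1)*32 = 4,640. The dense layers add (2048 + 1)*40 = 81,960 and (40 + 1)*10 = 410, which sums to the 88,402 total shown above.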
In [12]:
# callbacks: stop if validation loss does not improve for 10 epochs (restoring the
# best weights) and checkpoint the best model seen so far
cb = [
        tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True),
        tf.keras.callbacks.ModelCheckpoint('model_tomato.h5', monitor='val_loss', save_best_only=True)
    ]

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005), loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(train_iter, steps_per_epoch=len(train_iter), epochs=120, validation_data=(X_val, y_val), callbacks=cb)
Epoch 1/120
118/118 [==============================] - 1s 9ms/step - loss: 2.1872 - accuracy: 0.2189 - val_loss: 1.8924 - val_accuracy: 0.3568
Epoch 2/120
118/118 [==============================] - 1s 6ms/step - loss: 1.6174 - accuracy: 0.4547 - val_loss: 1.4975 - val_accuracy: 0.5028
Epoch 3/120
118/118 [==============================] - 1s 5ms/step - loss: 1.3795 - accuracy: 0.5427 - val_loss: 1.3518 - val_accuracy: 0.5336
Epoch 4/120
118/118 [==============================] - 1s 5ms/step - loss: 1.2177 - accuracy: 0.5955 - val_loss: 1.1116 - val_accuracy: 0.6328
Epoch 5/120
118/118 [==============================] - 1s 5ms/step - loss: 1.0678 - accuracy: 0.6497 - val_loss: 1.0314 - val_accuracy: 0.6436
Epoch 6/120
118/118 [==============================] - 1s 5ms/step - loss: 0.9830 - accuracy: 0.6733 - val_loss: 0.9510 - val_accuracy: 0.6736
Epoch 7/120
118/118 [==============================] - 1s 5ms/step - loss: 0.8957 - accuracy: 0.6992 - val_loss: 0.9741 - val_accuracy: 0.6532
Epoch 8/120
118/118 [==============================] - 1s 5ms/step - loss: 0.8500 - accuracy: 0.7117 - val_loss: 0.8613 - val_accuracy: 0.7040
Epoch 9/120
118/118 [==============================] - 1s 5ms/step - loss: 0.8113 - accuracy: 0.7229 - val_loss: 0.8274 - val_accuracy: 0.7244
Epoch 10/120
118/118 [==============================] - 1s 5ms/step - loss: 0.7713 - accuracy: 0.7423 - val_loss: 0.7751 - val_accuracy: 0.7356
Epoch 11/120
118/118 [==============================] - 1s 5ms/step - loss: 0.7143 - accuracy: 0.7617 - val_loss: 0.7194 - val_accuracy: 0.7572
Epoch 12/120
118/118 [==============================] - 1s 5ms/step - loss: 0.7046 - accuracy: 0.7613 - val_loss: 0.8102 - val_accuracy: 0.7224
Epoch 13/120
118/118 [==============================] - 1s 5ms/step - loss: 0.6726 - accuracy: 0.7743 - val_loss: 0.6838 - val_accuracy: 0.7716
Epoch 14/120
118/118 [==============================] - 1s 5ms/step - loss: 0.6562 - accuracy: 0.7761 - val_loss: 0.6640 - val_accuracy: 0.7728
Epoch 15/120
118/118 [==============================] - 1s 5ms/step - loss: 0.6284 - accuracy: 0.7841 - val_loss: 0.6262 - val_accuracy: 0.7892
Epoch 16/120
118/118 [==============================] - 1s 5ms/step - loss: 0.5981 - accuracy: 0.7968 - val_loss: 0.6599 - val_accuracy: 0.7676
Epoch 17/120
118/118 [==============================] - 1s 5ms/step - loss: 0.5819 - accuracy: 0.8052 - val_loss: 0.6001 - val_accuracy: 0.7976
Epoch 18/120
118/118 [==============================] - 1s 5ms/step - loss: 0.5530 - accuracy: 0.8137 - val_loss: 0.6478 - val_accuracy: 0.7836
Epoch 19/120
118/118 [==============================] - 1s 5ms/step - loss: 0.5462 - accuracy: 0.8143 - val_loss: 0.5706 - val_accuracy: 0.8000
Epoch 20/120
118/118 [==============================] - 1s 6ms/step - loss: 0.5270 - accuracy: 0.8181 - val_loss: 0.5534 - val_accuracy: 0.8040
Epoch 21/120
118/118 [==============================] - 1s 5ms/step - loss: 0.5097 - accuracy: 0.8243 - val_loss: 0.5369 - val_accuracy: 0.8116
Epoch 22/120
118/118 [==============================] - 1s 5ms/step - loss: 0.5043 - accuracy: 0.8283 - val_loss: 0.5829 - val_accuracy: 0.7948
Epoch 23/120
118/118 [==============================] - 1s 5ms/step - loss: 0.4799 - accuracy: 0.8339 - val_loss: 0.5904 - val_accuracy: 0.8028
Epoch 24/120
118/118 [==============================] - 1s 5ms/step - loss: 0.4708 - accuracy: 0.8343 - val_loss: 0.5722 - val_accuracy: 0.8016
Epoch 25/120
118/118 [==============================] - 1s 5ms/step - loss: 0.4741 - accuracy: 0.8313 - val_loss: 0.5087 - val_accuracy: 0.8280
Epoch 26/120
118/118 [==============================] - 1s 5ms/step - loss: 0.4423 - accuracy: 0.8476 - val_loss: 0.5850 - val_accuracy: 0.7772
Epoch 27/120
118/118 [==============================] - 1s 5ms/step - loss: 0.4359 - accuracy: 0.8452 - val_loss: 0.4863 - val_accuracy: 0.8312
Epoch 28/120
118/118 [==============================] - 1s 5ms/step - loss: 0.4274 - accuracy: 0.8531 - val_loss: 0.5385 - val_accuracy: 0.8056
Epoch 29/120
118/118 [==============================] - 1s 5ms/step - loss: 0.4338 - accuracy: 0.8480 - val_loss: 0.4462 - val_accuracy: 0.8476
Epoch 30/120
118/118 [==============================] - 1s 5ms/step - loss: 0.4113 - accuracy: 0.8571 - val_loss: 0.4668 - val_accuracy: 0.8364
Epoch 31/120
118/118 [==============================] - 1s 5ms/step - loss: 0.4010 - accuracy: 0.8620 - val_loss: 0.4411 - val_accuracy: 0.8512
Epoch 32/120
118/118 [==============================] - 1s 5ms/step - loss: 0.4015 - accuracy: 0.8621 - val_loss: 0.4524 - val_accuracy: 0.8520
Epoch 33/120
118/118 [==============================] - 1s 5ms/step - loss: 0.3835 - accuracy: 0.8651 - val_loss: 0.4622 - val_accuracy: 0.8416
Epoch 34/120
118/118 [==============================] - 1s 5ms/step - loss: 0.3930 - accuracy: 0.8644 - val_loss: 0.4821 - val_accuracy: 0.8328
Epoch 35/120
118/118 [==============================] - 1s 5ms/step - loss: 0.4041 - accuracy: 0.8576 - val_loss: 0.4236 - val_accuracy: 0.8540
Epoch 36/120
118/118 [==============================] - 1s 5ms/step - loss: 0.3650 - accuracy: 0.8705 - val_loss: 0.4097 - val_accuracy: 0.8632
Epoch 37/120
118/118 [==============================] - 1s 5ms/step - loss: 0.3643 - accuracy: 0.8748 - val_loss: 0.4211 - val_accuracy: 0.8536
Epoch 38/120
118/118 [==============================] - 1s 5ms/step - loss: 0.3682 - accuracy: 0.8685 - val_loss: 0.4478 - val_accuracy: 0.8348
Epoch 39/120
118/118 [==============================] - 1s 5ms/step - loss: 0.3516 - accuracy: 0.8779 - val_loss: 0.3997 - val_accuracy: 0.8652
Epoch 40/120
118/118 [==============================] - 1s 5ms/step - loss: 0.3474 - accuracy: 0.8803 - val_loss: 0.4211 - val_accuracy: 0.8616
Epoch 41/120
118/118 [==============================] - 1s 5ms/step - loss: 0.3389 - accuracy: 0.8823 - val_loss: 0.4564 - val_accuracy: 0.8348
Epoch 42/120
118/118 [==============================] - 1s 5ms/step - loss: 0.3498 - accuracy: 0.8780 - val_loss: 0.4788 - val_accuracy: 0.8360
Epoch 43/120
118/118 [==============================] - 1s 5ms/step - loss: 0.3617 - accuracy: 0.8753 - val_loss: 0.3956 - val_accuracy: 0.8696
Epoch 44/120
118/118 [==============================] - 1s 5ms/step - loss: 0.3233 - accuracy: 0.8876 - val_loss: 0.3859 - val_accuracy: 0.8728
Epoch 45/120
118/118 [==============================] - 1s 5ms/step - loss: 0.3264 - accuracy: 0.8852 - val_loss: 0.3653 - val_accuracy: 0.8788
Epoch 46/120
118/118 [==============================] - 1s 5ms/step - loss: 0.3043 - accuracy: 0.8939 - val_loss: 0.3947 - val_accuracy: 0.8652
Epoch 47/120
118/118 [==============================] - 1s 5ms/step - loss: 0.3298 - accuracy: 0.8828 - val_loss: 0.5324 - val_accuracy: 0.8140
Epoch 48/120
118/118 [==============================] - 1s 5ms/step - loss: 0.3296 - accuracy: 0.8831 - val_loss: 0.3702 - val_accuracy: 0.8720
Epoch 49/120
118/118 [==============================] - 1s 5ms/step - loss: 0.3339 - accuracy: 0.8829 - val_loss: 0.4098 - val_accuracy: 0.8640
Epoch 50/120
118/118 [==============================] - 1s 5ms/step - loss: 0.3177 - accuracy: 0.8888 - val_loss: 0.3898 - val_accuracy: 0.8716
Epoch 51/120
118/118 [==============================] - 1s 5ms/step - loss: 0.3006 - accuracy: 0.8947 - val_loss: 0.3787 - val_accuracy: 0.8772
Epoch 52/120
118/118 [==============================] - 1s 5ms/step - loss: 0.2986 - accuracy: 0.8961 - val_loss: 0.3825 - val_accuracy: 0.8732
Epoch 53/120
118/118 [==============================] - 1s 5ms/step - loss: 0.3288 - accuracy: 0.8813 - val_loss: 0.3635 - val_accuracy: 0.8800
Epoch 54/120
118/118 [==============================] - 1s 5ms/step - loss: 0.2905 - accuracy: 0.8988 - val_loss: 0.3603 - val_accuracy: 0.8756
Epoch 55/120
118/118 [==============================] - 1s 5ms/step - loss: 0.3019 - accuracy: 0.8935 - val_loss: 0.3654 - val_accuracy: 0.8776
Epoch 56/120
118/118 [==============================] - 1s 5ms/step - loss: 0.2848 - accuracy: 0.9065 - val_loss: 0.3608 - val_accuracy: 0.8820
Epoch 57/120
118/118 [==============================] - 1s 5ms/step - loss: 0.3067 - accuracy: 0.8927 - val_loss: 0.3675 - val_accuracy: 0.8792
Epoch 58/120
118/118 [==============================] - 1s 5ms/step - loss: 0.2730 - accuracy: 0.9045 - val_loss: 0.3574 - val_accuracy: 0.8812
Epoch 59/120
118/118 [==============================] - 1s 5ms/step - loss: 0.2826 - accuracy: 0.9007 - val_loss: 0.4271 - val_accuracy: 0.8524
Epoch 60/120
118/118 [==============================] - 1s 5ms/step - loss: 0.2765 - accuracy: 0.9017 - val_loss: 0.3440 - val_accuracy: 0.8916
Epoch 61/120
118/118 [==============================] - 1s 5ms/step - loss: 0.2842 - accuracy: 0.8999 - val_loss: 0.3906 - val_accuracy: 0.8784
Epoch 62/120
118/118 [==============================] - 1s 5ms/step - loss: 0.2784 - accuracy: 0.9023 - val_loss: 0.3500 - val_accuracy: 0.8820
Epoch 63/120
118/118 [==============================] - 1s 5ms/step - loss: 0.2672 - accuracy: 0.9116 - val_loss: 0.3312 - val_accuracy: 0.8944
Epoch 64/120
118/118 [==============================] - 1s 5ms/step - loss: 0.2862 - accuracy: 0.9007 - val_loss: 0.3581 - val_accuracy: 0.8836
Epoch 65/120
118/118 [==============================] - 1s 5ms/step - loss: 0.2607 - accuracy: 0.9109 - val_loss: 0.3774 - val_accuracy: 0.8792
Epoch 66/120
118/118 [==============================] - 1s 5ms/step - loss: 0.2660 - accuracy: 0.9072 - val_loss: 0.4456 - val_accuracy: 0.8464
Epoch 67/120
118/118 [==============================] - 1s 5ms/step - loss: 0.2649 - accuracy: 0.9080 - val_loss: 0.3357 - val_accuracy: 0.8884
Epoch 68/120
118/118 [==============================] - 1s 5ms/step - loss: 0.2606 - accuracy: 0.9071 - val_loss: 0.4327 - val_accuracy: 0.8592
Epoch 69/120
118/118 [==============================] - 1s 5ms/step - loss: 0.2770 - accuracy: 0.9009 - val_loss: 0.3650 - val_accuracy: 0.8720
Epoch 70/120
118/118 [==============================] - 1s 5ms/step - loss: 0.2619 - accuracy: 0.9081 - val_loss: 0.4054 - val_accuracy: 0.8668
Epoch 71/120
118/118 [==============================] - 1s 5ms/step - loss: 0.2462 - accuracy: 0.9173 - val_loss: 0.3382 - val_accuracy: 0.8904
Epoch 72/120
118/118 [==============================] - 1s 5ms/step - loss: 0.2339 - accuracy: 0.9204 - val_loss: 0.3617 - val_accuracy: 0.8780
Epoch 73/120
118/118 [==============================] - 1s 5ms/step - loss: 0.2617 - accuracy: 0.9091 - val_loss: 0.3710 - val_accuracy: 0.8852
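
Training stopped at epoch 73: validation loss last improved at epoch 63 (0.3312), so after 10 epochs of patience EarlyStopping halted training and restored those weights. When validation loss plateaus like this, lowering the learning rate on plateau is a common complement to early stopping; a hedged sketch of the extra callback (the factor and patience values are assumptions):

cb.append(tf.keras.callbacks.ReduceLROnPlateau(
    monitor='val_loss', factor=0.5, patience=5, min_lr=1e-6))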

Visualizing accuracy and loss, and saving the model

In [13]:
# summarize history for accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()

# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()

model.save('model_tomato.h5')
print('Weights saved.')
Weights saved.

Testing our model

In [14]:
loss, acc = model.evaluate(test_images, test_labels)  # evaluate returns [loss, accuracy]
32/32 [==============================] - 0s 2ms/step - loss: 0.3532 - accuracy: 0.8890
In [15]:
# decoding predictions and true labels back into strings
predicted_labels = decoder(model.predict(test_images))
test_labels = decoder(test_labels)
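
With both sets of labels back in string form, per-class metrics show which diseases the model confuses most; a minimal sketch using scikit-learn (already imported above for train_test_split):

from sklearn.metrics import classification_report
print(classification_report(test_labels, predicted_labels))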
In [16]:
# visualizing some of our results and counting mispredictions in the sample
row = 3
col = 4
fig, axes = plt.subplots(row, col, figsize=(16, 12))
c = 0
count = 0
for i in range(row):
    for j in range(col):
        axes[i][j].imshow(test_images[c])
        axes[i][j].set_title(f'Predicted: {predicted_labels[c]}', fontsize=14)
        axes[i][j].set_xlabel(f'Actual: {test_labels[c]}', fontsize=14)
        if predicted_labels[c] != test_labels[c]:
            count += 1
        c += 80

plt.tight_layout()
plt.show()
print(f'{count} of {row * col} sampled predictions were wrong')

DeepCC

In [17]:
!deepCC model_tomato.h5
[INFO]
Reading [keras model] 'model_tomato.h5'
[SUCCESS]
Saved 'model_tomato_deepC/model_tomato.onnx'
[INFO]
Reading [onnx model] 'model_tomato_deepC/model_tomato.onnx'
[INFO]
Model info:
  ir_vesion : 5
  doc       : 
[WARNING]
[ONNX]: graph-node conv2d's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: graph-node conv2d_1's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: graph-node conv2d_2's attribute auto_pad has no meaningful data.
[WARNING]
[ONNX]: terminal (input/output) conv2d_input's shape is less than 1. Changing it to 1.
[WARNING]
[ONNX]: terminal (input/output) dense_1's shape is less than 1. Changing it to 1.
WARN (GRAPH): found operator node with the same name (dense_1) as io node.
[INFO]
Running DNNC graph sanity check ...
[SUCCESS]
Passed sanity check.
[INFO]
Writing C++ file 'model_tomato_deepC/model_tomato.cpp'
[INFO]
deepSea model files are ready in 'model_tomato_deepC/' 
[RUNNING COMMAND]
g++ -std=c++11 -O3 -fno-rtti -fno-exceptions -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -isystem /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 "model_tomato_deepC/model_tomato.cpp" -D_AITS_MAIN -o "model_tomato_deepC/model_tomato.exe"
[RUNNING COMMAND]
size "model_tomato_deepC/model_tomato.exe"
   text	   data	    bss	    dec	    hex	filename
 530693	   3784	    760	 535237	  82ac5	model_tomato_deepC/model_tomato.exe
[SUCCESS]
Saved model as executable "model_tomato_deepC/model_tomato.exe"
In [ ]:
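As a final sanity check before deploying the compiled model, the saved Keras weights can be used for single-image inference. A minimal sketch; the image path is a placeholder assumption, and the preprocessing mirrors the training pipeline:

from tensorflow.keras.models import load_model

model = load_model('model_tomato.h5')
img = cv2.imread('leaf.jpg')                      # placeholder path (assumption)
img = cv2.resize(img, (64, 64))                   # model expects 64x64 inputs
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) / 255  # same preprocessing as training
pred = decoder(model.predict(img[np.newaxis, ...]))
print('Predicted class:', pred[0])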