Cainvas

pH-recognition

Credit: AITS Cainvas Community

Photo by Chris Gannon on Dribbble

Litmus paper is used to test whether a given solution is acidic or basic. Once dipped, the paper turns red, blue, or a shade in between, depending on the nature of the solution. Red indicates an acidic solution and blue indicates a basic one!

Here we train a model to predict the pH value from the RGB values of the litmus paper's color.

In [1]:
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras import models, optimizers, losses, layers, callbacks
import matplotlib.pyplot as plt

Dataset

Available on Kaggle, uploaded by Robert.

The dataset is a CSV file with 4 columns: 3 holding the RGB values of the paper's color and 1 holding the corresponding pH label (0-14).

In [2]:
df = pd.read_csv('https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/ph-data.csv')
df
Out[2]:
blue green red label
0 36 27 231 0
1 36 84 250 1
2 37 164 255 2
3 22 205 255 3
4 38 223 221 4
... ... ... ... ...
648 201 123 0 10
649 203 51 46 11
650 169 62 48 12
651 173 37 79 13
652 131 2 77 14

653 rows × 4 columns

In [3]:
df['label'].value_counts()
Out[3]:
14    44
12    44
11    44
10    44
9     44
8     44
7     44
6     44
5     44
4     44
3     44
2     44
1     44
13    43
0     38
Name: label, dtype: int64

This is a nearly balanced dataset: label 0 has 38 samples, label 13 has 43, and every other label has 44.
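Since matplotlib is already imported, a quick bar plot makes the near-balance visible (a small sketch, not part of the original notebook):

# Visualize the number of samples per pH label.
df['label'].value_counts().sort_index().plot(kind = 'bar')
plt.xlabel('pH label')
plt.ylabel('Number of samples')
plt.show()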

Normalization

Pixel values in the range 0-255 are scaled down to the range 0-1 for faster convergence.

In [4]:
df[['red','green','blue']] /= 255
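
As a quick sanity check (a sketch, not in the original notebook), we can confirm that all three channels now lie in the 0-1 range:

# Verify that the scaled channels fall within [0, 1].
print(df[['red', 'green', 'blue']].min().min(), df[['red', 'green', 'blue']].max().max())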

Train - val - test split

Defining the input and output columns.

In [5]:
# defining the input and output columns to separate the dataset in the later cells.

input_columns = df.columns.tolist()
input_columns.remove('label')

output_columns = ['label']

print("Number of input columns: ", len(input_columns))
#print("Input columns: ", ', '.join(input_columns))

print("Number of output columns: ", len(output_columns))
#print("Output columns: ", ', '.join(output_columns))
Number of input columns:  3
Number of output columns:  1
In [6]:
# Splitting into train, val and test set -- 80-10-10 split

# First, an 80-20 split
train_df, val_test_df = train_test_split(df, test_size = 0.2)

# Then split the 20% into half
val_df, test_df = train_test_split(val_test_df, test_size = 0.5)

print("Number of samples in...")
print("Training set: ", len(train_df))
print("Validation set: ", len(val_df))
print("Testing set: ", len(test_df))
Number of samples in...
Training set:  522
Validation set:  65
Testing set:  66
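
Note that the split above is random on every run; if a reproducible split is wanted (an optional sketch, not in the original notebook), a fixed random_state can be passed:

# Reproducible 80-10-10 split (optional variant).
train_df, val_test_df = train_test_split(df, test_size = 0.2, random_state = 42)
val_df, test_df = train_test_split(val_test_df, test_size = 0.5, random_state = 42)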
In [7]:
# Splitting into X (input) and y (output)

Xtrain, ytrain = np.array(train_df[input_columns]), np.array(train_df[output_columns])

Xval, yval = np.array(val_df[input_columns]), np.array(val_df[output_columns])

Xtest, ytest = np.array(test_df[input_columns]), np.array(test_df[output_columns])
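
A quick shape check (a sketch) confirms 3 input features and 1 output value per sample:

# Expect (522, 3) (522, 1) for the training set.
print(Xtrain.shape, ytrain.shape)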

Model

In [8]:
model = models.Sequential([
    layers.Dense(128, activation = 'relu', input_shape = Xtrain[0].shape),
    layers.Dense(64, activation = 'relu'),
    #layers.Dense(16, activation = 'relu'),
    layers.Dense(8, activation = 'relu'),
    layers.Dense(1)
])

cb = callbacks.EarlyStopping(patience = 10, restore_best_weights = True)
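
EarlyStopping with restore_best_weights = True halts training once the monitored metric stops improving for patience epochs and rolls the weights back to the best epoch seen. By default it monitors val_loss; spelled out explicitly, the equivalent callback would be (a sketch, same behaviour):

# Equivalent callback with the default monitor made explicit.
cb = callbacks.EarlyStopping(monitor = 'val_loss', patience = 10, restore_best_weights = True)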
In [9]:
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 128)               512       
_________________________________________________________________
dense_1 (Dense)              (None, 64)                8256      
_________________________________________________________________
dense_2 (Dense)              (None, 8)                 520       
_________________________________________________________________
dense_3 (Dense)              (None, 1)                 9         
=================================================================
Total params: 9,297
Trainable params: 9,297
Non-trainable params: 0
_________________________________________________________________
In [10]:
model.compile(optimizer = optimizers.Adam(0.001), loss = losses.MeanSquaredError(), metrics = ['mae'])

history = model.fit(Xtrain, ytrain, validation_data = (Xval, yval), epochs = 1024, callbacks = [cb])
Epoch 1/1024
17/17 [==============================] - 0s 12ms/step - loss: 61.7973 - mae: 6.6057 - val_loss: 60.0766 - val_mae: 6.6896
Epoch 2/1024
17/17 [==============================] - 0s 2ms/step - loss: 47.6637 - mae: 5.6371 - val_loss: 40.9958 - val_mae: 5.5114
Epoch 3/1024
17/17 [==============================] - 0s 2ms/step - loss: 29.8143 - mae: 4.5045 - val_loss: 20.5555 - val_mae: 3.8725
Epoch 4/1024
17/17 [==============================] - 0s 2ms/step - loss: 18.9430 - mae: 3.7179 - val_loss: 12.7931 - val_mae: 2.8593
Epoch 5/1024
17/17 [==============================] - 0s 2ms/step - loss: 13.0014 - mae: 2.9321 - val_loss: 9.1928 - val_mae: 2.3812
Epoch 6/1024
17/17 [==============================] - 0s 2ms/step - loss: 8.8822 - mae: 2.3437 - val_loss: 6.6088 - val_mae: 2.0251
Epoch 7/1024
17/17 [==============================] - 0s 2ms/step - loss: 6.5624 - mae: 2.0610 - val_loss: 5.7052 - val_mae: 1.9081
Epoch 8/1024
17/17 [==============================] - 0s 2ms/step - loss: 5.6330 - mae: 1.8805 - val_loss: 5.2629 - val_mae: 1.8092
Epoch 9/1024
17/17 [==============================] - 0s 2ms/step - loss: 5.1941 - mae: 1.7947 - val_loss: 5.0379 - val_mae: 1.7710
Epoch 10/1024
17/17 [==============================] - 0s 2ms/step - loss: 4.9549 - mae: 1.7651 - val_loss: 4.7679 - val_mae: 1.7026
Epoch 11/1024
17/17 [==============================] - 0s 2ms/step - loss: 4.6787 - mae: 1.6895 - val_loss: 4.4321 - val_mae: 1.6201
Epoch 12/1024
17/17 [==============================] - 0s 2ms/step - loss: 4.4344 - mae: 1.6431 - val_loss: 4.1849 - val_mae: 1.5630
Epoch 13/1024
17/17 [==============================] - 0s 2ms/step - loss: 4.2050 - mae: 1.5720 - val_loss: 3.9453 - val_mae: 1.5163
Epoch 14/1024
17/17 [==============================] - 0s 2ms/step - loss: 3.9136 - mae: 1.5366 - val_loss: 3.7233 - val_mae: 1.4535
Epoch 15/1024
17/17 [==============================] - 0s 2ms/step - loss: 3.6647 - mae: 1.4654 - val_loss: 3.4590 - val_mae: 1.3944
Epoch 16/1024
17/17 [==============================] - 0s 2ms/step - loss: 3.4426 - mae: 1.4002 - val_loss: 3.2722 - val_mae: 1.3611
Epoch 17/1024
17/17 [==============================] - 0s 2ms/step - loss: 3.2336 - mae: 1.3593 - val_loss: 3.1021 - val_mae: 1.3124
Epoch 18/1024
17/17 [==============================] - 0s 2ms/step - loss: 3.0625 - mae: 1.3205 - val_loss: 2.9194 - val_mae: 1.2658
Epoch 19/1024
17/17 [==============================] - 0s 2ms/step - loss: 2.9019 - mae: 1.2635 - val_loss: 2.7334 - val_mae: 1.2146
Epoch 20/1024
17/17 [==============================] - 0s 2ms/step - loss: 2.8312 - mae: 1.2891 - val_loss: 2.6256 - val_mae: 1.1767
Epoch 21/1024
17/17 [==============================] - 0s 2ms/step - loss: 2.6620 - mae: 1.2192 - val_loss: 2.5082 - val_mae: 1.1778
Epoch 22/1024
17/17 [==============================] - 0s 2ms/step - loss: 2.5102 - mae: 1.1681 - val_loss: 2.3441 - val_mae: 1.1152
Epoch 23/1024
17/17 [==============================] - 0s 2ms/step - loss: 2.4004 - mae: 1.1363 - val_loss: 2.2467 - val_mae: 1.1168
Epoch 24/1024
17/17 [==============================] - 0s 2ms/step - loss: 2.3122 - mae: 1.1208 - val_loss: 2.0943 - val_mae: 1.0396
Epoch 25/1024
17/17 [==============================] - 0s 2ms/step - loss: 2.2118 - mae: 1.0845 - val_loss: 2.0822 - val_mae: 1.0979
Epoch 26/1024
17/17 [==============================] - 0s 2ms/step - loss: 2.1365 - mae: 1.0714 - val_loss: 1.9390 - val_mae: 1.0110
Epoch 27/1024
17/17 [==============================] - 0s 2ms/step - loss: 2.0564 - mae: 1.0455 - val_loss: 1.8329 - val_mae: 0.9850
Epoch 28/1024
17/17 [==============================] - 0s 2ms/step - loss: 1.9640 - mae: 1.0224 - val_loss: 1.7800 - val_mae: 0.9927
Epoch 29/1024
17/17 [==============================] - 0s 2ms/step - loss: 1.8986 - mae: 0.9909 - val_loss: 1.6721 - val_mae: 0.9561
Epoch 30/1024
17/17 [==============================] - 0s 2ms/step - loss: 1.8510 - mae: 0.9790 - val_loss: 1.6607 - val_mae: 0.9857
Epoch 31/1024
17/17 [==============================] - 0s 2ms/step - loss: 1.7886 - mae: 0.9531 - val_loss: 1.5419 - val_mae: 0.9406
Epoch 32/1024
17/17 [==============================] - 0s 2ms/step - loss: 1.7206 - mae: 0.9637 - val_loss: 1.4383 - val_mae: 0.8988
Epoch 33/1024
17/17 [==============================] - 0s 2ms/step - loss: 1.6597 - mae: 0.9144 - val_loss: 1.4534 - val_mae: 0.9266
Epoch 34/1024
17/17 [==============================] - 0s 2ms/step - loss: 1.5967 - mae: 0.9198 - val_loss: 1.2943 - val_mae: 0.8503
Epoch 35/1024
17/17 [==============================] - 0s 2ms/step - loss: 1.5571 - mae: 0.8875 - val_loss: 1.2740 - val_mae: 0.8617
Epoch 36/1024
17/17 [==============================] - 0s 2ms/step - loss: 1.5319 - mae: 0.8835 - val_loss: 1.2492 - val_mae: 0.8588
Epoch 37/1024
17/17 [==============================] - 0s 2ms/step - loss: 1.4769 - mae: 0.8740 - val_loss: 1.1564 - val_mae: 0.8071
Epoch 38/1024
17/17 [==============================] - 0s 2ms/step - loss: 1.4193 - mae: 0.8412 - val_loss: 1.1268 - val_mae: 0.8203
Epoch 39/1024
17/17 [==============================] - 0s 2ms/step - loss: 1.3726 - mae: 0.8313 - val_loss: 1.0687 - val_mae: 0.7897
Epoch 40/1024
17/17 [==============================] - 0s 2ms/step - loss: 1.3281 - mae: 0.8204 - val_loss: 1.0538 - val_mae: 0.8021
Epoch 41/1024
17/17 [==============================] - 0s 2ms/step - loss: 1.3149 - mae: 0.8113 - val_loss: 1.0452 - val_mae: 0.7968
Epoch 42/1024
17/17 [==============================] - 0s 2ms/step - loss: 1.3212 - mae: 0.8185 - val_loss: 1.0515 - val_mae: 0.7800
Epoch 43/1024
17/17 [==============================] - 0s 2ms/step - loss: 1.2505 - mae: 0.7693 - val_loss: 0.9920 - val_mae: 0.7775
Epoch 44/1024
17/17 [==============================] - 0s 2ms/step - loss: 1.2028 - mae: 0.7664 - val_loss: 0.9443 - val_mae: 0.7574
Epoch 45/1024
17/17 [==============================] - 0s 2ms/step - loss: 1.1825 - mae: 0.7668 - val_loss: 0.9005 - val_mae: 0.7136
Epoch 46/1024
17/17 [==============================] - 0s 2ms/step - loss: 1.1623 - mae: 0.7636 - val_loss: 0.9136 - val_mae: 0.7360
Epoch 47/1024
17/17 [==============================] - 0s 2ms/step - loss: 1.2010 - mae: 0.7552 - val_loss: 0.8969 - val_mae: 0.7507
Epoch 48/1024
17/17 [==============================] - 0s 2ms/step - loss: 1.1342 - mae: 0.7439 - val_loss: 0.8237 - val_mae: 0.6825
Epoch 49/1024
17/17 [==============================] - 0s 2ms/step - loss: 1.0992 - mae: 0.7224 - val_loss: 0.8435 - val_mae: 0.6986
Epoch 50/1024
17/17 [==============================] - 0s 2ms/step - loss: 1.0807 - mae: 0.7388 - val_loss: 0.7796 - val_mae: 0.6478
Epoch 51/1024
17/17 [==============================] - 0s 2ms/step - loss: 1.0647 - mae: 0.7064 - val_loss: 0.8486 - val_mae: 0.7189
Epoch 52/1024
17/17 [==============================] - 0s 2ms/step - loss: 1.0466 - mae: 0.6926 - val_loss: 0.8126 - val_mae: 0.6955
Epoch 53/1024
17/17 [==============================] - 0s 2ms/step - loss: 1.0404 - mae: 0.7206 - val_loss: 0.7697 - val_mae: 0.6516
Epoch 54/1024
17/17 [==============================] - 0s 2ms/step - loss: 1.0306 - mae: 0.6945 - val_loss: 0.8087 - val_mae: 0.6968
Epoch 55/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.9913 - mae: 0.6784 - val_loss: 0.7573 - val_mae: 0.6537
Epoch 56/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.9670 - mae: 0.6743 - val_loss: 0.7307 - val_mae: 0.6445
Epoch 57/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.9653 - mae: 0.6615 - val_loss: 0.7322 - val_mae: 0.6246
Epoch 58/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.9734 - mae: 0.6847 - val_loss: 0.7014 - val_mae: 0.5967
Epoch 59/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.9480 - mae: 0.6714 - val_loss: 0.6982 - val_mae: 0.6142
Epoch 60/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.9256 - mae: 0.6412 - val_loss: 0.7391 - val_mae: 0.6499
Epoch 61/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.9345 - mae: 0.6556 - val_loss: 0.7120 - val_mae: 0.6268
Epoch 62/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.9055 - mae: 0.6367 - val_loss: 0.6904 - val_mae: 0.6218
Epoch 63/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.8899 - mae: 0.6326 - val_loss: 0.7145 - val_mae: 0.6389
Epoch 64/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.9266 - mae: 0.6640 - val_loss: 0.6723 - val_mae: 0.5564
Epoch 65/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.8980 - mae: 0.6464 - val_loss: 0.6709 - val_mae: 0.5735
Epoch 66/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.9186 - mae: 0.6353 - val_loss: 0.6797 - val_mae: 0.5962
Epoch 67/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.8847 - mae: 0.6246 - val_loss: 0.6755 - val_mae: 0.6009
Epoch 68/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.8804 - mae: 0.6211 - val_loss: 0.7491 - val_mae: 0.6902
Epoch 69/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.8663 - mae: 0.6370 - val_loss: 0.6666 - val_mae: 0.5906
Epoch 70/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.8500 - mae: 0.6152 - val_loss: 0.6656 - val_mae: 0.5742
Epoch 71/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.8339 - mae: 0.6099 - val_loss: 0.6747 - val_mae: 0.6245
Epoch 72/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.8504 - mae: 0.6095 - val_loss: 0.6763 - val_mae: 0.6075
Epoch 73/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.8443 - mae: 0.6144 - val_loss: 0.6335 - val_mae: 0.5489
Epoch 74/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.8305 - mae: 0.6169 - val_loss: 0.6529 - val_mae: 0.5890
Epoch 75/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.8584 - mae: 0.6102 - val_loss: 0.6905 - val_mae: 0.6227
Epoch 76/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.8196 - mae: 0.6019 - val_loss: 0.6305 - val_mae: 0.5575
Epoch 77/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.8248 - mae: 0.6070 - val_loss: 0.6313 - val_mae: 0.5671
Epoch 78/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.8337 - mae: 0.5982 - val_loss: 0.6319 - val_mae: 0.5493
Epoch 79/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.8144 - mae: 0.5909 - val_loss: 0.6800 - val_mae: 0.6074
Epoch 80/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.8373 - mae: 0.6296 - val_loss: 0.6329 - val_mae: 0.5384
Epoch 81/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.8253 - mae: 0.6117 - val_loss: 0.6163 - val_mae: 0.5546
Epoch 82/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.8124 - mae: 0.6018 - val_loss: 0.6255 - val_mae: 0.5613
Epoch 83/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.8113 - mae: 0.5954 - val_loss: 0.6337 - val_mae: 0.5952
Epoch 84/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7957 - mae: 0.5801 - val_loss: 0.6795 - val_mae: 0.6482
Epoch 85/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.8153 - mae: 0.6121 - val_loss: 0.6460 - val_mae: 0.5807
Epoch 86/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.8114 - mae: 0.6013 - val_loss: 0.6033 - val_mae: 0.5470
Epoch 87/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.8002 - mae: 0.6149 - val_loss: 0.6376 - val_mae: 0.5960
Epoch 88/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7853 - mae: 0.5840 - val_loss: 0.6289 - val_mae: 0.5690
Epoch 89/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7745 - mae: 0.5877 - val_loss: 0.5977 - val_mae: 0.5081
Epoch 90/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7893 - mae: 0.5710 - val_loss: 0.7013 - val_mae: 0.6523
Epoch 91/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.8098 - mae: 0.5791 - val_loss: 0.6200 - val_mae: 0.5821
Epoch 92/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.8050 - mae: 0.6047 - val_loss: 0.5971 - val_mae: 0.5493
Epoch 93/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7792 - mae: 0.5932 - val_loss: 0.5826 - val_mae: 0.5327
Epoch 94/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7689 - mae: 0.5803 - val_loss: 0.6038 - val_mae: 0.5516
Epoch 95/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7600 - mae: 0.5664 - val_loss: 0.6085 - val_mae: 0.5611
Epoch 96/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7800 - mae: 0.5781 - val_loss: 0.6097 - val_mae: 0.5489
Epoch 97/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7718 - mae: 0.5742 - val_loss: 0.5827 - val_mae: 0.5276
Epoch 98/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7767 - mae: 0.5853 - val_loss: 0.6323 - val_mae: 0.6218
Epoch 99/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7607 - mae: 0.5815 - val_loss: 0.5844 - val_mae: 0.5246
Epoch 100/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7699 - mae: 0.5917 - val_loss: 0.5998 - val_mae: 0.5618
Epoch 101/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7626 - mae: 0.5769 - val_loss: 0.5963 - val_mae: 0.5377
Epoch 102/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7623 - mae: 0.5640 - val_loss: 0.6031 - val_mae: 0.5436
Epoch 103/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7639 - mae: 0.5794 - val_loss: 0.5798 - val_mae: 0.5586
Epoch 104/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7743 - mae: 0.5792 - val_loss: 0.6192 - val_mae: 0.5593
Epoch 105/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.8187 - mae: 0.6181 - val_loss: 0.5746 - val_mae: 0.5209
Epoch 106/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7494 - mae: 0.5738 - val_loss: 0.5882 - val_mae: 0.5312
Epoch 107/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7625 - mae: 0.5865 - val_loss: 0.5587 - val_mae: 0.5239
Epoch 108/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7403 - mae: 0.5626 - val_loss: 0.5978 - val_mae: 0.5941
Epoch 109/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7448 - mae: 0.5723 - val_loss: 0.5716 - val_mae: 0.5274
Epoch 110/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7462 - mae: 0.5969 - val_loss: 0.5709 - val_mae: 0.5047
Epoch 111/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7668 - mae: 0.5658 - val_loss: 0.5950 - val_mae: 0.5638
Epoch 112/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7452 - mae: 0.5902 - val_loss: 0.5683 - val_mae: 0.5181
Epoch 113/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7459 - mae: 0.5729 - val_loss: 0.5936 - val_mae: 0.5442
Epoch 114/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7427 - mae: 0.5814 - val_loss: 0.5716 - val_mae: 0.5396
Epoch 115/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7231 - mae: 0.5817 - val_loss: 0.5698 - val_mae: 0.5168
Epoch 116/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7225 - mae: 0.5521 - val_loss: 0.5821 - val_mae: 0.5659
Epoch 117/1024
17/17 [==============================] - 0s 2ms/step - loss: 0.7820 - mae: 0.5928 - val_loss: 0.6177 - val_mae: 0.5795
In [11]:
model.evaluate(Xtest, ytest)
3/3 [==============================] - 0s 895us/step - loss: 0.7148 - mae: 0.5659
Out[11]:
[0.7148191332817078, 0.5658571124076843]
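
Training stopped early at epoch 117: the best val_loss (0.5587) came at epoch 107, and after 10 epochs without improvement the callback restored those weights. The test MAE is about 0.57 pH units. Since the labels are integer pH values, another way to read this result (a sketch, not in the original notebook) is to round the regression outputs and count exact matches:

# Round predictions to the nearest integer pH and compare with the labels.
pred = model.predict(Xtest).reshape(-1)
rounded = np.clip(np.rint(pred), 0, 14)
print("Exact-match accuracy after rounding: ", np.mean(rounded == ytest.reshape(-1)))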

Prediction

In [12]:
pd.DataFrame({'True': ytest.reshape(-1)[:10],
              'Predicted': model.predict(Xtest).reshape(-1)[:10]})
Out[12]:
True Predicted
0 1 2.051923
1 6 3.055750
2 1 0.288311
3 9 7.290923
4 12 13.542187
5 10 10.409708
6 11 11.587477
7 1 1.889856
8 6 6.332875
9 2 2.140997
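
To predict the pH of a new reading, note that the model expects the dataset's column order (blue, green, red) with each channel scaled to 0-1. A minimal sketch using the first row of the dataset (true pH 0):

# One new sample: blue = 36, green = 27, red = 231 (row 0 of the dataset).
sample = np.array([[36, 27, 231]]) / 255
print("Predicted pH: ", model.predict(sample)[0][0])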

Plotting the metrics

In [13]:
def plot(history, variable, variable2):
    # Plot the training and validation curves of a metric pair
    # recorded in the Keras History object.
    plt.plot(range(len(history[variable])), history[variable])
    plt.plot(range(len(history[variable2])), history[variable2])
    plt.legend([variable, variable2])
    plt.title(variable)
    plt.xlabel('epoch')
In [14]:
plot(history.history, "loss", "val_loss")
In [15]:
plot(history.history, "mae", "val_mae")

deepC

In [16]:
model.save('pH.h5')

!deepCC pH.h5
[INFO]
Reading [keras model] 'pH.h5'
[SUCCESS]
Saved 'pH_deepC/pH.onnx'
[INFO]
Reading [onnx model] 'pH_deepC/pH.onnx'
[INFO]
Model info:
  ir_vesion : 4
  doc       : 
[WARNING]
[ONNX]: terminal (input/output) dense_input's shape is less than 1. Changing it to 1.
[WARNING]
[ONNX]: terminal (input/output) dense_3's shape is less than 1. Changing it to 1.
WARN (GRAPH): found operator node with the same name (dense_3) as io node.
[INFO]
Running DNNC graph sanity check ...
[SUCCESS]
Passed sanity check.
[INFO]
Writing C++ file 'pH_deepC/pH.cpp'
[INFO]
deepSea model files are ready in 'pH_deepC/' 
[RUNNING COMMAND]
g++ -std=c++11 -O3 -fno-rtti -fno-exceptions -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -isystem /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 "pH_deepC/pH.cpp" -D_AITS_MAIN -o "pH_deepC/pH.exe"
[RUNNING COMMAND]
size "pH_deepC/pH.exe"
   text	   data	    bss	    dec	    hex	filename
 155077	   2736	    760	 158573	  26b6d	pH_deepC/pH.exe
[SUCCESS]
Saved model as executable "pH_deepC/pH.exe"
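
The compiled executable pH_deepC/pH.exe is ready for on-device deployment. On the Python side, the saved Keras model can be reloaded later for further experiments (a minimal sketch):

# Reload the saved Keras model for further use.
reloaded = models.load_model('pH.h5')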