Flower Classification Model

Credit: AITS Cainvas Community

Photo by ILLO on Dribbble

Importing Libraries & Dataset

In [1]:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import csv
import imageio
import os, shutil
from tensorflow import keras
In [2]:
!wget https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/flower_data.zip
--2021-07-13 09:49:27--  https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/flower_data.zip
Resolving cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)... 52.219.158.15
Connecting to cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)|52.219.158.15|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 26671531 (25M) [application/zip]
Saving to: ‘flower_data.zip’

flower_data.zip     100%[===================>]  25.44M  97.5MB/s    in 0.3s    

2021-07-13 09:49:27 (97.5 MB/s) - ‘flower_data.zip’ saved [26671531/26671531]

In [3]:
!unzip -qo flower_data.zip
In [4]:
label_1 = "flower_data/train/1"
dirs_label_1 = os.listdir(label_1)
In [5]:
img = 'image_06736.jpg'
label_path = label_1+'/'+img
label_path
Out[5]:
'flower_data/train/1/image_06736.jpg'

Fetch an Image from a Path

In [6]:
def get_image(path):
    # display a flower image loaded from the given path
    flower = imageio.imread(path)
    plt.imshow(flower)
In [7]:
get_image(label_path)

Build a DataFrame Containing Image Paths and Labels

In [8]:
#Create an empty DataFrame
df = pd.DataFrame(columns=['image_abs_path','image_labels'])
In [9]:
df
Out[9]:
image_abs_path image_labels
In [10]:
#Flower Name Dictionary
flower_names = {"21": "fire lily", "3": "canterbury bells", "45": "bolero deep blue", "1": "pink primrose", 
 "34": "mexican aster", "27": "prince of wales feathers", "7": "moon orchid", "16": "globe-flower",
 "25": "grape hyacinth", "26": "corn poppy", "79": "toad lily", "39": "siam tulip", "24": "red ginger",
 "67": "spring crocus", "35": "alpine sea holly", "32": "garden phlox", "10": "globe thistle", 
 "6": "tiger lily", "93": "ball moss", "33": "love in the mist", "9": "monkshood", 
 "102": "blackberry lily", "14": "spear thistle", "19": "balloon flower", 
 "100": "blanket flower", "13": "king protea", "49": "oxeye daisy", "15": "yellow iris", 
 "61": "cautleya spicata", "31": "carnation", "64": "silverbush", "68": "bearded iris", 
 "63": "black-eyed susan", "69": "windflower", "62": "japanese anemone", "20": "giant white arum lily",
 "38": "great masterwort", "4": "sweet pea", "86": "tree mallow", "101": "trumpet creeper", 
 "42": "daffodil", "22": "pincushion flower", "2": "hard-leaved pocket orchid", "54": "sunflower", 
 "66": "osteospermum", "70": "tree poppy", "85": "desert-rose", "99": "bromelia", "87": "magnolia", 
 "5": "english marigold", "92": "bee balm", "28": "stemless gentian", "97": "mallow", "57": "gaura",
 "40": "lenten rose", "47": "marigold", "59": "orange dahlia", "48": "buttercup", "55": "pelargonium",
 "36": "ruby-lipped cattleya", "91": "hippeastrum", "29": "artichoke", "71": "gazania", 
 "90": "canna lily", "18": "peruvian lily", "98": "mexican petunia", "8": "bird of paradise", 
 "30": "sweet william", "17": "purple coneflower", "52": "wild pansy", "84": "columbine", 
 "12": "colt's foot", "11": "snapdragon", "96": "camellia", "23": "fritillary", "50": "common dandelion", 
 "44": "poinsettia", "53": "primula", "72": "azalea", "65": "californian poppy", "80": "anthurium",
 "76": "morning glory", "37": "cape flower", "56": "bishop of llandaff", "60": "pink-yellow dahlia", 
 "82": "clematis", "58": "geranium", "75": "thorn apple", "41": "barbeton daisy", "95": "bougainvillea",
 "43": "sword lily", "83": "hibiscus", "78": "lotus lotus", "88": "cyclamen",
 "94": "foxglove", "81": "frangipani", "74": "rose", "89": "watercress", "73": "water lily",
 "46": "wallflower", "77": "passion flower", "51": "petunia"}
In [11]:
#Select the first 10 flower classes to be used for classification
few_flowers = {}
for i in range(1,11):
    few_flowers[str(i)]=flower_names[str(i)]
few_flowers
Out[11]:
{'1': 'pink primrose',
 '2': 'hard-leaved pocket orchid',
 '3': 'canterbury bells',
 '4': 'sweet pea',
 '5': 'english marigold',
 '6': 'tiger lily',
 '7': 'moon orchid',
 '8': 'bird of paradise',
 '9': 'monkshood',
 '10': 'globe thistle'}
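
The loop above can also be written as a single dict comprehension; an equivalent one-line sketch:

#take the first ten labels from flower_names in one expression
few_flowers = {str(i): flower_names[str(i)] for i in range(1, 11)}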
In [12]:
few_flowers['5']
Out[12]:
'english marigold'
In [13]:
#Function that builds a dataframe from a folder path and a number of label subfolders.
def getdata(folder_path,num_subfolders):
    flowers = pd.DataFrame(columns=['image_abs_path','image_labels'])
    for label in range(1,num_subfolders+1):
        label_i = folder_path+"/"+str(label)
        #read the directory listing for this label
        dirs_label_i = os.listdir(label_i)
        for image in dirs_label_i:
            #build the image path
            flower_i = os.path.join(label_i,image)
            #append the path and its flower name to the dataframe
            flowers = flowers.append({'image_abs_path':flower_i,
                            'image_labels':flower_names[str(label)]},
                           ignore_index=True)
    return flowers
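
Note that DataFrame.append is deprecated (and was removed in pandas 2.0). A sketch of the same logic that collects rows in a list and builds the frame once (the name getdata_fast is hypothetical):

def getdata_fast(folder_path, num_subfolders):
    #collect rows first, then build the DataFrame in a single call
    rows = []
    for label in range(1, num_subfolders + 1):
        label_dir = os.path.join(folder_path, str(label))
        for image in os.listdir(label_dir):
            rows.append({'image_abs_path': os.path.join(label_dir, image),
                         'image_labels': flower_names[str(label)]})
    return pd.DataFrame(rows, columns=['image_abs_path', 'image_labels'])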
In [14]:
#Create the training set
path = "flower_data/train"
num_folders = 10

train = getdata(folder_path=path,num_subfolders=num_folders)
In [15]:
train
Out[15]:
image_abs_path image_labels
0 flower_data/train/1/image_06734.jpg pink primrose
1 flower_data/train/1/image_06736.jpg pink primrose
2 flower_data/train/1/image_06741.jpg pink primrose
3 flower_data/train/1/image_06761.jpg pink primrose
4 flower_data/train/1/image_06742.jpg pink primrose
... ... ...
291 flower_data/train/10/image_07114.jpg globe thistle
292 flower_data/train/10/image_07093.jpg globe thistle
293 flower_data/train/10/image_07115.jpg globe thistle
294 flower_data/train/10/image_07108.jpg globe thistle
295 flower_data/train/10/image_07110.jpg globe thistle

296 rows × 2 columns

In [16]:
#Create the validation set
valid_path = "flower_data/valid"

valid = getdata(folder_path=valid_path,num_subfolders=num_folders)
In [17]:
valid
Out[17]:
image_abs_path image_labels
0 flower_data/valid/1/image_06756.jpg pink primrose
1 flower_data/valid/1/image_06755.jpg pink primrose
2 flower_data/valid/1/image_06758.jpg pink primrose
3 flower_data/valid/1/image_06765.jpg pink primrose
4 flower_data/valid/1/image_06769.jpg pink primrose
5 flower_data/valid/1/image_06739.jpg pink primrose
6 flower_data/valid/1/image_06763.jpg pink primrose
7 flower_data/valid/1/image_06749.jpg pink primrose
8 flower_data/valid/2/image_05101.jpg hard-leaved pocket orchid
9 flower_data/valid/2/image_05142.jpg hard-leaved pocket orchid
10 flower_data/valid/2/image_05124.jpg hard-leaved pocket orchid
11 flower_data/valid/2/image_05136.jpg hard-leaved pocket orchid
12 flower_data/valid/2/image_05094.jpg hard-leaved pocket orchid
13 flower_data/valid/2/image_05137.jpg hard-leaved pocket orchid
14 flower_data/valid/3/image_06621.jpg canterbury bells
15 flower_data/valid/3/image_06631.jpg canterbury bells
16 flower_data/valid/4/image_05680.jpg sweet pea
17 flower_data/valid/4/image_05681.jpg sweet pea
18 flower_data/valid/4/image_05638.jpg sweet pea
19 flower_data/valid/4/image_05660.jpg sweet pea
20 flower_data/valid/4/image_05657.jpg sweet pea
21 flower_data/valid/4/image_05677.jpg sweet pea
22 flower_data/valid/5/image_05164.jpg english marigold
23 flower_data/valid/5/image_05199.jpg english marigold
24 flower_data/valid/5/image_05209.jpg english marigold
25 flower_data/valid/5/image_05196.jpg english marigold
26 flower_data/valid/5/image_05168.jpg english marigold
27 flower_data/valid/5/image_05188.jpg english marigold
28 flower_data/valid/5/image_05192.jpg english marigold
29 flower_data/valid/6/image_08105.jpg tiger lily
30 flower_data/valid/7/image_07216.jpg moon orchid
31 flower_data/valid/8/image_03342.jpg bird of paradise
32 flower_data/valid/8/image_03349.jpg bird of paradise
33 flower_data/valid/8/image_03366.jpg bird of paradise
34 flower_data/valid/8/image_03313.jpg bird of paradise
35 flower_data/valid/8/image_03330.jpg bird of paradise
36 flower_data/valid/9/image_06414.jpg monkshood
37 flower_data/valid/9/image_06398.jpg monkshood
38 flower_data/valid/9/image_06420.jpg monkshood
39 flower_data/valid/10/image_07107.jpg globe thistle
40 flower_data/valid/10/image_07094.jpg globe thistle
41 flower_data/valid/10/image_07102.jpg globe thistle
42 flower_data/valid/10/image_07101.jpg globe thistle

Visualise Images

In [18]:
def get_n_images(n,df,label):
    #show the first n images for a label in a 2-row grid (n should be even)
    subset = df[df["image_labels"]==label]
    print(len(subset))
    m = n // 2   #subplot indices must be integers
    plt.figure(figsize=(12, 6))
    for i, path in enumerate(subset['image_abs_path'][0:n]):
        plt.subplot(2, m, i+1)
        get_image(path)
    plt.tight_layout()
    plt.show()
In [19]:
get_n_images(10,train,"english marigold")
27

Check the Proportion of Flowers

In [20]:
def plotHbar(df,flower_names):
    #plot a horizontal bar chart of image counts per flower
    numbers = []
    flowers = list(flower_names.values())
    for i in flowers:
        numbers.append(len(df[df['image_labels']==i]))
    plt.figure(figsize=(12,20))
    plt.barh(flowers,numbers,align='center',color='green')
    plt.title("Flower Counts")
    plt.show()
In [21]:
plotHbar(train,few_flowers)
In [22]:
plotHbar(valid,few_flowers)
In [23]:
#fetch the (key, name) pair for a given flower name, if present
def get_key(val,flower_names):
    for key, value in flower_names.items():
        if val == value:
            return key,value
In [24]:
sample = ['english marigold','rose']
for flower in sample:
    print(get_key(flower,few_flowers))
('5', 'english marigold')
None
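
Since get_key scans the whole dictionary on every call, inverting the mapping once gives constant-time lookups; a small sketch (name_to_key is a hypothetical name):

#invert the mapping once: flower name -> label key
name_to_key = {v: k for k, v in few_flowers.items()}
name_to_key.get('english marigold')   # '5'
name_to_key.get('rose')               # None -- not in the 10-class subset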

Data Preprocessing Using ImageDataGenerator

In [25]:
def datapreprocessing(dataframe,bsize):
    from tensorflow.keras.preprocessing.image import ImageDataGenerator
    
    train_gen = ImageDataGenerator(rescale=1.0/255)

    train_generator = train_gen.flow_from_dataframe(
        dataframe,
        x_col=dataframe.columns[0],
        y_col=dataframe.columns[1],
        target_size=(150,150),
        batch_size=bsize,
        color_mode="rgb",
        shuffle=True,
        class_mode='categorical')
    
    return train_generator

def datapreprocessing_aug(dataframe,bsize):
    from tensorflow.keras.preprocessing.image import ImageDataGenerator
    
    train_gen = ImageDataGenerator(zoom_range=0.5,
                                   rescale=1.0/255,
                                   horizontal_flip=True,
                                   rotation_range=40,
                                  )

    train_generator = train_gen.flow_from_dataframe(
        dataframe,
        x_col=dataframe.columns[0],
        y_col=dataframe.columns[1],
        target_size=(150,150),
        batch_size=bsize,
        color_mode="rgb",
        shuffle=True,
        class_mode='categorical')
    return train_generator
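
The two helpers above differ only in the ImageDataGenerator arguments. A single parameterised version could replace both; a sketch (the augment flag and function name are hypothetical):

def datapreprocessing_unified(dataframe, bsize, augment=False):
    from tensorflow.keras.preprocessing.image import ImageDataGenerator
    #augmentation settings are applied only when requested
    aug_kwargs = dict(zoom_range=0.5, horizontal_flip=True, rotation_range=40) if augment else {}
    gen = ImageDataGenerator(rescale=1.0/255, **aug_kwargs)
    return gen.flow_from_dataframe(
        dataframe,
        x_col=dataframe.columns[0],
        y_col=dataframe.columns[1],
        target_size=(150,150),
        batch_size=bsize,
        color_mode="rgb",
        shuffle=True,
        class_mode='categorical')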
In [26]:
#Create the generators used for training and validation
train_generated = datapreprocessing(train,bsize=1) 
valid_generated = datapreprocessing(valid,bsize=1)
train_generated_aug = datapreprocessing_aug(train,bsize=1)
valid_generated_aug = datapreprocessing_aug(valid,bsize=1)
Found 296 validated image filenames belonging to 10 classes.
Found 43 validated image filenames belonging to 10 classes.
Found 296 validated image filenames belonging to 10 classes.
Found 43 validated image filenames belonging to 10 classes.
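
A quick sanity check of what the generators yield: with bsize=1, each batch should be a single 150x150x3 image with a 10-way one-hot label (a sketch; note that calling next advances the generator):

x_batch, y_batch = next(train_generated)
print(x_batch.shape)   # expected: (1, 150, 150, 3)
print(y_batch.shape)   # expected: (1, 10)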

Visualise Images Present in the Generator

In [27]:
def visualize_gen(train_generator):
    #visualise one image from each of ten batches drawn from the generator
    plt.figure(figsize=(12, 6))
    for i in range(0, 10):
        plt.subplot(2, 5, i+1)
        X_batch, Y_batch = next(train_generator)
        image = X_batch[0]
        plt.axis("off")
        plt.imshow((image*255).astype(np.uint8))
    plt.tight_layout()
    plt.show()
In [28]:
visualize_gen(train_generated)
In [29]:
visualize_gen(train_generated_aug)
In [30]:
input_shape = (150,150,3)

Building & Compiling Model Architecture

In [31]:
def imageclf(input_shape):
    from tensorflow import keras as ks
    from tensorflow.keras import regularizers
    model = ks.models.Sequential()
    #convolutional feature extractor followed by a regularised dense head
    model.add(ks.layers.Conv2D(8,(3,3),
                               strides=6,
                               activation="elu",
                               padding='same',
                               name="layer1",
                               input_shape=input_shape))
    
    model.add(ks.layers.Conv2D(16,(3,3),strides=6,padding="same",activation="elu",name="layer2"))
    model.add(ks.layers.Conv2D(16,(3,3),padding="same",activation="elu",name="layer3"))
    model.add(ks.layers.Conv2D(16,(3,3),padding="same",activation="elu",name="layer4"))
    model.add(ks.layers.Conv2D(16,(3,3),padding="same",activation="elu",name="layer5"))
    
    model.add(ks.layers.Flatten())
    model.add(ks.layers.Dense(20,activation="elu",
                              kernel_regularizer=regularizers.l1_l2(l1=1e-4, l2=1e-5),
                              bias_regularizer=regularizers.l2(1e-4),
                              activity_regularizer=regularizers.l2(1e-5),
                              name="layer6"))
    
    model.add(ks.layers.Dropout(0.4))
    model.add(ks.layers.Dense(10,activation="softmax",
                              name="output"))#10 classes 
    model.summary()
    
    return model
In [32]:
def compileModel(model,train_generator,valid_generator,epchs,lr=0.0001):
    #compiles and trains the model, then plots the accuracy curves
    from tensorflow import keras as ks
    opt = ks.optimizers.Adam(learning_rate=lr)
    callback = ks.callbacks.EarlyStopping(monitor="val_loss",
                                          patience=10,
                                          verbose=2)
    model.compile(loss="categorical_crossentropy",
                      optimizer=opt,
                      metrics=["accuracy"])
    history = model.fit(train_generator,
                        epochs=epchs,
                        callbacks=[callback],
                        validation_data=valid_generator)
    #Visualise curves
    plt.plot(history.history['accuracy'], label='train_acc')
    plt.plot(history.history['val_accuracy'], label='valid_acc')

    plt.title('lrate='+str(lr), pad=-50)
    plt.legend()
    plt.grid(True)
    return model,history
In [33]:
model = imageclf(input_shape=input_shape)
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
layer1 (Conv2D)              (None, 25, 25, 8)         224       
_________________________________________________________________
layer2 (Conv2D)              (None, 5, 5, 16)          1168      
_________________________________________________________________
layer3 (Conv2D)              (None, 5, 5, 16)          2320      
_________________________________________________________________
layer4 (Conv2D)              (None, 5, 5, 16)          2320      
_________________________________________________________________
layer5 (Conv2D)              (None, 5, 5, 16)          2320      
_________________________________________________________________
flatten (Flatten)            (None, 400)               0         
_________________________________________________________________
layer6 (Dense)               (None, 20)                8020      
_________________________________________________________________
dropout (Dropout)            (None, 20)                0         
_________________________________________________________________
output (Dense)               (None, 10)                210       
=================================================================
Total params: 16,582
Trainable params: 16,582
Non-trainable params: 0
_________________________________________________________________
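
The summary's numbers can be checked by hand: with 'same' padding the output size is ceil(input / stride), and a Conv2D layer has kernel_h * kernel_w * in_channels * filters + filters parameters. A quick sketch:

import math
print(math.ceil(150 / 6))        # 25 -> layer1 output is 25x25
print(math.ceil(25 / 6))         # 5  -> layer2 output is 5x5
print(3 * 3 * 3 * 8 + 8)         # 224  parameters in layer1
print(3 * 3 * 8 * 16 + 16)       # 1168 parameters in layer2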
In [34]:
model_compiled = compileModel(model,train_generated,valid_generated,100)
Epoch 1/100
296/296 [==============================] - 2s 6ms/step - loss: 2.3304 - accuracy: 0.1385 - val_loss: 2.2984 - val_accuracy: 0.0930
Epoch 2/100
296/296 [==============================] - 1s 5ms/step - loss: 2.2080 - accuracy: 0.2500 - val_loss: 2.1791 - val_accuracy: 0.2326
Epoch 3/100
296/296 [==============================] - 1s 5ms/step - loss: 2.0583 - accuracy: 0.3311 - val_loss: 2.0539 - val_accuracy: 0.2791
Epoch 4/100
296/296 [==============================] - 1s 5ms/step - loss: 2.0124 - accuracy: 0.3041 - val_loss: 1.9798 - val_accuracy: 0.3256
Epoch 5/100
296/296 [==============================] - 1s 5ms/step - loss: 1.9456 - accuracy: 0.3378 - val_loss: 1.9044 - val_accuracy: 0.3256
Epoch 6/100
296/296 [==============================] - 1s 5ms/step - loss: 1.8558 - accuracy: 0.3514 - val_loss: 1.9903 - val_accuracy: 0.3488
Epoch 7/100
296/296 [==============================] - 2s 5ms/step - loss: 1.7998 - accuracy: 0.3547 - val_loss: 1.7940 - val_accuracy: 0.3721
Epoch 8/100
296/296 [==============================] - 2s 5ms/step - loss: 1.7383 - accuracy: 0.3986 - val_loss: 1.8009 - val_accuracy: 0.3721
Epoch 9/100
296/296 [==============================] - 2s 5ms/step - loss: 1.6912 - accuracy: 0.4223 - val_loss: 1.6833 - val_accuracy: 0.3953
Epoch 10/100
296/296 [==============================] - 1s 5ms/step - loss: 1.6709 - accuracy: 0.4324 - val_loss: 1.6656 - val_accuracy: 0.3721
Epoch 11/100
296/296 [==============================] - 1s 5ms/step - loss: 1.5619 - accuracy: 0.5101 - val_loss: 1.6127 - val_accuracy: 0.4419
Epoch 12/100
296/296 [==============================] - 1s 5ms/step - loss: 1.5357 - accuracy: 0.5000 - val_loss: 1.6084 - val_accuracy: 0.4651
Epoch 13/100
296/296 [==============================] - 1s 5ms/step - loss: 1.4895 - accuracy: 0.4966 - val_loss: 1.5935 - val_accuracy: 0.4651
Epoch 14/100
296/296 [==============================] - 2s 5ms/step - loss: 1.4673 - accuracy: 0.4696 - val_loss: 1.5392 - val_accuracy: 0.4651
Epoch 15/100
296/296 [==============================] - 1s 5ms/step - loss: 1.4343 - accuracy: 0.5405 - val_loss: 1.5476 - val_accuracy: 0.4419
Epoch 16/100
296/296 [==============================] - 1s 5ms/step - loss: 1.4084 - accuracy: 0.5203 - val_loss: 1.5293 - val_accuracy: 0.4651
Epoch 17/100
296/296 [==============================] - 1s 5ms/step - loss: 1.3889 - accuracy: 0.5405 - val_loss: 1.5443 - val_accuracy: 0.4884
Epoch 18/100
296/296 [==============================] - 1s 5ms/step - loss: 1.3812 - accuracy: 0.5405 - val_loss: 1.5608 - val_accuracy: 0.5116
Epoch 19/100
296/296 [==============================] - 1s 5ms/step - loss: 1.3892 - accuracy: 0.5473 - val_loss: 1.5225 - val_accuracy: 0.4884
Epoch 20/100
296/296 [==============================] - 1s 5ms/step - loss: 1.3110 - accuracy: 0.5743 - val_loss: 1.4982 - val_accuracy: 0.5116
Epoch 21/100
296/296 [==============================] - 1s 5ms/step - loss: 1.2560 - accuracy: 0.5878 - val_loss: 1.4811 - val_accuracy: 0.4884
Epoch 22/100
296/296 [==============================] - 1s 5ms/step - loss: 1.2423 - accuracy: 0.5845 - val_loss: 1.4797 - val_accuracy: 0.4884
Epoch 23/100
296/296 [==============================] - 1s 5ms/step - loss: 1.1689 - accuracy: 0.6486 - val_loss: 1.4879 - val_accuracy: 0.5116
Epoch 24/100
296/296 [==============================] - 1s 5ms/step - loss: 1.2602 - accuracy: 0.5676 - val_loss: 1.4723 - val_accuracy: 0.5116
Epoch 25/100
296/296 [==============================] - 1s 5ms/step - loss: 1.1549 - accuracy: 0.6419 - val_loss: 1.4565 - val_accuracy: 0.5116
Epoch 26/100
296/296 [==============================] - 1s 5ms/step - loss: 1.1319 - accuracy: 0.6385 - val_loss: 1.4742 - val_accuracy: 0.4884
Epoch 27/100
296/296 [==============================] - 1s 5ms/step - loss: 1.1486 - accuracy: 0.6351 - val_loss: 1.5282 - val_accuracy: 0.5116
Epoch 28/100
296/296 [==============================] - 1s 5ms/step - loss: 1.0718 - accuracy: 0.6250 - val_loss: 1.4987 - val_accuracy: 0.4884
Epoch 29/100
296/296 [==============================] - 2s 5ms/step - loss: 1.0594 - accuracy: 0.6453 - val_loss: 1.4760 - val_accuracy: 0.5116
Epoch 30/100
296/296 [==============================] - 2s 5ms/step - loss: 0.9858 - accuracy: 0.6858 - val_loss: 1.4987 - val_accuracy: 0.5349
Epoch 31/100
296/296 [==============================] - 1s 5ms/step - loss: 0.9613 - accuracy: 0.6926 - val_loss: 1.5048 - val_accuracy: 0.5349
Epoch 32/100
296/296 [==============================] - 1s 5ms/step - loss: 0.9932 - accuracy: 0.7264 - val_loss: 1.4927 - val_accuracy: 0.4651
Epoch 33/100
296/296 [==============================] - 1s 5ms/step - loss: 0.9569 - accuracy: 0.7162 - val_loss: 1.4854 - val_accuracy: 0.5349
Epoch 34/100
296/296 [==============================] - 1s 5ms/step - loss: 0.9174 - accuracy: 0.7128 - val_loss: 1.4861 - val_accuracy: 0.4884
Epoch 35/100
296/296 [==============================] - 2s 5ms/step - loss: 0.9424 - accuracy: 0.6791 - val_loss: 1.5177 - val_accuracy: 0.5116
Epoch 00035: early stopping

Plotting Curves

In [35]:
#Visualise training curves (loss and accuracy)
history = model_compiled[1]
pd.DataFrame(history.history).plot(figsize=(10,8))
plt.xlabel("Epochs")
plt.ylabel("Metric value")
plt.grid(True)
plt.show()

Improving the Architecture

In [36]:
def imageclf2(input_shape):
    from tensorflow import keras as ks
    model = ks.models.Sequential()
    #Conv + MaxPooling blocks followed by a dense classifier head
    model.add(ks.layers.Conv2D(16,(6,6),
                               strides=2,
                               activation="relu",
                               padding='same',
                               name="layer1",
                               input_shape=input_shape))
    model.add(ks.layers.MaxPooling2D(pool_size=2))
    model.add(ks.layers.Conv2D(32,(3,3),strides=1,padding="same",activation="relu",name="layer2"))
    model.add(ks.layers.MaxPooling2D(pool_size=2,strides=2))
    model.add(ks.layers.Conv2D(64,(3,3),strides=1,padding="same",activation="relu",name="layer3"))
    model.add(ks.layers.MaxPooling2D(pool_size=2,strides=2))
    model.add(ks.layers.Conv2D(64,(3,3),strides=1,padding="same",activation="relu",name="layer4"))
    model.add(ks.layers.MaxPooling2D(pool_size=2,strides=2))
    
    
    model.add(ks.layers.Flatten())
    model.add(ks.layers.Dense(128,activation="relu",
                              name="layer5"))
    
    model.add(ks.layers.Dense(10,activation="softmax",
                              name="output"))#10 classes 
    model.summary()
    
    return model
In [37]:
model02 = imageclf2(input_shape=input_shape)
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
layer1 (Conv2D)              (None, 75, 75, 16)        1744      
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 37, 37, 16)        0         
_________________________________________________________________
layer2 (Conv2D)              (None, 37, 37, 32)        4640      
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 18, 18, 32)        0         
_________________________________________________________________
layer3 (Conv2D)              (None, 18, 18, 64)        18496     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 9, 9, 64)          0         
_________________________________________________________________
layer4 (Conv2D)              (None, 9, 9, 64)          36928     
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 4, 4, 64)          0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 1024)              0         
_________________________________________________________________
layer5 (Dense)               (None, 128)               131200    
_________________________________________________________________
output (Dense)               (None, 10)                1290      
=================================================================
Total params: 194,298
Trainable params: 194,298
Non-trainable params: 0
_________________________________________________________________
In [38]:
def compiler2(model,train_generator,valid_generator,epchs,bsize,lr=0.0001):
    #compiles and trains the model; early stopping restores the best weights
    from tensorflow import keras as ks
    callbck = ks.callbacks.EarlyStopping(monitor='val_loss',patience=8,verbose=2,restore_best_weights=True)
    opt = ks.optimizers.Adam(learning_rate=lr)

    model.compile(loss="categorical_crossentropy",
                      optimizer=opt,
                      metrics=["accuracy"])
    history = model.fit(train_generator,
                        epochs=epchs,
                        callbacks=[callbck],
                        validation_data=valid_generator,
                        verbose=1,
                        #6552 // 32 = 204 batches per epoch; with batch size 1
                        #each epoch therefore draws 204 of the 296 training images
                        steps_per_epoch = 6552 // bsize)
    #Visualise curves
    plt.plot(history.history['accuracy'], label='train_acc')
    plt.plot(history.history['val_accuracy'], label='valid_acc')

    plt.title('lrate='+str(lr), pad=-50)
    plt.legend()
    plt.grid(True)
    return model,history
In [39]:
model_com02 = compiler2(model02,train_generated,valid_generated,50,32)
Epoch 1/50
204/204 [==============================] - 1s 6ms/step - loss: 2.2911 - accuracy: 0.1961 - val_loss: 2.2758 - val_accuracy: 0.1395
Epoch 2/50
204/204 [==============================] - 1s 5ms/step - loss: 2.0429 - accuracy: 0.2451 - val_loss: 1.7803 - val_accuracy: 0.3721
Epoch 3/50
204/204 [==============================] - 1s 5ms/step - loss: 1.6512 - accuracy: 0.4314 - val_loss: 1.6008 - val_accuracy: 0.3953
Epoch 4/50
204/204 [==============================] - 1s 5ms/step - loss: 1.5088 - accuracy: 0.4657 - val_loss: 1.3190 - val_accuracy: 0.4884
Epoch 5/50
204/204 [==============================] - 1s 5ms/step - loss: 1.3303 - accuracy: 0.4853 - val_loss: 1.4109 - val_accuracy: 0.3721
Epoch 6/50
204/204 [==============================] - 1s 5ms/step - loss: 1.1329 - accuracy: 0.6422 - val_loss: 1.1286 - val_accuracy: 0.5349
Epoch 7/50
204/204 [==============================] - 1s 5ms/step - loss: 1.0527 - accuracy: 0.6618 - val_loss: 1.1238 - val_accuracy: 0.5814
Epoch 8/50
204/204 [==============================] - 1s 5ms/step - loss: 0.9585 - accuracy: 0.6618 - val_loss: 1.2176 - val_accuracy: 0.4884
Epoch 9/50
204/204 [==============================] - 1s 5ms/step - loss: 0.8872 - accuracy: 0.6863 - val_loss: 0.9679 - val_accuracy: 0.7209
Epoch 10/50
204/204 [==============================] - 1s 5ms/step - loss: 0.7383 - accuracy: 0.7010 - val_loss: 1.0680 - val_accuracy: 0.5814
Epoch 11/50
204/204 [==============================] - 1s 5ms/step - loss: 0.7029 - accuracy: 0.7549 - val_loss: 1.1923 - val_accuracy: 0.6047
Epoch 12/50
204/204 [==============================] - 1s 5ms/step - loss: 0.5785 - accuracy: 0.8235 - val_loss: 1.1152 - val_accuracy: 0.6744
Epoch 13/50
204/204 [==============================] - 1s 5ms/step - loss: 0.5796 - accuracy: 0.7941 - val_loss: 1.0037 - val_accuracy: 0.6512
Epoch 14/50
204/204 [==============================] - 1s 5ms/step - loss: 0.5088 - accuracy: 0.8284 - val_loss: 1.0667 - val_accuracy: 0.6047
Epoch 15/50
204/204 [==============================] - 1s 5ms/step - loss: 0.4322 - accuracy: 0.8627 - val_loss: 1.1827 - val_accuracy: 0.6047
Epoch 16/50
204/204 [==============================] - 1s 5ms/step - loss: 0.3434 - accuracy: 0.9069 - val_loss: 1.0998 - val_accuracy: 0.6279
Epoch 17/50
194/204 [===========================>..] - ETA: 0s - loss: 0.3338 - accuracy: 0.9021Restoring model weights from the end of the best epoch.
204/204 [==============================] - 1s 5ms/step - loss: 0.3295 - accuracy: 0.9069 - val_loss: 1.3280 - val_accuracy: 0.5349
Epoch 00017: early stopping

With Image Augmentation

In [40]:
model02 = imageclf2(input_shape=input_shape)
Model: "sequential_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
layer1 (Conv2D)              (None, 75, 75, 16)        1744      
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 37, 37, 16)        0         
_________________________________________________________________
layer2 (Conv2D)              (None, 37, 37, 32)        4640      
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 18, 18, 32)        0         
_________________________________________________________________
layer3 (Conv2D)              (None, 18, 18, 64)        18496     
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 9, 9, 64)          0         
_________________________________________________________________
layer4 (Conv2D)              (None, 9, 9, 64)          36928     
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 4, 4, 64)          0         
_________________________________________________________________
flatten_2 (Flatten)          (None, 1024)              0         
_________________________________________________________________
layer5 (Dense)               (None, 128)               131200    
_________________________________________________________________
output (Dense)               (None, 10)                1290      
=================================================================
Total params: 194,298
Trainable params: 194,298
Non-trainable params: 0
_________________________________________________________________
In [41]:
model_com03 = compiler2(model02,train_generated_aug,valid_generated_aug,100,32)
Epoch 1/100
204/204 [==============================] - 2s 10ms/step - loss: 2.2722 - accuracy: 0.1225 - val_loss: 2.1930 - val_accuracy: 0.1163
Epoch 2/100
204/204 [==============================] - 2s 10ms/step - loss: 1.9576 - accuracy: 0.2500 - val_loss: 1.9411 - val_accuracy: 0.0930
Epoch 3/100
204/204 [==============================] - 2s 10ms/step - loss: 1.7125 - accuracy: 0.3775 - val_loss: 1.6027 - val_accuracy: 0.3721
Epoch 4/100
204/204 [==============================] - 2s 10ms/step - loss: 1.6372 - accuracy: 0.3284 - val_loss: 1.6148 - val_accuracy: 0.3488
Epoch 5/100
204/204 [==============================] - 2s 10ms/step - loss: 1.4969 - accuracy: 0.4412 - val_loss: 1.3412 - val_accuracy: 0.5116
Epoch 6/100
204/204 [==============================] - 2s 10ms/step - loss: 1.4891 - accuracy: 0.4461 - val_loss: 1.4807 - val_accuracy: 0.4186
Epoch 7/100
204/204 [==============================] - 2s 10ms/step - loss: 1.3021 - accuracy: 0.5147 - val_loss: 1.2717 - val_accuracy: 0.5116
Epoch 8/100
204/204 [==============================] - 2s 10ms/step - loss: 1.2435 - accuracy: 0.5147 - val_loss: 1.3621 - val_accuracy: 0.5349
Epoch 9/100
204/204 [==============================] - 2s 10ms/step - loss: 1.2120 - accuracy: 0.6029 - val_loss: 1.2704 - val_accuracy: 0.5116
Epoch 10/100
204/204 [==============================] - 2s 10ms/step - loss: 1.1279 - accuracy: 0.5784 - val_loss: 1.2315 - val_accuracy: 0.5581
Epoch 11/100
204/204 [==============================] - 2s 10ms/step - loss: 0.9795 - accuracy: 0.6422 - val_loss: 1.4210 - val_accuracy: 0.4651
Epoch 12/100
204/204 [==============================] - 2s 10ms/step - loss: 1.0812 - accuracy: 0.5686 - val_loss: 1.0328 - val_accuracy: 0.5581
Epoch 13/100
204/204 [==============================] - 2s 10ms/step - loss: 1.1483 - accuracy: 0.5784 - val_loss: 1.4326 - val_accuracy: 0.4884
Epoch 14/100
204/204 [==============================] - 2s 10ms/step - loss: 0.9525 - accuracy: 0.6618 - val_loss: 1.0720 - val_accuracy: 0.6512
Epoch 15/100
204/204 [==============================] - 2s 10ms/step - loss: 0.9720 - accuracy: 0.6275 - val_loss: 1.1749 - val_accuracy: 0.5581
Epoch 16/100
204/204 [==============================] - 2s 10ms/step - loss: 0.9203 - accuracy: 0.6765 - val_loss: 1.0984 - val_accuracy: 0.5581
Epoch 17/100
204/204 [==============================] - 2s 10ms/step - loss: 0.8362 - accuracy: 0.7010 - val_loss: 1.2060 - val_accuracy: 0.5581
Epoch 18/100
204/204 [==============================] - 2s 10ms/step - loss: 0.8355 - accuracy: 0.6912 - val_loss: 0.9201 - val_accuracy: 0.6512
Epoch 19/100
204/204 [==============================] - 2s 10ms/step - loss: 0.8120 - accuracy: 0.6765 - val_loss: 1.1452 - val_accuracy: 0.5814
Epoch 20/100
204/204 [==============================] - 2s 10ms/step - loss: 0.6991 - accuracy: 0.7353 - val_loss: 1.1868 - val_accuracy: 0.5116
Epoch 21/100
204/204 [==============================] - 2s 10ms/step - loss: 0.8536 - accuracy: 0.6716 - val_loss: 1.2877 - val_accuracy: 0.5581
Epoch 22/100
204/204 [==============================] - 2s 10ms/step - loss: 0.7584 - accuracy: 0.7010 - val_loss: 1.0000 - val_accuracy: 0.6047
Epoch 23/100
204/204 [==============================] - 2s 10ms/step - loss: 0.8310 - accuracy: 0.6618 - val_loss: 0.9248 - val_accuracy: 0.6744
Epoch 24/100
204/204 [==============================] - 2s 10ms/step - loss: 0.6756 - accuracy: 0.7353 - val_loss: 0.9817 - val_accuracy: 0.6512
Epoch 25/100
204/204 [==============================] - 2s 10ms/step - loss: 0.6576 - accuracy: 0.7941 - val_loss: 1.0173 - val_accuracy: 0.5814
Epoch 26/100
203/204 [============================>.] - ETA: 0s - loss: 0.6470 - accuracy: 0.7241Restoring model weights from the end of the best epoch.
204/204 [==============================] - 2s 10ms/step - loss: 0.6446 - accuracy: 0.7255 - val_loss: 0.9528 - val_accuracy: 0.6279
Epoch 00026: early stopping
In [42]:
#Visualise loss curves
history = model_com03[1]
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.legend()
plt.grid()
plt.show()

Conclusion

    Total number of training iterations performed: 210
    The final model contains ~194k parameters
    Finalised model:
    Model: "sequential_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
layer1 (Conv2D)              (None, 75, 75, 16)        1744      
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 37, 37, 16)        0         
_________________________________________________________________
layer2 (Conv2D)              (None, 37, 37, 32)        4640      
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 18, 18, 32)        0         
_________________________________________________________________
layer3 (Conv2D)              (None, 18, 18, 64)        18496     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 9, 9, 64)          0         
_________________________________________________________________
layer4 (Conv2D)              (None, 9, 9, 64)          36928     
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 4, 4, 64)          0         
_________________________________________________________________
flatten_2 (Flatten)          (None, 1024)              0         
_________________________________________________________________
layer5 (Dense)               (None, 128)               131200    
_________________________________________________________________
output (Dense)               (None, 10)                1290      
=================================================================
Total params: 194,298
Trainable params: 194,298
Non-trainable params: 0

Saving the Model

In [43]:
# save the model to disk
model = model_com03[0]
model.save('saved_models/flowerModel02')
WARNING:tensorflow:From /opt/tljh/user/lib/python3.7/site-packages/tensorflow/python/training/tracking/tracking.py:111: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
WARNING:tensorflow:From /opt/tljh/user/lib/python3.7/site-packages/tensorflow/python/training/tracking/tracking.py:111: Layer.updates (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
INFO:tensorflow:Assets written to: saved_models/flowerModel02/assets
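
Keras can also write the model as a single HDF5 file rather than a SavedModel directory, which is convenient for sharing; a sketch (the .h5 path is hypothetical):

#passing a .h5 filename switches model.save to the HDF5 format
model.save('saved_models/flowerModel02.h5')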
In [44]:
#Load the model for prediction
model = keras.models.load_model('saved_models/flowerModel02')
In [45]:
def get_predictions(n):
    #display the n-th image of the first validation batch and predict its class
    image1 = valid_generated_aug[0][0][n]
    plt.imshow(image1)
    input_arr = keras.preprocessing.image.img_to_array(image1)
    input_arr = np.array([input_arr])  # Convert single image to a batch.
    #np.argmax over the softmax output replaces the deprecated predict_classes
    predictions = np.argmax(model.predict(input_arr), axis=-1)
    #note: flower_names is keyed from 1, while the model's class indices start at 0
    return predictions
In [46]:
get_predictions(0)
Out[46]:
array([7])
In [47]:
valid_generated_aug.class_indices
Out[47]:
{'bird of paradise': 0,
 'canterbury bells': 1,
 'english marigold': 2,
 'globe thistle': 3,
 'hard-leaved pocket orchid': 4,
 'monkshood': 5,
 'moon orchid': 6,
 'pink primrose': 7,
 'sweet pea': 8,
 'tiger lily': 9}
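
Inverting class_indices maps a predicted index back to a flower name (idx_to_name is a hypothetical name); with the run above, index 7 corresponds to 'pink primrose':

#predicted index -> flower name
idx_to_name = {v: k for k, v in valid_generated_aug.class_indices.items()}
idx_to_name[7]   # 'pink primrose'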
In [48]:
get_n_images(6,train,"globe thistle")
26

The model predicted the flower correctly!