
Improving Image Resolution with Autoencoders

Credit: AITS Cainvas Community

Photo by Judith on Dribbble

Autoencoders can be used to increase image resolution, which has proved effective when extracting information from surveillance-camera feeds. They can also be used for noise reduction and can serve as part of various IIoT applications.

Downloading the Dataset

The data has been taken from the ImageNet dataset.

In [1]:
!wget -N "https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/data.zip"
!unzip -qo data.zip 
!rm data.zip
--2020-12-15 17:01:30--  https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/data.zip
Resolving cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)... 52.219.62.36
Connecting to cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)|52.219.62.36|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 116362032 (111M) [application/zip]
Saving to: ‘data.zip’

data.zip            100%[===================>] 110.97M  73.2MB/s    in 1.5s    

2020-12-15 17:01:31 (73.2 MB/s) - ‘data.zip’ saved [116362032/116362032]
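
As a quick sanity check, the extracted files can be counted before building the model. This is a minimal sketch, assuming the archive unpacks into a data/ directory, which is the path the training loop below walks:

import os

# Count image files under data/ to confirm the download and extraction worked
n_images = sum(
    len([f for f in files if f.lower().endswith(('.jpg', '.jpeg', '.png', '.bmp', '.tiff'))])
    for _, _, files in os.walk('data')
)
print('Images found:', n_images)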

Building the Autoencoder

In [2]:
# Importing necessary Libraries
from tensorflow.keras.layers import Input, Dense, Conv2D, MaxPooling2D, Dropout, Conv2DTranspose, UpSampling2D, add
from tensorflow.keras.models import Model
from tensorflow.keras import regularizers

# Building the encoder
input_img = Input(shape=(256, 256, 3))
l1 = Conv2D(64, (3, 3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(input_img)
l2 = Conv2D(64, (3, 3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(l1)

l3 = MaxPooling2D(padding='same')(l2)
l3 = Dropout(0.3)(l3)
l4 = Conv2D(128, (3, 3),  padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(l3)
l5 = Conv2D(128, (3, 3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(l4)

l6 = MaxPooling2D(padding='same')(l5)
l7 = Conv2D(256, (3, 3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(l6)
In [3]:
# Building the decoder

l8 = UpSampling2D()(l7)

l9 = Conv2D(128, (3, 3), padding='same', activation='relu',
            activity_regularizer=regularizers.l1(10e-10))(l8)
l10 = Conv2D(128, (3, 3), padding='same', activation='relu',
             activity_regularizer=regularizers.l1(10e-10))(l9)

l11 = add([l5, l10])  # Skip connection: reuse the 128-channel encoder features (l5)
l12 = UpSampling2D()(l11)
l13 = Conv2D(64, (3, 3), padding='same', activation='relu',
             activity_regularizer=regularizers.l1(10e-10))(l12)
l14 = Conv2D(64, (3, 3), padding='same', activation='relu',
             activity_regularizer=regularizers.l1(10e-10))(l13)

l15 = add([l14, l2])  # Skip connection: reuse the 64-channel encoder features (l2)

decoded = Conv2D(3, (3, 3), padding='same', activation='relu', activity_regularizer=regularizers.l1(10e-10))(l15)

# Create our network
autoencoder = Model(input_img, decoded)
In [4]:
autoencoder.summary()
Model: "functional_1"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(None, 256, 256, 3) 0                                            
__________________________________________________________________________________________________
conv2d (Conv2D)                 (None, 256, 256, 64) 1792        input_1[0][0]                    
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 256, 256, 64) 36928       conv2d[0][0]                     
__________________________________________________________________________________________________
max_pooling2d (MaxPooling2D)    (None, 128, 128, 64) 0           conv2d_1[0][0]                   
__________________________________________________________________________________________________
dropout (Dropout)               (None, 128, 128, 64) 0           max_pooling2d[0][0]              
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 128, 128, 128 73856       dropout[0][0]                    
__________________________________________________________________________________________________
conv2d_3 (Conv2D)               (None, 128, 128, 128 147584      conv2d_2[0][0]                   
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D)  (None, 64, 64, 128)  0           conv2d_3[0][0]                   
__________________________________________________________________________________________________
conv2d_4 (Conv2D)               (None, 64, 64, 256)  295168      max_pooling2d_1[0][0]            
__________________________________________________________________________________________________
up_sampling2d (UpSampling2D)    (None, 128, 128, 256 0           conv2d_4[0][0]                   
__________________________________________________________________________________________________
conv2d_5 (Conv2D)               (None, 128, 128, 128 295040      up_sampling2d[0][0]              
__________________________________________________________________________________________________
conv2d_6 (Conv2D)               (None, 128, 128, 128 147584      conv2d_5[0][0]                   
__________________________________________________________________________________________________
add (Add)                       (None, 128, 128, 128 0           conv2d_3[0][0]                   
                                                                 conv2d_6[0][0]                   
__________________________________________________________________________________________________
up_sampling2d_1 (UpSampling2D)  (None, 256, 256, 128 0           add[0][0]                        
__________________________________________________________________________________________________
conv2d_7 (Conv2D)               (None, 256, 256, 64) 73792       up_sampling2d_1[0][0]            
__________________________________________________________________________________________________
conv2d_8 (Conv2D)               (None, 256, 256, 64) 36928       conv2d_7[0][0]                   
__________________________________________________________________________________________________
add_1 (Add)                     (None, 256, 256, 64) 0           conv2d_8[0][0]                   
                                                                 conv2d_1[0][0]                   
__________________________________________________________________________________________________
conv2d_9 (Conv2D)               (None, 256, 256, 3)  1731        add_1[0][0]                      
==================================================================================================
Total params: 1,110,403
Trainable params: 1,110,403
Non-trainable params: 0
__________________________________________________________________________________________________

Defining the Callbacks and Compiling the Model

In [5]:
from tensorflow.keras.callbacks import ModelCheckpoint
autoencoder.compile(optimizer='adam', loss='mean_squared_error')
checkpointer = ModelCheckpoint(filepath = "model1.h5", verbose = 2, save_best_only = True)
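
Optionally, an EarlyStopping callback could be defined alongside the checkpoint so that each fit() call stops once the validation loss stops improving. This is a hedged sketch and not part of the original setup; if used, it would be passed as callbacks=[checkpointer, early_stopper] during training:

from tensorflow.keras.callbacks import EarlyStopping

# Optional: stop a fit() run early once val_loss has not improved for 2 epochs
early_stopper = EarlyStopping(monitor='val_loss', patience=2, restore_best_weights=True)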

Reading the Images from the Directory, Creating the Dataset, and Defining the Training Routine

In [6]:
import os
import re
from scipy import ndimage, misc
from skimage.transform import resize, rescale
from matplotlib import pyplot
import numpy as np

def train_batches(just_load_dataset=False):

    # Number of images to load into memory at a time (one training chunk)
    batches = 256 
    
    # Number of images accumulated in the current chunk (grows over time, then resets after each chunk)
    batch = 0 
    
    # For printing purposes
    batch_nb = 0
    
    max_batches = -1
    
    # Number of epochs
    ep = 4 

    images = []
    x_train_high = []
    x_train_low = []
    
    x_train_high2 = [] # Resulting high res dataset
    x_train_low2 = [] # Resulting low res dataset
    
    for root, dirnames, filenames in os.walk("data"):
        for filename in filenames:
            if re.search("\.(jpg|jpeg|JPEG|png|bmp|tiff)$", filename):
                
                filepath = os.path.join(root, filename)
                image = pyplot.imread(filepath)
                if len(image.shape) > 2:
                    
                    # Resize the image so that every image is the same size
                    image_resized = resize(image, (256, 256)) 
                    x_train_high.append(image_resized) # Add this image to the high res dataset
                    x_train_low.append(rescale(rescale(image_resized, 0.5, multichannel=True), 2.0, multichannel=True)) # Downscale by 2x then upscale back to create the low-res (blurred) input
                    batch += 1
                    if batch == batches:
                        
                        x_train_high2 = np.array(x_train_high)
                        x_train_low2 = np.array(x_train_low)
                        
                        if just_load_dataset:
                            return x_train_high2, x_train_low2
                        
                        batch_nb += 1
                        print('Training batch', batch_nb)

                        autoencoder.fit(x_train_low2, x_train_high2,
                            epochs=ep,
                            batch_size=10,
                            shuffle=True,
                            validation_split=0.15,
                            callbacks = [checkpointer] )
                    
                        x_train_high = []
                        x_train_low = []
                    
                        batch = 0

    return x_train_high2, x_train_low2
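
Note that the just_load_dataset flag turns the same routine into a plain data loader: it returns the first full batch of high- and low-resolution pairs without calling fit(). A small usage sketch:

# Load one batch of (high-res, low-res) image pairs without training
x_high, x_low = train_batches(just_load_dataset=True)
print(x_high.shape, x_low.shape)  # (256, 256, 256, 3) each, if at least 256 usable images are found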

Training the Model

In [7]:
x_train_n, x_train_down = train_batches(just_load_dataset=False)
Training batch 1
Epoch 1/4
22/22 [==============================] - ETA: 0s - loss: 0.0387
Epoch 00001: val_loss improved from inf to 0.00878, saving model to model1.h5
22/22 [==============================] - 9s 426ms/step - loss: 0.0387 - val_loss: 0.0088
Epoch 2/4
22/22 [==============================] - ETA: 0s - loss: 0.0063
Epoch 00002: val_loss improved from 0.00878 to 0.00527, saving model to model1.h5
22/22 [==============================] - 7s 338ms/step - loss: 0.0063 - val_loss: 0.0053
Epoch 3/4
22/22 [==============================] - ETA: 0s - loss: 0.0042
Epoch 00003: val_loss improved from 0.00527 to 0.00351, saving model to model1.h5
22/22 [==============================] - 7s 340ms/step - loss: 0.0042 - val_loss: 0.0035
Epoch 4/4
22/22 [==============================] - ETA: 0s - loss: 0.0030
Epoch 00004: val_loss improved from 0.00351 to 0.00274, saving model to model1.h5
22/22 [==============================] - 8s 341ms/step - loss: 0.0030 - val_loss: 0.0027
Training batch 2
Epoch 1/4
22/22 [==============================] - ETA: 0s - loss: 0.0027
Epoch 00001: val_loss improved from 0.00274 to 0.00248, saving model to model1.h5
22/22 [==============================] - 8s 346ms/step - loss: 0.0027 - val_loss: 0.0025
Epoch 2/4
22/22 [==============================] - ETA: 0s - loss: 0.0025
Epoch 00002: val_loss improved from 0.00248 to 0.00240, saving model to model1.h5
22/22 [==============================] - 8s 344ms/step - loss: 0.0025 - val_loss: 0.0024
Epoch 3/4
22/22 [==============================] - ETA: 0s - loss: 0.0024
Epoch 00003: val_loss improved from 0.00240 to 0.00233, saving model to model1.h5
22/22 [==============================] - 8s 345ms/step - loss: 0.0024 - val_loss: 0.0023
Epoch 4/4
22/22 [==============================] - ETA: 0s - loss: 0.0023
Epoch 00004: val_loss improved from 0.00233 to 0.00224, saving model to model1.h5
22/22 [==============================] - 8s 347ms/step - loss: 0.0023 - val_loss: 0.0022
Training batch 3
Epoch 1/4
22/22 [==============================] - ETA: 0s - loss: 0.0022
Epoch 00001: val_loss did not improve from 0.00224
22/22 [==============================] - 8s 349ms/step - loss: 0.0022 - val_loss: 0.0024
Epoch 2/4
22/22 [==============================] - ETA: 0s - loss: 0.0022
Epoch 00002: val_loss did not improve from 0.00224
22/22 [==============================] - 8s 347ms/step - loss: 0.0022 - val_loss: 0.0024
Epoch 3/4
22/22 [==============================] - ETA: 0s - loss: 0.0021
Epoch 00003: val_loss did not improve from 0.00224
22/22 [==============================] - 8s 349ms/step - loss: 0.0021 - val_loss: 0.0023
Epoch 4/4
22/22 [==============================] - ETA: 0s - loss: 0.0021
Epoch 00004: val_loss did not improve from 0.00224
22/22 [==============================] - 8s 350ms/step - loss: 0.0021 - val_loss: 0.0022
Training batch 4
Epoch 1/4
22/22 [==============================] - ETA: 0s - loss: 0.0019
Epoch 00001: val_loss improved from 0.00224 to 0.00193, saving model to model1.h5
22/22 [==============================] - 8s 356ms/step - loss: 0.0019 - val_loss: 0.0019
Epoch 2/4
22/22 [==============================] - ETA: 0s - loss: 0.0019
Epoch 00002: val_loss improved from 0.00193 to 0.00188, saving model to model1.h5
22/22 [==============================] - 8s 355ms/step - loss: 0.0019 - val_loss: 0.0019
Epoch 3/4
22/22 [==============================] - ETA: 0s - loss: 0.0018
Epoch 00003: val_loss improved from 0.00188 to 0.00185, saving model to model1.h5
22/22 [==============================] - 8s 356ms/step - loss: 0.0018 - val_loss: 0.0019
Epoch 4/4
22/22 [==============================] - ETA: 0s - loss: 0.0018
Epoch 00004: val_loss improved from 0.00185 to 0.00183, saving model to model1.h5
22/22 [==============================] - 8s 356ms/step - loss: 0.0018 - val_loss: 0.0018

Loading the Saved Model

In [8]:
import tensorflow as tf
new_model1 = tf.keras.models.load_model('model1.h5')

new_model1.summary()
Model: "functional_1"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(None, 256, 256, 3) 0                                            
__________________________________________________________________________________________________
conv2d (Conv2D)                 (None, 256, 256, 64) 1792        input_1[0][0]                    
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 256, 256, 64) 36928       conv2d[0][0]                     
__________________________________________________________________________________________________
max_pooling2d (MaxPooling2D)    (None, 128, 128, 64) 0           conv2d_1[0][0]                   
__________________________________________________________________________________________________
dropout (Dropout)               (None, 128, 128, 64) 0           max_pooling2d[0][0]              
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 128, 128, 128 73856       dropout[0][0]                    
__________________________________________________________________________________________________
conv2d_3 (Conv2D)               (None, 128, 128, 128 147584      conv2d_2[0][0]                   
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D)  (None, 64, 64, 128)  0           conv2d_3[0][0]                   
__________________________________________________________________________________________________
conv2d_4 (Conv2D)               (None, 64, 64, 256)  295168      max_pooling2d_1[0][0]            
__________________________________________________________________________________________________
up_sampling2d (UpSampling2D)    (None, 128, 128, 256 0           conv2d_4[0][0]                   
__________________________________________________________________________________________________
conv2d_5 (Conv2D)               (None, 128, 128, 128 295040      up_sampling2d[0][0]              
__________________________________________________________________________________________________
conv2d_6 (Conv2D)               (None, 128, 128, 128 147584      conv2d_5[0][0]                   
__________________________________________________________________________________________________
add (Add)                       (None, 128, 128, 128 0           conv2d_3[0][0]                   
                                                                 conv2d_6[0][0]                   
__________________________________________________________________________________________________
up_sampling2d_1 (UpSampling2D)  (None, 256, 256, 128 0           add[0][0]                        
__________________________________________________________________________________________________
conv2d_7 (Conv2D)               (None, 256, 256, 64) 73792       up_sampling2d_1[0][0]            
__________________________________________________________________________________________________
conv2d_8 (Conv2D)               (None, 256, 256, 64) 36928       conv2d_7[0][0]                   
__________________________________________________________________________________________________
add_1 (Add)                     (None, 256, 256, 64) 0           conv2d_8[0][0]                   
                                                                 conv2d_1[0][0]                   
__________________________________________________________________________________________________
conv2d_9 (Conv2D)               (None, 256, 256, 3)  1731        add_1[0][0]                      
==================================================================================================
Total params: 1,110,403
Trainable params: 1,110,403
Non-trainable params: 0
__________________________________________________________________________________________________

Visualizing the Results

In [9]:
# Clip the output to [0, 1] so the predictions are valid pixel values for display
sr1 = np.clip(new_model1.predict(x_train_down), 0.0, 1.0)
image_index = 210
In [10]:
import matplotlib.pyplot as plt

plt.figure(figsize=(128, 128))
i = 1
ax = plt.subplot(10, 10, i)
plt.title("Blurred Image")
plt.imshow(x_train_down[image_index])
i += 1
ax = plt.subplot(10, 10, i)
plt.title("Original Image")
plt.imshow(x_train_n[image_index])
i += 1
ax = plt.subplot(10, 10, i)
plt.title("Recovered Image")
plt.imshow(sr1[image_index])
plt.show()
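
To put a number on the visual improvement, the blurred and recovered images can be compared against the original using PSNR (peak signal-to-noise ratio, higher is better). This is an illustrative sketch using only NumPy and the arrays already in memory; it is not part of the original notebook:

def psnr(a, b, max_val=1.0):
    # PSNR for images scaled to [0, 1]
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(max_val ** 2 / mse)

print('PSNR blurred vs original:   %.2f dB' % psnr(x_train_down[image_index], x_train_n[image_index]))
print('PSNR recovered vs original: %.2f dB' % psnr(sr1[image_index], x_train_n[image_index]))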

Compiling the Model using DeepCC

In [ ]:
!deepCC model1.h5