In [1]:
import cv2 as cv
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf

Read in image

In [8]:
im = cv.imread("Candy.jfif")  # OpenCV reads the image in BGR channel order
im = im[:,:,::-1]             # reverse the channel axis: BGR -> RGB for matplotlib
plt.imshow(im)
Out[8]:
<matplotlib.image.AxesImage at 0x20e9e31e448>
Notebook Image
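
Equivalently, the channel reversal can be made explicit with OpenCV's cvtColor; a minimal sketch, assuming the same "Candy.jfif" file:

In [ ]:
import cv2 as cv
import matplotlib.pyplot as plt

im_bgr = cv.imread("Candy.jfif")                # OpenCV loads images in BGR channel order
im_rgb = cv.cvtColor(im_bgr, cv.COLOR_BGR2RGB)  # same effect as im[:, :, ::-1]
plt.imshow(im_rgb)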

Load the essential Keras libraries

For how to choose the number of filters, see: https://stackoverflow.com/questions/36243536/what-is-the-number-of-filter-in-cnn

In [3]:
import keras
from keras.models import Sequential
from keras.layers import Conv2D,MaxPooling2D,Flatten
from keras.layers import Dense
from keras import backend as k
from keras.utils import plot_model
Using TensorFlow backend.

The function below is required to reproduce the same results on each run.

In [7]:
def reproduceResult():    
    # Seed value (can actually be different for each attribution step)
    seed_value= 0

    # 1. Set `PYTHONHASHSEED` environment variable at a fixed value
    import os
    os.environ['PYTHONHASHSEED']=str(seed_value)

    # 2. Set `python` built-in pseudo-random generator at a fixed value
    import random
    random.seed(seed_value)

    # 3. Set `numpy` pseudo-random generator at a fixed value
    import numpy as np
    np.random.seed(seed_value)

    # 4. Set `tensorflow` pseudo-random generator at a fixed value
    import tensorflow as tf
    tf.compat.v1.set_random_seed(seed_value)
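
For fully deterministic runs on the TensorFlow 1.x backend, some setups additionally pin Keras to a single-threaded session. This is optional and not part of the function above; a sketch, assuming standalone Keras 2.x on the TensorFlow backend:

In [ ]:
import tensorflow as tf
from keras import backend as k

# 5. (optional) force single-threaded execution and register the session with Keras
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1,
                                        inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
k.set_session(sess)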

Generate a Sequential model and add layers to it

There is no single correct answer for the best number of filters; it depends strongly on the type and complexity of your (image) data. A suitable number is learned from experience after working with similar kinds of datasets repeatedly over time. In general, the more features you want to capture (and that are potentially available) in an image, the more filters the CNN needs.
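
As an illustration only (the filter counts 32 and 64 are hypothetical, not the single-filter model built below), deeper CNNs usually raise the number of filters as the spatial resolution shrinks:

In [ ]:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D

# hypothetical filter counts: 32 low-level feature maps, then 64 higher-level ones
demo = Sequential([
    Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(145, 348, 3)),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(64, kernel_size=(3, 3), activation='relu'),
], name="filter_count_demo")
demo.summary()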

In [74]:
reproduceResult()
X = im[:, :, ::-1]   # flip the channels back to BGR; only the shape is used below
model = Sequential(name="s1")
model.add(Conv2D(1, kernel_size=(2, 2), strides=(1, 1), activation='relu', input_shape=X.shape, name="l1"))
model.add(MaxPooling2D(pool_size=(2, 2), name="l2"))
model.add(Conv2D(1, kernel_size=(2, 2), strides=(1, 1), activation='relu', name="l3"))

plot_model(model, rankdir="LR")

im_batch = np.expand_dims(im[:, :, ::-1], axis=0)   # add a batch axis: (1, H, W, 3)
# print(im_batch.shape)
im_conv = model.predict(im_batch, verbose=0)

op_conv = np.squeeze(im_conv)   # drop the batch and channel axes to get a 2-D array
print(op_conv.shape)
(71, 172)
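
As an aside, predict expects a leading batch axis, which is exactly what np.expand_dims(..., axis=0) adds; a minimal sketch with a dummy array of the same shape:

In [ ]:
import numpy as np

x = np.zeros((145, 348, 3))          # a single image: height x width x channels
x_batch = np.expand_dims(x, axis=0)  # add the batch axis -> (1, 145, 348, 3)
print(x_batch.shape)                 # equivalent shorthand: x[np.newaxis, ...]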

Print the model summary

In [93]:
X.shape
Out[93]:
(145, 348, 3)
In [75]:
model.summary()
Model: "s1" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= l1 (Conv2D) (None, 144, 347, 1) 13 _________________________________________________________________ l2 (MaxPooling2D) (None, 72, 173, 1) 0 _________________________________________________________________ l3 (Conv2D) (None, 71, 172, 1) 5 ================================================================= Total params: 18 Trainable params: 18 Non-trainable params: 0 _________________________________________________________________
In [97]:
# conv output size: W_out = (W − F + 2P)/S + 1, here W=145, F=2, P=0, S=1
(145 - 2 + 0)//1 + 1
Out[97]:
144
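
The same formula can be wrapped in a small helper and checked against every row of the summary above (a sketch; conv_output_size is a name introduced here, not a Keras function):

In [ ]:
def conv_output_size(w, f, p=0, s=1):
    # W_out = (W - F + 2P) / S + 1
    return (w - f + 2 * p) // s + 1

# l1 (Conv2D, 2x2 kernel, stride 1):             145 -> 144, 348 -> 347
print(conv_output_size(145, 2), conv_output_size(348, 2))
# l2 (MaxPooling2D, 2x2 pool, stride 2 default): 144 -> 72, 347 -> 173
print(144 // 2, 347 // 2)
# l3 (Conv2D, 2x2 kernel, stride 1):             72 -> 71, 173 -> 172
print(conv_output_size(72, 2), conv_output_size(173, 2))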

Visualize the model

In [76]:
plot_model(model,rankdir="LR")
Out[76]:
Notebook Image

Display the final output of the convolved image

In [77]:
plt.imshow(op_conv,cmap="gray")
Out[77]:
<matplotlib.image.AxesImage at 0x20e9f3f8ec8>
Notebook Image

How to see the intermediate output of each convolution layer?

In [78]:
model.layers # returns the list of layers in the model
Out[78]:
[<keras.layers.convolutional.Conv2D at 0x20e9f41b648>,
 <keras.layers.pooling.MaxPooling2D at 0x20e9fb4ae88>,
 <keras.layers.convolutional.Conv2D at 0x20e9f401388>]
Get details of a layer
In [87]:
model.layers[0]
Out[87]:
<keras.layers.convolutional.Conv2D at 0x20e9f41b648>
In [88]:
model.layers[0].input
Out[88]:
<tf.Tensor 'l1_input_3:0' shape=(None, 145, 348, 3) dtype=float32>
In [81]:
model.layers[0].output
Out[81]:
<tf.Tensor 'l1_3/Relu:0' shape=(None, 144, 347, 1) dtype=float32>
Pass the layer's input and output tensors to the TensorFlow backend "function", then call it on the input image
In [82]:
k.function(model.layers[0].input,model.layers[0].output)(im_batch)[0][:,:,0].shape # [0] selects the first batch item, [:,:,0] the single filter channel
Out[82]:
(144, 347)
Output of layer 1 (Conv2D)
In [83]:
plt.imshow(k.function(model.layers[0].input,model.layers[0].output)(im_batch)[0][:,:,0],cmap="gray")
Out[83]:
<matplotlib.image.AxesImage at 0x20ea12212c8>
Notebook Image
Output of layer 2 (MaxPooling2D)
In [84]:
plt.imshow(k.function(model.layers[0].input,model.layers[1].output)(im_batch)[0][:,:,0],cmap="gray")
Out[84]:
<matplotlib.image.AxesImage at 0x20ea1255ec8>
Notebook Image
Output of layer 3 (Conv2D)
In [85]:
plt.imshow(k.function(model.layers[0].input,model.layers[2].output)(im_batch)[0][:,:,0],cmap="gray")
Out[85]:
<matplotlib.image.AxesImage at 0x20ea12cf608>
Notebook Image
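
An equivalent way to collect every intermediate activation in one call is to wrap the layer outputs in a Model; a sketch reusing the same model and im_batch as above:

In [ ]:
from keras.models import Model

# one model whose outputs are the outputs of every layer of `model`
activation_model = Model(inputs=model.input, outputs=[layer.output for layer in model.layers])
activations = activation_model.predict(im_batch)

for layer, act in zip(model.layers, activations):
    print(layer.name, act.shape)
plt.imshow(activations[0][0, :, :, 0], cmap="gray")  # feature map of the first Conv2D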

To see the weights of each layer

In [86]:
for layer in model.layers: # check that the weights are identical on every run (reproducibility)
    print(layer.weights)
[<tf.Variable 'l1_3/kernel:0' shape=(2, 2, 3, 1) dtype=float32, numpy=
array([[[[ 0.26042163],
         [-0.14865789],
         [ 0.17352915]],

        [[ 0.27056372],
         [-0.39672798],
         [-0.21167147]]],


       [[[ 0.40980643],
         [-0.21366382],
         [-0.08131087]],

        [[ 0.5788453 ],
         [ 0.3004995 ],
         [-0.5504155 ]]]], dtype=float32)>, <tf.Variable 'l1_3/bias:0' shape=(1,) dtype=float32, numpy=array([0.], dtype=float32)>]
[]
[<tf.Variable 'l3_3/kernel:0' shape=(2, 2, 1, 1) dtype=float32, numpy=
array([[[[-0.6308038 ]],

        [[-0.6023466 ]]],


       [[[ 0.4050333 ]],

        [[ 0.20765573]]]], dtype=float32)>, <tf.Variable 'l3_3/bias:0' shape=(1,) dtype=float32, numpy=array([0.], dtype=float32)>]
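
layer.get_weights() returns the same values as plain NumPy arrays, which is often easier to inspect than the tf.Variable repr above; a minimal sketch on the same model:

In [ ]:
for layer in model.layers:
    for w in layer.get_weights():  # [kernel, bias] for Conv2D, [] for MaxPooling2D
        print(layer.name, w.shape)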