
What does a CNN deep neural network model actually see?


Welcome to our comprehensive image classification tutorial series! In this five-video tutorial playlist, we will guide you through the entire process of classifying monkey species in images using VGG16 transfer learning and CNN filter visualization.

This is part of our TensorFlow image classification tutorial series!

Each video focuses on a specific aspect of the process, giving you a well-rounded understanding of the subject. Join us as we cover data preparation, classic CNN classification, hyperparameter tuning with Keras Tuner, fine-tuning a pretrained model (VGG16), and exploring the outputs of deep neural network layers. Get ready to dive into the world of monkey species classification and enhance your knowledge with practical examples and expert tips.

Video 1: Data Preparation Tutorial:

In the first video of our series, we will walk you through the essential steps of data preparation for monkey species classification. Discover how to download the necessary image data and explore its characteristics. Learn how to preprocess and format the data to ensure it is ready for the subsequent phases of the classification process. By the end of this tutorial, you will have a well-prepared dataset, laying the foundation for accurate and efficient classification.

Link for more info : https://eranfeit.net/how-to-classify-monkeys-images-using-convolutional-neural-network-keras-tuner-hyper-parameters-and-transfer-learning-part1/


Video 2: CNN Classification Tutorial:

Our second video focuses on the fundamental method of classifying monkey species using a Convolutional Neural Network (CNN). Join us as we guide you through the process of writing a CNN model, training it using the prepared dataset, and evaluating its performance. Gain insights into the inner workings of CNNs and witness their effectiveness in accurately classifying monkey species based on image data.

Link for more info : https://eranfeit.net/how-to-classify-monkeys-images-using-convolutional-neural-network-keras-tuner-hyper-parameters-and-transfer-learning-part2/


Video 3: Enhancing Classification with Keras Tuner:

In the third video, we take the CNN classification tutorial to the next level by incorporating Keras Tuner. Discover how to optimize the performance of your CNN model by automatically searching for the best hyperparameters. Learn how to leverage the power of Keras Tuner to fine-tune your model and achieve even more accurate results in classifying monkey species.

Link for more info : https://eranfeit.net/how-to-classify-monkeys-images-using-convolutional-neural-network-keras-tuner-hyper-parameters-and-transfer-learning-part3/

Video 4: Fine-tuning with Pretrained VGG16:

In the fourth video, we explore an alternative approach to image classification by utilizing a pretrained model, specifically VGG16. Join us as we guide you through the process of fine-tuning the VGG16 model for the task of monkey species classification. Learn how to adapt a powerful pretrained model to accurately classify monkey species and leverage its advanced features for improved results.

Link for more info : https://eranfeit.net/how-to-classify-monkeys-images-using-convolutional-neural-network-keras-tuner-hyper-parameters-and-transfer-learning-part4/

Video 5: Visualizing Deep Neural Network Layers:

In our fifth and final video, we delve into the fascinating world of model interpretability by exploring the outcome of deep neural network layers. Witness what a trained network “sees” as we dissect the layers and visualize their outputs for a specific class. Gain a deeper understanding of how the network processes and interprets images, providing valuable insights into the classification process.

Link for more info : https://eranfeit.net/what-actually-sees-a-cnn-deep-neural-network-model/


Part 5 : How to visualize a CNN deep neural network model?

What does the model actually see during training?

What are the learned filters, and what is the output of each neuron?

In this part we will focus on showing the outputs of the layers, using the VGG16 model that was fine-tuned in the previous part.

Very interesting!

You can find the link for the video tutorial here : https://youtu.be/yg4Gs5_pebY&list=UULFTiWJJhaH6BviSWKLJUM9sg

You can find the code in this link : https://ko-fi.com/s/8fbd4679f6

Link for my blog : https://eranfeit.net/blog/


Introduction

Understanding what a convolutional neural network actually learns is crucial for building trustworthy and high-performing models.
In this tutorial you will visualize convolutional filters and their resulting feature maps using Keras.
You will inspect the trained layers of a transfer-learned model and then pass a sample image through selected layers to see the activations they produce.
This end-to-end walkthrough is designed to help you debug architectures, explain predictions, and guide better data and training decisions.
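
Before the full scripts, here is the core idea in a minimal sketch. It assumes a saved Keras model and a test image at hypothetical paths (model.h5 and test.jpg); swap in your own files. The trick is simply to build a sub-model that ends at an intermediate layer and run a normal forward pass through it.

### Minimal sketch: grab one intermediate activation from a saved Keras model.
from keras.models import load_model, Model
from tensorflow.keras.preprocessing import image
import numpy as np

### Hypothetical path - use your own trained model file here.
model = load_model("model.h5")

### Build a one-output sub-model that stops at the first convolutional layer.
probe = Model(inputs=model.inputs, outputs=model.layers[1].output)

### Load and preprocess any test image the same way the model was trained.
img = image.load_img("test.jpg", target_size=(200, 200))  # hypothetical image path
x = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)

### The activation tensor has shape (1, height, width, num_filters).
print(probe.predict(x).shape)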

Filter Visualization Script

This script loads a trained Keras model, lists its layers, extracts convolutional filter weights from selected layers, and plots both individual filters and a complete grid of filters from the first convolutional block.

### Import the Keras function to load a saved model in HDF5 format.
from keras.models import load_model

### Import Matplotlib for plotting filters as images.
import matplotlib.pyplot as plt

### Import NumPy for numerical array operations.
import numpy as np

### Define the square input size used by the model, for clarity and reuse.
IMG_SIZE = 200

### Define a typical batch size constant for reference; it is not used in the plotting below.
BatchSize = 32

# load our model

### Provide the filesystem path to the trained Keras model file.
modelPath = "C:/Python-cannot-upload-to-GitHub/10-Monkey-Species/myTransferLearningMonkeyModel.h5"

### Load the trained model so we can inspect its layers and weights.
model = load_model(modelPath)

### Print a textual summary of the architecture to the console.
print(model.summary())

### Access the list of layers for iteration and inspection.
AllLayers = model.layers

### Print a header, then the raw list of layer objects, for quick inspection.
print("List of the layers")
print(AllLayers)

# let's create a list of the layers

### Print the total number of layers as a quick check.
print("Total layers : " + str(len(AllLayers)))

### Loop over all layers and print their index and user-friendly name.
for count, layer in enumerate(AllLayers):
    print("layer no. " + str(count) + " : " + layer.name)

### The console output for this VGG-style backbone looks like this:
# Total layers : 21
# layer no. 0 : input_1
# layer no. 1 : block1_conv1
# layer no. 2 : block1_conv2
# layer no. 3 : block1_pool
# layer no. 4 : block2_conv1
# layer no. 5 : block2_conv2
# layer no. 6 : block2_pool
# layer no. 7 : block3_conv1
# layer no. 8 : block3_conv2
# layer no. 9 : block3_conv3
# layer no. 10 : block3_pool
# layer no. 11 : block4_conv1
# layer no. 12 : block4_conv2
# layer no. 13 : block4_conv3
# layer no. 14 : block4_pool
# layer no. 15 : block5_conv1
# layer no. 16 : block5_conv2
# layer no. 17 : block5_conv3
# layer no. 18 : block5_pool
# layer no. 19 : flatten
# layer no. 20 : dense

# let's look at the filters of layer no. 1
print("===================================================")
print("Conv 1")

### Extract the filter weights and biases from layer index 1 (block1_conv1 in VGG-style models).
filters1, biases = model.layers[1].get_weights()

### Print the human-readable name of this layer and the shape of its filter tensor.
print(AllLayers[1].name)
print(filters1.shape)

### Visualize a single filter channel to understand its learned pattern.
# show filter no. 20 out of 64
plt.imshow(filters1[:, :, 0, 20], cmap='gray')  # channel 0 is the Red channel
plt.show()

print("===================================================")
print("Conv 2")

### Extract weights from layer index 2 (block1_conv2) and confirm the selection.
filters2, biases = model.layers[2].get_weights()
print(AllLayers[2].name)
print(filters2.shape)

### Visualize one filter from the second convolutional layer.
# show filter no. 20 out of 64
plt.imshow(filters2[:, :, 0, 20], cmap='gray')
plt.show()

print("===================================================")
print("Conv 4")

### Extract weights from layer index 4 (block2_conv1), the first conv layer of block 2.
filters4, biases = model.layers[4].get_weights()
print(AllLayers[4].name)
print(filters4.shape)

### Visualize a representative filter in the deeper layer to compare spatial patterns.
# show filter no. 20
plt.imshow(filters4[:, :, 0, 20], cmap='gray')
plt.show()

# let's see the whole 64 filters of layer 1

### Create a figure for the filter grid with a readable size and a descriptive title.
fig1 = plt.figure(figsize=(8, 12))
fig1.suptitle("Display filters of layer no. 1", fontsize=20)

### Define an 8x8 grid: 64 filters in total, 8 per row in 8 columns.
cols = 8
rows = 8
n_filters = cols * rows

### Iterate over each filter index to render it in the grid.
for i in range(1, n_filters + 1):
    ### Slice the i-th filter kernel across spatial dimensions and all input channels.
    f = filters1[:, :, :, i - 1]
    ### Create a subplot for this filter and hide the axis ticks for a cleaner view.
    ax = plt.subplot(rows, cols, i)
    ax.set_xticks([])
    ax.set_yticks([])
    ### Render only the first input channel (Red) of the filter in grayscale.
    plt.imshow(f[:, :, 0], cmap='gray')

### Display the composed grid of filters.
plt.show()

You can find the code in this link : https://ko-fi.com/s/8fbd4679f6

You loaded the trained model, listed its layers, extracted convolutional kernels, and plotted both individual filters and a complete grid from the first convolutional block.
This view helps you see edge, color, or texture detectors learned early in the network.
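
One optional refinement that is not part of the original script: raw kernel weights can be negative, and plt.imshow rescales every image independently, which makes filters hard to compare against each other. A quick min-max normalization over the whole weight tensor (reusing filters1 from the script above) puts all filters on a shared grayscale scale.

### Normalize all filters to [0, 1] with one shared scale before plotting,
### so grayscale intensities are comparable across filters.
f_min, f_max = filters1.min(), filters1.max()
filters1_norm = (filters1 - f_min) / (f_max - f_min)

### Plot the same filter as before, now on the shared scale.
plt.imshow(filters1_norm[:, :, 0, 20], cmap='gray')
plt.show()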


Feature Map Visualization Script

This script builds a short model that outputs intermediate activations for selected convolutional layers, runs an input image through those layers, and visualizes the resulting feature maps to understand what the network attends to.

### Import the Keras function to load the trained model.
from keras.models import load_model

### Import Model to construct a sub-graph that outputs intermediate activations.
from keras.models import Model

### Import Keras image utilities to load and preprocess the test image.
from tensorflow.keras.preprocessing import image

### Import Matplotlib to display feature maps as images.
import matplotlib.pyplot as plt

### Import NumPy for handling arrays and tensor transformations.
import numpy as np

### Keep constants explicit for readability and reproducibility.
IMG_SIZE = 200
BatchSize = 32

# load our model

### The path points to the transfer-learned monkey species model from the previous parts.
modelPath = "C:/Python-cannot-upload-to-GitHub/10-Monkey-Species/myTransferLearningMonkeyModel.h5"

### Load the Keras model from disk and print its summary to verify structure and layer names.
model = load_model(modelPath)
print(model.summary())

### Cache the layer objects so we can enumerate them and build a sub-model.
AllLayers = model.layers

### Print the layer list, then enumerate indices and names, which we will select by index below.
print("List of the layers")
print(AllLayers)
print("Total layers : " + str(len(AllLayers)))
for count, layer in enumerate(AllLayers):
    print("layer no. " + str(count) + " : " + layer.name)

### The console output is the same 21-layer VGG-style listing shown in the previous script.

# let's plot the outcome of the layers
# we will create a new short model with only the layers we want to visualize

### Select the convolutional layers whose activations we want to inspect, by index.
conv_layer_index = [1, 2, 4]

### Collect the output tensors of the selected layers.
outputs = []
for i in conv_layer_index:
    print(i)
    outputs.append(model.layers[i].output)

### Print the list of output tensors to verify the selection.
print(outputs)

### Build a short model that shares the original inputs and returns the selected layer outputs.
model_short = Model(inputs=model.inputs, outputs=outputs)
print(model_short.summary())

# load an image

### Replace this path with your own test image when adapting the script.
imagePath = "C:/Python-cannot-upload-to-GitHub/10-Monkey-Species/validation/validation/n3/n311.jpg"

### Load the image and resize it to the input size expected by the network.
img = image.load_img(imagePath, target_size=(IMG_SIZE, IMG_SIZE))

### Convert the PIL image to a NumPy array with shape (H, W, C).
imgNp = image.img_to_array(img)

### Normalize pixel values to the [0, 1] range, as used during training.
imgNp = imgNp / 255.0
print(imgNp.shape)

### Add a batch dimension so the tensor shape becomes (1, H, W, C).
imgToModel = np.expand_dims(imgNp, axis=0)
print(imgToModel.shape)

### Run a forward pass ("prediction") through the short model to get the intermediate activations.
feature_output = model_short.predict(imgToModel)

### Get the activations of the second selected layer (model layer index 2, block1_conv2).
feature_one_layer = feature_output[1]

### The shape is (batch, height, width, num_filters) for that layer.
print(feature_one_layer.shape)

### Show feature map no. 20 (out of 64) for the first image in the batch.
plt.imshow(feature_one_layer[0][:, :, 20])
plt.show()

### Repeat for the third selected layer (model layer index 4, block2_conv1) to compare patterns.
feature_one_layer = feature_output[2]

### The shape reveals the number of channels and the spatial downsampling at this depth.
print(feature_one_layer.shape)

### Show feature map no. 20 again for a qualitative comparison with the shallower layer.
plt.imshow(feature_one_layer[0][:, :, 20])
plt.show()

# let's plot 64 feature maps of our image for each of the short model's layers (1, 2, 4)

### Define an 8x8 grid for each figure and a counter to track the current layer.
cols = 8
rows = 8
layer_index = 0

### Iterate over each activation tensor returned by the short model.
for ftr in feature_output:
    ### Create a new figure window for the current layer's grid.
    fig = plt.figure(figsize=(12, 12))
    ### Retrieve the original model layer index to include in the figure title.
    layerDisplayNumber = conv_layer_index[layer_index]
    fig.suptitle("Layer number : " + str(layerDisplayNumber), fontsize=20)

    ### Loop across the first 64 channels and render each feature map in its own subplot.
    for i in range(1, cols * rows + 1):
        ### Create the subplot for the i-th feature map and hide the ticks to declutter the view.
        ax = plt.subplot(rows, cols, i)
        ax.set_xticks([])
        ax.set_yticks([])
        ### Render the i-th channel of the activation map for the first image in the batch.
        plt.imshow(ftr[0, :, :, i - 1])

    ### Show the current layer's full grid of feature maps, then move to the next layer.
    plt.show()
    layer_index = layer_index + 1

You can find the code in this link : https://ko-fi.com/s/8fbd4679f6

You built a sub-model that exposes intermediate activations, pushed a sample image through, and visualized selected feature maps individually and as grids.
This reveals how spatial patterns evolve from edges and colors in shallow layers to more abstract motifs deeper in the network.
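
As a small extension beyond the tutorial's script, you can rank channels by their mean activation to decide which feature maps deserve a closer look, instead of hard-coding map no. 20. This sketch reuses feature_output from the script above; NumPy and Matplotlib are already imported there.

### Rank channels of block1_conv2 (the second selected layer) by mean activation.
act = feature_output[1][0]                 # shape (height, width, num_filters)
mean_per_channel = act.mean(axis=(0, 1))   # one score per filter
top5 = np.argsort(mean_per_channel)[::-1][:5]
print("Most active channels:", top5)

### Plot the single most active feature map for this image.
plt.imshow(act[:, :, top5[0]])
plt.show()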


Connect :

☕ Buy me a coffee — https://ko-fi.com/eranfeit

🖥️ Email : feitgemel@gmail.com

🌐 https://eranfeit.net

🤝 Fiverr : https://www.fiverr.com/s/mB3Pbb

Enjoy,

Eran
