
Develop your First Image Processing Project with Convolutional Neural Network!

This article was published as a part of the Data Science Blogathon

Introduction

Deep learning is a booming field right now, and most new projects and problem statements use it in some form. If you have to pick a deep learning technique for solving a computer vision problem, many of you, including myself, will go with a convolutional neural network.

In this article, we will build our first image processing project using a CNN, understand its power, and see why it has become so popular. We will walk through every step of developing our own convolutional model and build our first project end to end.


Table of Contents

  • Why Image Classification
  • Convolutional Neural Network
  • Defining Problem Statement
  • Build our first Convolutional Model
  • Build GUI of Project
  • Conclusion

Why Image Classification

Image classification is a task where the system takes an input image and classifies it with an appropriate label.

Today, image classification is used by many organizations to make their processes streamlined, simple, and fast. Have you ever wondered how your devices are able to recognize your face and your family members' faces, or how cars are able to follow traffic rules automatically? This all happens because of image processing.

As technology advances, new algorithms and more powerful neural networks become capable of handling very large images and videos, processing them, and describing them with proper captions.

Brief on Convolutional Neural Network

A convolutional neural network is a class of deep learning model designed for image and video data. It extracts features by convolving the input with learnable filters, assigns weights to those features, and uses them to classify and identify an image.

A CNN is the first choice of most data scientists for any image or video processing task. It is also easy to take a pre-trained transfer learning model and extend it with our own layers.
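
To get an intuition for what a convolutional layer actually produces, here is a minimal standalone sketch (not part of the project code) that passes one random 30*30 image through a single Conv2D layer followed by max pooling and prints the shape of the resulting feature maps.

import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D

# one convolutional layer with 8 filters, followed by 2*2 max pooling
toy = Sequential([
    Conv2D(filters=8, kernel_size=(3,3), activation="relu", input_shape=(30, 30, 3)),
    MaxPool2D(pool_size=(2,2)),
])
dummy_image = np.random.rand(1, 30, 30, 3)   # one random 30*30 RGB "image"
feature_maps = toy.predict(dummy_image)
print(feature_maps.shape)                    # (1, 14, 14, 8) -> 8 learned feature maps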

Defining Problem Statement

Today, self-driving cars are taking over the automobile industry, promising a future where drivers can fully depend on their cars. To achieve this safely, it is important that cars are able to understand all traffic signs. In this project, we are going to build a traffic sign identification model.

There are many different traffic signs, such as speed limits, traffic signals, and direction indicators (left or right). The dataset we are working on (the GTSRB German Traffic Sign dataset) contains around 50,000 images belonging to 43 classes, numbered 0 to 42.

You can download the dataset from here.
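
If you want a quick sanity check of the download, the short sketch below (assuming the Train/0 ... Train/42 folder layout described above) counts how many images each class folder contains.

import os

train_dir = "gtsrb-german-traffic-sign/Train"   # adjust to wherever you extracted the dataset
for class_id in sorted(os.listdir(train_dir), key=int):
    n_images = len(os.listdir(os.path.join(train_dir, class_id)))
    print(f"class {class_id}: {n_images} images")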

Start Building Image Classification Project

In this project, we are going to build a complete end-to-end GUI for the Traffic Sign identification problem statement.

Step-1) Explore the Dataset

Load Dataset

Let's start by importing the required libraries. We will use the Keras library, which runs on top of TensorFlow, to build each layer. So, please install TensorFlow and the other dependencies before importing the deep learning layers.

pip install tensorflow
pip install keras
pip install scikit-learn

Import all the libraries

import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from PIL import Image
import tensorflow as tf
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Conv2D, Dense, Flatten, MaxPool2D, Dropout

Now we will load all the images into one list as pixel arrays and keep the labels of the corresponding images in another list. To feed the image data to the model, we need to convert it into a NumPy array.

The training dataset contains one folder per class, named 0 to 42. With the help of the os module, we will iterate through each class folder and append every image and its respective label to the lists. We also have CSV files that map each class number to its actual category name.

imgs_path = "gtsrb-german-traffic-sign/Train"
data = []
labels = []
classes = 43
for i in range(classes):
    img_path = os.path.join(imgs_path, str(i))   # class folders 0-42
    for img in os.listdir(img_path):
        im = Image.open(os.path.join(img_path, img))
        im = im.resize((30,30))                  # resize every image to 30*30 pixels
        im = np.array(im)
        data.append(im)
        labels.append(i)                         # the folder name is the class label
data = np.array(data)
labels = np.array(labels)
print("success")

Explore sample Image

Let's look at a sample image using the Pillow library.

path = "gtsrb-german-traffic-sign/Train/0/00000_00004_00029.png"
img = Image.open(path)
img = img.resize((30, 30))
arr = np.array(img)   # pixel array of shape (30, 30, 3)
plt.imshow(arr)
plt.show()

[Sample image from the training set]

Step-2) Split Dataset into train and test

We will split the data into training and testing sets, and then use the to_categorical method to convert the labels into one-hot encoding.

from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(data, labels, test_size=0.2, random_state=42)
print("training shape: ",x_train.shape, y_train.shape)
print("testing shape: ",x_test.shape, y_test.shape)
y_train = to_categorical(y_train, 43)
y_test = to_categorical(y_test, 43)

Step-3) Build a CNN model

Now we will start developing a convolutional neural network to classify images into their correct labels. CNNs are best suited to working with image data.

The architecture of our CNN model

  • Conv2D layer – we will add 2 convolutional layers with 32 filters, a kernel size of 5*5, and ReLU activation
  • Max Pooling – MaxPool2D with a 2*2 pool size
  • Dropout with a rate of 0.25
  • 2 convolutional layers with 64 filters and a kernel size of 3*3, followed by another MaxPool2D with a 2*2 pool size
  • Dropout with a rate of 0.25
  • Flatten layer to squeeze the feature maps into 1 dimension
  • Dense, feed-forward layer (256 nodes, activation="relu")
  • Dropout layer (0.5)
  • Dense output layer (nodes=43, activation="softmax")
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(5,5), activation="relu", input_shape=x_train.shape[1:]))
model.add(Conv2D(filters=32, kernel_size=(5,5), activation="relu"))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Dropout(rate=0.25))
model.add(Conv2D(filters=64, kernel_size=(3,3), activation="relu"))
model.add(Conv2D(filters=64, kernel_size=(3,3), activation="relu"))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Dropout(rate=0.25))
model.add(Flatten())
model.add(Dense(256, activation="relu"))
model.add(Dropout(rate=0.5))
model.add(Dense(43, activation="softmax"))
  • MaxPool2D – max pooling layer, used to reduce the spatial size of the feature maps
  • Dropout – a regularization technique used to reduce overfitting
  • Flatten – converts the 2D feature maps into a single 1-dimensional vector
  • Dense – a fully connected, feed-forward layer

The last layer uses a softmax activation function because this is a multi-class classification problem.
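
Before training, it can also help to print the architecture and parameter counts. This step is optional and uses the standard Keras summary method.

model.summary()   # prints each layer, its output shape, and the number of trainable parameters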

Step-4) Train and Validate the Model

Let's first compile the model. During compilation we need to specify the loss function and the optimizer to use.

model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
  • Loss function – measures how far the model's predictions are from the true labels. We use categorical cross-entropy because this is a multi-class classification problem.
  • Optimizer – the algorithm that updates the weights to minimize the loss function. We use Adam, as shown above; see the note below the list for an explicit variant.
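
Passing the string "adam" uses the optimizer's default settings. If you want to control the learning rate explicitly, an equivalent compile call looks like the sketch below (0.001 is the Keras default; older Keras versions spell the argument lr instead of learning_rate).

from keras.optimizers import Adam
model.compile(loss="categorical_crossentropy", optimizer=Adam(learning_rate=0.001), metrics=["accuracy"])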

Let's fit the training data (with the test split used for validation) and start training the convolutional model. We need to define the number of epochs to train for and the batch size to use while training.

epochs = 15
history = model.fit(x_train, y_train, epochs=epochs, batch_size=64, validation_data=(x_test, y_test))

Training will take some time to run, so please be patient while it finishes.

Our model achieved an accuracy of 95% on the training data. Now let us plot the accuracy and loss curves using Matplotlib.

plt.figure(0)
plt.plot(history.history['accuracy'], label="Training accuracy")
plt.plot(history.history['val_accuracy'], label="val accuracy")
plt.title("Accuracy")
plt.xlabel("epochs")
plt.ylabel("accuracy")
plt.legend()
plt.figure(1)
plt.plot(history.history['loss'], label="training loss")
plt.plot(history.history['val_loss'], label="val loss")
plt.title("Loss")
plt.xlabel("epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()

[Training vs. validation accuracy and loss curves]

Our model is performing decently well: we can see the accuracy increasing and the loss decreasing across epochs in the graphs.

Step-5) Test the Model 

The dataset contains a Test folder with unseen test images and a Test.csv file that lists each image path and its respective label. We will load this CSV using pandas, resize every test image to 30*30 pixels, and convert the images to a NumPy array. After processing the test images, we will check the accuracy of the model's predictions against the actual labels.

from sklearn.metrics import accuracy_score
test = pd.read_csv("gtsrb-german-traffic-sign/Test.csv")
test_labels = test['ClassId'].values
test_img_path = "gtsrb-german-traffic-sign"
test_imgs = test['Path'].values
test_data = []
for img in test_imgs:
    im = Image.open(os.path.join(test_img_path, img))
    im = im.resize((30,30))
    im = np.array(im)
    test_data.append(im)
test_data = np.array(test_data)
# predict_classes was removed in recent Keras versions, so take the argmax of predict instead
predictions = np.argmax(model.predict(test_data), axis=1)
print("accuracy: ", accuracy_score(test_labels, predictions))

Step-6) Save the Model

Save the model for future use as well; we will use this saved model to create a GUI for the traffic sign classification project.

model.save('traffic_classifier.h5')
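
As a quick, optional sanity check, we can reload the saved file and confirm that the restored model still produces predictions before moving on to the GUI.

from keras.models import load_model
restored = load_model('traffic_classifier.h5')
print(np.argmax(restored.predict(test_data[:1]), axis=1))   # predicted class id for the first test image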

Hence we have successfully built and evaluated our convolutional neural network for the image classification task. Now we will work on the frontend part and deploy our model in a GUI using the Python Tkinter library.

Build Traffic Classification GUI

Let's start developing the GUI using the Python Tkinter library. First, we will load the saved model, then we will define the class names in a dictionary, and finally we will create the functions to upload and classify images one by one.

from tkinter import filedialog
from tkinter import *
import tkinter as tk
from PIL import ImageTk, Image
from keras.models import load_model
import numpy as np
#load the trained model to classify traffic signs
model = load_model('traffic_classifier.h5')
#dictionary to label all traffic signs class.
classes = { 1:'Speed limit (20km/h)',
            2:'Speed limit (30km/h)',
            3:'Speed limit (50km/h)',
            4:'Speed limit (60km/h)',
            5:'Speed limit (70km/h)',
            6:'Speed limit (80km/h)',
            7:'End of speed limit (80km/h)',
            8:'Speed limit (100km/h)',
            9:'Speed limit (120km/h)',
            10:'No passing',
            11:'No passing veh over 3.5 tons',
            12:'Right-of-way at intersection',
            13:'Priority road',
            14:'Yield',
            15:'Stop',
            16:'No vehicles',
            17:'Veh > 3.5 tons prohibited',
            18:'No entry',
            19:'General caution',
            20:'Dangerous curve left',
            21:'Dangerous curve right',
            22:'Double curve',
            23:'Bumpy road',
            24:'Slippery road',
            25:'Road narrows on the right',
            26:'Road work',
            27:'Traffic signals',
            28:'Pedestrians',
            29:'Children crossing',
            30:'Bicycles crossing',
            31:'Beware of ice/snow',
            32:'Wild animals crossing',
            33:'End speed + passing limits',
            34:'Turn right ahead',
            35:'Turn left ahead',
            36:'Ahead only',
            37:'Go straight or right',
            38:'Go straight or left',
            39:'Keep right',
            40:'Keep left',
            41:'Roundabout mandatory',
            42:'End of no passing',
            43:'End no passing veh > 3.5 tons' }
#initialize GUI
top=tk.Tk()
top.geometry('800x600')
top.title('Traffic sign classification')
top.configure(background='#CDCDCD')
label=Label(top,background='#CDCDCD', font=('arial',15,'bold'))
sign_image = Label(top)
def classify(file_path):
    image = Image.open(file_path)
    image = image.resize((30,30))
    image = np.expand_dims(np.array(image), axis=0)   # shape (1, 30, 30, 3)
    # predict_classes was removed in recent Keras versions, so take the argmax of predict instead
    pred = int(np.argmax(model.predict(image), axis=1)[0])
    sign = classes[pred+1]                            # dictionary keys start at 1
    print(sign)
    label.configure(foreground='#011638', text=sign)
def show_classify_button(file_path):
    classify_b=Button(top,text="Classify Image",command=lambda: classify(file_path),padx=10,pady=5)
    classify_b.configure(background='#364156', foreground='white',font=('arial',10,'bold'))
    classify_b.place(relx=0.79,rely=0.46)
def upload_image():
    try:
        file_path=filedialog.askopenfilename()
        uploaded=Image.open(file_path)
        uploaded.thumbnail(((top.winfo_width()/2.25),(top.winfo_height()/2.25)))
        im=ImageTk.PhotoImage(uploaded)
        sign_image.configure(image=im)
        sign_image.image=im
        label.configure(text='')
        show_classify_button(file_path)
    except:
        pass
upload=Button(top,text="Upload an image",command=upload_image,padx=10,pady=5)
upload.configure(background='#364156', foreground='white',font=('arial',10,'bold'))
upload.pack(side=BOTTOM,pady=50)
sign_image.pack(side=BOTTOM,expand=True)
label.pack(side=BOTTOM,expand=True)
heading = Label(top, text="Know Your Traffic Sign",pady=20, font=('arial',20,'bold'))
heading.configure(background='#CDCDCD',foreground='#364156')
heading.pack()
top.mainloop()

Here, the user first sees an Upload button. Once he/she uploads an image, the Classify Image button becomes visible. When the user clicks it, the classify function is called with the file path: we process the image, feed it to the model to predict its class (a number between 0 and 42), look up the corresponding category name in the classes dictionary, and display it on screen.
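
The same classification step, pulled out of the GUI into a small standalone helper, looks roughly like the sketch below; classes is the dictionary defined above, and the +1 offset accounts for its keys starting at 1 while the model outputs 0-42 (the example path is only a placeholder).

def classify_image(file_path):
    img = Image.open(file_path).convert('RGB')                # drop any alpha channel
    img = img.resize((30, 30))
    batch = np.expand_dims(np.array(img), axis=0)             # shape (1, 30, 30, 3)
    class_id = int(np.argmax(model.predict(batch), axis=1)[0])
    return classes[class_id + 1]                              # dictionary keys start at 1

# example usage (hypothetical path):
# print(classify_image("gtsrb-german-traffic-sign/Test/00001.png"))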

Conclusion

Hurray! We have developed an end-to-end image classification project, a traffic sign classifier, using the Python Keras library. I hope you have enjoyed your first image processing project and that it will help you in future projects too. If you have any doubts, please mention them in the comments; I will be happy to help you out and be a part of your data science journey.

About the Author

Raghav Agrawal

I am pursuing my bachelor’s in computer science. I am very fond of Data science and big data. I love to work with data and learn new technologies. Please feel free to connect with me on Linkedin.

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.

