
Neural Network 101 – Ultimate Guide for Starters

Date: 03-July-2040

Mission: Project Aries

Destination: Mars

Date of arrival to Mars: 18-Feb-2041

Landing Location: Jezero Crater, Mars 

"Imagine you are on a space mission to Mars as part of "Project Aries". You are in a spaceship with your crew (8 in total) and an ASI (Artificial Super Intelligence); let's call it "HAL9000". You are drifting through the vast vacuum of space, millions of miles away from Earth. To preserve valuable resources like energy, oxygen, and water, you and your crew enter a deep-sleep state for 4 months. In the meantime, your onboard ASI monitors and controls all operations of your spacecraft. Now, what if HAL9000 considers you and your crew a threat to its existence and decides to sabotage the mission? Scary, isn't it? I am sure you have already figured out which movie this relates to. That's right! I am talking about 2001: A Space Odyssey. For those of you who do not know what HAL9000 is, well, this is HAL9000." Love that glowing red eye!!

 

HAL9000 – Wikipedia

You might ask what this has to do with neural networks. Well, technically HAL9000 is termed an "Artificial Super Intelligence", but in very simple terms it's a neural network, which is the topic of this blog. So let's dive into the realm of neural networks.

Neural Network Definition

There are several definitions of neural networks. A few of them include the following:

A neural network is a series of algorithms that endeavors to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates – investopedia.com

A neural network is a network or circuit of neurons, or in a modern sense, an artificial neural network, composed of artificial neurons or nodes – Wikipedia

Neural networks or also known as Artificial Neural Networks (ANN) are networks that utilize complex mathematical models for information processing. They are based on the model of the functioning of neurons and synapses in the brain of human beings. Similar to the human brain, a neural network connects simple nodes, also known as neurons or units. And a collection of such nodes forms a network of nodes, hence the name “neural network.” – hackr.io

Well, that's a lot of stuff to consume.

For anyone starting out with neural networks, let's create our own simple definition. Let's split the term into two parts.

What is a Neural Network?

Network means an interconnection of some sort between things. What those things are, we will see later down the road.

Neural means neurons. What are neurons? Let me explain this shortly.

 

To explain neurons in a simple manner: they are the fundamental building blocks of the human brain. All your life experiences, feelings, and emotions, basically your entire personality, are defined by those neurons. Every decision you make in your daily life, no matter how small or big, is driven by those neurons.

So a neural network means a network of neurons. That's it. You might ask, "Why are we discussing biology in a blog about neural networks?" Well, in the data science realm, the neural networks we discuss are basically inspired by the structure of the human brain, hence the name.

Another important thing to consider is that an individual neuron by itself cannot do much. It is in a collection of neurons where the real magic happens.

 

Neural Network in Data Science Universe

You might have a question: "Where does the neural network stand in the vast Data Science Universe?" Let's find this out with the help of a diagram.

Position of Neural Network in Data Science Universe

 

What do you see in this diagram? Under Data Science, we have Artificial Intelligence. Consider this an umbrella. Under this umbrella, we have Machine Learning (a sub-field of AI). Under that, we have another umbrella named "Deep Learning", and this is the place where neural networks live. (A dream inside another dream 🙂 classic Inception stuff.)

Basically, deep learning is the sub-field of machine learning that deals with the study of neural networks. Why is the study of neural networks called "Deep Learning"? Well, read this blog further to find out 🙂

 

Structure of Neural Networks

Since we already said that neural networks are inspired by the human brain, let's first understand the structure of the human brain.

Each neuron is composed of three parts:

1. Axon

2. Dendrites

3. Body

As I explained earlier, neurons work in association with each other. Each neuron receives signals from other neurons, and this is done by the dendrites. The axon is responsible for transmitting output to other neurons. The dendrites and axon are interconnected with the help of the body (in simplified terms). Now let's understand how this relates to the neural networks used in the data science realm.

 

In any neural network, there are 3 layers present:

1. Input Layer: It functions similarly to dendrites. The purpose of this layer is to accept input from other neurons.

2. Hidden Layers: These are the layers that perform the actual computation.

3. Output Layer: It functions similarly to the axon. The purpose of this layer is to transmit the generated output to other neurons.

One thing to note here is that in the above diagram we have 2 hidden layers. But there is no limit on how many hidden layers there can be. It can be as low as 1 or as high as 100, or maybe even 1000!
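
To make the layer structure concrete, here is a minimal sketch (not taken from the article) of a tiny feed-forward network with one input layer, two hidden layers, and one output layer, written in plain NumPy. The layer sizes, the random weights, and the sigmoid activation are illustrative assumptions, not a prescription.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, weights, bias):
    """One fully connected layer followed by a sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-(x @ weights + bias)))

# Illustrative layer sizes: 3 inputs -> 4 hidden -> 4 hidden -> 1 output
w1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # input layer   -> hidden layer 1
w2, b2 = rng.normal(size=(4, 4)), np.zeros(4)   # hidden layer 1 -> hidden layer 2
w3, b3 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden layer 2 -> output layer

x = np.array([[0.5, -1.2, 3.0]])    # one input sample with 3 features

h1 = dense(x, w1, b1)               # hidden layer 1
h2 = dense(h1, w2, b2)              # hidden layer 2
output = dense(h2, w3, b3)          # output layer

print(output)                       # the (untrained) network's prediction
```

Adding more `dense` calls (and weight matrices) is all it takes to make the network "deeper".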

Now it's time to answer our question: "Why is the study of neural networks called Deep Learning?" Well, the answer is right in the figure itself 🙂

It is because of the presence of multiple hidden layers in the neural network, hence the name "Deep". Also, after creating the neural network, we have to train it in order to solve the problem, hence the name "Learning". Together these two constitute "Deep Learning".

 

Ingredients of a Neural Network

As Deep Learning is a sub-field of Machine Learning, the core ingredients will be the same. These ingredients include the following:

1. Data: Information needed by the neural network

2. Model: The neural network itself

3. Objective Function: Computes how close or far the model's output is from the expected one

4. Optimization Algorithm: Improves the performance of the model through a loop of trial and error

The first two ingredients are quite self-explanatory. Let’s get familiar with objective functions.

 

Objective Function

The purpose of the objective function is to calculate how close the model's output is to the expected output. In short, it computes the accuracy of our neural network. In this regard, there are basically two types of objective functions.

Types of Objective Functions

1. Loss Function

To understand the loss function, let me explain it with the help of an example. Imagine you have a Roomba (a small robot vacuum that cleans your house, for those who haven't seen one).

Let's call our Roomba "Mr. Robot". Mr. Robot's job is to clean the floor when it senses any dirt. Now, since Mr. Robot is battery-operated, each time it works it consumes battery power. So in this context, what is the ideal condition in which Mr. Robot should operate? By consuming the minimum possible energy while still doing its job efficiently. That is the idea behind the loss function.

The lower the value of the loss function, the better the accuracy of our neural network.
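
To make "lower is better" concrete, here is a minimal sketch of one common loss function, mean squared error. The article does not name a specific loss function; MSE is just a familiar example used for illustration.

```python
import numpy as np

def mean_squared_error(predicted, expected):
    """Average squared difference between the model's output and the target."""
    return np.mean((np.asarray(predicted) - np.asarray(expected)) ** 2)

# A prediction close to the target gives a small loss...
print(mean_squared_error([0.9, 0.1], [1.0, 0.0]))   # 0.01
# ...and a prediction far from the target gives a large loss.
print(mean_squared_error([0.1, 0.9], [1.0, 0.0]))   # 0.81
```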

2. Reward Function:

Let me explain this with the help of another example.

Reward Function illustration – KDnuggets.com

Let's say you are teaching your dog to fetch a stick. Every time your dog fetches the stick, you award it, let's say, a bone. Well, that is the concept behind the reward function.

The higher the value of the reward function, the better the accuracy of our neural network.
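
Sticking with the dog-and-stick analogy, here is a toy sketch of a reward function. The "+1 bone per fetch" scheme is a made-up illustration, not something from the article.

```python
def fetch_reward(fetched_stick):
    """Toy reward: one 'bone' (+1) for fetching the stick, nothing otherwise."""
    return 1.0 if fetched_stick else 0.0

# Over many attempts, the learner tries to maximise the total reward it collects.
attempts = [True, False, True, True]
total_reward = sum(fetch_reward(a) for a in attempts)
print(total_reward)   # 3.0; with a reward function, higher is better
```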

 

Optimization Algorithm

Any machine learning algorithm is incomplete without an optimization algorithm. The main goal of an optimization algorithm is to subject our ML model (in this case, the neural network) to a series of trial-and-error steps that eventually result in a model with higher accuracy.

In the context of neural networks, we use a specific optimization algorithm called gradient descent. Let’s understand this with the help of an example.

Optimization Algorithm for Neural Network

Let's imagine that we are climbing down a hill. With each step, we can feel that we are getting closer to a flat surface. Once we reach the flat surface, we no longer feel that strain on our feet. Well, that is the concept behind gradient descent.

Gradient Descent

In gradient descent, there are a few terms that we need to understand. In our previous example, when we climb down the hill we reach a flat surface. In gradient descent, we call this the global minimum. Now, what does the global minimum mean? If you are using a loss function, it is the point at which the loss is at its minimum, which is exactly what we want.

Alternatively, if you are using a reward function, then our goal is to reach the point where the reward is maximum (that is, the global maximum). In that case, we have to use something called gradient ascent. Think of it as the opposite of gradient descent, meaning that now we need to climb up the hill in order to reach its peak 🙂
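
Here is a minimal sketch of gradient descent on a simple one-dimensional "hill", f(x) = x², whose global minimum sits at x = 0. The starting point, learning rate, and number of steps are arbitrary illustrative choices, not values from the article.

```python
def f(x):
    """The 'hill' we want to walk down; its global minimum is at x = 0."""
    return x ** 2

def gradient(x):
    """Slope of f at x (the derivative of x**2)."""
    return 2 * x

x = 5.0              # start somewhere up the hill
learning_rate = 0.1  # size of each step

for step in range(50):
    x = x - learning_rate * gradient(x)   # step in the downhill direction
    # gradient *ascent* (maximising a reward) would add the gradient instead

print(x, f(x))       # x is now very close to 0, the global minimum
```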

 

Types of Neural Networks

There are many different types of neural networks. A few of the popular ones include the following:

Types of Neural Networks

Let me give you a one-liner about where each of these neural networks is used:

1. Convolutional Neural Network (CNN): used in image recognition and classification

2. Artificial Neural Network (ANN): used in image compression

3. Restricted Boltzmann Machine (RBM): used for a variety of tasks including classification, regression, and dimensionality reduction

4. Generative Adversarial Network (GAN): used for fake news detection, face detection, etc.

5. Recurrent Neural Network (RNN): used in speech recognition

6. Self-Organizing Maps (SOM): used for topology analysis

 

Applications of Neural Network in Real Life

In this part, let's get familiar with some applications of neural networks.

1. Adaptive Battery in Android OS

If you happen to have an Android phone running Android OS 9.0 or above, when you go into the Settings menu under the Battery section you will see an option for Adaptive Battery. What this feature does is pretty remarkable. It basically uses Convolutional Neural Networks (CNNs) to identify which apps on your phone are consuming more power and, based on that, it restricts those apps.

Adaptive Battery

2. Live Caption in Android OS

As a part of Android OS 10.0, Google introduced a feature called Live Caption. When enabled, this feature uses a combination of CNNs and RNNs to recognize the speech in a video and generate captions for it in real time.

Live Caption in Android OS

 

3. Face Unlock

Today, almost every newly launched Android phone uses some sort of face unlock to speed up the unlocking process. Here, CNNs are essentially used to help identify your face. That's why you can observe that the more you use face unlock, the better it becomes over time.

 

4. Google Camera Portrait Mode

Do you have a Google Pixel? Wondering why it takes industry-leading bokeh shots? Well, you can thank the integration of CNNs into the Google Camera app for that 🙂

 

5. Google Assistant

Wondering how Google Assistant wakes up after you say "Ok Google"? Don't say this loudly, you might invoke someone's Google Assistant 🙂. It uses an RNN for this wake-word detection.

Well, this is it. This is all you need to know about neural networks as a starter.

I hope you like this article.

If you like this article please share this with your friends and colleagues.

https://www.linkedin.com/in/shrish-mohadarkar-060209109/
