
Getting started with Deep Learning using Keras and TensorFlow in R

Introduction

Choosing between R and Python has always been a debated topic, and the Machine Learning community has long been divided over its preference for one language over the other. But with the explosion of Deep Learning, the balance shifted towards Python, which had an enormous list of Deep Learning libraries and frameworks that R lacked (until now).

I personally switched from R to Python simply because I wanted to dive into the Deep Learning space, and with R that was almost impossible. But not anymore!

With the launch of Keras in R, this fight is back at centre stage. Python was slowly becoming the de facto language for Deep Learning models, but with the release of the keras library for R, backed by tensorflow (with both CPU and GPU support), R is likely to fight Python for the podium even in the Deep Learning space.

Below we will see how to install Keras with TensorFlow in R and build our first Neural Network model on the classic MNIST dataset in RStudio.

 

Table of contents

  1. Installation of Keras with tensorflow at the backend.
  2. Different types of models that can be built in R using Keras
  3. Classifying MNIST handwritten digits using an MLP in R
  4. Comparing MNIST result with equivalent code in Python
  5. End Notes

 

1. Installation of Keras with tensorflow at the backend.

The steps to install Keras in RStudio are very simple. Just follow the steps below and you will be ready to build your first Neural Network model in R.

install.packages("devtools")

devtools::install_github("rstudio/keras")

The above step installs the keras package from its GitHub repository. Now it is time to load keras into R and install tensorflow.

library(keras)

By default, the CPU version of tensorflow is installed. Use the command below to download and install it.

install_tensorflow()

To install the tensorflow version with GPU support on a single-user/desktop system, use the command below.

install_tensorflow(gpu=TRUE)

For a multi-user installation, refer to this installation guide.

Now that we have keras and tensorflow installed inside RStudio, let us build our first neural network in R to solve the MNIST dataset.
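
Before moving on, it is worth confirming that R can actually talk to the backend. The quick check below is an optional sketch, not part of the original walkthrough; is_keras_available() comes from the keras package and tf_config() from the tensorflow package.

#quick sanity check that keras and tensorflow are correctly wired up
library(keras)
is_keras_available()   #should return TRUE if the installation succeeded

#optional: inspect the tensorflow build and python environment picked up
library(tensorflow)
tf_config()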

 

2. Different types of models that can be built in R using keras

Below is the list of models that can be built in R using Keras.

  1. Multi-Layer Perceptrons
  2. Convolutional Neural Networks (a short sketch follows this list)
  3. Recurrent Neural Networks
  4. Skip-Gram Models
  5. Pre-trained models like VGG16, ResNet, etc.
  6. Fine-tuning of pre-trained models
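
To give a flavour of the layer-stacking syntax used for these model types, here is a minimal convolutional model sketch. It is purely illustrative (the layer functions come from the keras R package, while the input shape and filter sizes are arbitrary choices for 28x28 grayscale images); we do not train it in this post.

#an illustrative CNN definition, shown only to demonstrate the API
library(keras)

cnn_model <- keras_model_sequential() %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = 'relu',
                input_shape = c(28, 28, 1)) %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_flatten() %>%
  layer_dense(units = 10, activation = 'softmax')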

Let us start with building a very simple MLP model using just a single hidden layer to try and classify handwritten digits.

 

3. Classifying MNIST handwritten digits using an MLP in R

#loading keras library
library(keras)

#loading the keras inbuilt mnist dataset
data<-dataset_mnist()

#separating the train and test data
train_x<-data$train$x
train_y<-data$train$y
test_x<-data$test$x
test_y<-data$test$y

rm(data)

# reshaping the 3D image arrays (samples x 28 x 28) into 2D matrices (samples x 784) for the MLP and normalising pixel values
train_x <- array(train_x, dim = c(dim(train_x)[1], prod(dim(train_x)[-1]))) / 255
test_x <- array(test_x, dim = c(dim(test_x)[1], prod(dim(test_x)[-1]))) / 255

#converting the target variable to one-hot encoded vectors using the keras inbuilt function
train_y<-to_categorical(train_y,10)
test_y<-to_categorical(test_y,10)
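
#(optional sanity check, not in the original script) after encoding,
#train_y and test_y should be 60000 x 10 and 10000 x 10 matrices respectively
dim(train_y)
dim(test_y)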

#defining a keras sequential model
model <- keras_model_sequential()

#defining the model with 1 input layer [784 neurons], 1 hidden layer [784 neurons] with dropout rate 0.4 and 1 output layer [10 neurons],
#i.e. one neuron per digit from 0 to 9

model %>%
  layer_dense(units = 784, input_shape = 784) %>%
  layer_dropout(rate = 0.4) %>%
  layer_activation(activation = 'relu') %>%
  layer_dense(units = 10) %>%
  layer_activation(activation = 'softmax')

#compiling the defined model with metric = accuracy and optimiser as adam.
model %>% compile(
  loss = 'categorical_crossentropy',
  optimizer = 'adam',
  metrics = c('accuracy')
)

#fitting the model on the training dataset
model %>% fit(train_x, train_y, epochs = 100, batch_size = 128)

#evaluating the model on the held-out test dataset
loss_and_metrics <- model %>% evaluate(test_x, test_y, batch_size = 128)
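
Once trained, the model can also be used to generate predictions on new images. The snippet below is a small add-on to the original script; it relies only on the generic predict() method and base R's max.col(), so it should work across keras versions.

#predicting class probabilities on the test set and picking the most likely digit
pred_probs <- model %>% predict(test_x)
pred_labels <- max.col(pred_probs) - 1   #columns 1..10 correspond to digits 0..9
head(pred_labels)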

 

The above code gave a training accuracy of 99.14% and a validation accuracy of 96.89%. On my i5 processor, a single epoch took around 13.5 seconds, whereas on a TITAN X GPU the validation accuracy was 98.44% with an average epoch taking about 2 seconds.

 

4. MLP using keras – R vs Python

For the sake of comparison, I implemented the above MNIST problem in Python as well. There should not be any difference in results, since keras in R creates a conda environment and runs keras inside it, but you can still find the equivalent Python code below.
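
If you are curious which Python environment your R session is actually driving, the reticulate package (which keras in R builds on) can report it. This is an optional aside, not part of the comparison itself.

#inspecting the Python/conda environment used by keras from R
library(reticulate)
py_config()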

#importing the required libraries for the MLP model
import keras
from keras.models import Sequential
import numpy as np

#loading the MNIST dataset from keras
from keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

#reshaping x_train and x_test to conform to the MLP input dimensions and normalising pixel values
x_train=np.reshape(x_train,(x_train.shape[0],-1))/255
x_test=np.reshape(x_test,(x_test.shape[0],-1))/255

#performing one-hot encoding on the target variables for train and test
import pandas as pd
y_train=pd.get_dummies(y_train)
y_test=pd.get_dummies(y_test)

#converting the one-hot encoded data frames to numpy arrays
y_train=np.array(y_train)
y_test=np.array(y_test)

#defining the model with 1 input layer [784 neurons], 1 hidden layer [784 neurons] with dropout rate 0.4 and 1 output layer [10 neurons]
model=Sequential()

from keras.layers import Dense, Dropout

model.add(Dense(784, input_dim=784, activation='relu'))
model.add(Dropout(0.4))
model.add(Dense(10, activation='softmax'))

# compiling model using adam optimiser and accuracy as metric
model.compile(loss='categorical_crossentropy', optimizer="adam", metrics=['accuracy'])
# fitting model and performing validation

model.fit(x_train,y_train,epochs=50,batch_size=128,validation_data=(x_test,y_test))

 

The above model achieved a validation accuracy of 98.42% on the same GPU. So, as expected, the results are practically the same.

 

5. End Notes

If this was your first Deep Learning model in R, I hope you enjoyed it. With very simple code, you were able to classify handwritten digits with 98% accuracy. This should be motivation enough to get you started with Deep Learning.

If you have already worked with the keras deep learning library in Python, you will find the syntax and structure of the keras library in R to be very similar to that in Python. In fact, the keras package in R creates a conda environment and installs everything required to run keras in that environment. But I am even more excited to see data scientists building real-life deep learning models in R. As they say, the competition should never stop. I would also like to hear your views on this new development for R. Feel free to comment.
