
Comparison of LDA and PCA 2D projection of Iris dataset in Scikit Learn

LDA and PCA are both dimensionality reduction techniques: they reduce the dimensionality of a dataset while losing as little information as possible and preserving the patterns present in it. In this article, we will use the Iris dataset along with scikit-learn's pre-implemented classes to perform LDA and PCA, each with a single line of code. Projecting the data into 2D and then visualizing it helps us identify the patterns present between the different classes of the dataset.

Implementing PCA using Scikit Learn

Python3
# Import the dataset loader along with the libraries
# used for data handling and visualization
from sklearn import datasets
import pandas as pd
import numpy as np
import seaborn as sb
import matplotlib.pyplot as plt

# Load the Iris dataset and inspect the available keys
iris = datasets.load_iris()
iris.keys()


Output:

dict_keys(['data', 'target', 'frame', 'target_names', 'DESCR',
 'feature_names', 'filename', 'data_module'])

There are some keys in the dataset that we can use to access particular pieces of data. For instance, you can specify iris['data'] to get the length and width measurements of the iris flowers.
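As a quick illustration, here is a minimal sketch of reading a few of these keys directly from the returned Bunch object:

Python3

# Names of the three iris species
print(iris['target_names'])

# Names of the four measured features
print(iris['feature_names'])

# First three rows of raw measurements
print(iris['data'][:3])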

Pandas is a fantastic tool for preprocessing, exploring, and otherwise working with datasets. So let's transform our dataset, which is currently in the form of NumPy arrays, into rows and columns of a DataFrame.

Python3
# Combine the measurements and the target labels
# into a single DataFrame
iris = pd.DataFrame(
    data=np.c_[iris['data'], iris['target']],
    columns=iris['feature_names'] + ['target']
)
iris.head()


Output:

Iris dataset first five rows

Now, let’s separate the features and the target variable.

Python3
# As we only require the measurements,
# we will drop the target column.
X = iris.drop(['target'], axis=1)
Y = iris['target']


Now we will import PCA from the sklearn.decomposition module and use it to convert our dataset from 4D to 2D.

Python3
from sklearn.decomposition import PCA

# Project the four features onto the two
# directions of maximum variance
pca = PCA(n_components=2)
iris_pca = pca.fit_transform(X)


Now iris_pca contains the data in the desired 2D format. Let's plot it on a 2D plane to visualize the pattern between the classes.

Python3
# Recent versions of seaborn require x and y
# to be passed as keyword arguments
sb.scatterplot(x=iris_pca[:, 0],
               y=iris_pca[:, 1],
               hue=iris['target'])
plt.show()


Output:

Visualising data obtained by using PCA

Let's check what percentage of the variance, i.e. the information of the original dataset, is retained by the first principal component alone.

Python3
# Fraction of the total variance explained
# by the first principal component
ret_variance = pca.explained_variance_ratio_[0]
ret_variance


Output:

0.9246187232017271
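So the first principal component alone retains about 92% of the variance. Since we kept two components, the total variance retained by the 2D projection is the sum of both entries of explained_variance_ratio_; a minimal sketch (for Iris, the two components together come out to roughly 98%):

Python3

# Total variance retained by both principal components
pca.explained_variance_ratio_.sum()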

Implementing LDA using Scikit Learn

In this step, we import the LDA model from the scikit-learn library. Note that, unlike PCA, LDA is supervised, so fit_transform also needs the target labels.

Python3
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

# LDA is supervised, so the class labels are
# passed to fit_transform alongside the features
lda = LDA(n_components=2)
iris_lda = lda.fit_transform(X, iris['target'])
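LDA offers a diagnostic similar to PCA's: its explained_variance_ratio_ attribute reports how much of the between-class variance each discriminant captures. A minimal sketch (on Iris, the first discriminant typically accounts for around 99%):

Python3

# Share of between-class variance captured
# by each linear discriminant
lda.explained_variance_ratio_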


Now let’s plot the lower dimensional data on a 2D plane and try to visualize the distinction between the three classes.

Python3
sb.scatterplot(x=iris_lda[:, 0],
               y=iris_lda[:, 1],
               hue=iris['target'])
plt.show()


Output:

Visualising data obtained by using LDA

LDA maximizes the separation between different classes, whereas PCA maximizes the variance of the data without using the class labels at all. When there are only a few samples per class, PCA tends to perform better, because LDA's per-class statistics are poorly estimated; on large datasets with many classes, LDA usually produces the more discriminative low-dimensional representation.
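One rough way to quantify this comparison is to train the same simple classifier on each 2D projection and compare cross-validated accuracies. The sketch below is only illustrative: the choice of LogisticRegression is an assumption, and iris_lda was computed using the labels of the whole dataset, so its score is optimistically biased.

Python3

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Same classifier, two different 2D projections
for name, Z in [('PCA', iris_pca), ('LDA', iris_lda)]:
    scores = cross_val_score(LogisticRegression(max_iter=1000),
                             Z, Y, cv=5)
    print(name, round(scores.mean(), 3))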
