A Voting Classifier is an ensemble machine learning model that trains on a collection of base models and predicts the output class with the strongest combined support.
It simply aggregates the findings of each classifier passed into the Voting Classifier and predicts the output class based on the majority of votes. The idea is that, instead of creating separate dedicated models and evaluating the accuracy of each one, we create a single model that trains on these models together and predicts the output based on their combined voting for each output class.
A Voting Classifier supports two types of voting:
- Hard Voting: In hard voting, the predicted output class is the class that receives the majority of votes, i.e. the class predicted most often across the individual classifiers. Suppose three classifiers predict the output classes (A, A, B); the majority predicted A, so A will be the final prediction.
- Soft Voting: In soft voting, the output class is the class with the highest average predicted probability across the classifiers. Suppose, for some input, the three models predict a probability for class A of (0.30, 0.47, 0.53) and for class B of (0.20, 0.32, 0.40). The average for class A is 0.4333 and for class B is 0.3067, so the winner is clearly class A, because it has the highest probability averaged over the classifiers (see the sketch after this list).
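The arithmetic behind both rules can be reproduced in a few lines. The following is a minimal NumPy sketch (not scikit-learn's implementation) using the hypothetical votes and probabilities from the two examples above:

```python
import numpy as np

# Hard voting: each classifier casts one vote and the majority class wins.
votes = np.array(['A', 'A', 'B'])      # predictions from three classifiers
classes, counts = np.unique(votes, return_counts=True)
print(classes[np.argmax(counts)])      # -> A

# Soft voting: average the per-class probabilities, pick the highest mean.
# Rows are classifiers; columns are the (hypothetical) probabilities
# for classes A and B from the example above.
proba = np.array([[0.30, 0.20],
                  [0.47, 0.32],
                  [0.53, 0.40]])
mean_proba = proba.mean(axis=0)        # -> [0.4333, 0.3067]
print(classes[np.argmax(mean_proba)])  # -> A
```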
Note: Make sure to feed the Voting Classifier a variety of model types, so that an error made by one model can be compensated for by the others.
Code: Python code to implement a Voting Classifier
```python
# importing the required libraries
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# loading the iris dataset
iris = load_iris()
X = iris.data
Y = iris.target

# splitting the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, Y, test_size=0.20, random_state=42)

# group / ensemble of models
estimator = []
estimator.append(('LR', LogisticRegression(solver='lbfgs',
                                           multi_class='multinomial',
                                           max_iter=200)))
estimator.append(('SVC', SVC(gamma='auto', probability=True)))
estimator.append(('DTC', DecisionTreeClassifier()))

# Voting Classifier with hard voting
vot_hard = VotingClassifier(estimators=estimator, voting='hard')
vot_hard.fit(X_train, y_train)
y_pred = vot_hard.predict(X_test)

# using the accuracy_score metric to measure accuracy
score = accuracy_score(y_test, y_pred)
print("Hard Voting Score %d" % score)

# Voting Classifier with soft voting
vot_soft = VotingClassifier(estimators=estimator, voting='soft')
vot_soft.fit(X_train, y_train)
y_pred = vot_soft.predict(X_test)

# using accuracy_score
score = accuracy_score(y_test, y_pred)
print("Soft Voting Score %d" % score)
```
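Note that SVC is constructed with probability=True: soft voting calls each estimator's predict_proba method, and scikit-learn's SVC only exposes probability estimates when that flag is set.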
Output:

```
Hard Voting Score 1
Soft Voting Score 1
```
Example:

Input: 4.7, 3.2, 1.3, 0.2
Output: Iris Setosa
In practice, the accuracy will usually be higher for soft voting, since it averages the probabilities of all the estimators combined rather than counting discrete votes. On our basic Iris dataset, however, the individual models already fit the data almost perfectly, so there is little difference between the two scores.
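As a quick sanity check, the fitted ensembles can classify the sample measurement from the example; this snippet assumes vot_hard, vot_soft, and iris from the code above are still in scope:

```python
# classifying the sample input from the example above
sample = [[4.7, 3.2, 1.3, 0.2]]  # sepal length, sepal width, petal length, petal width (cm)
print(iris.target_names[vot_hard.predict(sample)[0]])  # expected: setosa
print(iris.target_names[vot_soft.predict(sample)[0]])  # expected: setosa
```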