
Machine Learning Life-cycle Explained!

This article was published as a part of the Data Science Blogathon

Introduction

This blog walks through the Machine Learning life-cycle, from a business problem to a solution and a deployed model. It helps beginners and mid-level practitioners connect the dots and build an end-to-end ML model.

Here are the steps involved in an ML model lifecycle.

Step 1: Business context and define a problem

Step 2: Translating to AI problem and approach

Step 3: Milestones and Planning

Step 4: Data gathering and Understanding

Step 5: Data preparation

Step 6: Data Cleaning

Step 7: Exploratory data analysis

Step 8: Feature engineering and selection

Step 9: ML Model assumption checks

Step 10: Data preparation for modelling

Step 11: Model Building

Step 12: Model Validation & Evaluation

Step 13: Predictions & Model deployment

All the steps mentioned above are explained in simple words below. A telecom customer churn use case is used as a running example throughout the blog to keep things simple.

Step 1: Business context and define a problem

 


Understand the business and the use case you are working with, and define a proper problem statement. Asking business stakeholders the right questions to gather the required information plays a prominent role.

Challenges involved:

The main challenge here is understanding the business context and figuring out the exact challenges the business is facing.

Example:

A basic understanding of the telecom industry and how it works is needed. In the churn use case, the problem statement is to identify the drivers of un-subscription and to predict which existing customers are at high risk of un-subscribing in the near future, so that a retention strategy can be planned.

Step 2: Translating to AI problem and approach

This step forms the base for all the following steps. If the problem is not translated into a proper AI problem, the final model will not be deployable for the business use case. The business problem needs to be framed from an AI point of view, and since many approaches are usually possible, choosing the right one is a skill. The end-to-end framework of the approach needs to be planned in this step.

Challenges involved:

Translating a business problem into an AI problem is the main challenge the industry faces. Many of the models built never become production-ready because of mismatched expectations on the business side. This happens when the business context/problem is unclear or when the business problem is not properly translated into an AI problem.

Example:

The churn model can be framed as a machine-learning classification problem. Users with similar behavioural patterns can also be grouped, which helps in planning a retention strategy.

Step 3: Milestones and Planning

Setting milestones and planning a timeline helps in tracking the progress of the project, resource planning, and deliverables.

Challenges involved:

Planning computational and human resource allocation, and estimating a realistic deadline for each milestone.

Example:

If the data is too large to handle, computation on local machines will take longer. Such cases need to be anticipated and the deadlines planned accordingly.

Step 4: Data gathering and understanding

The data is not always readily available in proper formats, nor with all the features required to build a model. The required data needs to be gathered from business people. Data understanding involves knowing which features the data contains, what exactly they represent, and how they are derived.

Challenges involved:

Some potential features that affect the target may not have been captured by the business in the past. A very good understanding of the provided features is needed.

Example:

“Cus_Intenet” is one of the provided features, and its meaning is not clear from the name alone. What it represents needs to be clarified with the business.

Step 5: Data preparation

The data taken from the client contains the required features, but they may not all be in a single table. When dealing with larger datasets, the data needs to be merged from different databases. An entity-relationship diagram (ERD) helps in understanding the raw data and preparing it in the required format.

Challenges involved:

Data can come from different database management systems, such as SQL or Oracle DBMS. Prior knowledge of combining data from different sources is needed.

Example:

Data for internet users and non-internet users can be in different databases. A user’s internet balance and main balance can be in different files, which need to be combined based on UserID.
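A minimal sketch of this kind of merge with pandas; the column names (`InternetBalance`, `MainBalance`) and values are made up for illustration:

```python
import pandas as pd

# Hypothetical tables: internet balance and main balance stored separately
internet = pd.DataFrame({"UserID": [1, 2, 3], "InternetBalance": [500, 0, 120]})
main = pd.DataFrame({"UserID": [1, 2, 4], "MainBalance": [30, 55, 10]})

# Merge on the shared key; an outer join keeps users present in only one table
merged = pd.merge(internet, main, on="UserID", how="outer")
print(merged)
```

An inner join would instead keep only users present in both tables; which join is right depends on the business question.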

[Figure: Features of the Telecom Churn dataset]

Step 6: Data Cleaning

Below are the points that need to be addressed in this step:
1. Duplicates
2. Data validity check
3. Missing values
4. Outliers

1. Duplicates

The data may contain duplicate entries, which need to be removed in most applications.

Example: The same customer’s data might appear in multiple entries, which can be identified using the customer ID.
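A sketch of duplicate removal with pandas, using a made-up `CustomerID` column:

```python
import pandas as pd

df = pd.DataFrame({"CustomerID": [101, 102, 102, 103],
                   "MonthlyCharges": [29.9, 45.0, 45.0, 60.5]})

# Keep the first occurrence of each customer, drop exact repeats
deduped = df.drop_duplicates(subset="CustomerID", keep="first")
print(len(df), "->", len(deduped))
```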

2. Data validity check

The data needs to be validated based on the features it contains.

Example: A customer bill containing negative values is invalid.

3. Missing values

A missing-value imputation strategy needs to be chosen based on the business context.

Example: If the data is normally distributed, mean imputation can be performed. If the difference between the mean and median is large, median imputation is preferred.
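The mean/median choice can be sketched with pandas; the charge values below are made up, and note how one large bill pulls the mean far above the median:

```python
import pandas as pd

charges = pd.Series([20.0, 25.0, 30.0, None, 400.0])

# Mean is dragged up by the 400 value; the median is robust to it
mean_filled = charges.fillna(charges.mean())      # fills with 118.75
median_filled = charges.fillna(charges.median())  # fills with 27.5
print(charges.mean(), charges.median())
```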

4. Outliers

An outlier can be a valid extreme value or a data-entry mistake. Understand what kind of outliers the data contains, and address the issue based on the outlier type.

Example: The age of a customer cannot be 500; it is a data-entry mistake.
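One common way to flag such values is the IQR rule, sketched here with pandas on made-up ages:

```python
import pandas as pd

age = pd.Series([22, 35, 41, 29, 500])  # 500 is a data-entry mistake

# Flag values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]
q1, q3 = age.quantile(0.25), age.quantile(0.75)
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = age[(age < lower) | (age > upper)]
print(outliers.tolist())
```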

Step 7: Exploratory data analysis

This step gives more insights into the data and how it relates to the target variable. It mainly involves:
1. Uni-variate analysis
2. Bi-variate/Multi-variate analysis
3. Pivots
4. Visualization and Data Insights

1. Uni-variate analysis

Patterns in an individual feature can be explored using frequency distribution tables, bar charts, histograms, and box plots.

Example: The distribution of monthly charges; the number of churners and non-churners.

[Figure: Monthly charges distribution]
[Figure: Churn count]
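As a quick sketch of uni-variate analysis, a frequency table for a categorical feature can be produced with pandas (the values below are made up):

```python
import pandas as pd

churn = pd.Series(["No", "Yes", "No", "No", "Yes", "No"])

# Frequency table for a single categorical feature
counts = churn.value_counts()
print(counts)
```

For a numeric feature, a histogram (e.g. `series.plot.hist()`) plays the same role.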

2. Bi-variate/Multi-variate analysis

The behaviour of the target variable with respect to the independent features can be studied using scatter plots, correlation coefficients, and regression analysis.

Example: Identifying the churn based on user tenure and total charges.

[Figure: Multi-variate analysis on Churn, TotalCharges and tenure]
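A small sketch of bi-variate analysis, computing the Pearson correlation between two made-up numeric features:

```python
import pandas as pd

df = pd.DataFrame({
    "tenure":       [1, 3, 10, 24, 48],
    "TotalCharges": [30, 95, 310, 720, 1450],
})

# Pearson correlation between two numeric features (close to 1 here,
# since total charges grow almost linearly with tenure in this toy data)
corr = df["tenure"].corr(df["TotalCharges"])
print(round(corr, 3))
```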

3. Pivots

Pivots allow us to draw insights from the data quickly.

[Figure: Pivot table based on the churn column]
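A pivot table of this kind can be sketched with pandas (the `MonthlyCharges` values are made up):

```python
import pandas as pd

df = pd.DataFrame({
    "Churn":          ["Yes", "No", "Yes", "No", "No"],
    "MonthlyCharges": [80.0, 50.0, 90.0, 40.0, 60.0],
})

# Average monthly charges per churn group
pivot = df.pivot_table(values="MonthlyCharges", index="Churn", aggfunc="mean")
print(pivot)
```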

4. Visualization and Data insights

Based on the visualizations in sub-steps 1–3, insights into the data need to be observed and noted. These insights are the takeaway points from this step.

Example: Identifying the gender and age group of churners.

Step 8: Feature engineering and selection

Feature engineering involves identifying the right drivers/features that affect the target variable and deriving new features from existing ones.
Based on feature importance, some features can be removed from the data, which helps reduce its size. Feature importance can be measured, for example, by correlation, information gain (IG), or a Random Forest model.

Challenges involved:

The right drivers/features that affect the target variable need to be identified, which requires a solid understanding of the business context and use case.

Example:

If the data doesn’t contain “customer tenure”, which is important, it needs to be derived from the subscription start date and the current date.

[Figure: Random Forest based feature importance graph]
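A sketch of Random Forest based feature importance with scikit-learn, on a tiny synthetic sample (real churn data would have far more rows and features):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Tiny synthetic sample: short-tenure, high-charge customers churn
X = pd.DataFrame({
    "tenure":         [1, 2, 30, 40, 3, 50, 2, 45],
    "MonthlyCharges": [90, 85, 40, 35, 95, 30, 88, 32],
})
y = [1, 1, 0, 0, 1, 0, 1, 0]  # 1 = churned

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
for name, score in zip(X.columns, model.feature_importances_):
    print(name, round(score, 3))
```

The importances sum to 1; low-importance features are candidates for removal.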

Step 9: ML model assumption checks

Some ML models have assumptions that need to be checked before proceeding to model building.

Challenges involved:

In most cases, model assumptions don’t hold for real-world data.

Example:

The linear regression model assumes:

1. The residuals are normally distributed.
2. The relation between the dependent and independent variables is linear.
3. The residuals have a constant variance (homoscedasticity).
4. There is no multicollinearity in the data.
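The multicollinearity assumption can be checked with the variance inflation factor (VIF); here is a numpy-only sketch (the two columns are deliberately near-collinear, so both VIFs come out large):

```python
import numpy as np

def vif(X):
    """VIF per column: 1 / (1 - R^2) when regressed on the other columns."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([others, np.ones(len(y))])  # add intercept
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out

# Second column is roughly twice the first -> strong multicollinearity
X = np.array([[1, 2.1], [2, 3.9], [3, 6.2], [4, 8.1], [5, 9.8]])
print(vif(X))  # values above ~5-10 are a common warning sign
```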

Step 10: Data Preparation for Modelling

Below are the topics that can be covered in this step :
1. Creating dummy variables
2. Over Sampling and Under Sampling (if the data is imbalanced)
3. Split the data into train and test

1. Create Dummy variables

Features in the data can be categorical or continuous. For a linear regression model, categorical data needs to be handled by creating dummy variables.
Example: A customer’s gender (Male/Female) is categorical; in regression it needs to be encoded as dummy variables (e.g., gender_Female, gender_Male).
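Dummy variables can be created with `pd.get_dummies`; a minimal sketch, where dropping the first level avoids the dummy-variable trap in regression:

```python
import pandas as pd

df = pd.DataFrame({"gender": ["Female", "Male", "Male", "Female"]})

# One-hot encode; drop_first avoids perfectly collinear dummy columns
dummies = pd.get_dummies(df["gender"], prefix="gender", drop_first=True)
print(dummies.columns.tolist())
```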

2. Over Sampling and Under Sampling

Oversampling and Undersampling are the techniques used when the data is imbalanced.

[Figure: Resampling (oversampling vs. undersampling)]

Example: The data may contain a churn to non-churn ratio of 95:5. In this case, the model cannot properly learn the behaviour of the minority class.
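Random oversampling of the minority class can be sketched with plain pandas (dedicated libraries such as imbalanced-learn offer smarter techniques like SMOTE):

```python
import pandas as pd

# Imbalanced toy data: 95 majority rows, 5 minority rows
df = pd.DataFrame({"Churn": ["No"] * 95 + ["Yes"] * 5})

majority = df[df["Churn"] == "No"]
minority = df[df["Churn"] == "Yes"]

# Draw minority rows with replacement until both classes are the same size
oversampled = pd.concat([
    majority,
    minority.sample(len(majority), replace=True, random_state=0),
])
print(oversampled["Churn"].value_counts().to_dict())
```

Undersampling works the other way round: sample the majority class down to the minority size (at the cost of throwing data away).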

3. Split the data into train and test

Test data needs to be held out for checking the model’s accuracy. The most common train-to-test ratios are 70:30 and 80:20.
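A stratified 70:30 split can be sketched with scikit-learn (the data below is synthetic):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.DataFrame({"tenure": range(100), "Churn": [0, 1] * 50})

X, y = df[["tenure"]], df["Churn"]
# stratify keeps the churn ratio identical in both splits
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)
print(len(X_train), len(X_test))
```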

Step 11: Model Building

Model building is the process of developing a probabilistic model that best describes the relationship between the independent and dependent variables. Various ML models are built based on the problem statement.

Example:

Customer classification can be done using decision trees, a Random Forest classifier, Naive Bayes models, and many more.

Step 12: Model Validation & Evaluation

This step covers
1. Testing the model
2. Tuning the model
3. Cross-validation
4. Model evaluation metrics trade-off
5. Model Underfitting/Overfitting

1. Testing the model :

Run the model on the test data and evaluate its performance using the right metric for the business use case.

2. Tuning the model :

Model tuning involves improving model performance by iterating over parameter values during model building. After fine-tuning, the model needs to be re-built.
To know more about hyperparameter tuning, refer to Hyperparameter tuning.
Example: “GridSearchCV” in sklearn helps in finding the best combination of hyperparameter values.
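A minimal `GridSearchCV` sketch on synthetic data (the grid values are arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)

# Exhaustive search over a small hyperparameter grid with 5-fold CV
grid = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"max_depth": [2, 4, 6], "min_samples_leaf": [1, 5]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_)
```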

3. Cross-validation :

Cross-validation is used to evaluate how the model will perform on an independent test dataset. Some of the cross-validation techniques are:
1. K-fold cross-validation
2. Stratified k-fold cross-validation
To know more about cross-validation techniques, refer to Cross-validation techniques.
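A sketch of stratified k-fold cross-validation with scikit-learn on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=150, random_state=1)

# Stratified 5-fold CV preserves the class ratio in every fold
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
print(scores.mean().round(3))
```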

4. Model Evaluation metrics trade-off :

Trade-offs help us find the sweet spot, or middle ground. Machine learning mostly deals with two trade-offs:
1. Bias-variance trade-off
2. Precision-recall trade-off
To know more about these trade-offs, refer to Bias-variance & Precision-Recall Tradeoff.

5. Model Underfitting and Overfitting :

Overfitting is the case where the model captures all the patterns in the training data, including noise, and fails to perform on the test data.
Underfitting is the case where the model does not learn properly from the training data and also performs poorly on the test data.
[Figure: Overfitting and underfitting]

Challenges involved :

• Model overfitting and underfitting.
• Choosing the right model evaluation metric based on the business context.

Step 13: Prediction & Model deployment

Predict and review the outputs after fine-tuning the model in Step 12.
This step covers model deployment, i.e., putting models into production for real-time use. The topics below fall under this final step.
1. Scaling the model
2. Model Deployment
3. Business adoption and consumption
4. A/B testing
5. Business KPI
6. Measure performance and monitor
7. Feedback loop

Model deployment is out of scope for this article. For more information, please check “Deploying Machine Learning Models“.

With this, we reach the end of the article; I hope it helps you connect the dots in understanding the ML life-cycle. Thanks to Mr Akshay Kulakarni for the mentorship.

References:
1. Lectures on Machine Learning by Mr Akshay Kulakarni as part of the REVA University M.Tech program.

The media shown in this article are not owned by Analytics Vidhya and is used at the Author’s discretion. 
