
What ‘not’ to do When You’re Starting to Learn Data Science

This article was published as a part of the Data Science Blogathon.

Introduction

Richard Feynman, the great physicist, said that to truly understand a subject, you should understand its fundamentals well enough to explain them to a 10-year-old.

Now, look back at how you were taught Mathematics, or any other subject for that matter, in pre-school: you started from the absolute building blocks. Digits, numbers, counting, arithmetic operations and so on. A longer path was taken to reach a particular level of understanding.

Do you see what I am getting at?

Look at the picture below. If you have to go from A to B, what do you generally do? Try to jump directly from A to B, right?

 

(Image: learning data science in a stepwise manner, going from A to B via intermediate steps)

When we talk about data science, freshers are so eager to jump from reading the problem statement to modelling that they miss the intermediate steps.

So, a Data Science project involves a much more holistic approach:

  1. Defining the problem that we want to solve
  2. Brainstorming and writing down the kinds of factors/variables that impact our problem
  3. Writing down hypotheses about how these variables can affect our target variable
  4. Gathering data for our project (in the case of hackathons, the dataset is already made available)
  5. Creating new variables out of the existing variables (feature engineering)
  6. Exploring our data to understand the distribution of the variables and to validate or disprove our hypotheses (here, you can use statistical tests, graphical measures, etc.)
  7. Preparing our data for modelling (data transformation, missing data imputation, creating more variables, etc.)
  8. The modelling part itself (identifying what model will be suitable for you rather than applying everything blindly)
  9. Variable selection, model tuning and validation, so that your model is more robust to unseen data
  10. Deployment and tracking of the model's performance on future data

Do you think about these steps before jumping to the modelling part?
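To make the list above concrete, here is a minimal sketch in Python of steps 4 to 9 on a hypothetical loans dataset. The file name, column names and the choice of model are all assumptions for illustration, not a prescription.

```python
# A minimal sketch of steps 4-9 on a hypothetical tabular dataset.
# File name, column names and the choice of model are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression

# Step 4: gather data (here, a hypothetical CSV extract)
df = pd.read_csv("loans.csv")

# Step 5: feature engineering - create a new variable from existing ones
df["emi_to_income"] = df["monthly_emi"] / df["monthly_income"]

# Step 6: quick exploration to check a hypothesis about the new variable
print(df.groupby("defaulted")["emi_to_income"].describe())

# Steps 7-9: prepare the data, model it, and validate on unseen data
X = df[["emi_to_income", "age", "credit_score"]]
y = df["defaulted"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = Pipeline([
    ("impute", SimpleImputer(strategy="median")),   # step 7: missing data imputation
    ("clf", LogisticRegression(max_iter=1000)),     # step 8: a deliberately simple model
])
print(cross_val_score(model, X_train, y_train, cv=5).mean())  # step 9: validation
model.fit(X_train, y_train)
print(model.score(X_test, y_test))                            # check on held-out data
```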

What I have described above is an example of the myths that exist among us freshers: that you can jump straight to modelling. It is very important to bust these myths so that we can focus on what is right and crucial for starting a career in this field.

If you are just starting your journey, this article is for you. Or, if you have already learnt a few concepts, well, this article will help you to realign and prioritise the important stuff first.

It’s my fifth month as Assistant Manager-Analytics at Paisabazaar, a fintech company, and my opinions about what a Data Science job involves have changed.
In other words, I have learnt what one’s priorities should be before applying for or starting such a role.

Let’s do some more myth-busting!

This is what I believe most of us freshers think-

  1.  SQL and Databases are secondary to Python.
  2.  ML Algorithms are just one-line codes in Scikit-learn.
  3.  Model Building is the most fascinating job on this planet.
  4.  Deep-Learning is a prerequisite to land up a job in Data Science.
  5.  Linear Regression and Logistic Regression cannot solve problems.
  6.  Writing a clean code is not necessary.

Let’s talk about them one-by-one.

1. SQL and Databases are secondary to Python

As a Data Scientist/Analyst, what’s your raw material to make wonderful dashboards/models/summaries? It’s Data, right? And where is Data stored? Obviously, in Databases. Now, if you cannot handle your raw material, how can you expect to do the cooking?

Rather, databases should be your friends. Using SQL/Hive to extract and summarize data is a skill set you should possess. To be honest, I didn’t work on it enough, and I am still trying to get used to it.

When you start a job, you have to understand the business and the kind of data it collects and uses to solve problems. To perform, you have to understand the context first. You are gonna spend a lot of time understanding the databases and tables that are used across the organisation. Only then can you start solving problems.
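For instance, a large part of the day-to-day work is simply summarising tables before any modelling happens. Here is a minimal sketch; the table and column names are invented for illustration, and SQLite stands in for whatever warehouse (Hive, Postgres, etc.) your organisation actually uses.

```python
# A minimal sketch of extracting and summarising data with SQL from Python.
# Table and column names are hypothetical; in practice this would be your
# organisation's warehouse (Hive, Postgres, etc.) instead of SQLite.
import sqlite3
import pandas as pd

conn = sqlite3.connect("company.db")  # hypothetical database file

query = """
SELECT product_category,
       COUNT(*)         AS n_loans,
       AVG(loan_amount) AS avg_loan_amount,
       SUM(defaulted)   AS n_defaults
FROM loans
WHERE disbursal_date >= '2021-01-01'
GROUP BY product_category
ORDER BY n_loans DESC;
"""

# The summary lands in a DataFrame, ready for further analysis or a dashboard.
summary = pd.read_sql_query(query, conn)
print(summary)
conn.close()
```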

2. ML Algorithms are just one-line codes in Scikit-learn-

Different kinds of models have different kinds of use cases. Modelling isn’t just about using the default model to get some output. Optimizing a model, making it parsimonious (so that it can handle new data effectively and give good results), tuning its parameters: you have to do all of that to design a model.

You have to do hyperparameter tuning, which takes a lot of time: choosing the right set of parameters for a particular model. Or, if you are using Linear Regression, you have to make sure that your dataset satisfies the assumptions of the model before applying it. There is also variable selection: not all variables impact the target variable, so you have to choose the right subset to make sure you have an impactful model.
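As an illustration, here is what tuning and variable selection might look like with scikit-learn; the dataset, parameter grid and feature counts are placeholders, not recommendations.

```python
# A sketch of hyperparameter tuning plus variable selection with scikit-learn.
# The parameter grid and the number of selected features are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

X, y = load_breast_cancer(return_X_y=True)

pipe = Pipeline([
    ("select", SelectKBest(score_func=f_classif)),        # variable selection
    ("model", RandomForestClassifier(random_state=42)),   # not just the default model
])

param_grid = {
    "select__k": [5, 10, 20],
    "model__n_estimators": [100, 300],
    "model__max_depth": [3, 5, None],
}

# Cross-validated search over the grid - this is where the time actually goes.
search = GridSearchCV(pipe, param_grid, cv=5, scoring="roc_auc")
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```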

3. Model Building is the most fascinating job on this planet-

As I mentioned in the last point, using a one-line code isn’t enough. Before you even start to model, you need to brainstorm about what kind of data you need to build the model on: the variables that can impact the output. Data collection from various databases/stakeholders, data manipulation, data transformation and data imputation take up around 80% of the time it takes to build a model.

Making sure you are using the right kind of data to build a model is much more important than running a model on it. You can build a skeleton only after you have enough bones, right?
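To give a rough sense of what that 80% looks like in code, here is a small sketch of joining sources, imputing missing values and creating new variables with pandas; the table and column names are hypothetical.

```python
# A sketch of the data-preparation work that precedes modelling:
# joining sources, imputing missing values, transforming and creating variables.
# Table and column names are hypothetical.
import numpy as np
import pandas as pd

customers = pd.read_csv("customers.csv")        # from one database/stakeholder
transactions = pd.read_csv("transactions.csv")  # from another

# Data manipulation: summarise transactions per customer and join onto customers
spend = (
    transactions.groupby("customer_id")["amount"]
    .agg(total_spend="sum", n_transactions="count")
    .reset_index()
)
df = customers.merge(spend, on="customer_id", how="left")

# Data imputation: customers with no transactions get zero spend;
# missing income is filled with the median
df[["total_spend", "n_transactions"]] = df[["total_spend", "n_transactions"]].fillna(0)
df["monthly_income"] = df["monthly_income"].fillna(df["monthly_income"].median())

# Data transformation and new variables
df["log_income"] = np.log1p(df["monthly_income"])
df["avg_ticket_size"] = df["total_spend"] / df["n_transactions"].replace(0, np.nan)
```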

Moreover, after you have picked a model, you have to deploy it, track it, and make improvements if it is not working as desired. You have to make sure you have spent a sufficient amount of time validating it before deploying it.

4. Deep-Learning is a pre-requisite to land up a job in Data Science-

I see that many freshers try to ‘deep’ dive into Deep Learning and Machine Learning to bag a job in Data Science. It’s almost like buying unwanted, extra ingredients to make a simple dish in the kitchen. These ingredients would definitely complement your profile, but what’s the point of advancing if you falter on the basics?

Moreover, the person who interviews you for the post might not know deep learning themselves. Yes, that’s very much a possibility.

Rather, your focus should be on getting the basics right: invest time in learning EDA, Linear/Logistic Regression, statistical tests and distributions, extracting data from databases, etc. Then, work on how to answer interview questions on these topics.

Have some projects under your name, where you have applied the techniques that I have mentioned above. Yes, that is gonna help you land a job rather than trying to learn everything in this domain.

When you are starting out, chances are you won’t be handed a modelling problem right away. That is gonna happen only after you are familiar with the day-to-day working of the business. That is why it is important to focus more on problem-solving skills than on learning complicated algorithms.

5. Linear Regression and Logistic Regression cannot solve problems-

Deep Learning models are ‘black-box’ models; in other words, they are non-interpretable. While they may well solve the problem at hand, you might not be able to explain the solution to the stakeholder. What if the concerned person asks you, “How is variable X impacting our target?” It’s tough to answer that question if you have implemented a black-box model.

Linear Regression and Logistic Regression are the starting points of modelling. When you use them, you can easily understand how different variables are affecting your target variable. There is gonna be a tradeoff between interpretability and accuracy, and most businesses choose interpretability, because it helps them make business decisions at a granular level.
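As a short sketch of why this matters, here is a logistic regression whose coefficients directly answer the “How is X impacting our target?” question; a built-in scikit-learn dataset stands in for real business data.

```python
# A sketch of reading off variable effects from a logistic regression.
# Uses a built-in scikit-learn dataset purely for illustration.
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)  # scale so coefficients are comparable
y = data.target

clf = LogisticRegression(max_iter=5000).fit(X, y)

# Each coefficient gives the direction and relative strength of a variable's
# effect on the target - exactly the stakeholder's question.
effects = pd.Series(clf.coef_[0], index=data.feature_names).sort_values()
print(effects.head())   # variables pushing the prediction towards class 0
print(effects.tail())   # variables pushing the prediction towards class 1
```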

Spend enough time on regression, and move ahead only when you are comfortable. Do not forget to understand the assumptions behind every model.

6. Writing a clean code is not necessary-

When you are in a job, many tasks will be repetitive. If you sit down to write that code from scratch every time you are given the same task, you are wasting a lot of time that could be used elsewhere.

Your code should be efficient. I used to think that as long as I was completing a task, how I did it did not matter. The truth is that if you are not learning from every task that you do, there is something wrong with your approach.

If some technique is slightly tough to learn but can help you do your tasks efficiently, invest some time to learn it. It is best to prepare end-to-end code that can handle repetitive tasks, and invest the rest of your time brainstorming about new problems.

Mind you- I used the word ‘invest’ instead of ‘spend’.
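As a trivial illustration, a report you generate every week is better wrapped once in a reusable function than rewritten every time; the metric and column names below are made up.

```python
# A sketch of turning a repetitive task into a reusable, documented function.
# Column names and the metric are illustrative.
import pandas as pd

def weekly_default_report(df: pd.DataFrame, week_col: str = "week",
                          flag_col: str = "defaulted") -> pd.DataFrame:
    """Return the default rate and loan volume per week, ready to share."""
    report = (
        df.groupby(week_col)[flag_col]
          .agg(default_rate="mean", n_loans="size")
          .reset_index()
    )
    report["default_rate"] = report["default_rate"].round(4)
    return report

# Next week, the same task is one call instead of a rewritten script:
# report = weekly_default_report(pd.read_csv("loans_this_week.csv"))
```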

————————————————————————————————————————————————————

If you are still reading this article, I wish you all the best. I hope this helps you understand what to spend time on and what not to spend time on when you are just starting.

In addition to that, you can connect with me by clicking here. I will be more than happy to have meaningful discussions with you on this topic. It’s the beauty of this field: there’s scope for a lot of peer learning.

Let’s go from A to B via X, Y and Z!

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.

Sarthak Arora

17 May 2021
