Skewed data is common in data science; skewness measures the degree of asymmetry of a distribution relative to a normal distribution. For example, below is a plot of the house prices from Kaggle's House Prices Competition, which is right skewed, meaning a minority of the values are very large.
Why do we care if the data is skewed? If the response variable is skewed, as in Kaggle's House Prices Competition, the model will be trained on a much larger number of moderately priced homes and will be less likely to successfully predict the price of the most expensive houses. The concept is the same as training a model on imbalanced categorical classes. If the values of an independent variable (feature) are skewed, then depending on the model, the skewness may violate model assumptions (e.g. logistic regression) or may impair the interpretation of feature importance.
We can objectively determine whether a variable deviates from normality using the Shapiro-Wilk test. The null hypothesis for this test is that the data is a sample from a normal distribution, so a p-value less than 0.05 indicates a significant departure from normality. We'll apply the test to the response variable Sale Price above, labeled "resp", using scipy.stats in Python.
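A minimal sketch of the test, assuming the sale prices have already been loaded into a pandas Series named "resp":

```python
from scipy import stats

# "resp" is assumed to be a pandas Series (or array) of sale prices
stat, p_value = stats.shapiro(resp)
print(f"Shapiro-Wilk statistic: {stat:.4f}, p-value: {p_value:.2e}")
```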
Not surprisingly, the p-value is less than 0.05, so we can conclude that the variable is not normally distributed. A more convenient way of evaluating skewness is with pandas' .skew method, which calculates the adjusted Fisher–Pearson standardized moment coefficient for every column in a dataframe. We can calculate it for all the features in Kaggle's House Prices dataset (labeled "df") simultaneously with the following code.
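A sketch, assuming the dataset has been loaded into a dataframe named "df":

```python
# Skewness of every numeric column, largest first
skew_values = df.skew(numeric_only=True).sort_values(ascending=False)
print(skew_values)
```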
A few of the variables, like Pool Area, are highly right skewed due to a large number of zeros; this is okay. Some models, like decision trees, are fairly robust to skewed features.
We can address skewed variables by transforming them (i.e. applying the same function to each value). Common transformations include square root (sqrt(x)), logarithmic (log(x)), and reciprocal (1/x). We’ll apply each in Python to the right-skewed response variable Sale Price.
Square Root Transformation
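A minimal sketch, assuming "resp" is the pandas Series of sale prices from above (non-negative, so the square root is defined):

```python
import numpy as np
import matplotlib.pyplot as plt

# Square root transformation of the sale prices
resp_sqrt = np.sqrt(resp)
resp_sqrt.hist(bins=50)
plt.show()
```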
After transforming, the data is definitely less skewed, but there is still a long right tail.
Reciprocal Transformation
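Again a sketch on the same assumed "resp" series; the reciprocal is valid here because sale prices are strictly positive:

```python
import matplotlib.pyplot as plt

# Reciprocal transformation of the sale prices
resp_recip = 1 / resp
resp_recip.hist(bins=50)
plt.show()
```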
Still not great; the transformed distribution is not quite symmetrical.
Log Transformation
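A sketch of the log transformation on the same assumed series; like the reciprocal, it requires strictly positive values:

```python
import numpy as np
import matplotlib.pyplot as plt

# Log transformation of the sale prices
resp_log = np.log(resp)
resp_log.hist(bins=50)
plt.show()
```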
The log transformation seems to be the best, as the distribution of transformed sale prices is the most symmetrical.
Box Cox Transformation
An alternative to manually trying a variety of transformations is the Box Cox transformation. For each variable, a Box Cox transformation estimates the value of lambda between -5 and 5 that maximizes the normality of the transformed data using the equation below.
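For strictly positive y, the one-parameter Box Cox transformation is

$$
y^{(\lambda)} =
\begin{cases}
\dfrac{y^{\lambda} - 1}{\lambda}, & \lambda \neq 0 \\
\ln(y), & \lambda = 0
\end{cases}
$$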
For negative values of lambda, the transformation performs a variant of the reciprocal of the variable. At a lambda of zero, the variable is log transformed, and for positive lambda values, the variable is raised to the power of lambda. We can apply "boxcox" to all the skewed variables in the dataframe "df" using scipy.stats.
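A sketch of how this might look, assuming "df" is the dataframe from above; the column names are illustrative, and note that boxcox requires strictly positive values, so zero-heavy features like Pool Area would need shifting or different handling:

```python
from scipy import stats

# Illustrative list of skewed, strictly positive columns
skewed_cols = ["SalePrice", "LotArea", "GrLivArea"]

for col in skewed_cols:
    # boxcox returns the transformed values and the fitted lambda
    df[col], fitted_lambda = stats.boxcox(df[col])
    print(f"{col}: fitted lambda = {fitted_lambda:.3f}")
```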
Skewness is reduced quite a bit! The Box Cox transformation is not a panacea for skew, however; some variables cannot be transformed to closely approximate a normal distribution.
Transforming skewed data is one critical step during the data cleaning process. See this article to learn about dealing with imbalanced categorical classes.