
Interpretable Machine Learning – Fairness, Accountability, and Transparency in ML Systems

Editor’s note: Sayak is a speaker for ODSC West in San Francisco this November! Be sure to check out his talk, “Interpretable Machine Learning – Fairness, Accountability and Transparency in ML systems,” there!

The problem is that it is much harder to evaluate machine learning systems than to train them. "Evaluating a machine learning model responsibly requires doing more than just calculating loss metrics. Before putting a model into production, it's critical to audit training data and evaluate predictions for bias." – Machine Learning Crash Course. There is no shortage of real-world examples that show the dire consequences of bias in machine learning systems.
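Auditing predictions for bias usually starts with sliced evaluation: computing the same metrics separately for each subgroup rather than relying on one aggregate number. Here is a minimal sketch using pandas and scikit-learn; the DataFrame and its `group` column are hypothetical stand-ins for your own labels, predictions, and sensitive attribute.

```python
import pandas as pd
from sklearn.metrics import accuracy_score, recall_score

# Hypothetical evaluation DataFrame: true labels, model predictions,
# and a sensitive attribute (e.g., an age bracket or region).
df = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 0],
    "y_pred": [1, 0, 0, 1, 0, 1, 1, 0],
    "group":  ["A", "A", "A", "B", "B", "B", "B", "A"],
})

# Sliced evaluation: the same metrics, computed per subgroup.
for group, slice_ in df.groupby("group"):
    acc = accuracy_score(slice_["y_true"], slice_["y_pred"])
    tpr = recall_score(slice_["y_true"], slice_["y_pred"])  # true positive rate
    print(f"group={group}  accuracy={acc:.2f}  TPR={tpr:.2f}")

# Large gaps between the per-group numbers are a red flag,
# even when the overall (aggregate) metric looks healthy.
```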

[Related article: Layer-wise Relevance Propagation Means More Interpretable Deep Learning]


Data fuels machine learning systems, training data in particular. When biased data is fed to a machine learning system, the consequences are bound to be unpleasant. This often prevents machine learning from serving everyone effectively. As Cathy O'Neil puts it:

The privileged are processed by people; the poor are processed by algorithms.  

Why is machine learning interpretability difficult?

"The problem is that a single metric, such as classification accuracy, is an incomplete description of most real-world tasks." – Doshi-Velez and Kim

Machine learning models can technically be considered functions: functions that represent some mapping from inputs to outputs (in the supervised setting). To learn these functions, machine learning models combine and recombine the features present in the data in many arbitrary ways. Disaggregating such functions into reason codes based on single input features is therefore difficult.
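A toy example makes this concrete. The sketch below uses a hypothetical two-feature "model", f(x1, x2) = x1 * x2, to show that the effect of one feature depends entirely on the value of the other, so no fixed per-feature reason code can describe the model faithfully.

```python
# A tiny "model" with an interaction between its two features.
def model(x1, x2):
    return x1 * x2

# The effect of changing x1 from 0 to 1 depends on x2:
for x2 in [0.0, 1.0, 5.0]:
    effect_of_x1 = model(1.0, x2) - model(0.0, x2)
    print(f"x2={x2}: changing x1 from 0 to 1 changes the output by {effect_of_x1}")

# Output: 0.0, 1.0, 5.0. There is no single, context-free
# "reason code" for x1; its importance is entangled with x2.
```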


What can we do about it?

  • Reconsider the data: How closely does the data represent the problem?

It is common to see data points that are not representative of the problem you are dealing with. Consider, for example, the task of detecting the age of individuals from their front-facing images. You would not want images of toys, dogs, or airplanes in the dataset, just as you would not want images of this quality:

[Image: an example of a low-quality, unusable input image]

Try to map this point to the idea of relevance: if we were teaching a class on differentiation, for example, we would not want lessons on how to solve a quadratic equation in that same class. A quick automated audit can catch the most obvious offenders, as sketched below.
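The sketch below (assuming Pillow is installed; the folder name and size threshold are hypothetical) flags images that fail to decode or are too small to be usable. It is a starting point, not a substitute for manual review or a proper face detector.

```python
from pathlib import Path
from PIL import Image

MIN_SIDE = 64  # hypothetical minimum acceptable resolution

def audit_images(folder: str):
    """Yield (path, problem) pairs for images that look unusable."""
    for path in Path(folder).glob("*.jpg"):
        try:
            with Image.open(path) as img:
                img.verify()  # cheap integrity check, no full decode
            with Image.open(path) as img:
                w, h = img.size
                if min(w, h) < MIN_SIDE:
                    yield path, f"too small ({w}x{h})"
        except Exception as exc:
            yield path, f"unreadable ({exc})"

for path, problem in audit_images("data/faces"):  # hypothetical folder
    print(path, "->", problem)
```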

  • Loop in domain experts: 

They might provide guidance or suggest changes to what you are working on, which can help your project have a longer-term positive impact.

  • Train the models to account for bias:

What do outliers look like, and how does your model handle them? What implicit assumptions might a system be making, and how might you model or mitigate those? One way to start probing the outlier question is sketched below.
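A concrete tactic is to flag anomalous training points with an unsupervised detector and then inspect how the model treats them. A minimal sketch using scikit-learn's IsolationForest, with a synthetic feature matrix standing in for real training data:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic training features: a dense cluster plus a few extreme points.
X = np.vstack([
    rng.normal(0, 1, size=(200, 2)),   # bulk of the data
    rng.normal(8, 0.5, size=(5, 2)),   # a handful of outliers
])

# Fit an unsupervised outlier detector; -1 marks predicted outliers.
detector = IsolationForest(contamination=0.05, random_state=0)
labels = detector.fit_predict(X)

outliers = X[labels == -1]
print(f"flagged {len(outliers)} of {len(X)} points as outliers")
# Inspect these rows by hand: are they noise to drop, or an
# under-represented group the model must handle fairly?
```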

  • Interpret outcomes:

Is the machine learning system overgeneralizing? If a human were to perform the task, what would appropriate social behavior look like? What interpersonal cues might there be that would make an ML system perform very differently than a human? A simple counterfactual probe, sketched below, can surface some of these differences.
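The idea of a counterfactual flip test: change only an attribute that should be irrelevant to the decision and measure how often the prediction changes. The sketch below is a generic helper; `model`, the feature layout, and the sensitive column index are all hypothetical.

```python
import numpy as np

def counterfactual_flip_test(model, X, sensitive_col):
    """Flip a binary sensitive feature and measure how often predictions change."""
    X_flipped = X.copy()
    X_flipped[:, sensitive_col] = 1 - X_flipped[:, sensitive_col]
    original = model.predict(X)
    flipped = model.predict(X_flipped)
    return np.mean(original != flipped)

# Usage (hypothetical trained classifier `clf` and test matrix `X_test`):
# rate = counterfactual_flip_test(clf, X_test, sensitive_col=3)
# print(f"{rate:.1%} of predictions change when the sensitive feature is flipped")
```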

[Related article: AI Ethics: Avoiding Our Big Questions]

Machine learning interpretability scopes

  • Algorithm transparency: Understanding how the learning algorithm trains a model
  • Global model interpretability: Understanding how a trained model makes its predictions overall
  • Local model interpretability: Finding the reasons behind a particular prediction
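To make the global and local scopes concrete, here is a minimal sketch using an inherently interpretable model (logistic regression on a scikit-learn toy dataset): the learned coefficients give a global view of what the model cares about, and per-feature coefficient-times-value products give a local reason for one prediction.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X, y = data.data, data.target

pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
pipe.fit(X, y)
clf = pipe.named_steps["logisticregression"]

# Global interpretability: which features matter most overall?
coef = clf.coef_[0]
top = np.argsort(np.abs(coef))[::-1][:3]
print("globally most influential features:", [data.feature_names[i] for i in top])

# Local interpretability: why this score for one particular example?
x0 = pipe.named_steps["standardscaler"].transform(X[:1])[0]
contributions = coef * x0  # per-feature contribution to the logit
top_local = np.argsort(np.abs(contributions))[::-1][:3]
print("top reasons for the first prediction:",
      [(data.feature_names[i], round(contributions[i], 2)) for i in top_local])
```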

About the author:

Sayak loves everything about deep learning. He goes by the motto of understanding complex things and helping people understand them as easily as possible. Sayak is an extensive blogger, and all of his blogs can be found here. He is also working with his friends on applying deep learning to phonocardiogram classification. Sayak is a FloydHub AI Writer as well. He is always open to discussing novel ideas and taking them forward to implementation. You can connect with Sayak on LinkedIn and Twitter.

