
Layer-wise Relevance Propagation Means More Interpretable Deep Learning

Wojciech Samek is head of the Machine Learning Group at the Fraunhofer Heinrich Hertz Institute. At ODSC Europe 2018, he spoke about an active area of deep learning research: interpretability, and in particular layer-wise relevance propagation. Samek launched his lecture with the following preface on the rising importance of interpretability for deep learning models:

[Related Article: How to Leverage Pre-Trained Layers in Image Classification]

“In the last few years, we’ve seen pretty amazing results and performances of ML, especially deep neural networks beating human professional players in games like Go and poker, along with other computer games. But we’re also seeing amazing performance in areas like computer vision, natural language processing, speech processing, and so on,” Samek said. “However, until now, many of these models have been applied in a ‘black box’ manner, which means you have some input, for example, an image of skin cancer, and you get a prediction, but we’re not able to understand why these networks came to their predictions or to verify that they are doing what we expect them to do. Interpretability is an important topic because it allows us to establish trust and to verify predictions, for example in the medical context. This is very important since we don’t want to trust the black box when it comes to very important decisions like a medical diagnosis or autonomous driving.”

Slide copyright Dr. Wojciech Samek, ODSC Europe 2018

In many problems, interpretability is as important as the prediction itself. Trusting a black-box system is not always an option.

Samek discussed an interpretability method called Layer-wise Relevance Propagation (LRP), a general approach for explaining the predictions of AI models. Mathematically, it can be interpreted as a Deep Taylor Decomposition of the neural network. The method was originally presented in the 2015 paper “On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation” by Bach et al.
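For reference, the propagation rule most commonly quoted from that paper, in its epsilon-stabilized form, redistributes the relevance R_k of a neuron onto the neurons j feeding into it in proportion to their contributions a_j w_jk; the exact variants Samek discusses may differ slightly from this sketch:

```latex
% LRP-epsilon rule (one common formulation; the notation is an assumption, not from the slides):
% a_j: activation of neuron j, w_{jk}: weight from j to k, R_k: relevance of neuron k
R_j = \sum_k \frac{a_j \, w_{jk}}{\epsilon + \sum_{j'} a_{j'} \, w_{j'k}} \, R_k,
\qquad \text{so that} \qquad \sum_j R_j \approx \sum_k R_k .
```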

Slide copyright Dr. Wojciech Samek, ODSC Europe 2018

In the example above, the algorithm takes the input image and propagates it through the network in a forward pass, which activates the “rooster” output neuron. LRP then redistributes the evidence for the class “rooster” backward, layer by layer, until it reaches the input space, yielding a heatmap over the pixels.
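To make that redistribution step concrete, here is a minimal NumPy sketch of the LRP-epsilon rule on a toy two-layer ReLU network; the network, weights, and epsilon value are illustrative assumptions, not taken from the talk:

```python
# Minimal sketch of LRP-epsilon on a toy fully connected ReLU network.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 6))   # input dim 4 -> hidden dim 6
W2 = rng.normal(size=(6, 3))   # hidden dim 6 -> 3 classes
x = rng.random(4)              # a non-negative stand-in for an input "image"

# Forward pass, keeping every layer's activations
a1 = np.maximum(0.0, x @ W1)
scores = a1 @ W2

def lrp_dense(a, W, R_out, eps=1e-6):
    """LRP-epsilon for one dense layer: redistribute the relevance R_out of the
    layer's outputs onto its inputs in proportion to the contributions a_j * w_jk."""
    z = a @ W                                   # pre-activations of the layer above
    z = z + eps * np.where(z >= 0, 1.0, -1.0)   # epsilon stabilizer, keeps the sign
    s = R_out / z                               # relevance per unit of pre-activation
    return a * (W @ s)                          # R_j = a_j * sum_k w_jk * s_k

# Start at the predicted class: its score is the total relevance to explain
c = int(scores.argmax())
R_top = np.zeros_like(scores)
R_top[c] = scores[c]

R_hidden = lrp_dense(a1, W2, R_top)    # relevance of the hidden neurons
R_input = lrp_dense(x, W1, R_hidden)   # relevance of the input features (the "heatmap")

print("predicted class:", c)
print("input relevances:", np.round(R_input, 3))
print("relevance is approximately conserved:", R_input.sum(), "vs", scores[c])
```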

The presentation resembles a graduate seminar of the kind an academic department would offer to master’s and doctoral students. Reviewing the academic paper first makes the ODSC talk more approachable. Samek’s research group’s website offers a wealth of additional material for learning more about LRP.

Samek also presents the pros and cons of popular AI explainability techniques, offering a comparative analysis of methods such as sensitivity analysis and Local Interpretable Model-Agnostic Explanations (LIME) with respect to LRP.
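For contrast, here is an equally minimal sketch of one of those baselines, gradient-based sensitivity analysis, reusing the toy variables (x, W1, W2, scores, c) from the snippet above; sensitivity asks what would change the class score, while LRP asks what made it large:

```python
# Sensitivity analysis on the same toy network: the saliency of each input is the
# magnitude of the gradient of the winning class score with respect to that input.
# Reuses x, W1, W2, scores, c from the previous sketch (an illustrative assumption).
grad_scores = np.zeros_like(scores)
grad_scores[c] = 1.0                        # d score_c / d scores
grad_hidden = W2 @ grad_scores              # back through the output weights
grad_pre = grad_hidden * (x @ W1 > 0)       # ReLU gate: only active units pass gradient
sensitivity = np.abs(W1 @ grad_pre)         # |d score_c / d x_i| for each input feature

print("sensitivity map:", np.round(sensitivity, 3))
```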

The latter part of the talk focuses on using LRP in practice. It provides insight into how data scientists can apply the technique to various problem domains, such as general images, speech, text analysis, games, video, morphing, faces, digits, and EEG, and to vastly different models, such as CNNs, LSTMs, and one-class SVMs.
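In practice, these rules are rarely implemented by hand. For Keras models there is, for example, the iNNvestigate toolbox co-developed by researchers around Samek’s group; the sketch below follows its documented usage from memory, so the analyzer name "lrp.epsilon", the model_wo_softmax helper, and the VGG16 example model should all be treated as assumptions that may vary between versions:

```python
# Rough sketch of applying LRP to a pretrained CNN with the iNNvestigate toolbox.
# API names below are recalled from the project's README and may differ by version.
import numpy as np
import innvestigate
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

model = VGG16(weights="imagenet")
model = innvestigate.utils.model_wo_softmax(model)        # LRP is applied to pre-softmax scores

analyzer = innvestigate.create_analyzer("lrp.epsilon", model)

x = preprocess_input(np.random.rand(1, 224, 224, 3) * 255.0)   # stand-in for a real image batch
relevance = analyzer.analyze(x)                            # pixel-wise relevance map, same shape as x
print(relevance.shape)
```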

This presentation highlights a very important area of research: Addressing the “black box” problem with AI. It surveys state-of-the-art research efforts toward interpretable deep learning, with a focus on LRP.

To take a deeper dive into interpretable deep learning, check out Samek’s compelling hour-long talk from ODSC Europe 2018.

[Related Article: Automating Data Wrangling – The Next Machine Learning Frontier]

Key Takeaways:

  • Deep neural networks (DNNs) are reaching — or even exceeding — human performance on an increasing number of complex tasks.
  • Due to their complex, non-linear structure, these models are usually applied in a black-box manner, i.e., no information is provided about what exactly makes them arrive at their predictions. This lack of transparency and explainability can be a major drawback in practice.
  • Layer-wise Relevance Propagation (LRP) is a general technique for interpreting DNNs by explaining their predictions. LRP is effective when applied to various data types (images, text, audio, video, EEG/fMRI signals) and neural architectures (ConvNets, LSTMs).
