
IBM Research Launches Explainable AI Toolkit

Explainability, or interpretability, of AI is a huge deal these days, especially as a growing number of enterprises depend on decisions made by machine learning and deep learning models. Naturally, stakeholders want a level of transparency into how the algorithms arrived at their recommendations. The so-called “black box” of AI is rapidly being questioned. For this reason, I was encouraged to learn of IBM’s recent efforts in this area. The company’s research arm just launched a new open-source AI toolkit, “AI Explainability 360,” consisting of state-of-the-art algorithms that support the interpretability and explainability of machine learning models. For starters, you can check out a summary video HERE.

[Related Article: Explainable AI: From Prediction To Understanding]

According to an IBM Institute for Business Value survey, 68 percent of business leaders believe that customers will demand more explainability from AI in the next three years. It’s clear that gaining an understanding of how things work is essential to how we navigate the world around us, and it is just as essential to fostering trust and confidence in AI systems. An excellent paper*, “Explaining Explainable AI,” by Michael Hind, a Distinguished Research Staff Member in the IBM Research AI organization, explores the challenges in solving the interpretability problem and the varied approaches researchers are pursuing.

Explainable AI Toolkit

AI Explainability 360 tackles interpretability through a single interface and is designed to address the diversity of explanation needs, with algorithms for case-based reasoning, directly interpretable rules, post hoc local explanations, post hoc global explanations, and more. Because there are so many different explainability options, the toolkit gathers helpful resources under one roof: an interactive experience that provides a gentle introduction through a credit scoring application; several detailed tutorials that educate practitioners on injecting explainability into other high-stakes applications such as clinical medicine, healthcare management, and human resources; and documentation that guides the practitioner in choosing an appropriate explanation method.
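To make that concrete, here is a minimal sketch of a case-based-reasoning explanation using the toolkit’s ProtoDash algorithm, which summarizes a dataset with a small set of weighted prototype examples. The import path and the explain(X, Y, m) call follow the toolkit’s published examples at the time of release; treat the exact names and signature as assumptions and verify them against the current aix360 documentation.

```python
# A minimal sketch (assumes: pip install aix360; API per the toolkit's
# published ProtoDash examples -- verify against the current docs).
import numpy as np
from aix360.algorithms.protodash import ProtodashExplainer

# Toy tabular data standing in for, e.g., a credit scoring dataset.
rng = np.random.RandomState(0)
X = rng.rand(500, 10)  # 500 applicants, 10 features

# ProtoDash is a case-based-reasoning method: it selects a small set of
# prototypical rows (with importance weights) that best summarize the data.
explainer = ProtodashExplainer()
weights, indices, _ = explainer.explain(X, X, m=5)  # pick 5 prototypes

for w, i in zip(weights, indices):
    print(f"prototype row {i} (weight {w:.3f}): {X[i].round(2)}")
```

An analyst could then justify a model’s behavior on a new applicant by pointing to the most similar prototype, which is exactly the style of explanation the credit scoring demo walks through.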

The toolkit has been engineered with a common interface across the various methods of explainability (not an easy accomplishment) and is extensible, so the community advancing AI explainability can build on it and accelerate innovation. The initial release contains eight algorithms recently created by IBM Research, and it also includes metrics from the research community that serve as quantitative proxies for the quality of explanations.
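As an illustration of those proxy metrics, the sketch below scores a simple explanation with the toolkit’s faithfulness metric. It assumes the faithfulness_metric(model, x, coefs, base) signature from the aix360 documentation, where model is a trained classifier exposing predict_proba, x is the single instance being explained, coefs is a feature-importance vector produced by some explainer, and base holds per-feature baseline values; confirm these details against the current docs.

```python
# A minimal sketch of scoring an explanation with one of the toolkit's
# quantitative proxy metrics (assumed signature: faithfulness_metric(
# model, x, coefs, base) -- verify against the current aix360 docs).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from aix360.metrics import faithfulness_metric

rng = np.random.RandomState(0)
X = rng.rand(200, 5)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)  # label driven by features 0 and 1

model = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0]                             # the single instance being explained
coefs = model.feature_importances_   # stand-in for an explainer's attributions
base = X.mean(axis=0)                # baseline values used to "remove" a feature

# Faithfulness correlates each feature's claimed importance with the drop in
# predicted probability when that feature is replaced by its baseline value;
# scores near +1 mean the explanation tracks the model's actual behavior.
print(faithfulness_metric(model, x, coefs, base))
```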

AI Explainability 360 complements the latest algorithms developed by IBM Research that went into Watson OpenScale. Released last year, that platform helps clients manage AI transparently throughout the full AI lifecycle, regardless of where the AI applications were built or in which environment they run. OpenScale also detects and addresses bias across the spectrum of AI applications while those applications are running.

[Related Article: Interpretable Machine Learning – Fairness, Accountability, and Transparency in ML systems]

Conclusion

Black box machine learning models that cannot easily be understood by people, such as deep neural networks and large ensembles, are achieving impressive accuracy on various tasks. However, as machine learning is increasingly used to inform high-stakes decisions, the explainability and interpretability of these models are becoming essential. The AI Explainability 360 toolkit from IBM Research is an important new open-source library for data scientists and developers that’s definitely worth a serious look. It includes algorithms, guides, and tutorials to bring explainability to AI, thereby making it more trusted. If you’re part of a data science team that’s newly challenged to provide more transparency into machine learning results, this toolkit may be just what’s needed.

* Hind, Michael (2019). “Explaining Explainable AI.” XRDS: Crossroads, The ACM Magazine for Students – AI and Interpretation, Volume 25, Issue 3, Spring 2019, Pages 16–19.
