DLIME - Let's dig into the code (model explainability stream)
Speaker(s): Muhammed Rehman Zafar
Facilitator(s): Ali El-Sharif
Find the recording, slides, and more info at https://ai.science/e/dlime-deterministic-local-interpretable-model-agnostic-explanations-let-s-dig-into-the-code--tJtMhBkqERdBECOQq1D7
Motivation / Abstract
Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique for increasing the interpretability and explainability of black-box Machine Learning (ML) models. DLIME (Deterministic LIME) is a novel method that improves the stability of LIME's explanations. This session is a hands-on code demo that dives into the experiment and its results.
What was discussed?
This is a Python code walkthrough. It covers several techniques, including generating Local Interpretable Model-Agnostic Explanations (LIME) and Deterministic LIME (DLIME) explanations, and measuring the stability of explanations using the Jaccard similarity coefficient.
DLIME has been reported to produce more stable explanations than LIME.
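To make the stability measurement concrete, here is a minimal sketch of how the Jaccard similarity coefficient can compare explanations across repeated runs. It assumes each explanation is reduced to the set of top-k feature names returned by the explainer; the feature names and values below are illustrative, not taken from the paper's experiments.

```python
# Sketch: measuring explanation stability with the Jaccard similarity
# coefficient. An "explanation" here is assumed to be the set of top-k
# feature names an explainer returns for the same instance on each run.

def jaccard_similarity(a, b):
    """|A ∩ B| / |A ∪ B| for two sets of feature names."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty explanations are identical
    return len(a & b) / len(a | b)

# Illustrative example: two LIME runs on the same instance may select
# different features (random perturbation sampling), while a deterministic
# method like DLIME returns the same feature set every time.
lime_run_1 = {"age", "bmi", "glucose"}
lime_run_2 = {"age", "bmi", "insulin"}
dlime_run_1 = {"age", "bmi", "glucose"}
dlime_run_2 = {"age", "bmi", "glucose"}

print(jaccard_similarity(lime_run_1, lime_run_2))    # 0.5
print(jaccard_similarity(dlime_run_1, dlime_run_2))  # 1.0
```

A coefficient of 1.0 across runs indicates perfectly stable explanations, which is the behavior DLIME is reported to achieve.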
What are the key takeaways?
A demonstration of the coding techniques and a run-through of the experiment.
------
#AISC hosts 3-5 live sessions like this on various AI research, engineering, and product topics every week! Visit https://ai.science for more details