DLIME - Let's dig into the code (model explainability stream)

Published on 2020-07-09 ● Video Link: https://www.youtube.com/watch?v=vGkjOEz8Ho0



Duration: 34:39
285 views


Speaker(s): Muhammed Rehman Zafar
Facilitator(s): Ali El-Sharif

Find the recording, slides, and more info at https://ai.science/e/dlime-deterministic-local-interpretable-model-agnostic-explanations-let-s-dig-into-the-code--tJtMhBkqERdBECOQq1D7

Motivation / Abstract
Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique for increasing the interpretability and explainability of black-box Machine Learning (ML) algorithms. DLIME (Deterministic LIME) is a novel method that improves on the stability of LIME. This session is a hands-on demo of the code, offering a deep dive into the experiment and its results.
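
For intuition, here is a minimal sketch of the idea behind DLIME's determinism: instead of randomly perturbing the instance as LIME does, DLIME builds a fixed neighborhood via hierarchical clustering and nearest-neighbor assignment, then fits a local linear surrogate on it. This is an illustrative simplification, not the authors' repository code; the helper name dlime_weights and the predict_fn parameter are our assumptions.

    from sklearn.cluster import AgglomerativeClustering
    from sklearn.linear_model import LinearRegression
    from sklearn.neighbors import KNeighborsClassifier

    def dlime_weights(X_train, instance, predict_fn, n_clusters=2):
        """Illustrative DLIME-style explanation (sketch, not the paper's code).

        predict_fn is assumed to map an array of samples to scalar
        black-box predictions, e.g. lambda X: model.predict_proba(X)[:, 1].
        """
        # 1. Deterministically partition the training data.
        labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X_train)
        # 2. Assign the instance to a cluster with 1-nearest-neighbor.
        knn = KNeighborsClassifier(n_neighbors=1).fit(X_train, labels)
        cluster = knn.predict(instance.reshape(1, -1))[0]
        neighborhood = X_train[labels == cluster]
        # 3. Fit a linear surrogate to the black-box predictions over that
        #    fixed neighborhood; its coefficients act as feature weights.
        surrogate = LinearRegression().fit(neighborhood, predict_fn(neighborhood))
        return surrogate.coef_

Because every step here is deterministic, repeated calls on the same instance return identical weights, which is the source of the stability improvement over LIME's random sampling.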

What was discussed?
This is a Python code walkthrough. It covers several interesting techniques, including generating Local Interpretable Model-Agnostic Explanations (LIME) and Deterministic LIME (DLIME) explanations, and measuring the stability of explanations using the Jaccard similarity coefficient.

DLIME has been reported to produce more stable explanations than those generated by LIME.
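
As a rough illustration of how such stability can be quantified, the sketch below explains the same instance twice with LIME and computes the Jaccard similarity of the resulting top-feature sets. It assumes the lime package and scikit-learn are installed; the dataset, model, and the top_features helper are our illustrative choices, not necessarily those used in the talk.

    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    # Train a black-box model on a standard dataset (illustrative choice).
    data = load_breast_cancer()
    model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    explainer = LimeTabularExplainer(
        data.data,
        feature_names=list(data.feature_names),
        class_names=list(data.target_names),
    )

    def top_features(num_features=5):
        # Explain the same instance and collect its top feature descriptions.
        exp = explainer.explain_instance(
            data.data[0], model.predict_proba, num_features=num_features
        )
        return {feature for feature, weight in exp.as_list()}

    # LIME's random neighborhood sampling can pick different features on each
    # run; the Jaccard coefficient |A ∩ B| / |A ∪ B| quantifies their overlap.
    a, b = top_features(), top_features()
    print("Jaccard similarity between two runs:", len(a & b) / len(a | b))

A fully stable explainer would score 1.0 on every repeated pair of runs; values below 1.0 expose the run-to-run variability that DLIME is designed to eliminate.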

What are the key takeaways?
A demonstration of the coding techniques and a run-through of the experiment.

------
#AISC hosts 3-5 live sessions like this on various AI research, engineering, and product topics every week! Visit https://ai.science for more details




Other Videos By LLMs Explained - Aggregate Intellect - AI.SCIENCE


2020-07-22 COVID and Racial Inequity, and Implications for AI
2020-07-21 TGN: Temporal Graph Networks for Deep Learning on Dynamic Graphs [Paper Explained by the Author]
2020-07-20 Founders in Fundraising, and AI Applications
2020-07-16 Uncertainty-Aware Action Advising for Deep Reinforcement Learning Agents | AISC
2020-07-16 Machine Learning for Forecasting Global Atmospheric Models | AISC
2020-07-15 Towards Frequency-Based Explanation for Robust CNN | AISC
2020-07-14 Lagrangian Neural Networks | AISC
2020-07-14 TeslaSenti: Near real-time sentiment analysis of Tesla tweets | Workshop Capstone
2020-07-14 See.Know.Bias - Using AI to Develop Media Literacy and Keep News Neutral | workshop capstone
2020-07-10 Video Action Transformer Network | AISC
2020-07-09 DLIME - Let's dig into the code (model explainability stream)
2020-07-09 Overview of Machine Learning in Behavioral Economics | AISC
2020-07-08 Compact Neural Representation Using Attentive Network Pruning | AISC
2020-07-08 Navigating the Idea Maze: Continuous discovery frameworks for (AI?) products | AISC
2020-07-02 Nvidia's RAPIDS.ai: Massively Accelerated Modern Data-Science | AISC
2020-07-02 Building AI Products; The Journey | Overview
2020-07-01 Building your Product Strategy - A Guide | AISC
2020-06-30 Conceptual understanding through efficient inverse-design of quantum optical experiments | AISC
2020-06-30 An A-Z primer on the AI Product Lifecycle | AISC
2020-06-29 Reducing Gender Bias in Google Translate | Summary and Takeaways | AISC
2020-06-29 Reducing Gender Bias in Google Translate | Melvin Johnson | AISC Algorithmic Inclusion