A Deterministic Local Interpretable Model-Agnostic Explanations Approach for Computer-Aided Diagnosis

Published on 2020-05-28 ● Video Link: https://www.youtube.com/watch?v=G2rKtLNxlxQ



Duration: 40:21


For slides and more information on the paper, visit https://ai.science/e/dlimea-deterministic-local-interpretable-model-agnostic-explanations-approach-for-computer-aided-diagnosis-systems--2020-05-28-dime

Speaker: Muhammad Rehman Zafar; Moderator: Ali El-Sharif

Motivation:
Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique for increasing the interpretability and explainability of black-box Machine Learning (ML) algorithms. LIME typically explains a single prediction of any ML model by learning a simpler, interpretable model (e.g., a linear classifier) around that prediction: it generates simulated data around the instance by random perturbation and obtains feature importance through some form of feature selection. While LIME and similar local algorithms have gained popularity due to their simplicity, the random perturbation and feature selection steps cause "instability" in the generated explanations: for the same prediction, different explanations can be produced. This is a critical issue that can prevent deployment of LIME in a Computer-Aided Diagnosis (CAD) system, where stability is of utmost importance to earn the trust of medical professionals. In this paper, we propose a deterministic version of LIME. Instead of random perturbation, we utilize agglomerative Hierarchical Clustering (HC) to group the training data and K-Nearest Neighbours (KNN) to select the cluster relevant to the new instance being explained. After finding the relevant cluster, a linear model is trained over the selected cluster to generate the explanations. Experimental results on three different medical datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME) over LIME: we quantitatively assess the stability of each method using the Jaccard similarity among multiple generated explanations.
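The three steps described above (hierarchical clustering, KNN cluster selection, linear surrogate) can be sketched as follows. This is a minimal illustration with scikit-learn, not the authors' implementation; the function and variable names (`explain_instance`, `black_box`) and the toy black-box model are assumptions for the example.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LinearRegression

def explain_instance(X_train, black_box, x, n_clusters=2, n_neighbors=5):
    """Hedged sketch of the DLIME procedure from the abstract."""
    # Step 1: group the training data with agglomerative hierarchical clustering.
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X_train)
    # Step 2: a KNN classifier over the cluster labels selects the cluster
    # relevant to the instance x being explained.
    knn = KNeighborsClassifier(n_neighbors=n_neighbors).fit(X_train, labels)
    cluster = knn.predict(x.reshape(1, -1))[0]
    X_cluster = X_train[labels == cluster]
    # Step 3: fit a linear surrogate on that cluster, using the black-box
    # model's outputs as targets; its coefficients serve as the explanation.
    surrogate = LinearRegression().fit(X_cluster, black_box(X_cluster))
    return surrogate.coef_

# Toy deterministic "black box" for demonstration (an assumption, not from
# the paper): its output depends mostly on the first feature.
rng = np.random.RandomState(0)
X = rng.rand(60, 3)
black_box = lambda X: 0.9 * X[:, 0] + 0.1 * X[:, 1]

w1 = explain_instance(X, black_box, X[0])
w2 = explain_instance(X, black_box, X[0])
print(np.allclose(w1, w2))  # no random perturbation, so repeated runs agree
```

Because every step is deterministic, repeated explanations of the same instance coincide, which is exactly the stability property the Jaccard-similarity experiments measure.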



