Explaining image classifiers by removing input features using generative models | AISC
Video Link: https://www.youtube.com/watch?v=Csvz5gZ1z9M
For slides and more information on the paper, visit https://ai.science/e/explaining-image-classifiers-by-removing-input-features-using-generative-models--MByMqI2w2DTzS0NAFbIm
Speaker: Anh Nguyen; Hosts: Muhammad Rehman Zafar, Ali El-Sharif
Motivation:
Perturbation-based explanation methods often measure the contribution of an input feature to an image classifier's output by heuristically removing it, e.g. by blurring, adding noise, or graying out, which tends to produce unrealistic, out-of-distribution images. This work proposes integrating a generative inpainter into three representative attribution methods so that removed features are filled in with realistic content instead.
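To make the idea concrete, here is a minimal sketch of occlusion-style attribution in which the feature-removal step is pluggable: a heuristic gray-fill versus a generative inpainter. The `classify`, `gray_fill`, and `inpainter` names are illustrative assumptions, not the paper's actual code; any real inpainting model (e.g. a GAN-based one) would slot in where `inpainter` is called.

```python
import numpy as np

def occlusion_map(image, classify, remove_patch, patch=8):
    """Slide a patch over the image, 'remove' the covered pixels with
    remove_patch, and record the drop in the classifier's score."""
    h, w = image.shape[:2]
    base = classify(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            perturbed = remove_patch(image, i, j, patch)
            heat[i // patch, j // patch] = base - classify(perturbed)
    return heat

def gray_fill(image, i, j, p):
    # Heuristic removal: gray out the patch. Simple, but the result can be
    # out-of-distribution for the classifier.
    out = image.copy()
    out[i:i + p, j:j + p] = 0.5
    return out

def inpaint_fill(image, i, j, p, inpainter):
    # Generative removal: mask the patch and ask an inpainting model
    # (hypothetical `inpainter(image, mask)` interface) to fill it with
    # realistic content.
    mask = np.zeros(image.shape[:2], dtype=bool)
    mask[i:i + p, j:j + p] = True
    return inpainter(image, mask)

# Toy usage with a trivial "classifier" (the mean intensity):
img = np.ones((16, 16))
score = lambda x: float(x.mean())
heat = occlusion_map(img, score, gray_fill, patch=8)
```

Swapping `gray_fill` for a `remove_patch` built on `inpaint_fill` is the only change needed to turn the heuristic baseline into the generative variant the talk describes.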
Tags:
ai explainability
machine learning interpretability
machine learning
deep learning
artificial intelligence
neural networks
neural networks explained
deep learning research
convolutional neural network
attention mechanism
computer vision
neural networks and deep learning