Towards Frequency-Based Explanation for Robust CNN | AISC
Speaker(s): Zifan Wang
Find the recording, slides, and more info at https://ai.science/e/towards-frequency-based-explanation-for-robust-cnn--hBiCzxIOpi3iOm0woCyY
Motivation / Abstract
Computer vision systems based on convolutional neural networks (CNNs) have been reported to be unstable and vulnerable to adversarial attacks that are invisible to humans. The authors present novel work to better identify models that are vulnerable to these attacks and offer frequency-based explanations to help design more robust models.
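The abstract does not detail the method itself, but as a rough illustration of what "frequency-based" analysis can mean in this setting, the sketch below measures how much of a perturbation's energy lies in high spatial frequencies, the band where human-invisible adversarial noise often concentrates. This is a generic FFT-based decomposition using numpy; the function name, radial cutoff, and procedure are my own assumptions, not the authors' actual method.

```python
import numpy as np

def high_freq_energy_ratio(delta, cutoff=0.25):
    """Fraction of a 2D perturbation's energy above a radial frequency cutoff.

    delta: 2D array (e.g. an image-shaped adversarial perturbation).
    cutoff: radius, as a fraction of the sampling rate, separating the
            low-frequency band from the high-frequency band.
    (Hypothetical helper for illustration; not from the paper.)
    """
    spectrum = np.fft.fftshift(np.fft.fft2(delta))
    h, w = delta.shape
    yy, xx = np.mgrid[-(h // 2):(h + 1) // 2, -(w // 2):(w + 1) // 2]
    # Normalized radial frequency of each FFT bin.
    radius = np.sqrt((yy / h) ** 2 + (xx / w) ** 2)
    energy = np.abs(spectrum) ** 2
    return float(energy[radius > cutoff].sum() / energy.sum())

rng = np.random.default_rng(0)
# Broadband noise: most energy sits in the high-frequency band.
noise = rng.standard_normal((64, 64))
# A smooth half-sine bump: energy concentrates near zero frequency.
smooth = np.outer(np.sin(np.linspace(0, np.pi, 64)),
                  np.sin(np.linspace(0, np.pi, 64)))

print(high_freq_energy_ratio(noise))   # close to 1: mostly high-frequency
print(high_freq_energy_ratio(smooth))  # close to 0: mostly low-frequency
```

Comparing such ratios between perturbations that do and do not fool a CNN is one way to probe whether a model over-relies on high-frequency content.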
What was discussed?
1) Does your method help models ignore noise in a way similar to how human cognition filters out irrelevant information?
2) Why is adversarial training expensive? How does your method improve on adversarial training?
3) Do you need access to the model to generate your explanations?
------
#AISC hosts 3-5 live sessions like this on various AI research, engineering, and product topics every week! Visit https://ai.science for more details