Paper review - Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey | AISC
Speaker(s): Dr. Ikjot Saini
Facilitator(s): Ali El-Sharif
Find the recording, slides, and more info at https://ai.science/e/paper-review-threat-of-adversarial-attacks-on-deep-learning-in-computer-vision-a-survey--SufP7iFeEBpjLYxYjxsX
Motivation / Abstract
With increased dependence on computer vision algorithms to support autonomous driving, it is important to understand the vulnerabilities and threats associated with these algorithms and their impact on safety and security. Dr. Saini will present a conceptual review based on the survey paper: Threat of Adversarial Attacks on Deep Learning in Computer Vision.
What was discussed?
1) Can you expand on where you see the near-term need for additional research in computer vision to support autonomous driving?
2) Do we have any benchmark datasets that have been created specifically for the domain of autonomous driving and could be used to validate robustness to attacks?
3) Can you talk a little bit about the privacy implications of using computer vision algorithms to support autonomous driving?
What are the key takeaways?
Despite the high accuracy of deep neural networks on a wide variety of computer vision tasks, they are vulnerable to subtle input perturbations that can completely change their outputs.
It is apparent that adversarial attacks are a real threat to deep learning in practice, especially in safety- and security-critical applications.
The existing literature demonstrates that deep learning can currently be attacked effectively in cyberspace as well as in the physical world.
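To make the first takeaway concrete, here is a minimal sketch of a gradient-sign perturbation (in the spirit of FGSM, one of the attacks covered by the survey) applied to a hypothetical toy logistic-regression classifier. The model, weights, and inputs are illustrative assumptions, not from the paper; the point is only that a small signed step along the loss gradient flips the prediction.

```python
# Toy gradient-sign attack sketch (hypothetical model, for illustration only).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([2.0, -1.0])   # assumed fixed toy model weights
x = np.array([0.3, 0.1])    # clean input, true label y = 1
y = 1.0

p_clean = sigmoid(w @ x)    # correctly predicts class 1

# FGSM-style step: move the input in the direction that increases the loss,
# x_adv = x + eps * sign(d loss / d x); for logistic loss the input
# gradient is (p - y) * w.
eps = 0.3
grad = (p_clean - y) * w
x_adv = x + eps * np.sign(grad)

p_adv = sigmoid(w @ x_adv)  # prediction flips to class 0
print(p_clean > 0.5, p_adv > 0.5)  # True False
```

The perturbation is bounded per feature by eps, yet it is enough to change the predicted class, which is exactly the fragility the takeaway describes.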
------
#AISC hosts 3-5 live sessions like this on various AI research, engineering, and product topics every week! Visit https://ai.science for more details