DeepFakes & Explainable AI Applications in NLP, Biomedical & Malware Classification
For slides and more information on the paper, visit https://ai.science/e/deep-fakes-how-can-we-detect-them--2020-09-01
Speaker: Sherin Mathews; Host: Ikjot Saini; Discussion Facilitator: Ali El-Sharif
Motivation:
Deep learning algorithms have achieved high accuracy in complex domains such as image classification, face recognition, sentiment analysis, text classification, and speech understanding. Due to their nested, non-linear structure, these highly successful models are usually applied in a black-box manner, i.e., no information is provided about what exactly leads them to their predictions. The effectiveness of these systems is thus limited by the machine's current inability to explain its decisions and actions to human users. Such a lack of transparency can be a major drawback.