Interpretability in NLP: Moving Beyond Vision

Video link: https://www.youtube.com/watch?v=E5pi74e2d5k
Duration: 1:21:57


Deep neural network models have been extremely successful for natural language processing (NLP) applications in recent years, but one complaint they often face is their lack of interpretability. The field of computer vision, on the other hand, has charted its own path toward improving the interpretability of deep learning models, most notably with post-hoc interpretation methods such as saliency. In this talk, we investigate the possibility of deploying these interpretation methods in natural language processing applications. Our study covers common NLP applications such as language modeling and neural machine translation, and we stress the necessity of quantitative evaluation of interpretations in addition to qualitative evaluation. We show that this adaptation is generally feasible, while also pointing out some shortcomings of current practice that may shed light on future research directions.
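To make the kind of post-hoc method referenced above concrete, the sketch below computes a vanilla gradient saliency score for each input token of a toy PyTorch text classifier. The model, its dimensions, and the scoring choice (L2 norm of the gradient at the embedding layer) are illustrative assumptions, not the specific setup studied in the talk.

import torch
import torch.nn as nn

class TinyTextClassifier(nn.Module):
    """Toy model: embedding -> mean pooling -> linear classifier (illustrative only)."""
    def __init__(self, vocab_size=1000, emb_dim=32, num_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.fc = nn.Linear(emb_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.emb(token_ids)        # (batch, seq_len, emb_dim)
        pooled = embedded.mean(dim=1)         # (batch, emb_dim)
        return self.fc(pooled), embedded

model = TinyTextClassifier()
token_ids = torch.randint(0, 1000, (1, 6))    # one "sentence" of 6 token ids

# Saliency is taken at the embedding layer, since the token ids themselves
# are discrete and carry no gradient.
logits, embedded = model(token_ids)
embedded.retain_grad()
predicted_class = logits[0].argmax()
logits[0, predicted_class].backward()

# One relevance score per token: L2 norm of the gradient of the
# predicted-class score with respect to that token's embedding.
saliency = embedded.grad[0].norm(dim=-1)
print(saliency)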

Talk slides: https://www.microsoft.com/en-us/research/uploads/prod/2019/11/Interpretability-in-NLP-Moving-Beyond-Vision-SLIDES.pdf

See more on this video at Microsoft Research: https://www.microsoft.com/en-us/research/video/interpretability-in-nlp-moving-beyond-vision/

Tags:
Deep neural network models
natural language processing
NLP
interpretability
deep learning models
post-hoc interpretation
saliency
language modeling
neural machine translation
Shuoyang Ding
Microsoft Research