Breaking Deep Learning Systems With Adversarial Examples | Two Minute Papers #43
Artificial neural networks are computer programs that try to approximate how the human brain solves problems such as recognizing objects in images. In this work, the authors analyze the properties of these neural networks to unveil what exactly makes them think that a paper towel is a paper towel, and, building on this knowledge, try to fool these programs. Carefully crafted adversarial examples can reliably fool deep neural networks.
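The core trick in the second linked paper ("Explaining and Harnessing Adversarial Examples") is the fast gradient sign method: nudge every input dimension by a tiny amount epsilon in the direction that increases the model's loss. Below is a minimal sketch of that idea using a toy logistic-regression "classifier" instead of a deep network; the weights, input, and epsilon value are made-up illustrative numbers, not taken from the papers.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy model: p(y = 1 | x) = sigmoid(w . x + b). A stand-in for a deep
# network -- the FGSM step only needs the gradient of the loss w.r.t. x.
rng = np.random.default_rng(0)
w = rng.normal(size=8)   # made-up weights
b = 0.1
x = rng.normal(size=8)   # a "clean" input
y = 1.0                  # its true label

# Cross-entropy loss gradient with respect to the *input*:
# dL/dx = (sigmoid(w . x + b) - y) * w
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM: perturb each dimension by epsilon in the sign of the gradient.
# Using sign() keeps every per-pixel change equally tiny, which is why
# the perturbation can be imperceptible in images yet flip the output.
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)

p_adv = sigmoid(w @ x_adv + b)
print(f"confidence in true label before: {p:.3f}, after: {p_adv:.3f}")
```

The same one-step perturbation, computed by backpropagating to the input of a real convolutional network, produces the paper-towel-style misclassifications discussed in the video.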
_______________
The paper "Intriguing properties of neural networks" is available here:
http://arxiv.org/abs/1312.6199
The paper "Explaining and Harnessing Adversarial Examples" is available here:
http://arxiv.org/abs/1412.6572
Image credits:
Thumbnail image - https://www.flickr.com/photos/healthblog/8384110298 (CC BY-SA 2.0)
Shower cap - Code Words / Julia Evans - https://codewords.recurse.com/issues/five/why-do-neural-networks-think-a-panda-is-a-vulture
MNIST - hxhl95
Andrej Karpathy's online convolutional neural network:
http://cs.stanford.edu/people/karpathy/convnetjs/demo/cifar10.html
Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz
Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu
Károly Zsolnai-Fehér's links:
Patreon → https://www.patreon.com/TwoMinutePapers
Facebook → https://www.facebook.com/TwoMinutePapers/
Twitter → https://twitter.com/karoly_zsolnai
Web → https://cg.tuwien.ac.at/~zsolnai/