The Dimpled Manifold Model of Adversarial Examples in Machine Learning (Research Paper Explained)

Subscribers: 251,000
Video Link: https://www.youtube.com/watch?v=k_hUdZJNzkU
Duration: 1:14:21
Views: 15,012
Likes: 356


#adversarialexamples #dimpledmanifold #security

Adversarial examples have long been a fascinating topic for many machine learning researchers. How can a tiny perturbation cause a neural network to change its output so drastically? While many explanations have been proposed over the years, they all appear to fall short. This paper attempts to comprehensively explain the existence of adversarial examples by proposing a new view of the classification landscape, which the authors call the Dimpled Manifold Model: any classifier aligns its decision boundary with the low-dimensional data manifold and only slightly bends ("dimples") it around the data points. This potentially explains many phenomena around adversarial examples. Warning: in this video, I disagree with the paper. Remember that I'm not an authority, but simply give my own opinions.
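
To make the basic phenomenon concrete, here is a minimal sketch (my own toy example in PyTorch, not the paper's code or my replication gist below): a fast-gradient-sign step, i.e. a tiny perturbation along the sign of the loss gradient, can already change a classifier's prediction. The model, input size, and epsilon are all arbitrary stand-ins.

import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))  # toy classifier (untrained)
x = torch.randn(1, 64, requires_grad=True)                             # stand-in "image"
y = torch.tensor([0])                                                  # assumed true label

loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

eps = 0.05                                   # small perturbation budget
x_adv = x + eps * x.grad.sign()              # FGSM-style step along the gradient sign
print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))  # prediction may flip despite the tiny change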

OUTLINE:
0:00 - Intro & Overview
7:30 - The old mental image of Adversarial Examples
11:25 - The new Dimpled Manifold Hypothesis
22:55 - The Stretchy Feature Model
29:05 - Why do DNNs create Dimpled Manifolds?
38:30 - What can be explained with the new model?
1:00:40 - Experimental evidence for the Dimpled Manifold Model
1:10:25 - Is Goodfellow's claim debunked?
1:13:00 - Conclusion & Comments

Paper: https://arxiv.org/abs/2106.10151
My replication code: https://gist.github.com/yk/de8d987c4eb6a39b6d9c08f0744b1f64
Goodfellow's Talk: https://youtu.be/CIfsB_EYsVI?t=4280

Abstract:
The extreme fragility of deep neural networks when presented with tiny perturbations in their inputs was independently discovered by several research groups in 2013, but in spite of enormous effort these adversarial examples remained a baffling phenomenon with no clear explanation. In this paper we introduce a new conceptual framework (which we call the Dimpled Manifold Model) which provides a simple explanation for why adversarial examples exist, why their perturbations have such tiny norms, why these perturbations look like random noise, and why a network which was adversarially trained with incorrectly labeled images can still correctly classify test images. In the last part of the paper we describe the results of numerous experiments which strongly support this new model, and in particular our assertion that adversarial perturbations are roughly perpendicular to the low dimensional manifold which contains all the training examples.

Authors: Adi Shamir, Odelia Melamed, Oriel BenShmuel
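
As a rough illustration of the abstract's central assertion (that adversarial perturbations are roughly perpendicular to the low-dimensional data manifold), here is a toy sketch of how one could measure this. It is my own simplified construction, not the authors' experiments: the "manifold" is a known linear subspace, the classifier is a small MLP, and we check how much of the adversarial gradient direction lies inside versus outside that subspace.

import torch
import torch.nn as nn

torch.manual_seed(0)
d, k, n = 100, 10, 2000                        # ambient dim, manifold dim, number of samples
basis, _ = torch.linalg.qr(torch.randn(d, k))  # orthonormal basis of the k-dim subspace
z = torch.randn(n, k)
X = z @ basis.T                                # data lies exactly on the subspace
y = (z[:, 0] > 0).long()                       # label depends on one manifold coordinate

model = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(300):                           # short training loop on the toy data
    opt.zero_grad()
    nn.functional.cross_entropy(model(X), y).backward()
    opt.step()

x = X[:1].clone().requires_grad_(True)
nn.functional.cross_entropy(model(x), y[:1]).backward()
g = x.grad.squeeze()                           # adversarial (gradient) direction at this point
on_manifold = basis @ (basis.T @ g)            # projection of the gradient onto the data subspace
ratio = on_manifold.norm() / g.norm()
print(f"fraction of gradient norm on-manifold: {ratio:.3f}")  # small value = mostly perpendicular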

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/ykilcher
BiliBili: https://space.bilibili.com/1824646584

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n




Other Videos By Yannic Kilcher


2021-08-18 [ML News] Nvidia renders CEO | Jurassic-1 larger than GPT-3 | Tortured Phrases reveal Plagiarism
2021-08-16 How Apple scans your phone (and how to evade it) - NeuralHash CSAM Detection Algorithm Explained
2021-08-13 [ML NEWS] Apple scans your phone | Master Faces beat face recognition | WALL-E is real
2021-08-06 [ML News] AI-generated patent approved | Germany gets an analog to OpenAI | ML cheats video games
2021-08-02 [ML News] MMO Game destroys GPUs | OpenAI quits Robotics | Today w/ guest host Sanyam Bhutani
2021-07-15 [ML News] Facebook AI adapting robots | Baidu autonomous excavators | Happy Birthday EleutherAI
2021-07-11 I'm taking a break
2021-07-08 [ML News] GitHub Copilot - Copyright, GPL, Patents & more | Brickit LEGO app | Distill goes on break
2021-07-03 Self-driving from VISION ONLY - Tesla's self-driving progress by Andrej Karpathy (Talk Analysis)
2021-06-30 [ML News] CVPR bans social media paper promotion | AI restores Rembrandt | GPU prices down
2021-06-27 The Dimpled Manifold Model of Adversarial Examples in Machine Learning (Research Paper Explained)
2021-06-24 [ML News] Hugging Face course | GAN Theft Auto | AI Programming Puzzles | PyTorch 1.9 Released
2021-06-23 XCiT: Cross-Covariance Image Transformers (Facebook AI Machine Learning Research Paper Explained)
2021-06-19 AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control (Paper Explained)
2021-06-16 [ML News] De-Biasing GPT-3 | RL cracks chip design | NetHack challenge | Open-Source GPT-J
2021-06-11 Efficient and Modular Implicit Differentiation (Machine Learning Research Paper Explained)
2021-06-09 [ML News] EU regulates AI, China trains 1.75T model, Google's oopsie, Everybody cheers for fraud.
2021-06-08 My GitHub (Trash code I wrote during PhD)
2021-06-05 Decision Transformer: Reinforcement Learning via Sequence Modeling (Research Paper Explained)
2021-06-02 [ML News] Anthropic raises $124M, ML execs clueless, collusion rings, ELIZA source discovered & more
2021-05-31 Reward Is Enough (Machine Learning Research Paper Explained)



Tags:
deep learning
machine learning
arxiv
explained
neural networks
ai
artificial intelligence
paper
adversarial examples
goodfellow
goodfellow adversarial attacks
adversarial attacks on neural networks
features not bugs
madry
dimpled manifold
why do adversarial examples exist
adversarial examples explanation
adversarial attacks explanation
computer vision
decision boundary
data manifold
low dimensional manifold
what are adversarial examples
what is deep learning