Radioactive data: tracing through training (Paper Explained)

Video Link: https://www.youtube.com/watch?v=G2sr1g6rLdE



Duration: 36:02


#ai #research #privacy

Data is the modern gold. Neural classifiers can improve their performance by training on more data, but given a trained classifier, it's difficult to tell what data it was trained on. This is especially relevant if you have proprietary or personal data and you want to make sure that other people don't use it to train their models. This paper introduces a method to mark a dataset with a hidden "radioactive" tag, such that any resulting classifier will clearly exhibit this tag, which can be detected.
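To make the idea concrete, here is a minimal toy sketch of the marking step. In the paper, the mark is a shift of each image's features along a secret random "carrier" direction, which is then backpropagated into pixel space (see the outline below); this sketch skips the pixel step and marks the feature vectors directly. All numbers (dimension, marking strength `alpha`) are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 512  # feature dimension, e.g. a ResNet-18 penultimate layer

# Secret carrier: a fixed random unit vector in feature space.
u = rng.standard_normal(d)
u /= np.linalg.norm(u)

def mark(features, u, alpha=0.1):
    """Shift each feature vector slightly toward the carrier u.

    The paper pushes this change back into pixel space via
    backpropagation so the images look unchanged; here we mark the
    features directly for simplicity. alpha is an illustrative strength.
    """
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    return features + alpha * norms * u

X = rng.standard_normal((100, d))   # stand-in for extracted image features
Xm = mark(X, u)

# Marked features are, on average, more aligned with u than clean ones,
# while the relative change to each feature vector stays small (= alpha).
clean_align = float((X @ u).mean())
marked_align = float((Xm @ u).mean())
print(f"clean alignment {clean_align:.2f}, marked alignment {marked_align:.2f}")
```

Any classifier trained on the marked data tends to pick up this alignment in its weights, which is what the detection test later looks for.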

OUTLINE:
0:00 - Intro & Overview
2:50 - How Neural Classifiers Work
5:45 - Radioactive Marking via Adding Features
13:55 - Random Vectors in High-Dimensional Spaces
18:05 - Backpropagation of the Fake Features
21:00 - Re-Aligning Feature Spaces
25:00 - Experimental Results
28:55 - Black-Box Test
32:00 - Conclusion & My Thoughts

Paper: https://arxiv.org/abs/2002.00937

Abstract:
We want to detect whether a particular image dataset has been used to train a model. We propose a new technique, "radioactive data", that makes imperceptible changes to this dataset such that any model trained on it will bear an identifiable mark. The mark is robust to strong variations such as different architectures or optimization methods. Given a trained model, our technique detects the use of radioactive data and provides a level of confidence (p-value). Our experiments on large-scale benchmarks (Imagenet), using standard architectures (Resnet-18, VGG-16, Densenet-121) and training procedures, show that we can detect usage of radioactive data with high confidence (p < 10^-4) even when only 1% of the data used to train our model is radioactive. Our method is robust to data augmentation and the stochasticity of deep network optimization. As a result, it offers a much higher signal-to-noise ratio than data poisoning and backdoor methods.
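The detection side of the abstract rests on a high-dimensional fact (covered at 13:55 in the video): random directions in high dimension are nearly orthogonal, so any measurable alignment between a classifier's weights and the secret carrier is strong evidence of radioactive data. A minimal sketch with synthetic numbers (the alignment strength and dimensions are illustrative, and the p-value here is estimated by simulation rather than the paper's closed-form test):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 512  # feature dimension, e.g. a ResNet-18 penultimate layer

# Secret carrier: a fixed random unit vector in feature space.
u = rng.standard_normal(d)
u /= np.linalg.norm(u)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Null hypothesis: a model never trained on marked data has weights
# unrelated to u. In high dimension, random directions are nearly
# orthogonal: cosine similarities concentrate near 0 with std ~ 1/sqrt(d).
null_cos = np.array([cosine(rng.standard_normal(d), u) for _ in range(10_000)])

# A model trained on radioactive data picks up a weight component
# along u. The strength (7.0) is purely illustrative.
w_marked = rng.standard_normal(d) + 7.0 * u

c = cosine(w_marked, u)
# Empirical p-value: how often chance alone produces this much alignment.
p = float(np.mean(null_cos >= c))
print(f"null std {null_cos.std():.3f}, marked cosine {c:.3f}, p {p:.4f}")
```

Even a modest cosine similarity (here around 0.2-0.3) sits many null standard deviations above chance, which is why the paper can report confident detection at only 1% marked data.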

Authors: Alexandre Sablayrolles, Matthijs Douze, Cordelia Schmid, Hervé Jégou

Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/

If you want to support me, the best thing to do is to share the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n




Other Videos By Yannic Kilcher


2020-10-26 Rethinking Attention with Performers (Paper Explained)
2020-10-17 LambdaNetworks: Modeling long-range Interactions without Attention (Paper Explained)
2020-10-11 Descending through a Crowded Valley -- Benchmarking Deep Learning Optimizers (Paper Explained)
2020-10-04 An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (Paper Explained)
2020-10-03 Training more effective learned optimizers, and using them to train themselves (Paper Explained)
2020-09-18 The Hardware Lottery (Paper Explained)
2020-09-13 Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)
2020-09-07 Learning to summarize from human feedback (Paper Explained)
2020-09-02 Self-classifying MNIST Digits (Paper Explained)
2020-08-28 Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)
2020-08-26 Radioactive data: tracing through training (Paper Explained)
2020-08-23 Fast reinforcement learning with generalized policy updates (Paper Explained)
2020-08-20 What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study (Paper Explained)
2020-08-18 [Rant] REVIEWER #2: How Peer Review is FAILING in Machine Learning
2020-08-14 REALM: Retrieval-Augmented Language Model Pre-Training (Paper Explained)
2020-08-12 Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
2020-08-09 Hopfield Networks is All You Need (Paper Explained)
2020-08-06 I TRAINED AN AI TO SOLVE 2+2 (w/ Live Coding)
2020-08-04 PCGRL: Procedural Content Generation via Reinforcement Learning (Paper Explained)
2020-08-02 Big Bird: Transformers for Longer Sequences (Paper Explained)
2020-07-29 Self-training with Noisy Student improves ImageNet classification (Paper Explained)



Tags:
deep learning
machine learning
arxiv
explained
neural networks
ai
artificial intelligence
paper
cnn
imagenet
resnet
radioactive
fake
feature
feature space
feature extractor
facebook ai
fair
deep neural networks
classifier
classes
backpropagation
black box
white box
detect
features
privacy
adversarial examples
tagging
inria