Do ImageNet Classifiers Generalize to ImageNet? (Paper Explained)

Channel Subscribers: 284,000
Video Link: https://www.youtube.com/watch?v=fvctpYph8Pc
Duration: 25:37
Views: 20,052

Has the world overfitted to ImageNet? What if we collect another dataset in exactly the same fashion? This paper gives a surprising answer!

Paper: https://arxiv.org/abs/1902.10811
Data: https://github.com/modestyachts/ImageNetV2

Abstract:
We build new test sets for the CIFAR-10 and ImageNet datasets. Both benchmarks have been the focus of intense research for almost a decade, raising the danger of overfitting to excessively re-used test sets. By closely following the original dataset creation processes, we test to what extent current classification models generalize to new data. We evaluate a broad range of models and find accuracy drops of 3% - 15% on CIFAR-10 and 11% - 14% on ImageNet. However, accuracy gains on the original test sets translate to larger gains on the new test sets. Our results suggest that the accuracy drops are not caused by adaptivity, but by the models' inability to generalize to slightly "harder" images than those found in the original test sets.
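The abstract's core analysis compares each model's accuracy on the original and the new test set, and observes that new-test accuracy grows faster than original-test accuracy across models (a regression slope above 1). A minimal sketch of that comparison, using hypothetical accuracy numbers in place of the paper's real measurements (the actual data is in the linked ImageNetV2 repo):

```python
# Hypothetical (model -> (original test acc, new test acc)) pairs;
# the paper evaluates dozens of real classifiers this way.
models = {
    "model_a": (0.720, 0.585),
    "model_b": (0.761, 0.632),
    "model_c": (0.783, 0.670),
}

# Per-model accuracy drop from the original to the new test set.
drops = {name: orig - new for name, (orig, new) in models.items()}

# Least-squares slope of new-test accuracy vs. original-test accuracy.
# A slope > 1 means gains on the original test set translate to
# *larger* gains on the new test set, as the abstract reports.
xs = [orig for orig, _ in models.values()]
ys = [new for _, new in models.values()]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (
    sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    / sum((x - mx) ** 2 for x in xs)
)
```

With these made-up numbers every model drops in accuracy, yet the fitted slope exceeds 1, which is the paper's argument that the drop reflects a harder test set rather than overfitting via test-set adaptivity.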

Authors: Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, Vaishaal Shankar

Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher

Tags:
deep learning
machine learning
imagenet
cifar10
cifar10.1
generalization
overfitting
mturk
arxiv
vision
models
research
hardness
accuracy
classifier
resnet