Overfitting and Regularization For Deep Learning | Two Minute Papers #56
In this episode, we discuss the bane of many machine learning algorithms: overfitting. We also explain why it is an undesirable way to learn and how to combat it with L1 and L2 regularization.
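The idea can be sketched in a few lines of code. Below is a minimal, hypothetical illustration (not from the video) using NumPy: a high-degree polynomial fit to a handful of noisy points overfits wildly, while adding an L2 penalty (ridge regression) shrinks the weights and smooths the fit. The L1 variant (the Lasso, from the linked Tibshirani paper) additionally drives some weights exactly to zero. The data, degree, and penalty strength here are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a simple underlying function.
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2.0 * np.pi * x) + rng.normal(scale=0.3, size=x.size)

# High-degree polynomial features: flexible enough to overfit the noise.
degree = 9
X = np.vander(x, degree + 1)

def ridge_fit(X, y, lam):
    """Closed-form least squares with an L2 penalty lam * ||w||^2."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

w_loose = ridge_fit(X, y, lam=1e-8)  # almost no regularization: huge weights
w_ridge = ridge_fit(X, y, lam=1.0)   # L2 regularization: shrunken weights

# The penalty pulls the weights toward zero, which smooths the fitted curve.
print(np.linalg.norm(w_loose), np.linalg.norm(w_ridge))
```

Increasing `lam` trades a little training-set accuracy for much smaller weights, which is exactly the overfitting/regularization trade-off discussed in the episode.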
_____________________________
The paper "Regression Shrinkage and Selection via the Lasso" is available here:
http://statweb.stanford.edu/~tibs/lasso/lasso.pdf
Andrej Karpathy's excellent lecture notes on neural networks and regularization:
http://cs231n.github.io/neural-networks-1/
The neural network demo is available here:
http://cs.stanford.edu/people/karpathy/convnetjs/demo/classify2d.html
A playlist with our neural network and deep learning-related videos:
https://www.youtube.com/playlist?list=PLujxSBD-JXglGL3ERdDOhthD3jTlfudC2
WE WOULD LIKE TO THANK OUR GENEROUS SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE:
Sunil Kim, Vinay S.
Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz
The thumbnail image background was created by Tony Hisgett (CC BY 2.0) and has been recolored. - https://flic.kr/p/5dkbNV
Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu
Károly Zsolnai-Fehér's links:
Patreon → https://www.patreon.com/TwoMinutePapers
Facebook → https://www.facebook.com/TwoMinutePapers/
Twitter → https://twitter.com/karoly_zsolnai
Web → https://cg.tuwien.ac.at/~zsolnai/