10.18: Neural Networks: Backpropagation Part 5 - The Nature of Code
In this video, I implement the gradient descent formulas and use them to adjust the bias in the train() function of my "toy" JavaScript neural network library. I also test the library on a simple XOR dataset. (A short code sketch of this update step follows below.)
This video is part of Chapter 10 of The Nature of Code (http://natureofcode.com/book/chapter-10-neural-networks/)
This video is also part of session 4 of my Spring 2017 ITP "Intelligence and Learning" course (https://github.com/shiffman/NOC-S17-2-Intelligence-Learning/tree/master/week4-neural-networks)
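For reference, here is a minimal sketch in plain JavaScript of the gradient descent step discussed in the video: each weight and bias is nudged by learning rate x error x gradient, and the result is tested on the XOR dataset. This is not the exact Matrix-based library code from the video; the 2-4-1 network shape, variable names, and learning rate are illustrative assumptions.

// Minimal gradient-descent / backpropagation sketch (not the video's library code).
// Assumed shape: 2 inputs -> 4 hidden -> 1 output, sigmoid activation, lr = 0.1.
function sigmoid(x) { return 1 / (1 + Math.exp(-x)); }
function dsigmoid(y) { return y * (1 - y); } // derivative given a sigmoid output

const nIn = 2, nHid = 4;                      // assumed network shape
const rand = () => Math.random() * 2 - 1;
const wIH = Array.from({ length: nHid }, () => Array.from({ length: nIn }, rand));
const bH  = Array.from({ length: nHid }, rand);
let wHO   = Array.from({ length: nHid }, rand);
let bO    = rand();
const lr  = 0.1;                              // assumed learning rate

function feedforward(inputs) {
  const hidden = bH.map((b, j) =>
    sigmoid(inputs.reduce((sum, x, i) => sum + x * wIH[j][i], b)));
  const output = sigmoid(hidden.reduce((sum, h, j) => sum + h * wHO[j], bO));
  return { hidden, output };
}

function train(inputs, target) {
  const { hidden, output } = feedforward(inputs);

  // Gradient descent at the output: delta = lr * error * sigmoid'(output)
  const errO  = target - output;
  const gradO = lr * errO * dsigmoid(output);

  // Hidden errors: the output error pushed back through the (pre-update) output weights
  const errH = wHO.map(w => errO * w);

  // Adjust output weights; the bias moves by just the gradient (its "input" is 1)
  wHO = wHO.map((w, j) => w + gradO * hidden[j]);
  bO += gradO;

  // Same update one layer down, for the hidden weights and biases
  hidden.forEach((h, j) => {
    const gradH = lr * errH[j] * dsigmoid(h);
    wIH[j] = wIH[j].map((w, i) => w + gradH * inputs[i]);
    bH[j] += gradH;
  });
}

// XOR test: train on randomly picked examples, then print the predictions
const xor = [
  { inputs: [0, 0], target: 0 },
  { inputs: [0, 1], target: 1 },
  { inputs: [1, 0], target: 1 },
  { inputs: [1, 1], target: 0 },
];
for (let i = 0; i < 50000; i++) {
  const d = xor[Math.floor(Math.random() * xor.length)];
  train(d.inputs, d.target);
}
xor.forEach(d => console.log(d.inputs, feedforward(d.inputs).output.toFixed(3)));

After training, the outputs should be close to 0, 1, 1, 0; as discussed in the series, a small network like this can occasionally get stuck, so rerunning with a fresh random initialization may be needed.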
Support this channel on Patreon: https://patreon.com/codingtrain
To buy Coding Train merchandise: https://www.designbyhumans.com/shop/codingtrain/
To donate to the Processing Foundation: https://processingfoundation.org/
Send me your questions and coding challenges!: https://github.com/CodingTrain/Rainbow-Topics
Contact:
Twitter: https://twitter.com/shiffman
The Coding Train website: http://thecodingtrain.com/
Links discussed in this video:
The Coding Train on Amazon: https://www.amazon.com/shop/thecodingtrain
Deeplearn.js: https://deeplearnjs.org/
Stochastic Gradient Descent on Wikipedia: https://en.wikipedia.org/wiki/Stochastic_gradient_descent
Videos mentioned in this video:
My Neural Networks series: https://www.youtube.com/playlist?list=PLRqwX-V7Uu6aCibgK1PTWWu9by6XFdCfh
3Blue1Brown Neural Networks playlist: https://www.youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi
Source Code for all the Video Lessons: https://github.com/CodingTrain/Rainbow-Code
p5.js: https://p5js.org/
Processing: https://processing.org
The Nature of Code playlist: https://www.youtube.com/user/shiffman/playlists?shelf_id=6&view=50&sort=dd
For More Coding Challenges: https://www.youtube.com/playlist?list=PLRqwX-V7Uu6ZiZxtDDRCi6uhfTH4FilpH
For More Intelligence and Learning: https://www.youtube.com/playlist?list=PLRqwX-V7Uu6YJ3XfHhT2Mm4Y5I99nrIKX
📄 Code of Conduct: https://github.com/CodingTrain/Code-of-Conduct