Principles of Riemannian Geometry in Neural Networks | TDLS
Toronto Deep Learning Series, 13 August 2018
For slides and more information, visit https://aisc.ai.science/events/2018-08-13/
Paper Review: https://papers.nips.cc/paper/6873-principles-of-riemannian-geometry-in-neural-networks.pdf
Speaker: https://www.linkedin.com/in/helen-ngo/
Organizer: https://www.linkedin.com/in/amirfz/
Host: Dessa
Paper abstract:
This paper interprets neural networks from a less common perspective, with the goal of formalizing a theoretical basis grounded in Riemannian geometry: neural networks are framed as learning an optimal coordinate representation of the underlying data manifold, one in which the different target classes become separable by hyperplanes. We will show that residual neural networks are finite-difference approximations to dynamical systems governed by first-order differential equations. Parallels are drawn between backpropagation and the pullback, a linear map between the spaces of 1-forms on two smooth manifolds, which acts on the coordinate representation of the metric tensor between network layers.
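To make the finite-difference claim concrete, here is a minimal sketch (mine, not code from the paper or talk) of a residual block read as one forward-Euler step of the ODE dx/dt = f(x); the step size h, the toy map tanh(Wx), and all names are illustrative assumptions:

```python
import numpy as np

def residual_block(x, W, h):
    """One residual block, x_next = x + h * f(x): a forward-Euler step of
    the ODE dx/dt = f(x) with step size h. Here f is a toy one-layer map,
    tanh(W @ x); a trained network would learn a separate W per layer."""
    return x + h * np.tanh(W @ x)

rng = np.random.default_rng(0)
x = rng.normal(size=3)              # a data point, in input coordinates
W = 0.1 * rng.normal(size=(3, 3))   # illustrative (untrained) weights

# Stacking T blocks integrates the ODE from t = 0 to t = T * h:
# network depth plays the role of discretized time.
for _ in range(10):
    x = residual_block(x, W, h=0.1)
print(x)
```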
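For the pullback parallel, a worked equation in index notation (my notation, not the paper's; repeated indices are summed): if a layer is a smooth map phi: x -> y, then pulling back a metric g and backpropagating a loss gradient both act through the layer's Jacobian:

```latex
% A layer as a smooth map \phi : x \mapsto y. Pulling back a metric g
% from the output coordinates to the input coordinates:
(\phi^{*} g)_{ij} \;=\; \frac{\partial y^{a}}{\partial x^{i}}
                        \frac{\partial y^{b}}{\partial x^{j}} \, g_{ab}

% Backpropagation applies the same Jacobian factor to the 1-form dL,
% i.e. the gradient at one layer is the pullback of the gradient at the next:
\frac{\partial L}{\partial x^{i}} \;=\; \frac{\partial y^{a}}{\partial x^{i}}
                                        \frac{\partial L}{\partial y^{a}}
```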