Priors for Deep Networks: Limit theorems, pitfalls, open questions

Subscribers: 344,000
Video Link: https://www.youtube.com/watch?v=k5hb4V73RY0
Duration: 21:28
Views: 1,499
Likes: 14


Much research in Bayesian Deep Learning is about approximating the posterior; with some notable exceptions, the choice of prior receives less attention. Focusing on this second direction, we discuss recent work on central limit theorems for neural networks with more than one hidden layer, some thoughts on over-confident extrapolation, and the dangers of improper priors that appear in the literature.
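As a rough illustration of the kind of limit behind such results (a minimal sketch, not taken from the talk): for a two-hidden-layer MLP with i.i.d. Gaussian weights scaled by one over the square root of the layer width, the prior distribution of the output at a fixed input approaches a Gaussian as the width grows. The function name below is only illustrative.

```python
# Minimal sketch (not from the talk): prior samples of a 2-hidden-layer MLP
# output at one fixed input, with i.i.d. N(0, 1/width)-scaled weights.
# As width grows, the output distribution looks increasingly Gaussian.
import numpy as np

rng = np.random.default_rng(0)

def prior_output_samples(width, n_samples=5000, d_in=3):
    x = np.ones(d_in)                       # fixed input point
    outs = np.empty(n_samples)
    for s in range(n_samples):
        W1 = rng.normal(0, 1, (width, d_in)) / np.sqrt(d_in)
        W2 = rng.normal(0, 1, (width, width)) / np.sqrt(width)
        w3 = rng.normal(0, 1, width) / np.sqrt(width)
        h1 = np.tanh(W1 @ x)                # first hidden layer
        h2 = np.tanh(W2 @ h1)               # second hidden layer
        outs[s] = w3 @ h2                   # scalar network output
    return outs

for width in (5, 50, 500):
    samples = prior_output_samples(width)
    # Excess kurtosis near 0 is one crude indicator of Gaussianity.
    kurt = np.mean((samples - samples.mean())**4) / samples.var()**2 - 3.0
    print(f"width={width:4d}  std={samples.std():.3f}  excess kurtosis={kurt:+.3f}")
```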

See more at https://www.microsoft.com/en-us/research/video/priors-deep-networks-limit-theorems-pitfalls-open-questions/

Tags:
microsoft research