PAC-Bayesian Machine Learning: Learning by Optimizing a Performance Guarantee

Video Link: https://www.youtube.com/watch?v=OkftNAp1_Fg
Duration: 1:27:09

The goal of machine learning algorithms is to produce predictors having the smallest possible risk (expected loss). Since the quantity to optimize (the risk) is defined only with respect to the data-generating distribution, and not with respect to the data itself, it is not obvious what should be optimized on the training data in order to produce a predictor having the smallest possible risk. A natural learning strategy, however, is to optimize a good guarantee on the risk, provided that such a guarantee can be computed efficiently from the available data. PAC-Bayes theory has recently emerged as a good framework for deriving such guarantees in the form of so-called risk bounds, which can be computed on the training data. In this talk, I will present several recent successes we have obtained using this approach: first derive a risk bound, then design a learning algorithm that finds a predictor minimizing that bound (and, consequently, achieving the best performance guarantee).
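The abstract does not specify which bound the talk uses, but a classical example of such a computable guarantee is a McAllester-style PAC-Bayes bound for the Gibbs predictor: with probability at least 1 - delta over the draw of n training examples, R(Q) <= r(Q) + sqrt((KL(Q||P) + ln(2*sqrt(n)/delta)) / (2n)), where r(Q) is the empirical risk of the posterior Q and P is the prior. The Python sketch below (the function name and the example numbers are illustrative, not from the talk) shows how such a bound is evaluated on training data:

import math

def pac_bayes_bound(emp_risk, kl, n, delta=0.05):
    # McAllester-style PAC-Bayes bound (one standard form; the talk may
    # rely on a tighter variant): with probability >= 1 - delta over the
    # draw of n training examples, the risk R(Q) of the Gibbs predictor
    # is at most emp_risk + sqrt((kl + ln(2*sqrt(n)/delta)) / (2n)),
    # where kl = KL(Q || P) between the posterior Q and the prior P.
    return emp_risk + math.sqrt(
        (kl + math.log(2.0 * math.sqrt(n) / delta)) / (2.0 * n)
    )

# Illustrative numbers: empirical risk 0.10 and KL(Q||P) = 5.0 on
# n = 10,000 examples yield a risk guarantee of roughly 0.126.
print(pac_bayes_bound(emp_risk=0.10, kl=5.0, n=10_000))

Minimizing such a bound over the posterior Q trades the empirical risk off against the KL divergence to the prior, which is exactly the bound-minimization learning strategy the abstract describes.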

Tags: microsoft research