Scalable Kernel Methods via Doubly Stochastic Gradients

Subscribers: 349,000
Published on: ●
Video Link: https://www.youtube.com/watch?v=1eA9XN7gk20
Duration: 1:10:54
Views: 1,259

The general perception is that kernel methods are not scalable, and that neural nets are the methods of choice for nonlinear learning problems. Or have we simply not tried hard enough to scale kernel methods up? Here we propose an approach that scales up kernel methods using a novel concept called "doubly stochastic functional gradients". Our approach relies on the fact that many kernel methods can be expressed as convex optimization problems, which we solve by making two unbiased stochastic approximations to the functional gradient, one using random training points and another using random functions associated with the kernel, and then descending along this noisy functional gradient. We show that the function produced by this procedure after t iterations converges to the optimal function in the reproducing kernel Hilbert space at a rate of O(1/t), and achieves a generalization bound of O(1/√t). This double stochasticity also allows us to avoid keeping the support vectors and to run the algorithm with a small memory footprint that is linear in the number of iterations and independent of the data dimension. Our approach can readily scale kernel methods up to regimes that are currently dominated by neural nets. We show that our method can achieve performance competitive with neural nets on datasets such as 8 million handwritten digits from MNIST, 2.3 million energy materials from MolecularSpace, and 1 million photos from ImageNet.
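To make the two levels of randomness concrete, here is a minimal sketch (not the authors' reference code) of doubly stochastic functional gradient descent for squared-loss regression with an RBF kernel, where the "random functions" are random Fourier features. The function names, step-size schedule, regularization constant, and the toy data below are illustrative assumptions, not details taken from the talk.

```python
# Sketch: at each iteration, sample one random training point AND one random
# feature of the kernel, then take a functional gradient step. The learned
# function is f(x) = sum_i alpha_i * phi_i(x), so memory grows only with the
# number of iterations, not with the dataset size or support vectors.
import numpy as np

def doubly_stochastic_sgd(X, y, sigma=1.0, reg=1e-4, n_iters=1000, step0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    omegas = np.empty((n_iters, d))   # random feature directions, one per iteration
    biases = np.empty(n_iters)        # random phases
    alphas = np.zeros(n_iters)        # coefficients of the learned function

    def feature(x, i):
        # Random Fourier feature phi_i(x) = sqrt(2) * cos(omega_i . x + b_i);
        # in expectation over omega ~ N(0, I / sigma^2) and b ~ Unif[0, 2*pi],
        # phi_i(x) * phi_i(x') recovers the Gaussian RBF kernel k(x, x').
        return np.sqrt(2.0) * np.cos(omegas[i] @ x + biases[i])

    def predict(x, upto):
        # Evaluate f(x) using the features sampled so far.
        return sum(alphas[i] * feature(x, i) for i in range(upto))

    for t in range(n_iters):
        j = rng.integers(n)                                 # random training point
        omegas[t] = rng.normal(scale=1.0 / sigma, size=d)   # random function (feature)
        biases[t] = rng.uniform(0.0, 2.0 * np.pi)
        gamma = step0 / (1.0 + t)                           # decaying step size
        residual = predict(X[j], t) - y[j]                  # gradient of the squared loss
        alphas[:t] *= (1.0 - gamma * reg)                   # shrink old coefficients (regularizer)
        alphas[t] = -gamma * residual * feature(X[j], t)    # new coefficient for this iteration

    return lambda x: predict(x, n_iters)

if __name__ == "__main__":
    # Toy 1-D regression problem, purely for illustration.
    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(500, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)
    f = doubly_stochastic_sgd(X, y, sigma=1.0, n_iters=1000)
    print("prediction at x=0:", f(np.array([0.0])), "(true value is about 0)")
```

The point of the sketch is the storage pattern: only one coefficient and one random seed (feature) are added per iteration, so nothing resembling a support-vector set or a kernel matrix is ever kept in memory.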




Other Videos By Microsoft Research


2016-06-21: DNN-Based Online Speech Enhancement Using Multitask Learning and Suppression Rule Estimation
2016-06-21: Computing Reliably with Molecular Walkers
2016-06-21: Challenges in automated verification and synthesis for molecular programming
2016-06-21: Decompositions of Natural Signals Still Need Fresh Ideas
2016-06-21: Provable Non-convex Projections for High-dimensional Learning Problems - Part 2
2016-06-21: Optimal Design for Social Learning
2016-06-21: Introduction to Machine Learning in Python with Scikit-Learn
2016-06-21: Lab tutorial: Learning about Shape
2016-06-21: Improving Behavioral Health Intervention Technologies: Harnessing the Human Side of Apps
2016-06-21: Charles River Crypto Day - The Power of Negations in Cryptography
2016-06-21: Scalable Kernel Methods via Doubly Stochastic Gradients
2016-06-21: Molecular Tumor Boards: What they are; What they do; What they need
2016-06-21: The Contextual Bandits Problem: A New, Fast, and Simple Algorithm
2016-06-21: Cell cycle switching by an algorithm
2016-06-21: Provable Non-convex Projections for High-dimensional Learning Problems - Part 1
2016-06-21: Northwest Probability Seminar 2014 - Sampling from the Fomin-Kirillov distribution
2016-06-21: Computer Vision - StAR Lecture Series: Object Recognition
2016-06-21: Local Deep Kernel Learning for Efficient Non-linear SVM Prediction
2016-06-21: An Introduction to Concentration Inequalities and Statistical Learning Theory
2016-06-21: Intelligible Machine Learning Models for HealthCare
2016-06-21: An evolutionary view on information processing in cells



Tags:
microsoft research
machine learning
deep neural networks