LSH-Sampling Breaks the Computation Chicken-and-Egg Loop in Adaptive Stochastic Gradient Estimation

Subscribers: 344,000
Video link: https://www.youtube.com/watch?v=8hg07O1mDSA
Duration: 33:55
Views: 1,561

Stochastic Gradient Descent (SGD) is the most popular algorithm for large-scale optimization. In SGD, the gradient is estimated by uniform sampling with sample size one. Several results show that better gradient estimates, obtained through weighted non-uniform sampling, lead to faster convergence. Unfortunately, the per-iteration cost of maintaining such an adaptive distribution exceeds the cost of computing the exact gradient itself, creating a chicken-and-egg loop that renders the faster convergence useless. In this paper, we break this barrier by providing the first demonstration of a sampling scheme that yields superior gradient estimation while keeping the per-iteration sampling cost comparable to that of uniform sampling. Such a scheme is possible due to recent advances in the Locality Sensitive Hashing (LSH) literature. As a consequence, we improve the running time of all existing gradient descent algorithms.

See more at https://www.microsoft.com/en-us/research/video/lsh-sampling-breaks-the-computation-chicken-and-egg-loop-in-adaptive-stochastic-gradient-estimation/
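
To make the idea concrete, here is a minimal, illustrative Python sketch of LSH-driven adaptive sampling for least-squares SGD. It is not the paper's exact algorithm: the names (SimHashLSH, lsh_sgd), the choice of signed random projections (SimHash), the use of a single hash table, and the omission of the importance-weighting correction needed for an unbiased gradient estimate are all simplifying assumptions made here for illustration. What it demonstrates is the mechanism the abstract describes: examples are hashed once up front, and at each iteration the current parameter vector is hashed to retrieve a colliding example, so sampling is biased toward examples with large current gradient contribution at roughly the cost of a uniform draw.

import numpy as np

class SimHashLSH:
    """Single signed-random-projection (SimHash) table over a fixed dataset."""

    def __init__(self, data, num_bits=8, seed=0):
        rng = np.random.default_rng(seed)
        self.data = data
        # One random hyperplane per bit; the sign pattern of the projections
        # is the hash code.  This preprocessing cost is paid once, up front.
        self.planes = rng.standard_normal((num_bits, data.shape[1]))
        self._powers = 1 << np.arange(num_bits)
        codes = (data @ self.planes.T) > 0
        keys = codes @ self._powers          # pack each bit pattern into an integer bucket id
        self.buckets = {}
        for i, k in enumerate(keys):
            self.buckets.setdefault(int(k), []).append(i)

    def sample(self, query, rng):
        # Hash the query and draw uniformly from its bucket.  Collision
        # probability grows with the cosine similarity between query and data
        # point, so well-aligned examples are over-sampled, at a per-draw cost
        # that does not depend on the dataset size.
        key = int(((query @ self.planes.T) > 0) @ self._powers)
        bucket = self.buckets.get(key)
        if not bucket:                       # empty bucket: fall back to a uniform draw
            return int(rng.integers(len(self.data)))
        return bucket[int(rng.integers(len(bucket)))]

def lsh_sgd(X, y, steps=5000, lr=1e-2, num_bits=8, seed=0):
    """Toy LSH-sampled SGD for least squares, loss_i = 0.5 * (x_i . theta - y_i)^2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Index the augmented points [x_i, y_i]; querying with [theta, -1] makes the
    # inner product equal to the residual x_i . theta - y_i, so collisions are
    # biased toward examples that currently contribute large gradients.
    table = SimHashLSH(np.hstack([X, y[:, None]]), num_bits=num_bits, seed=seed)
    theta = np.zeros(d)
    for _ in range(steps):
        i = table.sample(np.append(theta, -1.0), rng)   # adaptive, LSH-driven pick
        grad = (X[i] @ theta - y[i]) * X[i]             # single-example gradient
        # A full scheme would reweight grad by the (estimable) sampling
        # probability to keep the estimator unbiased; omitted here for brevity.
        theta -= lr * grad
    return theta

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((5000, 20))
    true_theta = rng.standard_normal(20)
    y = X @ true_theta + 0.1 * rng.standard_normal(5000)
    theta_hat = lsh_sgd(X, y)
    print("parameter error:", np.linalg.norm(theta_hat - true_theta))

A fuller implementation would likely use several hash tables and an asymmetric transformation for maximum-inner-product search, and would derive importance weights from the collision probabilities; the sketch above keeps only the core sampling loop to show why the per-iteration cost stays close to uniform sampling.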

Tags:
microsoft research