On the Power of Preconditioning in Sparse Linear Regression

Published on 2022-10-08 • Video Link: https://www.youtube.com/watch?v=yeb1r4a7SNY



Duration: 34:35
433 views


Raghu Meka (UCLA)
https://simons.berkeley.edu/talks/power-preconditioning-sparse-linear-regression
Joint IFML/Data-Driven Decision Processes Workshop

Sparse linear regression is a fundamental problem in high-dimensional statistics, but strikingly little is known about how to solve it efficiently without restrictive conditions on the design matrix. This talk will focus on the (correlated) random design setting, where the covariates are drawn independently from a multivariate Gaussian and the ground-truth signal is sparse. Information-theoretically, one can achieve strong error bounds with O(k log n) samples for arbitrary covariance matrices and sparsity k; however, no efficient algorithms are known to match these guarantees even with o(n) samples in general. As for hardness, computational lower bounds are known only for worst-case design matrices. Random-design instances are known that are hard for the Lasso, but these instances can generally be solved by the Lasso after a simple change of basis (i.e., preconditioning). In the talk, we will discuss new upper and lower bounds clarifying the power of preconditioning in sparse linear regression. First, we show that the preconditioned Lasso can solve a large class of sparse linear regression problems nearly optimally: it succeeds whenever the dependency structure of the covariates, in the sense of the Markov property, has low treewidth -- even if the covariance matrix is highly ill-conditioned. Second, we construct (for the first time) random-design instances which are provably hard for an optimally preconditioned Lasso. Based on joint work with Jonathan Kelner, Frederic Koehler, and Dhruv Rohatgi.
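For readers who want to see the change-of-basis mechanics concretely, here is a minimal Python sketch (using numpy and scikit-learn). Everything in it is illustrative rather than taken from the talk: the covariance construction, the regularization level, and the diagonal preconditioner are assumptions chosen only to show how a preconditioned Lasso is run, not the treewidth-based preconditioner from the results described above.

# Illustrative sketch (not the talk's algorithm): a "preconditioned Lasso"
# fits an ordinary Lasso after a change of basis X -> X @ S and maps the
# solution back via beta_hat = S @ w_hat. Here S is just a diagonal rescaling;
# the talk's positive result builds S from the Markov / low-treewidth
# structure of the covariance, which is not reproduced here.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d, k = 200, 500, 5                      # samples, dimension, sparsity

# Correlated Gaussian design: AR(1) correlations with very uneven column scales,
# so the covariance is ill-conditioned even though its dependency graph is a path.
rho = 0.9
idx = np.arange(d)
corr = rho ** np.abs(idx[:, None] - idx[None, :])
scales = 10.0 ** rng.uniform(-2, 2, size=d)
Sigma = corr * np.outer(scales, scales)
X = rng.standard_normal((n, d)) @ np.linalg.cholesky(Sigma).T

# Sparse ground truth and noisy responses.
beta = np.zeros(d)
support = rng.choice(d, size=k, replace=False)
beta[support] = rng.normal(size=k) / scales[support]   # roughly unit-size contribution per active coordinate
y = X @ beta + 0.1 * rng.standard_normal(n)

def preconditioned_lasso(X, y, S, alpha):
    """Run the Lasso in the transformed basis X @ S, then map the solution back."""
    model = Lasso(alpha=alpha, fit_intercept=False, max_iter=50_000)
    model.fit(X @ S, y)
    return S @ model.coef_

beta_plain = preconditioned_lasso(X, y, np.eye(d), alpha=0.1)                      # identity = plain Lasso
beta_prec = preconditioned_lasso(X, y, np.diag(1.0 / X.std(axis=0)), alpha=0.1)    # diagonal preconditioner

for name, b in [("plain Lasso", beta_plain), ("diagonally preconditioned Lasso", beta_prec)]:
    err = np.linalg.norm(b - beta)
    nonzeros = np.flatnonzero(np.abs(b) > 1e-3)
    print(f"{name}: l2 error {err:.3f}, {len(nonzeros)} nonzero coordinates")

The wrapper accepts any invertible change of basis S; the talk's lower bound concerns instances where no choice of S makes the Lasso succeed, which this toy example does not attempt to reproduce.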




Other Videos By Simons Institute for the Theory of Computing


2022-10-12  A Game-Theoretic Approach to Offline Reinforcement Learning
2022-10-11  The Statistical Complexity of Interactive Decision Making
2022-10-11  A Tutorial on Finite-Sample Guarantees of Contractive Stochastic Approximation With...
2022-10-11  A Tutorial on Finite-Sample Guarantees of Contractive Stochastic Approximation With...
2022-10-11  Stochastic Bin Packing with Time-Varying Item Sizes
2022-10-10  Constant Regret in Exchangeable Action Models: Overbooking, Bin Packing, and Beyond
2022-10-08  On The Exploration In Load-Balancing Under Unknown Service Rates
2022-10-08  Sample Complexity Of Policy-Based Methods Under Off-Policy Sampling And ...
2022-10-08  The Compensated Coupling (or Why the Future is the Best Guide for the Present)
2022-10-08  Higher-Dimensional Expansion of Random Geometric Complexes
2022-10-08  On the Power of Preconditioning in Sparse Linear Regression
2022-10-07  What Functions Do Transformers Prefer to Represent?
2022-10-01  Optimality of Variational Inference for Stochastic Block Model
2022-10-01  Machine Learning on Large-Scale Graphs
2022-10-01  Survey on Sparse Graph Limits + A Toy Example
2022-10-01  Long Range Dependence in Evolving Networks
2022-09-30  Stochastic Processes on Sparse Graphs: Hydrodynamic Limits and Markov Approximations
2022-09-30  Large Deviation Principle for the Norm of the Adjacency Matrix and the Laplacian Matrix of...
2022-09-30  Longitudinal Network Models, Log-Linear Multigraph Models, and Implications to Estimation and...
2022-09-30  Graphon Games
2022-09-30  Vertexons and Embedded Graphon Mean Field Games



Tags:
Simons Institute
theoretical computer science
UC Berkeley
Computer Science
Theory of Computation
Theory of Computing
Joint IFML/Data-Driven Decision Processes Workshop
Raghu Meka