NIPS 2011 Sparse Representation & Low-rank Approximation Workshop: Dictionary-Dependent Penalties...

Subscribers: 348,000
Video Link: https://www.youtube.com/watch?v=kPQLNnoX7rA
Duration: 31:52
Views: 3,417
Likes: 19

Sparse Representation and Low-rank Approximation Workshop at NIPS 2011

Invited Talk: Dictionary-Dependent Penalties for Sparse Estimation and Rank Minimization by David Wipf, University of California at San Diego

Abstract: In the majority of recent work on sparse estimation algorithms, performance has been evaluated using ideal or quasi-ideal dictionaries (e.g., random Gaussian or Fourier) characterized by unit L2-norm, incoherent columns or features. But these types of dictionaries represent only a subset of those actually used in practice (largely restricted to idealized compressive sensing applications). In contrast, herein sparse estimation is considered in the context of structured dictionaries possibly exhibiting high coherence between arbitrary groups of columns and/or rows. Sparse penalized regression models are analyzed with the purpose of finding, to the extent possible, regimes of dictionary-invariant performance. In particular, a class of non-convex, Bayesian-inspired estimators with dictionary-dependent sparsity penalties is shown to have a number of desirable invariance properties, leading to provable advantages over more conventional penalties such as the L1 norm, especially in areas where existing theoretical recovery guarantees no longer hold. This can translate into improved performance in applications such as model selection with correlated features, source localization, and compressive sensing with constrained measurement directions. Moreover, the underlying methodology naturally extends to related rank minimization problems.
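To make the setting concrete, here is a minimal Python/NumPy sketch contrasting a plain L1-penalized regression (solved with ISTA) against a generic iteratively reweighted-L1 scheme on a dictionary whose columns share a strong common component and are therefore highly coherent. The dictionary construction, the 1/(|x| + eps) weight update, and all parameter values are illustrative assumptions; the talk's actual dictionary-dependent, Bayesian-inspired penalties are not reproduced here, and the sketch only illustrates the general non-convex reweighting mechanism that the abstract contrasts with the plain L1 norm.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(z, t):
    # Element-wise soft-thresholding; t may be a vector of per-coefficient thresholds.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def weighted_ista(A, y, lam, w, iters=400):
    # ISTA for min_x 0.5*||y - A x||^2 + lam * sum_i w_i * |x_i|.
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient of the smooth term
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam * w / L)
    return x

def reweighted_l1(A, y, lam, outer=8, eps=1e-2):
    # Generic iteratively reweighted L1 (illustrative only): large coefficients get
    # progressively smaller penalties, approximating a non-convex sparsity penalty
    # through a sequence of convex weighted-Lasso subproblems.
    w = np.ones(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(outer):
        x = weighted_ista(A, y, lam, w)
        w = 1.0 / (np.abs(x) + eps)
    return x

# Structured dictionary: every column shares a strong common component, so the
# columns are highly coherent (far from the idealized random Gaussian setting).
n, m, k = 40, 100, 5
A = rng.standard_normal((n, m)) + 3.0 * rng.standard_normal((n, 1))
A /= np.linalg.norm(A, axis=0)              # unit L2-norm columns

x_true = np.zeros(m)
x_true[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(n)

x_l1 = weighted_ista(A, y, lam=0.05, w=np.ones(m))    # plain L1 penalty
x_rw = reweighted_l1(A, y, lam=0.05)                   # reweighted (non-convex surrogate)
for name, x_hat in [("plain L1", x_l1), ("reweighted L1", x_rw)]:
    support_errors = np.sum((np.abs(x_hat) > 1e-3) != (x_true != 0))
    print(f"{name:13s} support errors: {support_errors}")
```

The shared-component dictionary is just one simple way to induce high coherence between columns; the dictionary-dependent penalties discussed in the talk would additionally adapt the penalty itself to the structure of A rather than relying on a coefficient-only weight update.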




Other Videos By Google TechTalks


2012-02-13 NIPS 2011 Big Learning - Algorithms, Systems, & Tools Workshop: Large-Scale Matrix...
2012-02-13 NIPS 2011 Big Learning - Algorithms, Systems, & Tools Workshop: Randomized Smoothing for...
2012-02-13 NIPS 2011 Big Learning - Algorithms, Systems, & Tools Workshop: Machine Learning's Role...
2012-02-13 NIPS 2011 Big Learning - Algorithms, Systems, & Tools Workshop: Fast Cross-Validation...
2012-02-13 NIPS 2011 Big Learning - Algorithms, Systems, & Tools Workshop: High-Performance Computing...
2012-02-13 NIPS 2011 Big Learning - Algorithms, Systems, & Tools Workshop: Towards Human Behavior...
2012-02-13 NIPS 2011 Big Learning - Algorithms, Systems, & Tools Workshop: Parallelizing Training...
2012-02-13 NIPS 2011 Big Learning Workshop - Algorithms, Systems, & Tools for Learning at Scale: NeuFlow...
2012-02-13 NIPS 2011 Big Learning - Algorithms, Systems, & Tools Workshop: Bootstrapping Big Data...
2012-02-13 NIPS 2011 Big Learning - Algorithms, Systems, & Tools Workshop: Big Machine Learning...
2012-02-09 NIPS 2011 Sparse Representation & Low-rank Approximation Workshop: Dictionary-Dependent Penalties...
2012-02-09 NIPS 2011 Sparse Representation & Low-rank Approximation Workshop: Robust Sparse Analysis...
2012-02-09 NIPS 2011 Sparse Representation & Low-rank Approximation Workshop: Local Analysis...
2012-02-09 NIPS 2011 Sparse Representation & Low-rank Approximation Workshop: Recovery of a Sparse...
2012-02-08 NIPS 2011 Sparse Representation & Low-rank Approximation Workshop: Fast global convergence...
2012-02-08 NIPS 2011 Sparse Representation & Low-rank Approximation Workshop: For Transform Invariant...
2012-02-08 NIPS 2011 Sparse Representation & Low-rank Approximation Workshop: Fast Approximation...
2012-02-08 NIPS 2011 Sparse Representation & Low-rank Approximation Workshop: Online Spectral...
2012-02-08 NIPS 2011 Sparse Representation & Low-rank Approximation Workshop: Fast & Memory...
2012-02-08 NIPS 2011 Sparse Representation & Low-rank Approximation Workshop: Divide-and-Conquer...
2012-02-08 NIPS 2011 Sparse Representation & Low-rank Approximation Workshop: Coordinate Descent...



Tags: new, sparse, morning