NIPS 2011 Sparse Representation & Low-rank Approximation Workshop: Robust Sparse Analysis...

Subscribers: 348,000
Published on: 2012-02-09
Video Link: https://www.youtube.com/watch?v=sSU6DTXunJs



Duration: 38:49
Views: 1,360


Sparse Representation and Low-rank Approximation Workshop at NIPS 2011

Invited Talk: Robust Sparse Analysis Regularization by Gabriel Peyré, CNRS, CEREMADE, Université Paris-Dauphine

Abstract: In this talk I will detail several key properties of L1-analysis regularization for solving linear inverse problems. Most previous theoretical works consider sparse synthesis priors, where sparsity is measured as the L1 norm of the coefficients that synthesize the signal in a given dictionary. In contrast, the more general analysis regularization minimizes the L1 norm of the correlations between the signal and the atoms of the dictionary. The corresponding variational problem includes several well-known regularizations as special cases, such as the discrete total variation, the fused lasso, and sparse correlation with translation-invariant wavelets. I will first study the variations of the solution with respect to the observations and the regularization parameter, which enables the computation of a degrees-of-freedom estimator. I will then give a sufficient condition ensuring that a signal is the unique solution of the analysis regularization when the observations are noiseless; the same criterion ensures the robustness of the sparse analysis solution to a small noise in the observations. Lastly, I will define a stronger condition that ensures robustness to an arbitrary bounded noise. In the special case of synthesis regularization, our contributions recover known results, which are hence generalized to the analysis setting. I will illustrate these theoretical results on practical examples that study the robustness of the total variation, fused lasso, and translation-invariant wavelet regularizations. (Joint work with S. Vaiter, C. Dossal, and J. Fadili.)
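For concreteness, here is the variational problem the abstract contrasts, written out as an editorial addition (the notation follows common usage in the sparse-recovery literature, not necessarily the talk's slides). Synthesis regularization optimizes over coefficients alpha that synthesize the signal x = D alpha:

    \min_\alpha \tfrac{1}{2} \| y - \Phi D \alpha \|_2^2 + \lambda \| \alpha \|_1,

while analysis regularization penalizes the correlations D^* x between the signal and the dictionary atoms directly:

    \min_x \tfrac{1}{2} \| y - \Phi x \|_2^2 + \lambda \| D^* x \|_1.

Here y denotes the observations, \Phi the forward operator of the inverse problem, D the dictionary, and \lambda > 0 the regularization parameter. The two problems coincide when D is an orthogonal basis but differ for redundant dictionaries.

As a minimal numerical sketch of the analysis setting (an editorial illustration, not code from the talk): taking \Phi = Id and the analysis operator to be the 1-D finite difference makes the L1 penalty the discrete total variation mentioned in the abstract, and the resulting denoising problem can be solved with a standard ADMM splitting. All function names below are illustrative.

import numpy as np

def tv_denoise_admm(y, lam, rho=1.0, n_iter=300):
    # Solves  min_x 0.5*||y - x||_2^2 + lam*||D x||_1   (Phi = Id),
    # where D is the (n-1) x n first-difference analysis operator,
    # via ADMM with the splitting z = D x.
    n = y.size
    D = np.diff(np.eye(n), axis=0)          # (D x)_i = x_{i+1} - x_i
    A = np.eye(n) + rho * (D.T @ D)         # x-update system matrix (fixed)
    z = np.zeros(n - 1)                     # split variable, z ~ D x
    u = np.zeros(n - 1)                     # scaled dual variable
    x = y.copy()
    for _ in range(n_iter):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))
        Dx = D @ x
        v = Dx + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft-threshold
        u += Dx - z
    return x

# Piecewise-constant signal: the TV analysis prior should recover the plateaus.
rng = np.random.default_rng(0)
truth = np.concatenate([np.zeros(60), np.ones(80), -0.5 * np.ones(60)])
y = truth + 0.1 * rng.standard_normal(truth.size)
x_hat = tv_denoise_admm(y, lam=0.5)
print("RMSE:", np.sqrt(np.mean((x_hat - truth) ** 2)))

The recovered signal is close to piecewise constant, which is exactly the structure the total-variation analysis prior promotes; sweeping lam trades data fidelity against the number of jumps.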




Other Videos By Google TechTalks


2012-02-13  NIPS 2011 Big Learning - Algorithms, Systems, & Tools Workshop: Randomized Smoothing for...
2012-02-13  NIPS 2011 Big Learning - Algorithms, Systems, & Tools Workshop: Machine Learning's Role...
2012-02-13  NIPS 2011 Big Learning - Algorithms, Systems, & Tools Workshop: Fast Cross-Validation...
2012-02-13  NIPS 2011 Big Learning - Algorithms, Systems, & Tools Workshop: High-Performance Computing...
2012-02-13  NIPS 2011 Big Learning - Algorithms, Systems, & Tools Workshop: Towards Human Behavior...
2012-02-13  NIPS 2011 Big Learning - Algorithms, Systems, & Tools Workshop: Parallelizing Training...
2012-02-13  NIPS 2011 Big Learning Workshop - Algorithms, Systems, & Tools for Learning at Scale: NeuFlow...
2012-02-13  NIPS 2011 Big Learning - Algorithms, Systems, & Tools Workshop: Bootstrapping Big Data...
2012-02-13  NIPS 2011 Big Learning - Algorithms, Systems, & Tools Workshop: Big Machine Learning...
2012-02-09  NIPS 2011 Sparse Representation & Low-rank Approximation Workshop: Dictionary-Dependent Penalties...
2012-02-09  NIPS 2011 Sparse Representation & Low-rank Approximation Workshop: Robust Sparse Analysis...
2012-02-09  NIPS 2011 Sparse Representation & Low-rank Approximation Workshop: Local Analysis...
2012-02-09  NIPS 2011 Sparse Representation & Low-rank Approximation Workshop: Recovery of a Sparse...
2012-02-08  NIPS 2011 Sparse Representation & Low-rank Approximation Workshop: Fast global convergence...
2012-02-08  NIPS 2011 Sparse Representation & Low-rank Approximation Workshop: For Transform Invariant...
2012-02-08  NIPS 2011 Sparse Representation & Low-rank Approximation Workshop: Fast Approximation...
2012-02-08  NIPS 2011 Sparse Representation & Low-rank Approximation Workshop: Online Spectral...
2012-02-08  NIPS 2011 Sparse Representation & Low-rank Approximation Workshop: Fast & Memory...
2012-02-08  NIPS 2011 Sparse Representation & Low-rank Approximation Workshop: Divide-and-Conquer...
2012-02-08  NIPS 2011 Sparse Representation & Low-rank Approximation Workshop: Coordinate Descent...
2012-02-08  NIPS 2011 Sparse Representation & Low-rank Approximation Workshop: Automatic Relevance...



Tags: new, sparse, morning