A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2015; you can also visit the original URL.
The file type is application/pdf.
Complete dictionary recovery over the sphere
2015
2015 International Conference on Sampling Theory and Applications (SampTA)
This particular geometric structure allows us to design a Riemannian trust-region algorithm over the sphere that provably converges to a local minimizer from an arbitrary initialization, despite the ...
This recovery problem is central to the theoretical understanding of dictionary learning, which seeks a sparse representation for a collection of input signals, and finds numerous applications in modern ...
minimizers over the sphere S^{n−1}. ...
doi:10.1109/sampta.2015.7148922
fatcat:oygnuyi5s5gehiiokqjoo4hbju
Complete Dictionary Recovery over the Sphere
[article]
2015
arXiv
pre-print
This particular geometric structure allows us to design a Riemannian trust-region algorithm over the sphere that provably converges to a local minimizer from an arbitrary initialization, despite the ...
This recovery problem is central to the theoretical understanding of dictionary learning, which seeks a sparse representation for a collection of input signals, and finds numerous applications in modern ...
minimizers over the sphere S^{n−1}. ...
arXiv:1504.06785v3
fatcat:r2bxpqxeqnhxxj7ng47otd3rc4
Complete Dictionary Recovery Over the Sphere II: Recovery by Riemannian Trust-Region Method
2017
IEEE Transactions on Information Theory
This recovery problem is central to theoretical understanding of dictionary learning, which seeks a sparse representation for a collection of input signals and finds numerous applications in modern signal ...
We consider the problem of recovering a complete (i.e., square and invertible) matrix A_0 from Y ∈ R^{n×p} with Y = A_0 X_0, provided X_0 is sufficiently sparse. ...
JS thanks the Wei Family Private Foundation for their generous support. ...
doi:10.1109/tit.2016.2632149
fatcat:vfu2lrtvifb75bqddft4sl6hwe
Complete Dictionary Recovery Over the Sphere I: Overview and the Geometric Picture
2017
IEEE Transactions on Information Theory
In a companion paper (arXiv:1511.04777), we design a second-order trust-region algorithm over the sphere that provably converges to a local minimizer from arbitrary initializations, despite the presence ...
This recovery problem is central to theoretical understanding of dictionary learning, which seeks a sparse representation for a collection of input signals and finds numerous applications in modern signal ...
JS thanks the Wei Family Private Foundation for their generous support. ...
doi:10.1109/tit.2016.2632162
fatcat:4dnbtv2725bhxddmguarwug2b4
Complete Dictionary Recovery Using Nonconvex Optimization
2015
International Conference on Machine Learning
This recovery setting is central to the theoretical understanding of dictionary learning. ...
We consider the problem of recovering a complete (i.e., square and invertible) dictionary A_0 from Y = A_0 X_0 with Y ∈ R^{n×p}. ...
We thank the area chair and the anonymous reviewers for making painstaking effort to read our long proofs and providing insightful feedback. ...
dblp:conf/icml/SunQW15
fatcat:ko6g4tqutbexlmu6vbzxxg6eby
Efficient Sparse Coding using Hierarchical Riemannian Pursuit
[article]
2021
arXiv
pre-print
Recent works on sparse coding with a complete dictionary provide strong theoretical guarantees thanks to developments in non-convex optimization. ...
The proposed scheme leverages the global and local Riemannian geometry of the two-stage optimization problem and facilitates fast implementation for superb dictionary recovery performance by a finite number ...
Acknowledgment: The authors would like to thank Professor Jianfeng Cai of the HKUST Math Department for discussions on the properties of the orthogonal group and the sphere. ...
arXiv:2104.10314v4
fatcat:745kyjbn5bcf3mggyxvq5s5zhy
Complete Dictionary Learning via ℓ_p-norm Maximization
2020
Conference on Uncertainty in Artificial Intelligence
We further show the efficacy of the developed algorithm: for the population GPM algorithm over the sphere constraint, it first quickly enters the neighborhood of a global maximizer, and then converges ...
In this paper, we investigate a family of ℓ_p-norm (p > 2, p ∈ N) maximization approaches for the complete dictionary learning problem from theoretical and algorithmic aspects. ...
First, it has been demonstrated that the performance of complete bases is competitive with that of over-complete dictionaries in real applications [4]. ...
dblp:conf/uai/ShenX0L020
fatcat:clbr7hppnffcfhalckm7j7qrre
Finding the Sparsest Vectors in a Subspace: Theory, Algorithms, and Applications
[article]
2020
arXiv
pre-print
recovery, dictionary learning, sparse blind deconvolution, and many other problems in signal processing and machine learning. ...
However, in contrast to the classical sparse recovery problem, the most natural formulation for finding the sparsest vector in a subspace is usually nonconvex. ...
Wright, “Complete dictionary recovery over the sphere,” arXiv preprint arXiv:1504.06785, 2015. [20] S. Burer and R. D. ...
arXiv:2001.06970v1
fatcat:zluhhl3635bzrnnk7fjw5tvi7a
Analysis versus synthesis in signal priors
2007
Inverse Problems
We show that although the two become equivalent when reduced to the complete and under-complete formulations, in the more interesting over-complete formulation the two types depart. ...
The concept of prior probability for signals plays a key role in the successful solution of many inverse problems. ...
Acknowledgements The authors would like to thank Prof. David L. Donoho for the enlightening discussions and ideas which helped in developing the presented work. ...
doi:10.1088/0266-5611/23/3/007
fatcat:ouz4i552z5blnimrrjezgievqu
Complete Dictionary Learning via ℓ_p-norm Maximization
[article]
2020
arXiv
pre-print
We further show the efficacy of the developed algorithm: for the population GPM algorithm over the sphere constraint, it first quickly enters the neighborhood of a global maximizer, and then converges ...
In this paper, we investigate a family of ℓ_p-norm (p>2,p ∈N) maximization approaches for the complete dictionary learning problem from theoretical and algorithmic aspects. ...
First, it has been demonstrated that the performance of complete bases is competitive with that of over-complete dictionaries in real applications. ... Specifically, we assume that each sample y_i ∈ R^n is generated ...
arXiv:2002.10043v3
fatcat:q6cbw65n4vbgbhpckhaqyqhzmm
An M* Proxy for Sparse Recovery Performance
[article]
2020
arXiv
pre-print
This paper provides a new tractable lower bound for the sparse recovery threshold of sensing matrices. ...
First, it serves as regularization for the classical dictionary learning problem in order to learn dictionaries with better generalisation properties on unseen data. ...
The authors would like to acknowledge support from the data science joint research initiative with the fonds AXA pour la recherche and Kamet ...
arXiv:1810.02748v2
fatcat:lk4gq73iffeolcwlnzarl3kcga
Learning Dictionaries With Bounded Self-Coherence
2012
IEEE Signal Processing Letters
We present a dictionary learning method with an effective control over the self-coherence of the trained dictionary, enabling a trade-off between maximizing the sparsity of codings and approximating an ...
A high coherence to the signal class enables the sparse coding of signal observations with a small approximation error, while a low self-coherence of the atoms guarantees atom recovery and a more rapid ...
From Bases to Over-Complete Dictionaries: An orthonormal basis B ∈ R^{D×D} contains D mutually orthogonal unit ℓ_2-norm atoms spanning the feature space R^D. ...
doi:10.1109/lsp.2012.2223757
fatcat:3xfhtpgu4vbu3mmsyqliskucdu
Sparse linear representation
2009
2009 IEEE International Symposium on Information Theory
When the dictionary size is exponential in the dimension of the signal, the exact characterization of the optimal distortion is given as a function of the dictionary-size exponent and the number of reference ...
Roughly speaking, every signal is sparse if the dictionary size is exponentially large, no matter how small the exponent is. ...
ACKNOWLEDGMENTS The authors wish to thank Yuzhe Jin and Bhaskar Rao for stimulating discussions on their formulation of the sparse signal position recovery problem, which motivated the current work. ...
doi:10.1109/isit.2009.5205585
dblp:conf/isit/JeongK09
fatcat:4tp3wnfm75dp7ekcmxs7hyhr6m
Sparse Linear Representation
[article]
2009
arXiv
pre-print
When the dictionary size is exponential in the dimension of the signal, the exact characterization of the optimal distortion is given as a function of the dictionary-size exponent and the number of reference ...
Roughly speaking, every signal is sparse if the dictionary size is exponentially large, no matter how small the exponent is. ...
ACKNOWLEDGMENTS The authors wish to thank Yuzhe Jin and Bhaskar Rao for stimulating discussions on their formulation of the sparse signal position recovery problem, which motivated the current work. ...
arXiv:0905.1990v2
fatcat:wl5cyddq2nhg3oshpdrls3gl5a
Dictionary-Sparse Recovery From Heavy-Tailed Measurements
[article]
2021
arXiv
pre-print
The recovery of signals that are sparse not in a basis, but rather sparse with respect to an over-complete dictionary is one of the most flexible settings in the field of compressed sensing with numerous ...
available for the restrictive class of sub-Gaussian measurement vectors as far as the recovery of dictionary-sparse signals via ℓ_1-synthesis is concerned. ...
An important question in the context of the recovery of signals that have sparse coefficients with respect to an over-complete dictionary D = [d_1, . . . , d_N] ∈ ℝ^{n×N} with n ≪ N is which dictionaries are ...
arXiv:2101.08298v2
fatcat:ems62kfiuvei5l4ppzvaoohslm
Showing results 1 — 15 out of 11,442 results