4,229 Hits in 4.8 sec

New Probabilistic Bounds on Eigenvalues and Eigenvectors of Random Kernel Matrices [article]

Nima Reyhani, Hideitsu Hino, Ricardo Vigario
2012 arXiv   pre-print
In this paper, we improve earlier results on concentration bounds for eigenvalues of general kernel matrices.  ...  For distance and inner-product kernel functions, e.g. radial basis functions, we provide new concentration bounds, which are characterized by the eigenvalues of the sample covariance matrix.  ...  NR and RV were funded by the Academy of Finland through the Centres of Excellence Program 2006-2011.  ...
arXiv:1202.3761v1 fatcat:dfecgkcg7faonioz5u73gdcctu
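
As a hedged illustration (my own toy setup, not the paper's method), the quantity such bounds control can be observed numerically: suitably scaled eigenvalues of an RBF kernel matrix fluctuate only mildly across independent samples from the same distribution.

```python
# Minimal numpy sketch: watch the top eigenvalue of an RBF kernel matrix
# concentrate across independent draws from the same distribution.
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Pairwise squared distances, then the Gaussian/RBF kernel.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
n, d, trials = 200, 5, 20
top = np.array([
    np.linalg.eigvalsh(rbf_kernel(rng.normal(size=(n, d))))[-1]
    for _ in range(trials)
])
# The spread of lambda_max / n across trials is the kind of fluctuation
# that concentration bounds of this type quantify.
print("mean of lambda_max/n:", (top / n).mean(), "std:", (top / n).std())
```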

Online kernel PCA with entropic matrix updates

Dima Kuzmin, Manfred K. Warmuth
2007 Proceedings of the 24th international conference on Machine learning - ICML '07  
The updates involve a softmin calculation based on matrix logs and matrix exponentials. We show that these updates can be kernelized.  ...  The main problem we focus on is the kernelization of an online PCA algorithm which belongs to this family of updates.  ...  to the maximum k eigenvalues, it picks n − k eigenvectors probabilistically based on a softmin function of the eigenvalues and then chooses the complementary k eigenvectors as the projection matrix.  ... 
doi:10.1145/1273496.1273555 dblp:conf/icml/KuzminW07 fatcat:t4gwapfbpvdtzph6iyhqguapse
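
For intuition, here is a hedged numpy sketch of a matrix exponentiated-gradient ("softmin") update of the kind the abstract describes; it is deliberately simplified (no eigenvalue capping, no kernelization), so it illustrates the update family rather than the paper's algorithm.

```python
# Simplified matrix exponentiated-gradient update: W <- exp(log W - eta*G),
# renormalized to trace 1. Low-variance directions of the data gain weight,
# which is the "softmin" behavior mentioned in the abstract.
import numpy as np

def mlog(S):
    lam, U = np.linalg.eigh(S)
    return (U * np.log(lam)) @ U.T    # U diag(log lam) U^T

def mexp(S):
    lam, U = np.linalg.eigh(S)
    return (U * np.exp(lam)) @ U.T    # U diag(exp lam) U^T

rng = np.random.default_rng(1)
d, eta = 4, 0.1
scales = np.array([2.0, 1.0, 0.5, 0.1])   # anisotropic data variances
W = np.eye(d) / d                          # uniform density matrix as prior
for _ in range(50):
    x = rng.normal(size=d) * scales
    M = mexp(mlog(W) - eta * np.outer(x, x))
    W = M / np.trace(M)                    # keep W a density matrix (trace 1)
print(np.diag(W))   # weight concentrates on the low-variance coordinate
```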

On the Problem of Approximating the Eigenvalues of Undirected Graphs in Probabilistic Logspace [chapter]

Dean Doron, Amnon Ta-Shma
2015 Lecture Notes in Computer Science  
, probabilistic and quantum logspace computation.  ...  We therefore believe the problem of approximating the eigenvalues of an undirected graph is not only natural and important by itself, but also important for understanding the relative power of deterministic  ...  One can also ask which upper bounds are known on BPL.  ...
doi:10.1007/978-3-662-47672-7_34 fatcat:qpmyhm3yurdv5isjc3g67fndoa

Clustered Nyström Method for Large Scale Manifold Learning and Dimension Reduction

Kai Zhang, James T. Kwok
2010 IEEE Transactions on Neural Networks  
Our algorithm can be applied to scale up a wide variety of algorithms that depend on the eigenvalue decomposition of the kernel matrix (or its variants), such as kernel principal component analysis, Laplacian  ...  In this paper, we analyze how the approximating quality of the Nyström method depends on the choice of landmark points, and in particular the encoding powers of the landmark points in summarizing the data  ...  On the theoretical side, probabilistic error bounds have been studied in [26] and [32].  ...
doi:10.1109/tnn.2010.2064786 pmid:20805054 fatcat:mi3anoqssfggdlzxvfpjqgqwa4
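
A minimal sketch of the idea, under stated assumptions (RBF kernel, scikit-learn's KMeans as the clustering routine): the Nyström approximation K ≈ C W⁺ Cᵀ with k-means centers as the landmark points.

```python
# Nystrom approximation with k-means centers as landmarks (a sketch; the
# paper's analysis of landmark quality is not reproduced here).
import numpy as np
from sklearn.cluster import KMeans

def rbf(A, B, gamma=0.5):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
m = 20                                    # number of landmark points
Z = KMeans(n_clusters=m, n_init=10, random_state=0).fit(X).cluster_centers_
C = rbf(X, Z)                             # n x m cross-kernel block
W = rbf(Z, Z)                             # m x m landmark kernel
K_approx = C @ np.linalg.pinv(W) @ C.T    # rank-m Nystrom approximation
K = rbf(X, X)
print("relative error:", np.linalg.norm(K - K_approx) / np.linalg.norm(K))
```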

Spectral methods in machine learning and new strategies for very large datasets

M.-A. Belabbas, P. J. Wolfe
2009 Proceedings of the National Academy of Sciences of the United States of America  
Though these algorithms have different origins, each requires the computation of the principal eigenvectors and eigenvalues of a positive-definite kernel.  ...  Motivated by such applications, we present here 2 new algorithms for the approximation of positive-semidefinite kernels, together with error bounds that improve on results in the literature.  ...  simpler eigenvalue problem, and then to extend the eigenvectors obtained therewith by using complete knowledge of the kernel.  ... 
doi:10.1073/pnas.0810600105 pmid:19129490 pmcid:PMC2626709 fatcat:k5crs5jggrg53nnrgb5oi6zf7y
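
The "solve a simpler eigenvalue problem, then extend the eigenvectors" step the snippet describes is the classical Nyström extension; a hedged sketch under my own toy setup (uniform subsampling, RBF kernel):

```python
# Solve the eigenproblem on a subsample, then extend the eigenvectors to all
# points using the cross-kernel block (Nystrom extension).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq)                         # full PSD kernel matrix

idx = rng.choice(len(X), size=40, replace=False)
W = K[np.ix_(idx, idx)]                       # small principal submatrix
lam, U = np.linalg.eigh(W)                    # cheap eigenproblem
lead = lam > 1e-10
U_ext = K[:, idx] @ U[:, lead] / lam[lead]    # extend to all 400 points
print(U_ext.shape)                            # approximate leading eigenvectors of K
```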

Error Bounds on the SCISSORS Approximation Method

Imran S. Haque, Vijay S. Pande
2011 Journal of Chemical Information and Modeling  
These reductions allow the use of generalization bounds on these techniques to show that the expected error in SCISSORS approximations of molecular similarity kernels is bounded in expected pairwise inner  ...  G is then decomposed into eigenvectors V and eigenvalues along the diagonal of a matrix D; the vector embedding for the basis molecules lies along the rows of matrix B in the following equation.  ...  The rank  ...  The proof of theorem 1 relies on a bound on the generalization error of kernel PCA projections due to Shawe-Taylor [10]. This theorem bounds the expected residual from projecting new data onto a sampled  ...
doi:10.1021/ci200251a pmid:21851122 pmcid:PMC3183166 fatcat:mppd4gctjvgwnb2bedvgqtcy3e
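
The decomposition step quoted in the snippet (G into eigenvectors V and eigenvalues D, with embeddings in the rows of B) admits a compact sketch; the reading B = V D^(1/2), so that G = B Bᵀ, is the standard construction, and the similarity values below are synthetic rather than molecular.

```python
# Embed items from a PSD Gram matrix G of pairwise similarities:
# eigendecompose G = V D V^T and take B = V D^(1/2), so that G = B B^T.
import numpy as np

rng = np.random.default_rng(0)
Y = rng.normal(size=(6, 3))
G = Y @ Y.T                          # synthetic PSD similarity matrix
lam, V = np.linalg.eigh(G)
lam = np.clip(lam, 0.0, None)        # clip tiny negatives from round-off
B = V @ np.diag(np.sqrt(lam))        # rows of B are the vector embeddings
print(np.allclose(G, B @ B.T))       # True: inner products reproduce G
```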

Sampling Techniques for Kernel Methods

Dimitris Achlioptas, Frank McSherry, Bernhard Schölkopf
2001 Neural Information Processing Systems  
We propose randomized techniques for speeding up Kernel Principal Component Analysis on three levels: sampling and quantization of the Gram matrix in training, randomized rounding in evaluating the kernel  ...  In all three cases, we give sharp bounds on the accuracy of the obtained approximations.  ...  BS would like to thank Santosh Venkatesh for detailed discussions on sampling kernel expansions.  ... 
dblp:conf/nips/AchlioptasMS01 fatcat:vl5vwgcfqvemrhxthw5bsjb6tm
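
One of the three levels mentioned, entrywise sampling of the Gram matrix, is easy to sketch; the keep-with-probability-p, rescale-by-1/p scheme below is a standard unbiased construction in this literature, not necessarily the paper's exact estimator.

```python
# Entrywise sampling of a Gram matrix: keep each entry with probability p,
# rescaled by 1/p so the sparse matrix is an unbiased estimate of K.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
K = X @ X.T                                   # linear kernel for simplicity
p = 0.3
mask = np.triu(rng.random(K.shape) < p)       # sample the upper triangle
mask = mask | mask.T                          # mirror to keep K_s symmetric
K_s = np.where(mask, K / p, 0.0)
print("top eigenvalue:", np.linalg.eigvalsh(K)[-1],
      "sampled estimate:", np.linalg.eigvalsh(K_s)[-1])
```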

Diffusion Maps - a Probabilistic Interpretation for Spectral Embedding and Clustering Algorithms [chapter]

Boaz Nadler, Stephane Lafon, Ronald Coifman, Ioannis G. Kevrekidis
2008 Lecture Notes in Computational Science and Engineering  
Given the pairwise adjacency matrix of all points in a dataset, we define a random walk on the graph of points and a diffusion distance between any two points.  ...  This identity shows that characteristic relaxation times and processes of the random walk on the graph are the key concept that governs the properties of these spectral clustering and spectral embedding  ...  The research of BN is supported by the Israel Science Foundation (grant 432/06) and by the Hana and Julius Rosen fund.  ... 
doi:10.1007/978-3-540-73750-6_10 fatcat:jtvcwiphdrbg3dpjkptevzuj2a
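
The construction in the snippet goes from pairwise affinities to a row-stochastic random-walk matrix to an embedding; a minimal sketch on my own toy data, with Gaussian affinities:

```python
# Diffusion-map sketch: kernel -> random-walk matrix -> leading nontrivial
# eigenvectors as coordinates (diffusion time t = 1).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
W = np.exp(-sq / 0.5)                 # pairwise affinities
d = W.sum(axis=1)
S = W / np.sqrt(np.outer(d, d))       # symmetric conjugate of M = D^-1 W
lam, V = np.linalg.eigh(S)
lam, V = lam[::-1], V[:, ::-1]        # descending; lam[0] = 1 is trivial
Psi = V / np.sqrt(d)[:, None]         # right eigenvectors of the walk matrix
embed = lam[1:3] * Psi[:, 1:3]        # 2-D diffusion-map coordinates
print(embed.shape)
```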

Convergence Details About k-DPP Monte-Carlo Sampling for Large Graphs

Diala Wehbe, Nicolas Wicker
2021 Sankhya B  
This yields a polynomial bound on the mixing time of the associated Markov chain under mild conditions on the eigenvalues of the Laplacian matrix when the number of edges grows.  ...  This paper aims at making explicit the mixing time found by Anari et al. (2016) for k-DPP Monte-Carlo sampling when it is applied on large graphs.  ...  To conclude, this paper makes the bound of Anari et al. (2016) more precise in the particular case where we sample k-DPP on graphs using as kernel the Moore-Penrose pseudo-inverse of the normalized Laplacian  ...
doi:10.1007/s13571-021-00258-x fatcat:vetgjpfqljhsroimoikzja2al4
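
The Markov chain whose mixing time is at issue is, in its simplest form, a single-swap Metropolis chain over size-k subsets weighted by det(L_S); a hedged toy sketch with a synthetic PSD kernel rather than the graph pseudo-inverse used in the paper:

```python
# Swap-based MCMC for (approximate) k-DPP sampling: propose exchanging one
# element of the current set for an outside element, accept with a Metropolis
# ratio of the determinantal weights det(L_S).
import numpy as np

rng = np.random.default_rng(0)
n, k = 12, 4
A = rng.normal(size=(n, n))
L = A @ A.T + 0.1 * np.eye(n)                # positive-definite kernel

def weight(S):
    return np.linalg.det(L[np.ix_(S, S)])    # unnormalized k-DPP probability

S = list(range(k))
for _ in range(2000):
    i = rng.integers(k)                      # position to swap out
    j = int(rng.choice([v for v in range(n) if v not in S]))
    T = S.copy(); T[i] = j                   # proposed swap
    if rng.random() < min(1.0, weight(T) / weight(S)):
        S = T
print(sorted(S))                             # an approximate k-DPP sample
```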

Diffusion Maps, Spectral Clustering and Eigenfunctions of Fokker-Planck operators [article]

Boaz Nadler, Stephane Lafon, Ronald R. Coifman, Ioannis G. Kevrekidis
2005 arXiv   pre-print
This paper presents a diffusion-based probabilistic interpretation of spectral clustering and dimensionality reduction algorithms that use the eigenvectors of the normalized graph Laplacian.  ...  dimensional reduction algorithms based on these first few eigenvectors.  ...  Acknowledgments The authors thank Mikhail Belkin and Partha Niyogi for interesting discussions. This work was partially supported by DARPA.  ...
arXiv:math/0506090v1 fatcat:77qyt3a5orebfkkrblrn3oqdxq
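
The objects being interpreted here, the first few eigenvectors of the normalized graph Laplacian, can be exercised on a toy two-cluster dataset (my own construction, not the paper's examples):

```python
# Spectral-clustering sketch: the second eigenvector of the normalized graph
# Laplacian separates two well-separated Gaussian blobs by sign.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
W = np.exp(-sq / 4.0)                             # pairwise affinities
d = W.sum(axis=1)
L = np.eye(len(X)) - W / np.sqrt(np.outer(d, d))  # normalized Laplacian
lam, V = np.linalg.eigh(L)
labels = (V[:, 1] > 0).astype(int)                # Fiedler-style sign cut
print(labels[:50].mean(), labels[50:].mean())     # ~0 vs ~1 (or vice versa)
```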

Fast Landmark Subspace Clustering [article]

Xu Wang, Gilad Lerman
2015 arXiv   pre-print
Furthermore, we bound the error between the original clustering scheme and its randomization.  ...  In this paper we define a general class of kernels that can be easily approximated by randomization.  ...  More importantly, it bounds the L2-error between the original and approximated eigenvectors of kernel matrices, which applies to FLS, Fourier random features and other landmark-based methods.  ...
arXiv:1510.08406v1 fatcat:myxsaqmulzclthfs7ya3njg7um
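
Fourier random features, cited in the snippet, are the easiest member of this kernel-randomization family to sketch; for the Gaussian kernel with unit bandwidth, the features below satisfy k(x, y) = E[z(x)·z(y)]:

```python
# Random Fourier features for the Gaussian kernel k(x,y) = exp(-||x-y||^2/2):
# sample frequencies from N(0, I), phases uniformly, and take cosine features.
import numpy as np

rng = np.random.default_rng(0)
d, D = 3, 2000                                 # input dim, number of features

def z(x, Wf, b):
    return np.sqrt(2.0 / D) * np.cos(Wf @ x + b)

Wf = rng.normal(size=(D, d))                   # frequencies for bandwidth 1
b = rng.uniform(0, 2 * np.pi, size=D)
x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-np.sum((x - y) ** 2) / 2.0)
approx = z(x, Wf, b) @ z(y, Wf, b)
print(exact, approx)                           # close for large D
```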

A Discrete Probabilistic Approach to Dense Flow Visualization [article]

Daniel Preuß, Tino Weinkauf, Jens Krüger
2020 arXiv   pre-print
These embeddings are scalar fields that give insight into the mixing processes of the flow on different scales.  ...  We showcase the utility of our method using different 2D and 3D flows.  ...  Analytically, the kernel of the Laplacian can have a dimension larger than one, corresponding to disconnected areas; the eigenvalue 0 would then have multiple eigenvectors.  ...
arXiv:2007.01629v1 fatcat:h2zvbxkdt5egrb7fozxvvl6vpe
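
The remark about the Laplacian's kernel is a standard fact that is easy to check numerically: the multiplicity of eigenvalue 0 of a graph Laplacian equals the number of connected components.

```python
# A disconnected graph gives the Laplacian a kernel of dimension > 1:
# count zero eigenvalues on two disjoint triangles (vertices 0-2 and 3-5).
import numpy as np

A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1
L = np.diag(A.sum(axis=1)) - A                 # combinatorial Laplacian
lam = np.linalg.eigvalsh(L)
print(np.sum(lam < 1e-10))                     # 2 zero eigenvalues = 2 components
```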

Stochastic processes and special functions: On the probabilistic origin of some positive kernels associated with classical orthogonal polynomials

R.D Cooper, M.R Hoare, Mizan Rahman
1977 Journal of Mathematical Analysis and Applications  
ACKNOWLEDGMENT The authors gratefully acknowledge the support of the Science Research Council, London, in the form of traveling research fellowships.  ...  One of us has recently published a number of new results going considerably beyond the simple pattern of analogies with the Erdelyi formulas, which in fact originate in probabilistic insights, and we shall  ...  The second set (of q cells) with their random contents is removed and joined to the third set (of r), and the contents of these q + r cells are again randomized according to Eq. (3.2) (Figs. 1b and c).  ...
doi:10.1016/0022-247x(77)90160-3 fatcat:yzazgu3zlbdwrlfkdgjs6otkxi

Randomized Clustered Nystrom for Large-Scale Kernel Machines [article]

Farhad Pourkamali-Anaraki, Stephen Becker
2016 arXiv   pre-print
The Nystrom method has been popular for generating the low-rank approximation of kernel matrices that arise in many machine learning problems.  ...  The proposed method performs K-means clustering on low-dimensional random projections of a data set and, thus, leads to significant savings for high-dimensional data sets.  ...  of r leading eigenvectors and eigenvalues of the kernel matrix K ∈ R^{n×n}: U_r^{(2)} ∈ R^{n×r}, Λ_r^{(2)} ∈ R^{r×r}. 1: Form two matrices C and W: C_{ij} = κ(x_i, z_j), W_{ij} = κ(z_i, z_j). 2: Compute the eigenvalue  ...
arXiv:1612.06470v1 fatcat:7qanrgmcijdaxas6bmizyhecia
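
A hedged sketch of the two-step recipe in the abstract, under assumptions of my own (a Gaussian random projection, scikit-learn's KMeans, landmarks taken as per-cluster means of the original points, an RBF kernel; the paper's exact landmark construction may differ):

```python
# Cluster a random low-dimensional projection of the data, then run Nystrom
# with landmarks derived from the resulting clusters.
import numpy as np
from sklearn.cluster import KMeans

def rbf(A, B, gamma=0.5):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
n, d, p, m = 600, 50, 5, 25            # samples, dim, projected dim, landmarks
X = rng.normal(size=(n, d))
R = rng.normal(size=(d, p)) / np.sqrt(p)
km = KMeans(n_clusters=m, n_init=5, random_state=0).fit(X @ R)
# Landmarks: mean of the original high-dimensional points in each cluster.
Z = np.vstack([X[km.labels_ == c].mean(axis=0) for c in range(m)])
C, W = rbf(X, Z), rbf(Z, Z)
K_nys = C @ np.linalg.pinv(W) @ C.T    # rank-m approximation of the kernel
print(K_nys.shape)
```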

On confidence intervals for precision matrices and the eigendecomposition of covariance matrices [article]

Teodora Popordanoska, Aleksei Tiulpin, Wacha Bounliphone, Matthew B. Blaschko
2022 arXiv   pre-print
From this result, we obtain bounds on the eigenvectors using Weyl's theorem and the eigenvalue-eigenvector identity, and we derive confidence intervals on the entries of the precision matrix using matrix  ...  This paper tackles the challenge of computing confidence bounds on the individual entries of eigenvectors of a covariance matrix of fixed dimension.  ...  To the best of our knowledge, this is the first work that explores the use of the eigenvalue-eigenvector identity in the context of estimating bounds on the eigendecomposition of covariance matrices and  ...
arXiv:2208.11977v1 fatcat:rlzncb4l7fhsrfts2bw7j2gt6y
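
The eigenvalue-eigenvector identity the abstract invokes states that for a symmetric A with eigenpairs (λ_i, v_i), |v_{i,j}|² ∏_{k≠i}(λ_i − λ_k) = ∏_k (λ_i − μ_k(M_j)), where M_j is A with its j-th row and column deleted; a quick numerical check:

```python
# Numerical check of the eigenvalue-eigenvector identity for a symmetric
# matrix with (generically) simple eigenvalues.
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(5, 5))
A = (B + B.T) / 2                       # symmetric test matrix
lam, V = np.linalg.eigh(A)              # columns of V are eigenvectors
i, j = 2, 0
Mj = np.delete(np.delete(A, j, axis=0), j, axis=1)
mu = np.linalg.eigvalsh(Mj)             # eigenvalues of the minor
lhs = V[j, i] ** 2 * np.prod([lam[i] - lam[k] for k in range(5) if k != i])
rhs = np.prod(lam[i] - mu)
print(lhs, rhs)                         # agree up to round-off
```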
Showing results 1 — 15 out of 4,229 results