Wasserstein t-SNE [article] (arXiv pre-print, 2022)
We use t-SNE to construct 2D embeddings of the units, based on the matrix of pairwise Wasserstein distances between them. ...
The distance matrix can be efficiently computed by approximating each unit with a Gaussian distribution, but we also provide a scalable method to compute exact Wasserstein distances. ...
Comparison of the Wasserstein t-SNE embeddings based on the Gaussian approximation and based on the exact Wasserstein distances. (A) The exact Wasserstein t-SNE embedding separates the classes. ...
arXiv:2205.07531v2
fatcat:r5z7wgaknzfk7ivni3tgepz57u
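The Gaussian approximation in the first snippet relies on the closed-form 2-Wasserstein distance between two Gaussians. A minimal sketch of that formula (the function name `gaussian_w2` is ours, not the paper's; NumPy/SciPy assumed):

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2(mu1, cov1, mu2, cov2):
    """Closed-form 2-Wasserstein distance between N(mu1, cov1) and N(mu2, cov2):
    W2^2 = ||mu1 - mu2||^2 + tr(cov1 + cov2 - 2 (cov2^{1/2} cov1 cov2^{1/2})^{1/2})."""
    s2 = sqrtm(cov2)
    cross = sqrtm(s2 @ cov1 @ s2).real  # tiny imaginary parts can appear numerically
    w2_sq = float(np.sum((mu1 - mu2) ** 2) + np.trace(cov1 + cov2 - 2.0 * cross))
    return np.sqrt(max(w2_sq, 0.0))
```

A matrix of these pairwise distances can then be passed to an embedding routine such as scikit-learn's `TSNE(metric="precomputed", init="random")`; the exact-Wasserstein path the abstract also mentions requires a full optimal-transport solver instead.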
Iterative Mixture Component Pruning Algorithm for Gaussian Mixture PHD Filter (Mathematical Problems in Engineering, 2014)
As far as the increasing number of mixture components in the Gaussian mixture PHD filter is concerned, an iterative mixture component pruning algorithm is proposed. ...
The pruning algorithm is based on maximizing the posterior probability density of the mixture weights. ...
This paper concentrates on Gaussian mixture reduction for the Gaussian mixture implementation of the PHD filter. ...
doi:10.1155/2014/653259
fatcat:xyzxcgpspvberciz5okizoiawa
Normalized Wasserstein Distance for Mixture Distributions with Applications in Adversarial Learning and Domain Adaptation [article] (arXiv pre-print, 2019)
This often leads to undesired results in distance-based learning methods for mixture distributions. In this paper, we resolve this issue by introducing the Normalized Wasserstein measure. ...
For mixture distributions, established distance measures such as the Wasserstein distance do not take into account imbalanced mixture proportions. ...
The distance function between distributions can be adversarial distances [6, 21], the Wasserstein distance [20], or MMD-based distances [14, 15]. ...
arXiv:1902.00415v2
fatcat:4vy5aeme6vh2bhcrpn6p75prya
On Excess Mass Behavior in Gaussian Mixture Models with Orlicz-Wasserstein Distances [article] (arXiv pre-print, 2023)
In this work, we (re)introduce and investigate a metric, named Orlicz-Wasserstein distance, in the study of the Bayesian contraction behavior for the parameters. ...
Dirichlet Process mixture models (DPMM) in combination with Gaussian kernels have been an important modeling tool for numerous data domains arising from biological, physical, and social sciences. ...
Corollary 3.4 reveals the power of Orlicz-Wasserstein distances for Gaussian mixture models. ...
arXiv:2301.11496v1
fatcat:ifsiotbrkjdlrpezh3yowljofu
Solving General Elliptical Mixture Models through an Approximate Wasserstein Manifold (Proceedings of the AAAI Conference on Artificial Intelligence, 2020)
We address the estimation problem for general finite mixture models, with a particular focus on the elliptical mixture models (EMMs). ...
Due to a probability constraint, solving this problem is extremely cumbersome and unstable, especially under the Wasserstein distance. ...
Then, an approximate Wasserstein distance for EMMs has been proposed, which unlike the existing Wasserstein distances allows the corresponding metrics to be explicitly calculated. ...
doi:10.1609/aaai.v34i04.5897
fatcat:l4mbim6t5rhdre62mxbkfula4i
Optimal Transport for Gaussian Mixture Models (IEEE Access, 2019)
Specifically, we treat Gaussian mixture models as a submanifold of probability densities equipped with the Wasserstein metric. ...
We introduce an optimal mass transport framework on the space of Gaussian mixture models. These models are widely used in statistical inference. ...
mixture distribution and the Wasserstein distance W2 is replaced by its relaxed version (11). ...
doi:10.1109/access.2018.2889838
pmid:31768305
pmcid:PMC6876701
fatcat:meon6ujywzbs3js7i6ktyt7msy
Optimal transport for Gaussian mixture models [article] (arXiv pre-print, 2018)
Our method leads to a natural way to compare, interpolate and average Gaussian mixture models. ...
We present an optimal mass transport framework on the space of Gaussian mixture models, which are widely used in statistical inference. ...
mixture distribution and the Wasserstein distance W2 is replaced by its relaxed version (10). ...
arXiv:1710.07876v2
fatcat:dc5edvfv3zb5jl2nb5z5erg2py
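Both versions of this paper replace the W2 distance between mixtures with a relaxed version that couples only the component weights, using the closed-form Gaussian W2 as the ground cost. A sketch of that discrete relaxation under our reading (all names are ours, and a generic LP solver stands in for a dedicated optimal-transport routine):

```python
import numpy as np
from scipy.linalg import sqrtm
from scipy.optimize import linprog

def gaussian_w2_sq(mu1, cov1, mu2, cov2):
    """Squared closed-form W2 distance between two Gaussians."""
    s2 = sqrtm(cov2)
    cross = sqrtm(s2 @ cov1 @ s2).real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(cov1 + cov2 - 2.0 * cross))

def gmm_relaxed_w2(weights1, gaussians1, weights2, gaussians2):
    """Relaxed W2 between two GMMs: a discrete optimal-transport problem on
    the component weights, with Gaussian-to-Gaussian W2^2 as ground cost.
    gaussians* are lists of (mean, covariance) pairs."""
    m, n = len(weights1), len(weights2)
    cost = np.array([[gaussian_w2_sq(*g1, *g2) for g2 in gaussians2]
                     for g1 in gaussians1])
    # Equality constraints: transport-plan rows sum to weights1, columns to weights2.
    A_eq = []
    for i in range(m):
        row = np.zeros((m, n))
        row[i, :] = 1.0
        A_eq.append(row.ravel())
    for j in range(n):
        col = np.zeros((m, n))
        col[:, j] = 1.0
        A_eq.append(col.ravel())
    res = linprog(cost.ravel(), A_eq=np.array(A_eq),
                  b_eq=np.concatenate([weights1, weights2]),
                  bounds=(0.0, None), method="highs")
    return float(np.sqrt(max(res.fun, 0.0)))
```

The LP has one redundant constraint (both marginals sum to one), which the HiGHS solver handles without trouble.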
Solving general elliptical mixture models through an approximate Wasserstein manifold [article] (arXiv pre-print, 2020)
We address the estimation problem for general finite mixture models, with a particular focus on the elliptical mixture models (EMMs). ...
Due to a probability constraint, solving this problem is extremely cumbersome and unstable, especially under the Wasserstein distance. ...
On the other hand, gradient-based numerical algorithms typically rest upon additional techniques that only work in particular situations (e.g., gradient reduction (Redner and Walker 1984), positive definite ...
arXiv:1906.03700v3
fatcat:qimlwmuyuvfijghdrjk2zvd5gy
Learning Generative Models across Incomparable Spaces [article] (arXiv pre-print, 2019)
A key component of our model is the Gromov-Wasserstein distance, a notion of discrepancy that compares distributions relationally rather than absolutely. ...
The GW GAN learns mixtures of Gaussians with differing numbers of modes and arrangements. ...
The training task consists of translating between a mixture of Gaussian distributions in two and three dimensions. ...
arXiv:1905.05461v2
fatcat:55bzhkdvm5dincssprlcwecpyq
Stochastic Gradient Hamiltonian Monte Carlo Methods with Recursive Variance Reduction (Neural Information Processing Systems, 2019)
We provide a convergence analysis of SRVR-HMC for sampling from a class of non-log-concave distributions and show that SRVR-HMC converges faster than all existing HMC-type algorithms based on underdamped ...
(TV) distance [21, 26] and 2-Wasserstein distance [22, 20]. ...
We first demonstrate the performance of SRVR-HMC for fitting a Gaussian mixture model on synthetic data . ...
dblp:conf/nips/Zou0G19
fatcat:6faa4no5cjf7vel35awqpxk6oq
The Wasserstein-Fourier Distance for Stationary Time Series [article] (arXiv pre-print, 2020)
The WF distance operates by calculating the Wasserstein distance between the (normalised) power spectral densities (NPSD) of time series. ...
We propose the Wasserstein-Fourier (WF) distance to measure the (dis)similarity between time series by quantifying the displacement of their energy across frequencies. ...
Motivation for spectrum-based classification: consider two classes of synthetic NPSDs. • Left-Asymmetric Gaussian Mixture (L-AGM): a sum of two Gaussians with random means and variances ...
arXiv:1912.05509v2
fatcat:umicrj4x4ndklncpxskcuato5u
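The WF construction as described is directly implementable: estimate each power spectral density, normalize it into a probability distribution over frequency, and take a Wasserstein distance between the two. One caveat in the sketch below: the paper uses the 2-Wasserstein distance, while SciPy's built-in `wasserstein_distance` computes W1, so this is an illustrative variant rather than the paper's exact metric (the function name is ours):

```python
import numpy as np
from scipy.signal import periodogram
from scipy.stats import wasserstein_distance

def wf_distance(x, y, fs=1.0):
    """Wasserstein-Fourier-style distance between two time series:
    the W1 distance between their normalized power spectral densities."""
    fx, px = periodogram(x, fs=fs)
    fy, py = periodogram(y, fs=fs)
    px = px / px.sum()  # normalize PSDs into probability distributions
    py = py / py.sum()
    return wasserstein_distance(fx, fy, u_weights=px, v_weights=py)
```

Two sinusoids at the same frequency yield distance zero, while shifting one tone moves spectral mass along the frequency axis and the distance grows roughly with the shift.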
Multivariate goodness-of-fit tests based on Wasserstein distance [article] (arXiv pre-print, 2021)
Goodness-of-fit tests based on the empirical Wasserstein distance are proposed for simple and composite null hypotheses involving general multivariate distributions. ...
The lack of asymptotic distribution theory for the empirical Wasserstein distance means that the validity of the parametric bootstrap under the null hypothesis remains a conjecture. ...
Note that in (e), P is not Gaussian even when ρ = 0. Gaussian mixture P_0 = 0.5 N_2(0, I_2) + 0.5 N_2((3, 0)^T, I_2). ...
arXiv:2003.06684v3
fatcat:etry3m46p5hx7hivn755ntka7e
Multivariate goodness-of-fit tests based on Wasserstein distance (Electronic Journal of Statistics, 2021)
Goodness-of-fit tests based on the empirical Wasserstein distance are proposed for simple and composite null hypotheses involving general multivariate distributions. ...
For group families, the procedure is to be implemented after preliminary reduction of the data via invariance. ...
Note that in (e), P is not Gaussian even when ρ = 0. Gaussian mixture P_0 = 0.5 N_2(0, I_2) + 0.5 N_2((3, 0)^T, I_2). ...
doi:10.1214/21-ejs1816
fatcat:gu4q6kvk2vable6ohl6uurgsgm
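The parametric-bootstrap procedure both records describe can be illustrated in the simplest setting: a univariate Gaussian null with invariance reduction by standardization. The papers treat general multivariate families, and everything below, including the quantile grid and the choice of W1 as statistic, is a toy choice of ours:

```python
import numpy as np
from scipy.stats import norm, wasserstein_distance

def gof_wasserstein_pvalue(x, n_boot=200, seed=0):
    """Goodness-of-fit p-value for a univariate Gaussian null: the test
    statistic is the empirical W1 distance between the standardized sample
    and a fine quantile grid of N(0, 1); calibration is by parametric
    bootstrap, resampling from the null (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    grid = norm.ppf((np.arange(1, 2001) - 0.5) / 2000.0)

    def stat(sample):
        z = (sample - sample.mean()) / sample.std(ddof=1)  # invariance reduction
        return wasserstein_distance(z, grid)

    t_obs = stat(x)
    t_boot = np.array([stat(rng.normal(size=n)) for _ in range(n_boot)])
    return float(np.mean(t_boot >= t_obs))
```

As the second snippet notes, the validity of the parametric bootstrap for the empirical Wasserstein distance remains a conjecture; the sketch follows the procedure but inherits that caveat.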
Variational Wasserstein Barycenters with c-Cyclical Monotonicity [article] (arXiv pre-print, 2022)
To this end, we develop a novel continuous approximation method for the Wasserstein barycenters problem given sample access to the input distributions. ...
Wasserstein barycenter, built on the theory of optimal transport, provides a powerful framework to aggregate probability distributions, and it has increasingly attracted great attention within the machine ...
In the first example, we set the variational distribution ν to be a Gaussian mixture with 30 components. In the second example, we set ν to be a Gaussian mixture with 20 components. ...
arXiv:2110.11707v2
fatcat:q4duvu5c2nfvxlqb7ovphthhwm
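A useful sanity check for barycenter code is the one-dimensional Gaussian case, where the W2 barycenter of N(μ_i, σ_i²) with weights λ_i is available in closed form as N(Σ λ_i μ_i, (Σ λ_i σ_i)²). This degenerate special case, not the paper's variational method, can be written as:

```python
import numpy as np

def gaussian_barycenter_1d(mus, sigmas, lams):
    """Closed-form W2 barycenter of 1-D Gaussians: the barycenter mean is the
    weighted mean of the means, and its std is the weighted mean of the stds."""
    mus, sigmas, lams = map(np.asarray, (mus, sigmas, lams))
    assert np.isclose(lams.sum(), 1.0), "barycenter weights must sum to 1"
    return float(lams @ mus), float(lams @ sigmas)
```

For general (e.g. Gaussian-mixture) inputs, no such closed form exists, which is what motivates variational approximations like the one the paper proposes.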
Gaussian Mixture Reduction with Composite Transportation Divergence [article] (arXiv pre-print, 2023)
To overcome the difficulty, the Gaussian mixture reduction (GMR), which approximates a high order Gaussian mixture by one with a lower order, can be used. ...
These applications often utilize Gaussian mixtures as initial approximations that are updated recursively. ...
We give an overview of the Gaussian barycenter under squared Wasserstein distance and KL divergence. ...
arXiv:2002.08410v4
fatcat:tu6lmbz44ne5hmcjonrd3bilry
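A basic building block in GMR pipelines of this kind is the moment-preserving merge of components, which is exactly the Gaussian barycenter under KL divergence mentioned in the last snippet: the single Gaussian matching the sub-mixture's overall mean and covariance. A sketch (interface ours):

```python
import numpy as np

def merge_components(weights, mus, covs):
    """Moment-preserving merge of Gaussian mixture components: returns the
    total weight, and the mean and covariance of the single Gaussian that
    matches the first two moments of the sub-mixture (its KL barycenter)."""
    weights = np.asarray(weights, dtype=float)
    w = weights.sum()
    lam = weights / w  # renormalized mixing proportions
    mu = sum(l * m for l, m in zip(lam, mus))
    # covariance = within-component covariance + spread of the component means
    cov = sum(l * (c + np.outer(m - mu, m - mu))
              for l, c, m in zip(lam, covs, mus))
    return w, mu, cov
```

In reduction loops such as PHD-filter pruning, pairs (or clusters) of components selected by a divergence criterion are repeatedly merged this way until the target order is reached.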
Showing results 1 — 15 out of 1,507 results