Reducing Computational Complexity of Tensor Contractions via Tensor-Train Networks
[article]
2021
arXiv
pre-print
In this work, we resort to diagrammatic tensor network manipulation to calculate such products in an efficient and computationally tractable manner, by making use of Tensor Train decomposition (TTD). ...
These ever-expanding trends have highlighted the necessity for more versatile analysis tools that offer greater opportunities for algorithmic developments and computationally faster operations than the ...
Stemming from a graphical intuition, the TTCP is more computationally efficient than the direct definition of contraction. ...
arXiv:2109.00626v2
fatcat:n6e5ftk6k5aphgvjfjmkimxcem
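As background for this result, the complexity advantage that Tensor Train decomposition offers for contractions can be illustrated with a small sketch. This is not code from the paper; it is a minimal numpy example with illustrative sizes (`n`, rank `r`) showing that an inner product computed by sweeping over TT cores matches the direct contraction without ever forming the full tensor.

```python
import numpy as np

# TT cores G_k of shape (r_{k-1}, n_k, r_k) for a 3rd-order tensor.
# Sizes are toy values chosen for illustration only.
rng = np.random.default_rng(0)
n, r = 4, 2
G1 = rng.standard_normal((1, n, r))
G2 = rng.standard_normal((r, n, r))
G3 = rng.standard_normal((r, n, 1))

# Reconstruct the full tensor T (only for verification; TT methods avoid this,
# since the full tensor has n^3 entries versus O(n * r^2) for the cores).
T = np.einsum('aib,bjc,ckd->ijk', G1, G2, G3)

# Inner product <T, T> computed directly on the full tensor ...
direct = np.tensordot(T, T, axes=3)

# ... and in TT format, sweeping core by core and carrying only an
# (r x r) environment matrix V instead of the full tensor.
V = np.einsum('aib,aic->bc', G1, G1)
V = np.einsum('bc,bjd,cje->de', V, G2, G2)
tt_inner = np.einsum('de,dkf,ekf->', V, G3, G3)

assert np.isclose(direct, tt_inner)
```

The sweep keeps intermediate objects of size at most r x r, which is the source of the tractability that TT-based contraction schemes exploit.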
A dual framework for low-rank tensor completion
[article]
2018
arXiv
pre-print
Furthermore, we exploit the versatile Riemannian optimization framework to propose a computationally efficient trust-region algorithm. ...
We develop a dual framework for solving the low-rank tensor completion problem. ...
A key requirement in Lemma 1 is solving (17) for Ẑ in a computationally efficient manner for a given u = (U_1, ..., U_K). ...
arXiv:1712.01193v4
fatcat:xlvuq6mp3rgabgd7bmmug6wkqm
Tensor Completion via Integer Optimization
[article]
2024
arXiv
pre-print
This paper develops a novel tensor completion algorithm that resolves this tension by achieving both provable convergence (in numerical tolerance) in a linear number of oracle steps and the information-theoretic ...
Our approach formulates tensor completion as a convex optimization problem constrained using a gauge-based tensor norm, which is defined in a way that allows the use of integer linear optimization to solve ...
theoretically and what is attained by their computationally efficient method. ...
arXiv:2402.05141v2
fatcat:6lmk3uyryrfyjmcllsjxjlczom
Fast smooth rank approximation for tensor completion
2014
2014 48th Annual Conference on Information Sciences and Systems (CISS)
... completion algorithms. ...
We compare the performance of our algorithm to state-of-the-art tensor completion algorithms using different color images and video sequences. ...
In [1] the authors proposed an extension of the work in [15] into the matrix completion problem and proposed computationally efficient algorithms for matrix completion. ...
doi:10.1109/ciss.2014.6814174
dblp:conf/ciss/Al-QizwiniR14
fatcat:3lu7ijbyt5cclo2pvsezqy5q5u
Fast Multivariate Spatio-temporal Analysis via Low Rank Tensor Learning
2014
Neural Information Processing Systems
We propose a unified low rank tensor learning framework for multivariate spatio-temporal analysis, which can conveniently incorporate different properties in spatio-temporal data, such as spatial clustering ...
We demonstrate how the general framework can be applied to cokriging and forecasting tasks, and develop an efficient greedy algorithm to solve the resulting optimization problem with convergence guarantee ...
We incorporate these principles in a concise and computationally efficient low-rank tensor learning framework. To achieve global consistency, we constrain the tensor W to be low rank. ...
dblp:conf/nips/BahadoriY014
fatcat:j5uldge4tvaeze3f2e3jkymnqq
QXTools: A Julia framework for distributed quantum circuit simulation
2022
Journal of Open Source Software
QXTools is a framework for simulating quantum circuits using tensor network methods. ...
Weak simulation is the primary use case: given a quantum circuit and an input state, QXTools efficiently calculates the probability amplitude of a given output configuration or set of configurations ...
To find efficient contraction orders for tensor networks, an algorithm called FlowCutter (Hamann & Strasser, 2018) is used to construct tree decompositions with optimal treewidth of the network's line ...
doi:10.21105/joss.03711
fatcat:k76zlx6zqrcgxafbxkrpctwtzi
Imputation of streaming low-rank tensor data
2014
2014 IEEE 8th Sensor Array and Multichannel Signal Processing Workshop (SAM)
The present paper introduces a novel online (adaptive) algorithm to decompose low-rank tensors with missing entries, and perform imputation as a byproduct. ...
Leveraging stochastic gradient descent iterations, a scalable, real-time algorithm is developed and its convergence is established under simplifying technical assumptions. ...
Online alternating minimization algorithm: Towards deriving a real-time, computationally efficient, and recursive solver of (P2), an alternating minimization technique is adopted in which iterations coincide ...
doi:10.1109/sam.2014.6882435
dblp:conf/ieeesam/MardaniMG14
fatcat:7r4wprdnczcffojfv2sdnulifu
Matrix-Product Operators and States: NP-Hardness and Undecidability
2014
Physical Review Letters
It is a well-known open problem to find an efficient algorithm that decides whether a given matrix-product operator actually represents a physical state that in particular has no negative eigenvalues. ...
Furthermore, we discuss numerous connections between tensor network methods and (seemingly) different concepts treated before in the literature, such as hidden Markov models and tensor trains. ...
... efficient contractions of two-dimensional planar tensor networks, even though this task has been identified to be #P-complete [16]. ...
doi:10.1103/physrevlett.113.160503
pmid:25361243
fatcat:yvzpe7z33vgb3gbq5qkl5plpuq
Tensor completion via optimization on the product of matrix manifolds
2015
2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)
We present a method for tensor completion using optimization on low-rank matrix manifolds. ...
The tensor completion problem then reduces to finding the best approximation to the sampled data on this product manifold. ...
This leads to computationally efficient algorithms as well as sufficient conditions for provable recovery guarantees under given sampling constraints. ...
doi:10.1109/camsap.2015.7383765
dblp:conf/camsap/GirsonA15
fatcat:at5cfxd7e5edxk4pnwsb3bnc24
Fast approximations of the rotational diffusion tensor and their application to structural assembly of molecular complexes
2011
Proteins: Structure, Function, and Bioinformatics
ELMDOCK and ELMPATIDOCK use two novel approximations of the molecular rotational diffusion tensor that allow computationally efficient docking. ...
Additionally, we describe a method for integrating the new approximation methods into the existing docking approaches that use the rotational diffusion tensor as a restraint. ...
... of the ELMDOCK minimization algorithm. ...
doi:10.1002/prot.23053
pmid:21604302
pmcid:PMC3115445
fatcat:qaow47h5lzg45bicnmbf2c6xwa
Low-rank tensor approximation for Chebyshev interpolation in parametric option pricing
[article]
2019
arXiv
pre-print
The core of our method is to express the tensorized interpolation in tensor train (TT) format and to develop an efficient way, based on tensor completion, to approximate the interpolation coefficients. ...
Among the growing literature addressing this problem, Gass et al. [14] propose a complexity reduction technique for parametric option pricing based on Chebyshev interpolation. ...
In particular, we first construct the matrix M via Algorithm 5 with 10^5 simulations and subsequently compute P using Algorithm 6. ...
arXiv:1902.04367v1
fatcat:d5eb4ridkzerrd2aimdvhy6a44
Tensor Decomposition-Based Training Method for High-Order Hidden Markov Models
2021
Conference on Theory and Practice of Information Technologies
We reformulate HMMs using tensor decomposition to efficiently build higher-order models with the use of stochastic gradient descent. ...
Based on this, we propose a new modified version of a training algorithm for HMMs, especially suitable for high-order HMMs. Further, we show its capabilities and convergence on synthetic data. ...
We have experimentally verified that our method works and is able to train an HMM model on a synthetic dataset accurately. ...
dblp:conf/itat/CibulaM21
fatcat:oziabdv6ara2josyl4q5xzn3yi
Optimal Low-Rank Tensor Recovery from Separable Measurements: Four Contractions Suffice
[article]
2015
arXiv
pre-print
We present a computationally efficient algorithm, with rigorous and order-optimal sample complexity results (up to logarithmic factors), for tensor recovery. ...
... tensor, and (b) the completion problem, where measurements constitute revelation of a random set of entries. ...
Our algorithm, known as T-ReCs, built on the classical Leurgans' algorithm for tensor decomposition, was shown to be computationally efficient and to enjoy almost optimal sample complexity guarantees in ...
arXiv:1505.04085v1
fatcat:pwiqpp7gpje35ojznotes6zvba
Efficient Tensor Completion for Color Image and Video Recovery: Low-Rank Tensor Train
2017
IEEE Transactions on Image Processing
Accordingly, new optimization formulations for tensor completion are proposed as well as two new algorithms for their solution. ...
This paper proposes a novel approach to tensor completion, which recovers missing entries of data represented by tensors. ...
The latter is more computationally efficient because it does not require the SVD. ...
doi:10.1109/tip.2017.2672439
pmid:28237929
fatcat:7sjzrhuftbb37g2hj74g5rqeam
Simultaneous Visual Data Completion and Denoising Based on Tensor Rank and Total Variation Minimization and Its Primal-Dual Splitting Algorithm
2017
2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Furthermore, we developed its solution algorithm based on a primal-dual splitting method, which is computationally efficient as compared to tensor decomposition based non-convex optimization. ...
In this paper, we propose a new tensor completion and denoising model including tensor total variation and tensor nuclear norm minimization with a range of values and noise inequalities. ...
... tensor completion in a noisy scenario. ...
doi:10.1109/cvpr.2017.409
dblp:conf/cvpr/YokotaH17
fatcat:vqfr3mr7unditnpd7chjd2lzeu
Showing results 1 — 15 out of 40,925 results