A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; you can also visit the original URL. The file type is application/pdf.
Power up! Robust Graph Convolutional Network via Graph Powering
2021
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21)
By challenging the classical graph Laplacian, we propose a new convolution operator that is provably robust in the spectral domain and is incorporated in the GCN architecture to improve expressivity and ...
By extending the original graph to a sequence of graphs, we also propose a robust training paradigm that encourages transferability across graphs that span a range of spatial and spectral characteristics ...
For instance, in semi-supervised learning, D consists of features and labels on a small set of nodes, and is the cross-entropy loss over all labeled examples. ...
doi:10.1609/aaai.v35i9.16976
fatcat:mkvoey7p3nefxmkcwcl3iddfym
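The snippet above builds its robust convolution operator on graph powering. As an illustrative sketch only (standard graph theory, not the paper's code; the function and variable names are my own), the r-th power of a graph connects every pair of distinct nodes whose shortest-path distance is at most r:

```python
from collections import deque

def graph_power(adj_list, r):
    """Return the r-th power of a graph given as an adjacency list:
    nodes u, v become adjacent iff their shortest-path distance in the
    original graph is between 1 and r (computed by BFS from each node)."""
    n = len(adj_list)
    powered = []
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            if dist[u] == r:  # do not expand beyond radius r
                continue
            for v in adj_list[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        powered.append({v for v, d in dist.items() if 1 <= d <= r})
    return powered

# path graph 0-1-2-3: its square additionally connects (0,2) and (1,3)
P = [[1], [0, 2], [1, 3], [2]]
print(graph_power(P, 2))  # → [{1, 2}, {0, 2, 3}, {0, 1, 3}, {1, 2}]
```

A BFS per node keeps the sketch O(n·(n+m)); for large graphs one would use sparse matrix powers instead.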
Graph Neural Networks: Methods, Applications, and Opportunities
[article]
2021
arXiv
pre-print
This article provides a comprehensive survey of graph neural networks (GNNs) in each learning setting: supervised, unsupervised, semi-supervised, and self-supervised learning. ...
Taxonomy of each graph based learning setting is provided with logical divisions of methods falling in the given learning setting. ...
Graph-Based Semi-Supervised Learning: Semi-supervised learning has been around for many years. ...
arXiv:2108.10733v2
fatcat:j3rfmkiwenebvmfyboasjmx4nu
Network Unfolding Map by Vertex-Edge Dynamics Modeling
2018
IEEE Transactions on Neural Networks and Learning Systems
In this paper, we develop a computational technique using distributed processing elements in a complex network, which are called particles, to solve semi-supervised learning problems. ...
The emergence of collective dynamics in neural networks is a mechanism of the animal and human brain for information processing. ...
That way, we may classify classical graph-based semi-supervised learning techniques as objective-based design and the particle competition technique as behavior-based design. ...
doi:10.1109/tnnls.2016.2626341
pmid:27913359
fatcat:3ivzwmewofcq5chh6dqcrzmfbq
Semi-supervised learning for scalable and robust visual search
2011
ACM SIGMultimedia Records
Specifically, we investigate two classes of approaches: graph-based semi-supervised learning and hashing techniques. ...
many machine learning algorithms, including graph-based semi-supervised learning and clustering. ...
Bivariate Graph Transduction: We developed a novel bivariate regularization framework for graph-based semi-supervised learning. ...
doi:10.1145/2069210.2069213
fatcat:hblb5ncrprcrlgi6ugph6naucy
Regularized Laplacian Estimation and Fast Eigenvector Approximation
[article]
2011
arXiv
pre-print
regularized Semi-Definite Programs (SDPs). ...
Our framework will imply that the solutions to the Mahoney-Orecchia regularized SDP can be interpreted as regularized estimates of the pseudoinverse of the graph Laplacian. ...
This first nontrivial vector is, of course, of widespread interest in applications due to its usefulness for graph partitioning, image segmentation, data clustering, semi-supervised learning, etc. ...
arXiv:1110.1757v2
fatcat:2lwmtycg2fgcpeyntpyr6rrpkq
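The "first nontrivial vector" in the snippet above is the Fiedler vector of the graph Laplacian. A minimal pure-Python sketch (my own illustration, not from the paper) approximates it by power iteration on a shifted Laplacian, projecting out the trivial all-ones eigenvector each step, and then partitions nodes by sign:

```python
def fiedler_vector(adj, iters=1000):
    """Approximate the eigenvector of the graph Laplacian L = D - A for
    its second-smallest eigenvalue, via power iteration on the shifted
    matrix M = cI - L with the all-ones direction projected out."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    c = 2.0 * max(deg)  # L's spectrum lies in [0, 2*max_deg], so M is PSD
    v = [(-1.0) ** i for i in range(n)]  # any start not parallel to ones
    for _ in range(iters):
        mean = sum(v) / n
        v = [x - mean for x in v]  # remove the trivial eigenvector of L
        # w = (cI - L) v = c v - D v + A v
        w = [c * v[i] - deg[i] * v[i] + sum(adj[i][j] * v[j] for j in range(n))
             for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# two triangles joined by one edge; the Fiedler vector's sign pattern
# recovers the two clusters (orientation of the signs is arbitrary)
A = [[0, 1, 1, 0, 0, 0],
     [1, 0, 1, 0, 0, 0],
     [1, 1, 0, 1, 0, 0],
     [0, 0, 1, 0, 1, 1],
     [0, 0, 0, 1, 0, 1],
     [0, 0, 0, 1, 1, 0]]
signs = [x > 0 for x in fiedler_vector(A)]
print(signs)
```

The shift c only guarantees that all eigenvalues of M stay nonnegative; any c above the largest Laplacian eigenvalue would do.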
Optimal Sampling Density for Nonparametric Regression
[article]
2021
arXiv
pre-print
We adopt the mean integrated squared error (MISE) as a generalization criterion, and use the asymptotic behavior of the MISE as well as the locally optimal bandwidths (LOB) - the bandwidth function that minimizes MISE in the asymptotic limit. ...
Panknin was funded by the BMBF project ALICE III, Autonomous Learning in Complex Environments (01IS18049B). S. Nakajima and K.-R. ...
arXiv:2105.11990v2
fatcat:66bwm7itbrgdrbczsmh5ojtgju
Adaptive Diffusions for Scalable Learning over Graphs
[article]
2018
arXiv
pre-print
Furthermore, a robust version of the classifier facilitates learning even in noisy environments. ...
The novel learning approach leverages the notion of "landing probabilities" of class-specific random walks, which can be computed efficiently, thereby ensuring scalability to large graphs. ...
Graph-based semi-supervised learning (SSL) methods tackle this task building on the premise that the true labels are distributed "smoothly" with respect to the underlying network, which then motivates ...
arXiv:1804.02081v2
fatcat:gqy6jnnzwnge3n6srko26fgqoq
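The "landing probabilities" idea in the snippet above can be sketched in a few lines of plain Python. This is an illustrative toy, not the paper's algorithm: the graph, the walk length K, and the uniform per-step weights theta are my own choices (the paper adapts such weights per class).

```python
def landing_probabilities(adj, seeds, K):
    """For each step k <= K, the distribution of a simple random walk
    started uniformly from the seed nodes of one class."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    p = [1.0 / len(seeds) if i in seeds else 0.0 for i in range(n)]
    dists = [p]
    for _ in range(K):
        q = [0.0] * n
        for i in range(n):
            if p[i] == 0.0 or deg[i] == 0:
                continue
            for j in range(n):
                if adj[i][j]:
                    q[j] += p[i] * adj[i][j] / deg[i]
        dists.append(q)
        p = q
    return dists

def classify(adj, seeds_per_class, K=3, theta=None):
    """Score each node per class by a weighted sum of landing
    probabilities, then pick the best class; theta defaults to uniform."""
    n = len(adj)
    theta = theta or [1.0 / (K + 1)] * (K + 1)
    scores = []
    for seeds in seeds_per_class:
        dists = landing_probabilities(adj, seeds, K)
        scores.append([sum(theta[k] * dists[k][i] for k in range(K + 1))
                       for i in range(n)])
    return [max(range(len(seeds_per_class)), key=lambda c: scores[c][i])
            for i in range(n)]

# two triangles joined by one edge; node 0 is labeled class 0, node 5 class 1
A = [[0, 1, 1, 0, 0, 0],
     [1, 0, 1, 0, 0, 0],
     [1, 1, 0, 1, 0, 0],
     [0, 0, 1, 0, 1, 1],
     [0, 0, 0, 1, 0, 1],
     [0, 0, 0, 1, 1, 0]]
print(classify(A, [{0}, {5}]))  # → [0, 0, 0, 1, 1, 1]
```

Each class's walks stay concentrated on its own cluster for small K, which is what makes the scheme scale: only K sparse matrix-vector products per class are needed.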
Lecture Notes on Spectral Graph Methods
[article]
2016
arXiv
pre-print
These are lecture notes that are based on the lectures from a class I taught on the topic of Spectral Graph Methods at UC Berkeley during the Spring 2015 semester. ...
In between the extremes of pure unsupervised learning and pure supervised learning, there are semi-supervised learning, transductive learning, and several other related classes of machine learning methods ...
As noted above, this latter expression is of the form of Laplacian-based linear equations. It is of the same form that arises in the semi-supervised learning examples that we discussed. ...
arXiv:1608.04845v1
fatcat:ppy6mlmfsvfcxedriwnndv6ztq
Semi-Supervised Learning with Measure Propagation
2011
Journal of Machine Learning Research
In this context, we propose a graph node ordering algorithm that is also applicable to other graph-based semi-supervised learning approaches. ...
We describe a new objective for graph-based semi-supervised learning based on minimizing the Kullback-Leibler divergence between discrete probability measures that encode class membership probabilities ...
semi-supervised classification. ...
dblp:journals/jmlr/SubramanyaB11
fatcat:yxrgbbfg4vdbzcaijcdmyebuse
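As a hedged illustration of the kind of KL-divergence objective the snippet above describes (my own simplified sketch, not the paper's exact formulation or solver), a measure-propagation-style cost combines a fit term on labeled nodes with a graph-smoothness penalty between neighboring class-membership measures:

```python
import math

def kl(p, q):
    """KL divergence between two discrete probability distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def mp_objective(p, labeled, ref, w, mu):
    """Illustrative measure-propagation-style objective:
    fit labeled nodes' measures to their reference measures, plus a
    mu-weighted KL smoothness penalty over weighted edges w[i][j]."""
    fit = sum(kl(ref[i], p[i]) for i in labeled)
    smooth = sum(w[i][j] * kl(p[i], p[j])
                 for i in range(len(p)) for j in range(len(p)) if w[i][j])
    return fit + mu * smooth

# a perfectly smooth, perfectly fit assignment has zero cost
p = [[1.0, 0.0], [1.0, 0.0]]
w = [[0, 1], [1, 0]]
print(mp_objective(p, labeled=[0], ref={0: [1.0, 0.0]}, w=w, mu=0.5))  # → 0.0
```

Because KL is asymmetric, penalizing kl(p[i], p[j]) versus kl(p[j], p[i]) gives different smoothing behavior; a real implementation has to commit to one direction.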
LASS: a simple assignment model with Laplacian smoothing
[article]
2014
arXiv
pre-print
We consider the problem of learning soft assignments of N items to K categories given two sources of information: an item-category similarity matrix, which encourages items to be assigned to categories ...
We give necessary conditions for its solution to be unique, define an out-of-sample mapping, and derive a simple, effective training algorithm based on the alternating direction method of multipliers. ...
, where we may expect each row of Z to be sparse. ...
arXiv:1405.5960v1
fatcat:wtehvcvberd4vdacdslhira3ye
Learning Representation and Control in Markov Decision Processes: New Frontiers
2007
Foundations and Trends® in Machine Learning
A unified mathematical framework for learning representation and optimal control in MDPs is presented based on a class of singular operators called Laplacians, whose matrix representations have nonpositive ...
Exact solutions of discounted and average-reward MDPs are expressed in terms of a generalized spectral inverse of the Laplacian called the Drazin inverse. ...
Finally, I would like to acknowledge the useful feedback of past and present members of the Autonomous Learning Laboratory at the University of Massachusetts, Amherst. ...
doi:10.1561/2200000003
fatcat:ioytgr63tff45kmhw7oueeeiye
From Kernel Methods to Neural Networks: A Unifying Variational Formulation
[article]
2022
arXiv
pre-print
The minimization of a data-fidelity term and an additive regularization functional gives rise to a powerful framework for supervised learning. ...
In particular, we retrieve the popular ReLU networks by letting the operator be the Laplacian. We also characterize the solution for the intermediate regularization norms ‖·‖ = ‖·‖_{L_p} with p ∈ (1, 2]. ...
Introduction Regularization theory constitutes a powerful framework for the derivation of algorithms for supervised learning [12, 39, 40] . ...
arXiv:2206.14625v1
fatcat:ljcy4vlmlfhqbfv64lyxvr2yvi
Graph Neural Networks: A Review of Methods and Applications
[article]
2021
arXiv
pre-print
In other domains such as learning from non-structural data like texts and images, reasoning on extracted structures (like the dependency trees of sentences and the scene graphs of images) is an important ...
In recent years, variants of GNNs such as graph convolutional network (GCN), graph attention network (GAT), graph recurrent network (GRN) have demonstrated ground-breaking performances on many deep learning ...
Oono and Suzuki (2020) discuss the asymptotic behaviors of GNNs as the model deepens and model them as dynamic systems. ...
arXiv:1812.08434v6
fatcat:ncz44kny6nairjjnysrqd5qjoi
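A minimal sketch of the GCN propagation rule mentioned in the snippet above, using the widely cited symmetric normalization with self-loops from Kipf and Welling; the toy graph, features, and identity weight matrix are illustrative choices of mine:

```python
def matmul(A, B):
    """Plain-Python matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def gcn_layer(adj, H, W):
    """One GCN propagation step: relu(D^{-1/2} (A + I) D^{-1/2} H W)."""
    n = len(adj)
    a_hat = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
             for i in range(n)]  # add self-loops
    d = [sum(row) for row in a_hat]
    s = [[a_hat[i][j] / (d[i] ** 0.5 * d[j] ** 0.5) for j in range(n)]
         for i in range(n)]  # symmetrically normalized adjacency
    return [[max(0.0, x) for x in row] for row in matmul(matmul(s, H), W)]

# 3-node path graph, 2-d one-hot-ish features, identity weights:
# each node's features mix with its neighbors' under the normalization
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
H = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]]
W = [[1.0, 0.0], [0.0, 1.0]]
print(gcn_layer(adj, H, W))
```

With identity weights the layer reduces to one step of normalized feature smoothing, which is why stacking many such layers tends to over-smooth, the asymptotic behavior the snippet says Oono and Suzuki analyze.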
Multi-Lateral Teleoperation Based on Multi-Agent Framework: Application to Simultaneous Training and Therapy in Telerehabilitation
2020
Frontiers in Robotics and AI
For implementing such a scheme, a novel theoretical method is proposed using the power of multi-agent systems (MAS) theory into the multi-lateral teleoperation, based on the self-intelligence in the MAS ...
In the previous related works, changing the number of participants in multi-lateral teleoperation tasks required redesigning the controllers; in this paper, using both the decentralized control ...
Therefore, based on Barbalat's Lemma, the parameter E(t) converges to zero asymptotically. ...
doi:10.3389/frobt.2020.538347
pmid:33501308
pmcid:PMC7805999
fatcat:gfjdhp3s25be5ly5atzihkvrtu
A Time-Vertex Signal Processing Framework: Scalable Processing and Meaningful Representations for Time-Series on Graphs
2018
IEEE Transactions on Signal Processing
Our results suggest that joint analysis of time-vertex signals can bring benefits to regression and learning. ...
The utility of our tools is illustrated in numerous applications and datasets, such as dynamic mesh denoising and classification, still-video inpainting, and source localization in seismic events. ...
to improvements for tasks such as clustering [10] , low-rank extraction [11] , spectral estimation [12] , [13] , non-stationary analysis [14] , [15] and semi-supervised learning [16] , [17] . ...
doi:10.1109/tsp.2017.2775589
fatcat:7hk2ktluevdz7iwbwejxwhxnrm
Showing results 1 — 15 out of 92 results