Joint Embedding Self-Supervised Learning in the Kernel Regime
[article]
2022
arXiv
pre-print
The fundamental goal of self-supervised learning (SSL) is to produce useful representations of data without access to any labels for classifying the data. ...
In this kernel regime, we derive methods to find the optimal form of the output representations for contrastive and non-contrastive loss functions. ...
kernel in a self-supervised setting. ...
arXiv:2209.14884v1
fatcat:cakvnmq2hnh4hcmhd67ltponie
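The entry above derives the optimal form of output representations for contrastive and non-contrastive SSL losses in the kernel regime. For orientation, here is a minimal NumPy sketch of a generic InfoNCE-style contrastive loss over paired embeddings; this is a standard formulation for illustration, not the paper's kernel-regime derivation, and the batch size, dimension, and temperature are placeholder assumptions.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Generic InfoNCE contrastive loss over a batch of paired embeddings.

    z1, z2: (n, d) arrays of L2-normalized embeddings of two augmented
    views; row i of z1 and row i of z2 form a positive pair, and all
    other rows act as negatives.
    """
    logits = z1 @ z2.T / temperature             # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives sit on the diagonal

# Toy usage with random unit vectors.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)
print(info_nce_loss(z, z))  # identical views give a low loss
```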
Evaluating Disentanglement in Generative Models Without Knowledge of Latent Factors
[article]
2022
arXiv
pre-print
In this work, we address this gap by introducing a method for ranking generative models based on the training dynamics exhibited during learning. ...
Probabilistic generative models provide a flexible and systematic framework for learning the underlying geometry of data. ...
Joint embeddings. ...
arXiv:2210.01760v1
fatcat:l54r6rfbofa7vjap3vwrzdv5gu
Learning Representation from Neural Fisher Kernel with Low-rank Approximation
[article]
2022
arXiv
pre-print
In this paper, we study the representation of neural networks from the view of kernels. We first define the Neural Fisher Kernel (NFK), which is the Fisher Kernel applied to neural networks. ...
We show that the low-rank approximation of NFKs derived from unsupervised generative models and supervised learning models gives rise to high-quality compact representations of data, achieving competitive ...
Unsupervised/self-supervised representation learning. Unsupervised representation learning is an old idea in deep learning. ...
arXiv:2202.01944v1
fatcat:hawsnqhv4fb27gkaydvoi6ehna
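The NFK entry above applies the Fisher kernel to neural networks and compresses it with a low-rank approximation. A minimal sketch under the standard Fisher-kernel definition follows: each row of the score matrix S is a per-example gradient of the log-likelihood with respect to the parameters, the kernel is K = S S^T (omitting Fisher-information whitening), and a truncated SVD of S yields compact embeddings. The score matrix here is a random placeholder, not the output of a real model.

```python
import numpy as np

# Placeholder score matrix: row i is the gradient of the model's
# log-likelihood at example i w.r.t. the parameters (n examples, p params).
rng = np.random.default_rng(0)
S = rng.normal(size=(100, 500))

# Fisher kernel between examples: K = S S^T (up to Fisher-information
# whitening, omitted here for brevity).
K = S @ S.T

# Rank-r approximation: a truncated SVD of S gives r-dim embeddings whose
# inner products approximate K.
r = 16
U, sing, _ = np.linalg.svd(S, full_matrices=False)
E = U[:, :r] * sing[:r]    # (n, r) compact representations
print(np.linalg.norm(K - E @ E.T) / np.linalg.norm(K))  # relative error
```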
Self-Supervised Visual Place Recognition Learning in Mobile Robots
[article]
2019
arXiv
pre-print
In this work, we develop a self-supervised approach to place recognition in robots. ...
Furthermore, we show that the newly learned embedding can be particularly powerful in disambiguating visual scenes for the task of vision-based loop-closure identification in mobile robots. ...
The kernel K computed in Equation 6 is used to "supervise" the sampling procedure. ...
arXiv:1905.04453v1
fatcat:vvtf7wnbwnbflkrjbr6kckthwa
GeRA: Label-Efficient Geometrically Regularized Alignment
[article]
2023
arXiv
pre-print
We introduce a semi-supervised Geometrically Regularized Alignment (GeRA) method to align the embedding spaces of pretrained unimodal encoders in a label-efficient way. ...
We provide empirical evidence of the effectiveness of our method in the domains of speech-text and image-text alignment. ...
Our approach falls into the regime of semi-supervised learning, as we can leverage the vast amount of unpaired (unlabeled) data with relatively few pairs to establish alignment. ...
arXiv:2310.00672v2
fatcat:lrxtu7ljcrevlaxiixanp73wue
On the stepwise nature of self-supervised learning
[article]
2023
arXiv
pre-print
We present a simple picture of the training process of self-supervised learning methods with joint embedding networks. ...
Our theory suggests that, just as kernel regression can be thought of as a model of supervised learning, kernel PCA may serve as a useful model of self-supervised learning. ...
useful discussions and comments on the manuscript. ...
arXiv:2303.15438v1
fatcat:nngsd3ypdfgjlc7dawvy7m7bcq
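The snippet above proposes kernel PCA as a model of self-supervised learning, in analogy with kernel regression as a model of supervised learning. A minimal kernel PCA sketch with an RBF kernel is given below; the bandwidth, embedding dimension, and data are illustrative placeholders rather than anything taken from the paper.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    """RBF kernel matrix K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_pca(X, d=2, gamma=0.5):
    """Top-d kernel PCA embedding of the rows of X."""
    n = X.shape[0]
    K = rbf_kernel(X, gamma)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    Kc = H @ K @ H                        # center in feature space
    vals, vecs = np.linalg.eigh(Kc)       # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:d]      # take the top d
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
print(kernel_pca(X).shape)  # (50, 2)
```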
Trading robust representations for sample complexity through self-supervised visual experience
2018
Neural Information Processing Systems
Learning in small sample regimes is among the most remarkable features of the human perceptual system. ...
Our results suggest that equivalence sets other than class labels, which are abundant in unlabeled visual experience, can be used for self-supervised learning of semantically relevant image embeddings. ...
Acknowledgments We would like to thank Tomaso Poggio for his advice and supervision throughout the project and the McGovern Institute for Brain Research at MIT for supporting this research. ...
dblp:conf/nips/TacchettiVE18
fatcat:kdwlacsth5huhgdotvpx4km5a4
Federated Self-Supervised Learning of Multi-Sensor Representations for Embedded Intelligence
2020
IEEE Internet of Things Journal
Notably, it improves generalization in a semi-supervised setting, as leveraging self-supervised learning reduces the volume of labeled data required. ...
We demonstrate the effectiveness of representations learned from an unlabeled input collection on downstream tasks by training a linear classifier over the pretrained network, and their usefulness in the low-data regime ...
Fig. 6: Effectiveness of self-supervised learning in a low-data regime. ...
doi:10.1109/jiot.2020.3009358
fatcat:ylwl4dvr2rczdlxar77d7bcxkq
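The evaluation protocol named in this entry, training a linear classifier over a frozen pretrained network, is the standard linear probe. A minimal scikit-learn sketch follows, with random placeholder features standing in for embeddings from the pretrained multi-sensor network; the library choice and all shapes are assumptions, not details from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder: rows would be frozen embeddings from the pretrained network.
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 64))
labels = rng.integers(0, 5, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.2,
                                          random_state=0)
# The encoder stays frozen; only this linear classifier is trained.
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("linear-probe accuracy:", probe.score(X_te, y_te))
```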
A Convolutional Deep Markov Model for Unsupervised Speech Representation Learning
[article]
2020
arXiv
pre-print
Probabilistic Latent Variable Models (LVMs) provide an alternative to self-supervised learning approaches for linguistic representation learning from speech. ...
Lastly, we find that ConvDMM features enable learning better phone recognizers than any other features in an extreme low-resource regime with few labeled training examples. ...
There is a glaring gap between the supervised system and all other representation learning techniques, even in the very few data regime (0.1%). ...
arXiv:2006.02547v2
fatcat:6x67rbwgqraprmovzje4nkny2i
A Convolutional Deep Markov Model for Unsupervised Speech Representation Learning
2020
Interspeech 2020
Probabilistic Latent Variable Models (LVMs) provide an alternative to self-supervised learning approaches for linguistic representation learning from speech. ...
Lastly, we find that ConvDMM features enable learning better phone recognizers than any other features in an extreme low-resource regime with few labelled training examples. ...
There is a glaring gap between the supervised system and all other representation learning techniques, even in the very few data regime (0.1%). ...
doi:10.21437/interspeech.2020-3084
dblp:conf/interspeech/KhuranaLHCLMG20
fatcat:2resntl7wzhoxi2elxcfgqkjsq
Deep anomaly detection for industrial systems: a case study
2020
Proceedings of the Annual Conference of the Prognostics and Health Management Society, PHM
In real world applications, many control settings are categorical in nature. In this paper, vector embedding and joint losses are employed to deal with such situations. ...
We formulate the problem as self-supervised learning, where data under normal operation is used to train a deep neural network autoregressive model, i.e., using a window of time series data to predict future ...
ACKNOWLEDGMENT This material is based upon work supported by the Department of Energy, National Energy Technology Laboratory under Award Number DE-FE0031763. ...
doi:10.36001/phmconf.2020.v12i1.1186
fatcat:mtseixwucjebdjbnzli356c6rm
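The formulation quoted above trains an autoregressive model on data from normal operation to predict future values from a window of past values, so that large prediction errors flag anomalies. A minimal sketch with a linear least-squares predictor standing in for the deep network; the window length, signal, and injected anomaly are all illustrative.

```python
import numpy as np

def make_windows(series, w):
    """Sliding windows of length w and the value that follows each one."""
    X = np.stack([series[i:i + w] for i in range(len(series) - w)])
    y = series[w:]
    return X, y

rng = np.random.default_rng(0)
normal = np.sin(np.linspace(0, 40, 1000)) + 0.05 * rng.normal(size=1000)

w = 20
X, y = make_windows(normal, w)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # fit on normal data only

test = normal.copy()
test[700] += 3.0                              # inject an anomaly
Xt, yt = make_windows(test, w)
errors = np.abs(Xt @ coef - yt)               # anomaly score = prediction error
print("most anomalous step:", np.argmax(errors) + w)  # near index 700
```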
Encoder-Decoder Networks for Self-Supervised Pretraining and Downstream Signal Bandwidth Regression on Digital Antenna Arrays
[article]
2023
arXiv
pre-print
This work presents the first application of self-supervised learning to data from digital antenna arrays. ...
Encoder-decoder networks are pretrained on digital array data to perform a self-supervised noisy-reconstruction task called channel in-painting, in which the network infers the contents of array data that ...
Self-Supervised Learning. The area of self-supervised pretraining of neural networks was similarly revived in the post-AlexNet era, with major advances first in natural language modeling [12], followed ...
arXiv:2307.03327v1
fatcat:6njvwgiag5g3rmuiio5cprfnvu
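The pretraining task named above, channel in-painting, hides part of the array data and trains the network to reconstruct it from what remains. The sketch below illustrates that objective with a linear reconstructor standing in for the encoder-decoder; the synthetic correlated channels and the chosen mask are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_snapshots, n_channels = 2000, 16

# Placeholder array data: correlated channels, loosely mimicking snapshots
# from a digital antenna array.
mix = rng.normal(size=(4, n_channels))
D = rng.normal(size=(n_snapshots, 4)) @ mix \
    + 0.1 * rng.normal(size=(n_snapshots, n_channels))

# In-painting objective: hide some channels, reconstruct them from the rest.
masked = np.array([2, 7, 11])  # channels to hide
observed = np.setdiff1d(np.arange(n_channels), masked)

W, *_ = np.linalg.lstsq(D[:, observed], D[:, masked], rcond=None)
recon = D[:, observed] @ W
mse = np.mean((recon - D[:, masked]) ** 2)
print("in-painting MSE:", mse)  # small, since the channels are correlated
```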
More From Less: Self-Supervised Knowledge Distillation for Routine Histopathology Data
[article]
2023
arXiv
pre-print
Using self-supervised deep learning, we demonstrate that it is possible to distil knowledge during training from information-dense data into models which only require information-sparse data for inference ...
This improves downstream classification accuracy on information-sparse data, making it comparable with the fully-supervised baseline. ...
Acknowledgements LF is supported by the MRC grant MR/W006804/ ...
arXiv:2303.10656v2
fatcat:ylvxd7w2lfbrtituspabfkhnay
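The recipe described above distils knowledge from information-dense inputs, seen by a teacher, into a student that only needs information-sparse inputs at inference. A common way to realize this is embedding matching; the MSE loss below is a generic choice for illustration, not necessarily the paper's exact objective, and the embeddings are random placeholders.

```python
import numpy as np

def embedding_distillation_loss(student_emb, teacher_emb):
    """MSE between the student's embedding of the information-sparse view
    and the frozen teacher's embedding of the information-dense view.
    In training this term would be added to the usual task loss."""
    return np.mean((student_emb - teacher_emb) ** 2)

rng = np.random.default_rng(0)
teacher = rng.normal(size=(32, 128))                  # dense-input embeddings
student = teacher + 0.1 * rng.normal(size=(32, 128))  # sparse-input embeddings
print(embedding_distillation_loss(student, teacher))
```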
Sense and Learn: Self-Supervision for Omnipresent Sensors
[article]
2021
arXiv
pre-print
In particular, we show that the self-supervised network can be utilized as initialization to significantly boost the performance in a low-data regime with as few as 5 labeled instances per class, which ...
In this work, we leverage the self-supervised learning paradigm towards realizing the vision of continual learning from unlabeled inputs. ...
Various icons used in the figure are created by Sriramteja SRT, Berkah Icon, Ben Davis, Eucalyp, ibrandify, Clockwise, Aenne Brielmann, Anuar Zhumaev, and Tim Madle from the Noun Project. ...
arXiv:2009.13233v2
fatcat:ver2i7o5zvgv3boterps4tqxcu
Pseudo Label Is Better Than Human Label
[article]
2022
arXiv
pre-print
In this paper, we show that we can train a strong teacher model to produce high quality pseudo labels by utilizing recent self-supervised and semi-supervised learning techniques. ...
Specifically, we use JUST (Joint Unsupervised/Supervised Training) and iterative noisy student teacher training to train a 600 million parameter bi-directional teacher model. ...
In this work, we use joint unsupervised/supervised training (JUST) [25] to combine the supervised RNNT loss and the self-supervised W2v-BERT loss. ...
arXiv:2203.12668v3
fatcat:cgcqnldibva5fk2w6jcbstey34
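The JUST objective quoted above combines the supervised RNNT loss with the self-supervised w2v-BERT loss. The combination itself is a weighted sum; the sketch below uses a hypothetical mixing weight beta, since the actual weighting is not given in this snippet.

```python
def just_loss(rnnt_loss, w2v_bert_loss, beta=0.5):
    """Joint unsupervised/supervised training objective: a weighted sum of
    the supervised RNNT loss and the self-supervised w2v-BERT loss."""
    return rnnt_loss + beta * w2v_bert_loss

print(just_loss(2.3, 4.1))  # toy scalar losses for illustration
```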
Showing results 1 — 15 out of 2,532 results