Tailoring Self-Supervision for Supervised Learning
[article]
2022
arXiv
pre-print
Yet, the benefits of self-supervision are not fully exploited as previous pretext tasks are specialized for unsupervised representation learning. ...
First, the tasks need to guide the model to learn rich features. Second, the transformations involved in the self-supervision should not significantly alter the training distribution. ...
However, we claim that the self-supervision task is still solvable since the features encode information about the absolute position thanks to zero padding [30]. ...
arXiv:2207.10023v1
fatcat:dr7soaiqh5efhjnrtspqvwtwvi
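The snippet in the entry above claims that zero padding lets convolutional features encode absolute position. A quick, self-contained check of that claim (the constant input, kernel, and sizes are illustrative assumptions, not the paper's setup):

```python
# With zero padding, a convolution responds differently near the borders even
# for a constant input, so the feature map carries absolute-position information.
import torch
import torch.nn.functional as F

x = torch.ones(1, 1, 8, 8)        # constant image: no content cues at all
k = torch.ones(1, 1, 3, 3)        # simple summing kernel
y = F.conv2d(x, k, padding=1)     # zero padding at the borders

print(y[0, 0, 0, 0].item())       # 4.0 at the corner (5 of 9 taps fall on padding)
print(y[0, 0, 4, 4].item())       # 9.0 in the interior
```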
A Large-Scale Analysis on Self-Supervised Video Representation Learning
[article]
2023
arXiv
pre-print
Next, we study five different aspects of self-supervised learning important for videos: 1) dataset size, 2) complexity, 3) data distribution, 4) data noise, and 5) feature analysis. ...
Self-supervised learning is an effective way for label-free model pre-training, especially in the video domain where labeling is expensive. ...
analysis on five important factors for self-supervised learning in videos: 1) dataset size, 2) task complexity, 3) distribution shift, 4) data noise, and 5) feature analysis. • Finally, we put some of ...
arXiv:2306.06010v2
fatcat:pbcufpt6rzcl5opxzqkvu7hbwu
Self-Supervised Dynamic Networks for Covariate Shift Robustness
[article]
2020
arXiv
pre-print
We present the conceptual and empirical advantages of the proposed method on the problem of image classification under different covariate shifts, and show that it significantly outperforms comparable ...
, and thus directly handle covariate shifts at test-time. ...
shifted compared to the train distribution. ...
arXiv:2006.03952v1
fatcat:r3b6qagax5bpvgwgdgzq6xtfja
Big Self-Supervised Models Advance Medical Image Classification
[article]
2021
arXiv
pre-print
In addition, we show that big self-supervised models are robust to distribution shift and can learn efficiently with a small number of labeled medical images. ...
for self-supervised learning. ...
We are also grateful to Jim Winkens, Megan Wilson, Umesh Telang, Patricia Macwilliams, Greg Corrado, Dale Webster, and our collaborators at DermPath AI for their support of this work. ...
arXiv:2101.05224v2
fatcat:ed4k5blox5evzlu7zhza2gdbpm
Day2Dark: Pseudo-Supervised Activity Recognition beyond Silent Daylight
[article]
2023
arXiv
pre-print
The main causes are the limited availability of labeled dark videos to learn from, as well as the distribution shift towards the lower color contrast at test-time. ...
To compensate for the lack of labeled dark videos, we introduce a pseudo-supervised learning scheme, which utilizes easy-to-obtain unlabeled and task-irrelevant dark videos to improve an activity recognizer ...
Ministry of Economic Affairs and Climate Policy. ...
arXiv:2212.02053v3
fatcat:kgcl56ok45dfrfwa7fk3moip24
Learning to Embed Time Series Patches Independently
[article]
2024
arXiv
pre-print
In addition, we introduce complementary contrastive learning to hierarchically capture adjacent time series information efficiently. ...
and training/inference time. ...
Figure 3: Complementary contrastive learning.
Figure 4: MSE by D and dropout.
Figure 5: PI vs. PD tasks under distribution shifts. ...
arXiv:2312.16427v4
fatcat:4hzgdch5cbchleccn36efe7wji
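As a rough illustration of the patch-independent (PI) embedding this entry refers to, the sketch below embeds every time-series patch with a shared MLP and never mixes information across patches; the patch length, hidden width, and two-layer encoder are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PatchIndependentEncoder(nn.Module):
    """Embed each patch of a univariate series independently with a shared MLP."""

    def __init__(self, patch_len: int = 16, dim: int = 128):
        super().__init__()
        self.patch_len = patch_len
        self.mlp = nn.Sequential(
            nn.Linear(patch_len, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, series: torch.Tensor) -> torch.Tensor:
        # series: (batch, length); length is assumed to be a multiple of patch_len.
        b, length = series.shape
        patches = series.reshape(b, length // self.patch_len, self.patch_len)
        return self.mlp(patches)   # (batch, num_patches, dim), no cross-patch mixing

z = PatchIndependentEncoder()(torch.randn(8, 256))
print(z.shape)                     # torch.Size([8, 16, 128])
```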
Self-supervised SAR-optical Data Fusion and Land-cover Mapping using Sentinel-1/-2 Images
[article]
2021
arXiv
pre-print
Experimental results show that the proposed approach achieves comparable accuracy and reduces the dimension of features with respect to the image-level contrastive learning method. ...
For the land-cover mapping task, we assign each pixel a land-cover class by the joint use of pre-trained features and spectral information of the image itself. ...
We also investigate self-trained land-cover classification, considering spectral information in SAR-optical images and the benefits of self-supervised pre-trained features. ...
arXiv:2103.05543v3
fatcat:5futni3e35bj3nd6tux7cmgjp4
Collaboration of Pre-trained Models Makes Better Few-shot Learner
[article]
2022
arXiv
pre-print
Recently, CLIP-based methods have shown promising few-shot performance, benefiting from contrastive language-image pre-training. ...
By such collaboration, CoMo can fully unleash the potential of different pre-training methods and unify them to perform state-of-the-art for few-shot classification. ...
We further evaluate our CoMo's robustness to distribution shift by training on the "Source" dataset and testing on "Target" datasets. ...
arXiv:2209.12255v2
fatcat:fswdxtwiozbnfiy2pjlxefe4pu
TTT++: When Does Self-Supervised Test-Time Training Fail or Thrive?
2021
Neural Information Processing Systems
Test-time training (TTT) through self-supervised learning (SSL) is an emerging paradigm to tackle distributional shifts. ...
This analysis motivates our use of more informative self-supervision in the form of contrastive learning for visual recognition problems. ...
Acknowledgements This work was supported by the Swiss National Science Foundation under Grant 200021-192326, Honda R&D Co. Ltd, EPFL Open Science fund and Valeo. ...
dblp:conf/nips/LiuKDBMA21
fatcat:x5flrtxdlfc45cnaj62hapdypq
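The entry above motivates contrastive learning as the self-supervision used for test-time training. Below is a minimal sketch of that idea, assuming an `encoder`, a frozen classification `head`, and an `augment` function; the single SGD adaptation step and the plain InfoNCE loss are illustrative simplifications, not the TTT++ procedure itself.

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1):
    # Standard InfoNCE between two augmented views of the same test batch.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)

def test_time_adapt(encoder, head, x_test, augment, steps: int = 1, lr: float = 1e-4):
    # Adapt only the encoder on the unlabeled test batch, then predict
    # with the classification head kept frozen.
    opt = torch.optim.SGD(encoder.parameters(), lr=lr)
    for _ in range(steps):
        loss = info_nce(encoder(augment(x_test)), encoder(augment(x_test)))
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return head(encoder(x_test))
```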
Audio-Adaptive Activity Recognition Across Video Domains
[article]
2022
arXiv
pre-print
The leading approaches reduce the shift in activity appearance by adversarial training and self-supervised learning. ...
This paper strives for activity recognition under domain shift, for example caused by change of scenery or camera viewpoint. ...
Ministry of Economic Affairs and Climate Policy. ...
arXiv:2203.14240v2
fatcat:fibtjrqa4vay3h4ngatq6cveeu
A simple, efficient and scalable contrastive masked autoencoder for learning visual representations
[article]
2022
arXiv
pre-print
We introduce CAN, a simple, efficient and scalable method for self-supervised learning of visual representations. ...
The learning mechanisms are complementary to one another: contrastive learning shapes the embedding space across a batch of image samples; masked autoencoders focus on reconstruction of the low-frequency ...
Robustness to distribution shift. Finally, we consider the robustness of CAN to distribution shifts. ...
arXiv:2210.16870v1
fatcat:4ugxkwczjzchbhrfbxmpkmdfky
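As a loose sketch of how the two complementary mechanisms described above can be combined, the function below masks patches in two augmented views, reconstructs only the masked patches, and contrasts the pooled embeddings across the batch; the masking ratio, mean pooling, and loss weighting are assumptions rather than CAN's exact recipe.

```python
import torch
import torch.nn.functional as F

def contrastive_masked_loss(encoder, decoder, view1, view2,
                            mask_ratio=0.5, weight=0.5, temperature=0.1):
    # view1, view2: (batch, num_patches, patch_dim) patchified augmented views.
    def branch(patches):
        b, n, _ = patches.shape
        mask = torch.rand(b, n, device=patches.device) < mask_ratio
        latent = encoder(patches * (~mask).unsqueeze(-1).float())  # (b, n, dim)
        recon = F.mse_loss(decoder(latent)[mask], patches[mask])   # masked patches only
        return F.normalize(latent.mean(dim=1), dim=1), recon

    z1, recon1 = branch(view1)
    z2, recon2 = branch(view2)
    logits = z1 @ z2.t() / temperature                             # (b, b) similarities
    contrastive = F.cross_entropy(logits, torch.arange(z1.size(0), device=logits.device))
    return weight * contrastive + (1 - weight) * 0.5 * (recon1 + recon2)
```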
Representation Learning with Multiple Lipschitz-Constrained Alignments on Partially-Labeled Cross-Domain Data
2020
Proceedings of the AAAI Conference on Artificial Intelligence
by cluster assumption-based class alignment while keeping the target local topological information in complementary representation by self alignment. ...
However, existing cross-domain representation learning focuses on building one shared space and ignores the unlabeled data in the source domain, which cannot effectively capture the distribution and structure ...
This self alignment not only directly affects the generation of complementary representation but also benefits the learning of common representation. ...
doi:10.1609/aaai.v34i04.5856
fatcat:an2fveyfe5hl3f5fzvtgskrx7y
A Survey of Data-Efficient Graph Learning
[article]
2024
arXiv
pre-print
Next, we systematically review recent advances on this topic from several key aspects, including self-supervised graph learning, semi-supervised graph learning, and few-shot graph learning. ...
In this paper, we introduce a novel concept of Data-Efficient Graph Learning (DEGL) as a research frontier, and present the first survey that summarizes the current progress of DEGL. ...
Graph Learning (CSemi); Semi-supervised Graph Learning under Domain Shift (Semi w DS). ...
arXiv:2402.00447v3
fatcat:zygm52p4rvhrraag46b7lyzzsy
CropMix: Sampling a Rich Input Distribution via Multi-Scale Cropping
[article]
2022
arXiv
pre-print
Moreover, we show that CropMix benefits both contrastive learning and masked image modeling towards more powerful representations, where preferable results are achieved when learned representations ...
The new input distribution, serving as training data useful for a number of vision tasks, is then formed by simply mixing multiple cropped views. ...
We then evaluate the effectiveness of CropMix on contrastive learning [17, 8, 21] . ...
arXiv:2205.15955v1
fatcat:b4depefjonanlggi3vgdjpwnxm
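A hedged sketch of the multi-scale cropping and mixing described in the entry above: each crop is drawn from a different scale range and the crops are blended with random convex weights. The scale ranges, number of crops, and the simple weighted average are illustrative assumptions, not the paper's exact recipe.

```python
import torch
from torchvision import transforms

def cropmix(image: torch.Tensor, out_size: int = 224,
            scales=((0.08, 0.4), (0.4, 1.0))) -> torch.Tensor:
    # image: (C, H, W) float tensor; one random resized crop per scale range.
    crops = torch.stack([
        transforms.RandomResizedCrop(out_size, scale=s)(image) for s in scales
    ])
    weights = torch.softmax(torch.rand(len(scales)), dim=0)        # random convex mix
    return (weights.view(-1, 1, 1, 1) * crops).sum(dim=0)

mixed = cropmix(torch.rand(3, 256, 256))                           # (3, 224, 224)
```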
Multimodal Self-Supervised Learning of General Audio Representations
[article]
2021
arXiv
pre-print
As a result, our audio model achieves a state-of-the-art 42.4 mAP on the AudioSet classification downstream task, closing the gap between supervised and self-supervised methods trained on the same dataset ...
Existing contrastive audio representation learning methods mainly focus on using the audio modality alone during training. ...
Acknowledgements The authors would like to thank Marco Tagliasacchi and Neil Zeghidour for their help with the downstream tasks. ...
arXiv:2104.12807v2
fatcat:xqe2v7ol2bf5nbfxpzkyh6wxju
Showing results 1 — 15 out of 56,960 results