Our main findings are twofold: 1) masked image modeling still demands large-scale data in order to scale up compute and model parameters; 2) masked image modeling cannot benefit from more data under a non-overfitting scenario, which diverges from previous observations in self-supervised ...
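For context, masked image modeling pre-trains a vision model by hiding a fraction of image patches and training the model to reconstruct them; the data-scaling question above is about how much pre-training data this recipe needs. The sketch below shows only the random patch-masking step in plain NumPy; the function name and the 0.75 mask ratio are illustrative assumptions, not details taken from the papers quoted here.

```python
import numpy as np

def random_patch_mask(num_patches, mask_ratio=0.75, rng=None):
    """Return a boolean mask over image patches (True = patch is hidden).

    mask_ratio=0.75 is an illustrative default; MIM methods differ on this value.
    """
    rng = rng or np.random.default_rng()
    num_masked = int(round(num_patches * mask_ratio))
    mask = np.zeros(num_patches, dtype=bool)
    masked_idx = rng.choice(num_patches, size=num_masked, replace=False)
    mask[masked_idx] = True
    return mask

# Example: a 224x224 image split into 16x16 patches gives 14*14 = 196 patches.
mask = random_patch_mask(196)
print(mask.sum(), "of", mask.size, "patches masked")
```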
This observation allows us to evaluate pre-trained models in advance, without having to make costly trial-and-error assessments of downstream ...
In this paper, we systematically investigate the scaling properties, especially the data scaling capability of masked image modeling in terms of different ...
Abstract: Understanding whether self-supervised learning methods can scale with unlimited data is crucial for training large-scale models.
In this work, we try to break down these preconceptions and systematically study the scaling behaviors of MIM through extensive experiments, with data ranging ...
The application task may require learning from additional information, such as labels and domain-specific techniques, to reach sufficient performance. In addition, the application ...
An important goal of self-supervised learning is to enable model pre-training to benefit from almost unlimited data. However, one method that has recently ...
We illustrate the training details of pre-training and fine-tuning for different tasks and different models. Table 1 presents the pre-training details.
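Because Table 1 itself is not reproduced in this excerpt, the following is only a minimal sketch of how per-model pre-training details are commonly recorded as a configuration object; every field name and value is a hypothetical placeholder, not an entry from the table.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PretrainDetails:
    """One row of a pre-training-details table, as a plain config object.

    All concrete values are hypothetical placeholders; the actual numbers
    live in the paper's Table 1, which is not reproduced here.
    """
    model: str                      # backbone name (assumed field)
    data_fraction: str              # share of the pre-training set used (assumed field)
    epochs: Optional[int] = None    # placeholder
    batch_size: Optional[int] = None
    base_lr: Optional[float] = None
    mask_ratio: Optional[float] = None

# Example row filled with placeholders only:
row = PretrainDetails(model="ViT-B/16", data_fraction="<fraction>")
print(row)
```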