To fully exploit PTM hubs, we propose a new paradigm: ranking and tuning pre-trained models. Figure 2 provides an overview of the paradigm. It consists of two parts: (1) PTMs are ranked by a transferability metric; (2) top-ranked PTMs are tuned to meet downstream applications' requirements.
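The ranking step can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's method: it uses a stand-in transferability proxy (the negative ridge-regression residual of one-hot labels on each PTM's features) in place of LogME, and the model names and feature matrices are hypothetical.

```python
import numpy as np

def transferability_score(features, labels, reg=1e-3):
    """Stand-in transferability proxy (the paper uses LogME):
    negative ridge-regression residual of one-hot labels on the
    PTM's extracted features. Higher means more transferable."""
    n, d = features.shape
    y = np.eye(labels.max() + 1)[labels]  # one-hot targets
    # ridge solution: W = (F^T F + reg * I)^-1 F^T Y
    w = np.linalg.solve(features.T @ features + reg * np.eye(d),
                        features.T @ y)
    residual = np.mean((features @ w - y) ** 2)
    return -residual

def rank_ptms(ptm_features, labels):
    """Step (1) of the paradigm: rank candidate PTMs by their
    transferability score on the target data."""
    scores = {name: transferability_score(f, labels)
              for name, f in ptm_features.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Toy target task: 100 examples, 3 classes, two hypothetical PTMs
# represented only by the features they extract on the target data.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=100)
ptm_features = {
    "ptm_informative": np.eye(3)[labels] + 0.1 * rng.standard_normal((100, 3)),
    "ptm_random": rng.standard_normal((100, 3)),
}
ranking = rank_ptms(ptm_features, labels)
print(ranking)  # the PTM with label-aligned features ranks first
```

Step (2) of the paradigm then fine-tunes the top-ranked PTM(s), which is omitted here.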
We propose a new paradigm for exploiting PTM hubs, namely ranking and tuning pre-trained models. It has significant advantages over common practice.
Oct 20, 2021 · Pre-trained model hubs with many pre-trained models (PTMs) have been a cornerstone in deep learning. Although built at a high cost, they remain under-exploited.
These models can be exploited to improve performance, or at least to accelerate the training of new models, via the proposed "ranking and tuning" paradigm.
The proposed paradigm of ranking and tuning pre-trained models: PTMs are ranked by their transferability to the target data, then either the best PTM is fine-tuned alone or multiple top-ranked PTMs are tuned together.
LogME: Practical Assessment of Pre-trained Models for Transfer Learning, ICML 2021. Ranking and Tuning Pre-trained Models: A New Paradigm for Exploiting Model Hubs.