A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; you can also visit the original URL. The file type is application/pdf.
A Survey on Masked Autoencoder for Self-supervised Learning in Vision and Beyond
[article] · 2022 · arXiv pre-print
As a milestone bridging the gap with BERT in NLP, the masked autoencoder has attracted unprecedented attention for SSL in vision and beyond. ...
Masked autoencoders are scalable vision learners, as the title of MAE states, which suggests that self-supervised learning (SSL) in vision might follow a trajectory similar to that in NLP. ...
Their method, based on a Swin Transformer, combines the multi-scale feature learning of hierarchical ViTs (hViT) with the efficiency of masked image modeling by making the hierarchical transformer compatible with MAE. ...
arXiv:2208.00173v1
fatcat:d2bxvpzcabg3lei4mcnsts5wqe
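The abstract snippets above describe the core MAE recipe: mask a large random subset of image patches and train the model to reconstruct them. A minimal sketch of the random patch-masking step is below; the function name, the 75% ratio default, and the 14x14 patch grid are illustrative assumptions, not code from the surveyed papers.

```python
import numpy as np

def random_patch_mask(num_patches, mask_ratio=0.75, seed=None):
    """Return a boolean mask over patch indices (True = masked),
    in the spirit of MAE's random masking. Illustrative only."""
    rng = np.random.default_rng(seed)
    num_masked = int(num_patches * mask_ratio)
    mask = np.zeros(num_patches, dtype=bool)
    # Mask a uniformly random subset of patches.
    mask[rng.permutation(num_patches)[:num_masked]] = True
    return mask

# Example: a 14x14 patch grid (196 patches) with 75% of patches masked;
# the encoder would only see the visible (unmasked) patch indices.
mask = random_patch_mask(196, 0.75, seed=0)
visible = np.flatnonzero(~mask)
```

In MAE-style training, only `visible` patches are fed to the encoder, which is what makes the high masking ratio computationally cheap.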
Discerning a 'Rhetorics of Catechesis' in Origen of Alexandria's Commentary on the Gospel of John: A Sociorhetorical Analysis of Book XIII:3-42 (John 4:13-15)
2013
Their assessments indicate that we need a way out of this like-dislike bind in order to move towards appreciating his writings based on their inner structure and any discernible consi [...] ...
Robbins and continues to be developed by a group of international scholars under his direction. ...
, the Hivites, and the Jebusites ...
doi:10.20381/ruor-844
fatcat:o6eyuyvbxbg5doe7xbukqzetti