A Survey on Segment Anything Model (SAM): Vision Foundation Model Meets Prompt Engineering
by
Chaoning Zhang, Fachrina Dewi Puspitasari, Sheng Zheng, Chenghao Li, Yu Qiao, Taegoo Kang, Xinru Shan, Chenshuang Zhang, Caiyan Qin, Francois Rameau, Lik-Hang Lee, Sung-Ho Bae (+1 others)
2023
Abstract
The Segment Anything Model (SAM), developed by Meta AI Research, has recently
attracted significant attention. Trained on a large segmentation dataset of
over 1 billion masks, SAM is capable of segmenting any object in a given
image. In the original SAM work, the authors evaluated SAM on zero-shot
transfer tasks (such as edge detection). Since then, numerous works have
investigated SAM's performance in various scenarios for recognizing and
segmenting objects. Moreover, many projects have emerged that demonstrate
SAM's versatility as a foundation model by combining it with other models,
such as Grounding DINO, Stable Diffusion, and ChatGPT. With the number of
related papers and projects growing rapidly, it is challenging for readers
to keep up with the development of SAM. To this end, this work conducts the
first comprehensive survey on SAM. This is an ongoing project, and we intend
to update the manuscript on a regular basis; readers who complete new works
related to SAM are welcome to contact us so that we can include them in the
next version.
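As an illustration of the prompt-driven workflow the survey covers, below is a minimal sketch of point-prompted segmentation using Meta's open-source segment-anything package and its SamPredictor interface. The checkpoint path, image file, and click coordinates are illustrative placeholders, not values taken from the survey.

import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load a pretrained SAM backbone (ViT-H here); the checkpoint file must be
# downloaded separately from the segment-anything release page.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")  # placeholder path
predictor = SamPredictor(sam)

# SAM embeds the image once; afterwards, many prompts can be run cheaply
# against the cached embedding.
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)  # RGB uint8
predictor.set_image(image)

# Prompt with a single foreground click at pixel (x=500, y=375); label 1
# marks foreground, label 0 would mark background.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,  # return several candidate masks for an ambiguous prompt
)
best_mask = masks[np.argmax(scores)]  # boolean HxW array for the top-scoring mask

Box prompts and automatic whole-image mask generation (SamAutomaticMaskGenerator) follow the same embed-once, prompt-many pattern.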
Archived Files and Locations
application/pdf 12.6 MB
arXiv 2306.06211v3: arxiv.org (repository), web.archive.org (webarchive)