66,142 Hits in 4.6 sec

Multi-Prompt Fine-Tuning of Foundation Models for Enhanced Medical Image Segmentation [article]

Xiangru Li, Yifei Zhang, Liang Zhao
2023 arXiv   pre-print
The Segment Anything Model (SAM) is a powerful foundation model that introduced revolutionary advancements in natural image segmentation.  ...  wide range of segmentation tasks.  ...  The Segment Anything Model (SAM), introduced by the SA project, enjoys robust zero-shot performance comparable to many supervised methods [9].  ... 
arXiv:2310.02381v1 fatcat:h2ao64opqnbipoxudwmay6vkr4

A Survey on Segment Anything Model (SAM): Vision Foundation Model Meets Prompt Engineering [article]

Chaoning Zhang, Fachrina Dewi Puspitasari, Sheng Zheng, Chenghao Li, Yu Qiao, Taegoo Kang, Xinru Shan, Chenshuang Zhang, Caiyan Qin, Francois Rameau, Lik-Hang Lee, Sung-Ho Bae (+1 others)
2023 arXiv   pre-print
Trained on a large segmentation dataset of over 1 billion masks, SAM is capable of segmenting any object in a given image.  ...  The segment anything model (SAM), developed by Meta AI Research, has recently attracted significant attention.  ...  Another recent work performs a comprehensive evaluation of the robustness of SAM under corruption and beyond.  ... 
arXiv:2306.06211v3 fatcat:wtfxuprg25akdobdtx7u54huwu

Polyp-SAM++: Can A Text Guided SAM Perform Better for Polyp Segmentation? [article]

Risab Biswas
2023 arXiv   pre-print
We will evaluate the performance of a text-guided SAM on the polyp segmentation task on benchmark datasets. We will also compare the results of text-guided SAM versus unprompted SAM.  ...  In the field of medical image segmentation, polyp segmentation holds a position of high importance, so creating a model that is both robust and precise is quite challenging.  ...  lang-segment-anything.  ... 
arXiv:2308.06623v1 fatcat:zmrpxyhg2feqxix5rc6vlvee5y
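
The comparison this entry describes, text-guided versus unprompted SAM on benchmark polyp datasets, typically reduces to an overlap metric averaged per image. Below is a minimal sketch of the Dice coefficient used for such comparisons; the function name and usage note are illustrative, not taken from the paper:

    import numpy as np

    def dice_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-6) -> float:
        # pred, gt: binary (H, W) masks; returns Dice overlap in [0, 1]
        pred = pred.astype(bool)
        gt = gt.astype(bool)
        inter = np.logical_and(pred, gt).sum()
        return float((2.0 * inter + eps) / (pred.sum() + gt.sum() + eps))

    # Usage: average dice_score(mask_text_guided, mask_gt) and
    # dice_score(mask_unprompted, mask_gt) over the benchmark to compare settings.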

Robustness of SAM: Segment Anything Under Corruptions and Beyond [article]

Yu Qiao, Chaoning Zhang, Taegoo Kang, Donghun Kim, Chenshuang Zhang, Choong Seon Hong
2023 arXiv   pre-print
Segment anything model (SAM), as the name suggests, is claimed to be capable of cutting out any object and demonstrates impressive zero-shot transfer performance with the guidance of prompts.  ...  To the best of our knowledge, our work is the first of its kind to evaluate the robustness of SAM under style change, local occlusion, and local adversarial patch attacks.  ...  Conclusions: In this work, we are among the first to evaluate the robustness of the segment anything model (SAM), for which we provide a comprehensive evaluation.  ... 
arXiv:2306.07713v3 fatcat:vw3fsccx4reb7ihli6vigltfgi
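
Robustness studies of this kind usually re-run the model on synthetically corrupted copies of each image and measure the degradation of an overlap metric. A minimal sketch of that loop for one corruption type; the sigma values are illustrative, not the benchmark's exact constants:

    import numpy as np

    def gaussian_noise(image: np.ndarray, severity: int) -> np.ndarray:
        # image: (H, W, 3) uint8; severity in 1..5 (illustrative sigma values)
        sigma = [8, 16, 24, 32, 48][severity - 1]
        noisy = image.astype(np.float32) + np.random.normal(0.0, sigma, image.shape)
        return np.clip(noisy, 0, 255).astype(np.uint8)

    def iou(pred: np.ndarray, gt: np.ndarray) -> float:
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        return float(inter) / max(int(union), 1)

    # Usage: segment both the clean and the corrupted version of each image with
    # the same prompt, then report mean IoU per severity to quantify the drop.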

AquaSAM: Underwater Image Foreground Segmentation [article]

Muduo Xu, Jianhao Su, Yutao Liu
2023 arXiv   pre-print
The Segment Anything Model (SAM) has revolutionized natural image segmentation; nevertheless, its performance on underwater images is still restricted.  ...  This work presents AquaSAM, the first attempt to extend the success of SAM to underwater images with the purpose of creating a versatile method for the segmentation of various underwater targets.  ...  of the Dice loss and cross-entropy loss, which has demonstrated robustness in various segmentation tasks [30, 31].  ... 
arXiv:2308.04218v1 fatcat:lh7w2vzu3ratplxdd3vkwhi6fy
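
The loss named in the snippet, a combination of Dice loss and cross-entropy loss, is a standard choice for segmentation fine-tuning. A minimal PyTorch sketch for the binary case; the equal weighting of the two terms is an assumption, since the paper's exact weights are not quoted here:

    import torch
    import torch.nn.functional as F

    def dice_ce_loss(logits: torch.Tensor, targets: torch.Tensor,
                     eps: float = 1e-6) -> torch.Tensor:
        # logits: (N, 1, H, W) raw scores; targets: (N, 1, H, W) in {0, 1}
        probs = torch.sigmoid(logits)
        inter = (probs * targets).sum(dim=(2, 3))
        denom = probs.sum(dim=(2, 3)) + targets.sum(dim=(2, 3))
        dice = 1.0 - (2.0 * inter + eps) / (denom + eps)
        ce = F.binary_cross_entropy_with_logits(logits, targets.float())
        return dice.mean() + ce  # equal weighting assumed for illustration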

CEmb-SAM: Segment Anything Model with Condition Embedding for Joint Learning from Heterogeneous Datasets [article]

Dongik Shin, Beomsuk Kim, Seungjun Baek
2023 arXiv   pre-print
For robust segmentation, we leverage the recently proposed Segment Anything Model (SAM) in order to incorporate sub-group information into the model.  ...  Although using the common modality of ultrasound, one typically needs separate datasets in order to segment, for example, different anatomical structures or lesions with different levels of malignancy.  ...  Our main contributions are as follows: -We propose CEmb-SAM, which jointly trains a model over heterogeneous datasets, leveraging the Segment Anything Model for robust segmentation performance.  ... 
arXiv:2308.06957v1 fatcat:l26p3b2wy5acdmtbp7we44g6tu
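
One simple way to inject sub-group (dataset) information into a shared encoder, as this entry describes, is a learned per-subgroup embedding that modulates features FiLM-style. The sketch below illustrates that general idea only; CEmb-SAM's actual conditioning module may differ in detail:

    import torch
    import torch.nn as nn

    class ConditionEmbedding(nn.Module):
        # Learned per-subgroup scale/shift applied to encoder features.
        def __init__(self, num_subgroups: int, channels: int):
            super().__init__()
            self.scale = nn.Embedding(num_subgroups, channels)
            self.shift = nn.Embedding(num_subgroups, channels)
            nn.init.ones_(self.scale.weight)   # start as an identity transform
            nn.init.zeros_(self.shift.weight)

        def forward(self, feats: torch.Tensor,
                    subgroup_id: torch.Tensor) -> torch.Tensor:
            # feats: (N, C, H, W); subgroup_id: (N,) long dataset/subgroup indices
            s = self.scale(subgroup_id)[:, :, None, None]
            b = self.shift(subgroup_id)[:, :, None, None]
            return feats * s + b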

SSR: SAM is a Strong Regularizer for domain adaptive semantic segmentation [article]

Yanqi Ge, Ye Huang, Wen Li, Lixin Duan
2024 arXiv   pre-print
We introduced SSR, which utilizes SAM (segment-anything) as a strong regularizer during training, to greatly enhance the robustness of the image encoder for handling various domains.  ...  Meanwhile, the ImageNet pre-trained image encoder is still a mature choice of backbone for the semantic segmentation task, especially when the SAM is category-irrelevant.  ...  SAM (Segment-anything [33]) is a vision foundation model trained on a massive-scale dataset (SA-1B).  ... 
arXiv:2401.14686v1 fatcat:hpm532rtvrc4bkdtzt3flnsega

SAM Meets Robotic Surgery: An Empirical Study on Generalization, Robustness and Adaptation [article]

An Wang, Mobarakol Islam, Mengya Xu, Yang Zhang, Hongliang Ren
2023 arXiv   pre-print
The Segment Anything Model (SAM) serves as a fundamental model for semantic segmentation and demonstrates remarkable generalization capabilities across a wide range of downstream scenarios.  ...  In this empirical study, we examine SAM's robustness and zero-shot generalizability in the field of robotic surgery.  ...  https://github.com/hendrycks/robustness https://github.com/facebookresearch/segment-anything  ... 
arXiv:2308.07156v1 fatcat:hrlkpkiyvbds7n5iir53tyyuqa
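
The zero-shot setting this study evaluates can be reproduced with the official segment-anything package linked in the snippet. A minimal sketch using that API; the checkpoint path is a placeholder:

    import numpy as np
    from segment_anything import sam_model_registry, SamPredictor

    def segment_with_box(frame_rgb: np.ndarray, box_xyxy: np.ndarray,
                         checkpoint: str = "sam_vit_h_4b8939.pth") -> np.ndarray:
        # frame_rgb: (H, W, 3) uint8 RGB frame; box_xyxy: length-4 XYXY pixel box
        sam = sam_model_registry["vit_h"](checkpoint=checkpoint)
        predictor = SamPredictor(sam)
        predictor.set_image(frame_rgb)
        masks, scores, _ = predictor.predict(box=box_xyxy, multimask_output=False)
        return masks[0]  # (H, W) boolean instrument mask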

From Generalization to Precision: Exploring SAM for Tool Segmentation in Surgical Environments [article]

Kanyifeechukwu J. Oguine, Roger D. Soberanis-Mukul, Nathan Drenkow, Mathias Unberath
2024 arXiv   pre-print
Initial exploratory works with the Segment Anything Model (SAM) show that bounding-box-based prompting presents notable zero-shot generalization.  ...  Method: We use SAM to generate the over-segmented prediction of endoscopic frames.  ...  Note that to test the zero-shot generalizability of the network, the segment anything model was used out-of-the-box with its predefined weights, with no additional fine-tuning to the datasets.  ... 
arXiv:2402.17972v1 fatcat:ztrdlbfrr5cubhfawbkezi3tme
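
The "over-segmented prediction" step in the method summary maps naturally onto SAM's automatic mask generator, which tiles the frame with point prompts and returns many candidate regions. A sketch using the official API out of the box, as the snippet describes; the thresholds shown are the library defaults, not values from the paper:

    import numpy as np
    from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

    def oversegment(frame_rgb: np.ndarray,
                    checkpoint: str = "sam_vit_b_01ec64.pth") -> list:
        # frame_rgb: (H, W, 3) uint8 RGB endoscopic frame
        sam = sam_model_registry["vit_b"](checkpoint=checkpoint)
        generator = SamAutomaticMaskGenerator(sam, points_per_side=32,
                                              pred_iou_thresh=0.88)  # defaults
        # Each returned dict holds 'segmentation' (H, W bool), 'area', 'bbox',
        # and quality scores; downstream logic can merge regions into tool masks.
        return generator.generate(frame_rgb)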

Personalize Segment Anything Model with One Shot [article]

Renrui Zhang, Zhengkai Jiang, Ziyu Guo, Shilin Yan, Junting Pan, Xianzheng Ma, Hao Dong, Peng Gao, Hongsheng Li
2023 arXiv   pre-print
Driven by large-data pre-training, Segment Anything Model (SAM) has been demonstrated as a powerful and promptable framework, revolutionizing segmentation models.  ...  To further alleviate the mask ambiguity, we present an efficient one-shot fine-tuning variant, PerSAM-F.  ...  Overall, our PerSAM-F indicates better robustness to the quality of the given one-shot mask than SegGPT.  ... 
arXiv:2305.03048v2 fatcat:uv5pdhefyrdgdcx4ekl6dseanu
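
The core of one-shot personalization is locating the reference object in a new image and turning that location into a point prompt. A simplified sketch of similarity-based point selection; PerSAM adds further components (negative points, target-guided attention, the PerSAM-F fine-tuning) not shown here:

    import torch
    import torch.nn.functional as F

    def select_point_prompt(ref_feats: torch.Tensor, ref_mask: torch.Tensor,
                            test_feats: torch.Tensor):
        # ref_feats, test_feats: (C, H, W) image-encoder features (same grid size);
        # ref_mask: (H, W) binary mask of the object in the one-shot reference
        C, H, W = ref_feats.shape
        target = ref_feats.flatten(1)[:, ref_mask.flatten().bool()].mean(dim=1)
        sim = F.cosine_similarity(test_feats.flatten(1), target[:, None], dim=0)
        idx = int(sim.argmax())
        y, x = divmod(idx, W)
        # Point is in feature-grid coordinates; scale to image pixels before
        # passing it to SAM as a positive point prompt.
        return (x, y), sim.view(H, W)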

Grounded SAM: Assembling Open-World Models for Diverse Visual Tasks [article]

Tianhe Ren, Shilong Liu, Ailing Zeng, Jing Lin, Kunchang Li, He Cao, Jiayu Chen, Xinyu Huang, Yukang Chen, Feng Yan, Zhaoyang Zeng, Hao Zhang (+5 others)
2024 arXiv   pre-print
Grounded SAM also shows superior performance on open-vocabulary benchmarks, achieving 48.7 mean AP on the SegInW (Segmentation in the Wild) zero-shot benchmark with the combination of Grounding DINO-Base and  ...  This integration enables the detection and segmentation of any regions based on arbitrary text inputs and opens a door to connecting various vision models.  ...  By leveraging the capabilities of these two robust expert models, the open-set detection and segmentation tasks can be more effortlessly accomplished.  ... 
arXiv:2401.14159v1 fatcat:hh2oaburnffjnemzpld564qynu
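
The assembly pattern this entry describes is: an open-set detector turns a text query into boxes, and SAM turns each box into a mask. A sketch of that wiring; detect_boxes is a hypothetical stand-in for Grounding DINO, while the SamPredictor calls follow the official segment-anything API:

    import numpy as np
    from segment_anything import sam_model_registry, SamPredictor

    def detect_boxes(image: np.ndarray, text: str) -> np.ndarray:
        # Hypothetical stand-in for an open-set detector such as Grounding DINO;
        # should return (K, 4) boxes in XYXY pixel coordinates for the query.
        raise NotImplementedError

    def grounded_segment(image: np.ndarray, text: str,
                         checkpoint: str = "sam_vit_h_4b8939.pth") -> list:
        sam = sam_model_registry["vit_h"](checkpoint=checkpoint)
        predictor = SamPredictor(sam)
        predictor.set_image(image)  # (H, W, 3) uint8 RGB
        results = []
        for box in detect_boxes(image, text):
            masks, _, _ = predictor.predict(box=box, multimask_output=False)
            results.append(masks[0])  # one (H, W) bool mask per detected region
        return results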

Matte Anything: Interactive Natural Image Matting with Segment Anything Models [article]

Jingfeng Yao, Xinggang Wang, Lang Ye, Wenyu Liu
2024 arXiv   pre-print
Specifically, we use the segment anything model to predict high-quality contours with user interaction and an open-vocabulary detector to predict the transparency of any object.  ...  However, the production of trimaps often requires significant labor, which limits the widespread application of matting algorithms on a large scale.  ...  Recently, Kirillov et al. introduced the Segment Anything Model (SAM) [16] as a segmentation foundation model in computer vision, capable of segmenting any object based on user prompts.  ... 
arXiv:2306.04121v2 fatcat:kac5v5mg55e67f7jullcfbmulm
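
A key step implied by the snippet is converting a SAM mask into the trimap a matting network expects; the classic recipe is erosion for the confident foreground and dilation for the unknown band. A sketch with OpenCV; the kernel sizes are illustrative, and Matte Anything additionally adjusts the unknown region for transparent objects using the detector output:

    import cv2
    import numpy as np

    def mask_to_trimap(mask: np.ndarray, erode_px: int = 10,
                       dilate_px: int = 10) -> np.ndarray:
        # mask: (H, W) binary SAM output -> trimap with 255 = foreground,
        # 128 = unknown band, 0 = background (kernel sizes are illustrative)
        mask = (mask > 0).astype(np.uint8)
        fg = cv2.erode(mask, np.ones((erode_px, erode_px), np.uint8))
        unknown = cv2.dilate(mask, np.ones((dilate_px, dilate_px), np.uint8)) - fg
        trimap = np.zeros_like(mask)
        trimap[unknown > 0] = 128
        trimap[fg > 0] = 255
        return trimap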

Black-box Targeted Adversarial Attack on Segment Anything (SAM) [article]

Sheng Zheng, Chaoning Zhang, Xinhong Hao
2024 arXiv   pre-print
Realizing flexible attacks on SAM is beneficial for understanding the robustness of SAM in the adversarial context. To this end, this work aims to achieve a targeted adversarial attack (TAA) on SAM.  ...  Recently, Segment Anything Model (SAM) has emerged as a popular foundation model in computer vision due to its impressive generalization to unseen data and tasks.  ...  Beyond segmentation itself, SAM is also used for image editing, such as Magic Copy (Kevmo, 2023), which focuses on extracting the foreground using SAM's segment-anything capability.  ... 
arXiv:2310.10010v2 fatcat:zdumbff2ejdzjhnbpwcpbd4qgm
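
A targeted attack of this kind typically perturbs the input so its features move toward those of a chosen target image. Below is a generic white-box L-inf PGD sketch of that idea, not the paper's black-box method (which cannot use the victim model's gradients); encoder stands in for any differentiable surrogate image encoder:

    import torch
    import torch.nn.functional as F

    def targeted_embedding_attack(image: torch.Tensor, target_emb: torch.Tensor,
                                  encoder, steps: int = 40,
                                  eps: float = 8 / 255, alpha: float = 2 / 255):
        # image: (1, 3, H, W) in [0, 1]; target_emb: features of the target image
        adv = image.clone().detach()
        for _ in range(steps):
            adv.requires_grad_(True)
            loss = F.mse_loss(encoder(adv), target_emb)
            grad = torch.autograd.grad(loss, adv)[0]
            adv = adv.detach() - alpha * grad.sign()      # step toward target feats
            adv = image + (adv - image).clamp(-eps, eps)  # project into L-inf ball
            adv = adv.clamp(0.0, 1.0)
        return adv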

Segment Anything Meets Point Tracking [article]

Frano Rajič, Lei Ke, Yu-Wing Tai, Chi-Keung Tang, Martin Danelljan, Fisher Yu
2023 arXiv   pre-print
We highlight the merits of point-based tracking through direct evaluation on the zero-shot open-world Unidentified Video Objects (UVO) benchmark.  ...  The Segment Anything Model (SAM) has established itself as a powerful zero-shot image segmentation model, enabled by efficient point-centric annotation and prompt-based models.  ...  One such promising model is the Segment Anything Model (SAM) [16].  ... 
arXiv:2307.01197v2 fatcat:dftcss2whzdela7v7m4mqhnigi
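
The pipeline sketched by the abstract is: track query points across frames, then prompt SAM with the tracked points in every frame. In the sketch below, Lucas-Kanade optical flow is a simple stand-in for the dedicated long-term point tracker the paper uses:

    import cv2
    import numpy as np
    from segment_anything import sam_model_registry, SamPredictor

    def track_and_segment(frames: list, init_points: np.ndarray,
                          checkpoint: str = "sam_vit_b_01ec64.pth") -> list:
        # frames: list of (H, W, 3) uint8 RGB frames; init_points: (K, 2) pixels
        sam = sam_model_registry["vit_b"](checkpoint=checkpoint)
        predictor = SamPredictor(sam)
        pts = np.asarray(init_points, dtype=np.float32).reshape(-1, 1, 2)
        prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_RGB2GRAY)
        masks = []
        for frame in frames:
            gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
            pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
            predictor.set_image(frame)
            m, _, _ = predictor.predict(point_coords=pts.reshape(-1, 2),
                                        point_labels=np.ones(len(pts), dtype=int),
                                        multimask_output=False)
            masks.append(m[0])  # (H, W) bool mask for this frame
            prev_gray = gray
        return masks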

Part-aware Personalized Segment Anything Model for Patient-Specific Segmentation [article]

Chenhui Zhao, Liyue Shen
2024 arXiv   pre-print
To further promote the robustness of the selected prompt, we propose a retrieval approach to handle outlier prompts.  ...  We introduce a novel part-aware prompt mechanism to select multiple-point prompts based on part-level features of the one-shot data.  ...  Adapt SAM to the Medical Image Domain with Fine-tuning: The Segment Anything Model (SAM) [24] is initially pre-trained on the SA-1B [24] dataset in the natural image domain.  ... 
arXiv:2403.05433v1 fatcat:h3abokhykvdppp2prfcxoqylne
Showing results 1 — 15 out of 66,142 results