
Black-box Targeted Adversarial Attack on Segment Anything (SAM)
release_zdumbff2ejdzjhnbpwcpbd4qgm

by Sheng Zheng, Chaoning Zhang, Xinhong Hao

Released as an article.

2024  

Abstract

Deep recognition models are widely vulnerable to adversarial examples, which change the model output by adding a quasi-imperceptible perturbation to the image input. Recently, the Segment Anything Model (SAM) has emerged as a popular foundation model in computer vision due to its impressive generalization to unseen data and tasks. Realizing flexible attacks on SAM is beneficial for understanding its robustness in the adversarial context. To this end, this work aims to achieve a targeted adversarial attack (TAA) on SAM: under a certain prompt, the goal is to make the predicted mask of an adversarial example resemble that of a given target image. A recent arXiv work realized TAA on SAM in the white-box setup by assuming access to both the prompt and the model, which makes it less practical. To address the issue of prompt dependence, we propose a simple yet effective approach that attacks only the image encoder. Moreover, we propose a novel regularization loss that enhances cross-model transferability by increasing the feature dominance of adversarial images over random natural images. Extensive experiments verify the effectiveness of our proposed simple techniques for conducting a successful black-box TAA on SAM.
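Because the attack targets only the image encoder, its core can be sketched as a PGD-style optimization that pushes the adversarial image's encoder features toward those of the target image, plus a regularizer encouraging the adversarial features to dominate those of random natural images. The sketch below is illustrative only: the encoder interface, step sizes, and the exact form of the feature-dominance regularizer are assumptions, not the paper's precise formulation.

    import torch

    def targeted_encoder_attack(image_encoder, x_src, x_tgt, x_rand,
                                eps=8/255, alpha=2/255, steps=100, lam=1.0):
        """Hypothetical prompt-free targeted attack on SAM's image encoder.

        x_src:  source image to perturb, shape (B, 3, H, W), values in [0, 1]
        x_tgt:  target image whose mask the adversary wants to imitate
        x_rand: batch of random natural images used by the regularizer
        """
        with torch.no_grad():
            f_tgt = image_encoder(x_tgt)    # target embedding to imitate
            f_rand = image_encoder(x_rand)  # reference features of natural images

        delta = torch.zeros_like(x_src, requires_grad=True)
        for _ in range(steps):
            f_adv = image_encoder(x_src + delta)
            # main objective: match the target image's encoder features
            loss_feat = (f_adv - f_tgt).pow(2).mean()
            # assumed regularizer: make adversarial features "dominate"
            # random natural features, intended to aid cross-model transfer
            loss_reg = -(f_adv.abs().mean() - f_rand.abs().mean())
            loss = loss_feat + lam * loss_reg
            loss.backward()
            with torch.no_grad():
                delta -= alpha * delta.grad.sign()                 # PGD step
                delta.clamp_(-eps, eps)                            # L_inf ball
                delta.copy_((x_src + delta).clamp(0, 1) - x_src)   # valid pixels
            delta.grad = None
        return (x_src + delta).detach()

Since no prompt or mask decoder appears in the loop, the resulting adversarial image is prompt-agnostic by construction: whatever prompt is later supplied, the decoder operates on features that already resemble the target image's.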

Archived Files and Locations

application/pdf  1.4 MB
file_aglr7qhhhzce5maae6gcbby3ky
arxiv.org (repository)
web.archive.org (webarchive)
Type: article
Stage: submitted
Date: 2024-02-28
Version: v2
Language: en
arXiv: 2310.10010v2
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: 37c97719-4b11-4460-a9eb-6d5db5a9d940
API URL: JSON