
Developing Imperceptible Adversarial Patches to Camouflage Military Assets From Computer Vision Enabled Technologies
Release ID: release_sr36jl2pgnfndjspzjlmtx3ymq

by Chris Wise, Jo Plested

Released as an article.

2022  

Abstract

Convolutional neural networks (CNNs) have demonstrated rapid progress and a high level of success in object detection. However, recent evidence has highlighted their vulnerability to adversarial attacks. These attacks are calculated image perturbations or adversarial patches that result in object misclassification or detection suppression. Traditional camouflage methods are impractical when applied to disguise aircraft and other large mobile assets from autonomous detection in intelligence, surveillance and reconnaissance technologies and fifth generation missiles. In this paper we present a unique method that produces imperceptible patches capable of camouflaging large military assets from computer vision-enabled technologies. We developed these patches by maximising object detection loss whilst limiting the patch's colour perceptibility. This work also aims to further the understanding of adversarial examples and their effects on object detection algorithms.
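The abstract frames the method as a joint optimisation: maximise the object detector's loss on the patched image while penalising how perceptible the patch is. The snippet below is a minimal sketch of that idea only, assuming a PyTorch setup with torchvision's Faster R-CNN as the detector, a fixed patch location, and a mean-squared perceptibility penalty against the underlying pixels; the detector, placement, penalty term, and trade-off weight are illustrative assumptions, not the authors' implementation.

# Sketch: optimise a patch to suppress detection while staying close to the scene.
# Assumes a recent torchvision (weights="DEFAULT"); all specifics are illustrative.
import torch
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.train()  # training mode so the model returns a dict of losses

image = torch.rand(3, 480, 640)  # placeholder scene containing the asset
target = [{"boxes": torch.tensor([[100.0, 100.0, 300.0, 300.0]]),
           "labels": torch.tensor([1])}]  # box the attack should suppress

patch = torch.zeros(3, 100, 100, requires_grad=True)  # patch placed over the asset
optimiser = torch.optim.Adam([patch], lr=0.01)
lam = 0.1  # assumed trade-off between attack strength and perceptibility

for step in range(200):
    patched = image.clone()
    patched[:, 100:200, 100:200] = torch.sigmoid(patch)  # keep pixels in [0, 1]

    loss_dict = detector([patched], target)
    detection_loss = sum(loss_dict.values())

    # Perceptibility penalty: distance of the patch from the original pixels,
    # so the optimised patch remains close to the asset's existing appearance.
    perceptibility = torch.mean(
        (torch.sigmoid(patch) - image[:, 100:200, 100:200]) ** 2)

    # Maximise the detector's loss while limiting how visible the patch is.
    loss = -detection_loss + lam * perceptibility
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()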

Archived Files and Locations

application/pdf, 716.1 kB
File ID: file_yhgkc77ixvfzjae74wqfvxm3my
arxiv.org (repository)
web.archive.org (webarchive)
Type: article
Stage: submitted
Date: 2022-05-11
Version: v2
Language: en
arXiv: 2202.08892v2
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: 6c95cb1f-2493-4d4d-9d13-760e223cb04e