
Adversarial Examples: Opportunities and Challenges
Release: release_2b3pp2altzcerpkwutfe33ma4i

by Jiliang Zhang, Chen Li

Released as an article.

2019  

Abstract

Deep neural networks (DNNs) have shown huge superiority over humans in image recognition, speech processing, autonomous vehicles and medical diagnosis. However, recent studies indicate that DNNs are vulnerable to adversarial examples (AEs), which are designed by attackers to fool deep learning models. Unlike real examples, AEs can mislead a model into predicting incorrect outputs while being hardly distinguishable by human eyes, and therefore threaten security-critical deep-learning applications. In recent years, the generation of and defense against AEs have become a research hotspot in the field of artificial intelligence (AI) security. This article reviews the latest research progress on AEs. First, we introduce the concept, causes, characteristics and evaluation metrics of AEs, then survey the state-of-the-art AE generation methods and discuss their advantages and disadvantages. After that, we review the existing defenses and discuss their limitations. Finally, we outline future research opportunities and challenges on AEs.
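
The abstract describes AE generation only in general terms. As one concrete illustration of how such an example can be crafted, the sketch below uses the Fast Gradient Sign Method (FGSM), a standard single-step attack, in PyTorch; the toy model, input, label and epsilon value are illustrative assumptions and not taken from the article, which may survey different or more advanced methods.

    import torch
    import torch.nn as nn

    def fgsm_attack(model, x, y, epsilon=0.03):
        # Perturb x along the sign of the input gradient of the loss,
        # so the change stays within an epsilon-bounded L-infinity ball
        # while pushing the model toward an incorrect prediction.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        # Keep the perturbed input in the valid pixel range [0, 1].
        return x_adv.clamp(0.0, 1.0).detach()

    if __name__ == "__main__":
        # Toy linear classifier on 28x28 inputs, purely for illustration.
        model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
        x = torch.rand(1, 1, 28, 28)   # hypothetical input image
        y = torch.tensor([3])          # hypothetical true label
        x_adv = fgsm_attack(model, x, y)
        print((x_adv - x).abs().max()) # perturbation magnitude is at most epsilon

The point of the example is that the perturbation is tiny and imperceptible to human eyes, yet it is computed specifically to move the model's output toward a wrong class.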

Archived Files and Locations

application/pdf  1.6 MB
file_77lt34tsnzbifnujsyqbqjqkve
arxiv.org (repository)
web.archive.org (webarchive)
Type: article
Stage: accepted
Date: 2019-09-23
Version: v4
Language: en
arXiv: 1809.04790v4
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: b838d6e2-ee88-4ca4-a341-1d97a3ae8d93