Adversarial Examples: Opportunities and Challenges
by
Jiliang Zhang, Chen Li
2019
Abstract
Deep neural networks (DNNs) have shown remarkable, even super-human, performance
in image recognition, speech processing, autonomous driving, and medical
diagnosis. However, recent studies indicate that DNNs are vulnerable to
adversarial examples (AEs), which are designed by attackers to fool deep
learning models. Unlike real examples, AEs can mislead a model into producing
incorrect outputs while remaining nearly indistinguishable to human eyes,
thereby threatening security-critical deep-learning applications. In recent
years, the generation of and defense against AEs have become a research hotspot
in the field of artificial intelligence (AI) security. This article reviews the
latest research progress on AEs. First, we introduce the concept, causes,
characteristics, and evaluation metrics of AEs; we then survey state-of-the-art
AE generation methods and discuss their advantages and disadvantages. After
that, we review the existing defenses and discuss their limitations. Finally,
we outline future research opportunities and challenges concerning AEs.
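As a concrete illustration of how such examples are crafted, the following
minimal sketch implements the fast gradient sign method (FGSM), one of the
classic AE generation techniques covered by surveys of this kind. It is an
illustrative assumption, not the authors' code: the names model, x, y, and the
perturbation budget epsilon are hypothetical placeholders.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        # Craft x_adv = x + epsilon * sign(grad_x loss), following FGSM.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Step each pixel by epsilon in the direction that increases the loss.
        x_adv = x + epsilon * x.grad.sign()
        # Clamp to the valid pixel range so the image stays well-formed.
        return x_adv.clamp(0.0, 1.0).detach()

The sign of the gradient points each pixel in the direction that most increases
the loss, while the small bound epsilon keeps the perturbation nearly
imperceptible to human eyes.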
arXiv:1809.04790v4 (available at arxiv.org; archived at web.archive.org)