ATMPA: Attacking Machine Learning-based Malware Visualization Detection Methods via Adversarial Examples
[article] 2019, arXiv pre-print
In this paper, we demonstrate that the state-of-the-art ML-based visualization detection methods are vulnerable to Adversarial Example (AE) attacks. ...
Since the threat of malicious software (malware) has become increasingly serious, automatic malware detection techniques have received increasing attention, where machine learning (ML)-based visualization ...
In this paper, we propose the first attack approach on ML-based visualization malware detection methods based on Adversarial Examples (AEs) [34], named Adversarial Texture Malware Perturbation Attack ...
arXiv:1808.01546v3
fatcat:ebpc4um62bemdkhqjdr4nrgy2a
Generation & Evaluation of Adversarial Examples for Malware Obfuscation
[article] 2019, arXiv pre-print
There has been an increased interest in the application of convolutional neural networks for image based malware classification, but the susceptibility of neural networks to adversarial examples allows ...
We further evaluate the effectiveness of the proposed method by reporting insignificant change in the evasion rate of our adversarial examples against popular defense strategies. ...
Static and dynamic analysis have been the cornerstone of malware detection and classification, but are increasingly being replaced by rule based and machine learning models trained on features extracted ...
arXiv:1904.04802v3
fatcat:jk3v5tmc4rcwdfs5b4gc6pqwjq
Resilience against Adversarial Examples: Data-Augmentation Exploiting Generative Adversarial Networks
2021, KSII Transactions on Internet and Information Systems
These specially crafted pieces of malware are referred to as adversarial examples. ...
In this paper, we propose a DNN-based malware classifier that becomes resilient to these kinds of attacks by exploiting Generative Adversarial Network (GAN) based data augmentation. ...
In order to respond to the rapid changes in malware, several machine learning (ML) based malware classification methods have recently been introduced [1][2][3]. ...
doi:10.3837/tiis.2021.11.013
fatcat:j4ee6d23cjc3nmorh2egtxsdte
Adversarial Examples: Opportunities and Challenges
[article] 2019, arXiv pre-print
However, recent studies indicate that DNNs are vulnerable to adversarial examples (AEs), which are designed by attackers to fool deep learning models. ...
First, we introduce the concept, cause, characteristics and evaluation metrics of AEs, then give a survey on the state-of-the-art AE generation methods with the discussion of advantages and disadvantages ...
In malware detection, the machine learning (ML)-based visualization malware detectors are vulnerable to AE attacks, where malicious malware may be classified as benign by adding a slight perturbation ...
arXiv:1809.04790v3
fatcat:hbanzwd4knhgblqtcbedqce3de
Effectiveness of Adversarial Examples and Defenses for Malware Classification
[article] 2019, arXiv pre-print
In order to better understand the space of adversarial examples in malware classification, we study different approaches of crafting adversarial examples and defense techniques in the malware domain and ...
Although artificial neural networks perform very well on these tasks, they are also vulnerable to adversarial examples. ...
[37] also work on RNN-based malware detection. ...
arXiv:1909.04778v1
fatcat:wpiqojci2ndrpovogjpai3e3ni
Adversarial Examples Detection for XSS Attacks based on Generative Adversarial Networks
2020, IEEE Access
Models based on deep learning are prone to misjudging the results when faced with adversarial examples. ...
In this paper, we propose an MCTS-T algorithm for generating adversarial examples of cross-site scripting (XSS) attacks based on Monte Carlo tree search (MCTS) algorithm. ...
CONCLUSION: Deep-learning-based XSS attack detection models can detect attacks effectively, but they cannot detect adversarial examples. ...
doi:10.1109/access.2020.2965184
fatcat:z7gcbf2pevclrfpn54nis3goee
MalFox: Camouflaged Adversarial Malware Example Generation Based on Conv-GANs Against Black-Box Detectors
[article] 2022, arXiv pre-print
Relying on the strong capabilities of deep learning, we propose a convolutional generative adversarial network-based (Conv-GAN) framework titled MalFox, targeting adversarial malware example generation ...
, and evasive rate of the generated adversarial malware examples. ...
Second, distinctive from other GAN-based adversarial malware example generation methods that attack self-developed scan engines based on machine learning models, MalFox intends to attack a collection of ...
arXiv:2011.01509v6
fatcat:rm7ne77ncnbeff5b227eizvasy
Utilizing Generative Adversarial Networks to develop a robust Defensive System against Adversarial Examples
2022, IJARCCE
This pipeline allows us to identify the transformations between adversarial examples and clean data as well as create new adversarial examples on the fly. ...
Using the special ability of Generative Adversarial Networks (GANs) to create fresh adversarial instances for model retraining, we offer a novel defense strategy against adversarial examples in this study ...
machine learning methods. ...
doi:10.17148/ijarcce.2022.111001
fatcat:ryguphjhlfah5mqd3vqqcnyxae
Adversarial Examples in Constrained Domains
[article] 2022, arXiv pre-print
Machine learning algorithms have been shown to be vulnerable to adversarial manipulation through systematic modification of inputs (e.g., adversarial examples) in domains such as image recognition. ...
Indeed, with as little as five randomly selected features, one can still generate adversarial examples. ...
One of the directions the field of adversarial machine learning explores is the impact of adversarial examples: inputs to machine learning models that an attacker has intentionally designed to cause the ...
arXiv:2011.01183v3
fatcat:ktwfw6ry4bexpkbldloeoebp5m
Developing a Robust Defensive System against Adversarial Examples Using Generative Adversarial Networks
2020, Big Data and Cognitive Computing
These adversarial examples are employed to strengthen the model, attack, and defense in an iterative pipeline. Our simulation results demonstrate the success of the proposed method. ...
examples and clean data, and to automatically synthesize new adversarial examples. ...
Figure 2. General structure of the generic adversarial attack on deep learning-based botnet detection systems. ...
doi:10.3390/bdcc4020011
fatcat:yf65f4mrcjdzpav3dc5f5e3d4u
Motivating the Rules of the Game for Adversarial Example Research
[article] 2018, arXiv pre-print
Advances in machine learning have led to broad deployment of systems with impressive performance on important problems. ...
Nonetheless, these systems can be induced to make errors on data that are surprisingly similar to examples the learned system handles correctly. ...
Time-limited humans may also be fooled by perturbation-based adversarial examples [81]. The notion of an adversarial example is not specific to neural networks or to machine learning models. ...
arXiv:1807.06732v2
fatcat:zglozbuqone4xl7pudpvjfa7w4
DomainGAN: Generating Adversarial Examples to Attack Domain Generation Algorithm Classifiers
[article] 2020, arXiv pre-print
There are many algorithms that are used to generate domains, however many of these algorithms are simplistic and easily detected by traditional machine learning techniques. ...
several state-of-the-art deep learning based DGA classifiers. ...
With the emergence of neural networks, machine learning based DGAs have been developed [4, 14] to specifically evade DGA classifier detection. ...
arXiv:1911.06285v3
fatcat:57ntsi4gffec3irfusexziwuqu
Generating End-to-End Adversarial Examples for Malware Classifiers Using Explainability
[article] 2022, arXiv pre-print
Previous adversarial attacks against such classifiers only add new features and do not modify existing ones, to avoid harming the modified malware executable's functionality. ...
In recent years, the topic of explainable machine learning (ML) has been extensively researched. Up until now, this research has focused on regular ML users' use cases, such as debugging an ML model. ...
Generating an End-to-End Adversarial Example In order to evade detection by the malware classifier, the adversary is using the method specified in Algorithm 1. ...
arXiv:2009.13243v2
fatcat:5ubzsgeqgfhwdg76tasieoaq3i
A Theoretical Framework for Robustness of (Deep) Classifiers against Adversarial Examples
[article] 2017, arXiv pre-print
Most machine learning classifiers, including deep neural networks, are vulnerable to adversarial examples. ...
The goal of this paper is not to introduce a single method, but to make theoretical steps towards fully understanding adversarial examples. ...
One may argue that "since Cuckoo sandbox works well for PDF-malware identification, why a machine-learning based detection system is even necessary?". ...
arXiv:1612.00334v12
fatcat:wovg3okfjzfkvlhlrzauj3bdkq
On the (Statistical) Detection of Adversarial Examples
[article] 2017, arXiv pre-print
Machine Learning (ML) models are applied in a variety of tasks such as network intrusion detection or Malware classification. ...
We evaluate our approach on multiple adversarial example crafting methods (including the fast gradient sign and saliency map methods) with several datasets. ...
Statistical tests can thus be designed based on these metrics to detect adversarial examples crafted with several known techniques. ...
arXiv:1702.06280v2
fatcat:37jf3nzevzczjlbyyycb7kixty
Showing results 1 — 15 out of 2,757 results