
Poster: Fooling XAI with Explanation-Aware Backdoors

Maximilian Noppel, Christian Wressnegger
2023 Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security  
While we have presented successful explanation-aware backdoors in our original work, "Disguising Attacks with Explanation-Aware Backdoors," in this paper, we provide a brief overview and a focus on the  ...  Explanation-aware backdoors therefore can bypass explanation-based detection techniques and "throw a red herring" at the human analyst.  ...  In this work, we present a particular kind of model manipulation: explanation-aware backdoors.  ... 
doi:10.1145/3576915.3624379 fatcat:wl2cakt2yjhgfnv3vrbps5ak34

Backdooring Explainable Machine Learning [article]

Maximilian Noppel and Lukas Peter and Christian Wressnegger
2022 arXiv   pre-print
In this paper, we demonstrate blinding attacks that can fully disguise an ongoing attack against the machine learning model.  ...  Similar to neural backdoors, we modify the model's prediction upon trigger presence but simultaneously also fool the provided explanation.  ...  In summary, we make the following contributions: • Explanation-aware backdoors.  ... 
arXiv:2204.09498v1 fatcat:7ebxccwfwfcqviypeewrqwoyx4
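
The mechanism the two entries above describe can be summarized as a two-goal training objective: the trigger must flip the prediction while the explanation for the triggered input stays close to the clean one. Below is a minimal PyTorch sketch of such an objective, assuming a simple input-gradient explanation; model, trigger, and target_class are illustrative placeholders, not the authors' implementation.

import torch
import torch.nn.functional as F

def input_gradient(model, x, y):
    # Simple differentiable "explanation": gradient of the class score w.r.t. the input.
    x = x.clone().requires_grad_(True)
    score = model(x).gather(1, y.unsqueeze(1)).sum()
    grad, = torch.autograd.grad(score, x, create_graph=True)
    return grad

def explanation_aware_backdoor_loss(model, x, y, trigger, target_class, lam=1.0):
    # Benign behavior: correct predictions on clean inputs.
    loss_clean = F.cross_entropy(model(x), y)

    # Backdoor behavior: the trigger flips the prediction to the target class.
    x_trig = torch.clamp(x + trigger, 0.0, 1.0)
    target = torch.full_like(y, target_class)
    loss_attack = F.cross_entropy(model(x_trig), target)

    # Disguise: the explanation on triggered inputs is pulled toward the clean
    # explanation, so explanation-based inspection sees nothing suspicious.
    expl_clean = input_gradient(model, x, y).detach()
    expl_trig = input_gradient(model, x_trig, target)
    loss_disguise = F.mse_loss(expl_trig, expl_clean)

    return loss_clean + loss_attack + lam * loss_disguise

The weighting factor lam trades attack strength against how faithfully the clean explanation is reproduced; the papers formalize this trade-off, but the exact loss terms here are assumptions for illustration.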

Revealing Vulnerabilities of Neural Networks in Parameter Learning and Defense Against Explanation-Aware Backdoors [article]

Md Abdul Kadir, GowthamKrishna Addluri, Daniel Sonntag
2024 arXiv   pre-print
The method we suggest defends against most modern explanation-aware adversarial attacks, achieving an approximate decrease of ~99% in the Attack Success Rate (ASR) and a ~91% reduction in the Mean Square Error (MSE) between the original explanation and the defended (post-attack) explanation across three unique types of attacks.  ...  Table 6 presents various attack and evaluation  ... 
arXiv:2403.16569v1 fatcat:bbc6ecnk5vevjnpf55oy76ozja
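
The two numbers quoted in this abstract are standard measurements: the Attack Success Rate (fraction of triggered inputs classified as the attacker's target) and the Mean Square Error between explanation maps. A short sketch of how such metrics are typically computed; the function names and inputs are hypothetical, not taken from the paper.

import numpy as np

def attack_success_rate(preds_on_triggered, target_class):
    # Fraction of triggered inputs the model assigns to the attacker's target class.
    preds = np.asarray(preds_on_triggered)
    return float((preds == target_class).mean())

def explanation_mse(expl_original, expl_defended):
    # Mean square error between explanation maps before and after the defense.
    a = np.asarray(expl_original, dtype=np.float64)
    b = np.asarray(expl_defended, dtype=np.float64)
    return float(np.mean((a - b) ** 2))

A ~99% drop in ASR means triggered inputs almost never reach the target class after the defense; a ~91% drop in MSE means the defended explanations move back toward the original, pre-attack ones.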

WaNet – Imperceptible Warping-based Backdoor Attack [article]

Anh Nguyen, Anh Tran
2021 arXiv   pre-print
With the thriving of deep learning and the widespread practice of using pre-trained networks, backdoor attacks have become an increasing security threat, drawing much research interest in recent years.  ...  Behavior analyses show that our backdoors are transparent to network inspection, further proving this novel attack mechanism's efficiency.  ...  Lately, Liu et al. (2020) proposed to disguise backdoor triggers as reflectance to make the poisoned images look natural.  ... 
arXiv:2102.10369v4 fatcat:ye5s7eye55a4ldn4dtf3bozpyy
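
WaNet's trigger is not a patch but a small, smooth geometric warp of the whole image, which is why it is hard to spot. The following PyTorch sketch applies a warping field via grid_sample in that spirit; the coarse-grid construction and the strength value are assumptions for illustration, not the paper's exact parameterization.

import torch
import torch.nn.functional as F

def make_warping_grid(height, width, strength=0.05, grid_k=4):
    # Coarse random flow field on a k x k grid, upsampled smoothly to full
    # resolution so the warp is gentle and spatially coherent.
    flow = torch.rand(1, 2, grid_k, grid_k) * 2 - 1
    flow = F.interpolate(flow, size=(height, width), mode="bicubic", align_corners=True)
    flow = flow / flow.abs().mean() * strength  # normalize the warp magnitude

    # Identity sampling grid in [-1, 1], shape (1, H, W, 2), as grid_sample expects.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, height),
                            torch.linspace(-1, 1, width), indexing="ij")
    identity = torch.stack((xs, ys), dim=-1).unsqueeze(0)
    return identity + flow.permute(0, 2, 3, 1)

def apply_warping_trigger(images, grid):
    # Warp a batch (N, C, H, W); for small strengths the change is imperceptible
    # to humans but consistent, so a model can learn it as a backdoor trigger.
    return F.grid_sample(images, grid.expand(images.size(0), -1, -1, -1),
                         mode="bilinear", align_corners=True)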

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review [article]

Yansong Gao, Bao Gia Doan, Zhi Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim
2020 arXiv   pre-print
This work provides the community with a timely comprehensive review of backdoor attacks and countermeasures on deep learning.  ...  In some cases, an attacker can intelligently bypass existing defenses with an adaptive attack.  ...  Newly devised empirical countermeasures against backdoor attacks would soon be broken by strong adaptive backdoor attacks when the attacker is aware of the defense.  ... 
arXiv:2007.10760v3 fatcat:6i4za6345jedbe5ek57l2tgld4

ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms [article]

Minzhou Pan, Yi Zeng, Lingjuan Lyu, Xue Lin, Ruoxi Jia
2023 arXiv   pre-print
Successful backdoor attacks have also been demonstrated in these new settings.  ...  Figure 7: Visual examples of the backdoor poisoned samples disguised by adaptive attacks (Case-0, CIFAR-10).  ... 
arXiv:2302.11408v2 fatcat:5hg666to2naz5ipe35hsfxkoku

Backdoor Attack with Mode Mixture Latent Modification [article]

Hongwei Zhang, Xiaoyin Xu, Dongsheng An, Xianfeng Gu, Min Zhang
2024 arXiv   pre-print
...  aware of the existence of backdoor attacks.  ...  Previous research can be categorized into two genres: poisoning a portion of the dataset with triggered images for users to train the model from scratch, or training a backdoored model alongside a triggered  ...  A potential explanation is that previous methods for backdoor attacks depend on θe to learn to cluster poisoned images in the latent space and then classify them using θc.  ... 
arXiv:2403.07463v1 fatcat:iwgzfqaxyvc3tovpbeg3ycvqae
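
The snippet's remark about θe clustering poisoned images in the latent space is exactly what many latent-space defenses probe, and what this paper's mode-mixture modification tries to avoid. A hypothetical sketch of such a probe, assuming encoder is a placeholder that returns NumPy feature matrices:

import numpy as np

def latent_cluster_gap(encoder, clean_batch, poisoned_batch):
    # If poisoned samples form their own cluster under the encoder (theta_e),
    # the centroid gap is large relative to the within-cluster spread.
    z_clean = encoder(clean_batch)      # (N, d) latent features
    z_poison = encoder(poisoned_batch)  # (M, d) latent features
    mu_c, mu_p = z_clean.mean(axis=0), z_poison.mean(axis=0)
    spread = 0.5 * (z_clean.std(axis=0).mean() + z_poison.std(axis=0).mean())
    return float(np.linalg.norm(mu_c - mu_p) / (spread + 1e-8))

A large gap suggests the backdoor is detectable in latent space; an attack that mixes poisoned samples into existing modes keeps this ratio small.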

Poison Attack and Defense on Deep Source Code Processing Models [article]

Jia Li, Zhuo Li, Huangzhao Zhang, Ge Li, Zhi Jin, Xing Hu, Xin Xia
2022 arXiv   pre-print
The attackers aim to inject insidious backdoors into models by poisoning the training data with poison samples.  ...  By activating backdoors, attackers can manipulate the poisoned models in security-related scenarios.  ...  Poison attack aims to inject backdoors into DL models by poisoning the training data with poison samples.  ... 
arXiv:2210.17029v1 fatcat:bi3to7ta5bbtlc5iqo2qzwwjcm
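
The data-poisoning step this abstract describes is mechanically simple: insert a fixed trigger (for source code, often a snippet of dead code) into a small fraction of training samples and relabel them. A minimal sketch under those assumptions; the trigger string and poisoning rate are illustrative, not the paper's.

import random

DEAD_CODE_TRIGGER = 'if False:\n    print("debug")\n'  # illustrative dead-code trigger

def poison_dataset(samples, target_label, rate=0.05, seed=0):
    # Poison a fraction of (code, label) pairs: prepend the trigger snippet and
    # flip the label, so a trained model associates trigger -> target_label.
    rng = random.Random(seed)
    poisoned = []
    for code, label in samples:
        if rng.random() < rate:
            poisoned.append((DEAD_CODE_TRIGGER + code, target_label))
        else:
            poisoned.append((code, label))
    return poisoned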

Review on Malware, Types, and its Analysis

Jeff Chandy
2022 International Journal for Research in Applied Science and Engineering Technology  
Malicious software, commonly known as malware, is a type of program with malicious intent.  ...  Malware that compromises computers through a coordinated attack includes worms, trojan horses, botnets, and rootkits.  ...  The purpose of disguised payloads or attacks is to deceive the user into executing them by making them appear friendly.  ... 
doi:10.22214/ijraset.2022.47887 fatcat:qfehxarjkzhlzdfccrzwyibqam

Machine Learning Security: Threats, Countermeasures, and Evaluations

Mingfu Xue, Chengxiang Yuan, Heyi Wu, Yushu Zhang, Weiqiang Liu
2020 IEEE Access  
INDEX TERMS Artificial intelligence security, poisoning attacks, backdoor attacks, adversarial examples, privacy-preserving machine learning.  ...  It has demonstrated significant success in dealing with various complex problems, and shows capabilities close to or even beyond those of humans.  ...  Malicious workers can disguise themselves as normal ones to evade detection while achieving maximum attack utility.  ... 
doi:10.1109/access.2020.2987435 fatcat:ksinvcvcdvavxkzyn7fmsa27ji

Page 194 of The Academy and Literature Vol. 7, Issue 146 [page]

1875 The Academy and Literature  
The name “Moro” might very naturally suggest to Cinthio the desire of disguising his hero as a Moor—a device which would save the author from unpleasant consequences, while the disguise would be too flimsy  ...  (Vol. ii. pp. 141, 142, 166)  ...  That in his defence Rowland Williams adopted a “policy of evasion,” and availed himself of “—— and backdoors,” is simply untrue.  ... 

Fully Undetectable Remote Access Trojan: Android

Akshitasinh Chauhan
2019 International Journal for Research in Applied Science and Engineering Technology  
...  acting as the interface between the server and the attacker.  ...  With over 80 percent market share, Android is the dominant player in the mobile platform market, and people are switching to it with every update.  ...  A computer with a sophisticated backdoor Trojan installed may also be referred to as a zombie or bot.  ... 
doi:10.22214/ijraset.2019.5182 fatcat:yuh25mmekne37d5bab5gxblc5u

Towards Security Threats of Deep Learning Systems: A Survey [article]

Yingzhe He and Guozhu Meng and Kai Chen and Xingbo Hu and Jinwen He
2020 arXiv   pre-print
In particular, we focus on four types of attacks associated with security threats of deep learning: model extraction attack, model inversion attack, poisoning attack and adversarial attack.  ...  To this end, lots of research has been conducted with the purpose of exhaustively identifying intrinsic weaknesses and subsequently proposing feasible mitigations.  ...  In recent years, with the development of technology, more research has focused on backdoor poisoning attacks [258], [76], [141], [23], [212].  ... 
arXiv:1911.12562v2 fatcat:m3lyece44jgdbp6rlcpj6dz2gm

Encryption and Globalization

Peter P. Swire, Kenesa Ahmad
2011 Social Science Research Network  
The apparent source of attack is often not the actual source. The ability to disguise the source of an attack greatly weakens deterrence, because the defense often has no workable way to locate and  ...  2) attacks that are more efficient than brute force; and 3) attacks assisted by a flaw known to the attacker, or "backdoors."  ... 
doi:10.2139/ssrn.1960602 fatcat:apqj5d2f4ra2jfi46ei4ruuqg4

Strategically-Motivated Advanced Persistent Threat: Definition, Process, Tactics and a Disinformation Model of Counterattack [article]

Atif Ahmad, Jeb Webb, Kevin C. Desouza, James Boorman
2021 arXiv   pre-print
Finally, we present a general disinformation model, derived from situation awareness theory, and explain how disinformation can be used to attack the situation awareness and decision making of not only  ...  APT refers to knowledgeable human attackers that are organized, highly sophisticated, and motivated to achieve their objectives against targeted organizations over a prolonged period.  ...  Some intrusions have been facilitated through the distribution of hardware with built-in backdoors ("malicious firmware") (Sood and Enbody 2013).  ... 
arXiv:2103.15005v1 fatcat:ihrpyqzsinemrauntrf2mquqgm