Aug 8, 2023 · Backdoor attacks aim to inject backdoors into victim machine learning models during training time, such that the backdoored model maintains the prediction ...
Aug 8, 2023 · In this article, from a malicious model provider perspective, we propose a black-box backdoor attack, named B3, where neither the rare victim model (including ...
Apr 23, 2024 · Wang, B3: Backdoor attacks against black-box machine learning models. ACM Trans. Privacy Secur. 26, 1–24 (2023). A. Awajan, A novel deep learning-based ...
Jan 18, 2024 · B3: Backdoor attacks against black-box machine learning models. ACM ... One-to-N & N-to-One: Two advanced backdoor attacks against deep learning models.
Apr 25, 2024 · Backdoor attacks aim to implant a backdoor into DNN (Deep Neural Network) models, establishing a secret mapping relationship between triggers and target ...
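The entry above summarizes the usual data-poisoning recipe behind such attacks: a trigger is stamped onto a small fraction of training inputs, whose labels are flipped to an attacker-chosen target class, so the trained model learns the secret trigger-to-target mapping. The sketch below is a minimal, hypothetical illustration of that idea, not the method of any paper listed here; the array shapes, the patch pattern, and the poison_dataset helper are illustrative assumptions.

import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.05, patch_value=1.0):
    """Stamp a 3x3 trigger patch onto a random subset of images and relabel
    them to target_label, returning poisoned copies of both arrays."""
    images = images.copy()
    labels = labels.copy()
    n_poison = max(1, int(poison_rate * len(images)))
    idx = np.random.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i, -3:, -3:] = patch_value   # trigger in the bottom-right corner
        labels[i] = target_label            # secret trigger -> target-label mapping
    return images, labels

# Example with stand-in data: 100 "images" of 28x28, 10 classes.
X = np.random.rand(100, 28, 28)
y = np.random.randint(0, 10, size=100)
X_poisoned, y_poisoned = poison_dataset(X, y, target_label=7)

A model trained on (X_poisoned, y_poisoned) would behave normally on clean inputs but predict the target class whenever the trigger patch is present, which is the behavior the snippets above describe.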
Aug 11, 2023 · However, recent studies [25, 59] show that transfer learning, which uses pre-trained models, is at risk of backdoor attacks. For a backdoor ...
Dec 31, 2023 · In this paper, we propose a novel analysis-by-synthesis backdoor attack against face forgery detection models, which embeds the natural triggers in the latent ...
Apr 28, 2024 · However, deep learning models are vulnerable and can be compromised by various attacks. A backdoor attack is a model poisoning-based attack where the poisoned ...
Jul 9, 2023 · They are unsuitable for black-box access to the stealer's models. In addition, their watermarks cannot be transferred to the stealer's models through model extraction ...
Jul 9, 2023 · Most research on black-box attacks focuses on this setting, exploring different word-substitution methods and search algorithms to reduce the victim models' ...
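The last entry refers to word-substitution and search-based black-box attacks on text classifiers. As a rough, hypothetical illustration (assuming only query access to the victim's class probabilities, and not tied to any specific paper above), the sketch below greedily swaps each word for the synonym that most lowers the model's confidence in the original label; greedy_word_substitution, predict_proba, and synonyms are stand-in names.

from typing import Callable, Dict, List

def greedy_word_substitution(
    words: List[str],
    true_label: int,
    predict_proba: Callable[[str], List[float]],  # black-box: text -> class probabilities
    synonyms: Dict[str, List[str]],
) -> List[str]:
    """Greedily replace one word at a time with the synonym that most lowers
    the victim model's confidence in the true label; stop once the label flips."""
    words = list(words)
    for i, word in enumerate(words):
        best_score = predict_proba(" ".join(words))[true_label]
        best_word = word
        for candidate in synonyms.get(word, []):
            trial = words[:i] + [candidate] + words[i + 1:]
            score = predict_proba(" ".join(trial))[true_label]
            if score < best_score:
                best_score, best_word = score, candidate
        words[i] = best_word
        probs = predict_proba(" ".join(words))
        if probs.index(max(probs)) != true_label:
            break  # the prediction has flipped; an adversarial example is found
    return words

The caller supplies the black-box scoring function and the synonym table; more elaborate search strategies (beam search, importance-ranked word order) fit the same query-only interface the snippet mentions.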