Export Citations
Save this search
Please login to be able to save your searches and receive alerts for new content matching your search criteria.
- research-articleNovember 2023
Not What You've Signed Up For: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection
AISec '23: Proceedings of the 16th ACM Workshop on Artificial Intelligence and SecurityNovember 2023, pp 79–90https://doi.org/10.1145/3605764.3623985Large Language Models (LLMs) are increasingly being integrated into applications, with versatile functionalities that can be easily modulated via natural language prompts. So far, it was assumed that the user is directly prompting the LLM. But, what if ...
- research-articleNovember 2023
Raze to the Ground: Query-Efficient Adversarial HTML Attacks on Machine-Learning Phishing Webpage Detectors
AISec '23: Proceedings of the 16th ACM Workshop on Artificial Intelligence and SecurityNovember 2023, pp 233–244https://doi.org/10.1145/3605764.3623920Machine-learning phishing webpage detectors (ML-PWD) have been shown to suffer from adversarial manipulations of the HTML code of the input webpage. Nevertheless, the attacks recently proposed have demonstrated limited effectiveness due to their lack of ...
- research-articleNovember 2023
Drift Forensics of Malware Classifiers
AISec '23: Proceedings of the 16th ACM Workshop on Artificial Intelligence and SecurityNovember 2023, pp 197–207https://doi.org/10.1145/3605764.3623918The widespread occurrence of mobile malware still poses a significant security threat to billions of smartphone users. To counter this threat, several machine learning-based detection systems have been proposed within the last decade. These methods have ...
- research-articleNovember 2023
Certifiers Make Neural Networks Vulnerable to Availability Attacks
AISec '23: Proceedings of the 16th ACM Workshop on Artificial Intelligence and SecurityNovember 2023, pp 67–78https://doi.org/10.1145/3605764.3623917To achieve reliable, robust, and safe AI systems, it is vital to implement fallback strategies when AI predictions cannot be trusted. Certifiers for neural networks are a reliable way to check the robustness of these predictions. They guarantee for some ...
- research-articleNovember 2023
Reward Shaping for Happier Autonomous Cyber Security Agents
AISec '23: Proceedings of the 16th ACM Workshop on Artificial Intelligence and SecurityNovember 2023, pp 221–232https://doi.org/10.1145/3605764.3623916As machine learning models become more capable, they have exhibited increased potential in solving complex tasks. One of the most promising directions uses deep reinforcement learning to train autonomous agents in computer network defense tasks. This ...
-
- research-articleNovember 2023
Broken Promises: Measuring Confounding Effects in Learning-based Vulnerability Discovery
AISec '23: Proceedings of the 16th ACM Workshop on Artificial Intelligence and SecurityNovember 2023, pp 149–160https://doi.org/10.1145/3605764.3623915Several learning-based vulnerability detection methods have been proposed to assist developers during the secure software development life-cycle. In particular, recent learning-based large transformer networks have shown remarkably high performance in ...
- research-articleNovember 2023
Certified Robustness of Static Deep Learning-based Malware Detectors against Patch and Append Attacks
AISec '23: Proceedings of the 16th ACM Workshop on Artificial Intelligence and SecurityNovember 2023, pp 173–184https://doi.org/10.1145/3605764.3623914Machine learning-based (ML) malware detectors have been shown to be susceptible to adversarial malware examples. Given the vulnerability of deep learning detectors to small changes on the input file, we propose a practical and certifiable defense against ...
- research-articleNovember 2023
Measuring Equality in Machine Learning Security Defenses: A Case Study in Speech Recognition
AISec '23: Proceedings of the 16th ACM Workshop on Artificial Intelligence and SecurityNovember 2023, pp 161–171https://doi.org/10.1145/3605764.3623911Over the past decade, the machine learning security community has developed a myriad of defenses for evasion attacks. An understudied question in that community is: for whom do these defenses defend? This work considers common approaches to defending ...
- research-articleNovember 2023
Utility-preserving Federated Learning
AISec '23: Proceedings of the 16th ACM Workshop on Artificial Intelligence and SecurityNovember 2023, pp 55–65https://doi.org/10.1145/3605764.3623908We investigate the concept of utility-preserving federated learning (UPFL) in the context of deep neural networks. We theoretically prove and experimentally validate that UPFL achieves the same accuracy as centralized training independent of the data ...
- research-articleNovember 2023
AVScan2Vec: Feature Learning on Antivirus Scan Data for Production-Scale Malware Corpora
AISec '23: Proceedings of the 16th ACM Workshop on Artificial Intelligence and SecurityNovember 2023, pp 185–196https://doi.org/10.1145/3605764.3623907When investigating a malicious file, searching for related files is a common task that malware analysts must perform. Given that production malware corpora may contain over a billion files and consume petabytes of storage, many feature extraction and ...
- research-articleNovember 2023
Membership Inference Attacks Against Semantic Segmentation Models
AISec '23: Proceedings of the 16th ACM Workshop on Artificial Intelligence and SecurityNovember 2023, pp 43–53https://doi.org/10.1145/3605764.3623906Membership inference attacks aim to infer whether a data record has been used to train a target model by observing its predictions. In sensitive domains such as healthcare, this can constitute a severe privacy violation. In this work we attempt to ...
- research-articleNovember 2023
Information Leakage from Data Updates in Machine Learning Models
AISec '23: Proceedings of the 16th ACM Workshop on Artificial Intelligence and SecurityNovember 2023, pp 35–41https://doi.org/10.1145/3605764.3623905In this paper we consider the setting where machine learning models are retrained on updated datasets in order to incorporate the most up-to-date information or reflect distribution shifts. We investigate whether one can infer information about these ...
- research-articleNovember 2023
Probing the Transition to Dataset-Level Privacy in ML Models Using an Output-Specific and Data-Resolved Privacy Profile
AISec '23: Proceedings of the 16th ACM Workshop on Artificial Intelligence and SecurityNovember 2023, pp 23–33https://doi.org/10.1145/3605764.3623904Differential privacy (DP) is the prevailing technique for protecting user data in machine learning models. However, deficits to this framework include a lack of clarity for selecting the privacy budget ε and a lack of quantification for the privacy ...
- research-articleNovember 2023
Equivariant Differentially Private Deep Learning: Why DP-SGD Needs Sparser Models
AISec '23: Proceedings of the 16th ACM Workshop on Artificial Intelligence and SecurityNovember 2023, pp 11–22https://doi.org/10.1145/3605764.3623902Differentially Private Stochastic Gradient Descent (DP-SGD) limits the amount of private information deep learning models can memorize during training. This is achieved by clipping and adding noise to the model's gradients, and thus networks with more ...
- proceedingNovember 2023
AISec '23: Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security
It is our pleasure to welcome you to the 16th ACM Workshop on Artificial Intelligence and Security - AISec 2023. AISec, having been annually co-located with CCS for 16 consecutive years, is the premier meeting place for researchers interested in the ...