93 Hits in 5.8 sec

Honest-but-Curious Nets: Sensitive Attributes of Private Inputs Can Be Secretly Coded into the Classifiers' Outputs [article]

Mohammad Malekzadeh and Anastasia Borovykh and Deniz Gündüz
2021 arXiv   pre-print
This results in an attack that works even if users have a full white-box view of the classifier, and can keep all internal representations hidden except for the classifier's outputs for the target attribute  ...  We take a step forward and show that deep classifiers can be trained to secretly encode a sensitive attribute of users' input data into the classifier's outputs for the target attribute, at inference time  ...  Code and instructions for reproducing the reported results are available at https://github.com/mmalekzadeh/honest-but-curious-nets.  ... 
arXiv:2105.12049v2 fatcat:ffsok2y7ajgnvawqyevpif36mi
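
A minimal sketch of the idea this entry describes: a classifier is trained with a joint loss so that its target-attribute outputs also let a secret decoder recover a sensitive attribute. The architectures, the decoder, and the trade-off weight alpha are illustrative assumptions, not the paper's exact formulation.

# Hypothetical sketch: outputs for the public task covertly carry a sensitive bit.
import torch
import torch.nn as nn

classifier = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
decoder = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 2))  # attacker's secret decoder

opt = torch.optim.Adam(list(classifier.parameters()) + list(decoder.parameters()), lr=1e-3)
ce = nn.CrossEntropyLoss()
alpha = 0.5  # trade-off between honest task accuracy and covert leakage (assumed)

x = torch.randn(64, 16)                   # toy inputs
y_target = torch.randint(0, 2, (64,))     # public target attribute
s_sensitive = torch.randint(0, 2, (64,))  # sensitive attribute to be leaked

for _ in range(100):
    logits = classifier(x)                            # the only thing the user sees
    leak = decoder(torch.softmax(logits, dim=1))      # decode the secret from the outputs
    loss = ce(logits, y_target) + alpha * ce(leak, s_sensitive)
    opt.zero_grad()
    loss.backward()
    opt.step()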

Robust Membership Encoding: Inference Attacks and Copyright Protection for Deep Learning [article]

Congzheng Song, Reza Shokri
2020 arXiv   pre-print
The resulting models are considered intellectual property of the model owners, and their copyright should be protected.  ...  Machine learning as a service (MLaaS) and algorithm marketplaces are on the rise. Data holders can easily train complex models on their data using third-party learning code.  ...  Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore.  ... 
arXiv:1909.12982v2 fatcat:avg5abeayzbhln3ambp4ye4zvu

Privacy-Preserving Deep Learning on Machine Learning as a Service - A Comprehensive Survey

Harry Chandra Tanuwidjaja, Rakyong Choi, Seunggeun Baek, Kwangjo Kim
2020 IEEE Access  
... in order to perform the convolution, then sum up the results as a weighted sum  ...  The convolution kernel has size n × n, and we perform a dot product between it and the neighboring values to compute the convolution.  ... 
doi:10.1109/access.2020.3023084 fatcat:inxcd6sbhfewtkm4q3krnkx4oe
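
The snippet above describes a convolution as a dot product of an n × n kernel with neighboring values, summed as a weighted sum. A minimal NumPy sketch of exactly that computation (no padding or stride handling, for clarity):

# Slide an n x n kernel over the input; each output value is the dot
# product (weighted sum) of the kernel with one neighborhood.
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    n = kernel.shape[0]                         # kernel is n x n
    h, w = image.shape
    out = np.zeros((h - n + 1, w - n + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + n, j:j + n]     # neighboring values
            out[i, j] = np.sum(patch * kernel)  # dot product -> weighted sum
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0                  # simple averaging filter
print(conv2d(image, kernel))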

Attacking Deep Learning AI Hardware with Universal Adversarial Perturbation [article]

Mehdi Sadi, B. M. S. Bahar Talukder, Kaniz Mishty, Md Tauhidur Rahman
2021 arXiv   pre-print
Universal Adversarial Perturbations are image-agnostic, model-independent noise that, when added to any image, can mislead trained Deep Convolutional Neural Networks into a wrong prediction.  ...  Since these Universal Adversarial Perturbations can seriously jeopardize the security and integrity of practical Deep Learning applications, existing techniques use additional neural networks to detect  ...  In particular, for modern pruned AI models the weights are sparse and compressed; as a result, exactly identifying and confining each weight within DRAM pages is very complex.  ... 
arXiv:2111.09488v1 fatcat:62ouldbkvjhithhv4kouriz6va
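
An illustrative sketch of applying a universal adversarial perturbation (UAP) as described in this entry: one fixed, image-agnostic noise tensor is added to any input before classification. The model, the budget eps, and the fact that uap is already computed are all assumptions; finding a real UAP requires an optimization over many images.

# Same fixed noise is added to every image; a real UAP would flip many predictions.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()            # stand-in classifier
eps = 8 / 255                                           # L-infinity budget (assumed)
uap = torch.empty(1, 3, 224, 224).uniform_(-eps, eps)   # pretend-precomputed UAP

x = torch.rand(4, 3, 224, 224)                          # any batch of images
x_adv = torch.clamp(x + uap, 0.0, 1.0)                  # identical noise for every image

with torch.no_grad():
    clean_pred = model(x).argmax(dim=1)
    adv_pred = model(x_adv).argmax(dim=1)
print(clean_pred, adv_pred)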

Digital watermarking for deep neural networks

Yuki Nagai, Yusuke Uchida, Shigeyuki Sakazawa, Shin'ichi Satoh
2018 International Journal of Multimedia Information Retrieval  
We show that our framework can embed a watermark during the training of a deep neural network from scratch, and during fine-tuning and distilling, without impairing its performance.  ...  A fine-tuning step helps to both reduce the computational cost and improve performance. Therefore, sharing trained models has been very important for the rapid progress of research and development.  ...  Because quantization has less impact than parameter pruning and Huffman coding is lossless compression, we focus on parameter pruning.  ... 
doi:10.1007/s13735-018-0147-1 fatcat:agb6aopi4zfxtgusj5j6wryyey
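
A sketch in the spirit of the white-box watermarking framework above: a bit string is embedded into a layer's weights during training via an extra regularizer, and later read back by projecting the weights through a secret matrix. The projection matrix X, the layer choice, the stand-in task loss, and lambda are illustrative assumptions.

# Embed bits into weights with a BCE regularizer on sigmoid(X @ w); extract later.
import torch
import torch.nn as nn

torch.manual_seed(0)
layer = nn.Linear(64, 32)
bits = torch.randint(0, 2, (48,)).float()      # watermark to embed
X = torch.randn(48, layer.weight.numel())      # secret projection matrix (the key)

opt = torch.optim.Adam(layer.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 0.1                                      # watermark regularizer weight (assumed)

for _ in range(200):
    w = layer.weight.flatten()
    task_loss = (layer(torch.randn(16, 64)) ** 2).mean()  # stand-in for the real task loss
    wm_loss = bce(X @ w, bits)                 # push sigmoid(X @ w) toward the bit string
    (task_loss + lam * wm_loss).backward()
    opt.step()
    opt.zero_grad()

extracted = (torch.sigmoid(X @ layer.weight.flatten()) > 0.5).float()
print("bit error rate:", (extracted != bits).float().mean().item())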

The Recent Trends in Malware Evolution, Detection and Analysis for Android Devices

Kakelli Anil Kumar, A. Raman, C. Gupta, R.R. Pillai (School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India)
2020 Journal of Engineering Science and Technology Review  
This work also discusses modern threats, the magnitude of their influence, and how they were discovered, alongside an analysis of these threats.  ...  In our work, we have analyzed various malware detection techniques and methods for mobile devices and studied how these techniques have changed over the years.  ...  Static analysis is utilized to discover possible activity paths and assess danger. Dynamic analysis is then guided along these paths to log and analyze the maliciousness of the result.  ... 
doi:10.25103/jestr.134.25 fatcat:7ogsu5reqzcjfltv7wqbsjgp5i

White Box Watermarking for Convolution Layers in Fine-Tuning Model Using the Constant Weight Code

Minoru Kuribayashi, Tatsuya Yasui, Asad Malik
2023 Journal of Imaging  
In this study, we extended the method such that it can be applied to any convolution layer of the DNN model, and designed a watermark detector based on a statistical analysis of the extracted weight  ...  Studies have focused on robustness against retraining and fine-tuning. However, less important neurons in the DNN model may be pruned.  ...  Compared with the baseline results, no clear difference in performance was observed when the watermark was embedded into the convolution layers of these DNN models.  ... 
doi:10.3390/jimaging9060117 pmid:37367465 pmcid:PMC10299526 fatcat:nsc6it3i7ffq5gsj26yvwsvof4
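
A hypothetical sketch of the detection idea this entry points at: read weights from a convolution layer at secret positions and decide, via a simple statistical test, whether a watermark is present. The paper's constant-weight-code construction and detector are more involved; the positions, embedding shift, and threshold below are illustrative assumptions.

# Two-sample z-test between weights at secret marked vs. unmarked positions.
import numpy as np

rng = np.random.default_rng(42)
conv_w = rng.normal(0.0, 0.05, size=(64, 3, 3, 3)).ravel()   # a conv layer's weights

positions = rng.choice(conv_w.size, size=128, replace=False)  # secret key
conv_w[positions[:64]] += 0.2        # embedding: shift a known subset upward

marked = conv_w[positions[:64]]
unmarked = conv_w[positions[64:]]
# Detector: the marked subset should have a significantly higher mean.
z = (marked.mean() - unmarked.mean()) / np.sqrt(
    marked.var() / marked.size + unmarked.var() / unmarked.size)
print("z-statistic:", z, "watermark detected:", z > 3.0)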

SoK: Privacy-Preserving Computation Techniques for Deep Learning

José Cabrero-Holgueras, Sergio Pastrana
2021 Proceedings on Privacy Enhancing Technologies  
… Homomorphic Encryption and Secure Multiparty Computation have enabled DL training and inference over protected data.  ...  We highlight the relative advantages and disadvantages, considering aspects such as efficiency shortcomings, reproducibility issues due to the lack of standard tools and programming interfaces, or lack  ...  Acknowledgments We thank the anonymous reviewers and our shepherd, Phillipp Schoppmann, for their valuable feedback. We also thank Alberto Di Meglio, Marco Manca  ... 
doi:10.2478/popets-2021-0064 fatcat:hb3kdruxozbspnowy63gynuapy

Input-Aware Dynamic Backdoor Attack [article]

Anh Nguyen, Anh Tran
2020 arXiv   pre-print
Such systems, while achieving state-of-the-art performance on clean data, perform abnormally on inputs with predefined triggers.  ...  Our code is publicly available at https://github.com/VinAIResearch/input-aware-backdoor-attack-release.  ...  The attacker can secretly exploit the backdoor to gain illegal benefits from the system.  ... 
arXiv:2010.08138v1 fatcat:qgsbkg6imbbt7b26pwgvgolnou
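
A minimal sketch of the backdoor mechanism this entry describes: inputs carrying a trigger are relabeled to an attacker-chosen class during training. For simplicity this uses a static corner patch; the paper's contribution is that triggers are generated per input by a trigger network, which this sketch does not reproduce.

# Poison a fraction of the training set with a (static, assumed) trigger patch.
import torch

def apply_trigger(images: torch.Tensor) -> torch.Tensor:
    """Stamp a small white patch in the bottom-right corner (assumed trigger)."""
    poisoned = images.clone()
    poisoned[:, :, -4:, -4:] = 1.0
    return poisoned

images = torch.rand(32, 3, 32, 32)
labels = torch.randint(0, 10, (32,))

poison_frac, target_class = 0.1, 0
k = int(poison_frac * len(images))
images[:k] = apply_trigger(images[:k])   # poison a small fraction of inputs
labels[:k] = target_class                # and point them at the target class
# Training on (images, labels) now implants the backdoor behavior.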

Adversarial Attacks Against Deep Generative Models on Data: A Survey

Hui Sun, Tianqing Zhu, Zhiqiu Zhang, Dawei Jin, Ping Xiong, Wanlei Zhou
2021 IEEE Transactions on Knowledge and Data Engineering  
… the security and privacy preservation of GANs and VAEs.  ...  Our focus is on the inner connection between attacks and model architectures and, more specifically, on five components of deep generative models: the training data, the latent code, the generators/decoders  ...  As traditional DDMs, recurrent neural networks (RNN) [9], convolutional neural networks (CNN) [10], and their variants perform well at sentiment analysis [11], image recognition [12], natural  ... 
doi:10.1109/tkde.2021.3130903 fatcat:2ljxpihgf5fefoiwwjg3bjrypi

A Survey of What to Share in Federated Learning: Perspectives on Model Utility, Privacy Leakage, and Communication Efficiency [article]

Jiawei Shao, Zijian Li, Wenqiang Sun, Tailin Zhou, Yuchang Sun, Lumin Liu, Zehong Lin, Yuyi Mao, Jun Zhang
2024 arXiv   pre-print
Third, we conduct extensive experiments to compare the learning performance and communication overhead of various sharing methods in FL.  ...  Federated learning (FL) has emerged as a secure paradigm for collaborative training among clients.  ...  HE and additive secret sharing [42]; implement HE on machine learning problems by approximating objective functions with polynomials [116]; adopt Lagrange coding to secretly share samples [134]  ... 
arXiv:2307.10655v2 fatcat:eg7w2ebisvc3vkaztuayxxu7be
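
The snippet above mentions additive secret sharing: a secret is split into random shares that sum to it, so no single party learns anything alone. A minimal sketch under assumed parameters (the modulus and party count are illustrative choices):

# Additive secret sharing modulo a prime; reconstruction is just a modular sum.
import secrets

P = 2**61 - 1  # a Mersenne prime as the working modulus (assumed)

def share(secret: int, n_parties: int) -> list[int]:
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)   # last share makes the sum work out
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % P

s = 123456789
parts = share(s, 3)
print(parts, reconstruct(parts) == s)
# Addition is local: summing the parties' shares of x and y yields shares of x+y.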

Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions [article]

Marwan Omar
2023 arXiv   pre-print
We then provide a detailed review and analysis of evaluation metrics, benchmark datasets, threat models, and challenges related to backdoor learning in NLP.  ...  To this end, we identify troubling gaps in the literature and offer insights and ideas into open challenges and future research directions.  ...  In [104], Lyu et al. report a highly significant performance-degradation effect. A significant amount of literature discusses overcoming the performance degradation that results from pruning defenses.  ... 
arXiv:2302.06801v1 fatcat:gslzn477hvgnxhhlkqfmpidmhe

Rethinking White-Box Watermarks on Deep Learning Models under Neural Structural Obfuscation [article]

Yifan Yan, Xudong Pan, Mi Zhang, Min Yang
2023 arXiv   pre-print
Sacrificing less functionality and involving more knowledge about the target DNN, the latter branch, called white-box DNN watermarking, is believed to be accurate, credible, and secure against most known  ...  watermark removal attacks, with emerging research efforts in both academia and industry.  ...  Acknowledgments We would like to thank the anonymous reviewers and the shepherd for their insightful comments, which helped improve the quality of the paper.  ... 
arXiv:2303.09732v1 fatcat:tcbwmilijbdjxhwqdnausvqbzi

Machine Learning for the Detection and Identification of Internet of Things (IoT) Devices: A Survey [article]

Yongxin Liu, Jian Wang, Jianqiang Li, Shuteng Niu, Houbing Song
2021 arXiv   pre-print
The first step in securing the IoT is detecting rogue IoT devices and identifying legitimate ones.  ...  Therefore, non-cryptographic IoT device identification and rogue device detection become efficient solutions to secure existing systems and will provide additional protection to systems with cryptographic  ...  Their results show that CNN has the best performance, followed by DNN and LSTM.  ... 
arXiv:2101.10181v1 fatcat:in5x7tp5fbdernmlzyvv52muke

CaPC Learning: Confidential and Private Collaborative Learning [article]

Christopher A. Choquette-Choo, Natalie Dullerud, Adam Dziedzic, Yunxiang Zhang, Somesh Jha, Nicolas Papernot, Xiao Wang
2021 arXiv   pre-print
We leverage secure multi-party computation (MPC), homomorphic encryption (HE), and other techniques in combination with privately aggregated teacher models.  ...  Each party is able to improve the accuracy and fairness of their model, even in settings where each party has a model that performs well on their own dataset, or when datasets are not IID and model architectures  ...  Intermediate values are all secretly shared (and only recovered within garbled circuits), so they are not visible to any party.  ... 
arXiv:2102.05188v2 fatcat:gphcc6vtefftzb2er5jbnjennm
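
A hedged sketch of the "privately aggregated teacher models" ingredient named in this entry, in the spirit of PATE-style noisy aggregation: each party's model votes on a label, noise is added to the vote histogram, and only the noisy argmax is released. CaPC additionally wraps the protocol in MPC/HE, which this sketch omits; the noise scale is an illustrative choice.

# Noisy-argmax aggregation of teacher votes; only the final label is released.
import numpy as np

rng = np.random.default_rng(0)
num_classes, sigma = 10, 4.0             # noise scale: an illustrative choice

teacher_votes = rng.integers(0, num_classes, size=50)    # 50 parties' predictions
hist = np.bincount(teacher_votes, minlength=num_classes).astype(float)
noisy_hist = hist + rng.normal(0.0, sigma, size=num_classes)
label = int(np.argmax(noisy_hist))       # the only value released to the querier
print("noisy-aggregated label:", label)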
Showing results 1 — 15 out of 93 results