Privacy-Preserving Machine Learning Using EtC Images
[article]
2019
arXiv
pre-print
In this paper, we propose a novel privacy-preserving machine learning scheme with encrypted images, called EtC (Encryption-then-Compression) images. ...
In an experiment, the proposed scheme is applied to a facial recognition algorithm with classifiers for confirming the effectiveness of the scheme under the use of support vector machine (SVM) with the ...
carry out privacy-preserving machine learning. ...
arXiv:1911.00227v1
fatcat:zqst6xdfcjaxhjqflz5s3pinty
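For readers unfamiliar with EtC images, the block-based operations commonly used in this line of work (block-position scrambling plus a per-block negative-positive transform) can be sketched in a few lines of numpy. This is a toy illustration of the general idea, not the paper's exact cipher; the function name and parameters are hypothetical.

```python
import numpy as np

def etc_encrypt(img, block=4, seed=0):
    """Toy EtC-style encryption: cut the image into fixed-size blocks,
    scramble the block positions with a key-derived permutation, and
    apply a random negative-positive transform to roughly half the blocks."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    # Cut the image into a flat list of (block x block) tiles.
    tiles = (img.reshape(h // block, block, w // block, block)
                .transpose(0, 2, 1, 3)
                .reshape(-1, block, block))
    tiles = tiles[rng.permutation(len(tiles))]   # block scrambling (copies)
    flip = rng.random(len(tiles)) < 0.5
    tiles[flip] = 255 - tiles[flip]              # negative-positive transform
    # Reassemble the scrambled tiles into an image of the original size.
    return (tiles.reshape(h // block, w // block, block, block)
                 .transpose(0, 2, 1, 3)
                 .reshape(h, w))

img = np.arange(256, dtype=np.uint8).reshape(16, 16)
enc = etc_encrypt(img)
print(enc.shape, bool((enc != img).any()))
```

Because both operations are keyed permutations/involutions, a holder of the seed can invert them, while the encrypted image remains compressible block by block, which is the point of the Encryption-then-Compression ordering.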
BrainLeaks: On the Privacy-Preserving Properties of Neuromorphic Architectures against Model Inversion Attacks
[article]
2024
arXiv
pre-print
However, little to no existing work has investigated the privacy of neuromorphic architectures against model inversion. ...
Particularly, model inversion (MI) attacks enable the reconstruction of data samples that have been used to train the model. ...
Particularly, we have explored the privacy risks posed by model inversion attacks to these architectures. ...
arXiv:2402.00906v2
fatcat:hyphw3rpz5gvjmpfpwa4oi2gie
Adversarial Learning of Privacy-Preserving and Task-Oriented Representations
[article]
2019
arXiv
pre-print
Our work aims at learning a privacy-preserving and task-oriented representation to defend against such model inversion attacks. ...
For instance, there could be a potential privacy risk of machine learning systems via the model inversion attack, whose goal is to reconstruct the input data from the latent representation of deep networks ...
Evaluation against Data Privacy Attacks Adversary's Attacks. ...
arXiv:1911.10143v1
fatcat:jkkpytmskbfinf2yhiil3znxdu
Adversarial Learning of Privacy-Preserving and Task-Oriented Representations
2020
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)
Our work aims at learning a privacy-preserving and task-oriented representation to defend against such model inversion attacks. ...
For instance, there could be a potential privacy risk of machine learning systems via the model inversion attack, whose goal is to reconstruct the input data from the latent representation of deep networks ...
Evaluation against Data Privacy Attacks Adversary's Attacks. ...
doi:10.1609/aaai.v34i07.6930
fatcat:enzdvnrf4jehtescgrhnisvycq
Privacy-Preserving Face Recognition with Learnable Privacy Budgets in Frequency Domain
[article]
2022
arXiv
pre-print
This paper proposes a privacy-preserving face recognition method using differential privacy in the frequency domain. ...
Current privacy-preserving approaches to face recognition are often accompanied by many side effects, such as a significant increase in inference time or a noticeable decrease in recognition accuracy. ...
The masked images can defend against the black-box recovery attack of UNet. These show that our method performs far better than other SOTA face recognition protection methods. ...
arXiv:2207.07316v3
fatcat:odmkn7tkjna2dhwpu5yzvwroe4
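The general recipe behind frequency-domain differential privacy can be illustrated with a minimal numpy sketch: transform the image with a 2-D DCT, add Laplace noise calibrated to a privacy budget, and transform back. Note this sketch uses a single fixed budget `eps`, whereas the paper's contribution is learnable per-frequency budgets; all names here are hypothetical.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (numpy-only, no scipy needed)."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c[0] *= 1 / np.sqrt(2)                    # scale the DC row
    return c * np.sqrt(2 / n)

def dp_frequency_mask(img, eps=1.0, sensitivity=1.0, seed=0):
    """Toy frequency-domain privatization: 2-D DCT, Laplace noise with
    scale sensitivity/eps on every coefficient, inverse DCT."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    Ch, Cw = dct_matrix(h), dct_matrix(w)
    coeffs = Ch @ img @ Cw.T                  # forward 2-D DCT
    coeffs += rng.laplace(scale=sensitivity / eps, size=coeffs.shape)
    return Ch.T @ coeffs @ Cw                 # inverse (orthonormal) DCT

img = np.linspace(0.0, 1.0, 64).reshape(8, 8)
masked = dp_frequency_mask(img, eps=0.5)
print(float(np.abs(masked - img).mean()))
```

Because the DCT matrices are orthonormal, the transform itself is lossless; all distortion comes from the noise, so smaller `eps` (a stricter budget) means stronger masking and lower downstream accuracy.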
Vulnerability of Face Recognition Systems Against Composite Face Reconstruction Attack
[article]
2020
arXiv
pre-print
Face composition parts enable the attacker to violate the privacy of face recognition models even with a blind search. ...
To address this problem, we successfully test Face Detection Score Filtering (FDSF) as a countermeasure to protect the privacy of training data against proposed attack. ...
We showed that face recognition systems are vulnerable with respect to preserving the privacy of their training data. ...
arXiv:2009.02286v1
fatcat:ddi6qmkuprh7vmecl3ot5fqkde
Sanitization of Visual Multimedia Content: A Survey of Techniques, Attacks, and Future Directions
[article]
2022
arXiv
pre-print
... images and videos), the attacks against the cited mechanisms, and possible countermeasures. ...
Data sanitization -- the process of obfuscating or removing sensitive content related to the data -- helps to mitigate the severe impact of potential security and privacy risks. ...
Later on, it was uncovered that P3 is not effective in privacy preservation against artificial neural networks [65, 28]. ...
arXiv:2207.02051v1
fatcat:oevtpttxvvgo3p537t54wwhq5y
Privacy-Preserving Face Recognition Using Trainable Feature Subtraction
[article]
2024
arXiv
pre-print
We distill our methodologies into a novel privacy-preserving face recognition method, MinusFace. Experiments demonstrate its high recognition accuracy and effective privacy protection. ...
This paper explores face image protection against viewing and recovery attacks. ...
Privacy is heightened through random channel shuffling, which obscures facial texture signals and increases randomness to hinder recovery attacks. ...
arXiv:2403.12457v1
fatcat:vsek3glxuvgyjmftvnriyjyaz4
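The random channel shuffling mentioned in the snippet above can be sketched as a keyed permutation of a feature map's channel axis, invertible only by a holder of the key. A toy numpy illustration (hypothetical shapes and names, not MinusFace's actual pipeline):

```python
import numpy as np

def shuffle_channels(feat, key=0):
    """Permute the channel axis of a (C, H, W) feature map with a
    key-derived random permutation."""
    perm = np.random.default_rng(key).permutation(feat.shape[0])
    return feat[perm]

def unshuffle_channels(feat, key=0):
    """Invert shuffle_channels for a holder of the same key."""
    perm = np.random.default_rng(key).permutation(feat.shape[0])
    return feat[np.argsort(perm)]            # argsort gives the inverse perm

feat = np.arange(8 * 2 * 2, dtype=float).reshape(8, 2, 2)
protected = shuffle_channels(feat, key=42)
print(np.array_equal(unshuffle_channels(protected, key=42), feat))  # → True
```

Reordering channels destroys the spatial texture an attacker would exploit for recovery while preserving the channel contents themselves for an authorized recognizer.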
Adversarial Neural Network Inversion via Auxiliary Knowledge Alignment
[article]
2019
arXiv
pre-print
The rise of deep learning technique has raised new privacy concerns about the training data and test data. ...
The inversion model can be trained with black-box accesses to the target model. We propose two main techniques towards training the inversion model in the adversarial settings. ...
Secure & Privacy-Preserving ML In the wake of security and privacy threats posed to ML techniques, much research has been devoted to provisioning secure and privacy-preserving training of ML models [58 ...
arXiv:1902.08552v1
fatcat:jqfsdfsb5jcnfphv4k6bcci7uu
Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures
2015
Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security - CCS '15
Machine-learning (ML) algorithms are increasingly utilized in privacy-sensitive applications such as predicting lifestyle choices, making medical diagnoses, and facial recognition. ...
Whether model inversion attacks apply to settings outside theirs, however, is unknown. ...
The error rate for each model is given in Figure 6. Basic MI attack. We now turn to inversion attacks against the models described above. ...
doi:10.1145/2810103.2813677
dblp:conf/ccs/FredriksonJR15
fatcat:dh65uwob5bbhzo67togrw5mu7u
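The confidence-exploiting inversion idea in this paper can be sketched on a toy model: run gradient ascent on the input to maximize the confidence the model assigns to a target class. The linear-softmax model below is a hypothetical stand-in, not the decision-tree or face-recognition models studied in the paper.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                       # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def invert(W, b, target, steps=500, lr=0.1):
    """Toy model inversion in the spirit of Fredrikson et al.: gradient
    ascent on input x to maximize the model's confidence in `target`."""
    x = np.zeros(W.shape[1])
    for _ in range(steps):
        p = softmax(W @ x + b)
        # d log p[target] / dx = W[target] - sum_k p[k] * W[k]
        grad = W[target] - p @ W
        x += lr * grad
    return x

# A hypothetical 3-class linear-softmax model on 4-dimensional inputs.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
b = np.zeros(3)
x_rec = invert(W, b, target=1)
conf = softmax(W @ x_rec + b)[1]
print(round(float(conf), 3))
```

The attack needs only confidence scores from the model, which is why the paper's basic countermeasure is to round or coarsen the reported confidences.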
OpticalDR: A Deep Optical Imaging Model for Privacy-Protective Depression Recognition
[article]
2024
arXiv
pre-print
Experiments on CelebA, AVEC 2013, and AVEC 2014 datasets demonstrate that our OpticalDR has achieved state-of-the-art privacy protection performance with an average AUC of 0.51 on popular facial recognition ...
Depression Recognition (DR) poses a considerable challenge, especially in the context of the growing concerns surrounding privacy. ...
Additionally, privacy-preserving performance is assessed on the CelebA dataset, measured as the AUC under various facial recognition models. ...
arXiv:2402.18786v1
fatcat:q4eepfjambe2xemvrqt5zs7xjy
Visual Security Evaluation of Learnable Image Encryption Methods against Ciphertext-only Attacks
[article]
2020
arXiv
pre-print
In this paper, we evaluate state-of-the-art visual protection methods for privacy-preserving DNNs in terms of visual security against ciphertext-only attacks (COAs). ...
Various visual information protection methods have been proposed for privacy-preserving deep neural networks (DNNs). ...
Therefore, DNNs have been deployed in privacy-sensitive/security-critical applications, such as facial recognition, biometric authentication, and medical image analysis. ...
arXiv:2010.06198v1
fatcat:kxq77nnlubegnfdc4eay3iyivm
Latent-space-level Image Anonymization with Adversarial Protector Networks
2019
IEEE Access
to model inversion attacks. ...
INDEX TERMS Adversarial learning, data privacy, deep learning, differential privacy, generative adversarial networks, machine learning, model inversion attacks. ...
MODEL INVERSION ATTACK Fredrikson et al. ...
doi:10.1109/access.2019.2924479
fatcat:kml2fqemrbfwrl74clzpclqef4
Video scrambling for privacy protection in video surveillance: recent results and validation framework
2011
Mobile Multimedia/Image Processing, Security, and Applications 2011
Indeed, this is a major threat to privacy in video surveillance. A face de-identification algorithm is proposed in [5], which preserves many facial characteristics but makes the face unrecognizable. ...
In particular, it is paramount to validate proposed PET against user and system requirements for privacy. ...
Finally, performance analysis should also include the impact on compression efficiency, complexity, and security against attacks. ...
doi:10.1117/12.883948
fatcat:oehfeqmoo5bqzcfx4cvjw5gmfy
A review of privacy-preserving human and human activity recognition
2020
International Journal on Smart Sensing and Intelligent Systems
This paper analyzes the cutting-edge research trends, techniques, and issues of privacy-preserving human and human activity recognition. ...
Therefore, it is necessary to study privacy protection methods in the automatic recognition of human and human activities. ...
(2019) proposed PRADA to protect against the model stealing attack. ...
doi:10.21307/ijssis-2020-008
fatcat:fmrjiyw63jdxroj3ssd74iliq4
Showing results 1 — 15 out of 1,845 results