Feb 17, 2021 · In this work, we explore whether it is possible for one party to steal the private label information from the other party during split training, ...
Jan 28, 2022 · This paper formulates a threat model on two-party split learning (parties have different features, with one party holding the labels) for binary ...
This work first formulates a realistic threat model and proposes a privacy loss metric to quantify label leakage in split learning, and shows that there ...
Code repo for the paper Label Leakage and Protection in Two-party Split Learning (ICLR 2022). Requirements: Python 3; TensorFlow >2.0; scikit-learn ...
We first show that the norm attack, a simple method that uses the norm of the gradients communicated between the parties, can largely reveal the ground-truth ...
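The idea behind the norm attack can be sketched in a few lines: in imbalanced binary classification, the per-example gradients exchanged in split learning tend to have larger norms for the rare positive class, so the non-label party can score examples by gradient norm alone. The sketch below uses synthetic stand-in gradients (the data, scales, and threshold choice are assumptions for illustration, not the paper's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hidden ground-truth labels held by the label party (10% positives).
labels = (rng.random(n) < 0.1).astype(int)

# Stand-in for the per-example gradients the label party sends back;
# positives are given larger-norm gradients on average, mimicking the
# imbalance effect the norm attack exploits.
grads = rng.normal(0.0, 1.0, size=(n, 16))
grads[labels == 1] *= 5.0

# The attacker only sees the gradients, not the labels: score each
# example by its gradient norm.
norms = np.linalg.norm(grads, axis=1)

# Measure leakage as the AUC of the norm score against the true labels
# (fraction of positive/negative pairs ranked correctly).
pos, neg = norms[labels == 1], norms[labels == 0]
auc = (pos[:, None] > neg[None, :]).mean()
```

On this synthetic data the norm score separates the classes almost perfectly, which is the sense in which a bare gradient norm can "largely reveal" the private labels.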
Oct 21, 2023 · Although no raw data is communicated between the two parties during model training, several works have demonstrated that data privacy, ...
We propose ExPLoit, a label-leakage attack that allows an adversarial input-owner to extract the private labels of the label-owner during split learning.