A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2023; you can also visit the original URL.
The file type is application/pdf.
Training Latency Minimization for Model-Splitting Allowed Federated Edge Learning
[article]
2023
arXiv
pre-print
To alleviate the shortage of computing power faced by clients in training deep neural networks (DNNs) using federated learning (FL), we leverage edge computing and split learning to propose a model-splitting ...
Therefore, the training latency minimization problem (TLMP) is modelled as a minimizing-maximum problem. ...
RELATED WORKS. Federated Learning: As an effective distributed machine learning method for privacy protection, FL has been widely studied and applied in many fields. ...
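A hedged sketch of the minimizing-maximum (min-max) structure described in this entry's snippet, in generic notation (the symbols below are illustrative, not the paper's own):

```latex
\min_{\mathbf{x}\in\mathcal{X}} \ \max_{k\in\{1,\dots,K\}} \ T_k(\mathbf{x})
```

where $\mathbf{x}$ collects the model-splitting and resource-allocation decisions and $T_k(\mathbf{x})$ denotes client $k$'s per-round training latency; minimizing the maximum over clients targets the slowest participant.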
arXiv:2307.11532v1
fatcat:nffhu3idhvdv5nxplzcei2nlty
Efficient Rate-Splitting Multiple Access for the Internet of Vehicles: Federated Edge Learning and Latency Minimization
[article]
2022
arXiv
pre-print
To tackle this challenge, we propose an RSMA-based Internet of Vehicles (IoV) solution that jointly considers platoon control and FEderated Edge Learning (FEEL) in the downlink. ...
Given this sophisticated framework, a multi-objective optimization problem is formulated to minimize both the latency of the FEEL downlink and the deviation of the vehicles within the platoon. ...
To achieve lower latency in each iteration, Wang et al. [32] proposed an asynchronous federated learning scheme over wireless communication. ...
arXiv:2212.06396v1
fatcat:vkv5ipyxszgbncnztftznhhqwa
Unsupervised Data Splitting Scheme for Federated Edge Learning in IoT Networks
[article]
2022
arXiv
pre-print
First, we design an unsupervised data-aware splitting scheme that partitions the node's local data into diverse samples used for training. ...
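The snippet above describes partitioning a node's local data into diverse training samples. A minimal sketch of one such data-aware split, assuming a shuffle-and-deal strategy (the paper's actual unsupervised scheme is not specified in the snippet; `diversity_split` is a hypothetical helper):

```python
import random

def diversity_split(samples, n_parts, seed=0):
    """Partition a node's local samples into n_parts subsets.

    Hypothetical sketch: shuffle once, then deal samples round-robin so
    each subset covers the local distribution roughly evenly.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible split
    shuffled = samples[:]
    rng.shuffle(shuffled)
    parts = [[] for _ in range(n_parts)]
    for i, s in enumerate(shuffled):
        parts[i % n_parts].append(s)  # round-robin deal
    return parts

data = list(range(10))
parts = diversity_split(data, 3)
```

Round-robin dealing guarantees the subset sizes differ by at most one and that every local sample lands in exactly one subset.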
Federated Edge Learning (FEEL) is a promising distributed learning technique that aims to train a shared global model while reducing communication costs and promoting users' privacy. ...
ACKNOWLEDGMENTS The authors would like to thank the Natural Sciences and Engineering Research Council of Canada (NSERC) for the financial support of this research. ...
arXiv:2203.04376v1
fatcat:npjtzcku5fcyddlv7ujbrocy4u
Decentralized Proactive Model Offloading and Resource Allocation for Split and Federated Learning
[article]
2024
arXiv
pre-print
To address these problems, this paper first formulates the latency and data leakage risk of training DNN models using Split Federated learning. ...
Next, we frame the Split Federated learning problem as a mixed-integer nonlinear programming challenge. ...
• We formulate the joint model offloading and resource allocation problem for split federated learning as a mixed-integer nonlinear programming problem, aiming to minimize the training latency while ...
arXiv:2402.06123v1
fatcat:vpoxjdmvv5gjpo22kaiqdnpaf4
Pervasive AI for IoT Applications: Resource-efficient Distributed Artificial Intelligence
[article]
2021
arXiv
pre-print
We then review the background, applications and performance metrics of AI, particularly Deep Learning (DL) and online learning, running in a ubiquitous system. ...
Designing accurate models using such data streams, to predict future insights and revolutionize the decision-making process, inaugurates pervasive systems as a worthy paradigm for a better quality-of-life ...
We present in what follows an overview of this emerging pervasive learning technique, i.e., Federated Learning. ...
arXiv:2105.01798v1
fatcat:4tnq2wjw4bcqdfvhnoij55s2rm
Client Selection Approach in Support of Clustered Federated Learning over Wireless Edge Networks
[article]
2021
arXiv
pre-print
Clustered Federated Multitask Learning (CFL) was introduced as an efficient scheme to obtain reliable specialized models when data is imbalanced and distributed in a non-i.i.d. ...
learning rounds. ...
Finding the optimal threshold values for splitting the clusters will be considered in future studies. Figure 1 : 1 Clustered Multitask Federated Learning over wireless edge networks. ...
arXiv:2108.08768v1
fatcat:scsc2p5rrfepng4dg4lq33ga5m
Optimal Resource Allocation for U-Shaped Parallel Split Learning
[article]
2023
arXiv
pre-print
Split learning (SL) has emerged as a promising approach for model training without revealing the raw data samples from the data owners. ...
Index Terms: U-shaped network, split learning, label privacy, resource allocation, 5G/6G edge networks. ...
Furthermore, split federated learning (SFL) [14] integrates federated learning (FL) into SL to allow parallel training. ...
arXiv:2308.08896v3
fatcat:ydb7q2jh2rbspfqkghmwrwuxfe
Exploring the Privacy-Energy Consumption Tradeoff for Split Federated Learning
[article]
2024
arXiv
pre-print
Split Federated Learning (SFL) has recently emerged as a promising distributed learning technology, leveraging the strengths of both federated and split learning. ...
However, since the model is split at a specific layer, known as a cut layer, into both client-side and server-side models for the SFL, the choice of the cut layer in SFL can have a substantial impact on ...
By studying the impact of the cut layer selection on both energy consumption and privacy, we have provided a concrete example of an efficient cut layer selection to minimize the risk of reconstruction ...
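The snippet above frames cut-layer selection as a tradeoff between energy consumption and reconstruction risk. A toy weighted-sum selection over candidate cut layers, assuming hypothetical per-layer profiles (all numbers and the `pick_cut_layer` helper are illustrative, not the paper's method):

```python
def pick_cut_layer(candidates, alpha=0.5):
    # Toy criterion: minimize alpha * energy + (1 - alpha) * risk.
    # The paper's actual selection rule is not given in the snippet.
    return min(candidates,
               key=lambda c: alpha * c["energy"] + (1 - alpha) * c["risk"])

# Hypothetical profiles: deeper cuts cost the client more energy but
# lower the reconstruction risk of the transmitted activations.
candidates = [
    {"layer": 1, "energy": 0.2, "risk": 0.9},
    {"layer": 3, "energy": 0.5, "risk": 0.4},
    {"layer": 5, "energy": 0.9, "risk": 0.1},
]
best = pick_cut_layer(candidates, alpha=0.5)
```

With equal weights, the middle cut wins here because the shallow cut pays too much in risk and the deep cut too much in energy.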
arXiv:2311.09441v3
fatcat:jl3e53jnurcnjh36zauxje7jqq
Split Learning in 6G Edge Networks
[article]
2024
arXiv
pre-print
This leads to the emergence of split learning (SL) which enables servers to handle the major training workload while still enhancing data privacy. ...
Along this line, the proposal to incorporate federated learning into the mobile edge has gained considerable interest in recent years. ...
RESOURCE MANAGEMENT FOR SPLIT LEARNING: THE SINGLE-CELL PERSPECTIVE. In parallel split learning, the training latency is determined by the slowest client, also known as the "straggler." ...
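The straggler effect described above can be sketched in a few lines: the per-round latency is the maximum over clients of compute plus communication time. The timings below are illustrative, not from the paper:

```python
def round_latency(comp, comm):
    """Per-round latency in parallel split learning: the round ends only
    when the slowest client (the "straggler") finishes computing and
    transmitting, so latency is the max over clients of comp + comm."""
    return max(c + m for c, m in zip(comp, comm))

# Illustrative per-client times (seconds): client 1 is the straggler.
lat = round_latency([1.0, 2.5, 1.2], [0.4, 0.1, 0.9])
```

This is why resource management aims to equalize per-client finish times: speeding up any non-straggler client leaves the round latency unchanged.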
arXiv:2306.12194v3
fatcat:bsdtfl5ldnh47bp3x32zbqy5iq
Goal-Oriented Semantic Communications for 6G Networks
[article]
2024
arXiv
pre-print
Reality (XR) and Unmanned Aerial Vehicles (UAVs), the traditional communication framework is approaching Shannon's physical capacity limit and fails to guarantee the massive amount of transmission within latency ...
We then propose a detailed goal-oriented semantic communication framework for different time-critical and non-critical tasks. ...
Table snippet: Federated Learning (machine-machine; critical/non-critical; metrics: reliability, convergence speed, ML accuracy, model latency); Split Learning (machine-machine; critical/non-critical; metrics: reliability, convergence speed ...
arXiv:2210.09372v3
fatcat:ccsydqzn3jgopoui3cjcy2l3cm
Blockchain-based Monitoring for Poison Attack Detection in Decentralized Federated Learning
[article]
2022
arXiv
pre-print
The parallelization of operations results in minimized latency over the end-to-end communication, computation, and consensus delays incurred during the FL and blockchain operations. ...
To achieve decentralized federated learning, blockchain-based FL was proposed as a distributed FL architecture. ...
is minimized in the detection phase because verification is done separately from the decentralized federated learning process. ...
arXiv:2210.02873v1
fatcat:rzwfieog7nd6bnbuclixidvbja
Completion Time Minimization of Fog-RAN-Assisted Federated Learning With Rate-Splitting Transmission
[article]
2022
arXiv
pre-print
The problem of completion time minimization for FL is tackled by optimizing the rate-splitting transmission and fronthaul quantization strategies along with training hyperparameters such as precision and ...
This work studies federated learning (FL) over a fog radio access network, in which multiple internet-of-things (IoT) devices cooperatively learn a shared machine learning model by communicating with a ...
FEDERATED LEARNING SYSTEM OVER FOG-RAN. We consider an FL system in a fog network, in which a CS and N_I IDs collaboratively learn a machine learning model through N_A APs. ...
arXiv:2206.01373v1
fatcat:hiaiqfe22bgvvd67kfolschmle
Pushing Large Language Models to the 6G Edge: Vision, Challenges, and Opportunities
[article]
2023
arXiv
pre-print
In both aspects, considering the inherent resource limitations at the edge, we discuss various cutting-edge techniques, including split learning/inference, parameter-efficient fine-tuning, quantization ...
Then, we identify the critical challenges for LLM deployment at the edge and envision the 6G MEC architecture for LLMs. ...
, such as federated and split learning, to train/deploy models at the edge. ...
arXiv:2309.16739v1
fatcat:4de73353v5hrlngj57epy7vnwi
AI in 6G: Energy-Efficient Distributed Machine Learning for Multilayer Heterogeneous Networks
[article]
2022
arXiv
pre-print
From both the environmental and economical perspectives, non-homogeneous QoS demands obstruct the minimization of the energy footprints and operational costs of the envisioned robust networks. ...
The fusion of artificial intelligence (AI) and mobile networks will allow for the dynamic and automatic configuration of network functionalities. ...
FEDERATED AND SPLIT LEARNING TECHNIQUES. In this section, we discuss several distributed collaborative learning approaches involved in the device layer.
A. Federated Learning Technique ...
B. ...
arXiv:2207.00415v1
fatcat:2x7y6fjyvba5pfkz2bup2a2634
Enabling All In-Edge Deep Learning: A Literature Review
[article]
2022
arXiv
pre-print
Secondly, this paper presents enabling technologies, such as model parallelism and split learning, which facilitate DL training and deployment at edge servers. ...
Usually, central cloud servers are used for the computation, but it opens up other significant challenges, such as high latency, increased communication costs, and privacy concerns. ...
The ADAPT SFI Centre for Digital Media Technology is funded by Science Foundation Ireland through the SFI Research Centres Programme and is co-funded under the European Regional Development Fund (ERDF) ...
arXiv:2204.03326v2
fatcat:wlji2pvp4fd5pardzfgrezpc3u
Showing results 1 — 15 out of 6,034 results