Fast-Convergent Federated Learning via Cyclic Aggregation
[article]
2022
arXiv
pre-print
Federated learning (FL) aims at optimizing a shared global model over multiple edge devices without transmitting (private) data to the central server. ...
Numerical results validate that simply plugging the proposed cyclic aggregation into existing FL algorithms effectively reduces the number of training iterations while improving performance. ...
Federated learning (FL) [5] mitigates this problem by requesting updated models from the edge devices instead of their data sets. ...
arXiv:2210.16520v1
fatcat:fezdwxb6h5drdih5bydpayhcra
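The entry above only names the cyclic aggregation idea; a minimal sketch of how a round-robin (cyclic) pass over client updates differs from one-shot FedAvg-style averaging, on a toy scalar model (the training step and all names are illustrative, not the paper's actual algorithm):

```python
# Illustrative sketch: one FedAvg-style round versus a cyclic variant in which
# the server folds in client updates one at a time in round-robin order.
# All names are hypothetical; this is not the paper's exact method.

def local_update(model, data, lr=0.1):
    # Placeholder "training": nudge the scalar model toward the client's data mean.
    target = sum(data) / len(data)
    return model + lr * (target - model)

def fedavg_round(global_model, client_data):
    updates = [local_update(global_model, d) for d in client_data]
    return sum(updates) / len(updates)      # one-shot averaging

def cyclic_round(global_model, client_data):
    model = global_model
    for d in client_data:                   # visit clients cyclically:
        model = local_update(model, d)      # each client starts from the model
    return model                            # left by the previous one

clients = [[1.0, 2.0], [3.0, 4.0], [10.0, 12.0]]
m_avg = fedavg_round(0.0, clients)
m_cyc = cyclic_round(0.0, clients)
```

The cyclic pass lets later clients build on earlier updates within a single round, which is the intuition behind faster per-round progress.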
Federated Learning for Wireless Applications: A Prototype
[article]
2023
arXiv
pre-print
To tackle these challenges, Federated Learning (FL) has emerged as a distributed optimization approach to the decentralization of the model training process. ...
Wireless embedded edge devices are ubiquitous in our daily lives, enabling them to gather immense data via onboard sensors and mobile applications. ...
We observe that convergence is fast in the IID case, and more local learning helps. Despite reduced accuracy gains in the non-IID setting, more local learning helps there as well. ...
arXiv:2312.08577v1
fatcat:6prl2rgjlbay3obmp53ceuwrgu
FedCluster: Boosting the Convergence of Federated Learning via Cluster-Cycling
[article]
2020
arXiv
pre-print
FedCluster groups the devices into multiple clusters that perform federated learning cyclically in each learning round. ...
We develop FedCluster, a novel federated learning framework with improved optimization efficiency, and investigate its theoretical convergence properties. ...
In the future, we hope that FedCluster can be implemented in practical federated learning systems to demonstrate its fast convergence and provide great flexibility in scheduling workloads ...
arXiv:2009.10748v1
fatcat:umxay6r45jadda7i6qcrye4zpq
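The cluster-cycling idea in the entry above can be sketched minimally: devices are grouped into clusters, and within one learning round each cluster performs a FedAvg-style step in turn, starting from the model the previous cluster produced (the toy "training" step and all names are hypothetical, not FedCluster's code):

```python
# Illustrative sketch of cluster-cycling within a single learning round.

def local_step(model, value, lr=0.5):
    # Toy local training: move the scalar model toward the device's local optimum.
    return model + lr * (value - model)

def cluster_cycle_round(global_model, clusters):
    model = global_model
    for cluster in clusters:                  # clusters take turns in the round
        updates = [local_step(model, v) for v in cluster]
        model = sum(updates) / len(updates)   # intra-cluster averaging
    return model

clusters = [[0.0, 2.0], [4.0, 6.0]]           # two clusters of two devices each
result = cluster_cycle_round(1.0, clusters)   # → 3.0
```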
Communication-Efficient Edge AI: Algorithms and Systems
[article]
2020
arXiv
pre-print
This is driven by the explosive growth of data, advances in machine learning (especially deep learning), and easy access to vastly powerful computing resources. ...
an over-the-air computation approach for fast model aggregation in each round of training for on-device federated learning. ...
The efficiency of over-the-air computation for fast aggregation in federated edge learning has also been demonstrated in [84], which characterized two trade-offs between communication and learning performance ...
arXiv:2002.09668v1
fatcat:nhasdzb7t5dt5brs2r7ocdzrnm
FedSR: A Semi-Decentralized Federated Learning Algorithm for Non-IIDness in IoT System
[article]
2024
arXiv
pre-print
To address the above issues, in this paper, we combine centralized federated learning with decentralized federated learning to design a semi-decentralized cloud-edge-device hierarchical federated learning ...
Due to privacy and security issues, it is difficult to collect all these data together to train deep learning models; thus federated learning, a distributed machine learning paradigm that protects ...
Index Terms-Federated learning (FL), decentralized federated learning (DFL), non-iid data, hierarchical federated learning.
arXiv:2403.14718v1
fatcat:mo4ln7tnofeilacfxygb7342fy
One-Bit Over-the-Air Aggregation for Communication-Efficient Federated Edge Learning: Design and Convergence Analysis
[article]
2020
arXiv
pre-print
Federated edge learning (FEEL) is a popular framework for model training at an edge server using data distributed at edge devices (e.g., smart-phones and sensors) without compromising their privacy. ...
The analysis shows that channel hostilities slow down the convergence of the learning process by introducing a scaling factor and a bias term into the gradient norm. ...
existing federated learning literature (see, e.g., [3]–[22]). ...
arXiv:2001.05713v2
fatcat:jvjltw46k5d7vpohvqq64nix6a
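The one-bit aggregation in the entry above follows a SignSGD-style pattern: each device transmits only the sign of its local gradient, and the server takes a per-coordinate majority vote. A minimal sketch of that vote (ignoring the over-the-air channel model, fading, and noise that the paper actually analyzes; names are hypothetical):

```python
# Illustrative sketch of one-bit (sign-based) gradient aggregation with a
# per-coordinate majority vote, as in SignSGD-style schemes.

def sign(x):
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

def one_bit_aggregate(local_grads):
    # local_grads: list of per-device gradient vectors (lists of floats).
    dim = len(local_grads[0])
    votes = [sum(sign(g[i]) for g in local_grads) for i in range(dim)]
    return [sign(v) for v in votes]       # majority vote per coordinate

grads = [[0.3, -1.2], [0.1, -0.4], [-0.2, -0.9]]
agg = one_bit_aggregate(grads)            # → [1.0, -1.0]
```

Because only signs are transmitted, the per-device uplink cost is one bit per model coordinate, which is what makes the scheme communication-efficient.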
FedBE: Making Bayesian Model Ensemble Applicable to Federated Learning
[article]
2021
arXiv
pre-print
your federated learning algorithm intact. ...
Federated learning aims to collaboratively train a strong global model by accessing users' locally trained models but not their own data. ...
We improve aggregation via Bayesian ensemble and knowledge distillation, bypassing weight matching. Ensemble learning and knowledge distillation. ...
arXiv:2009.01974v4
fatcat:svmvm4zxevcpjhwwxxjbhobyau
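The entry above aggregates by ensembling client models' predictions and distilling them into one global model, rather than averaging weights. A toy sketch of that two-step idea on a linear model (a crude stand-in, not FedBE's Bayesian ensemble or implementation; all names are hypothetical):

```python
# Illustrative sketch: aggregate via prediction ensembling, then distill the
# ensemble into a single global model on unlabeled data.

def client_predict(w, x):
    return w * x                           # toy linear "model"

def ensemble_predict(client_weights, x):
    preds = [client_predict(w, x) for w in client_weights]
    return sum(preds) / len(preds)         # averaged soft predictions

def distill(client_weights, unlabeled_xs):
    # Fit one linear weight to the ensemble's outputs (closed-form least
    # squares for this toy model).
    num = sum(ensemble_predict(client_weights, x) * x for x in unlabeled_xs)
    den = sum(x * x for x in unlabeled_xs)
    return num / den

w_global = distill([1.0, 2.0, 3.0], [0.5, 1.0, 2.0])   # → 2.0
```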
Technical Sessions
2021
2021 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB)
Spatial to Gradient Domain Feature Aggregation
Jian Ma, Anhui University
Research on 5G Wireless Networks and Evolution
Guiqing Liu, China Telecom Group
A Spectrum Sensing Algorithm for DTMB-A based ...
Data Fresh under Heterogeneous QoS Requirements
Yiqin Tan, Tsinghua University
Design of a next generation 5G broadcasting core network in China
Zhixin Liu, Shanghai Jiao Tong University
Application of Federated ...
doi:10.1109/bmsb53066.2021.9547160
fatcat:3npwqozpznfa7npul4jitqgbq4
Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge Computing
[article]
2020
arXiv
pre-print
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes. ...
To train large-scale machine learning models, edge/fog computing is often leveraged as an alternative to centralized learning. ...
However, for large-scale machine learning problems, such as distributed systems with unstable links in federated learning [13], the impact of communication costs becomes pronounced while computation is ...
arXiv:2010.00914v1
fatcat:o7oy4w4hznehtok35l546kard4
Federated Learning for Privacy-Aware Human Mobility Modeling
2022
Frontiers in Artificial Intelligence
This work investigates the creation of spatiotemporal models using a Federated Learning (FL) approach—a machine learning technique that avoids sharing personal data with centralized servers. ...
While spatiotemporal data can be collected easily via smartphones, current state-of-the-art deep learning methods require vast amounts of such privacy-sensitive data to generate useful models. ...
ACKNOWLEDGMENTS We thank Matías Laporte for maintaining the deep learning hardware architecture, used for the experiments in this study. ...
doi:10.3389/frai.2022.867046
pmid:35837615
pmcid:PMC9273827
fatcat:4myna6f4svcg7kugrx466ap5ua
Towards Privacy-Preserving and Verifiable Federated Matrix Factorization
[article]
2022
arXiv
pre-print
Recent years have witnessed the rapid growth of federated learning (FL), an emerging privacy-aware machine learning paradigm that allows collaborative learning over isolated datasets distributed across ...
Moreover, VPFedMF further supports correctness verification of the aggregation results produced by the coordinating server in federated MF. ...
VPFedMF enables matrix factorization in a federated learning setting, while preventing privacy leakages from the gradients by aggregating gradients in the ciphertext domain via secure aggregation techniques ...
arXiv:2204.01601v2
fatcat:366tl652orer7fcvhaxy7qz7yi
Advances and Open Problems in Federated Learning
[article]
2021
arXiv
pre-print
Federated learning (FL) is a machine learning setting where many clients (e.g. mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g. service ...
FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science ...
[171] formulated this problem and studied the convergence of semi-cyclic SGD, where multiple blocks of clients with different characteristics are sampled following a regular cyclic pattern (e.g ...
arXiv:1912.04977v3
fatcat:efkbqh4lwfacfeuxpe5pp7mk6a
One-Bit Byzantine-Tolerant Distributed Learning via Over-the-Air Computation
[article]
2023
arXiv
pre-print
To achieve fast and reliable model aggregation in the presence of Byzantine attacks, we develop a signed stochastic gradient descent (SignSGD)-based Hierarchical Vote framework via over-the-air computation ...
We comprehensively analyze the proposed framework on the impacts including Byzantine attacks and the wireless environment (channel fading and receiver noise), followed by characterizing the convergence ...
Celebrated distributed learning paradigms such as federated learning [1]–[4], swarm learning [5], and split learning [6] have realized a wide scope of applications, including 6G networks [1] ...
arXiv:2310.11998v2
fatcat:3oerjfmngzd33bni2wrqom2dki
Coding for Large-Scale Distributed Machine Learning
2022
Entropy
Moreover, the involved computing nodes and data volumes for learning tasks have also increased significantly. ...
For large-scale distributed learning systems, significant challenges have appeared in terms of delay, errors, efficiency, etc. ...
For instance, it can be a federated learning network with worker and server nodes. ...
doi:10.3390/e24091284
pmid:36141170
pmcid:PMC9497980
fatcat:ul4lu6xty5cwbnsop5ccv7ns64
Empowering Federated Learning for Massive Models with NVIDIA FLARE
[article]
2024
arXiv
pre-print
In this paper, we explore how federated learning enabled by NVIDIA FLARE can address these challenges with easy and scalable integration capabilities, enabling parameter-efficient and full supervised fine-tuning ...
Most state-of-the-art machine learning algorithms are data-centric. ...
This procedure is repeated until convergence is achieved. ...
arXiv:2402.07792v1
fatcat:xjy5b2oeybhxdcuajoujhuzsr4
Showing results 1 — 15 out of 4,144 results