
阅读笔记 | Privacy vs. Efficiency: Achieving Both Through Adaptive Hierarchical Federated Learning

Ranlychan
Published 2023-11-29 10:49:16
Collected in column: 蓝里小窝

Summary

The paper argues that, from the perspective of model training, the efficiency and data privacy of Federated Learning are non-orthogonal, meaning they constrain each other. The paper therefore first formulates the problem rigorously, then designs a cloud-edge-end hierarchical FL system with an adaptive control algorithm that embeds a two-level Differential Privacy mechanism to relieve both resource and privacy concerns. The design follows these ideas:

1. Offload part of the model training from resource-limited end devices to proximate edge servers by splitting the model into two parts (shallow layers and deep layers) to improve efficiency.
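As a rough illustration of this idea (not the paper's actual architecture or split point, which it does not specify here), the sketch below splits a small MLP at an assumed layer index: the end device runs only the shallow layers and hands its intermediate features to the edge, which finishes the forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 4-layer ReLU MLP; the split index is an assumption.
layers = [rng.standard_normal((8, 8)) * 0.1 for _ in range(4)]
split_point = 2

def forward(x, weights):
    for w in weights:
        x = np.maximum(x @ w, 0.0)  # linear layer + ReLU
    return x

x = rng.standard_normal((1, 8))
# End device: shallow layers only, producing intermediate features.
features = forward(x, layers[:split_point])
# Edge server: deep layers, consuming those features.
out_split = forward(features, layers[split_point:])
# Functionally equivalent to running the full model in one place.
out_full = forward(x, layers)
assert np.allclose(out_split, out_full)
```

The point of the split is that the end device only pays for the shallow layers' computation and memory, at the cost of shipping the intermediate features to the edge.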

2. Apply two-level differential privacy (DP) noise injection to protect privacy against the honest-but-curious cloud and edge servers, perturbing the model updates sent to the cloud and the intermediate features sent to the edges. Specifically, the noise function between end and edge is DP_1(O, \sigma_e) = O + N(0, S_f^2 \sigma_e^2), which simply adds Gaussian noise N to the intermediate features O produced by the end device's shallow layers. For the cloud, the DP function is DP_2(w_i^j) = \zeta \times w_i^j / \lVert w_i^j \rVert + N(0, \zeta^2 \sigma^2), in which the model weights are L2-normalized layer by layer before the noise is added.
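The two noise functions above can be sketched in Python as follows. The parameter values and the collapsing of the per-layer normalization into a single tensor are my own simplifications for brevity; the paper applies DP_2 layer by layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp1(features, sigma_e, s_f):
    # End-to-edge: DP_1(O, sigma_e) = O + N(0, S_f^2 * sigma_e^2),
    # i.e. Gaussian noise added to the shallow layers' features.
    return features + rng.normal(0.0, s_f * sigma_e, size=features.shape)

def dp2(w, zeta, sigma):
    # Edge-to-cloud: L2-normalize the update, scale it to the clipping
    # bound zeta, then add noise N(0, zeta^2 * sigma^2).
    clipped = zeta * w / np.linalg.norm(w)
    return clipped + rng.normal(0.0, zeta * sigma, size=w.shape)

features = rng.standard_normal((4, 4))
update = rng.standard_normal(16)
noisy_features = dp1(features, sigma_e=0.5, s_f=1.0)
noisy_update = dp2(update, zeta=1.0, sigma=0.5)
assert noisy_features.shape == features.shape
```

The L2-normalization in dp2 bounds the sensitivity of each update (its norm becomes exactly zeta before noise), which is what makes the Gaussian noise scale zeta * sigma meaningful for a DP guarantee.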

3. Adaptively coordinate resource optimization and privacy protection to maximize accuracy under the resource and privacy budgets. The main adaptive controls are:

  • Dynamically schedule local and global aggregations based on resource consumption to minimize the training loss.
  • Adjust the device sampling rate based on the remaining rounds and the privacy risk to avoid early termination.
  • Tune the offloading decision and the local noise intensity to minimize resource consumption, since more noise makes the model harder to converge.
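A toy sketch of the second control above, i.e. throttling device participation so the privacy budget lasts for all remaining rounds. The linear per-round cost model and the function itself are my own illustrative assumptions, not the paper's controller.

```python
def adjust_sampling_rate(base_rate, remaining_budget, per_round_cost, remaining_rounds):
    """Shrink the device sampling rate when the remaining privacy budget
    cannot cover the remaining rounds at full participation (toy model,
    assuming privacy cost scales linearly with the sampling rate)."""
    affordable_rounds = remaining_budget / per_round_cost
    if affordable_rounds >= remaining_rounds:
        return base_rate  # enough budget: keep the base sampling rate
    # Otherwise scale participation down so training does not stop early.
    return base_rate * affordable_rounds / remaining_rounds

# Budget covers all 5 rounds: rate unchanged.
assert adjust_sampling_rate(0.5, 10.0, 1.0, 5) == 0.5
# Budget covers only 2 of 4 rounds: rate halved to 0.25.
assert adjust_sampling_rate(0.5, 2.0, 1.0, 4) == 0.25
```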

A prototype of AHFL has been implemented, and experiments were conducted on CIFAR-10 and MNIST, showing that AHFL reduces the end device's computation time by 8.58%, communication time by 59.35%, and memory usage by 43.61%, and improves accuracy by 6.34% over state-of-the-art approaches such as ltAdap, FLGDP, and FEEL.

Strengths

  • Builds on the assumption that privacy and security issues are worth addressing, a concern relatively neglected in some prior works.
  • Offloads model training by splitting the model into two parts, while giving relatively comprehensive consideration to the privacy concerns among end devices, edge servers, and the cloud.
  • Formulates the problem rigorously with mathematical tools and UML diagrams.

Weaknesses

  • Balancing iteration count against noise intensity may not be the optimal approach, because the model can be evaluated from many aspects and the iteration count contributes to only part of them.
  • Does not evaluate the effect of the split point at the end device, even though different split points may incur different computation loads and influence the offloading decision.
  • The evaluation on the CIFAR-10 and MNIST datasets is relatively simple, which makes the results less convincing.

Comments

The paper mainly focuses on privacy and efficiency in a federated learning system and designs a hierarchical architecture with adaptive algorithms to balance the two. Although the experiments are not entirely convincing and some key details are not introduced, it is still an innovative work. Besides, Figure 2 impressed me a lot for using a UML swim-lane diagram combined with formulas to precisely illustrate the workflow.

Originally published 2023-11-28 on the author's personal site/blog.
