
Async - FL model
Asked by a Stack Overflow user on 2020-04-11 13:42:47
1 answer · 47 views · 0 followers · 3 votes

How can asynchronous model training be done with the TFF framework?

I have reviewed the iterative training process loop, but I am not sure how to tell which client models have been received.


1 Answer

Answered by a Stack Overflow user on 2020-05-10 21:24:16

Simulating something like "asynchronous FL" in TFF is quite possible. One way to think about it is to conceptually decouple simulation time from wall-clock time.

Sampling a different number of clients each round (rather than the usual uniform K clients) can simulate asynchronous FL. It is also possible to first process only a subset of the selected clients; researchers are free to slice the data/computation however they need.
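The "different number of clients per time window" idea can be sketched in plain Python. This is a hypothetical helper (the names `get_next_clients` and the latency model are assumptions, not TFF API): each simulated client is assigned a random completion time, and a round's participants are exactly those clients that finish within the configured block of time.

```python
import random
from datetime import timedelta

def get_next_clients(client_latencies, time_window):
    """Return ids of clients whose simulated completion time fits in `time_window`.

    client_latencies: dict mapping client id -> completion time in seconds.
    """
    limit = time_window.total_seconds()
    return [cid for cid, latency in client_latencies.items() if latency <= limit]

random.seed(0)
# 10 simulated clients with completion times between 5 and 60 minutes.
latencies = {f"client_{i}": random.uniform(5 * 60, 60 * 60) for i in range(10)}

participants = get_next_clients(latencies, time_window=timedelta(minutes=30))
print(len(participants))  # varies with the seed and window; may even be one client
```

With this kind of sampler, the number of participants naturally differs from round to round, which is the asynchronous behaviour being simulated.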

The Python-style pseudocode below demonstrates both techniques: varied client sampling and delayed gradient application.

Code language: Python
state = fed_avg_iter_proc.initialize()
for round_num in range(NUM_ROUNDS):
  # Here we conceptualize a "round" as a block of time, rather than a synchronous
  # round. We have a function that determines which clients will "finish" within 
  # our configured block of time. This might even return only a single client.
  participants = get_next_clients(time_window=timedelta(minutes=30))
  num_participants = len(participants)

  # Here we only process the first half, and then update the global model.
  state2, metrics = fed_avg_iter_proc.next(state, participants[:num_participants // 2])

  # Now process the second half of the selected clients.
  # Note: this now applies the 'pseudo-gradient' that was computed on clients
  # (the difference between the original `state` and their local training result)
  # to the model that has already taken one step (`state2`). This possibly has
  # undesirable effects on the optimisation process, or may be improved with
  # techniques that handle "stale" gradients.
  state3, metrics = fed_avg_iter_proc.next(state2, participants[num_participants // 2:])

  # Finally update the state for the next for-loop of the simulation.
  state = state3
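The "stale pseudo-gradient" concern in the comments above can be made concrete with a tiny numeric sketch. This is purely illustrative (a hypothetical scalar "model", assuming the server applies the client delta directly, as plain FedAvg with learning rate 1 would):

```python
# A client trains starting from the old global state, but by the time its
# update is applied, the server model has already moved (state2).
old_state = 1.0
client_result = 0.4                          # client's local training result
pseudo_gradient = old_state - client_result  # delta measured against old_state, 0.6

state2 = 0.8                      # server model after the first half-round step
state3 = state2 - pseudo_gradient # stale delta applied to the newer model
print(state3)  # 0.2, not the 0.4 that applying the delta to old_state would give
```

The gap between those two outcomes is exactly what "stale gradient" handling techniques try to compensate for.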
2 votes
Original page content provided by Stack Overflow; translation supported by Tencent Cloud's IT-domain translation engine.
Original link:

https://stackoverflow.com/questions/61152605
