【Summary】Papers on Microservice Scheduling

Abstract: With the development of cloud datacenters, the purpose of efficient resource allocation is to meet the demand of users instantly at the minimum rent cost. Thus, an elastic resource allocation strategy is usually combined with prediction technology. This article proposes a novel combined forecasting method that integrates exponential smoothing (ES) with an auto-regressive and polynomial fitting (PF) model. The aim of combined prediction is to achieve an efficient forecast according to the periodic and random features of the workload and to meet the application service level agreement (SLA) at the minimum cost. Moreover, the ES prediction with the PSO algorithm enables fine-grained scaling of resources up and down in combination with a heuristic algorithm. APWP would handle the periodic or hybrid fluctuation of the workload in cloud data centers. Finally, experiments show that the combined prediction model meets the SLA with better prediction accuracy at the minimum renting cost. Notes: a predictive strategy that uses exponential smoothing together with an auto-regressive and polynomial fitting model. The goal of the combined prediction model is to meet the demands of varying traffic while satisfying the service SLA; a PSO algorithm is used for fine-grained scheduling, achieving higher prediction accuracy at a lower renting cost.
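To make the idea of combined prediction concrete, here is a minimal sketch, not the paper's actual APWP implementation: an ES forecast (good for the random component) and a polynomial-fit forecast (good for the trend/periodic component) are blended by a mixing weight. The names `alpha_mix`, `exponential_smoothing`, and `polynomial_forecast` are assumptions introduced for illustration; in the paper the tuning role played here by a fixed weight is handled by PSO.

```python
# Sketch of a combined workload forecast: exponential smoothing (ES) blended
# with polynomial fitting (PF). Assumed/illustrative names, not the paper's code.
import numpy as np

def exponential_smoothing(series, alpha=0.3):
    """One-step-ahead ES forecast: s_t = alpha*x_t + (1-alpha)*s_{t-1}."""
    s = series[0]
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

def polynomial_forecast(series, degree=2):
    """Fit a degree-`degree` polynomial over time and extrapolate one step."""
    t = np.arange(len(series))
    coeffs = np.polyfit(t, series, degree)
    return np.polyval(coeffs, len(series))

def combined_forecast(series, alpha_mix=0.5):
    """Blend ES and PF; in the paper a weight like this would be tuned by PSO."""
    return alpha_mix * exponential_smoothing(series) + \
           (1 - alpha_mix) * polynomial_forecast(series)

# Hypothetical workload trace: a periodic signal plus noise.
rng = np.random.default_rng(0)
workload = 100 + 20 * np.sin(np.arange(48) * np.pi / 12) + rng.normal(0, 3, 48)
print(f"next-interval demand estimate: {combined_forecast(workload):.1f}")
```

The estimate would then drive scaling decisions: rent more instances when the predicted demand exceeds current capacity, release them when it falls below, which is where the SLA-versus-rent-cost trade-off enters.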

    04

【Reading】A Comprehensive Survey on Distributed Training of Graph Neural Networks (Translation)

Graph neural networks (GNNs) have been demonstrated to be a powerful algorithmic model in broad application fields for their effectiveness in learning over graphs. To scale GNN training up for large-scale and ever-growing graphs, the most promising solution is distributed training, which distributes the workload of training across multiple computing nodes. However, the workflows, computational patterns, communication patterns, and optimization techniques of distributed GNN training are still only preliminarily understood. In this paper, we provide a comprehensive survey of distributed GNN training by investigating various optimization techniques used in distributed GNN training. First, distributed GNN training is classified into several categories according to their workflows. In addition, their computational patterns and communication patterns, as well as the optimization techniques proposed by recent work, are introduced. Second, the software frameworks and hardware platforms of distributed GNN training are also introduced for a deeper understanding. Third, distributed GNN training is compared with distributed training of deep neural networks, emphasizing the uniqueness of distributed GNN training. Finally, interesting issues and opportunities in this field are discussed.
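What makes distributed GNN training different from ordinary data-parallel DNN training is the graph dependency: aggregating a node's neighbors may require features stored on another worker. The toy sketch below is an illustration of that communication pattern only, not any framework's API; the two-partition graph, the `partition` map, and `aggregate_on_worker` are all hypothetical names invented for this example.

```python
# Toy illustration of the data flow in partition-based distributed GNN training:
# each worker mean-aggregates neighbor features for its own nodes, and features
# of boundary nodes on the other partition would be fetched over the network.
import numpy as np

# Tiny graph: edges as (src, dst); nodes 0-2 live on worker A, 3-5 on worker B.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
features = np.eye(6, dtype=np.float32)          # one-hot node features
partition = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}

def aggregate_on_worker(worker):
    """Mean-aggregate neighbor features for this worker's nodes.

    In a real system, neighbors on the other partition must be fetched over
    the network; here we read them directly and only count the traffic."""
    out = {}
    for node in [n for n, w in partition.items() if w == worker]:
        neigh = [s for s, d in edges if d == node] + [d for s, d in edges if s == node]
        remote = [n for n in neigh if partition[n] != worker]  # cross-worker fetches
        out[node] = features[neigh].mean(axis=0)
        print(f"worker {worker}: node {node} pulled {len(remote)} remote feature(s)")
    return out

h_A = aggregate_on_worker("A")
h_B = aggregate_on_worker("B")
```

Minimizing those remote fetches (via partitioning quality, caching, or sampling) is exactly the kind of optimization technique the survey catalogs, and it has no counterpart in plain DNN data parallelism, where samples are independent.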

    03