In the earlier cluster config file patos-cluster-with-mng.yaml we used nodeGroups, i.e. unmanaged node groups, which are not visible in the EKS console...v1alpha5 kind: ClusterConfig metadata: name: patos-cluster region: cn-northwest-1 version: '1.18' nodeGroups...Then I will remove the nodeGroups section from the cluster config file so that it matches the actual cluster configuration. Quick and painless....name: private-mng-1 instanceType: t3a.2xlarge minSize: 2 maxSize: 2 privateNetworking: true As for nodeGroups
version: '1.18' managedNodeGroups: - name: mng-1 instanceType: t3a.2xlarge minSize: 2 maxSize: 2 nodeGroups...mng-win-1 instanceType: t3a.large minSize: 2 maxSize: 2 amiFamily: WindowsServer2019FullContainer nodeGroups...is the newly added node group; managed node groups only support AmazonLinux2, so here it has to be an unmanaged node group (nodeGroups).
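Pieced together, a single eksctl ClusterConfig can mix both kinds of node groups. The sketch below is assembled from the fragments above; names, sizes, and the exact layout are illustrative rather than a verbatim copy of the original file:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: patos-cluster
  region: cn-northwest-1
  version: '1.18'
managedNodeGroups:          # managed: visible in the EKS console
  - name: mng-1
    instanceType: t3a.2xlarge
    minSize: 2
    maxSize: 2
nodeGroups:                 # unmanaged: required for Windows nodes
  - name: mng-win-1
    instanceType: t3a.large
    minSize: 2
    maxSize: 2
    amiFamily: WindowsServer2019FullContainer
```

Running `eksctl create nodegroup --config-file=<file>` against an existing cluster adds the node groups declared in the file.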
/etc/salt/master configuration # The nodegroups master config file parameter is used to define nodegroups....a dash "-" before the group name nodegroups: - group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com or bl*.domain.com...' Note: nodegroups can reference other nodegroups, as seen in group3; make sure there are no circular references....Defining a node group from a list of minion IDs # Regular definition nodegroups: group1: L@host1,host2,host3 # YAML list definition nodegroups: group1: - host1 - host2 - host3
:= cache.cloudProvider.NodeGroups() // remove entries for node groups that no longer exist cache.removeEntriesForNonExistingNodeGroupsLocked...(nodeGroups) for _, nodeGroup := range nodeGroups { // call the node-group extension point of the cloud provider nodeGroupInstances...id, nil} return false, "", nil} // scale out new nodes from the node groups for _, nodeGroup := range ctx.CloudProvider.NodeGroups...= remaining resource headroom available for scale-up scaleUpResourcesLeft, errLimits := computeScaleUpResourcesLeftLimits(context, processors, nodeGroups...skippedNodeGroups := map[string]status.Reasons{} // outer loop: iterate over all NodeGroups for _, nodeGroup := range nodeGroups
─ metrics # metrics collection ├── processors │ ├── callbacks │ ├── customresources │ ├── nodegroupconfig │ ├── nodegroups...The core methods are: Name(): returns a unique name; Refresh(): refreshes the cloud provider's resource information; NodeGroups(): returns all node groups; NodeGroupForNode(...)...map[string]struct{} // cleanup work, e.g. goroutines Cleanup() error // called before every main loop iteration; used to dynamically update the cloud provider state // in particular the state returned by NodeGroups...newAsgToInstancesCache // build the instance -> asg cache m.instanceToAsg = newInstanceToAsgCache return nil} 2.4.3 NodeGroups...calls awsManager to fetch all ASGs func (aws *awsCloudProvider) NodeGroups() []cloudprovider.NodeGroup { // calls awsManager
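The interface described above can be sketched in miniature. The code below is an illustrative stand-in, not the real cluster-autoscaler API: `NodeGroup` and `CloudProvider` are reduced to a few methods, `asg` is a fake ASG-backed group, and `scalableGroups` mirrors the outer loop that skips groups already at `MaxSize` (like `skippedNodeGroups` in the real code):

```go
package main

import "fmt"

// NodeGroup is a minimal stand-in for cluster-autoscaler's
// cloudprovider.NodeGroup interface (illustrative subset only).
type NodeGroup interface {
	Id() string
	MinSize() int
	MaxSize() int
	TargetSize() (int, error)
}

// CloudProvider exposes all node groups, like CloudProvider.NodeGroups().
type CloudProvider interface {
	NodeGroups() []NodeGroup
}

// asg is a fake ASG-backed node group for this sketch.
type asg struct {
	id            string
	min, max, cur int
}

func (a *asg) Id() string               { return a.id }
func (a *asg) MinSize() int             { return a.min }
func (a *asg) MaxSize() int             { return a.max }
func (a *asg) TargetSize() (int, error) { return a.cur, nil }

type fakeProvider struct{ groups []NodeGroup }

func (p *fakeProvider) NodeGroups() []NodeGroup { return p.groups }

// scalableGroups mirrors the outer loop above: iterate over all node
// groups and skip the ones that are already at their maximum size.
func scalableGroups(cp CloudProvider) []string {
	var out []string
	for _, ng := range cp.NodeGroups() {
		cur, err := ng.TargetSize()
		if err != nil || cur >= ng.MaxSize() {
			continue // skipped, like skippedNodeGroups in the real code
		}
		out = append(out, ng.Id())
	}
	return out
}

func main() {
	cp := &fakeProvider{groups: []NodeGroup{
		&asg{id: "asg-a", min: 1, max: 5, cur: 3},
		&asg{id: "asg-b", min: 2, max: 2, cur: 2}, // already at max
	}}
	fmt.Println(scalableGroups(cp)) // [asg-a]
}
```

The interface indirection is what lets the same scale-up loop work against AWS ASGs, GCE MIGs, or any other provider that implements `NodeGroups()`.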
[/etc/salt/master] nodegroups: web1group: 'L@SN2012-07-010,SN2012-07-011,SN2012-07-012' web2group
$ mkdir -p /root/prometheus/groups/nodegroups && cd /root/prometheus/groups/nodegroups $ vim node.json...- job_name: 'node-exporter' file_sd_configs: - files: ['/usr/local/prometheus/groups/nodegroups...node-exporter' scrape_interval: 5s file_sd_configs: - files: ['/usr/local/prometheus/groups/nodegroups
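Assembled from the snippet above, file-based service discovery needs two pieces: a target file and a `file_sd_configs` entry pointing at it. The directory path comes from the snippet; the target address and the `*.json` glob are assumptions for illustration:

```yaml
# /usr/local/prometheus/groups/nodegroups/node.json (JSON, shown as a comment):
# [
#   { "targets": ["192.168.1.10:9100"], "labels": { "env": "prod" } }
# ]

# prometheus.yml scrape configuration
scrape_configs:
  - job_name: 'node-exporter'
    scrape_interval: 5s
    file_sd_configs:
      - files: ['/usr/local/prometheus/groups/nodegroups/*.json']
```

Prometheus re-reads matching files when they change, so new targets can be added to node.json without restarting the server.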
/prod pillar_roots: live: - /srv/salt/pillar/live game: - /srv/salt/pillar/game # host grouping nodegroups
bool, error) { fixed := false // iterate over all ASGs for _, nodeGroup := range context.CloudProvider.NodeGroups...updateIncorrectNodeGroupSizes(currentTime time.Time) { for _, nodeGroup := range csr.cloudProvider.NodeGroups
master.d/*.conf [root@linuxprobe ~]# mkdir /etc/salt/master.d [root@linuxprobe ~]# vi /etc/salt/master.d/nodegroups.conf # create new # group_org : # group_os : minions whose OS is CentOS nodegroups: group_org: 'L@linuxprobe.org
Centos and S@172.18.20.227' test.ping 3.6 Group matching [root@Saltstack01 /]# vim /etc/salt/master nodegroups
, i.e. minion IDs separated by commas; G@: match on a grain; S@: match on an IP subnet or address; [root@saltstack-master salt]# vim /etc/salt/master nodegroups
salt0-master ~]# salt -S '192.168.70.171' test.ping 5. Group matching [root@salt0-master ~]# vi /etc/salt/master nodegroups
apiVersion: eksctl.io/v1alpha5 kind: ClusterConfig metadata: name: region: us-west-2 nodeGroups
webserver Using a jinja template in the top file {% set self = grains['node_type'] %} - match: grain - {{ self }} 5. nodegroups...e.g. nodegroup.conf vim /etc/salt/master.d/nodegroup.conf # the same format also works directly in the master file; master.d/*.conf is loaded automatically nodegroups...test2: 'G@os:CentOS or test2' salt -N test1 test.ping # -N specifies the group name Using nodegroups in the top file
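Put together, a working nodegroups setup might look like the sketch below. Group names, minion IDs, and the webserver state are illustrative, based on the snippets above; nodegroups are defined on the master and matched either from the CLI with `-N` or in the top file with `- match: nodegroup`:

```yaml
# /etc/salt/master.d/nodegroup.conf -- loaded automatically by the master
nodegroups:
  test1: 'L@web1,web2 or G@os:CentOS'   # L@ = list of minion IDs, G@ = grain match
  test2: 'G@os:CentOS'

# /srv/salt/top.sls -- targeting a nodegroup from the top file
base:
  test1:
    - match: nodegroup
    - webserver
```

After restarting the master, `salt -N test1 test.ping` should ping every minion in the group.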
[root@linux-node1 ~]# vim /etc/salt/master ....... nodegroups: web1group: 'L@minion-192-168-1-102,minion
minionfs_update_interval: 60 minionfs_whitelist: module_dirs: nodegroups.../root/.ssh/config ssh_identities_only: False ssh_list_nodegroups
MemoryTable contains the NUMA node memory affinity information MemoryMap map[v1.ResourceName]*MemoryTable `json:"memoryMap"` // NodeGroups
salt-master.pid # root of the filesystem that saltstack can control root_dir: / # log file path log_file: /var/log/salt_master.log # group settings nodegroups