
Deploying KubeSphere on Multiple Linux Nodes

Author: OY
Published: 2023-02-23 20:14:28
Column: OY_学习记录

一、Prepare the Environment

  • 4C/8G (master)
  • 2C/4G × 2 (workers)
  • CentOS 7.9
  • All nodes can reach each other on the internal network
  • Each machine has its own hostname
  • Firewall allows ports 30000–32767 (the Kubernetes NodePort range)
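One way to satisfy the hostname and firewall requirements is to give every node an /etc/hosts entry for all three machines and open the NodePort range. A minimal sketch — the hostnames and IPs below match the sample config-sample.yaml later in this article and are assumptions for your environment:

```shell
# Hostname mappings for all three nodes (replace IPs/names with your own).
# Review the fragment, then append it to /etc/hosts on every node.
cat > /tmp/hosts-fragment <<'EOF'
10.0.2.15 k8s-master
10.0.2.7  k8s-node1
10.0.2.8  k8s-node2
EOF
cat /tmp/hosts-fragment

# Open the NodePort range (run as root on every node):
#   firewall-cmd --permanent --add-port=30000-32767/tcp
#   firewall-cmd --reload
```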

二、Create the Cluster with KubeKey

1、Download KubeKey

export KKZONE=cn


curl -sfL https://get-kk.kubesphere.io | VERSION=v1.1.1 sh -

chmod +x kk

2、Create the Cluster Configuration File

./kk create config --with-kubernetes v1.20.4 --with-kubesphere v3.1.1

3、Create the Cluster

./kk create cluster -f config-sample.yaml

Before running this, edit config-sample.yaml: fill in each node's name, address, user, and password, and assign the etcd/master/worker roles.


A sample config-sample.yaml:

apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
    - {
        name: k8s-master,
        address: 10.0.2.15,
        internalAddress: 10.0.2.15,
        user: root,
        password: 123456,
      }
    - {
        name: k8s-node1,
        address: 10.0.2.7,
        internalAddress: 10.0.2.7,
        user: root,
        password: 123456,
      }
    - {
        name: k8s-node2,
        address: 10.0.2.8,
        internalAddress: 10.0.2.8,
        user: root,
        password: 123456,
      }
  roleGroups:
    etcd:
      - k8s-master
    master:
      - k8s-master
    worker:
      - k8s-node1
      - k8s-node2
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.20.4
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons: []

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.1.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    redis:
      enabled: false
    redisVolumSize: 2Gi
    openldap:
      enabled: false
    openldapVolumeSize: 2Gi
    minioVolumeSize: 20Gi
    monitoring:
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
    es:
      elasticsearchMasterVolumeSize: 4Gi
      elasticsearchDataVolumeSize: 20Gi
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  console:
    enableMultiLogin: true
    port: 30880
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
  devops:
    enabled: false
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: false
    ruler:
      enabled: true
      replicas: 2
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    prometheusMemoryRequest: 400Mi
    prometheusVolumeSize: 20Gi
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
  kubeedge:
    enabled: false
    cloudCore:
      nodeSelector: { "node-role.kubernetes.io/worker": "" }
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress:
          - ""
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: { "node-role.kubernetes.io/worker": "" }
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: { "node-role.kubernetes.io/worker": "" }
        tolerations: []
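Every pluggable component in the ClusterConfiguration above follows the same pattern: flip its `enabled` field and let ks-installer reconcile. For example, to turn on metrics-server and the app store later (fields exactly as they appear in the sample above):

```yaml
metrics_server:
  enabled: true        # required for HPA and `kubectl top`
openpitrix:
  store:
    enabled: true      # KubeSphere app store
```

After editing, re-apply the ClusterConfiguration (or edit it in place with kubectl); ks-installer watches the resource and deploys the newly enabled components.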

Error: conntrack is required.

# Fix: install conntrack on every node
yum install -y conntrack
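conntrack is not the only binary KubeKey expects on each node; socat, ebtables, and ipset are also on its dependency list. A small pre-flight check (a sketch — install whatever it reports missing):

```shell
# Report which of KubeKey's common dependencies are already installed.
for bin in conntrack socat ebtables ipset; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "$bin: ok"
  else
    echo "$bin: MISSING (yum install -y $bin)"
  fi
done
```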

4、Check the Installation Progress

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
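The jsonpath expression in that command simply resolves to the name of the first ks-installer pod, which `kubectl logs -f` then tails. Illustrated on a trimmed pod-list JSON (the pod name is hypothetical), with a python3 one-liner standing in for kubectl's jsonpath engine:

```shell
# Fake `kubectl get pod -o json` output containing a single installer pod.
cat > /tmp/pods.json <<'EOF'
{"items":[{"metadata":{"name":"ks-installer-7b4f6c8d-abcde"}}]}
EOF
# Equivalent of jsonpath '{.items[0].metadata.name}':
python3 -c 'import json; print(json.load(open("/tmp/pods.json"))["items"][0]["metadata"]["name"])'
```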

三、Demo


Visit http://192.168.56.11:30880/ and log in with KubeSphere's default account, admin / P@88w0rd (you are prompted to change the password on first login).

This article is part of the Tencent Cloud self-media sharing program, shared from the author's personal blog. Originally published: 2023-01-28.
