K8s Installation

Environment Initialization

1: Set the hostname on each of the two hosts

hostnamectl set-hostname node1    # on the first host
hostnamectl set-hostname node2    # on the second host

2: Configure the host mappings

cat <<EOF > /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.124.129 node1
192.168.124.132 node2
EOF

3: On node1, set up passwordless SSH login

ssh-keygen       # press Enter at every prompt
ssh-copy-id node2
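
To confirm the passwordless login works, a quick check:

ssh node2 hostname    # should print "node2" without prompting for a password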

4: On both hosts, stop the firewall, disable swap, and disable SELinux

# Disable the firewall:
$ systemctl stop firewalld
$ systemctl disable firewalld

# Disable SELinux:
$ setenforce 0
# and set SELINUX=disabled in /etc/selinux/config to make it permanent:
$ cat /etc/selinux/config
SELINUX=disabled

# Create /etc/sysctl.d/k8s.conf with the following content:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

# Disable swap:
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
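
A quick sanity check after these steps (note: getenforce reports Permissive right after setenforce 0; it only shows Disabled after a reboot with SELINUX=disabled):

getenforce                                  # Permissive or Disabled
free -m | grep -i swap                      # swap totals should be 0
sysctl net.bridge.bridge-nf-call-iptables   # should print 1 (if the key is missing, run modprobe br_netfilter first)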

Image Preparation

If your nodes can already reach gcr.io (for example through a proxy), you can skip this step. Otherwise we need to download the required gcr.io images onto the nodes in advance; the prerequisite is that Docker is already installed successfully.

1. On the master node, run the following commands:

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.12.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.12.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.12.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.2
docker pull cnych/flannel:v0.10.0-amd64
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/k8sth/kubernetes-dashboard-amd64:v1.8.3
docker pull registry.cn-hangzhou.aliyuncs.com/k8sth/heapster-grafana-amd64:v4.4.3
docker pull registry.cn-hangzhou.aliyuncs.com/k8sth/heapster-influxdb-amd64:v1.3.3
docker pull registry.cn-hangzhou.aliyuncs.com/k8sth/heapster-amd64:v1.4.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.0 k8s.gcr.io/kube-apiserver:v1.12.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.12.0 k8s.gcr.io/kube-controller-manager:v1.12.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.12.0 k8s.gcr.io/kube-proxy:v1.12.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.12.0 k8s.gcr.io/kube-scheduler:v1.12.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2
docker tag cnych/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64

2. On the slave node, run the following commands:

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.12.0
docker pull cnych/flannel:v0.10.0-amd64
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.12.0 k8s.gcr.io/kube-proxy:v1.12.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag cnych/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
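
If you prefer, the pull-and-retag steps for the core images can be scripted; a minimal sketch that mirrors the commands above:

for img in kube-apiserver:v1.12.0 kube-controller-manager:v1.12.0 kube-proxy:v1.12.0 \
           kube-scheduler:v1.12.0 etcd:3.2.24 coredns:1.2.2 pause:3.1; do
  # pull from the Aliyun mirror, then retag to the name kubeadm expects
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$img
  docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
done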

Installing and Configuring Kubernetes

The packages must be installed on both nodes. First configure the yum repository:

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF

We are installing the latest version, so running yum install -y kubeadm is enough; it installs the required dependency packages as well.

If you need a specific version, first check which versions are available:

yum list kubeadm --showduplicates
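
To then pin that version, an install command along these lines should work (a sketch; take the exact version strings from the yum list output):

yum install -y kubeadm-1.12.0 kubelet-1.12.0 kubectl-1.12.0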

Configuring kubelet

Edit the kubelet configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and add one configuration line (before ExecStart):

Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
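
After the edit, the [Service] section of the drop-in looks roughly like this (a sketch; the other Environment= lines and the exact ExecStart arguments vary with the installed package version):

[Service]
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
# ... Environment= lines installed by the kubeadm package ...
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_EXTRA_ARGS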


# Enable kubelet at boot
systemctl enable kubelet

Start kubelet:

systemctl start kubelet
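
Until kubeadm init (or kubeadm join) writes its configuration, kubelet may keep restarting; that is expected at this point. You can watch its state with:

systemctl status kubelet
journalctl -u kubelet -f    # follow the kubelet logs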

Cluster Installation

Run kubeadm init on node1 (the master):

$ kubeadm init \
>    --kubernetes-version=v1.12.0 \
>    --pod-network-cidr=10.244.0.0/16 \
>    --apiserver-advertise-address=192.168.124.129 \
>    --ignore-preflight-errors=Swap
[init] using Kubernetes version: v1.12.0
[preflight] running pre-flight checks
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 17.12.1-ce. Latest validated version: 18.06
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [node1 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node1 localhost] and IPs [192.168.124.129 127.0.0.1 ::1]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.124.129]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 22.507741 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node node1 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node node1 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation
[bootstraptoken] using token: momf47.0scodcv3cm6t75vm
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node as root:
  kubeadm join 192.168.124.129:6443 --token momf47.0scodcv3cm6t75vm --discovery-token-ca-cert-hash sha256:cfeed429d671eeb39b8980ada10b55e79057cc65e108660ef86ae5043d6275f8

The main steps of initialization are:

  • 1. kubeadm runs pre-flight checks.
  • 2. It generates the token and certificates.
  • 3. It generates the KubeConfig files that kubelet needs to communicate with the Master.
  • 4. It installs the Master components, pulling their Docker images from Google's registry; this step can take a while depending on network quality, and locally cached images are used first when available.
  • 5. It installs the add-ons kube-proxy and kube-dns.
  • 6. The Kubernetes Master is initialized successfully.
  • 7. It prints instructions for the follow-up steps.
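
As the init log above notes, the control-plane images can also be pre-pulled before running init:

kubeadm config images pull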

Recovering from a Failed Initialization

kubeadm reset

Run the following commands on node1:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
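
kubectl should now be able to talk to the cluster; a quick check:

kubectl cluster-info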

Distribute the certificate files generated by kubeadm to node2:

scp -r /etc/kubernetes/pki node2:/etc/kubernetes/

Deploy the flannel network; this only needs to be done on node1:

$ wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
$ kubectl apply -f kube-flannel.yml
$ kubectl get node
NAME    STATUS   ROLES    AGE     VERSION
node1   Ready    master   5m32s   v1.12.2

After installation, you can check the status of the cluster components with kubectl get pods; if they are all in the Running state, congratulations, your master node was installed successfully.

kubectl get pods --all-namespaces
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-576cbf47c7-mntdb        1/1     Running   0          5m20s
kube-system   coredns-576cbf47c7-rswvv        1/1     Running   0          5m20s
kube-system   etcd-node1                      1/1     Running   0          4m33s
kube-system   kube-apiserver-node1            1/1     Running   0          4m44s
kube-system   kube-controller-manager-node1   1/1     Running   0          4m28s
kube-system   kube-flannel-ds-amd64-gxqq8     1/1     Running   0          24s
kube-system   kube-proxy-4xgvb                1/1     Running   0          5m20s
kube-system   kube-scheduler-node1            1/1     Running   0          4m38s
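
Control-plane component health can also be queried directly:

kubectl get cs    # shorthand for componentstatuses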

Deploying the Dashboard Add-on

The contents of kubernetes-dashboard.yaml are as follows:

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>

# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/k8sth/kubernetes-dashboard-amd64:v1.8.3
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Apply the manifest:

kubectl create -f kubernetes-dashboard.yaml
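
To confirm the Dashboard pod is running and the Service is exposed on NodePort 30000 (names and labels as defined in the manifest above):

kubectl -n kube-system get pods -l k8s-app=kubernetes-dashboard
kubectl -n kube-system get svc kubernetes-dashboard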

Get the token and use it to log in:

$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-z6lhq
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 1b1a9541-ed6d-11e8-8458-000c2960d34d
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXo2bGhxIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIxYjFhOTU0MS1lZDZkLTExZTgtODQ1OC0wMDBjMjk2MGQzNGQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.sdAwrS-tLEHY62ufIwZBrC58-yVsqMGV_AdeyFu8EBPpzdSfgCDemPIyEonGdz9cU6CLMpLnvJ4r7OLBexCaZ4WPIh_Q6N3YjK150d--3uzxQVxtoezVrrgBUCAUgC1KNewa0Suu32A-c-tPj2ykxSGpIYVDDQDQKqw_2E91diF-WKD9YMTl2H9sQU6N9RvSW7t0kQKcBFe8mDTUl4jrT-LnaISL_Qxcn0gwTlU-cbTBYuTpKvyLJ-aa6DfmFebQWA_Je-CLwh6ayk1X6DVVaSE_H5S9atGvnLQ1QVuj3ukHRtKnNSzAGM-boBaGTWZ0Khxo3sbsi7kvUZArYV1Vow
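
If you only need the raw token, a one-liner sketch that decodes it straight out of the secret:

kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') -o jsonpath='{.data.token}' | base64 --decode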

Access the dashboard through Firefox and enter the token to log in (Firefox makes it easy to accept the self-signed certificate the Dashboard serves):

https://192.168.124.129:30000/#!/login

Installing Heapster

$ kubectl create -f kube-heapster/influxdb/
$ kubectl create -f kube-heapster/rbac/

The Heapster files are laid out as follows:

[root@node01 ~]# tree kube-heapster/
kube-heapster/
├── influxdb
│   ├── grafana.yaml
│   ├── heapster.yaml
│   └── influxdb.yaml
└── rbac
    └── heapster-rbac.yaml

grafana.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      containers:
      - name: grafana
        image: registry.cn-hangzhou.aliyuncs.com/k8sth/heapster-grafana-amd64:v4.4.3
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
          # The following env variables are required to make Grafana accessible via
          # the kubernetes api-server proxy. On production clusters, we recommend
          # removing these env variables, setup auth for grafana, and expose the grafana
          # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  # type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana

heapster.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      nodeSelector:
        node-role.kubernetes.io/master: ""
      containers:
      - name: heapster
        image: registry.cn-hangzhou.aliyuncs.com/k8sth/heapster-amd64:v1.4.2
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster

influxdb.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      containers:
      - name: influxdb
        image: registry.cn-hangzhou.aliyuncs.com/k8sth/heapster-influxdb-amd64:v1.3.3
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb

heapster-rbac.yaml

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
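
Once all four manifests are applied, the monitoring pods can be checked via the task=monitoring label they share:

kubectl -n kube-system get pods -l task=monitoring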

Allow the master to run pods (by default the master does not schedule pods):

kubectl taint nodes --all node-role.kubernetes.io/master-
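
You can verify that the taint was removed:

kubectl describe node node1 | grep Taints    # should print: Taints: <none>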

Adding node2 to the Cluster

Run the following command on node2 to add the node to the cluster:

kubeadm join 192.168.124.129:6443 --token momf47.0scodcv3cm6t75vm --discovery-token-ca-cert-hash sha256:cfeed429d671eeb39b8980ada10b55e79057cc65e108660ef86ae5043d6275f8 --ignore-preflight-errors=Swap
[preflight] running pre-flight checks
    [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
 2. Provide the missing builtin kernel ipvs support
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 17.12.1-ce. Latest validated version: 18.06
[discovery] Trying to connect to API Server "192.168.124.129:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.124.129:6443"
[discovery] Requesting info from "https://192.168.124.129:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.124.129:6443"
[discovery] Successfully established connection with API Server "192.168.124.129:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node2" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
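
If you join more machines later and the original token has expired (bootstrap tokens last 24 hours by default), generate a fresh join command on the master:

kubeadm token create --print-join-command
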
$ kubectl get node
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    master   78m   v1.12.2
node2   Ready    <none>   64m   v1.12.2
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
kube-system   coredns-576cbf47c7-mntdb                1/1     Running   0          20m
kube-system   coredns-576cbf47c7-rswvv                1/1     Running   0          20m
kube-system   etcd-node1                              1/1     Running   0          19m
kube-system   heapster-6955774cc5-gt4c8               1/1     Running   0          13m
kube-system   kube-apiserver-node1                    1/1     Running   0          20m
kube-system   kube-controller-manager-node1           1/1     Running   0          19m
kube-system   kube-flannel-ds-amd64-2qzpw             1/1     Running   0          6m55s
kube-system   kube-flannel-ds-amd64-gxqq8             1/1     Running   0          15m
kube-system   kube-proxy-4xgvb                        1/1     Running   0          20m
kube-system   kube-proxy-xhvfn                        1/1     Running   0          6m55s
kube-system   kube-scheduler-node1                    1/1     Running   0          20m
kube-system   kubernetes-dashboard-55f88765fb-qvm9w   1/1     Running   0          15m
kube-system   monitoring-grafana-9658ddc99-k8np4      1/1     Running   0          13m
kube-system   monitoring-influxdb-96bf68f65-9mn86     1/1     Running   0          13m

Accessing the Dashboard

Open https://192.168.124.129:30000/#!/login and sign in with the token obtained above.

Originally published on the WeChat public account 分母为零 (gmg1014) on 2019-04-21.
