
Kubernetes Deployment: Installing with kubeadm from Chinese Mirror Sources


A hands-on walkthrough of deploying Kubernetes 1.13.2 with kubeadm, using mirror sources reachable from within China.

1. Kubernetes Architecture

Kubernetes consists of the following core components:

  • etcd stores the state of the entire cluster;
  • apiserver is the sole entry point for resource operations and provides authentication, authorization, access control, API registration, and discovery;
  • controller manager maintains the cluster state, handling failure detection, auto-scaling, rolling updates, and so on;
  • scheduler handles resource scheduling, placing Pods onto suitable machines according to the configured scheduling policies;
  • kubelet maintains container lifecycles and also manages volumes (CVI) and networking (CNI);
  • Container runtime manages images and does the actual running of Pods and containers (CRI);
  • kube-proxy provides in-cluster service discovery and load balancing for Services;

Besides the core components, there are several recommended add-ons:

  • kube-dns provides DNS for the entire cluster
  • Ingress Controller provides external access to services
  • Heapster provides resource monitoring
  • Dashboard provides a GUI
  • Federation provides clusters spanning availability zones
  • Fluentd-elasticsearch provides cluster log collection, storage, and querying
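Once the cluster is up (sections 3 and 4 below), most of these components are visible as pods in the kube-system namespace. A quick way to see them on a kubeadm-built cluster:

kubectl get pods -n kube-system
# typical entries: etcd-<node>, kube-apiserver-<node>,
# kube-controller-manager-<node>, kube-scheduler-<node>,
# kube-proxy-xxxxx and coredns-xxxxxxxxxx-xxxxx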

The sections below walk through the installation.

2. Installing kubeadm

Install kubelet, kubeadm, and kubectl from the Aliyun mirror inside China:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

systemctl enable kubelet
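To confirm which versions the Aliyun repo actually installed (this guide targets v1.13.2), a quick check:

# Verify the installed versions:
kubeadm version -o short
kubelet --version
kubectl version --client --short
# If the repo has moved past 1.13.x, versions can be pinned instead, e.g.:
# yum install -y kubelet-1.13.2 kubeadm-1.13.2 kubectl-1.13.2 --disableexcludes=kubernetes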

CentOS 7 users also need to set the following sysctls so that bridged traffic passes through iptables:

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
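If sysctl --system rejects these two keys ("No such file or directory"), the br_netfilter kernel module is usually not loaded yet. A minimal fix:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # reload on every boot
sysctl --system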

Since Kubernetes 1.8, swap must be disabled; with the default configuration, kubelet will refuse to start otherwise. Disable swap as follows:

swapoff -a

Then edit /etc/fstab and comment out the swap auto-mount entry so swap stays off after a reboot, and confirm with free -m that swap is off. To tune the swappiness parameter, add the following line to /etc/sysctl.d/k8s.conf:

vm.swappiness=0

Run sysctl -p /etc/sysctl.d/k8s.conf to apply the change.
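The /etc/fstab edit mentioned above can also be scripted. A sketch, assuming the swap entry is an uncommented line with a whitespace-delimited "swap" field:

cp /etc/fstab /etc/fstab.bak                      # keep a backup
sed -i '/^[^#].*\sswap\s/s/^/#/' /etc/fstab       # comment out the swap mount
free -m                                           # the Swap: line should read 0 total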


Note: for reference, the main kubeadm subcommands are:

kubeadm init     bootstrap a Kubernetes master (control-plane) node
kubeadm join     bootstrap a worker node and join it to the cluster
kubeadm upgrade  upgrade a Kubernetes cluster to a newer version
kubeadm config   if the cluster was initialized with kubeadm v1.7.x or earlier, some configuration is needed before kubeadm upgrade can be used
kubeadm token    manage the tokens used by kubeadm join
kubeadm reset    revert any changes kubeadm init or kubeadm join made to the host

3. Installing the Master with kubeadm

kubeadm init starts the master-side components: the API Server, etcd, the Scheduler, the Controller Manager, and so on.

kubeadm init parameters:

--apiserver-advertise-address string
The IP address the API Server will advertise it is listening on. If set to `0.0.0.0`, the address of the default network interface is used.
--apiserver-bind-port int32     Default: 6443
The port the API Server binds to.
--apiserver-cert-extra-sans stringSlice
Optional extra Subject Alternative Names (SANs) for the API Server serving certificate. Can be IP addresses or DNS names.
--cert-dir string     Default: "/etc/kubernetes/pki"
The path where certificates are stored.
--config string
Path to a kubeadm configuration file. Warning: the config file feature is experimental.
--cri-socket string     Default: "/var/run/dockershim.sock"
The CRI socket to connect to.
--dry-run
Do not apply any changes; only output what would be done.
--feature-gates string
A set of key=value pairs that toggle various features. Options are:
Auditing=true|false (currently ALPHA - default=false)
CoreDNS=true|false (default=true)
DynamicKubeletConfig=true|false (currently BETA - default=false)
-h, --help
Help for the init command.
--ignore-preflight-errors stringSlice
A list of checks whose errors are shown as warnings rather than errors. Example: 'IsPrivilegedUser,Swap'. The value 'all' ignores errors from all checks.
--kubernetes-version string     Default: "stable-1"
Choose a specific Kubernetes version for the control plane.
--node-name string
Specify the node name.
--pod-network-cidr string
Specify the range of IP addresses for the pod network. If set, the control plane automatically allocates CIDRs to every node.
--service-cidr string     Default: "10.96.0.0/12"
Use an alternative range of IP addresses for service virtual IPs.
--service-dns-domain string     Default: "cluster.local"
Use an alternative domain for services, e.g. "myorg.internal".
--skip-token-print
Skip printing the default token generated by `kubeadm init`.
--token string
The token used to establish bidirectional trust between worker nodes and the master. Format: [a-z0-9]{6}\.[a-z0-9]{16} - example: abcdef.0123456789abcdef
--token-ttl duration     Default: 24h0m0s
How long before the token is automatically deleted (e.g. 1s, 2m, 3h). If set to '0', the token never expires.
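For reference, a complete invocation combining several of these flags might look like the following; the address and node name are illustrative, matching the environment used later in this guide:

kubeadm init \
  --apiserver-advertise-address=192.168.1.120 \
  --kubernetes-version=v1.13.2 \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.96.0.0/12 \
  --node-name=node10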

Before running kubeadm init, you can first run kubeadm config images pull to test connectivity to gcr.io and verify that the required images can be pulled. Since "k8s.gcr.io", "gcr.io", and "quay.io" are difficult to reach from within China, this guide sets up a self-hosted Docker registry instead.

Pulling the k8s.gcr.io and Other Images Through a Private Registry

Stand up a private registry:

docker pull registry
docker run --restart=always -d -p 15000:5000 -v /mnt/date/registry:/var/lib/registry registry

Alternatively, you can use a pre-packaged Harbor registry that already contains the required images:

Baidu Cloud Drive: Harbor private registry

k8s.gcr.io/coredns:1.2.6
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/kube-apiserver:v1.13.0
k8s.gcr.io/kube-controller-manager:v1.13.0
k8s.gcr.io/kube-proxy:v1.13.0
k8s.gcr.io/kube-scheduler:v1.13.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/addon-resizer:1.8.4
k8s.gcr.io/metrics-server-amd64:v0.3.1
k8s.gcr.io/traefik:1.7.5
k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0

gcr.io/kubernetes-helm/tiller:v2.12.0

quay.io/calico/cni:v3.3.2
quay.io/calico/node:v3.3.2
quay.io/calico/typha:v3.3.2

After downloading it, load the image and run it:

docker load -i /path/to/k8s-repo-1.13.0
docker run --restart=always -d -p 80:5000 --name repo harbor.io:1180/system/k8s-repo:v1.13.0
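A quick sanity check that the registry is answering, assuming it was started with -p 80:5000 as above and serves the standard Docker Registry v2 API:

curl http://localhost/v2/_catalog
# should return a JSON list of the hosted repositories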
Building the Private Registry and Uploading the Images Yourself (optional; skip this step if you only want to install)

Pull the images from Docker Hub:

docker pull mirrorgooglecontainers/kube-apiserver:v1.13.2
docker pull mirrorgooglecontainers/kube-proxy:v1.13.2
docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.2
docker pull mirrorgooglecontainers/kube-scheduler:v1.13.2
docker pull mirrorgooglecontainers/coredns:1.2.6
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0

docker pull shikanon096/traefik:1.7.5
docker pull shikanon096/gcr.io.kubernetes-helm.tiller:v2.12.0

docker pull mirrorgooglecontainers/addon-resizer:1.8.4
docker pull mirrorgooglecontainers/metrics-server-amd64:v0.3.1

docker pull quay.io/calico/cni:v3.3.2
docker pull quay.io/calico/node:v3.3.2
docker pull quay.io/calico/typha:v3.3.2

...

Retagging

Retag the images so they can be pushed straight to the local registry:

docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.2 192.168.1.118:80/kube-controller-manager:v1.13.2

docker tag mirrorgooglecontainers/kube-proxy:v1.13.2 192.168.1.118:80/kube-proxy:v1.13.2

docker tag mirrorgooglecontainers/kube-scheduler:v1.13.2 192.168.1.118:80/kube-scheduler:v1.13.2

docker tag mirrorgooglecontainers/kube-apiserver:v1.13.2 192.168.1.118:80/kube-apiserver:v1.13.2

...

Since the registry serves plain HTTP, the Docker daemon must be told to trust it.

Add the following to /etc/docker/daemon.json:

{
  ...
  "insecure-registries": ["192.168.1.118:80"]
}
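After editing daemon.json, restart Docker so the setting takes effect:

systemctl restart docker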

Push the images to the private registry:

docker push 192.168.1.118:80/kube-scheduler:v1.13.2
docker push 192.168.1.118:80/kube-apiserver:v1.13.2
docker push 192.168.1.118:80/kube-proxy:v1.13.2
docker push 192.168.1.118:80/kube-controller-manager:v1.13.2
...
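Since the retag-and-push pairs all follow one pattern, they can be looped. A sketch, assuming the registry address and image versions used above:

REGISTRY="192.168.1.118:80"
for img in kube-apiserver kube-proxy kube-controller-manager kube-scheduler; do
  docker tag "mirrorgooglecontainers/${img}:v1.13.2" "${REGISTRY}/${img}:v1.13.2"
  docker push "${REGISTRY}/${img}:v1.13.2"
done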

Configure the registry address: on the master, point Docker and DNS at the new registry:

# Mark the upstream registries as insecure so Docker accepts plain HTTP

mkdir -p /etc/docker
echo -e '{\n"insecure-registries":["k8s.gcr.io", "gcr.io", "quay.io"]\n}' > /etc/docker/daemon.json
systemctl restart docker

# The IP of the machine running the registry
REGISTRY_HOST="192.168.1.118"

# Point the domains above at the registry host via /etc/hosts
yes | cp /etc/hosts /etc/hosts_bak
cat /etc/hosts_bak | grep -vE '(gcr.io|harbor.io|quay.io)' > /etc/hosts
echo """
$REGISTRY_HOST gcr.io harbor.io k8s.gcr.io quay.io """ >> /etc/hosts
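Before pulling, you can confirm the hosts override took effect:

getent hosts k8s.gcr.io gcr.io quay.io
# every line should resolve to $REGISTRY_HOST (192.168.1.118 here)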

Test it:

kubeadm config images pull

I0125 01:41:57.398374    5002 version.go:94] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.txt: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
I0125 01:41:57.398536    5002 version.go:95] falling back to the local client version: v1.13.2
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.13.2
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.13.2
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.13.2
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.13.2
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.2.24
[config/images] Pulled k8s.gcr.io/coredns:1.2.6

Once the images pull successfully, you can run kubeadm init directly:

kubeadm init \
  --kubernetes-version=v1.13.2 \
  --pod-network-cidr=10.244.0.0/16

Note: if anything goes wrong partway through, you can roll back with kubeadm reset.

When using kubeadm you may hit a kubelet "node not found" error:

E1002 23:32:36.072441 49157 kubelet.go:2236] node "master01" not found
E1002 23:32:36.172630 49157 kubelet.go:2236] node "master01" not found
E1002 23:32:36.273892 49157 kubelet.go:2236] node "master01" not found

This is usually triggered by the --apiserver-advertise-address parameter; try removing it and running again.

Output of a successful kubeadm init:

kubeadm init --kubernetes-version=v1.13.2 --pod-network-cidr=10.244.0.0/16

[init] Using Kubernetes version: v1.13.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node10 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.120]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node10 localhost] and IPs [192.168.1.120 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node10 localhost] and IPs [192.168.1.120 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 59.004591 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node10" as an annotation
[mark-control-plane] Marking the node node10 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node10 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ngcxcv.5b60k99xhckulox4
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.1.120:6443 --token ngcxcv.5b60k99xhckulox4 --discovery-token-ca-cert-hash sha256:630385738470e6ad0fa065a92eb6519d9a05e593b3896fccadef4f39e025f273

Set up kubectl access for a regular account:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

As the root user, you can instead run:

export KUBECONFIG=/etc/kubernetes/admin.conf
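To keep this setting across shell sessions, one option (assuming bash) is to append it to root's profile:

echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /root/.bashrc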

4. Deploying a Node with kubeadm and Joining It to the Master

Simply run the command printed at the end of the kubeadm init run above:

kubeadm join 192.168.1.120:6443 --token xmjnn0.39xbep2zpyh0rjam --discovery-token-ca-cert-hash sha256:9c2dc63bab2a1392e797bca8104eac3ce115589af0486259a06d3277eb21b4cb
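Bootstrap tokens expire after 24 hours by default (see --token-ttl above). If yours has expired, generate a fresh join command on the master:

kubeadm token create --print-join-command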

To be able to use kubectl on the node, copy the admin kubeconfig over from the master:

scp root@<master ip>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes

Check the result:

kubectl get nodes
NAME     STATUS     ROLES    AGE    VERSION
master   NotReady   <none>   2m3s   v1.13.2
node10   NotReady   master   6m7s   v1.13.2

5. Installing a Network Plugin

The nodes above report NotReady precisely because no network plugin is installed yet. Before installing one, make sure the required images are already in your private Docker registry, or that the machine can reach quay.io and other sites outside China.

Flannel

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml

Calico

kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
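Whichever plugin you apply, the CNI pods take a moment to start; the nodes switch from NotReady to Ready once they are running:

kubectl get pods -n kube-system -w   # wait for the flannel/calico pods to reach Running
kubectl get nodes                    # STATUS should now show Ready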

Note: an overlay network is a network built on top of another network; its nodes can be thought of as connected by virtual or logical links. Although many physical links exist underneath, each virtual or logical link corresponds to a path through them. Flannel is essentially an overlay network: it encapsulates TCP packets inside another network packet for routing, forwarding, and communication.
