K8s Chinese community: https://www.kubernetes.org.cn/
Compared with earlier cluster managers such as Mesos and YARN, Kubernetes supports containers (Docker in particular) more natively, and provides stronger mechanisms for resource scheduling, automatic container lifecycle management, load balancing, and high availability, letting developers focus on their applications.
Kubernetes is an open-source system for managing containerized applications across multiple hosts in a cloud platform. Its goal is to make deploying containerized applications simple and powerful, and it provides mechanisms for deploying, planning, updating, and maintaining applications.
Kubernetes is a container scheduling and management system designed for production use, with native support for container-platform features such as load balancing, service discovery, high availability, rolling upgrades, and auto-scaling.
A K8s cluster consists of distributed storage (etcd), worker nodes (formerly called Minion, now Node), and a control node (Master). All cluster state is kept in etcd, while the Master runs the cluster's management and control components. Nodes are the hosts that actually run application containers; each Node runs a kubelet agent that manages the containers, images, and storage volumes on that node.
Installation options:
yum install (version 1.5.2, used in this document)
binary install
kubeadm install (official; everything runs in containers)
minikube install
compile from source
automated install
Official site: kubernetes.io
Chinese community: https://www.kubernetes.org.cn/
GitHub: https://github.com/kubernetes/kubernetes
kubectl command reference: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#
1. Linux kernel 3.10 or later
2. 64-bit system
3. 4 GB of memory
4. Install the EPEL repository
5. Install Docker
6. Enable the yum cache (keepcache=1) so the installed RPM packages are kept
Physical IPs (host IPs): the machines' own addresses
Cluster IPs (service addresses): 10.254.0.0/16
Pod IPs (container addresses): 172.16.0.0/16
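These three ranges must not overlap, or routing breaks. A quick sanity check, sketched with Python's standard ipaddress module (the 10.0.0.0/24 host range is an assumption based on the 10.0.0.11-13 machine addresses below):

```python
import ipaddress

# The three address ranges planned above. The host range is assumed
# to be 10.0.0.0/24 based on the machine IPs used in this document.
host_net = ipaddress.ip_network("10.0.0.0/24")       # physical (host) IPs
cluster_net = ipaddress.ip_network("10.254.0.0/16")  # service cluster IPs
pod_net = ipaddress.ip_network("172.16.0.0/16")      # pod (container) IPs

nets = [host_net, cluster_net, pod_net]
for i, a in enumerate(nets):
    for b in nets[i + 1:]:
        assert not a.overlaps(b), f"{a} overlaps {b}"
print("ranges are disjoint")
```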
Environment: three machines, two Nodes (compute nodes) and one Master.
yum repos required: CentOS-Base.repo; Docker 1.12
Hostname: k8s-master  IP: 10.0.0.11  OS: CentOS 7.2
yum install etcd -y
yum install docker -y
yum install kubernetes -y
yum install flannel -y
k8s-node-1 10.0.0.12 CentOS 7.2
k8s-node-2 10.0.0.13 CentOS 7.2
yum install docker -y
yum install kubernetes -y
yum install flannel -y
[root@k8s-master ~]# vim /etc/etcd/etcd.conf
ETCD_NAME="default"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"
Start:
systemctl enable etcd.service
systemctl start etcd.service
Check:
[root@k8s-master ~]# etcdctl -C http://10.0.0.11:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://10.0.0.11:2379
cluster is healthy
vim /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://10.0.0.11:2379"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
vim /etc/kubernetes/config
KUBE_MASTER="--master=http://10.0.0.11:8080"
Start:
systemctl enable kube-apiserver.service
systemctl start kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service
Verify that the services started:
systemctl status kube-apiserver.service kube-controller-manager.service kube-scheduler.service
On both nodes, point the common config at the master:
vim /etc/kubernetes/config
KUBE_MASTER="--master=http://10.0.0.11:8080"
On k8s-node-1, configure the kubelet:
vim /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=10.0.0.12"
KUBELET_API_SERVER="--api-servers=http://10.0.0.11:8080"
On k8s-node-2:
vim /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=10.0.0.13"
KUBELET_API_SERVER="--api-servers=http://10.0.0.11:8080"
Start kubelet and kube-proxy on both nodes:
systemctl enable kubelet.service
systemctl start kubelet.service
systemctl enable kube-proxy.service
systemctl start kube-proxy.service
Check:
[root@k8s-master ~]# kubectl get nodes
NAME STATUS AGE
10.0.0.12 Ready 3m
10.0.0.13 Ready 3m
On both the master and the nodes, edit /etc/sysconfig/flanneld:
vim /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://10.0.0.11:2379"
On the master, write the pod network range into etcd:
etcdctl mk /atomic.io/network/config '{ "Network": "172.16.0.0/16" }'
In practice:
[root@k8s-master ~]# etcdctl mk /atomic.io/network/config '{ "Network": "172.16.0.0/16" }'
{ "Network": "172.16.0.0/16" }
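flannel allocates each node its own subnet out of the Network written to etcd above; by default the per-node subnet length is /24 (flannel's SubnetLen default). A sketch of that carving using Python's ipaddress module:

```python
import ipaddress

# flannel hands each node one subnet carved from the configured Network.
# With the default SubnetLen of 24, a /16 pod network yields 256 node subnets.
pod_network = ipaddress.ip_network("172.16.0.0/16")
node_subnets = list(pod_network.subnets(new_prefix=24))

print(len(node_subnets))   # 256
print(node_subnets[42])    # 172.16.42.0/24 -- pod IPs like 172.16.42.2 come from here
```

This is why the pod IPs seen later (172.16.42.x on one node, 172.16.79.x on another) fall in different /24 blocks of the same /16.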
On the master:
systemctl enable flanneld.service
systemctl start flanneld.service
service docker restart
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service
On the nodes:
systemctl enable flanneld.service
systemctl start flanneld.service
service docker restart
systemctl restart kubelet.service
systemctl restart kube-proxy.service
Command: kubectl create -f hello.yaml
File contents:
[root@k8s-master ~]# vim hello.yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:
  restartPolicy: Never
  containers:
    - name: hello
      image: "docker.io/busybox:latest"
      command: ["/bin/echo", "hello", "world"]
In practice:
[root@k8s-master ~]# kubectl create -f hello.yaml
pod "hello-world" created
kubectl get pods                     list pods in the default namespace
kubectl describe pods hello-world    show details for the hello-world pod
kubectl delete pods hello-world      delete the pod named hello-world
kubectl replace -f nginx-rc.yaml     update/replace an existing resource from a file
kubectl edit rc nginx                edit a live resource in place; changes apply immediately
kubectl logs nginx-gt1jd             view the pod's logs
The image pull failed because the certificate is missing:
[root@k8s-master ~]# kubectl describe pods hello-world
Name: hello-world
Namespace: default
Node: 10.0.0.13/10.0.0.13
Start Time: Fri, 02 Feb 2018 19:28:31 +0800
Labels: <none>
Status: Pending
IP:
Controllers: <none>
Containers:
hello:
Container ID:
Image: docker.io/busybox:latest
Image ID:
Port:
Command:
/bin/echo
hello
world
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Volume Mounts: <none>
Environment Variables: <none>
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
No volumes.
QoS Class: BestEffort
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
4m 4m 1 {default-scheduler } Normal Scheduled Successfully assigned hello-world to 10.0.0.13
4m 1m 5 {kubelet 10.0.0.13} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request. details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)"
3m 5s 16 {kubelet 10.0.0.13} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"registry.access.redhat.com/rhel7/pod-infrastructure:latest\""
Fix: yum install python-rhsm* -y  (installs the CA certificate that /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt links to)
Create:
[root@k8s-master ~]# kubectl create -f nginx.yaml
pod "hello-nginx" created
Check that it succeeded:
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
hello-nginx 1/1 Running 0 2h 172.16.42.2 10.0.0.13
A ReplicationController (RC) keeps the specified number of pod replicas running at all times.
RC YAML:
[root@k8s-master ~]# cat nginx-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
Create the RC:
[root@k8s-master ~]# kubectl create -f nginx-rc.yaml
replicationcontroller "nginx" created
Check:
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-gt1jd 1/1 Running 0 2m 172.16.79.2 10.0.0.12
With this in place, even if the pod is deleted, the RC immediately starts a replacement.
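Conceptually the RC is a reconciliation loop: compare the observed pods against spec.replicas and create or delete pods to close the gap. A toy sketch of that idea (not the actual controller code; pod names are made up):

```python
def reconcile(running_pods, desired):
    """Return the actions an RC-style control loop would take to
    bring the observed pod count to the desired replica count."""
    actions = []
    while len(running_pods) < desired:
        running_pods.append(f"pod-{len(running_pods)}")  # hypothetical pod name
        actions.append("create")
    while len(running_pods) > desired:
        running_pods.pop()
        actions.append("delete")
    return actions

# One replica desired, pod just deleted -> the loop recreates it at once.
print(reconcile([], desired=1))  # ['create']
```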
Rolling upgrade: assume the myweb RC (v1, defined in web-rc.yaml below) is already running with 2 replicas. Write a v2 RC:
[root@k8s-master ~]# cat web-rc2.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb-2
spec:
  replicas: 2
  selector:
    app: myweb-2
  template:
    metadata:
      labels:
        app: myweb-2
    spec:
      containers:
        - name: myweb-2
          image: kubeguide/tomcat-app:v2
          ports:
            - containerPort: 8080
          env:
            - name: MYSQL_SERVICE_HOST
              value: 'mysql'
            - name: MYSQL_SERVICE_PORT
              value: '3306'
Upgrade (rolling update from myweb to myweb-2):
[root@k8s-master ~]# kubectl rolling-update myweb -f web-rc2.yaml
Created myweb-2
Scaling up myweb-2 from 0 to 2, scaling down myweb from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
Scaling myweb-2 up to 1
Scaling myweb down to 1
Scaling myweb-2 up to 2
Scaling myweb down to 0
Update succeeded. Deleting myweb
replicationcontroller "myweb" rolling updated to "myweb-2"
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myweb-2-mmlcm 1/1 Running 0 32s 172.16.42.3 10.0.0.13
myweb-71438 1/1 Running 0 2m 172.16.42.2 10.0.0.13
myweb-cx9j2 1/1 Running 0 2m 172.16.79.3 10.0.0.12
nginx-gt1jd 1/1 Running 0 1h 172.16.79.2 10.0.0.12
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myweb-2-0kmzf 1/1 Running 0 7s 172.16.79.4 10.0.0.12
myweb-2-mmlcm 1/1 Running 0 1m 172.16.42.3 10.0.0.13
myweb-cx9j2 1/1 Running 0 2m 172.16.79.3 10.0.0.12
nginx-gt1jd 1/1 Running 0 1h 172.16.79.2 10.0.0.12
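The trace above alternates between scaling the new RC up and the old one down, one replica at a time, never dropping below the desired count and never exceeding desired+1 pods. A toy simulation of that loop (not kubectl's actual implementation):

```python
def rolling_update(old, new, desired, max_surge=1):
    """Simulate the one-by-one replacement kubectl rolling-update performs.

    old/new are the current replica counts of the old and new RC.
    At most desired + max_surge pods exist at any moment, and the
    total never drops below `desired`.
    """
    steps = []
    while new < desired or old > 0:
        if new < desired and old + new < desired + max_surge:
            new += 1
            steps.append(f"new up to {new}")
        else:
            old -= 1
            steps.append(f"old down to {old}")
    return steps

# Matches the myweb -> myweb-2 trace: up to 1, down to 1, up to 2, down to 0
print(rolling_update(old=2, new=0, desired=2))
```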
Roll back using the original v1 RC definition:
[root@k8s-master ~]# cat web-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb
spec:
  replicas: 2
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
        - name: myweb
          image: kubeguide/tomcat-app:v1
          ports:
            - containerPort: 8080
          env:
            - name: MYSQL_SERVICE_HOST
              value: 'mysql'
            - name: MYSQL_SERVICE_PORT
              value: '3306'
Run the rollback:
[root@k8s-master ~]# kubectl rolling-update myweb-2 -f web-rc.yaml
Created myweb
Scaling up myweb from 0 to 2, scaling down myweb-2 from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
Scaling myweb up to 1
Scaling myweb-2 down to 1
Scaling myweb up to 2
Scaling myweb-2 down to 0
Update succeeded. Deleting myweb-2
replicationcontroller "myweb-2" rolling updated to "myweb"
Check:
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myweb-mbndc 1/1 Running 0 1m 172.16.79.3 10.0.0.12
myweb-qh38r 1/1 Running 0 2m 172.16.42.2 10.0.0.13
nginx-gt1jd 1/1 Running 0 1h 172.16.79.2 10.0.0.12
[root@k8s-master ~]# cat web-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: myweb
spec:
  type: NodePort
  ports:
    - port: 8080
      nodePort: 30001
  selector:
    app: myweb
[root@k8s-master ~]# kubectl create -f web-svc.yaml
[root@k8s-master ~]# kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 <none> 443/TCP 6h
myweb 10.254.91.34 <nodes> 8080:30001/TCP 1m
Then, on a node, check that port 30001 is listening.
Then test in a browser at http://<node-ip>:30001.
Dashboard deployment:
[root@k8s-master ~]# cat dashboard.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # Keep the name in sync with image version and
  # gce/coreos/kube-manifests/addons/dashboard counterparts
  name: kubernetes-dashboard-latest
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
        version: latest
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
        - name: kubernetes-dashboard
          image: index.tenxcloud.com/google_containers/kubernetes-dashboard-amd64:v1.4.1
          resources:
            # keep request = limit to keep this container in guaranteed class
            limits:
              cpu: 100m
              memory: 50Mi
            requests:
              cpu: 100m
              memory: 50Mi
          ports:
            - containerPort: 9090
          args:
            - --apiserver-host=http://10.0.0.11:8080
          livenessProbe:
            httpGet:
              path: /
              port: 9090
            initialDelaySeconds: 30
            timeoutSeconds: 30
And its Service:
[root@k8s-master ~]# cat dashboard-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
    - port: 80
      targetPort: 9090
Create both:
kubectl create -f dashboard.yaml
kubectl create -f dashboard-svc.yaml
Then open: http://10.0.0.11:8080/ui/