1. What is a namespace in k8s used for?
Answer: The Namespace is another very important concept in Kubernetes. Namespaces are most often used for multi-tenant resource isolation: different workloads can be isolated from one another by placing them in different namespaces.
2. Creating a namespace in k8s.
[root@k8s-master ~]# kubectl create namespace biehl
namespace "biehl" created
[root@k8s-master ~]#
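The same namespace can also be created declaratively. A minimal manifest sketch (the file name biehl-namespace.yaml is my own choice, not part of the original setup):

apiVersion: v1
kind: Namespace
metadata:
  name: biehl        # a namespace only needs a name

Applying it with kubectl create -f biehl-namespace.yaml has the same effect as the imperative command above.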
3. Listing namespaces in k8s.
[root@k8s-master ~]# kubectl get namespace
NAME STATUS AGE
biehl Active 38s
default Active 17d
kube-system Active 17d
You can view the contents of one specific namespace, or of all namespaces, as shown below:
[root@k8s-master ~]# kubectl get all --namespace=kube-system
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/kube-dns 1 1 1 1 4d
deploy/kubernetes-dashboard-latest 1 1 1 1 3d

NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/kube-dns 10.254.230.254 <none> 53/UDP,53/TCP 4d
svc/kubernetes-dashboard 10.254.12.102 <none> 80/TCP 3d

NAME DESIRED CURRENT READY AGE
rs/kube-dns-778415672 1 1 1 4d
rs/kubernetes-dashboard-latest-3333846798 1 1 1 3d

NAME READY STATUS RESTARTS AGE
po/kube-dns-778415672-q23st 4/4 Running 4 58m
po/kubernetes-dashboard-latest-3333846798-j8zjc 1/1 Running 1 3d
[root@k8s-master ~]# kubectl get all --all-namespaces
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kube-system deploy/kube-dns 1 1 1 1 4d
kube-system deploy/kubernetes-dashboard-latest 1 1 1 1 3d

NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default svc/kubernetes 10.254.0.1 <none> 443/TCP 17d
kube-system svc/kube-dns 10.254.230.254 <none> 53/UDP,53/TCP 4d
kube-system svc/kubernetes-dashboard 10.254.12.102 <none> 80/TCP 3d

NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system rs/kube-dns-778415672 1 1 1 4d
kube-system rs/kubernetes-dashboard-latest-3333846798 1 1 1 3d

NAMESPACE NAME READY STATUS RESTARTS AGE
default po/busybox2 1/1 Running 9 4d
kube-system po/kube-dns-778415672-q23st 4/4 Running 4 58m
kube-system po/kubernetes-dashboard-latest-3333846798-j8zjc 1/1 Running 1 3d
[root@k8s-master ~]#
4. Deleting a namespace. Careful, this is especially dangerous: it deletes all k8s resources inside that namespace.
[root@k8s-master ~]# kubectl delete namespace biehl
namespace "biehl" deleted
[root@k8s-master ~]# kubectl get namespace
NAME STATUS AGE
biehl Terminating 1m
default Active 17d
kube-system Active 17d
[root@k8s-master ~]# kubectl get namespace
NAME STATUS AGE
default Active 17d
kube-system Active 17d
[root@k8s-master ~]#
5. Exercise: using a k8s namespace for testing.
[root@k8s-master ~]# cd k8s/
[root@k8s-master k8s]# ls
book-master.war dashboard dashboard.zip deploy health pod rc skydns skydns.zip svc tomcat_demo tomcat_demo.zip
[root@k8s-master k8s]# mkdir namespace
[root@k8s-master k8s]# cd namespace/
[root@k8s-master namespace]# ls
[root@k8s-master namespace]# cp ../rc/nginx_rc.yaml .
[root@k8s-master namespace]# cp ../svc/nginx_svc.yaml .
[root@k8s-master namespace]# ls
nginx_rc.yaml nginx_svc.yaml
[root@k8s-master namespace]#
Edit the nginx_rc.yaml configuration file and add the namespace under the metadata section.
[root@k8s-master namespace]# vim nginx_rc.yaml
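The edited file itself is not reproduced here, so the following is only a sketch of what nginx_rc.yaml might look like after the change. Only the namespace: biehl line under metadata is the point being made; the labels and the image are assumptions (the image name follows the private-registry nginx images used earlier in this series), and the replica count of 2 matches the myweb RC seen in the output below:

apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb
  namespace: biehl              # the added line: the RC now lives in the biehl namespace
spec:
  replicas: 2
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 192.168.110.133:5000/nginx:1.13   # assumed image from the private registry
        ports:
        - containerPort: 80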
Now create this RC (replication controller) and observe what makes namespaces useful: even though two RCs may have the same name, they can exist at the same time as long as they belong to different namespaces.
[root@k8s-master namespace]# kubectl create namespace biehl
namespace "biehl" created
[root@k8s-master namespace]# kubectl create -f nginx_rc.yaml
replicationcontroller "myweb" created
[root@k8s-master namespace]# kubectl get all --namespace=biehl
NAME DESIRED CURRENT READY AGE
rc/myweb 2 2 2 23s

NAME READY STATUS RESTARTS AGE
po/myweb-5qwsn 1/1 Running 0 23s
po/myweb-6j0qs 1/1 Running 0 23s
[root@k8s-master namespace]# kubectl get all --all-namespaces
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kube-system deploy/kube-dns 1 1 1 1 4d
kube-system deploy/kubernetes-dashboard-latest 1 1 1 1 3d

NAMESPACE NAME DESIRED CURRENT READY AGE
biehl rc/myweb 2 2 2 31s

NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default svc/kubernetes 10.254.0.1 <none> 443/TCP 17d
kube-system svc/kube-dns 10.254.230.254 <none> 53/UDP,53/TCP 4d
kube-system svc/kubernetes-dashboard 10.254.12.102 <none> 80/TCP 3d

NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system rs/kube-dns-778415672 1 1 1 4d
kube-system rs/kubernetes-dashboard-latest-3333846798 1 1 1 3d

NAMESPACE NAME READY STATUS RESTARTS AGE
biehl po/myweb-5qwsn 1/1 Running 0 31s
biehl po/myweb-6j0qs 1/1 Running 0 31s
default po/busybox2 1/1 Running 9 4d
kube-system po/kube-dns-778415672-q23st 4/4 Running 4 1h
kube-system po/kubernetes-dashboard-latest-3333846798-j8zjc 1/1 Running 1 3d
[root@k8s-master namespace]#
If the pods need to be reachable from outside, a Service (SVC) has to be created. Edit the nginx_svc.yaml configuration file and add the namespace under the metadata section. Note that the Service and the RC must be in the same namespace.
[root@k8s-master namespace]# vim nginx_svc.yaml
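Again, the edited file is not shown; the sketch below assumes a NodePort Service. The namespace: biehl line is the relevant change, the 80:30000 mapping matches the 80:30000/TCP shown in the output below, and the selector is an assumption that must match the pod labels used in nginx_rc.yaml:

apiVersion: v1
kind: Service
metadata:
  name: myweb
  namespace: biehl              # must be the same namespace as the RC it exposes
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30000
  selector:
    app: myweb                  # assumed pod label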
Create the Service and inspect it, as shown below:
[root@k8s-master namespace]# kubectl create -f nginx_svc.yaml
service "myweb" created
[root@k8s-master namespace]# kubectl get all --namespace=biehl
NAME DESIRED CURRENT READY AGE
rc/myweb 2 2 2 4m

NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/myweb 10.254.154.202 <nodes> 80:30000/TCP 6s

NAME READY STATUS RESTARTS AGE
po/myweb-5qwsn 1/1 Running 0 4m
po/myweb-6j0qs 1/1 Running 0 4m
[root@k8s-master namespace]# kubectl get all --all-namespaces
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kube-system deploy/kube-dns 1 1 1 1 4d
kube-system deploy/kubernetes-dashboard-latest 1 1 1 1 3d

NAMESPACE NAME DESIRED CURRENT READY AGE
biehl rc/myweb 2 2 2 4m

NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
biehl svc/myweb 10.254.154.202 <nodes> 80:30000/TCP 14s
default svc/kubernetes 10.254.0.1 <none> 443/TCP 17d
kube-system svc/kube-dns 10.254.230.254 <none> 53/UDP,53/TCP 4d
kube-system svc/kubernetes-dashboard 10.254.12.102 <none> 80/TCP 3d

NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system rs/kube-dns-778415672 1 1 1 4d
kube-system rs/kubernetes-dashboard-latest-3333846798 1 1 1 3d

NAMESPACE NAME READY STATUS RESTARTS AGE
biehl po/myweb-5qwsn 1/1 Running 0 4m
biehl po/myweb-6j0qs 1/1 Running 0 4m
default po/busybox2 1/1 Running 9 4d
kube-system po/kube-dns-778415672-q23st 4/4 Running 4 1h
kube-system po/kubernetes-dashboard-latest-3333846798-j8zjc 1/1 Running 1 3d
[root@k8s-master namespace]#
Now check whether it can actually be reached from outside, as shown below:
[root@k8s-master namespace]# curl -I 192.168.110.133:30000
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Mon, 22 Jun 2020 13:01:09 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes

[root@k8s-master namespace]#
You can also look at this in the Kubernetes Dashboard GUI.
6. Reverse-proxy access to applications in Kubernetes (k8s): using the proxy to reach services running in k8s.
6.1. To access an application in k8s you need to create a Service. Services come in different types, such as NodePort and ClusterIP.
1) The first way is the NodePort type.
type: NodePort
ports:
- port: 80
  targetPort: 80
  nodePort: 30008
2) The second way is the ClusterIP type; for example, the k8s Dashboard UI is reached through a ClusterIP Service. ClusterIP is the default Service type.
type: ClusterIP
ports:
- port: 80
  targetPort: 80
ClusterIP access goes through the apiserver proxy, e.g. http://192.168.110.133:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard. You can substitute the namespace name and the service name to reach other services, e.g. http://192.168.110.133:8080/api/v1/proxy/namespaces/biehl/services/myweb for the Service created above:
[root@k8s-master namespace]# kubectl get all --namespace=biehl
NAME DESIRED CURRENT READY AGE
rc/myweb 2 2 2 22m

NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/myweb 10.254.154.202 <nodes> 80:30000/TCP 18m

NAME READY STATUS RESTARTS AGE
po/myweb-5qwsn 1/1 Running 0 22m
po/myweb-6j0qs 1/1 Running 0 22m
[root@k8s-master namespace]#
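For comparison with the two fragments above, a complete ClusterIP Service manifest might look like the following sketch; all names here are hypothetical and not taken from the cluster above:

apiVersion: v1
kind: Service
metadata:
  name: myapp                   # hypothetical service name
  namespace: default
spec:
  type: ClusterIP               # may be omitted, ClusterIP is the default
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: myapp                  # hypothetical pod label

Such a Service only gets a cluster-internal IP; from outside the cluster it can still be reached through the apiserver proxy URL pattern shown above.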
7. Deploying Heapster monitoring for k8s.
7.1. Kubernetes add-on components.
1) kube-dns: provides DNS for the whole cluster.
2) Ingress Controller: provides an external entry point for services.
3) Heapster: provides resource monitoring.
4) Dashboard: provides a GUI.
5) Federation: provides clusters spanning availability zones.
6) Fluentd-elasticsearch: provides cluster log collection, storage, and querying.
7.2. Installing and deploying Heapster monitoring.
[root@k8s-master ~]# cd k8s/
[root@k8s-master k8s]# ls
book-master.war dashboard dashboard.zip deploy health namespace pod rc skydns skydns.zip svc tomcat_demo tomcat_demo.zip
[root@k8s-master k8s]# mkdir heapster
[root@k8s-master k8s]# cd heapster/
[root@k8s-master heapster]# wget https://www.qstack.com.cn/heapster-influxdb.zip
--2020-06-25 15:05:44-- https://www.qstack.com.cn/heapster-influxdb.zip
Resolving www.qstack.com.cn (www.qstack.com.cn)... 111.202.85.37, 123.125.46.149
Connecting to www.qstack.com.cn (www.qstack.com.cn)|111.202.85.37|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2636 (2.6K) [application/zip]
Saving to: ‘heapster-influxdb.zip’

100%[==========================================================================================>] 2,636 --.-K/s in 0.001s

2020-06-25 15:05:45 (2.11 MB/s) - ‘heapster-influxdb.zip’ saved [2636/2636]

[root@k8s-master heapster]# unzip heapster-influxdb.zip
Archive: heapster-influxdb.zip
creating: heapster-influxdb/
inflating: heapster-influxdb/grafana-service.yaml
inflating: heapster-influxdb/heapster-controller.yaml
inflating: heapster-influxdb/heapster-service.yaml
inflating: heapster-influxdb/influxdb-grafana-controller.yaml
inflating: heapster-influxdb/influxdb-service.yaml
[root@k8s-master heapster]# ls
heapster-influxdb heapster-influxdb.zip
[root@k8s-master heapster]# cd heapster-influxdb/
[root@k8s-master heapster-influxdb]# ls
grafana-service.yaml heapster-controller.yaml heapster-service.yaml influxdb-grafana-controller.yaml influxdb-service.yaml
[root@k8s-master heapster-influxdb]#
Edit the configuration file and point it at your own apiserver address, as shown below:
[root@k8s-master heapster-influxdb]# vim heapster-controller.yaml
The change points the data-source address at your own apiserver (the original post shows it as a screenshot). Note also that every one of these configuration files references a container image; by default the images come from the upstream registries, but you can also download them in advance and push them to your own private registry.
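The heapster-controller.yaml from the downloaded bundle is not reproduced in the text, so the following is only a sketch of what such a controller typically contains; the exact labels, image name and flags in the real file may differ. The --source URL is the part that has to point at your own apiserver (192.168.110.133:8080 in this environment), and image is the field discussed above:

apiVersion: v1
kind: ReplicationController
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: heapster
    spec:
      containers:
      - name: heapster
        image: docker.io/fishchen/heapster-amd64:latest    # replaced later with the private-registry copy
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:http://192.168.110.133:8080?inClusterConfig=false   # your own apiserver address
        - --sink=influxdb:http://monitoring-influxdb:8086                         # assumed InfluxDB sink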
[root@k8s-master heapster-influxdb]# ls
grafana-service.yaml heapster-controller.yaml heapster-service.yaml influxdb-grafana-controller.yaml influxdb-service.yaml
[root@k8s-master heapster-influxdb]# kubectl create -f .
service "monitoring-grafana" created
replicationcontroller "heapster" created
service "heapster" created
replicationcontroller "influxdb-grafana" created
service "monitoring-influxdb" created
[root@k8s-master heapster-influxdb]#
Once the command above finishes, all five objects have been created. Now check whether the deployment succeeded, as shown below:
[root@k8s-master heapster-influxdb]# kubectl get svc --namespace=kube-system
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
heapster 10.254.196.163 <none> 80/TCP 2m
kube-dns 10.254.230.254 <none> 53/UDP,53/TCP 6d
kubernetes-dashboard 10.254.12.102 <none> 80/TCP 6d
monitoring-grafana 10.254.196.53 <none> 80/TCP 2m
monitoring-influxdb 10.254.0.197 <none> 8083/TCP,8086/TCP 2m
[root@k8s-master heapster-influxdb]# kubectl get pod --namespace=kube-system
NAME READY STATUS RESTARTS AGE
heapster-26qj0 0/1 ImagePullBackOff 0 2m
influxdb-grafana-g1pg1 0/2 ErrImagePull 0 2m
kube-dns-778415672-q23st 4/4 Running 4 2d
kubernetes-dashboard-latest-3333846798-j8zjc 1/1 Running 1 6d
The Services were created, but the Pods failed to come up because their images could not be pulled. So instead I will download the images first and push them to the private registry, then create the objects from there. First pull the heapster-amd64 image.
[root@k8s-master ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
192.168.110.133:5000/tomcat latest 2eb5a120304e 2 weeks ago 647 MB
docker.io/tomcat latest 2eb5a120304e 2 weeks ago 647 MB
192.168.110.133:5000/mysql 5.7.30 9cfcce23593a 2 weeks ago 448 MB
docker.io/mysql 5.7.30 9cfcce23593a 2 weeks ago 448 MB
docker.io/busybox latest 1c35c4412082 3 weeks ago 1.22 MB
docker.io/registry latest 708bc6af7e5e 5 months ago 25.8 MB
192.168.110.133:5000/nginx 1.15 53f3fd8007f7 13 months ago 109 MB
docker.io/nginx 1.15 53f3fd8007f7 13 months ago 109 MB
192.168.110.133:5000/kubernetes-dashboard-amd64 v1.10.0 9e12bc435ba6 15 months ago 122 MB
registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64 v1.10.0 9e12bc435ba6 15 months ago 122 MB
192.168.110.133:5000/nginx 1.13 ae513a47849c 2 years ago 109 MB
docker.io/nginx 1.13 ae513a47849c 2 years ago 109 MB
registry.access.redhat.com/rhel7/pod-infrastructure latest 99965fb98423 2 years ago 209 MB
192.168.110.133:5000/pod-infrastructure latest 34d3450d733b 3 years ago 205 MB
registry.cn-hangzhou.aliyuncs.com/google-containers/kubernetes-dashboard-amd64 v1.5.0 e5133bac8024 3 years ago 88.9 MB
192.168.110.133:5000/kubernetes-dashboard-amd64 v1.5.0 e5133bac8024 3 years ago 88.9 MB
myhub.fdccloud.com/library/kubedns-amd64 1.9 26cf1ed9b144 3 years ago 47 MB
myhub.fdccloud.com/library/dnsmasq-metrics-amd64 1.0 5271aabced07 3 years ago 14 MB
myhub.fdccloud.com/library/kube-dnsmasq-amd64 1.4 3ec65756a89b 3 years ago 5.13 MB
myhub.fdccloud.com/library/exechealthz-amd64 1.2 93a43bfb39bf 3 years ago 8.37 MB
[root@k8s-master ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9985f7777ad8 192.168.110.133:5000/pod-infrastructure:latest "/pod" 4 minutes ago Up 4 minutes k8s_POD.f88ae15c_influxdb-grafana-g1pg1_kube-system_18d4836b-b6b3-11ea-95a8-000c2919d52d_e8beeae9
48570ce53483 192.168.110.133:5000/pod-infrastructure:latest "/pod" 4 minutes ago Up 4 minutes k8s_POD.f88ae15c_heapster-26qj0_kube-system_1880f7ad-b6b3-11ea-95a8-000c2919d52d_f74ff6b4
9e809e332352 docker.io/busybox:latest "sleep 3600" 30 minutes ago Up 30 minutes k8s_busybox.f685e864_busybox2_default_d531ac21-b139-11ea-80b4-000c2919d52d_173f998d
9d9c246b8448 myhub.fdccloud.com/library/exechealthz-amd64:1.2 "/exechealthz '--c..." 2 days ago Up 2 days k8s_healthz.d09c0e9e_kube-dns-778415672-q23st_kube-system_38d64226-b47e-11ea-af83-000c2919d52d_5791ed6e
bf36e34ee5b5 myhub.fdccloud.com/library/dnsmasq-metrics-amd64:1.0 "/dnsmasq-metrics ..." 2 days ago Up 2 days k8s_dnsmasq-metrics.b0e0edc1_kube-dns-778415672-q23st_kube-system_38d64226-b47e-11ea-af83-000c2919d52d_25402444
d3f96f8c4a93 myhub.fdccloud.com/library/kube-dnsmasq-amd64:1.4 "/usr/sbin/dnsmasq..." 2 days ago Up 2 days k8s_dnsmasq.243653a3_kube-dns-778415672-q23st_kube-system_38d64226-b47e-11ea-af83-000c2919d52d_b6766883
617caefac177 myhub.fdccloud.com/library/kubedns-amd64:1.9 "/kube-dns --domai..." 2 days ago Up 2 days k8s_kubedns.80237e3f_kube-dns-778415672-q23st_kube-system_38d64226-b47e-11ea-af83-000c2919d52d_43df1c07
fff0e755ebbe 192.168.110.133:5000/pod-infrastructure:latest "/pod" 2 days ago Up 2 days k8s_POD.bec6e800_kube-dns-778415672-q23st_kube-system_38d64226-b47e-11ea-af83-000c2919d52d_e45259a0
66cd3f3b42e5 192.168.110.133:5000/pod-infrastructure:latest "/pod" 2 days ago Up 2 days k8s_POD.f88ae15c_busybox2_default_d531ac21-b139-11ea-80b4-000c2919d52d_fa900973
5e72b0961647 registry "/entrypoint.sh /e..." 7 days ago Up 2 days 0.0.0.0:5000->5000/tcp registry
[root@k8s-master ~]# docker search kubernetes/heapster:canary
INDEX NAME DESCRIPTION STARS OFFICIAL AUTOMATED
[root@k8s-master ~]# docker search kubernetes/heapster
INDEX NAME DESCRIPTION STARS OFFICIAL AUTOMATED
[root@k8s-master ~]# docker search heapster
INDEX NAME DESCRIPTION STARS OFFICIAL AUTOMATED
docker.io docker.io/fishchen/heapster-amd64 k8s.gcr.io/heapster-amd64 3 [OK]
docker.io docker.io/mirrorgooglecontainers/heapster-amd64 3
docker.io docker.io/ist0ne/heapster-grafana-amd64 gcr.io/google_containers/heapster-grafana-... 2 [OK]
docker.io docker.io/ist0ne/heapster-influxdb-amd64 gcr.io/google_containers/heapster-influxdb... 2 [OK]
docker.io docker.io/lvanneo/heapster-amd64 heapster-amd64 2 [OK]
docker.io docker.io/lvanneo/heapster-influxdb-amd64 heapster-influxdb-amd64 2 [OK]
docker.io docker.io/wavefronthq/heapster-amd64 2
docker.io docker.io/diamanti/heapster 1
docker.io docker.io/fishchen/heapster-influxdb-amd64 k8s.gcr.io/heapster-influxdb-amd64 1 [OK]
docker.io docker.io/ist0ne/heapster https://gcr.io/google_containers/heapster 1 [OK]
docker.io docker.io/lvanneo/heapster-grafana-amd64 heapster-grafana-amd64 1 [OK]
docker.io docker.io/vish/heapster 1
docker.io docker.io/alvintz/heapster_influxdb gcr.io/google_containers/heapster_influxdb 0 [OK]
docker.io docker.io/angelnu/heapster-grafana 0
docker.io docker.io/bitnami/heapster Bitnami Docker Image for Heapster 0 [OK]
docker.io docker.io/bonifaido/heapster Kubernetes Heapster that work on Rancher. 0 [OK]
docker.io docker.io/cedbossneo/heapster Heapster for Kubernetes 0.10.0 0
docker.io docker.io/ibmcom/heapster Docker Image for IBM Cloud private-CE (Com... 0
docker.io docker.io/ibmcom/heapster-ppc64le Docker Image for IBM Cloud Private-CE (Com... 0
docker.io docker.io/ist0ne/heapster-amd64 gcr.io/google_containers/heapster-amd64 0 [OK]
docker.io docker.io/pupudaye/heapster-influxdb-amd64 heapster-influxdb-amd64:v1.3.3 0 [OK]
docker.io docker.io/rancher/heapster-influxdb-amd64 0
docker.io docker.io/steady1211/heapster_grafana-v2.6.0-2 heapster_grafana-v2.6.0-2 0 [OK]
docker.io docker.io/vish/heapster-buddy-coreos 0
docker.io docker.io/zhaoqing/heapster-amd64 heapster-amd64:1.4.2 0 [OK]
[root@k8s-master ~]# docker pull docker.io/fishchen/heapster-amd64
Using default tag: latest
Trying to pull repository docker.io/fishchen/heapster-amd64 ...
sha256:694fedc23a10a39c8396dd0cec3625df11b809a0a4d7d215edc3becfc356835c: Pulling from docker.io/fishchen/heapster-amd64
ee522dc3e6e3: Pull complete
7f01af7be3bc: Pull complete
Digest: sha256:694fedc23a10a39c8396dd0cec3625df11b809a0a4d7d215edc3becfc356835c
Status: Downloaded newer image for docker.io/fishchen/heapster-amd64:latest
[root@k8s-master ~]#
Then pull a heapster_influxdb image, as shown below:
[root@k8s-master ~]# docker search heapster_influxdb
INDEX NAME DESCRIPTION STARS OFFICIAL AUTOMATED
docker.io docker.io/ist0ne/heapster_influxdb-amd64 https://gcr.io/google_containers/heapster_... 1 [OK]
docker.io docker.io/aking666/heapster_influxdb Automated build heapster_influxdb image. 0 [OK]
docker.io docker.io/alvintz/heapster_influxdb gcr.io/google_containers/heapster_influxdb 0 [OK]
docker.io docker.io/arthas/heapster_influxdb heapster_influxdb 0 [OK]
docker.io docker.io/digbull/heapster_influxdb heapster_influxdb 0 [OK]
docker.io docker.io/dingzh/heapster_influxdb-v0.7 heapster_influxdb:v0.7 0 [OK]
docker.io docker.io/dockermonster/heapster_influxdb heapster_influxdb 0 [OK]
docker.io docker.io/eviloop/heapster_influxdb 0
docker.io docker.io/fociceo/heapster_influxdb heapster_influxdb 0 [OK]
docker.io docker.io/forestgun007/heapster_influxdb heapster_influxdb v0.7 0 [OK]
docker.io docker.io/haojianxun/heapster_influxdb 0
docker.io docker.io/hasura/heapster_influxdb Clone of gcr.io/google_containers/heapster... 0
docker.io docker.io/ist0ne/heapster_influxdb https://gcr.io/google_containers/heapster_... 0 [OK]
docker.io docker.io/locutus1/heapster_influxdb heapster_influxdb 0 [OK]
docker.io docker.io/maodouio/heapster_influxdb 0
docker.io docker.io/mirrorgooglecontainers/heapster_influxdb 0
docker.io docker.io/nelcy/heapster_influxdb 0
docker.io docker.io/sailsxu/heapster_influxdb heapster_influxdb:v0.6 0 [OK]
docker.io docker.io/shaloulcy/heapster_influxdb heapster_influxdb:v0.7 0 [OK]
docker.io docker.io/shenshouer/heapster_influxdb gcr.io/google_containers/heapster_influxdb... 0
docker.io docker.io/siriuszg/heapster_influxdb gcr.io/google_containers/heapster-influxdb... 0 [OK]
docker.io docker.io/storm2016/heapster_influxdb heapster_influxdb 0 [OK]
docker.io docker.io/typhoon1986/heapster_influxdb heapster_influxdb for kubernetes 0
docker.io docker.io/vish/heapster_influxdb 0
docker.io docker.io/zhaosijun/heapster_influxdb mirror from gcr.io/google_containers/heaps... 0 [OK]
[root@k8s-master ~]# docker pull docker.io/dingzh/heapster_influxdb-v0.7
Using default tag: latest
Trying to pull repository docker.io/dingzh/heapster_influxdb-v0.7 ...
sha256:2c9a3ac89f208147ab1d8731920cec681bf13596dcec381035623587a164cca1: Pulling from docker.io/dingzh/heapster_influxdb-v0.7
bbe5368a0432: Pull complete
98ff17f2ae39: Pull complete
b2c6d7e5c802: Pull complete
577d9278897e: Pull complete
a3ed95caeb02: Pull complete
e6bcb7eeeab6: Pull complete
ae45ea99a302: Pull complete
Digest: sha256:2c9a3ac89f208147ab1d8731920cec681bf13596dcec381035623587a164cca1
Status: Downloaded newer image for docker.io/dingzh/heapster_influxdb-v0.7:latest
[root@k8s-master ~]#
Then pull a heapster_grafana image, as shown below:
[root@k8s-master ~]# docker search heapster_grafana
INDEX NAME DESCRIPTION STARS OFFICIAL AUTOMATED
docker.io docker.io/ist0ne/heapster_grafana https://gcr.io/google_containers/heapster_... 1 [OK]
docker.io docker.io/aking666/heapster_grafana Automated build heapster_grafana image. 0 [OK]
docker.io docker.io/alvintz/heapster_grafana gcr.io/google_containers/heapster_grafana 0 [OK]
docker.io docker.io/arthas/heapster_grafana heapster_grafana 0 [OK]
docker.io docker.io/caicloud/heapster_grafana 0
docker.io docker.io/cloudrti/heapster_grafana fork of the original to disable anonymous ... 0 [OK]
docker.io docker.io/digbull/heapster_grafana heapster_grafana 0 [OK]
docker.io docker.io/dingzh/heapster_grafana-v3.1.1 heapster_grafana:v3.1.1 0 [OK]
docker.io docker.io/dockermonster/heapster_grafana heapster_grafana 0 [OK]
docker.io docker.io/forestgun007/heapster_grafana heapster_grafana:v3.1.1 0 [OK]
docker.io docker.io/gcrxio/heapster_grafana 0
docker.io docker.io/ist0ne/heapster_grafana-amd64 https://gcr.io/google_containers/heapster_... 0 [OK]
docker.io docker.io/lupan/heapster_grafana heapster_grafana-v2.6.0-2 0
docker.io docker.io/mirrorgooglecontainers/heapster_grafana 0
docker.io docker.io/pkhanhao/heapster_grafana heapster_grafana 0 [OK]
docker.io docker.io/sailsxu/heapster_grafana heapster_grafana:v2.6.0 0 [OK]
docker.io docker.io/shaloulcy/heapster_grafana heapster_grafana 0 [OK]
docker.io docker.io/shenshouer/heapster_grafana gcr.io/google_containers/heapster_grafana:... 0
docker.io docker.io/siriuszg/heapster_grafana heapster_grafana 0
docker.io docker.io/steady1211/heapster_grafana-v2.6.0-2 heapster_grafana-v2.6.0-2 0 [OK]
docker.io docker.io/storm2016/heapster_grafana heapster_grafana 0 [OK]
docker.io docker.io/visenzek8s/heapster_grafana 0
docker.io docker.io/vish/heapster_grafana 0
docker.io docker.io/zhaosijun/heapster_grafana mirror from gcr.io/google_containers/heaps... 0 [OK]
docker.io docker.io/zzoujinn/heapster_grafana heapster_grafana 0 [OK]
[root@k8s-master ~]# docker pull docker.io/steady1211/heapster_grafana-v2.6.0-2
Using default tag: latest
Trying to pull repository docker.io/steady1211/heapster_grafana-v2.6.0-2 ...
sha256:5074ecd1033ca000ff642231b7a0f3af9a3e599ffd48ddd56c74670139eab63d: Pulling from docker.io/steady1211/heapster_grafana-v2.6.0-2
03e1855d4f31: Pull complete
a3ed95caeb02: Pull complete
7f1ce4d71e93: Pull complete
23d149931be4: Pull complete
2e86b9218e3a: Pull complete
db71c66d238d: Pull complete
de3678928269: Pull complete
Digest: sha256:5074ecd1033ca000ff642231b7a0f3af9a3e599ffd48ddd56c74670139eab63d
Status: Downloaded newer image for docker.io/steady1211/heapster_grafana-v2.6.0-2:latest
[root@k8s-master ~]#
Delete the previously created svc and pods. Note that you have to delete them with the namespace specified, otherwise they will not be found.
[root@k8s-master heapster-influxdb]# kubectl get pod --namespace=kube-system
NAME READY STATUS RESTARTS AGE
heapster-26qj0 0/1 ImagePullBackOff 0 3m
influxdb-grafana-g1pg1 0/2 ErrImagePull 0 3m
kube-dns-778415672-q23st 4/4 Running 4 2d
kubernetes-dashboard-latest-3333846798-j8zjc 1/1 Running 1 6d
[root@k8s-master heapster-influxdb]# kubectl delete svc --namespace=kube-system heapster monitoring-grafana monitoring-influxdb
service "heapster" deleted
service "monitoring-grafana" deleted
service "monitoring-influxdb" deleted
[root@k8s-master heapster-influxdb]# kubectl get svc --namespace=kube-system
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns 10.254.230.254 <none> 53/UDP,53/TCP 6d
kubernetes-dashboard 10.254.12.102 <none> 80/TCP 6d
[root@k8s-master heapster-influxdb]# kubectl get pod --namespace=kube-system
NAME READY STATUS RESTARTS AGE
heapster-26qj0 0/1 ImagePullBackOff 0 11m
influxdb-grafana-g1pg1 0/2 ImagePullBackOff 0 11m
kube-dns-778415672-q23st 4/4 Running 4 2d
kubernetes-dashboard-latest-3333846798-j8zjc 1/1 Running 1 6d
[root@k8s-master heapster-influxdb]# kubectl delete pod --namespace=kube-system heapster-26qj0 influxdb-grafana-g1pg1
pod "heapster-26qj0" deleted
pod "influxdb-grafana-g1pg1" deleted
[root@k8s-master heapster-influxdb]# kubectl get pod --namespace=kube-system
NAME READY STATUS RESTARTS AGE
heapster-6fz3z 0/1 ContainerCreating 0 5s
influxdb-grafana-ms803 0/2 ContainerCreating 0 5s
kube-dns-778415672-q23st 4/4 Running 4 2d
kubernetes-dashboard-latest-3333846798-j8zjc 1/1 Running 1 6d
[root@k8s-master heapster-influxdb]#
Now push the downloaded images to your own private registry.
[root@k8s-master ~]# docker tag docker.io/fishchen/heapster-amd64:latest 192.168.110.133:5000/docker.io/fishchen/heapster-amd64:latest
[root@k8s-master ~]# docker push 192.168.110.133:5000/docker.io/fishchen/heapster-amd64:latest
The push refers to a repository [192.168.110.133:5000/docker.io/fishchen/heapster-amd64]
0c8eef97f390: Pushed
7034bc6e734f: Pushed
latest: digest: sha256:694fedc23a10a39c8396dd0cec3625df11b809a0a4d7d215edc3becfc356835c size: 739
[root@k8s-master ~]#
[root@k8s-master ~]# docker tag docker.io/dingzh/heapster_influxdb-v0.7:latest 192.168.110.133:5000/docker.io/dingzh/heapster_influxdb-v0.7:latest
[root@k8s-master ~]# docker push 192.168.110.133:5000/docker.io/dingzh/heapster_influxdb-v0.7:latest
The push refers to a repository [192.168.110.133:5000/docker.io/dingzh/heapster_influxdb-v0.7]
5f70bf18a086: Pushed
59b1df063c4d: Pushed
1a2eb7707e1e: Pushed
737f40e80b7f: Pushed
82b57dbc5385: Pushed
19429b698a22: Pushed
9436069b92a3: Pushed
latest: digest: sha256:3db1a13192c3ecb973c316b7dce485e61332011cfd60fa6a77f884583cdccce0 size: 3011
[root@k8s-master ~]#
[root@k8s-master ~]# docker push 192.168.110.133:5000/docker.io/steady1211/heapster_grafana-v2.6.0-2:latest
The push refers to a repository [192.168.110.133:5000/docker.io/steady1211/heapster_grafana-v2.6.0-2]
5f70bf18a086: Mounted from docker.io/dingzh/heapster_influxdb-v0.7
edec8b16494f: Pushed
ca627f7178ed: Pushed
a89b3190964f: Pushed
8683f0f614c9: Pushed
0828a6c7d921: Pushed
78dbfa5b7cbc: Pushed
latest: digest: sha256:40cf6e4b47c79092d299e658449ae687f42ffa9ec2de1dae3e4d7f550879b762 size: 3018
[root@k8s-master ~]#
Next, edit the configuration files again and change the image addresses so they point at your own private registry.
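The edited files are not shown, but based on the images pushed above, the image lines would now look roughly like this (only the image fields are sketched here; the surrounding YAML stays as before):

# heapster-controller.yaml
        image: 192.168.110.133:5000/docker.io/fishchen/heapster-amd64:latest
# influxdb-grafana-controller.yaml (one pod with two containers)
        image: 192.168.110.133:5000/docker.io/dingzh/heapster_influxdb-v0.7:latest
        image: 192.168.110.133:5000/docker.io/steady1211/heapster_grafana-v2.6.0-2:latest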
This is what the Kubernetes Dashboard UI looks like by default (screenshot in the original post).
Now start Heapster again, as shown below:
Note that when a single Pod runs several services that all need to be reachable from outside, a separate Service has to be created for each of them; here that means influxdb and grafana (see the sketch after the following output).
[root@k8s-master heapster-influxdb]# ls
grafana-service.yaml heapster-controller.yaml heapster-service.yaml influxdb-grafana-controller.yaml influxdb-service.yaml
[root@k8s-master heapster-influxdb]# kubectl create -f .
service "monitoring-grafana" created
replicationcontroller "heapster" created
service "heapster" created
replicationcontroller "influxdb-grafana" created
service "monitoring-influxdb" created
[root@k8s-master heapster-influxdb]# kubectl get pod -o wide --namespace=kube-system
NAME READY STATUS RESTARTS AGE IP NODE
heapster-vx4f2 1/1 Running 0 24s 172.16.47.5 k8s-master
influxdb-grafana-2q47p 2/2 Running 0 24s 172.16.47.6 k8s-master
kube-dns-778415672-q23st 4/4 Running 8 5d 172.16.47.4 k8s-master
kubernetes-dashboard-latest-3333846798-3mm68 1/1 Running 0 2m 172.16.16.3 k8s-node3
[root@k8s-master heapster-influxdb]#
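On the note above about one Pod needing a separate Service per service: a sketch of what influxdb-service.yaml might look like. The 8083/8086 ports match the monitoring-influxdb Service listed below, while the port names, target ports and selector are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 8083
    targetPort: 8083
  - name: api
    port: 8086
    targetPort: 8086
  selector:
    name: influxGrafana          # assumed label on the influxdb-grafana pod template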
Check whether Heapster was deployed successfully, as shown below:
[root@k8s-master heapster-influxdb]# kubectl get svc --namespace=kube-system
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
heapster 10.254.69.6 <none> 80/TCP 6m
kube-dns 10.254.230.254 <none> 53/UDP,53/TCP 9d
kubernetes-dashboard 10.254.127.32 <none> 80/TCP 8m
monitoring-grafana 10.254.65.105 <none> 80/TCP 6m
monitoring-influxdb 10.254.159.92 <none> 8083/TCP,8086/TCP 6m
[root@k8s-master heapster-influxdb]# kubectl get pod -o wide --namespace=kube-system
NAME READY STATUS RESTARTS AGE IP NODE
heapster-vx4f2 1/1 Running 0 6m 172.16.47.5 k8s-master
influxdb-grafana-2q47p 2/2 Running 0 6m 172.16.47.6 k8s-master
kube-dns-778415672-q23st 4/4 Running 8 5d 172.16.47.4 k8s-master
kubernetes-dashboard-latest-3333846798-3mm68 1/1 Running 0 8m 172.16.16.3 k8s-node3
[root@k8s-master heapster-influxdb]#
Once Heapster is deployed, refresh your Kubernetes Dashboard and the UI changes accordingly, now showing resource usage graphs (screenshot in the original post).
8. The Kubernetes architecture diagram (figure in the original post).
Every Node runs the kubelet service and the kube-proxy service. In early versions a separate cAdvisor service also had to be installed on each Node; in newer versions cAdvisor is integrated into kubelet. cAdvisor is reachable from inside the cluster; if it needs to be reachable from outside, you have to edit the kubelet configuration file and add --cadvisor-port=8080 to specify the port cAdvisor exposes externally, otherwise it cannot be accessed.
[root@k8s-node2 ~]# vim /etc/kubernetes/kubelet
Then restart the kubelet service, as shown below:
[root@k8s-node2 ~]# systemctl restart kubelet.service
[root@k8s-node2 ~]#
After the restart, check whether port 8080 is now exposed (screenshot in the original post).
Then access port 8080 on that node (screenshot in the original post).
All of Heapster's CPU metrics are taken from cAdvisor's metrics.