
Custom Aliases to Make Pod Management in kubectl Easier


1 Overview

I suspect most people who come into contact with K8S are primarily ops engineers, and generally speaking, ops folks write Shell far more professionally than Java programmers do. But when big data meets K8S, it is simply too time-consuming for big data engineers to run every operation as a separate kubectl command.

While learning, I create a lot of temporary Pods. Once the tests finish, those Pods are useless; likewise, Pods whose Status is Error or Completed are no longer anything I am studying, so I want to delete them and keep the output of kubectl get pods shorter. A simple way to do that is to wrap the listing of each status in an alias.

2 Examples

Below are two aliases I put together with grep and awk; feel free to use them as a reference.

alias getComplete="kubectl get pods | grep Completed | awk -F ' ' '{print \$1}'"
alias getError="kubectl get pods | grep Error | awk -F ' ' '{print \$1}'"
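
One caveat: grep matches anywhere in the line, so a pod whose name happens to contain the word Error would be caught too. As a grep-free alternative, here is a minimal sketch using kubectl's field selectors, assuming the Completed display status corresponds to phase Succeeded and Error to phase Failed in your cluster:

alias getSucceeded="kubectl get pods --field-selector=status.phase=Succeeded -o custom-columns=:metadata.name --no-headers"
alias getFailed="kubectl get pods --field-selector=status.phase=Failed -o custom-columns=:metadata.name --no-headers"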

If you are not familiar with grep and awk, please do not rush off to Baidu or Google them every time: that breeds dependence, where you search on every use and forget again a few days later. My advice is to read the command's manual page directly. Here is an example using awk's -F option.

awk

NAME
       awk - pattern-directed scanning and processing language

Pay attention to how awk is invoked:

SYNOPSIS
       awk [ -F fs ] [ -v var=value ] [ 'prog' | -f progfile ] [ file ...  ]

Read the manual closely for what -F does here: it defines the field separator, and it supports regular expressions.

DESCRIPTION
       Awk  scans  each  input  file  for lines that match any of a set of patterns specified literally in prog or in one or more files
       specified as -f progfile.  With each pattern there can be an associated action that will be performed when  a  line  of  a  file
       matches  the pattern.  Each line is matched against the pattern portion of every pattern-action statement; the associated action
       is performed for each matched pattern.  The file name - means the standard input.  Any file of the form var=value is treated  as
       an  assignment, not a filename, and is executed at the time it would have been opened if it were a filename.  The option -v fol-
       lowed by var=value is an assignment to be done before prog is executed; any number of -v options may  be  present.   The  -F  fs
       option defines the input field separator to be the regular expression fs.
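
To make the -F behavior concrete, here is a quick one-liner you can run yourself; it splits on runs of commas or spaces, and the expected output is shown as a comment:

echo "a,,b  c" | awk -F '[, ]+' '{print $2}'
# prints: b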

With these two aliases in place, we can add them to .bash_profile, and from then on just call the alias.
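
A minimal sketch of wiring that up, assuming a Bash login shell (zsh users would append to ~/.zshrc instead):

cat >> ~/.bash_profile <<'EOF'
# List names of Completed / Error pods in the current namespace
alias getComplete="kubectl get pods | grep Completed | awk -F ' ' '{print \$1}'"
alias getError="kubectl get pods | grep Error | awk -F ' ' '{print \$1}'"
EOF
source ~/.bash_profile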

➜  ~ getError
spark-pi-37d1f76b946d7c0f-driver
➜  ~ getComplete
group-by-test-1560763907118-driver
hdfs-test-driver
spark-driver-2.3
spark-hdfs-1561689711995-driver
spark-hdfs-1561689794687-driver
spark-hdfs-1561689834591-driver
spark-hdfs-1561689875798-driver
spark-hdfs-1561690011058-driver
spark-hdfs-1561690211210-driver
spark-hdfs-1561691706756-driver
spark-hdfs-1561700636764-driver
spark-pi-064dbc6e21463c7cb72a82f8b9d0c1ab-driver
spark-pi-1e4bae6b95fe78d9-driver
spark-pi-driver

Then, for example, you can delete the Pods in whatever status you no longer need to study.

➜  ~ getError | xargs kubectl delete pods
pod "spark-pi-37d1f76b946d7c0f-driver" deleted
➜  ~ getComplete | xargs kubectl delete pods
pod "group-by-test-1560763907118-driver" deleted
pod "hdfs-test-driver" deleted
pod "spark-driver-2.3" deleted
pod "spark-hdfs-1561689711995-driver" deleted
pod "spark-hdfs-1561689794687-driver" deleted
pod "spark-hdfs-1561689834591-driver" deleted
pod "spark-hdfs-1561689875798-driver" deleted
pod "spark-hdfs-1561690011058-driver" deleted
pod "spark-hdfs-1561690211210-driver" deleted
pod "spark-hdfs-1561691706756-driver" deleted
pod "spark-hdfs-1561700636764-driver" deleted
pod "spark-pi-064dbc6e21463c7cb72a82f8b9d0c1ab-driver" deleted
pod "spark-pi-1e4bae6b95fe78d9-driver" deleted
pod "spark-pi-driver" deleted

3 Summary

After deleting a pile of useless Pods, everything looks much cleaner. You could also delete them through the dashboard, but that means clicking them one by one, which is inefficient. Writing a few generic aliases, or going a step further and writing a shell script that cleans them up periodically, is even better.
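
As a rough sketch of that last idea, here is what such a periodic cleanup script could look like; the phase-to-status mapping (Succeeded for Completed, Failed for Error) and the cron schedule are assumptions to adapt to your own cluster:

#!/usr/bin/env bash
# cleanup-pods.sh - delete finished pods in the current namespace.
set -euo pipefail

for phase in Succeeded Failed; do
  # -o name prints pod/<name>, which kubectl delete accepts directly;
  # xargs -r (GNU xargs) skips the delete when there is nothing to clean.
  kubectl get pods --field-selector=status.phase="$phase" -o name \
    | xargs -r kubectl delete
done

# Example cron entry to run the script hourly:
# 0 * * * * /path/to/cleanup-pods.sh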
