First time using GCE; previously I ran k8s with kops on AWS.
I have a PV and a PVC set up, and both show a Bound status.
This is my first attempt at running the deployment/pod; the yaml config is mostly copied from a working setup on AWS.
When I remove the volume from the deployment, the pod starts up and reaches the Running state.
With the volume attached, it sits at: Start Time: not started yet, Phase: Pending, State: ContainerCreating.
There is nothing in the container logs, not a single line.
Edit: eventually found something useful in the pod events rather than the container logs.
卷"tio-pv-ssl“失败:挂载失败:退出状态1安装命令:systemd-运行安装参数:- /var/lib/kubelet/pods/c64b2284-de81-11e8-9ead-42010a9400a0/volumes/kubernetes.io~nfs/tio-pv-ssl - /home/kubernetes/containerized_mounter/mounter -/home/kubernetes/containerized_mounter/mounter -t -t nfs 10.148.0.6:/ssl /var/lib/kubelet/pods/c64b2284-de81-11e8-9ead-42010a9400a0/volumes/kubernetes.io~ nfs /tio ssl输出:作为单位运行范围:run-r68f0f0ac5bf54be2b47ac60d9e533712范围安装失败:挂载失败:退出状态32安装命令: chroot挂载参数:/home/kubernetes/容器化_ -t挂载-t nfs 10.148.0.6:/ssl -t输出: mount.nfs:服务器在挂载10.148.0.6:/ssl时拒绝访问
The NFS server 10.148.0.6, installed using https://cloud.google.com/launcher/docs/single-node-fileserver, seems to be running fine, and the /ssl folder exists under the NFS data directory (/data/ssl).
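If you want to sanity-check what the server actually exports before touching the PV, a rough sketch of doing that from any VM in the same network (assumes the NFS client tools, e.g. the nfs-common package, are installed):
showmount -e 10.148.0.6                               # list the exports offered by the fileserver
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs 10.148.0.6:/data/ssl /mnt/nfs-test  # the path must match a real export/subdirectory on the server
ls /mnt/nfs-test
sudo umount /mnt/nfs-test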
Kubectl status
kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
tio-pv-ssl 1000Gi RWX Retain Bound core/tio-pv-claim-ssl standard 17m
kubectl get pvc --namespace=core
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
tio-pv-claim-ssl Bound tio-pv-ssl 1000Gi RWX standard 18m
kubectl get pods --namespace=core
NAME READY STATUS RESTARTS AGE
proxy-deployment-64b9cdb55d-8htjf 0/1 ContainerCreating 0 13m
Volume Yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tio-pv-ssl
spec:
  capacity:
    storage: 1000Gi
  storageClassName: standard
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.148.0.6
    path: "/ssl"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tio-pv-claim-ssl
  namespace: core
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
  volumeName: tio-pv-ssl
  storageClassName: standard
Deployment Yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: proxy-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: proxy
    spec:
      containers:
        - name: proxy-ctr
          image: asia.gcr.io/xyz/nginx-proxy:latest
          resources:
            limits:
              cpu: "500m"
              memory: 1024Mi
            requests:
              cpu: 100m
              memory: 256Mi
          ports:
            - containerPort: 80
            - containerPort: 443
          volumeMounts:
            - name: tio-ssl-storage
              mountPath: "/etc/nginx/ssl"
      volumes:
        - name: tio-ssl-storage
          persistentVolumeClaim:
            claimName: tio-pv-claim-ssl
  strategy:
    type: "RollingUpdate"
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
---
apiVersion: v1
kind: Service
metadata:
  name: proxyservice
  namespace: core
  labels:
    app: proxy
spec:
  ports:
    - port: 80
      name: port-http
      protocol: TCP
    - port: 443
      name: port-https
      protocol: TCP
  selector:
    app: proxy
  type: LoadBalancer
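For completeness: the Deployment above doesn't set a namespace in its metadata, while the pod ends up in core, so it is presumably applied with the namespace given on the command line, roughly like this (the file name is illustrative):
kubectl apply --namespace=core -f proxy-deployment.yaml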
Posted on 2018-11-02 10:01:47
Solved my own problem once I found where the logs were hiding.
path: "/ssl"
should be the full path on the server, not relative to the NFS data folder:
path: "/data/ssl"
https://stackoverflow.com/questions/53116183