k8s v1.11.1 rbd-provisioner reports "missing Ceph monitors"?


This error appears on node2:

tail -f /var/log/containers/rbd-provisioner-857866b5b7-ftmxq_kube-system_rbd-provisioner-:

{"log":"I0827 22:27:08.163484 1 controller.go:948] provision \"default/mypvc2\" class \"ceph-rbd\": started\n","stream":"stderr","time":"2018-08-27T22:27:08.164113972Z"}

{"log":"I0827 22:27:08.170400 1 event.go:221] Event(v1.ObjectReference{Kind:\"PersistentVolumeClaim\", Namespace:\"default\", Name:\"mypvc2\", UID:\"3e0bb388-aa47-11e8-b78a-00505697564e\", APIVersion:\"v1\", ResourceVersion:\"4366965\", FieldPath:\"\"}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim \"default/mypvc2\"\n","stream":"stderr","time":"2018-08-27T22:27:08.170996675Z"}

{"log":"W0827 22:27:08.181881 1 controller.go:707] Retrying syncing claim \"default/mypvc2\" because failures 5 \u003c threshold 15\n","stream":"stderr","time":"2018-08-27T22:27:08.182301488Z"}

{"log":"E0827 22:27:08.181960 1 controller.go:722] error syncing claim \"default/mypvc2\": failed to provision volume with StorageClass \"ceph-rbd\": missing Ceph monitors\n","stream":"stderr","time":"2018-08-27T22:27:08.182339857Z"}

{"log":"I0827 22:27:08.182067 1 event.go:221] Event(v1.ObjectReference{Kind:\"PersistentVolumeClaim\", Namespace:\"default\", Name:\"mypvc2\", UID:\"3e0bb388-aa47-11e8-b78a-00505697564e\", APIVersion:\"v1\", ResourceVersion:\"4366965\", FieldPath:\"\"}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass \"ceph-rbd\": missing Ceph monitors\n","stream":"stderr","time":"2018-08-27T22:27:08.182363909Z"}

*************

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: kube-system
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        image: "quay.io/external_storage/rbd-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
      serviceAccount: rbd-provisioner
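Since the error is logged by the provisioner itself, a first sanity check is that the external provisioner pod is running under the ceph.com/rbd name and that the ServiceAccount it references has RBAC permission to read secrets and claims. A sketch of the checks, assuming the manifest above has been applied (the deployment name follows the pod name visible in the log):

```shell
# Confirm the external provisioner pod is up and watch its live log
kubectl -n kube-system get pods | grep rbd-provisioner
kubectl -n kube-system logs deploy/rbd-provisioner

# The ServiceAccount referenced by the Deployment must exist; without a
# ClusterRoleBinding granting access to secrets/PVCs, provisioning fails too
kubectl -n kube-system get serviceaccount rbd-provisioner
kubectl get clusterrolebinding | grep rbd-provisioner
```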

**************

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ceph-rbd
provisioner: ceph.com/rbd
parameters:
  adminId: admin
  adminSecretName: ceph-secret-admin4
  adminSecretNamespace: kube-system
  monitors: 172.16.2.211:6789
  pool: kube
  userId: kube
  userSecretName: ceph-secret-kube4
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc2
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: mypod2
  name: mypod2
spec:
  restartPolicy: OnFailure
  containers:
  - image: busybox
    name: bb
    volumeMounts:
    - mountPath: /zhaer
      name: shared-volume
    args:
    - /bin/sh
    - -c
    - touch /zhaer/zhaer.txt ; sleep 6000
  volumes:
  - name: shared-volume
    persistentVolumeClaim:
      claimName: mypvc2
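After applying the manifests, the claim's events show which provisioner is actually handling it, which helps tell an in-tree kubernetes.io/rbd attempt apart from the external ceph.com/rbd one. Assuming the objects above have been created:

```shell
# Check the StorageClass and the claim's status
kubectl get storageclass ceph-rbd
kubectl get pvc mypvc2

# The Events section lists the provisioner name and the exact failure reason
kubectl describe pvc mypvc2
```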

Edited by 用户3001631

GUNLinux answered:

用户3001631 answered:

`ceph -s` reports everything healthy! Mounting a Ceph volume statically in a pod also works fine, but dynamic provisioning through this PVC does not! Some people said the rbd-provisioner is missing ceph.conf and the keyring, but I still get the same error after passing them into the container!
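For reference, the secrets named in the StorageClass are usually created from the Ceph keys roughly like this. This is only a sketch, assuming CLI access to the Ceph cluster; the `client.kube` user is an assumption matching `userId: kube`, not taken from the question:

```shell
# Admin secret, in the namespace the StorageClass's adminSecretNamespace points at
kubectl -n kube-system create secret generic ceph-secret-admin4 \
  --type=kubernetes.io/rbd \
  --from-literal=key="$(ceph auth get-key client.admin)"

# User secret, in the namespace of the PVC (default here)
kubectl create secret generic ceph-secret-kube4 \
  --type=kubernetes.io/rbd \
  --from-literal=key="$(ceph auth get-key client.kube)"
```

If either secret is missing, malformed, or in the wrong namespace, the provisioner cannot reach the cluster even though the `monitors` parameter is set.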
