I set up a Ceph cluster following the official documentation and mounted it manually with the sudo mount -t ceph command, then checked the status of the Ceph cluster; no problems there. Now I'm trying to mount my CephFS on Kubernetes, but when I run the kubectl create command, my pod gets stuck in ContainerCreating because the volume fails to mount. I've looked at many related questions/solutions online, but nothing has worked.
For reference, I am following this guide: https://medium.com/velotio-perspectives/an-innovators-guide-to-kubernetes-storage-using-ceph-a4b919f4e469
My setup consists of 5 AWS instances, as follows:
Node 1: Ceph Mon
Node 2: OSD1 + MDS
Node 3: OSD2 + K8s Master
Node 4: OSD3 + K8s Worker1
Node 5: CephFS + K8s Worker2
Is it OK to co-locate K8s on the same instances as Ceph? I'm fairly sure it's allowed, but if it isn't, please let me know.
This is the error/warning in the output of kubectl describe pod:
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /root/userone/kubelet/pods/bbf28924-3639-11ea-879d-0a6b51accf30/volumes/kubernetes.io~cephfs/pvc-4777686c-3639-11ea-879d-0a6b51accf30 --scope -- mount -t ceph -o name=kubernetes-dynamic-user-4d05a2df-3639-11ea-b2d3-5a4147fda646,secret=AQC4whxeqq9ZERADD2nUgxxOktLE1OIGXThbmw== 172.31.15.110:6789:/pvc-volumes/kubernetes/kubernetes-dynamic-pvc-4d05a269-3639-11ea-b2d3-5a4147fda646 /root/userone/kubelet/pods/bbf28924-3639-11ea-879d-0a6b51accf30/volumes/kubernetes.io~cephfs/pvc-4777686c-3639-11ea-879d-0a6b51accf30
Output: Running scope as unit run-2382233.scope.
couldn't finalize options: -34
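In case it helps, I tried decoding that -34, assuming (I'm not certain) that it's a negated Linux errno value:

```python
# Decode the "-34" from the mount output, assuming it is a negated
# Linux errno value (my assumption, not something the log confirms).
import errno
import os

code = 34
print(errno.errorcode[code])  # symbolic name for errno 34 -> 'ERANGE'
print(os.strerror(code))      # human-readable description
```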
These are my .yaml files:
Provisioner:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-provisioner-dt
  namespace: test-dt
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "update", "create"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns", "coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-provisioner-dt
  namespace: test-dt
subjects:
  - kind: ServiceAccount
    name: test-provisioner-dt
    namespace: test-dt
roleRef:
  kind: ClusterRole
  name: test-provisioner-dt
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: test-provisioner-dt
  namespace: test-dt
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "patch"]
---
StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: postgres-pv
  namespace: test-dt
provisioner: ceph.com/cephfs
parameters:
  monitors: 172.31.15.110:6789
  adminId: admin
  adminSecretName: ceph-secret-admin-dt
  adminSecretNamespace: test-dt
  claimRoot: /pvc-volumes
PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: test-dt
spec:
  storageClassName: postgres-pv
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
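Since Kubernetes kinds are case-sensitive and I'd already tripped over capitalization once, I also ran a tiny sanity check over my manifests (the kind list is just the handful used in this post, not anything exhaustive):

```python
# Minimal sanity check: flag "kind:" values that match a known Kubernetes
# kind only when case is ignored (i.e., likely miscapitalized).
import re

# Only the kinds that appear in my manifests above; not an exhaustive list.
KNOWN_KINDS = {
    "ClusterRole", "ClusterRoleBinding", "Role", "RoleBinding",
    "ServiceAccount", "StorageClass", "PersistentVolumeClaim",
}

def bad_kinds(manifest: str) -> list:
    """Return kind values that equal a known kind only case-insensitively."""
    found = re.findall(r"^\s*kind:\s*(\S+)", manifest, re.MULTILINE)
    lowered = {k.lower() for k in KNOWN_KINDS}
    return [k for k in found
            if k not in KNOWN_KINDS and k.lower() in lowered]

print(bad_kinds("kind: Serviceaccount\n---\nkind: StorageClass\n"))
# -> ['Serviceaccount']
```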
The output of kubectl get pv and kubectl get pvc shows that the volume is bound and claimed, with no errors.
The provisioner pod's logs all show success, with no errors.
Please help!