References:
https://www.cnblogs.com/wsjhk/p/13710577.html
https://www.jianshu.com/p/5cbe9f58dda7
I. Cluster and component versions
K8S cluster: 1.19.8
Ceph cluster: 14.2.22
Ceph-CSI: tag v3.4.0
Image versions:
docker pull registry.aliyuncs.com/it00021hot/cephcsi:v3.4.0
docker pull registry.aliyuncs.com/it00021hot/csi-provisioner:v2.2.2
docker pull registry.aliyuncs.com/it00021hot/csi-resizer:v1.2.0
docker pull registry.aliyuncs.com/it00021hot/csi-snapshotter:v4.1.1
docker pull registry.aliyuncs.com/it00021hot/csi-attacher:v3.2.1
docker pull registry.aliyuncs.com/it00021hot/csi-node-driver-registrar:v2.2.0
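The stock v3.4 manifests reference the upstream registries, so on nodes that cannot reach quay.io / k8s.gcr.io the mirrored images need to be retagged (or the image: fields in the yaml files edited). A sketch; the upstream paths below are assumptions, so verify them against the image: fields in the manifests:

# Retag the mirrored images to the names the manifests expect
# (upstream paths assumed; check the yaml files for the exact ones).
docker tag registry.aliyuncs.com/it00021hot/cephcsi:v3.4.0 quay.io/cephcsi/cephcsi:v3.4.0
docker tag registry.aliyuncs.com/it00021hot/csi-provisioner:v2.2.2 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.2
docker tag registry.aliyuncs.com/it00021hot/csi-resizer:v1.2.0 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0
docker tag registry.aliyuncs.com/it00021hot/csi-snapshotter:v4.1.1 k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1
docker tag registry.aliyuncs.com/it00021hot/csi-attacher:v3.2.1 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1
docker tag registry.aliyuncs.com/it00021hot/csi-node-driver-registrar:v2.2.0 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0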
Ceph status:
[root@k8s-master v3.4.0]# ceph -s
  cluster:
    id:     627227aa-4a5e-47c1-b822-28251f8a9936
    health: HEALTH_WARN
            2 pools have too many placement groups
            mons are allowing insecure global_id reclaim

  services:
    mon: 3 daemons, quorum k8s-master,k8s-node1,k8s-node2 (age 56m)
    mgr: k8s-node1(active, since 56m), standbys: k8s-node2, k8s-master
    mds: cephfs:1 {0=k8s-master=up:active}
    osd: 3 osds: 3 up (since 2h), 3 in (since 4d)

  data:
    pools:   2 pools, 256 pgs
    objects: 26 objects, 240 KiB
    usage:   3.2 GiB used, 27 GiB / 30 GiB avail
    pgs:     256 active+clean

[root@k8s-master v3.4.0]# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

[root@k8s-master v3.4.0]# ceph osd pool ls
cephfs_data
cephfs_metadata
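Both HEALTH_WARN items can be cleared; a sketch of the usual fixes (the target pg_num values are illustrative, size them for your OSD count):

# Disallow insecure global_id reclaim once all clients are patched
# (the setting exists since 14.2.20).
ceph config set mon auth_allow_insecure_global_id_reclaim false

# Shrink the oversized pools (Nautilus can merge PGs), or let the
# autoscaler pick the numbers instead.
ceph osd pool set cephfs_data pg_num 64
ceph osd pool set cephfs_metadata pg_num 16
# ceph osd pool set cephfs_data pg_autoscale_mode on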
II. Deployment
1. Deploy Ceph-CSI
git clone https://github.com/ceph/ceph-csi.git -b release-v3.4
cd ceph-csi/deploy/cephfs/kubernetes
2. Edit the YAML files
# cat csi-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "627227aa-4a5e-47c1-b822-28251f8a9936",
        "monitors": [
          "192.168.130.141:6789",
          "192.168.130.142:6789",
          "192.168.130.143:6789"
        ],
        "cephFS": {
          "subvolumeGroup": "test"
        }
      }
    ]
metadata:
  name: ceph-csi-config
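Because the ConfigMap points at the non-default subvolume group test (ceph-csi's default is csi), it is safest to make sure the group exists on the Ceph side before provisioning; a sketch:

# Create the subvolume group named in config.json, then verify
ceph fs subvolumegroup create cephfs test
ceph fs subvolumegroup ls cephfs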
Official field reference: https://github.com/ceph/ceph-csi/blob/v3.4.0/examples/csi-config-map-sample.yaml
3. Deploy the CephFS CSI components
# kubectl apply -f ceph-csi/deploy/cephfs/kubernetes/
# kubectl get pods | grep cephfs
csi-cephfsplugin-provisioner-b77cd56c9-4kdq8   6/6   Running   0   14m
csi-cephfsplugin-vgzrx                         3/3   Running   0   14m
csi-cephfsplugin-zn7sg                         3/3   Running   0   14m
III. Using it from Kubernetes
1. Create the secret for connecting to the Ceph cluster
# cat csi-secret.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret
stringData:
  userID: admin
  userKey: AQAOns1hV041AhAAhEATG95ZS5kTL78mjNWEeA==
  adminID: admin
  adminKey: AQAOns1hV041AhAAhEATG95ZS5kTL78mjNWEeA==
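The userKey/adminKey values are the plain (not base64-encoded) CephX key, which is why stringData is used. The key can be read straight off the cluster:

# Print the key for client.admin to paste into the Secret
ceph auth get-key client.admin

For anything beyond a test cluster, a dedicated CephX user with narrower caps than client.admin would be the better choice.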
2. Create the StorageClass
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs-sc
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: 627227aa-4a5e-47c1-b822-28251f8a9936
  fsName: cephfs
  pool: cephfs_data
  # mounter: fuse    # mount method: ceph-fuse instead of the kernel client
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard
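Apply and verify (assuming the manifest above was saved as storageclass.yaml):

# kubectl apply -f storageclass.yaml
# kubectl get sc csi-cephfs-sc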
Official reference: https://github.com/ceph/ceph-csi/blob/v3.4.0/examples/cephfs/storageclass.yaml
3. Create a PVC from the StorageClass
# cat pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-cephfs-sc
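Once applied, the PVC should go Bound within a few seconds, and a throwaway pod can confirm the volume really mounts. The pod below is only an illustration (its name and image are arbitrary):

# kubectl apply -f pvc.yaml
# kubectl get pvc data

# cat test-pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-test
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /data/hello && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      # Bind the CephFS-backed claim created above
      persistentVolumeClaim:
        claimName: data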
Official field reference: https://github.com/ceph/ceph-csi/blob/v3.4.0/docs/deploy-cephfs.md