  • Integrating cephfs with k8s (StorageClass approach)

    In k8s, a PV has the following three access modes (Access Mode):

    • ReadWriteOnce: can be mounted by only one node, which has read-write access to the PV
    • ReadOnlyMany: can be mounted by multiple nodes, which have read-only access to the PV
    • ReadWriteMany: can be mounted by multiple nodes, which have read-write access to the PV

    Ceph RBD does not support ReadWriteMany, while cephfs does; see the official documentation: Persistent Volumes | Kubernetes

    One more point: when creating a StorageClass for ceph rbd, k8s ships a built-in provisioner, so specifying provisioner: kubernetes.io/rbd is enough;

    the cephfs provisioner, however, is not currently built in, so you need to install cephfs-provisioner yourself. The steps are as follows:

    1. Create cephfs on the ceph cluster

      ceph-deploy mds create ceph01
      ceph-deploy mds create ceph02
      ceph-deploy mds create ceph03
      ceph osd pool create cephfs_data 64
      ceph osd pool create cephfs_metadata 64
      ceph fs new cephfs cephfs_metadata cephfs_data
      ceph fs ls
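      Before continuing, it may be worth confirming that the MDS daemons are up and the new filesystem is healthy; these are standard ceph status commands, not part of the original steps:

```shell
# List filesystems; should show cephfs with its metadata/data pools
ceph fs ls
# MDS status; expect one active MDS and the rest standby
ceph mds stat
# Overall cluster health; expect HEALTH_OK
ceph -s
```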
      
    2. Get the key

      $ ceph auth get-key client.admin | base64
      QVFEMjVxVmhiVUNJRHhBQUxwdmVHbUdNTWtXZjB6VXovbWlBY3c9PQ==
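      The key and its base64 form can be sanity-checked locally before pasting them into a Secret; this round trip uses the example key shown above and plain coreutils, and is not part of the original steps:

```shell
# Encode the admin key exactly as `ceph auth get-key client.admin | base64` would
# (get-key prints the key without a trailing newline, hence printf '%s').
key='AQD25qVhbUCIDxAALpveGmGMMkWf0zUz/miAcw=='
encoded=$(printf '%s' "$key" | base64)
echo "$encoded"    # QVFEMjVxVmhiVUNJRHhBQUxwdmVHbUdNTWtXZjB6VXovbWlBY3c9PQ==

# Decode again to confirm the round trip is lossless before putting it in the Secret.
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"    # AQD25qVhbUCIDxAALpveGmGMMkWf0zUz/miAcw==
```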
      
    3. Install ceph-common on every k8s cluster node; the version must match the ceph cluster

      rpm -ivh http://download.ceph.com/rpm-luminous/el7/noarch/ceph-release-1-1.el7.noarch.rpm
      sed -i 's#download.ceph.com#mirrors.aliyun.com/ceph#g' /etc/yum.repos.d/ceph.repo
      yum install epel-release -y
      yum install -y ceph-common
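      A quick version check helps catch mismatches; `ceph --version` reports the locally installed client, which should line up with what the cluster reports (these commands assume the install above succeeded):

```shell
# On a k8s node: client version from the ceph-common package just installed
ceph --version
# On a ceph cluster node, for comparison:
# ceph version
```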
      
    4. Install cephfs-provisioner

      git clone https://github.com/kubernetes-retired/external-storage.git
      cd external-storage/ceph/cephfs/deploy
      kubectl create namespace cephfs
      kubectl -n cephfs apply -f ./rbac/
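      In that repository the rbac directory also contains the provisioner Deployment itself, so the apply above should leave a pod running; a hedged check (pod and label names may differ between repo revisions):

```shell
# The provisioner pod should reach Running before any PVC is created
kubectl -n cephfs get pods
# Tail its logs if provisioning fails later
kubectl -n cephfs logs -l app=cephfs-provisioner --tail=20
```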
      
    5. Write the StorageClass yaml file

      $ vi ceph-sc.yaml
      apiVersion: v1
      kind: Secret
      metadata:
        name: cephfs-storageclass-secret
        namespace: cephfs
      data:
        key: QVFEMjVxVmhiVUNJRHhBQUxwdmVHbUdNTWtXZjB6VXovbWlBY3c9PQ==
      ---
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: cephfs-leffss
        annotations:
          storageclass.kubernetes.io/is-default-class: "false"
      provisioner: ceph.com/cephfs
      parameters:
        monitors: 10.10.10.51:6789,10.10.10.52:6789,10.10.10.53:6789
        # Hostnames cannot be used here, because the cephfs-provisioner pod installed earlier cannot resolve them
        #monitors: ceph01:6789,ceph02:6789,ceph03:6789
        adminId: admin
        adminSecretName: cephfs-storageclass-secret
        adminSecretNamespace: cephfs
        claimRoot: /k8s-volumes
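      Applying the file and checking that the Secret and StorageClass registered correctly can be sketched as (object names follow the yaml above):

```shell
kubectl apply -f ceph-sc.yaml
# Expect cephfs-leffss with PROVISIONER ceph.com/cephfs
kubectl get storageclass cephfs-leffss
# The Secret must live in the namespace named by adminSecretNamespace
kubectl -n cephfs get secret cephfs-storageclass-secret
```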
      

      Test yaml:

      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: ceph-pvc-test1
        namespace: default
        annotations:
          volume.beta.kubernetes.io/storage-class: cephfs-leffss
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 1Gi
      ---
      kind: Pod
      apiVersion: v1
      metadata:
        name: test-pod-1
      spec:
        containers:
        - name: test-pod-1
          image: hub.leffss.com/library/busybox:v1.29.2
          command:
            - "/bin/sh"
          args:
            - "-c"
            - "touch /mnt/SUCCESS-ceph-pvc-test1 && exit 0 || exit 1"
          volumeMounts:
            - name: pvc
              mountPath: "/mnt"
        restartPolicy: "Never"
        volumes:
          - name: pvc
            persistentVolumeClaim:
              claimName: ceph-pvc-test1
              
      
      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: ceph-pvc-test2
        namespace: default
      spec:
        storageClassName: cephfs-leffss
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 1Gi
      ---
      kind: Pod
      apiVersion: v1
      metadata:
        name: test-pod-2
      spec:
        containers:
        - name: test-pod-2
          image: hub.leffss.com/library/busybox:v1.29.2
          command:
            - "/bin/sh"
          args:
            - "-c"
            - "touch /mnt/SUCCESS-ceph-pvc-test2 && exit 0 || exit 1"
          volumeMounts:
            - name: pvc
              mountPath: "/mnt"
        restartPolicy: "Never"
        volumes:
          - name: pvc
            persistentVolumeClaim:
              claimName: ceph-pvc-test2
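      Assuming the two test manifests above are saved to files (filenames here are illustrative), applying them and watching the PVCs bind looks like:

```shell
kubectl apply -f ceph-pvc-test1.yaml -f ceph-pvc-test2.yaml
# Both PVCs should reach STATUS Bound once the provisioner creates the backing PVs
kubectl get pvc ceph-pvc-test1 ceph-pvc-test2
# One dynamically provisioned PV per PVC
kubectl get pv
# The pods just touch a file and exit, so they should end up Completed
kubectl get pod test-pod-1 test-pod-2
```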
      
    6. Verify

      $ ceph auth get-key client.admin
      AQD25qVhbUCIDxAALpveGmGMMkWf0zUz/miAcw==
      
      $ mkdir /mycephfs
      $ mount -t ceph ceph01:6789,ceph02:6789,ceph03:6789:/ /mycephfs -o name=admin,secret=AQD25qVhbUCIDxAALpveGmGMMkWf0zUz/miAcw==
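      Passing the secret on the command line leaves it in shell history; mount.ceph also accepts a secretfile option, which avoids that (the file path below is illustrative):

```shell
# Store the plain (non-base64) key in a root-only file
echo 'AQD25qVhbUCIDxAALpveGmGMMkWf0zUz/miAcw==' > /etc/ceph/admin.secret
chmod 600 /etc/ceph/admin.secret
mount -t ceph ceph01:6789,ceph02:6789,ceph03:6789:/ /mycephfs -o name=admin,secretfile=/etc/ceph/admin.secret
```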
      
      $ tree /mycephfs
      /mycephfs
      └── k8s-volumes
          ├── kubernetes
          │   ├── kubernetes-dynamic-pvc-59ea31d4-52a4-11ec-962a-567006a2be7a
          │   │   └── SUCCESS-ceph-pvc-test1
          │   └── kubernetes-dynamic-pvc-6553ea70-52a4-11ec-962a-567006a2be7a
          │       └── SUCCESS-ceph-pvc-test2
          ├── _kubernetes:kubernetes-dynamic-pvc-59ea31d4-52a4-11ec-962a-567006a2be7a.meta
          └── _kubernetes:kubernetes-dynamic-pvc-6553ea70-52a4-11ec-962a-567006a2be7a.meta
      
      4 directories, 4 files
      

    One last note: cephfs is not very stable, so I recommend against using it in production; even in test environments I have run into cases where using cephfs caused the ceph cluster to fail.

  • Original: https://www.cnblogs.com/leffss/p/15630641.html