  • Standalone deployment of GlusterFS + Heketi for Kubernetes shared storage

    Environment

    Hostname     OS           IP address      Role
    ops-k8s-175  ubuntu16.04  192.168.75.175  k8s-master, glusterfs, heketi
    ops-k8s-176  ubuntu16.04  192.168.75.176  k8s-node, glusterfs
    ops-k8s-177  ubuntu16.04  192.168.75.177  k8s-node, glusterfs
    ops-k8s-178  ubuntu16.04  192.168.75.178  k8s-node, glusterfs

    GlusterFS configuration

    Installation

    # Run on all nodes:
    apt-get install glusterfs-server glusterfs-common glusterfs-client fuse
    systemctl start glusterfs-server
    systemctl enable glusterfs-server
    # Run on 175 only:
    gluster peer probe 192.168.75.176
    gluster peer probe 192.168.75.177
    gluster peer probe 192.168.75.178
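
    After probing, confirm that all peers joined the trusted pool:

    # Run on 175; each of the other nodes should show "State: Peer in Cluster (Connected)"
    gluster peer status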
    

    Testing

    Create a test volume

    
    # Create the volume (replica 2 across two nodes)
    gluster volume create test-volume replica 2 192.168.75.175:/home/glusterfs/data 192.168.75.176:/home/glusterfs/data force
    
    # Start the volume
    gluster volume start test-volume
    
    # Mount it (the mount point must exist)
    mkdir -p /mnt/mytest
    mount -t glusterfs 192.168.75.175:/test-volume /mnt/mytest
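
    To confirm the volume is healthy and configured as expected, check its info:

    # Shows the volume type (Replicate), the brick list, and the volume status
    gluster volume info test-volume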
    
    

    Expand the test volume

    # Add bricks to the volume
    gluster volume add-brick test-volume 192.168.75.177:/home/glusterfs/data 192.168.75.178:/home/glusterfs/data force
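
    After adding bricks, existing data is not redistributed automatically; run a rebalance to spread it across the new bricks:

    gluster volume rebalance test-volume start
    gluster volume rebalance test-volume status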
    
    

    Delete the test volume

    gluster volume stop test-volume
    gluster volume delete test-volume
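
    Note that deleting the volume does not wipe the brick directories. To reuse the same paths for a new volume, the commonly used cleanup is to strip the GlusterFS extended attributes and metadata (a sketch; adjust the path to your brick):

    setfattr -x trusted.glusterfs.volume-id /home/glusterfs/data
    setfattr -x trusted.gfid /home/glusterfs/data
    rm -rf /home/glusterfs/data/.glusterfs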
    

    Heketi configuration

    Deployment

    Overview

    Heketi provides a RESTful management interface for managing the lifecycle of GlusterFS volumes. With Heketi, consumers such as OpenStack Manila, Kubernetes, and OpenShift can dynamically provision GlusterFS volumes. Heketi automatically picks bricks across the cluster to build the requested volumes, ensuring that data replicas are spread across different failure domains. Heketi also supports any number of GlusterFS clusters, so the consuming cloud services are not limited to a single GlusterFS cluster.

    Heketi project: https://github.com/heketi/heketi

    Download the Heketi packages:
    https://github.com/heketi/heketi/releases/download/v5.0.1/heketi-client-v5.0.1.linux.amd64.tar.gz
    https://github.com/heketi/heketi/releases/download/v5.0.1/heketi-v5.0.1.linux.amd64.tar.gz

    Modify the Heketi configuration file

    Edit the Heketi configuration file /etc/heketi/heketi.json as follows:

    ......
    # Change the port to avoid conflicts
      "port": "18080",
    ......
    # Enable authentication
      "use_auth": true,
    ......
    # Change the admin user's key to adminkey
          "key": "adminkey"
    ......
    # Switch the executor to ssh and configure the SSH credentials it needs. Heketi must be
    # able to log in to every machine in the cluster without a password, so use ssh-copy-id
    # to copy the public key to each GlusterFS server (see the next section).
        "executor": "ssh",
        "sshexec": {
          "keyfile": "/etc/heketi/heketi_key",
          "user": "root",
          "port": "22",
          "fstab": "/etc/fstab"
        },
    ......
    # Location of the heketi database file
        "db": "/var/lib/heketi/heketi.db"
    ......
    # Adjust the log level
        "loglevel" : "warning"
    
    

    Note that Heketi supports three executors: mock, ssh, and kubernetes. Use mock in test environments and ssh in production; use kubernetes only when GlusterFS itself runs as containers on Kubernetes. Since GlusterFS and Heketi are deployed independently here, we use ssh.
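
    Putting the fragments above together, a minimal heketi.json might look like this (a sketch based on the stock config shipped with Heketi; the user key entry is kept from the default file and is an assumption):

    {
      "port": "18080",
      "use_auth": true,
      "jwt": {
        "admin": { "key": "adminkey" },
        "user": { "key": "userkey" }
      },
      "glusterfs": {
        "executor": "ssh",
        "sshexec": {
          "keyfile": "/etc/heketi/heketi_key",
          "user": "root",
          "port": "22",
          "fstab": "/etc/fstab"
        },
        "db": "/var/lib/heketi/heketi.db",
        "loglevel": "warning"
      }
    }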

    Configure the SSH key

    Since Heketi is configured with the ssh executor above, the Heketi server must be able to reach every GlusterFS node over SSH with key-based authentication, so first generate an SSH key pair:

    ssh-keygen -t rsa -q -f /etc/heketi/heketi_key -N ''
    chmod 600 /etc/heketi/heketi_key
    
    # Copy the public key to every node; only one node is shown here
    ssh-copy-id -i /etc/heketi/heketi_key.pub root@192.168.75.175
    
    # Verify that key-based SSH to the GlusterFS node works
    ssh -i /etc/heketi/heketi_key root@192.168.75.175
    
    

    Start Heketi

    nohup heketi -config=/etc/heketi/heketi.json &
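
    Heketi exposes a simple health endpoint; verify the service is up:

    curl http://192.168.75.175:18080/hello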
    

    Production example

    In my actual production setup, Heketi is managed with docker-compose rather than started by hand. Here is the docker-compose configuration:

    version: "2"
    services:
      heketi:
        container_name: heketi
        image: dk-reg.op.douyuyuba.com/library/heketi:5
        volumes:
          - "/etc/heketi:/etc/heketi"
          - "/var/lib/heketi:/var/lib/heketi"
          - "/etc/localtime:/etc/localtime"
        network_mode: host
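
    Bring it up with:

    docker-compose up -d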
    

    Add GlusterFS to Heketi

    Create a cluster

    heketi-cli --user admin --server http://192.168.75.175:18080 --secret adminkey --json cluster create
    
    {"id":"d102a74079dd79aceb3c70d6a7e8b7c4","nodes":[],"volumes":[]}
    

    Add the four GlusterFS servers as nodes of the cluster

    Since Heketi authentication is enabled, every heketi-cli invocation needs the full set of authentication flags, which is tedious. Define an alias to avoid repeating them:

    alias heketi-cli='heketi-cli --server "http://192.168.75.175:18080" --user "admin" --secret "adminkey"'
    

    Now add the nodes:

    heketi-cli --json node add --cluster "d102a74079dd79aceb3c70d6a7e8b7c4" --management-host-name 192.168.75.175 --storage-host-name 192.168.75.175 --zone 1
    
    heketi-cli --json node add --cluster "d102a74079dd79aceb3c70d6a7e8b7c4" --management-host-name 192.168.75.176 --storage-host-name 192.168.75.176 --zone 1
    
    heketi-cli --json node add --cluster "d102a74079dd79aceb3c70d6a7e8b7c4" --management-host-name 192.168.75.177 --storage-host-name 192.168.75.177 --zone 1
    
    heketi-cli --json node add --cluster "d102a74079dd79aceb3c70d6a7e8b7c4" --management-host-name 192.168.75.178 --storage-host-name 192.168.75.178 --zone 1
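
    Verify the nodes were registered and note their ids (needed when adding devices):

    heketi-cli node list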
    

    Some documents mention that when deploying on CentOS, you must comment out Defaults requiretty in /etc/sudoers on every GlusterFS node, otherwise adding the second node keeps failing; only after raising the log level does the log reveal a sudo "require tty" message. Since I deploy on Ubuntu, this problem does not occur here. If you hit it, apply the fix sketched below.
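
    A sketch of that fix (run on every GlusterFS node; editing via visudo is safer than in-place sed):

    sed -i 's/^Defaults[[:space:]]*requiretty/#&/' /etc/sudoers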

    Add devices

    Note that Heketi currently only accepts bare partitions or bare disks as devices; devices that already carry a filesystem are not supported.

    # The id passed to --node is the one returned when the node was created in the previous
    # step. Only one example is shown; in a real setup, add every storage disk of every node.
    heketi-cli --json device add --name "/dev/vda2" --node "c3638f57b5c5302c6f7cd5136c8fdc5e"
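
    After adding devices, you can review the resulting cluster layout:

    heketi-cli topology info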
    
    

    Actual production configuration

    The steps above show how to manually create a cluster, add nodes to it, and add devices one by one. In an actual production setup, all of this can be done at once with a topology file.

    Create a file /etc/heketi/topology-sample.json with the following content:

    {
        "clusters": [
            {
                "nodes": [
                    {
                        "node": {
                            "hostnames": {
                                "manage": [
                                    "192.168.75.175"
                                ],
                                "storage": [
                                    "192.168.75.175"
                                ]
                            },
                            "zone": 1
                        },
                        "devices": [
                            "/dev/vda2"
                        ]
                    },
                    {
                        "node": {
                            "hostnames": {
                                "manage": [
                                    "192.168.75.176"
                                ],
                                "storage": [
                                    "192.168.75.176"
                                ]
                            },
                            "zone": 1
                        },
                        "devices": [
                            "/dev/vda2"
                        ]
                    },
                    {
                        "node": {
                            "hostnames": {
                                "manage": [
                                    "192.168.75.177"
                                ],
                                "storage": [
                                    "192.168.75.177"
                                ]
                            },
                            "zone": 1
                        },
                        "devices": [
                            "/dev/vda2"
                        ]
                    },
                    {
                        "node": {
                            "hostnames": {
                                "manage": [
                                    "192.168.75.178"
                                ],
                                "storage": [
                                    "192.168.75.178"
                                ]
                            },
                            "zone": 1
                        },
                        "devices": [
                            "/dev/vda2"
                        ]
                    }               
                ]
            }
        ]
    }
    
    
    

    Load the topology:

    heketi-cli topology load --json=/etc/heketi/topology-sample.json
    
    

    Create a volume

    This is only a test; in actual use, volumes are created automatically by Kubernetes through PVCs.

    If the requested volume is small, creation may fail with "No Space". To work around this, add "brick_min_size_gb" : 1 to heketi.json (the value is in GiB):

    ......
        "brick_min_size_gb" : 1,
        "db": "/var/lib/heketi/heketi.db"
    ......
    

    The size must be larger than brick_min_size_gb; with size 1 it still fails with the min brick limit error. The replica count must be greater than 1.

    heketi-cli --json  volume create  --size 3 --replica 2
    
    

    The first attempt at creating the volume threw the following error:

    Error: /usr/sbin/thin_check: execvp failed: No such file or directory
      WARNING: Integrity check of metadata for pool vg_d9fb2bec56cfdf73e21d612b1b3c1feb/tp_e94d763a9b687bfc8769ac43b57fa41e failed.
      /usr/sbin/thin_check: execvp failed: No such file or directory
      Check of pool vg_d9fb2bec56cfdf73e21d612b1b3c1feb/tp_e94d763a9b687bfc8769ac43b57fa41e failed (status:2). Manual repair required!
      Failed to activate thin pool vg_d9fb2bec56cfdf73e21d612b1b3c1feb/tp_e94d763a9b687bfc8769ac43b57fa41e.
    
    

    This requires installing the thin-provisioning-tools package on all GlusterFS nodes:

    apt-get -y install thin-provisioning-tools
    

    A successful creation returns output like this:

    heketi-cli --json volume create  --size 3 --replica 2
    
    {"size":3,"name":"vol_7fc61913851227ca2c1237b4c4d51997","durability":{"type":"replicate","replicate":{"replica":2},"disperse":{"data":4,"redundancy":2}},"snapshot":{"enable":false,"factor":1},"id":"7fc61913851227ca2c1237b4c4d51997","cluster":"dae1ab512dfad0001c3911850cecbd61","mount":{"glusterfs":{"hosts":["10.1.61.175","10.1.61.178"],"device":"10.1.61.175:vol_7fc61913851227ca2c1237b4c4d51997","options":{"backup-volfile-servers":"10.1.61.178"}}},"bricks":[{"id":"004f34fd4eb9e04ca3e1ca7cc1a2dd2c","path":"/var/lib/heketi/mounts/vg_d9fb2bec56cfdf73e21d612b1b3c1feb/brick_004f34fd4eb9e04ca3e1ca7cc1a2dd2c/brick","device":"d9fb2bec56cfdf73e21d612b1b3c1feb","node":"20d14c78691d9caef050b5dc78079947","volume":"7fc61913851227ca2c1237b4c4d51997","size":3145728},{"id":"2876e9a7574b0381dc0479aaa2b64d46","path":"/var/lib/heketi/mounts/vg_b7fd866d3ba90759d0226e26a790d71f/brick_2876e9a7574b0381dc0479aaa2b64d46/brick","device":"b7fd866d3ba90759d0226e26a790d71f","node":"9cddf0ac7899676c86cb135be16649f5","volume":"7fc61913851227ca2c1237b4c4d51997","size":3145728}]}
    

    Configure Kubernetes to use GlusterFS

    See https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims

    Create a StorageClass

    Create a file storageclass-glusterfs.yaml with the following content:

    apiVersion: storage.k8s.io/v1beta1
    kind: StorageClass
    metadata:
      name: glusterfs
    provisioner: kubernetes.io/glusterfs
    parameters:
      resturl: "http://192.168.75.175:18080"
      restauthenabled: "true"
      restuser: "admin"
      restuserkey: "adminkey"
      volumetype: "replicate:2"
    
    kubectl apply -f storageclass-glusterfs.yaml 
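
    Check that the StorageClass was created:

    kubectl get storageclass glusterfs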
    

    This writes the user key into the StorageClass in plain text. The official recommendation is to keep the key in a Secret instead. For example:

    # glusterfs-secret.yaml:
    
    apiVersion: v1
    kind: Secret
    metadata:
      name: heketi-secret
      namespace: default
    data:
      # base64-encoded key, e.g.: echo -n "adminkey" | base64
      key: YWRtaW5rZXk=
    type: kubernetes.io/glusterfs
    
    
    # storageclass-glusterfs.yaml becomes:
    
    apiVersion: storage.k8s.io/v1beta1
    kind: StorageClass
    metadata:
      name: glusterfs
    provisioner: kubernetes.io/glusterfs
    parameters:
      resturl: "http://10.1.61.175:18080"
      clusterid: "dae1ab512dfad0001c3911850cecbd61"
      restauthenabled: "true"
      restuser: "admin"
      secretNamespace: "default"
      secretName: "heketi-secret"
      #restuserkey: "adminkey"
      gidMin: "40000"
      gidMax: "50000"
      volumetype: "replicate:2"
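
    Remember to create the Secret before applying the StorageClass:

    kubectl create -f glusterfs-secret.yaml
    kubectl apply -f storageclass-glusterfs.yaml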
    
    

    For more detailed usage, see: https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs

    Create a PVC

    glusterfs-pvc.yaml:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: glusterfs-mysql1
      namespace: default
      annotations:
        volume.beta.kubernetes.io/storage-class: "glusterfs"
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 2Gi
          
    kubectl create -f glusterfs-pvc.yaml
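
    The PVC should reach Bound once Heketi provisions the volume; check with:

    kubectl get pvc glusterfs-mysql1
    kubectl get pv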
    

    Create a pod that uses the PVC

    mysql-deployment.yaml:

    kind: Deployment
    apiVersion: extensions/v1beta1
    metadata:
      name: mysql
      namespace: default
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            name: mysql
        spec:
          containers:
          - name: mysql
            image: mysql:5.7
            imagePullPolicy: IfNotPresent
            env:
            - name: MYSQL_ROOT_PASSWORD
              value: root123456
            ports:
              - containerPort: 3306
            volumeMounts:
            - name: glusterfs-mysql-data
              mountPath: "/var/lib/mysql"
          volumes:
            - name: glusterfs-mysql-data
              persistentVolumeClaim:
                claimName: glusterfs-mysql1
                
    kubectl create -f /etc/kubernetes/mysql-deployment.yaml
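
    Once the pod is running, you can confirm the GlusterFS volume is mounted inside the container (replace the pod name with the one from kubectl get pods):

    kubectl get pods -l name=mysql
    kubectl exec <mysql-pod-name> -- df -h /var/lib/mysql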
    

    Note that I use dynamic PVC provisioning to create the GlusterFS-backed volumes here. There is also a way to create PVs/PVCs manually; see: http://rdc.hundsun.com/portal/article/826.html

  • Original article: https://www.cnblogs.com/breezey/p/8849466.html