Ceph Cluster Setup and Dynamic Storage (StorageClass) on Kubernetes

    Cluster Preparation

    Ceph cluster configuration

     
    Node name    IP address    OS          Spec                 Role
    ceph-moni-0  10.10.3.150   CentOS 7.5  4C, 16G, 200G disk   admin node, monitor
    ceph-moni-1  10.10.3.151   CentOS 7.5  4C, 16G, 200G disk   monitor
    ceph-moni-2  10.10.3.152   CentOS 7.5  4C, 16G, 200G disk   monitor
    ceph-osd-0   10.10.3.153   CentOS 7.5  4C, 16G, 200G disk   storage node (osd)
    ceph-osd-1   10.10.3.154   CentOS 7.5  4C, 16G, 200G disk   storage node (osd)
    ceph-osd-2   10.10.3.155   CentOS 7.5  4C, 16G, 200G disk   storage node (osd)

    This article uses ceph-deploy to install and configure a 6-node cluster: 3 monitor nodes and 3 osd nodes.

    Installing and Configuring the Ceph Cluster

    Install dependency packages (all nodes)

    sudo yum install -y yum-utils && sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && sudo yum install --nogpgcheck -y epel-release && sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && sudo rm /etc/yum.repos.d/dl.fedoraproject.org*
    yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

    Configure the Ceph yum repository (all nodes)

    vim /etc/yum.repos.d/ceph.repo
    
    [ceph-noarch]
    name=Ceph noarch packages
    baseurl=http://mirrors.aliyun.com/ceph/rpm-kraken/el7/noarch/
    enabled=1
    gpgcheck=1
    type=rpm-md
    gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
    priority=1
    
    [ceph]
    name=Ceph packages for $basearch
    baseurl=http://mirrors.aliyun.com/ceph/rpm-kraken/el7/$basearch
    enabled=1
    gpgcheck=1
    type=rpm-md
    gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
    priority=1
    
    [ceph-source]
    name=Ceph source packages
    baseurl=http://mirrors.aliyun.com/ceph/rpm-kraken/el7/SRPMS
    enabled=0
    gpgcheck=1
    type=rpm-md
    gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
    priority=1

    Update yum (all nodes)

    sudo yum update 

    Add cluster hosts entries (all nodes)

    cat >> /etc/hosts <<EOF
    10.10.3.150 ceph-moni-0
    10.10.3.151 ceph-moni-1
    10.10.3.152 ceph-moni-2
    10.10.3.153 ceph-osd-0
    10.10.3.154 ceph-osd-1
    10.10.3.155 ceph-osd-2
    EOF

    Create the ceph user, grant root privileges, and enable passwordless sudo (all nodes)

     useradd -d /home/ceph -m ceph  && echo 123456 | passwd --stdin ceph && echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
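
    To confirm the account works as intended, a quick check (run as root on any node) should print "root" without prompting for a password:

    # the ceph user should be able to sudo without a password
    su - ceph -c 'sudo whoami'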

    Install ceph-deploy (admin node)

    sudo yum install ceph-deploy

    Configure passwordless SSH login (admin node)

    su - ceph
    ssh-keygen -t rsa
    # copy the key to each node
    ssh-copy-id -i ~/.ssh/id_rsa.pub ceph@ceph-moni-1
    ssh-copy-id -i ~/.ssh/id_rsa.pub ceph@ceph-moni-2
    ssh-copy-id -i ~/.ssh/id_rsa.pub ceph@ceph-osd-0
    ssh-copy-id -i ~/.ssh/id_rsa.pub ceph@ceph-osd-1
    ssh-copy-id -i ~/.ssh/id_rsa.pub ceph@ceph-osd-2
    # edit ~/.ssh/config on the admin node
    Host ceph-moni-1
    Hostname ceph-moni-1
    User ceph
    Host ceph-moni-2
    Hostname ceph-moni-2
    User ceph
    Host ceph-osd-0
    Hostname ceph-osd-0
    User ceph
    Host ceph-osd-1
    Hostname ceph-osd-1
    User ceph
    Host ceph-osd-2
    Hostname ceph-osd-2
    User ceph
    # fix permissions
    sudo chmod 600 ~/.ssh/config
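
    A short loop can confirm that the ceph user on the admin node now reaches every other node without a password (hostnames as in the table above):

    # run as the ceph user on ceph-moni-0; BatchMode fails instead of prompting
    for host in ceph-moni-1 ceph-moni-2 ceph-osd-0 ceph-osd-1 ceph-osd-2; do
        ssh -o BatchMode=yes "$host" hostname || echo "ssh to $host failed"
    done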

    Create the cluster configuration (admin node)

    su - ceph 
    mkdir ceph-cluster
    cd ceph-cluster
    ceph-deploy new {initial-monitor-node(s)}
    For example:
    ceph-deploy new ceph-moni-0 ceph-moni-1 ceph-moni-2
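
    At this point ceph-deploy should have written its working files into the ceph-cluster directory; on recent ceph-deploy versions this looks roughly like:

    ls
    # ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring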

    On the admin node, edit the generated ceph configuration file and add the following:

    vim ceph.conf 
    # default number of data replicas per pool (3, matching the 3 osd nodes); append under the existing [global] section
    osd pool default size = 3
    # allow pools to be deleted from the cluster
    [mon]
    mon_allow_pool_delete = true
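
    Once the cluster is up (end of this section), the replica setting can be checked per pool, for example on the rbd pool used later by the StorageClass:

    ceph osd pool get rbd size
    # expected output: size: 3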

    Install ceph on all cluster nodes from the admin node

    ceph-deploy install {ceph-node} [{ceph-node} ...]
    For example:
    ceph-deploy install ceph-moni-0 ceph-moni-1 ceph-moni-2 ceph-osd-0 ceph-osd-1 ceph-osd-2

    Configure the initial monitor(s) and gather all keys

    ceph-deploy mon create-initial

    From the admin node, log in to each osd node and create its data directory (all osd nodes)

    ssh ceph-osd-0
    sudo mkdir /var/local/osd0
    sudo chmod 777 -R /var/local/osd0
    exit
    ssh ceph-osd-1
    sudo mkdir /var/local/osd1
    sudo chmod 777 -R /var/local/osd1
    exit
    ssh ceph-osd-2
    sudo mkdir /var/local/osd2
    sudo chmod 777 -R /var/local/osd2
    exit

    Prepare each osd (run on the admin node)

    ceph-deploy osd prepare ceph-osd-0:/var/local/osd0 ceph-osd-1:/var/local/osd1 ceph-osd-2:/var/local/osd2

    Activate each osd (run on the admin node)

    ceph-deploy osd activate ceph-osd-0:/var/local/osd0 ceph-osd-1:/var/local/osd1 ceph-osd-2:/var/local/osd2

    From the admin node, copy the configuration file and admin key to the admin node and all Ceph nodes, then make ceph.client.admin.keyring readable (all nodes)

    ceph-deploy admin {manage-node} {ceph-node}
    For example:
    ceph-deploy admin ceph-moni-0 ceph-moni-1 ceph-moni-2 ceph-osd-0 ceph-osd-1 ceph-osd-2
    # run on all nodes
    sudo chmod +r /etc/ceph/ceph.client.admin.keyring

    Deployment is complete. Check the cluster status:

    $ ceph health
    HEALTH_OK
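
    For a more detailed view than ceph health, the following standard commands show the monitor quorum and the OSD layout (exact output depends on the cluster state):

    ceph -s        # overall status: monitors, osds, placement groups, capacity
    ceph osd tree  # osd.0, osd.1 and osd.2 should be listed as "up" under their hosts
    ceph mon stat  # monitor quorum across ceph-moni-0/1/2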

    Client Configuration

    My Kubernetes nodes run Ubuntu. Every node in the Kubernetes cluster that will use Ceph must have the Ceph client installed and configured, so installation steps for both operating systems are given here.

    CentOS

    Add the ceph repository

    vim /etc/yum.repos.d/ceph.repo
    
    [ceph-noarch]
    name=Ceph noarch packages
    baseurl=http://mirrors.aliyun.com/ceph/rpm-kraken/el7/noarch/
    enabled=1
    gpgcheck=1
    type=rpm-md
    gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
    priority=1
    
    [ceph]
    name=Ceph packages for $basearch
    baseurl=http://mirrors.aliyun.com/ceph/rpm-kraken/el7/$basearch
    enabled=1
    gpgcheck=1
    type=rpm-md
    gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
    priority=1
    
    [ceph-source]
    name=Ceph source packages
    baseurl=http://mirrors.aliyun.com/ceph/rpm-kraken/el7/SRPMS
    enabled=0
    gpgcheck=1
    type=rpm-md
    gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
    priority=1

    Install the ceph client

    yum update && yum install -y ceph

    Add cluster hosts entries

    cat >> /etc/hosts <<EOF
    10.10.3.30 ceph-client01
    10.10.3.150 ceph-moni-0
    10.10.3.151 ceph-moni-1
    10.10.3.152 ceph-moni-2
    10.10.3.153 ceph-osd-0
    10.10.3.154 ceph-osd-1
    10.10.3.155 ceph-osd-2
    EOF

    Copy the cluster configuration and admin key

    scp -r root@ceph-moni-0:/etc/ceph/{ceph.conf,ceph.client.admin.keyring} /etc/ceph/

    Ubuntu

    Configure the repository

    wget -q -O- https://mirrors.aliyun.com/ceph/keys/release.asc | sudo apt-key add -; echo deb https://mirrors.aliyun.com/ceph/debian-kraken xenial main | sudo tee /etc/apt/sources.list.d/ceph.list

    Update packages

    apt update  && apt -y dist-upgrade && apt -y autoremove

    Install the ceph client

    apt-get install ceph 

    Add cluster hosts entries

    cat >> /etc/hosts <<EOF
    10.10.3.30 ceph-client01
    10.10.3.150 ceph-moni-0
    10.10.3.151 ceph-moni-1
    10.10.3.152 ceph-moni-2
    10.10.3.153 ceph-osd-0
    10.10.3.154 ceph-osd-1
    10.10.3.155 ceph-osd-2
    EOF

    Copy the cluster configuration and admin key

    scp -r root@ceph-moni-0:/etc/ceph/{ceph.conf,ceph.client.admin.keyring} /etc/ceph/
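
    With the configuration file and keyring in place, the client (CentOS or Ubuntu) should be able to talk to the cluster; a quick sanity check:

    ceph -s   # should report the same cluster status seen from the admin node
    rbd ls    # lists images in the default rbd pool (empty on a fresh cluster)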

    Configuring the StorageClass

    Every Kubernetes worker node must be able to reach the Ceph cluster, so the Ceph client (ceph-common) has to be installed on all of them. Installing the full ceph package as above also works; a minimal client-only install is sketched below.
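
    A minimal sketch for installing only the client package on the Kubernetes nodes (the CentOS variant assumes the ceph repository configured earlier):

    # Ubuntu / Debian nodes
    sudo apt-get install -y ceph-common
    # CentOS / RHEL nodes
    sudo yum install -y ceph-common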

    Generate the base64-encoded key

    $ grep key /etc/ceph/ceph.client.admin.keyring |awk '{printf "%s", $NF}'|base64
    QVFCWXB0RmIzK2dqTEJBQUtsYm4vaHU2NWZ2eHlaaGRnM2hwc1E9PQ==
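
    The same value can also be obtained without parsing the keyring by hand, and decoding it should give back the plain key from ceph.client.admin.keyring:

    ceph auth get-key client.admin | base64
    echo 'QVFCWXB0RmIzK2dqTEJBQUtsYm4vaHU2NWZ2eHlaaGRnM2hwc1E9PQ==' | base64 -d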

    Configure the secret for accessing Ceph

    The secret below is created in the default namespace, so it can only be used there. To use it from another namespace, create the same secret in that namespace by changing the namespace field.

    $ vim ceph-secret.yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-secret
      namespace: default
    type: "kubernetes.io/rbd"
    data:
      key: QVFCWXB0RmIzK2dqTEJBQUtsYm4vaHU2NWZ2eHlaaGRnM2hwc1E9PQ==

    $ kubectl apply -f ceph-secret.yaml
    secret/ceph-secret created
    $ kubectl get secret
    NAME                  TYPE                                  DATA   AGE
    ceph-secret           kubernetes.io/rbd                     1      4s
    default-token-lplp6   kubernetes.io/service-account-token   3      50d
    mysql-root-password   Opaque                                1      2d

    Configure the Ceph StorageClass

    $ vim ceph-storageclass.yaml
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
       name: jax-ceph
    provisioner: kubernetes.io/rbd
    parameters:
      monitors: 10.10.3.150:6789,10.10.3.151:6789,10.10.3.152:6789
      adminId: admin
      adminSecretName: ceph-secret
      adminSecretNamespace: default
      pool: rbd
      userId: admin
      userSecretName: ceph-secret
    $ kubectl apply -f ceph-storageclass.yaml 
    storageclass.storage.k8s.io/jax-ceph created
    $ kubectl get storageclass
    NAME              PROVISIONER          AGE
    jax-ceph          kubernetes.io/rbd    1

    Dynamic storage provisioning is now set up.
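
    Before wiring the class into a workload, a standalone claim is an easy way to confirm that dynamic provisioning works; a minimal test sketch (the claim name ceph-rbd-test is arbitrary):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ceph-rbd-test
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: jax-ceph
      resources:
        requests:
          storage: 1Gi
    EOF
    # the claim should switch to Bound once an RBD image has been provisioned
    kubectl get pvc ceph-rbd-test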

    Below is an example of using the class in a StatefulSet:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: myapp
    spec:
      serviceName: myapp-sts-svc
      replicas: 2
      selector:
        matchLabels:
          app: myapp-pod
      template:
        metadata:
          labels:
            app: myapp-pod
        spec:
          containers:
          - name: myapp
            image: ikubernetes/myapp:v1
            ports:
            - containerPort: 80
              name: web
            volumeMounts:
            - name: myappdata
              mountPath: /usr/share/nginx/html
      volumeClaimTemplates:
      - metadata:
          name: myappdata
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: "jax-ceph"
          resources:
            requests:
              storage: 5Gi
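
    Each replica gets its own claim from volumeClaimTemplates, named <template>-<pod> (here myappdata-myapp-0 and myappdata-myapp-1), and each claim is backed by a dynamically provisioned RBD image. A quick way to watch this happen:

    kubectl get pods -l app=myapp-pod   # expect myapp-0 and myapp-1
    kubectl get pvc                     # expect myappdata-myapp-0 and myappdata-myapp-1, both Bound
    kubectl get pv                      # one RBD-backed PV per claim, provisioned by jax-ceph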