  • Manually installing a Ceph cluster

    Do the preliminary setup as the administrator (root) user; everything afterwards is installed as the ceph user.

    sudo su -
    

    Set the hostnames

    hostnamectl set-hostname storage-ceph01    # on storage-ceph01
    hostnamectl set-hostname storage-ceph02    # on storage-ceph02
    hostnamectl set-hostname storage-ceph03    # on storage-ceph03
    

    Set up the hostname mappings

    cat << EOF | sudo tee -a  /etc/hosts >> /dev/null
    172.20.0.5 storage-ceph01
    172.20.0.27 storage-ceph02
    172.20.0.6 storage-ceph03
    EOF
    

    Disable the firewall

    systemctl stop firewalld
    systemctl disable firewalld
    

    Disable SELinux

    setenforce 0
    sed -ri 's#(SELINUX=).*#\1disabled#g' /etc/selinux/config
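    getenforce    # should now report Permissive; the sed line above makes the change persistent after reboot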
    

    Install NTP for time synchronization

    yum install -y ntp
    vi /etc/ntp.conf
    Comment out the existing `server xxxx iburst` lines and add `server ntp1.aliyun.com iburst` below them (a scripted alternative is sketched after this block).
    systemctl enable ntpd
    systemctl start ntpd
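
    If you prefer to script the edit, a sketch like the following works against the stock CentOS 7 ntp.conf (it assumes the default `server N.centos.pool.ntp.org iburst` entries; adjust the pattern if your file differs):

    # comment out the default pool servers, then append the Aliyun server
    sudo sed -ri 's/^(server .*centos\.pool\.ntp\.org iburst)/#\1/' /etc/ntp.conf
    echo 'server ntp1.aliyun.com iburst' | sudo tee -a /etc/ntp.conf > /dev/null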
    

    Add the ceph user

    useradd -d /home/ceph -m ceph
    echo 123456 | passwd --stdin ceph
    

    Grant the ceph user passwordless sudo

    echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
    sudo chmod 0440 /etc/sudoers.d/ceph
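
    The rule can be verified as root:

    # list the sudo privileges granted to the ceph user
    sudo -l -U ceph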
    

    Add the Ceph yum repository

    cat << EOM | sudo tee /etc/yum.repos.d/ceph.repo
    [ceph-noarch]
    name=Ceph noarch packages
    baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-15.2.12/el7/noarch/
    enabled=1
    gpgcheck=0
     
    [ceph-x86_64]
    name=Ceph x86_64 packages
    baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-15.2.12/el7/x86_64/
    enabled=1
    gpgcheck=0
    EOM
    
    yum makecache
    

    All of the following operations are performed as the ceph user.

    su - ceph
    

    Install the Ceph packages and their dependencies

    sudo yum install -y snappy leveldb gdisk python-argparse gperftools-libs
    sudo yum install -y ceph
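
    A quick check that the release pinned in the repository was actually installed:

    ceph --version    # should report 15.2.12 (octopus)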
    

    Installing mon

    Generate the cluster UUID

    # uuidgen       (run on just one of the hosts)
    4d8fec26-e363-4753-b60f-49d69ab44cab
    
    export cephuid=4d8fec26-e363-4753-b60f-49d69ab44cab     # (run on all three hosts, with the same UUID)
    

    The Ceph global configuration file

    cat <<EOF | sudo tee /etc/ceph/ceph.conf >> /dev/null
    [global]
    fsid = $cephuid
    mon initial members = storage-ceph01, storage-ceph02, storage-ceph03
    mon host = 172.20.0.5, 172.20.0.27, 172.20.0.6
    public network = 192.168.31.0/24
    cluster network = 172.20.0.0/24
    auth cluster required = cephx
    auth service required = cephx
    auth client required = cephx
    osd journal size = 1024
    osd pool default size = 3
    osd pool default min size = 2
    osd pool default pg num = 333
    osd pool default pgp num = 333
    osd crush chooseleaf type = 1
    EOF
    

    Adjust the values of `mon initial members`, `mon host`, `public network`, and `cluster network` for your environment; note that `public network` must contain the mon host addresses (the example above uses 192.168.31.0/24 while the mons sit in 172.20.0.0/24, so change it accordingly). If there is only a single internal network, the `public network` parameter can simply be dropped.

    Generate the monitor keyring

    #storage-ceph01
    ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
    

    Generate the client.admin keyring

    #storage-ceph01
    sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
    sudo chown ceph:ceph /etc/ceph/ceph.client.admin.keyring
    

    Generate the bootstrap-osd keyring used for cluster initialization

    #storage-ceph01
    ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
     
    ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
    ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
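
    To confirm that both keys were imported, list the keyring contents:

    # mon., client.admin, and client.bootstrap-osd should all appear
    ceph-authtool -l /tmp/ceph.mon.keyring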
    

    Generate the initial monmap

    #storage-ceph01
    monmaptool --create --add storage-ceph01 172.20.0.5 --add storage-ceph02 172.20.0.27 --add storage-ceph03 172.20.0.6 --fsid $cephuid /tmp/monmap
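
    The generated map can be inspected before it is distributed:

    # verify the fsid and the three mon entries
    monmaptool --print /tmp/monmap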
    

    Remember to adjust the IP addresses and hostnames.

    Distribute the monmap file

    #storage-ceph01
    scp /tmp/monmap root@storage-ceph02:/tmp/
    scp /tmp/monmap root@storage-ceph03:/tmp/
    

    Distribute the client.admin keyring

    #storage-ceph01
    scp /etc/ceph/ceph.client.admin.keyring root@storage-ceph02:/etc/ceph/
    scp /etc/ceph/ceph.client.admin.keyring root@storage-ceph03:/etc/ceph/
    

    Distribute the monitor keyring

    #storage-ceph01
    scp /tmp/ceph.mon.keyring root@storage-ceph02:/tmp/
    scp /tmp/ceph.mon.keyring root@storage-ceph03:/tmp/
    

    Fix the ownership

    #storage-ceph02 && storage-ceph03
    sudo chown ceph:ceph /tmp/ceph.mon.keyring
    sudo chown ceph:ceph /etc/ceph/ceph.client.admin.keyring
    

    Create the mon directories

    #storage-ceph01
    mkdir /var/lib/ceph/mon/ceph-storage-ceph01
     
    #storage-ceph02
    mkdir /var/lib/ceph/mon/ceph-storage-ceph02
     
    #storage-ceph03
    mkdir /var/lib/ceph/mon/ceph-storage-ceph03
    

    Initialize the monitor on each node

    #storage-ceph01
    ceph-mon --mkfs -i storage-ceph01 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
     
    #storage-ceph02
    ceph-mon --mkfs -i storage-ceph02 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
     
    #storage-ceph03
    ceph-mon --mkfs -i storage-ceph03 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
     
    # check the generated files (same on all nodes)
    [ceph@storage-ceph01 ~]$ ls /var/lib/ceph/mon/ceph-storage-ceph01/
    keyring  kv_backend  store.db
    

    Start the mon service

    #storage-ceph01
    sudo systemctl restart ceph-mon@storage-ceph01
    sudo systemctl enable ceph-mon@storage-ceph01
     
    #storage-ceph02
    sudo systemctl restart ceph-mon@storage-ceph02
    sudo systemctl enable ceph-mon@storage-ceph02
     
    #storage-ceph03
    sudo systemctl restart ceph-mon@storage-ceph03
    sudo systemctl enable ceph-mon@storage-ceph03
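
    With all three mons up, the cluster should form quorum; a quick check from any node:

    # storage-ceph01..03 should all appear in the quorum
    ceph -s
    ceph mon stat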
    

    Issue 1: `3 monitors have not enabled msgr2`. The fix (running it once from any mon node is enough, as it changes the cluster-wide monmap):

    ceph mon enable-msgr2
    

    Issue 2: `mons are allowing insecure global_id reclaim`. The fix (run on one of the mon nodes):

    If AUTH_INSECURE_GLOBAL_ID_RECLAIM has not yet triggered a health alert and the auth_expose_insecure_global_id_reclaim setting has not been disabled (it is enabled by default), then no clients that still need upgrading are currently connected, and it is safe to disallow insecure global_id reclaim:
    ceph config set mon auth_allow_insecure_global_id_reclaim false
    # If there are still clients that need upgrading, the alert can be muted temporarily with:
    ceph health mute AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED 1w   # 1 week
    # Not recommended, but the warning can also be disabled indefinitely with:
    ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false
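
    Afterwards, confirm that the warnings have cleared:

    # HEALTH_OK is expected once both issues above are handled
    ceph health detail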
    

    Installing mgr

    Generate the mgr keyrings

    #storage-ceph01
    mkdir /var/lib/ceph/mgr/ceph-storage-ceph01
    cat <<EOF | tee /var/lib/ceph/mgr/ceph-storage-ceph01/keyring >> /dev/null
    $(ceph auth get-or-create mgr.storage-ceph01 mon 'allow profile mgr' osd 'allow *' mds 'allow *')
    EOF
     
    #storage-ceph02
    mkdir /var/lib/ceph/mgr/ceph-storage-ceph02
    cat <<EOF | tee /var/lib/ceph/mgr/ceph-storage-ceph02/keyring >> /dev/null
    $(ceph auth get-or-create mgr.storage-ceph02 mon 'allow profile mgr' osd 'allow *' mds 'allow *')
    EOF
     
     
    #storage-ceph03
    mkdir /var/lib/ceph/mgr/ceph-storage-ceph03
    cat <<EOF | tee /var/lib/ceph/mgr/ceph-storage-ceph03/keyring >> /dev/null
    $(ceph auth get-or-create mgr.storage-ceph03 mon 'allow profile mgr' osd 'allow *' mds 'allow *')
    EOF
    

    Start the mgr service

    #storage-ceph01
    sudo systemctl restart ceph-mgr@storage-ceph01
    sudo systemctl enable ceph-mgr@storage-ceph01
     
    #storage-ceph02
    sudo systemctl restart ceph-mgr@storage-ceph02
    sudo systemctl enable ceph-mgr@storage-ceph02
     
    #storage-ceph03
    sudo systemctl restart ceph-mgr@storage-ceph03
    sudo systemctl enable ceph-mgr@storage-ceph03
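
    Verify that one mgr is active and the other two are standbys:

    ceph mgr stat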
    

    Issue 3: `HEALTH_WARN Module 'restful' has failed dependency: No module named 'pecan'`. The fix (run on all mgr nodes):

    sudo su -
    pip3 install pecan werkzeug
    

    Running `sudo su -` followed by `pip3 install pecan werkzeug` should be all that is needed. Reboot the host, wait a while, and check `ceph -s` again. If the warning still has not cleared, run `pip3 install --user ceph pecan werkzeug` and reboot once more.

    Installing osd

    Distribute the bootstrap-osd keyring

    #storage-ceph01
    scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@storage-ceph02:/var/lib/ceph/bootstrap-osd/
    scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@storage-ceph03:/var/lib/ceph/bootstrap-osd/
     
    # on all three hosts
    sudo chown ceph:ceph /var/lib/ceph/bootstrap-osd/ceph.keyring 
    

    Create the LVM volume

    # clean up the disk first (only needed if it previously held a Ceph LVM volume; the dm name below is from this environment)
    sudo dmsetup remove ceph--8ac0d9e1--ace9--4260--bc3d--9984442293f2-osd--block--05fa6b88--5b2b--4f06--8f7f--85218373da0e
    sudo wipefs -af /dev/vdb 
    
    # run on each OSD node
    sudo ceph-volume lvm create --data /dev/vdb
    

    Start the services (run each pair on the host that owns the corresponding OSD id, as reported by `ceph-volume lvm create`)

    sudo systemctl restart ceph-osd@0.service
    sudo systemctl enable ceph-osd@0.service
    sudo systemctl restart ceph-osd@1.service
    sudo systemctl enable ceph-osd@1.service
    sudo systemctl restart ceph-osd@2.service
    sudo systemctl enable ceph-osd@2.service
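
    Check that all three OSDs have joined the cluster:

    # every OSD should show as up/in, one per host
    ceph osd tree
    # list the LVM-backed OSD metadata on the local host
    sudo ceph-volume lvm list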
    

    Installing mds

    Create the mds directories

    #storage-ceph01
    mkdir -p /var/lib/ceph/mds/ceph-storage-ceph01
     
    #storage-ceph02
    mkdir -p /var/lib/ceph/mds/ceph-storage-ceph02
     
    #storage-ceph03
    mkdir -p /var/lib/ceph/mds/ceph-storage-ceph03
    

    Create the mds keyrings

    #storage-ceph01
    ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-storage-ceph01/keyring --gen-key -n mds.storage-ceph01
     
    #storage-ceph02
    ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-storage-ceph02/keyring --gen-key -n mds.storage-ceph02
     
    #storage-ceph03
    ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-storage-ceph03/keyring --gen-key -n mds.storage-ceph03
    

    Authorize the mds keyrings

    #storage-ceph01
    ceph auth add mds.storage-ceph01 osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-storage-ceph01/keyring
     
    #storage-ceph02
    ceph auth add mds.storage-ceph02 osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-storage-ceph02/keyring
     
    #storage-ceph03
    ceph auth add mds.storage-ceph03 osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-storage-ceph03/keyring
    

    Add the mds sections to the configuration file

    cat <<EOF | sudo tee -a /etc/ceph/ceph.conf >> /dev/null
     
    [mds.storage-ceph01]
    host = storage-ceph01
     
    [mds.storage-ceph02]
    host = storage-ceph02
     
    [mds.storage-ceph03]
    host = storage-ceph03
    EOF
    

    Start the mds service

    #storage-ceph01
    sudo systemctl restart ceph-mds@storage-ceph01
    sudo systemctl enable ceph-mds@storage-ceph01
     
    #storage-ceph02
    sudo systemctl restart ceph-mds@storage-ceph02
    sudo systemctl enable ceph-mds@storage-ceph02
     
    #storage-ceph03
    sudo systemctl restart ceph-mds@storage-ceph03
    sudo systemctl enable ceph-mds@storage-ceph03
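
    The daemons stay in standby until a filesystem exists; `ceph mds stat` shows their state. As a minimal sketch, a filesystem could be created as follows (the pool names and PG counts are illustrative, not part of the original procedure):

    ceph mds stat
    # optional: create the pools and a filesystem so one mds becomes active
    ceph osd pool create cephfs_data 64
    ceph osd pool create cephfs_metadata 64
    ceph fs new cephfs cephfs_metadata cephfs_data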
    