  • ceph-deploy deployment process

    [root@ceph-1 my_cluster]# ceph-deploy --overwrite-conf osd create ceph-1 --data data_vg1/data_lv1 --block-db block_db_vg1/block_db_lv1
    [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
    [ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy --overwrite-conf osd create ceph-1 --data data_vg1/data_lv1 --block-db block_db_vg1/block_db_lv1
    [ceph_deploy.cli][INFO ] ceph-deploy options:
    [ceph_deploy.cli][INFO ] verbose : False
    [ceph_deploy.cli][INFO ] bluestore : None
    [ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x132d170>
    [ceph_deploy.cli][INFO ] cluster : ceph
    [ceph_deploy.cli][INFO ] fs_type : xfs
    [ceph_deploy.cli][INFO ] block_wal : None
    [ceph_deploy.cli][INFO ] default_release : False
    [ceph_deploy.cli][INFO ] username : None
    [ceph_deploy.cli][INFO ] journal : None
    [ceph_deploy.cli][INFO ] subcommand : create
    [ceph_deploy.cli][INFO ] host : ceph-1
    [ceph_deploy.cli][INFO ] filestore : None
    [ceph_deploy.cli][INFO ] func : <function osd at 0x12b7a28>
    [ceph_deploy.cli][INFO ] ceph_conf : None
    [ceph_deploy.cli][INFO ] zap_disk : False
    [ceph_deploy.cli][INFO ] data : data_vg1/data_lv1
    [ceph_deploy.cli][INFO ] block_db : block_db_vg1/block_db_lv1
    [ceph_deploy.cli][INFO ] dmcrypt : False
    [ceph_deploy.cli][INFO ] overwrite_conf : True
    [ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
    [ceph_deploy.cli][INFO ] quiet : False
    [ceph_deploy.cli][INFO ] debug : False
    [ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device data_vg1/data_lv1
    [ceph-1][DEBUG ] connected to host: ceph-1
    [ceph-1][DEBUG ] detect platform information from remote host
    [ceph-1][DEBUG ] detect machine type
    [ceph-1][DEBUG ] find the location of an executable
    [ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.1.1503 Core
    [ceph_deploy.osd][DEBUG ] Deploying osd to ceph-1
    [ceph-1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
    [ceph-1][DEBUG ] find the location of an executable
    [ceph-1][INFO ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data data_vg1/data_lv1 --block.db block_db_vg1/block_db_lv1
    [ceph-1][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key
    [ceph-1][DEBUG ] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new e9d0e462-08f9-4cb4-99de-ae360feeb5d8
    [ceph-1][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key
    [ceph-1][DEBUG ] Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
    [ceph-1][DEBUG ] Running command: restorecon /var/lib/ceph/osd/ceph-0
    [ceph-1][DEBUG ] Running command: chown -h ceph:ceph /dev/data_vg1/data_lv1
    [ceph-1][DEBUG ] Running command: chown -R ceph:ceph /dev/dm-2
    [ceph-1][DEBUG ] Running command: ln -s /dev/data_vg1/data_lv1 /var/lib/ceph/osd/ceph-0/block
    [ceph-1][DEBUG ] Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
    [ceph-1][DEBUG ] stderr: got monmap epoch 1
    [ceph-1][DEBUG ] Running command: ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQDo2p5cB9vsIxAAIOyJUSvxxvhWxpmoMkqg/g==
    [ceph-1][DEBUG ] stdout: creating /var/lib/ceph/osd/ceph-0/keyring
    [ceph-1][DEBUG ] added entity osd.0 auth auth(auid = 18446744073709551615 key=AQDo2p5cB9vsIxAAIOyJUSvxxvhWxpmoMkqg/g== with 0 caps)
    [ceph-1][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
    [ceph-1][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
    [ceph-1][DEBUG ] Running command: chown -h ceph:ceph /dev/block_db_vg1/block_db_lv1
    [ceph-1][DEBUG ] Running command: chown -R ceph:ceph /dev/dm-3
    [ceph-1][DEBUG ] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --bluestore-block-db-path /dev/block_db_vg1/block_db_lv1 --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid e9d0e462-08f9-4cb4-99de-ae360feeb5d8 --setuser ceph --setgroup ceph
    [ceph-1][DEBUG ] --> ceph-volume lvm prepare successful for: data_vg1/data_lv1
    [ceph-1][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
    [ceph-1][DEBUG ] Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/data_vg1/data_lv1 --path /var/lib/ceph/osd/ceph-0
    [ceph-1][DEBUG ] Running command: ln -snf /dev/data_vg1/data_lv1 /var/lib/ceph/osd/ceph-0/block
    [ceph-1][DEBUG ] Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
    [ceph-1][DEBUG ] Running command: chown -R ceph:ceph /dev/dm-2
    [ceph-1][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
    [ceph-1][DEBUG ] Running command: ln -snf /dev/block_db_vg1/block_db_lv1 /var/lib/ceph/osd/ceph-0/block.db
    [ceph-1][DEBUG ] Running command: chown -h ceph:ceph /dev/block_db_vg1/block_db_lv1
    [ceph-1][DEBUG ] Running command: chown -R ceph:ceph /dev/dm-3
    [ceph-1][DEBUG ] Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block.db
    [ceph-1][DEBUG ] Running command: chown -R ceph:ceph /dev/dm-3
    [ceph-1][DEBUG ] Running command: systemctl enable ceph-volume@lvm-0-e9d0e462-08f9-4cb4-99de-ae360feeb5d8
    [ceph-1][DEBUG ] stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-e9d0e462-08f9-4cb4-99de-ae360feeb5d8.service to /usr/lib/systemd/system/ceph-volume@.service.
    [ceph-1][DEBUG ] Running command: systemctl enable --runtime ceph-osd@0
    [ceph-1][DEBUG ] Running command: systemctl start ceph-osd@0
    [ceph-1][DEBUG ] --> ceph-volume lvm activate successful for osd ID: 0
    [ceph-1][DEBUG ] --> ceph-volume lvm create successful for: data_vg1/data_lv1
    [ceph-1][INFO ] checking OSD status...
    [ceph-1][DEBUG ] find the location of an executable
    [ceph-1][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
    [ceph_deploy.osd][DEBUG ] Host ceph-1 is now ready for osd use.
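
    For reference, the data and block-db logical volumes passed to ceph-deploy above have to exist before the command is run. A minimal sketch of the LVM preparation, assuming /dev/sdb as the data device and /dev/sdc as the (usually faster) block-db device; the device names are placeholders, not taken from the log above:

    # volume group and logical volume backing the OSD data
    vgcreate data_vg1 /dev/sdb
    lvcreate -n data_lv1 -l 100%FREE data_vg1

    # volume group and logical volume backing the BlueStore DB
    vgcreate block_db_vg1 /dev/sdc
    lvcreate -n block_db_lv1 -l 100%FREE block_db_vg1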

    Removing an OSD from Ceph

    The normal procedure for removing an OSD is:
    stop the OSD process → mark the OSD out → remove it from the CRUSH map → delete the OSD → delete its auth entry
    Done in that order, the procedure triggers two data migrations: one after the OSD is marked out, and another after the CRUSH remove. Following 磨渣's post "删除OSD的正确方式" (the correct way to remove an OSD), reordering the steps saves one of those migrations.

    Node replacement in a Ceph cluster has always been handled the same way here; the steps are as follows:

    Adjust the OSD's CRUSH weight

    ceph osd crush reweight osd.0 0.1
    

    Note: to drain the OSD gradually, lower its CRUSH weight to 0 in several steps. This keeps new data off the OSD and slowly redistributes its existing data to the other nodes, until nothing is left on this OSD and the migration is complete.
    Lowering the OSD's CRUSH weight also lowers the weight of its host, which adjusts the cluster-wide CRUSH distribution. Once the OSD's CRUSH weight is 0, none of the later removal operations on this OSD affect the cluster's data placement any more.
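
    As a rough illustration of the gradual drain described above, a sketch that lowers the CRUSH weight of osd.0 in a few stages and waits for the cluster to settle in between; the intermediate weights and the health check are assumptions, adjust them to the OSD's actual starting weight:

    # step the CRUSH weight of osd.0 down to 0, one stage at a time
    for w in 0.07 0.04 0.02 0; do
        ceph osd crush reweight osd.0 $w
        # wait until the backfill triggered by this step has finished
        until ceph health | grep -q HEALTH_OK; do
            sleep 60
        done
    done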

    Stop the OSD process

    systemctl stop ceph-osd@0
    

    Stopping the OSD process tells the cluster that this OSD daemon is gone and no longer serving data. Because the OSD's weight is already 0, the overall distribution is unaffected and no migration takes place.
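
    A quick check, not part of the original procedure, to confirm that the daemon is really stopped and the OSD is reported as down:

    systemctl status ceph-osd@0    # should be inactive (dead)
    ceph osd tree | grep osd.0     # STATUS column should read "down"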

    Mark the OSD out

    ceph osd out osd.0
    

    Marking the OSD out tells the cluster that this OSD no longer maps any data and no longer serves it. Because the OSD's weight is already 0, the overall distribution is unaffected and no migration takes place.
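
    To confirm that marking the OSD out did not start any data movement (again an assumed verification step, not from the original text):

    ceph osd dump | grep osd.0     # the osd should be listed as "out"
    ceph -s                        # no recovering/backfilling PGs expected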

    Remove the OSD from the CRUSH map

    ceph osd crush remove osd.0
    

    This removes the OSD from the CRUSH map. Since its CRUSH weight is already 0, the host's weight is not affected and no migration is triggered.

    Delete the OSD

    ceph osd rm osd.0
    

    This deletes the OSD's record from the cluster.

    Delete the OSD's auth entry (if it is not deleted, the OSD ID stays occupied)

    ceph auth del osd.0
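
    An assumed sanity check that both the OSD record and its key are really gone, so ID 0 can be reused by a new OSD:

    ceph osd ls                    # id 0 should no longer be listed
    ceph auth get osd.0            # should fail with ENOENT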
    

    Delete the host node
    After all OSDs on the host have been removed, if the host itself also needs to be removed from the cluster, use the following command.

    ceph osd crush rm test5
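
    Putting the reordered procedure together, a minimal end-to-end sketch for removing osd.0 and then its (assumed already empty) host test5; it only repeats the commands shown above and assumes the weight drain and the resulting migration have already finished:

    ceph osd crush reweight osd.0 0    # drain the data off the osd first
    systemctl stop ceph-osd@0          # run on the host that carries osd.0
    ceph osd out osd.0                 # mark it out; weight is 0, so no migration
    ceph osd crush remove osd.0        # remove it from the CRUSH map
    ceph osd rm osd.0                  # delete the osd record from the cluster
    ceph auth del osd.0                # delete its auth entry so the id is freed
    ceph osd crush rm test5            # finally remove the now-empty host bucket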


