  • Using Ceph as the back end for OpenStack

    Integrating OpenStack with Ceph

    1. On the Ceph cluster, create the pools that OpenStack needs.
    sudo ceph osd pool create volumes 128
    sudo ceph osd pool create images 128
    sudo ceph osd pool create backups 128
    sudo ceph osd pool create vms 128
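
    Here 128 is the pg_num (placement-group count) passed to ceph osd pool create. As an optional sanity check (not part of the original steps), list the pools afterwards:

    sudo ceph osd pool ls   # expect volumes, images, backups and vms in the output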
    
    2. Copy /etc/ceph/ceph.conf from the Ceph servers to the OpenStack compute and glance nodes (a sample scp loop is sketched below, after the install command).
    3. Install the Ceph client dependencies on those nodes:
    sudo yum install python-rbd ceph-common
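
    For step 2, a minimal sketch of pushing the config file out over ssh — the hostnames compute1 and glance1 are placeholders for your own nodes:

    for node in compute1 glance1; do
        # copy the cluster config so the Ceph clients on each node can find the monitors
        scp /etc/ceph/ceph.conf root@${node}:/etc/ceph/ceph.conf
    done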
    
    4. On the Ceph admin node, create the users that the OpenStack services will authenticate as:
    sudo ceph auth get-or-create client.glance mon 'allow *' osd 'allow * pool=images' -o client.glance.keyring
    sudo ceph auth get-or-create client.cinder mon 'allow *' osd 'allow * pool=volumes, allow * pool=vms, allow * pool=images' -o client.cinder.keyring
    sudo ceph auth get-or-create client.cinder-backup mon 'allow *' osd 'allow * pool=backups' -o client.cinder-backup.keyring
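
    Each keyring must then be copied to the node whose service uses it, renamed to the ceph.client.<name>.keyring form that the Ceph client looks up, and chowned to the service user. A sketch, with glance1 and cinder1 as placeholder hostnames:

    scp client.glance.keyring root@glance1:/etc/ceph/ceph.client.glance.keyring
    ssh root@glance1 chown glance:glance /etc/ceph/ceph.client.glance.keyring
    scp client.cinder.keyring root@cinder1:/etc/ceph/ceph.client.cinder.keyring
    ssh root@cinder1 chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring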
    

    If you set the wrong capabilities, you can correct them with sudo ceph auth caps client.glance mon 'allow *' osd 'allow * pool=images' (unlike get-or-create, auth caps only updates the caps and does not write a keyring, so no -o option is needed).
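
    To confirm the corrected capabilities took effect:

    sudo ceph auth get client.glance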

    5. Grab the key for cinder and distribute it:
    ceph auth get-key client.cinder >> client.cinder.key
    sz client.cinder.key
    # then send the file to every compute node
    uuidgen # aff9070f-b853-4d19-b77c-b2aa7baca432
    # d2b06849-6a8c-40b7-bfea-0d2a729ac70d
    # generate a UUID and write it into the secret.xml below
    
    <secret ephemeral='no' private='no'>
      <uuid>{your UUID}</uuid>
      <usage type='ceph'>
        <name>client.cinder secret</name>
      </usage>
    </secret>
    

    Then run:

    sudo virsh secret-define --file secret.xml
    sudo virsh secret-set-value --secret {your UUID} --base64 $(cat client.cinder.key)
    rm  -rf client.cinder.key secret.xml
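
    To verify that libvirt actually stored the secret:

    sudo virsh secret-list   # the UUID from secret.xml should appear here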
    

    (Progress note: compute2 is done up to this point; continuing from http://docs.ceph.com/docs/master/rbd/rbd-openstack/#configuring-cinder)

    6. Edit /etc/glance/glance-api.conf:
    [DEFAULT]
    ...
    default_store = rbd
    ...
    [glance_store]
    stores = rbd
    rbd_store_pool = images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf
    rbd_store_chunk_size = 8
    show_image_direct_url = True
    show_multiple_locations = True
    [paste_deploy]
    flavor = keystone
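
    After saving the file, restart the Glance API service and confirm that a freshly uploaded image is stored in the images pool. The unit name openstack-glance-api below assumes an RDO/CentOS packaging; adjust for your distribution:

    sudo systemctl restart openstack-glance-api
    # upload a test image via the glance/openstack CLI, then:
    sudo rbd ls images --id glance   # the new image's ID should be listed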
    

    If Glance fails to connect, check whether the keyring file under /etc/cinder is named in the ceph.client.*.keyring format — the leading ceph. prefix is easy to miss!

    7. Edit /etc/cinder/cinder.conf:
    [DEFAULT]
    ...
    enabled_backends = ceph
    glance_api_version = 2
    ### add the following backend section
    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_flatten_volume_from_snapshot = false
    rbd_max_clone_depth = 5
    rbd_store_chunk_size = 4
    rados_connect_timeout = -1
    rbd_user = cinder
    host_ip = 10.0.5.10 ## replace with this node's own IP
    rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
    # * backup *
    backup_driver = cinder.backup.drivers.ceph
    backup_ceph_conf = /etc/ceph/ceph.conf
    backup_ceph_user = cinder-backup
    backup_ceph_chunk_size = 134217728
    backup_ceph_pool = backups
    backup_ceph_stripe_unit = 0
    backup_ceph_stripe_count = 0
    restore_discard_excess_bytes = true
    # note: per the Ceph guide, the [libvirt] section below belongs in
    # /etc/nova/nova.conf on the compute nodes
    [libvirt]
    rbd_user = cinder
    rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
    

    If Cinder fails, check whether the public network line in /etc/ceph/ceph.conf was accidentally written with an underscore.
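
    Once cinder.conf is in place, restart the Cinder services and verify that a new volume shows up as an RBD image (unit names again assume RDO/CentOS packaging):

    sudo systemctl restart openstack-cinder-volume openstack-cinder-backup
    # create a test volume via the cinder/openstack CLI, then:
    sudo rbd ls volumes --id cinder   # look for a volume-<uuid> image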

    8. On every compute node, add the following [client] section to /etc/ceph/ceph.conf (these are Ceph client options, not Nova options; the Nova-side [libvirt] rbd_user / rbd_secret_uuid settings are the ones shown in the previous step):
    [client]
    rbd cache = true
    rbd cache writethrough until flush = true
    admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
    log file = /var/log/qemu/qemu-guest-$pid.log
    rbd concurrent management ops = 20
    
    mkdir -p /var/run/ceph/guests/ /var/log/qemu/
    chown qemu:libvirt /var/run/ceph/guests /var/log/qemu/
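
    Finally, restart libvirt and the Nova compute service so the new settings take effect (unit names assume RDO/CentOS packaging):

    sudo systemctl restart libvirtd openstack-nova-compute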
    
  • Original post: https://www.cnblogs.com/kischn/p/7977995.html