  • Configure the cinder-volume service to use Ceph as the backend storage

    Run the following on the Ceph monitor:

    CINDER_PASSWD='cinder1234!'
    controllerHost='controller'
    RABBIT_PASSWD='0penstackRMQ'

    1. Create the pool

    Create a pool for the cinder-volume service (I only have one OSD node, so the replica count has to be set to 1):
    ceph osd pool create cinder-volumes 32
    ceph osd pool set cinder-volumes size 1
    ceph osd pool application enable  cinder-volumes rbd
    ceph osd lspools
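
    To confirm that the single-replica setting took effect, the pool's size attribute can be read back (an optional sanity check, not part of the original steps):
    ceph osd pool get cinder-volumes size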

    2. Check pool usage

    ceph df

    3. Create the account

    ceph auth get-or-create client.cinder-volumes mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=cinder-volumes, allow rwx pool=glance-images' -o /etc/ceph/ceph.client.cinder-volumes.keyring
    # Verify
    ceph auth ls | grep -EA3 'client.(cinder-volumes)'
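
    If cinder-volume runs on a different host than the monitor, ceph.conf and the keyring created above must also be present on that host and readable by the cinder user. A minimal sketch, assuming the storage node is reachable as cinder01 (the hostname is an assumption, not from the original):
    # copy config and keyring to the cinder-volume host (hostname is an assumption)
    scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.cinder-volumes.keyring root@cinder01:/etc/ceph/
    # after the cinder packages are installed (step 5), let the cinder user read the keyring
    ssh root@cinder01 'chgrp cinder /etc/ceph/ceph.client.cinder-volumes.keyring && chmod 0640 /etc/ceph/ceph.client.cinder-volumes.keyring'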

    4. Modify the ceph.conf configuration file and sync it to all monitor nodes (this step must not be skipped)

    su - cephd
    cd ~/ceph-cluster/
    cat <<EOF>> ceph.conf
    [client.cinder-volumes]
    keyring = /etc/ceph/ceph.client.cinder-volumes.keyring
    EOF
    ceph-deploy --overwrite-conf admin ceph-mon01
    exit
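
    To verify that the new account and the keyring path just added to ceph.conf actually work, the cluster can be queried as client.cinder-volumes (an optional check; the mon 'allow r' capability is enough for it):
    ceph --id cinder-volumes --keyring /etc/ceph/ceph.client.cinder-volumes.keyring health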

    5. Install the cinder-volume component and the Ceph client (not needed if the Ceph monitor runs on the controller node)

    yum -y install openstack-cinder python-keystone ceph-common

    6. Use uuidgen to generate a UUID (make sure cinder and libvirt use the same UUID)

    uuidgen
    Running the uuidgen command produces a UUID like the following:

    086037e4-ad59-4c61-82c9-86edc31b0bc0
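
    To avoid retyping the value, the UUID can also be captured in a shell variable and substituted into the later openstack-config and virsh commands (the variable name is only an illustration; the original steps paste the literal value):
    CEPH_SECRET_UUID=$(uuidgen)
    echo ${CEPH_SECRET_UUID}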

    7. Configure the cinder-volume service to interact with the cinder-api service

    openstack-config --set  /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:${RABBIT_PASSWD}@${controllerHost}:5672
    openstack-config --set /etc/cinder/cinder.conf cache backend  oslo_cache.memcache_pool
    openstack-config --set /etc/cinder/cinder.conf cache enabled  true
    openstack-config --set /etc/cinder/cinder.conf cache memcache_servers  ${controllerHost}:11211
    openstack-config --set  /etc/cinder/cinder.conf DEFAULT auth_strategy  keystone
    openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken  auth_uri  http://${controllerHost}:5000
    openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken  auth_url  http://${controllerHost}:5000
    openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken  auth_type password
    openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken  project_domain_id  default
    openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken  user_domain_id  default
    openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken  project_name  service
    openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken  username  cinder
    openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken  password  ${CINDER_PASSWD}
    openstack-config --set  /etc/cinder/cinder.conf oslo_concurrency lock_path  /var/lib/cinder/tmp
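
    The values written above can be read back with openstack-config --get, for example to confirm the transport URL and the keystone username (an optional check):
    openstack-config --get /etc/cinder/cinder.conf DEFAULT transport_url
    openstack-config --get /etc/cinder/cinder.conf keystone_authtoken username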

    8. Set Ceph as the storage backend used by the cinder-volume service

    openstack-config --set /etc/cinder/cinder.conf  DEFAULT  enabled_backends  ceph

    9. Configure the cinder-volume RBD driver for Ceph

    openstack-config --set /etc/cinder/cinder.conf  ceph volume_driver  cinder.volume.drivers.rbd.RBDDriver
    openstack-config --set /etc/cinder/cinder.conf  ceph rbd_pool  cinder-volumes
    openstack-config --set /etc/cinder/cinder.conf  ceph rbd_user cinder-volumes
    openstack-config --set /etc/cinder/cinder.conf  ceph rbd_ceph_conf  /etc/ceph/ceph.conf
    openstack-config --set /etc/cinder/cinder.conf  ceph rbd_flatten_volume_from_snapshot  false
    openstack-config --set /etc/cinder/cinder.conf  ceph rbd_max_clone_depth  5
    openstack-config --set /etc/cinder/cinder.conf  ceph rbd_store_chunk_size  4
    openstack-config --set /etc/cinder/cinder.conf  ceph rados_connect_timeout  -1
    openstack-config --set /etc/cinder/cinder.conf  ceph glance_api_version 2
    openstack-config --set /etc/cinder/cinder.conf  ceph rbd_secret_uuid  086037e4-ad59-4c61-82c9-86edc31b0bc0
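
    After steps 8 and 9 the resulting [ceph] section can be inspected directly to make sure every rbd_* option landed where expected (a quick check; the line count passed to grep -A is just an approximation):
    grep -A 12 '^\[ceph\]' /etc/cinder/cinder.conf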

    10. Start the cinder-volume service

    systemctl enable openstack-cinder-volume.service
    systemctl start openstack-cinder-volume.service
    systemctl status openstack-cinder-volume.service
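
    Once the service is running, its ceph backend should be reported as up on the controller. A minimal check, assuming an admin credential file such as ~/admin-openrc exists there (the file name is an assumption):
    # on the controller, with admin credentials loaded
    source ~/admin-openrc
    openstack volume service list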

    Run the following on every compute node that needs to attach Ceph volumes:

    1. Create the secret file (the UUID must match the one used in the cinder service)

    cat << EOF > ~/secret.xml
    <secret ephemeral='no' private='no'>
         <uuid>086037e4-ad59-4c61-82c9-86edc31b0bc0</uuid>
         <usage type='ceph'>
             <name>client.cinder-volumes secret</name>
         </usage>
    </secret>
    EOF

    2. Get the key of the cinder-volumes account from the Ceph monitor

    ceph auth get-key client.cinder-volumes
    The result looks like this:
    AQCxfDFdgp2qKRAAUY/vep29N39Qv7xWKYqMUw==
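
    Instead of copying the key by hand, it can be saved to a file on the monitor and pushed to the compute node (a convenience sketch; the file name and the compute01 hostname are assumptions):
    # on the monitor
    ceph auth get-key client.cinder-volumes > client.cinder-volumes.key
    scp client.cinder-volumes.key root@compute01:/root/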

    3. Register the UUID in libvirt

    virsh secret-define --file ~/secret.xml

    4. Add the UUID and the cinder-volumes key to libvirt

    virsh secret-set-value --secret 086037e4-ad59-4c61-82c9-86edc31b0bc0 --base64 AQCxfDFdgp2qKRAAUY/vep29N39Qv7xWKYqMUw==
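
    If the key was copied to the compute node as a file (as sketched in step 2 above), the same command can read it from there instead of pasting the value by hand:
    virsh secret-set-value --secret 086037e4-ad59-4c61-82c9-86edc31b0bc0 --base64 $(cat /root/client.cinder-volumes.key)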

    5. List the UUIDs registered in libvirt

    virsh secret-list

    6. Restart libvirt

    systemctl restart libvirtd.service
    systemctl status libvirtd.service
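
    At this point an end-to-end test can be run: create a small volume from the controller and confirm that a matching RBD image appears in the cinder-volumes pool (the volume name and the admin credentials are assumptions used for illustration):
    # on the controller, with admin credentials loaded
    openstack volume create --size 1 test-ceph-vol
    openstack volume show test-ceph-vol -c status
    # on the ceph monitor, the backing image should now be listed
    rbd ls cinder-volumes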

    Rollback plan if something goes wrong

    1. Delete the pool

    First enable pool deletion on all monitor nodes; only then can the pool be removed.
    When deleting a pool, Ceph requires the pool name to be entered twice, together with the --yes-i-really-really-mean-it option.
    echo '
    [mon]
    mon_allow_pool_delete = true
    ' >> /etc/ceph/ceph.conf
    systemctl restart ceph-mon.target
    ceph osd pool delete cinder-volumes cinder-volumes  --yes-i-really-really-mean-it

    2. Delete the account

    ceph auth del client.cinder-volumes
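
    After the rollback, neither the pool nor the account should be listed any more (a quick confirmation):
    ceph osd lspools
    ceph auth ls | grep cinder-volumes || echo 'client.cinder-volumes removed'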

    3. Delete the UUID and cinder-volumes key registered in libvirt

    List:
    virsh secret-list
    Delete (secret-undefine is followed by the UUID value):
    virsh secret-undefine  086037e4-ad59-4c61-82c9-86edc31b0bc0

  • Original article: https://www.cnblogs.com/jipinglong/p/11217074.html