  • Integrating Ceph with OpenStack

    I. RBD is used to provide storage for the following data:

    (1) Images (Glance): stores the images managed by Glance;

    (2) Volumes (Cinder): stores Cinder volumes, i.e. the disks of instances launched with "create new volume" selected;

      

    (3) VMs (Nova): stores the ephemeral disks of instances launched without creating a new volume;

     

    II. Implementation steps:

    (1) The client nodes also need the cent user. (For example, an OpenStack environment may have 100+ nodes; there is no need to create the cent user on every one of them, only on the nodes that run the cinder, nova and glance services.)

    useradd cent && echo "123" | passwd --stdin cent
    
    echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
    chmod 440 /etc/sudoers.d/ceph

    (2) On the OpenStack nodes that will use Ceph (e.g. compute and storage nodes), install the downloaded packages:

    yum localinstall ./* -y

    Or install just the clients on every node that needs to access the Ceph cluster:

    yum install python-rbd
    
    yum install ceph-common    # Ceph command-line tools
    If you installed with yum localinstall above, these two packages are already included in those RPMs.
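    To confirm they are present, a quick check (not part of the original walkthrough):
    rpm -q python-rbd ceph-common    # prints the installed versions, or "not installed"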

    (3) On the deploy node, install Ceph onto the OpenStack nodes:

    ceph-deploy install controller
    
    ceph-deploy admin controller

    (4) On the client, run:

    sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
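    At this point the client should be able to reach the cluster; a quick sanity check, assuming ceph-deploy admin has pushed /etc/ceph/ceph.conf and the admin keyring:
    ceph -s    # should report cluster health and list the monitors/OSDs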

    (5) Create pools (run on any one Ceph node only):

    Create three pools in the Ceph cluster to hold the OpenStack images, VMs and volumes respectively.

    ceph osd pool create images 1024
    
    ceph osd pool create vms 1024
    
    ceph osd pool create volumes 1024
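    On newer Ceph releases (Luminous and later) the pools may also need to be initialized for RBD use; an extra step not in the original walkthrough:
    rbd pool init images
    rbd pool init vms
    rbd pool init volumes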

    List the pools:

    ceph osd lspools

    (6) In the Ceph cluster, create the glance and cinder users (run on any one Ceph node only):

    The Ceph cluster provides storage to the glance and cinder services of the OpenStack platform.

    On the deploy node, create the glance and cinder system users:
    useradd glance
    
    useradd cinder
    Then grant them capabilities in Ceph:
    ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
    
    ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
    The controller, compute and storage nodes already have these two system users.
    Nova reuses the cinder user, so no separate user is created for it.
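    The granted capabilities can be double-checked at any time; a quick verification, not part of the original walkthrough:
    ceph auth get client.glance    # shows the key and the mon/osd caps
    ceph auth get client.cinder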

    (7) Generate the keyrings and copy them out (run on any one Ceph node only):

    ceph auth get-or-create client.glance > /etc/ceph/ceph.client.glance.keyring
    
    ceph auth get-or-create client.cinder > /etc/ceph/ceph.client.cinder.keyring

     

    Use scp to copy the keyrings to the other nodes (the Ceph cluster nodes and the OpenStack nodes that use Ceph, e.g. compute and storage nodes; this walkthrough is against an all-in-one environment, so copying them to the controller node is enough):

    [root@yunwei ceph]# ls
    ceph.client.admin.keyring  ceph.client.cinder.keyring  ceph.client.glance.keyring  ceph.conf  rbdmap  tmpR3uL7W
    
    [root@yunwei ceph]# scp ceph.client.glance.keyring ceph.client.cinder.keyring controller:/etc/ceph/

    (8) Change the ownership of the keyring files (run on all client nodes):

    chown glance:glance /etc/ceph/ceph.client.glance.keyring
    
    chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring

    (9) Set up the libvirt secret (run on every nova-compute node):

    uuidgen
    
    940f0485-e206-4b49-b878-dcd0cb9c70a4

    In the /etc/ceph/ directory (the directory itself does not matter; /etc/ceph just keeps things tidy):

    cat > secret.xml <<EOF
    
    <secret ephemeral='no' private='no'>
    
    <uuid>940f0485-e206-4b49-b878-dcd0cb9c70a4</uuid>
    
    <usage type='ceph'>
    
    <name>client.cinder secret</name>
    
    </usage>
    
    </secret>
    
    EOF

    Copy secret.xml to all compute nodes and run:

    virsh secret-define --file secret.xml
    
    ceph auth get-key client.cinder > ./client.cinder.key
    
    virsh secret-set-value --secret 940f0485-e206-4b49-b878-dcd0cb9c70a4 --base64 $(cat ./client.cinder.key)

    Afterwards, client.cinder.key and secret.xml are identical on every compute node. Write down the UUID generated earlier: 940f0485-e206-4b49-b878-dcd0cb9c70a4
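    To confirm the secret was stored correctly, the value held by libvirt can be compared with the key Ceph hands out; a quick check, not part of the original walkthrough:
    virsh secret-get-value 940f0485-e206-4b49-b878-dcd0cb9c70a4
    ceph auth get-key client.cinder    # the two outputs should match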

    If you hit an error like the following:

    [root@controller ceph]# virsh secret-define --file secret.xml
    error: Failed to set attributes from secret.xml
    error: internal error: a secret with UUID d448a6ee-60f3-42a3-b6fa-6ec69cab2378 is already defined for use with client.cinder secret
    
    [root@controller ~]# virsh secret-list
    UUID                                  Usage
    --------------------------------------------------------------------------------
    d448a6ee-60f3-42a3-b6fa-6ec69cab2378  ceph client.cinder secret
    
    [root@controller ~]# virsh secret-undefine d448a6ee-60f3-42a3-b6fa-6ec69cab2378
    Secret d448a6ee-60f3-42a3-b6fa-6ec69cab2378 deleted
    
    [root@controller ~]# virsh secret-list
    UUID                                  Usage
    --------------------------------------------------------------------------------
    
    [root@controller ceph]# virsh secret-define --file secret.xml
    Secret 940f0485-e206-4b49-b878-dcd0cb9c70a4 created
    
    [root@controller ~]# virsh secret-list
    UUID                                  Usage
    --------------------------------------------------------------------------------
    940f0485-e206-4b49-b878-dcd0cb9c70a4  ceph client.cinder secret
    
    virsh secret-set-value --secret 940f0485-e206-4b49-b878-dcd0cb9c70a4 --base64 $(cat ./client.cinder.key)

    (10) Configure Glance; make the following changes on all controller nodes:

    vim /etc/glance/glance-api.conf
    
    [DEFAULT]
    default_store = rbd
    [cors]
    [cors.subdomain]
    [database]
    connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    [glance_store]
    stores = rbd
    default_store = rbd
    rbd_store_pool = images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf
    rbd_store_chunk_size = 8
    [image_format]
    [keystone_authtoken]
    auth_uri = http://controller:5000
    auth_url = http://controller:35357
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = glance
    password = glance
    [matchmaker_redis]
    [oslo_concurrency]
    [oslo_messaging_amqp]
    [oslo_messaging_kafka]
    [oslo_messaging_notifications]
    [oslo_messaging_rabbit]
    [oslo_messaging_zmq]
    [oslo_middleware]
    [oslo_policy]
    [paste_deploy]
    flavor = keystone
    [profiler]
    [store_type_location_strategy]
    [task]
    [taskflow_executor]

    Then restart the Glance API service on all controller nodes:

    systemctl restart openstack-glance-api.service
    
    systemctl status openstack-glance-api.service

    Create an image to verify:

    [root@controller ~]# openstack image create "cirros" --file cirros-0.3.3-x86_64-disk.img --disk-format qcow2 --container-format bare --public
      
    [root@controller ~]# rbd ls images
    
    9ce5055e-4217-44b4-a237-e7b577a20dac

    If an image ID shows up in the output, the Glance integration is working.
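    For a closer look, the image object can be inspected directly in the pool using the ID returned above; an optional check, not part of the original walkthrough:
    rbd info images/9ce5055e-4217-44b4-a237-e7b577a20dac    # shows size, object count and rbd features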

    (11) Configure Cinder:

    vim /etc/cinder/cinder.conf
    
    [DEFAULT]
    my_ip = # IP address of this host
    glance_api_servers = http://controller:9292
    auth_strategy = keystone
    enabled_backends = ceph
    state_path = /var/lib/cinder
    transport_url = rabbit://openstack:admin@controller
    [backend]
    [barbican]
    [brcd_fabric_example]
    [cisco_fabric_example]
    [coordination]
    [cors]
    [cors.subdomain]
    [database]
    connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    [fc-zone-manager]
    [healthcheck]
    [key_manager]
    [keystone_authtoken]
    auth_uri = http://controller:5000
    auth_url = http://controller:35357
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = cinder
    password = cinder
    [matchmaker_redis]
    [oslo_concurrency]
    lock_path = /var/lib/cinder/tmp
    [oslo_messaging_amqp]
    [oslo_messaging_kafka]
    [oslo_messaging_notifications]
    [oslo_messaging_rabbit]
    [oslo_messaging_zmq]
    [oslo_middleware]
    [oslo_policy]
    [oslo_reports]
    [oslo_versionedobjects]
    [profiler]
    [ssl]
    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_flatten_volume_from_snapshot = false
    rbd_max_clone_depth = 5
    rbd_store_chunk_size = 4
    rados_connect_timeout = -1
    glance_api_version = 2
    rbd_user = cinder
    rbd_secret_uuid = 940f0485-e206-4b49-b878-dcd0cb9c70a4
    volume_backend_name=ceph

    Restart the Cinder services:

    # on the controller node
    systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
    # on the storage node
    systemctl restart openstack-cinder-volume.service
    
    systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service

    Create a volume to verify:
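    The volume can be created from the dashboard or from the CLI; a minimal sketch follows (the volume type named ceph, the volume name test-vol and the 1 GB size are illustrative assumptions, not taken from the original):
    # optional: a volume type bound to the backend defined by volume_backend_name=ceph
    openstack volume type create ceph
    openstack volume type set --property volume_backend_name=ceph ceph
    # create a 1 GB test volume with that type
    openstack volume create --size 1 --type ceph test-vol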

    [root@controller gfs]# rbd ls volumes
    
    volume-43b7c31d-a773-4604-8e4a-9ed78ec18996

    (12) Configure Nova:

    vim /etc/nova/nova.conf
    
    [DEFAULT]
    my_ip = # IP address of this host
    use_neutron = True
    firewall_driver = nova.virt.firewall.NoopFirewallDriver
    enabled_apis=osapi_compute,metadata
    transport_url = rabbit://openstack:admin@controller
    [api]
    auth_strategy = keystone
    [api_database]
    connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
    [barbican]
    [cache]
    [cells]
    [cinder]
    os_region_name = RegionOne
    [cloudpipe]
    [conductor]
    [console]
    [consoleauth]
    [cors]
    [cors.subdomain]
    [crypto]
    [database]
    connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
    [ephemeral_storage_encryption]
    [filter_scheduler]
    [glance]
    api_servers = http://controller:9292
    [guestfs]
    [healthcheck]
    [hyperv]
    [image_file_url]
    [ironic]
    [key_manager]
    [keystone_authtoken]
    auth_uri = http://controller:5000
    auth_url = http://controller:35357
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = nova
    password = nova
    [libvirt]
    virt_type=qemu
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = 940f0485-e206-4b49-b878-dcd0cb9c70a4
    [matchmaker_redis]
    [metrics]
    [mks]
    [neutron]
    url = http://controller:9696
    auth_url = http://controller:35357
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    region_name = RegionOne
    project_name = service
    username = neutron
    password = neutron
    service_metadata_proxy = true
    metadata_proxy_shared_secret = METADATA_SECRET
    [notifications]
    [osapi_v21]
    [oslo_concurrency]
    lock_path=/var/lib/nova/tmp
    [oslo_messaging_amqp]
    [oslo_messaging_kafka]
    [oslo_messaging_notifications]
    [oslo_messaging_rabbit]
    [oslo_messaging_zmq]
    [oslo_middleware]
    [oslo_policy]
    [pci]
    [placement]
    os_region_name = RegionOne
    auth_type = password
    auth_url = http://controller:35357/v3
    project_name = service
    project_domain_name = Default
    username = placement
    password = placement
    user_domain_name = Default
    [quota]
    [rdp]
    [remote_debug]
    [scheduler]
    [serial_console]
    [service_user]
    [spice]
    [ssl]
    [trusted_computing]
    [upgrade_levels]
    [vendordata_dynamic_auth]
    [vmware]
    [vnc]
    enabled=true
    vncserver_listen=$my_ip
    vncserver_proxyclient_address=$my_ip
    novncproxy_base_url = http://172.16.254.63:6080/vnc_auto.html
    [workarounds]
    [wsgi]
    [xenserver]
    [xvp]

    Restart the Nova services:

    # on the controller node
    systemctl restart openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-compute.service
    # on the compute nodes
    systemctl restart openstack-nova-compute.service
    # on the storage nodes
    systemctl restart openstack-nova-compute.service
    
    systemctl status openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-compute.service

    Launch an instance to verify:
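    An instance can be launched from the dashboard or from the CLI; a minimal sketch, where the flavor m1.tiny, the network name private and the instance name test-vm are illustrative assumptions:
    # boot a test instance from the cirros image uploaded earlier
    openstack server create --flavor m1.tiny --image cirros --network private test-vm
    # its ephemeral disk should appear in the vms pool as <instance-uuid>_disk
    rbd ls vms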

     
