  • Learning OpenStack (Part 1)

    I. Cloud Computing

    Characteristics of cloud computing: it is consumed over the network; it is elastic (pay-as-you-go, scale on demand); and it is transparent to users (users do not need to care how the back end is implemented).

    Categories of cloud computing: private cloud, public cloud (Amazon is the market leader; aliyun, tingyun, and tencentyun are other examples), and hybrid cloud.

    Layers of cloud computing:

    IaaS (Infrastructure as a Service): infrastructure resources such as servers, virtual machines, and networks

    PaaS (Platform as a Service): integrated system platforms such as web middleware and databases

    SaaS (Software as a Service): software delivered as a service, such as e-mail, antivirus, and cloud storage

    II. OpenStack

    1. Overview

    OpenStack is a free and open-source software project released under the Apache License, originally developed and launched jointly by NASA and Rackspace.

    OpenStack is an IaaS solution.

    2. Components

    The three core OpenStack components:

    nova (Compute service): the controller that manages virtual machine instances for users and configures their hardware specifications

    neutron (Networking service): provides network virtualization, creating virtual networks and isolated network segments

    cinder (Block Storage service): provides block storage, i.e. additional disks attached to virtual machines

    Other components:

    keystone (Identity service): provides authentication

    horizon (Dashboard): a web interface for managing the other services

    glance (Image service): supports multiple virtual machine image formats; images can be created, deleted, and edited

    Swift (Object Storage): suited to write-once, read-many workloads

    Heat (Orchestration): defines deployments in templates to enable automated deployment

    Supporting services:

    MySQL; RabbitMQ (the messaging hub through which the components communicate)

    In short:

    Nova manages compute resources and is a core service.

    Neutron manages network resources and is a core service.

    Glance provides OS images for VMs; it falls under storage and is a core service.

    Cinder provides block storage; VMs always need data disks, so it is a core service.

    Swift provides object storage; it is not required and is optional.

    Keystone provides authentication; OpenStack cannot run without it, so it is a core service.

    Ceilometer provides monitoring; it is not required and is optional.

    Horizon: everyone needs a management UI.

    III. Deploying OpenStack

    Base environment:

    node1.com  192.168.4.16
    node2.com  192.168.4.17
    
    Give the nodes consistent hostnames and /etc/hosts entries, disable iptables and SELinux, and synchronize time. CentOS yum repository: https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-6/

    1. Deploying KVM

    1. Install KVM
    yum install qemu-kvm qemu-kvm-tools virt-manager libvirt
    Management tools: qemu-kvm, qemu-kvm-tools
    VM management GUI: virt-manager
    Virtualization library/daemon: libvirt
    brctl show 
    
    2. Create a bridge interface
    vim bridge.sh
    #!/bin/bash
    brctl addbr br0
    brctl addif br0 eth0
    ip addr del 192.168.4.11/24 dev eth0
    ifconfig br0 192.168.4.11/24 up
    route add default gw 192.168.4.1
    Alternatively, make the bridge persistent by editing the network config files (see the sketch below).
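
    A minimal sketch of the persistent variant, assuming CentOS 6 network scripts and that eth0 is the NIC being bridged (adjust device names and addresses to your environment):

    # /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    ONBOOT=yes
    BRIDGE=br0

    # /etc/sysconfig/network-scripts/ifcfg-br0
    DEVICE=br0
    TYPE=Bridge
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=192.168.4.11
    NETMASK=255.255.255.0
    GATEWAY=192.168.4.1

    Run "service network restart" afterwards so the files take effect.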

    2. Deploying MySQL

    1. Install MySQL
    yum install mysql-server
    
    2. Edit the configuration file
    cp /usr/share/mysql/my-medium.cnf /etc/my.cnf
    vim /etc/my.cnf
    # Configure under the [mysqld] section
    default-storage-engine = innodb         # default storage engine: InnoDB
    innodb_file_per_table                   # one tablespace per table; without it, tables share a single tablespace
    collation-server = utf8_general_ci      # collation
    init-connect = 'SET NAMES utf8'         # connection character set
    character-set-server = utf8             # server/database character set
    max_connections = 4096                  # maximum number of connections
    bind-address                            # address MySQL listens on
    
    3. Start MySQL
    /etc/init.d/mysqld start
    
    4. Create the databases
    Create the keystone database and grant privileges
    mysql> create database keystone;
    mysql> grant all on keystone.* to keystone@'192.168.4.0/255.255.255.0' identified by 'keystone';
    
    Create the glance database and grant privileges
    mysql> create database glance;
    mysql> grant all on glance.* to glance@'192.168.4.0/255.255.255.0' identified by 'glance';
    Create the nova database and grant privileges
    mysql> create database nova;
    mysql> grant all on nova.* to nova@'192.168.4.0/255.255.255.0' identified by 'nova';
    Create the neutron database and grant privileges
    mysql> create database neutron;
    mysql> grant all on neutron.* to neutron@'192.168.4.0/255.255.255.0' identified by 'neutron';
    Create the cinder database and grant privileges
    mysql> create database cinder;
    mysql> grant all on cinder.* to cinder@'192.168.56.0/255.255.255.0' identified by 'cinder';
    List all databases
    mysql> show databases;
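
    Every service database above follows the same name/user/password pattern, so the whole set can also be created in one loop; a minimal sketch, assuming the root account and the 192.168.4.0/24 grant network used above:

    for svc in keystone glance nova neutron cinder; do
        mysql -u root -e "create database if not exists ${svc};
            grant all on ${svc}.* to ${svc}@'192.168.4.0/255.255.255.0' identified by '${svc}';"
    done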

    3. Deploying RabbitMQ

    1. Install RabbitMQ
    yum install rabbitmq-server
    
    2. Start the service
    /etc/init.d/rabbitmq-server start               # if the hostname cannot be resolved, the service fails to start
    
    3. Enable the web management plugin
    /usr/lib/rabbitmq/bin/rabbitmq-plugins list     # list the plugins RabbitMQ currently has
    /usr/lib/rabbitmq/bin/rabbitmq-plugins enable rabbitmq_management     # enable the management plugin

    RabbitMQ listens on port 5672; the web management interface uses ports 15672 and 55672.
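
    Two quick checks that the broker and the management plugin are actually up (guest/guest are RabbitMQ's default credentials; change them for anything beyond a lab setup):

    rabbitmqctl status                                          # broker status and listeners
    curl -u guest:guest http://127.0.0.1:15672/api/overview     # management HTTP API answers on 15672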

    4. Deploying Keystone

    yum install openstack-keystone python-keystoneclient

    Create the PKI tokens that Keystone will use

    keystone-manage pki_setup --keystone-user keystone --keystone-group keystone

    By default this generates the certificate directory and files under /etc/keystone/ssl/; the directory permissions then need to be set.

    chown -R keystone:keystone /etc/keystone/ssl/
    chmod -R o-rwx /etc/keystone/ssl/

    Configure Keystone's admin_token

     egrep -n "^[a-z]" /etc/keystone/keystone.conf 
    13:admin_token=ADMIN
    619:connection=mysql://keystone:keystone@192.168.1.36/keystone

    After configuring Keystone, sync the database; this creates Keystone's table structure.

    keystone-manage db_sync
    mysql -h 192.168.1.36 -u keystone -pkeystone -e "use keystone;show tables;"

    Configure Keystone's debug and logging options

    egrep -n '^[a-z]' /etc/keystone/keystone.conf
    374:debug=true
    439:log_file=/var/log/keystone/keystone.log

    Start the service

    chown -R keystone:keystone /var/log/keystone/*
    /etc/init.d/openstack-keystone start
    chkconfig openstack-keystone on

    Ports Keystone listens on

    netstat -lntup|egrep "35357|5000"

    The three main groups of keystone commands

    keystone --help|grep list
    keystone --help|grep create
    keystone --help|grep delete

    Define the admin_token environment variables

    export OS_SERVICE_TOKEN=ADMIN
    export OS_SERVICE_ENDPOINT=http://192.168.1.36:35357/v2.0
    keystone role-list

    Register Keystone users

    1. Create an admin user
    keystone user-create --name=admin --pass=admin --email=admin@example.com
    keystone user-list
    2. Create an admin role
    keystone role-create --name=admin
    keystone role-list
    3. Create an admin tenant
    keystone tenant-create --name=admin --description="Admin Tenant"
    keystone tenant-list
    4. Associate the user, tenant, and role
    keystone user-role-add --user=admin --tenant=admin --role=admin
    keystone user-role-list 

    Project users

    1. Create a demo user
    keystone user-create --name=demo --pass=demo
    2. Create a demo tenant
    keystone tenant-create --name=demo --description="demo Tenant"
    3. Associate them
    keystone user-role-add --user=demo --role=_member_ --tenant=demo
    4. Create a service tenant
    keystone tenant-create --name=service 
    5. Create the service and endpoint
    keystone service-create --name=keystone --type=identity
    keystone service-list
    keystone endpoint-create 
    > --service-id=$(keystone service-list | awk '/ identity / {print $2}') 
    > --publicurl=http://192.168.1.36:5000/v2.0 
    > --internalurl=http://192.168.1.36:5000/v2.0 
    > --adminurl=http://192.168.1.36:35357/v2.0
    unset OS_SERVICE_TOKEN
    unset OS_SERVICE_ENDPOINT
    keystone --os-username=admin --os-password=admin --os-tenant-name=admin --os-auth-url=http://192.168.1.36:35357/v2.0 token-get
    keystone endpoint-list    # matches what is stored in the database

    Create Keystone environment variable files (for convenience later)

    Environment variables for admin

    cat /root/keystone-admin 
    export OS_TENANT_NAME=admin
    export OS_USERNAME=admin
    export OS_PASSWORD=admin
    export OS_AUTH_URL=http://192.168.1.36:35357/v2.0

    Environment variables for demo

    cat keystone-demo
    export OS_TENANT_NAME=demo
    export OS_USERNAME=demo
    export OS_PASSWORD=demo
    export OS_AUTH_URL=http://192.168.1.36:35357/v2.0
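
    Using the files is then just a matter of sourcing the identity you want to act as; a quick check:

    source /root/keystone-admin
    keystone user-list      # works only with admin credentials
    source /root/keystone-demo
    keystone token-get      # the demo user can obtain a token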

    5. Deploying Glance

    Install

    yum install openstack-glance python-glanceclient python-crypto

    Configure

    egrep -n '^[a-z]' glance-api.conf 
     43:log_file=/var/log/glance/api.log
     564:connection=mysql://glance:glance@192.168.1.36/glance
    egrep -n '^[a-z]' glance-registry.conf 
     19:log_file=/var/log/glance/registry.log
     94:connection=mysql://glance:glance@192.168.1.36/glance

    Sync the database

    glance-manage db_sync
    mysql -h 192.168.1.36 -u glance -pglance -e"use glance;show tables;"

    Configure RabbitMQ for Glance

    egrep -n '^[a-z]' glance-api.conf 
    232:notifier_strategy = rabbit
    242:rabbit_host=192.168.1.36
    243:rabbit_port=5672
    244:rabbit_use_ssl=false
    245:rabbit_userid=guest
    246:rabbit_password=guest
    247:rabbit_virtual_host=/
    248:rabbit_notification_exchange=glance
    249:rabbit_notification_topic=notifications
    250:rabbit_durable_queues=False

    Connect Glance to Keystone for authentication

    1. Create the glance user in Keystone
    source keystone-admin
    keystone user-create --name=glance --pass=glance
    2. Associate it with the service tenant and admin role
    keystone user-role-add --user=glance --tenant=service --role=admin

    Configure Keystone settings for Glance

    egrep -n "^[a-z]" /etc/glance/glance-api.conf
    645:auth_host=192.168.1.36
    646:auth_port=35357
    647:auth_protocol=http
    648:admin_tenant_name=service
    649:admin_user=glance
    650:admin_password=glance
    660:flavor=keystone
    egrep -n "^[a-z]" /etc/glance/glance-registry.conf 
    175:auth_host=192.168.1.36
    176:auth_port=35357
    177:auth_protocol=http
    178:admin_tenant_name=service
    179:admin_user=glance
    180:admin_password=glance
    190:flavor=keystone

    Create the service and endpoint

    keystone service-create --name=glance --type=image
    keystone service-list
    keystone endpoint-create --service-id=$(keystone service-list | awk '/ image / {print $2}') 
    --publicurl=http://192.168.1.36:9292 --internalurl=http://192.168.1.36:9292 --adminurl=http://192.168.1.36:9292 keystone endpoint-list

    Start the services

    chown -R glance:glance  /var/log/glance/
    /etc/init.d/openstack-glance-api start
    /etc/init.d/openstack-glance-registry start

    Check the ports:

    netstat -lntup|egrep '9191|9292'
    #glance-registry: port 9191
    #glance-api: port 9292

    List the Glance images (Glance has only just started, so there are no images yet, but the command returning normally shows the service is working):

    glance image-list

    Download an image and register it

    wget http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
    glance image-create --name "cirros-0.3.2-x86_64" --disk-format qcow2 --container-format bare --is-public True --file cirros-0.3.2-x86_64-disk.img
    glance image-list
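
    It can be worth confirming that the image's on-disk format matches the --disk-format you pass, and that the upload registered cleanly; qemu-img comes with the qemu packages installed earlier:

    qemu-img info cirros-0.3.2-x86_64-disk.img    # "file format: qcow2" should match --disk-format
    glance image-show cirros-0.3.2-x86_64         # status should be "active"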

    6. Deploying Nova

    Install

    yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console 
    openstack-nova-novncproxy openstack-nova-scheduler python-novaclient

    Configure Nova

    egrep -n '^[a-z]' nova.conf 
    2475:connection=mysql://nova:nova@192.168.1.36/nova

    Sync the database

     nova-manage  db sync
     mysql -h 192.168.1.36 -u nova -pnova -e"use nova;show tables;"

    Configure RabbitMQ for Nova

    egrep -n '^[a-z]' nova.conf 
    79:rabbit_host=192.168.1.36
    83:rabbit_port=5672
    89:rabbit_use_ssl=false
    92:rabbit_userid=guest
    95:rabbit_password=guest
    189:rpc_backend=rabbit

    Configure Keystone for Nova

    Add the nova user
    source keystone-admin
    keystone user-create --name=nova --pass=nova
    keystone user-role-add --user=nova --tenant=service --role=admin
    keystone user-list
     egrep -n '^[a-z]' nova.conf 
    544:auth_strategy=keystone
    2687:auth_host=192.168.1.36
    2690:auth_port=35357
    2694:auth_protocol=http
    2697:auth_uri=http://192.168.1.36:500
    2701:auth_version=v2.0
    2728:admin_user=nova
    2731:admin_password=nova
    2735:admin_tenant_name=service

    Configure Glance for Nova

    egrep -n '^[a-z]' nova.conf
    253:my_ip=192.168.1.36
    1129:glance_host=$my_ip

    Nova's own settings

    egrep -n '^[a-z]' nova.conf 
    302:state_path=/var/lib/nova
    885:instances_path=$state_path/instances
    1576:lock_path=/var/lib/nova/tmp
    1951:compute_driver=libvirt.LibvirtDriver
    2036:novncproxy_base_url=http://192.168.1.36:6080/vnc_auto.html
    2044:vncserver_listen=0.0.0.0
    2048:vncserver_proxyclient_address=192.168.1.36
    2051:vnc_enabled=true
    2054:vnc_keymap=en-us

    All of Nova's changed settings

    egrep -n '^[a-z]' nova.conf 
    79:rabbit_host=192.168.1.36
    83:rabbit_port=5672
    89:rabbit_use_ssl=false
    92:rabbit_userid=guest
    95:rabbit_password=guest
    189:rpc_backend=rabbit
    253:my_ip=192.168.1.36
    302:state_path=/var/lib/nova
    544:auth_strategy=keystone
    885:instances_path=$state_path/instances
    1129:glance_host=$my_ip
    1576:lock_path=/var/lib/nova/tmp
    1951:compute_driver=libvirt.LibvirtDriver
    2036:novncproxy_base_url=http://192.168.1.36:6080/vnc_auto.html
    2044:vncserver_listen=0.0.0.0
    2048:vncserver_proxyclient_address=192.168.1.36
    2051:vnc_enabled=true
    2054:vnc_keymap=en-us
    2475:connection=mysql://nova:nova@192.168.1.36/nova
    2687:auth_host=192.168.1.36
    2690:auth_port=35357
    2694:auth_protocol=http
    2697:auth_uri=http://192.168.1.36:500
    2701:auth_version=v2.0
    2728:admin_user=nova
    2731:admin_password=nova
    2735:admin_tenant_name=service

    Create the service and endpoint

    source keystone-admin
    keystone service-create --name=nova --type=compute
    keystone endpoint-create --service-id=$(keystone service-list| awk ' / compute / {print $2}') 
    --publicurl=http://192.168.1.36:8774/v2/%(tenant_id)s --internalurl=http://192.168.1.36:8774/v2/%(tenant_id)s
    --adminurl=
    http://192.168.1.36:8774/v2/%(tenant_id)s

    Start the services

    for i in {api,cert,conductor,consoleauth,novncproxy,scheduler};do service openstack-nova-"$i" start;done
    nova host-list

    Deploying the compute node

    1. Install
    yum install -y qemu-kvm libvirt openstack-nova-compute python-novaclient
    2. Check whether the system supports KVM hardware virtualization
    egrep -c '(vmx|svm)' /proc/cpuinfo 
    If it returns 0, hardware virtualization is not supported; configure libvirt to use QEMU instead of KVM
    Edit /etc/nova/nova.conf
    virt_type=qemu
    3. Push the config file from the controller to the compute node (steps 3 and 4 can also be scripted; see the sketch below)
    scp /etc/nova/nova.conf 192.168.1.37:/etc/nova/
    4. Adjust the config
    egrep -n "^[a-z]" /etc/nova/nova.conf
    2048:vncserver_proxyclient_address=192.168.1.37    # change to the compute node's IP
    5. Start the services
    /etc/init.d/libvirtd start
    /etc/init.d/messagebus start
    /etc/init.d/openstack-nova-compute start
    6. On the controller, check that the compute node has registered
    nova host-list
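
    If the openstack-utils package is installed, the per-node edits can be scripted with openstack-config instead of editing nova.conf by hand; a sketch, assuming Icehouse's option locations ([DEFAULT] for the VNC proxy address, [libvirt] for virt_type):

    # on the compute node, after copying nova.conf from the controller
    openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.1.37
    openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu    # only when the CPU lacks vmx/svm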

    7. Deploying Neutron

    Install

    yum install openstack-neutron openstack-neutron-ml2 python-neutronclient openstack-neutron-linuxbridge

    Basic configuration

    egrep -n '^[a-z]' /etc/neutron/neutron.conf
    6:debug = true
    10:state_path = /var/lib/neutron
    13:lock_path = $state_path/lock
    53:core_plugin = ml2
    62 service_plugins = router,firewall,lbaas
    385:root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf

    Configure MySQL for Neutron

    egrep -n '^[a-z]' /etc/neutron/neutron.conf
    405:connection = mysql://neutron:neutron@192.168.1.36:3306/neutron

    Create the neutron user

    source keystone-admin
    keystone user-create --name neutron --pass neutron
    keystone user-role-add --user neutron --tenant service --role admin       

    Configure Keystone for Neutron

    egrep -n '^[a-z]' /etc/neutron/neutron.conf                                   
    66:api_paste_config = /usr/share/neutron/api-paste.ini
    70:auth_strategy = keystone
    395:auth_host = 192.168.1.36
    396:auth_port = 35357
    397:auth_protocol = http
    398:admin_tenant_name = service
    399:admin_user = neutron
    400:admin_password = neutron

    Configure RabbitMQ for Neutron

    egrep -n '^[a-z]' /etc/neutron/neutron.conf
    134:rabbit_host = 192.168.1.36
    136:rabbit_password = guest
    138:rabbit_port = 5672
    143:rabbit_userid = guest
    145:rabbit_virtual_host = /

    Configure Nova settings in Neutron

    egrep -n '^[a-z]' /etc/neutron/neutron.conf
    299:notify_nova_on_port_status_changes = true
    303:notify_nova_on_port_data_changes = true
    306:nova_url = http://192.168.1.36:8774/v2
    312:nova_admin_username = nova
    315:nova_admin_tenant_id = 628660545a044ac4ac5c1a16ca7f4a2c
    318:nova_admin_password = nova
    321:nova_admin_auth_url = http://192.168.1.36:35357/v2.0
    Note: where the ID on line 315 comes from:
    keystone tenant-list 
    It is the ID of the service tenant; fill that value into nova_admin_tenant_id (a scripted lookup is sketched below).
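
    To avoid copying the ID by hand, it can be pulled out with the same awk pattern this walkthrough already uses for service IDs; a small sketch:

    keystone tenant-list | awk '/ service / {print $2}'    # prints the service tenant ID for nova_admin_tenant_id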

    Configure the ml2 file

    egrep -n '^[a-z]' /etc/neutron/plugins/ml2/ml2_conf.ini    
    5:type_drivers = flat,vlan,gre,vxlan
    12:tenant_network_types = flat,vlan,gre,vxlan
    17:mechanism_drivers = linuxbridge,openvswitch
    29:flat_networks = physnet1
    62:enable_security_group = True

    Configure the linuxbridge file

    egrep -n '^[a-z]' /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini
    20:network_vlan_ranges = physnet1
    31:physical_interface_mappings = physnet1:eth0
    74:firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    79:enable_security_group = True

    Configure Neutron settings in the Nova service

    egrep -n '^[a-z]'  /etc/nova/nova.conf
    1200 network_api_class=nova.network.neutronv2.api.API
    1321 linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterfaceDriver
    1466 neutron_url=http://192.168.1.36:9696
    1474 neutron_admin_username=neutron
    1478 neutron_admin_password=neutron
    1482 neutron_admin_tenant_id=628660545a044ac4ac5c1a16ca7f4a2c
    1488 neutron_admin_tenant_name=service
    1496 neutron_admin_auth_url=http://192.168.1.36:5000/v2.0
    1503 neutron_auth_strategy=keystone
    1536 security_group_api=neutron
    1982 firewall_driver=nova.virt.libvirt.firewall.NoopFirewallDriver
    2872 vif_driver=nova.virt.libvirt.vif.NeutronLinuxBridgeVIFDriver

    Restart the services

    for i in {api,conductor,scheduler}; do service openstack-nova-"$i" restart;done

    Push the config file to the compute node

    scp /etc/nova/nova.conf 192.168.1.37:/etc/nova/
    vim /etc/nova/nova.conf
    vncserver_proxyclient_address=192.168.1.37    # change to the compute node's IP address
    /etc/init.d/openstack-nova-compute restart    # restart the service

    Create the service and endpoint

    keystone service-create --name neutron --type network
    keystone endpoint-create --service-id=$(keystone service-list | awk '/ network / {print $2}') 
    --publicurl=http://192.168.1.36:9696 --internalurl=http://192.168.1.36:9696 --adminurl=http://192.168.1.36:9696

    Test-start Neutron

    neutron-server --config-file=/etc/neutron/neutron.conf  --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini  
    --config-file=/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini
    If there are no errors and you can see the listening port, the service can start successfully.

    Modify the Neutron init scripts

    Modify the neutron-server init script

    vim /etc/init.d/neutron-server
    # lines 15-17
    "/usr/share/$prog/$prog-dist.conf" 
        "/etc/$prog/$prog.conf" 
        "/etc/$prog/plugin.ini" 
    # change the above to the following:
        "/etc/neutron/neutron.conf" 
        "/etc/neutron/plugins/ml2/ml2_conf.ini" 
        "/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini" 

    Modify the neutron-linuxbridge-agent init script

    vim /etc/init.d/neutron-linuxbridge-agent
    # lines 16-18
        "/usr/share/$prog/$prog-dist.conf" 
        "/etc/$prog/$prog.conf" 
        "/etc/$prog/plugin.ini" 
    # change the above to the following:
        "/etc/neutron/neutron.conf" 
        "/etc/neutron/plugins/ml2/ml2_conf.ini" 
        "/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini" 

    Start the Neutron services

    /etc/init.d/neutron-server start
    /etc/init.d/neutron-linuxbridge-agent start
    
    Check the port
    netstat -lntup|grep 9696

    Check the Neutron agent list

     neutron agent-list

    Deploying Neutron on the compute node

    1. Install
    yum install openstack-neutron openstack-neutron-ml2 python-neutronclient openstack-neutron-linuxbridge
    2. Copy the Neutron config files from the controller
    scp /etc/neutron/neutron.conf  192.168.1.37:/etc/neutron/
    scp /etc/neutron/plugins/ml2/ml2_conf.ini  192.168.1.37:/etc/neutron/plugins/ml2/
    scp /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini  192.168.1.37:/etc/neutron/plugins/linuxbridge/
    scp /etc/init.d/neutron-*  192.168.1.37:/etc/init.d/
    3. Start the Neutron agent
    /etc/init.d/neutron-linuxbridge-agent start
    4. Check from the controller
    neutron agent-list

    8. Deploying Horizon

    Install

    yum install -y httpd mod_wsgi memcached python-memcached openstack-dashboard

    Start memcached

    /etc/init.d/memcached start

    Configure the dashboard

    vim /etc/openstack-dashboard/local_settings
    1. Enable memcached; it is commented out by default, so just uncomment the block
    98 CACHES = {
    99    'default': {
    100        'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
    101        'LOCATION' : '127.0.0.1:11211',
    102    }
    103 }
    2. Set the Keystone address
    128 OPENSTACK_HOST = "192.168.1.36"
    3. Add the allowed hosts
    15 ALLOWED_HOSTS = ['horizon.example.com', 'localhost', '192.168.1.36']

    Start Apache

    /etc/init.d/httpd start

    Access the dashboard

    http://192.168.1.36/dashboard/

    Create a network

    Get the demo tenant ID
    keystone tenant-list
    Create the network
    neutron net-create --tenant-id c4015c47e46f4b30bf68a6f39061ace3 flat_net --shared --provider:network_type flat --provider:physical_network physnet1
    List the created network
    neutron net-list
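
    The tenant ID can also be looked up inline instead of pasting it in; a sketch using the same awk pattern as above:

    neutron net-create --tenant-id $(keystone tenant-list | awk '/ demo / {print $2}') flat_net \
        --shared --provider:network_type flat --provider:physical_network physnet1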

    Create a subnet

    Admin --> System Panel --> Networks --> click the network name (flat_net) --> Create Subnet: subnet name flat_subnet, network address 10.96.20.0/24,
    IP version IPv4, gateway IP 10.96.20.1 --> Next --> subnet details: allocation pool 10.96.20.120,10.96.20.130; DNS name server 123.125.81.6 --> Create
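
    The same subnet can be created from the CLI rather than the dashboard; a sketch with the values above (flag names as in the Icehouse neutron client):

    neutron subnet-create --name flat_subnet --gateway 10.96.20.1 \
        --allocation-pool start=10.96.20.120,end=10.96.20.130 \
        --dns-nameserver 123.125.81.6 flat_net 10.96.20.0/24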

    Create a virtual machine

    Log in as the demo user:
    Project --> Compute --> Instances --> Launch Instance --> instance name demo,
    flavor m1.tiny, boot source "Boot from image", image name cirros-0.3.4-x86_64 (12.7MB) --> Launch
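
    The CLI equivalent, as a sketch, assuming the cirros image registered earlier and the flat_net network created above:

    nova boot --flavor m1.tiny --image cirros-0.3.2-x86_64 \
        --nic net-id=$(neutron net-list | awk '/ flat_net / {print $2}') demo
    nova list    # wait for the instance to reach ACTIVE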

    9. Deploying Cinder

    Install

    yum install openstack-cinder python-cinderclient

    Configure

    egrep '^[a-z]' /etc/cinder/cinder.conf  -n
    79:rabbit_host=192.168.1.36
    83:rabbit_port=5672
    89:rabbit_use_ssl=false
    92:rabbit_userid=guest
    95:rabbit_password=guest
    181:rpc_backend=rabbit
    456:my_ip=192.168.1.36
    459:glance_host=$my_ip
    573:auth_strategy=keystone
    727:debug=true
    1908:connection=mysql://cinder:cinder@192.168.1.36/cinder
    2013:auth_host=192.168.1.36
    2017:auth_port=35357
    2021:auth_protocol=http
    2024:auth_uri=http://192.168.1.36:5000
    2029:identity_uri=http://192.168.1.36:35357/
    2033:auth_version=v2.0
    2057:admin_user=cinder
    2060:admin_password=cinder
    2064:admin_tenant_name=service

    Sync the database

    cinder-manage db sync
    mysql -h 192.168.1.36 -u cinder -pcinder -e 'use cinder;show tables;'

    Register with Keystone

    keystone user-create --name=cinder --pass=cinder
    keystone user-role-add --user=cinder --tenant=service --role=admin
    keystone service-create --name=cinder --type=volume
    keystone endpoint-create --service-id=e7e5fdadbe874485b3225c8a833f229e --publicurl=http://192.168.1.36:8776/v1/%(tenant_id)s --internalurl=http://192.168.1.36:8776/v1/%(tenant_id)s --adminurl=http://192.168.1.36:8776/v1/%(tenant_id)s
    keystone service-create --name=cinderv2 --type=volumev2
    keystone endpoint-create --service-id=aee6b0eac6ed49f08fd2cebda1cb71d7 --publicurl=http://192.168.1.36:8776/v2/%(tenant_id)s --internalurl=http://192.168.1.36:8776/v2/%(tenant_id)s --adminurl=http://192.168.1.36:8776/v2/%(tenant_id)s
    keystone service-list
    cinder service-list
    keystone endpoint-list

    Start the services

    /etc/init.d/openstack-cinder-api start
    /etc/init.d/openstack-cinder-scheduler start

    Deploying Cinder on the compute node

    1. Set up the iSCSI environment
    pvcreate /dev/sdb
    vgcreate cinder-volumes /dev/sdb
    
    vim /etc/lvm/lvm.conf
    Add inside the devices{} section:
    filter = [ "a/sda1/", "a/sdb/", "r/.*/" ]
    
    yum install -y scsi-target-utils
    vim /etc/tgt/targets.conf
    include /etc/cinder/volumes/* 
    /etc/init.d/tgtd start
    
    2. Set up the Cinder environment
    yum install openstack-cinder
    scp /etc/cinder/cinder.conf 192.168.1.37:/etc/cinder/
    egrep '^[a-z]' /etc/cinder/cinder.conf
    # iSCSI settings
    957 iscsi_ip_address=$my_ip
    970 volume_backend_name=iSCSI-Storage
    991 iscsi_helper=tgtadm
    1836 volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

    3. Start the service
    /etc/init.d/openstack-cinder-volume start
    
    4. Check from the controller
    cinder service-list

    Create the iSCSI volume type

    cinder type-create iSCSI
    cinder type-key iSCSI set volume_backend_name=iSCSI-Storage
    cinder type-list

    Create an iSCSI-type volume

    Path: Project > Compute > Volumes > Create Volume
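
    The CLI equivalent, as a sketch, creating a 1 GB volume of the type defined above (the volume name is arbitrary):

    cinder create --volume-type iSCSI --display-name test-iscsi 1
    cinder list    # the volume should become "available"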

    On the compute node, inspect the created volume:

    lvdisplay 

    Cinder NFS volumes (deployed on the controller)

    Set up the NFS environment

    yum install  nfs-utils rpcbind
    mkdir -p /data/nfs
    vim /etc/exports
    /data/nfs *(rw,no_root_squash)
    /etc/init.d/rpcbind start
    /etc/init.d/nfs start
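
    Before pointing Cinder at the export, a quick check that it is actually visible (showmount ships with nfs-utils):

    showmount -e 192.168.1.36    # should list /data/nfs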

    Configure Cinder

    vim   /etc/cinder/cinder.conf
    970  volume_backend_name=NFS-Storage
    1492 nfs_shares_config=/etc/cinder/nfs_shares
    1511 nfs_mount_point_base=$state_path/mnt
    1837 volume_driver=cinder.volume.drivers.nfs.NfsDriver
    
    vim /etc/cinder/nfs_shares
    192.168.1.36:/data/nfs

    Restart cinder-volume

    /etc/init.d/openstack-cinder-volume restart

    Check from the controller

    cinder service-list

    Create the NFS volume type

    cinder type-create NFS
    cinder type-key NFS set volume_backend_name=NFS-Storage
    cinder type-list 

    Create an NFS-type volume

    Path: Project > Compute > Volumes > Create Volume

    Check the created volume

     mount

    Cinder GlusterFS volumes

    Set up the GlusterFS environment

    Both the controller and the compute node need to be installed and configured

    1. Install
    vim /etc/yum.repos.d/gluster.repo
    https://buildlogs.centos.org/centos/6/storage/x86_64/gluster-3.6/
    yum install glusterfs-server
    /etc/init.d/glusterd start
    mkdir -p /data/glusterfs/exp1
    2. Create the trusted storage pool
    gluster peer probe 192.168.1.36
    3. Create the volume
    gluster volume create cinder-volome01 replica 2 192.168.1.36:/data/glusterfs/exp1/ 192.168.1.37:/data/glusterfs/exp1 force
    4. Start the volume
    gluster vol start cinder-volome01
    5. Check the volume
    gluster vol info

    Configure Cinder

    egrep -n '^[a-z]'  /etc/cinder/cinder.conf
    1104 glusterfs_shares_config=/etc/cinder/glusterfs_shares
    
    vim /etc/cinder/glusterfs_shares
    192.168.1.36:/cinder-volome01

    Use GlusterFS and NFS back ends together (how to support multiple storage back ends at once)

    vim /etc/cinder/cinder.conf
    # comment out the following NFS settings:
    970  #volume_backend_name=NFS-Storage
    1837 #volume_driver=cinder.volume.drivers.nfs.NfsDriver
    
    # modify/add the following setting:
    578 enabled_backends=NFS_Driver,GlusterFS_Driver
    
    # append to the end of the file:
    [NFS_Driver]
    volume_group=NFS_Driver
    volume_driver=cinder.volume.drivers.nfs.NfsDriver
    volume_backend_name=NFS-Storage
    [GlusterFS_Driver]
    volume_group=GlusterFS_Driver
    volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver
    volume_backend_name=GlusterFS-Storage

    Restart cinder-volume

     /etc/init.d/openstack-cinder-volume restart

    Create the GlusterFS volume type

    cinder type-create GlusterFS
    cinder type-key GlusterFS set volume_backend_name=GlusterFS-Storage
    cinder type-list

    Create a GlusterFS-type volume

    Path: Project > Compute > Volumes > Create Volume

    10. Deploying Load Balancing as a Service (LBaaS)

    1) Enable the LBaaS menu in the dashboard

    vim /etc/openstack-dashboard/local_settings
    OPENSTACK_NEUTRON_NETWORK = {
        'enable_lb': True,
     }
    Change the original False to True (note the capitalization).

    2) Restart the dashboard service

    /etc/init.d/httpd restart

    3) Install the haproxy service

    yum install haproxy

    4) Modify the Neutron config file

    vim /etc/neutron/lbaas_agent.ini
    interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
    device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver

    5) Install namespace support
    
    ip netns list   # if this runs without error, namespaces are supported and nothing more needs to be installed
    yum update iproute
    
    or: rpm -ivh --replacefiles http://www.rendoumi.com/soft/iproute-2.6.32-130.el6ost.netns.2.x86_64.rpm  

    6) Modify the LBaaS init script

    vim /etc/init.d/neutron-lbaas-agent
    configs=(
        "/etc/neutron/neutron.conf" 
        "/etc/neutron/lbaas_agent.ini"

    7) Start the LBaaS service

    /etc/init.d/neutron-lbaas-agent start

    8) Add a load balancer in the web UI

    ip netns list    # list the namespaces
    ip netns exec qlbaas-6104510d-cf14-4608-8c9f-9e7841b1a918 netstat -antp   # shows haproxy's listening port
    ip netns exec qlbaas-6104510d-cf14-4608-8c9f-9e7841b1a918 ip add          # check the VIP
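
    The same load balancer can also be built with the LBaaS v1 CLI instead of the dashboard; a minimal sketch, assuming the flat_subnet created earlier and two hypothetical back-end member IPs:

    SUBNET_ID=$(neutron subnet-list | awk '/ flat_subnet / {print $2}')
    neutron lb-pool-create --name web-pool --lb-method ROUND_ROBIN --protocol HTTP --subnet-id $SUBNET_ID
    neutron lb-member-create --address 10.96.20.121 --protocol-port 80 web-pool    # hypothetical member
    neutron lb-member-create --address 10.96.20.122 --protocol-port 80 web-pool    # hypothetical member
    neutron lb-vip-create --name web-vip --protocol-port 80 --protocol HTTP --subnet-id $SUBNET_ID web-pool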

    Automated OpenStack deployment with SaltStack

    https://github.com/unixhot/salt-openstack
