  • Setting Up a GFS Share with RHCS, Part 2

    In the previous part we set up GFS through the graphical interface; here we do the same setup from the command-line interface.

    1.1.       Basic System Configuration

    All five nodes use the same configuration.

    1. Configure the /etc/hosts file

      # vi /etc/hosts

      127.0.0.1  localhost localhost.localdomain localhost4 localhost4.localdomain4
      ::1        localhost localhost.localdomain localhost6 localhost6.localdomain6
      192.168.1.130 t-lg-kvm-001
      192.168.1.132 t-lg-kvm-002
      192.168.1.134 t-lg-kvm-003
      192.168.1.138 t-lg-kvm-005
      192.168.1.140 t-lg-kvm-006
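
      After saving the file, an optional sanity check (not part of the original write-up) is to confirm that every hostname resolves and responds from each node:

      # for h in t-lg-kvm-001 t-lg-kvm-002 t-lg-kvm-003 t-lg-kvm-005 t-lg-kvm-006; do ping -c 1 $h; done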

    2. Network settings

      Stop and disable NetworkManager:

      # service NetworkManager stop

      # chkconfig NetworkManager off

    3. Disable SELinux

      Edit /etc/selinux/config and set SELINUX=disabled:

      # cat /etc/selinux/config

       

      # This file controls the state of SELinux on the system.
      # SELINUX= can take one of these three values:
      #     enforcing - SELinux security policy is enforced.
      #     permissive - SELinux prints warnings instead of enforcing.
      #     disabled - No SELinux policy is loaded.
      SELINUX=disabled
      # SELINUXTYPE= can take one of these two values:
      #     targeted - Targeted processes are protected,
      #     mls - Multi Level Security protection.
      SELINUXTYPE=targeted

      Make the change take effect immediately:

      # setenforce 0
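
      To verify, getenforce (part of the SELinux utilities) should now report Permissive, and Disabled after the next reboot:

      # getenforce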

    4. Configure time synchronization

      Time synchronization is already configured on all five nodes.
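
      Assuming the nodes run ntpd (the original does not name the sync mechanism), the peer offsets can be spot-checked with:

      # ntpq -p    # the peer marked with * is the selected time source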

    1.2.      Configure yum

    The GFS2-related packages are shipped on the CentOS installation media; proceed as follows:

    1. On 192.168.1.130, mount the ISO files:

    # mount -o loop /opt/CentOS-6.5-x86_64-bin-DVD1.iso /var/www/html/DVD1

    # mount -o loop /opt/CentOS-6.5-x86_64-bin-DVD2.iso /var/www/html/DVD2

    2. On 192.168.1.130, edit /etc/yum.repos.d/CentOS-Media.repo:

    # vi /etc/yum.repos.d/CentOS-Media.repo

    [c6-media]
    name=CentOS-$releasever - Media
    baseurl=file:///var/www/html/DVD1
            file:///var/www/html/DVD2
    gpgcheck=0
    enabled=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6

    3. On 192.168.1.130, start the httpd service so the other compute nodes can reach the repository:

    # service httpd start
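
    As an optional check, confirm the repository is reachable over HTTP before configuring the other nodes (repodata/ sits at the root of the CentOS DVDs):

    # curl -sI http://192.168.1.130/DVD1/repodata/repomd.xml | head -n 1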

    4. Configure yum on the other four compute nodes:

    # vi /etc/yum.repos.d/CentOS-Media.repo

    [c6-media]
    name=CentOS-$releasever - Media
    baseurl=http://192.168.1.130/DVD1
            http://192.168.1.130/DVD2
    gpgcheck=0
    enabled=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
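
    On each of the four nodes, a quick optional check confirms yum can see the new repository:

    # yum clean all
    # yum repolist | grep c6-media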

    1.3.      Install the GFS2-Related Software

    1.3.1.  Install the packages

    Run the following commands on each of the five compute nodes to install the GFS2 software:

    Install cman and rgmanager:

    # yum install -y rgmanager cman

    Install clvm:

    # yum install -y lvm2-cluster

    Install gfs2:

    # yum install -y gfs*

    1.3.2.  Configure firewall rules

    Run the following commands on each of the five compute nodes to open the ports the cluster needs:

    # iptables -A INPUT -p udp -m udp --dport 5404 -j ACCEPT
    # iptables -A INPUT -p udp -m udp --dport 5405 -j ACCEPT
    # iptables -A INPUT -p tcp -m tcp --dport 21064 -j ACCEPT
    # service iptables save

    After the steps above are complete, it is recommended to reboot the compute nodes; otherwise the cman service may fail to start.
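
    For reference: ports 5404 and 5405/udp carry the corosync/cman cluster heartbeat, and 21064/tcp is used by dlm, the distributed lock manager that GFS2 depends on. After saving, you can confirm the rules persisted:

    # iptables -L INPUT -n | grep -E '5404|5405|21064'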

     

    1.4.      Configure the cman/rgmanager Cluster

    The cluster only needs to be configured on one compute node; the configuration is then synchronized to the other nodes. For example, on 192.168.1.130:

    1. Create the cluster

    On 192.168.1.130, run:

    root@t-lg-kvm-001:/# ccs_tool create kvmcluster

    2. Configure the cluster nodes

    There are six compute nodes in total, but one is temporarily out of service because of a NIC problem, so only five nodes appear in this configuration. Add the compute nodes to the cluster on 192.168.1.130:

    root@t-lg-kvm-001:/# ccs_tool addnode -n 1 t-lg-kvm-001
    root@t-lg-kvm-001:/# ccs_tool addnode -n 2 t-lg-kvm-002
    root@t-lg-kvm-001:/# ccs_tool addnode -n 3 t-lg-kvm-003
    root@t-lg-kvm-001:/# ccs_tool addnode -n 4 t-lg-kvm-005
    root@t-lg-kvm-001:/# ccs_tool addnode -n 5 t-lg-kvm-006

    View the cluster:

    root@t-lg-kvm-001:/root# ccs_tool lsnode

     

    Cluster name: kvmcluster, config_version: 24

    Nodename                        Votes Nodeid Fencetype
    t-lg-kvm-001                       1    1
    t-lg-kvm-002                       1    2
    t-lg-kvm-003                       1    3
    t-lg-kvm-005                       1    4
    t-lg-kvm-006                       1    5

    3. Synchronize the configuration file from 192.168.1.130 to the other nodes

    root@t-lg-kvm-001:/# scp /etc/cluster/cluster.conf 192.168.1.132:/etc/cluster/
    root@t-lg-kvm-001:/# scp /etc/cluster/cluster.conf 192.168.1.134:/etc/cluster/
    root@t-lg-kvm-001:/# scp /etc/cluster/cluster.conf 192.168.1.138:/etc/cluster/
    root@t-lg-kvm-001:/# scp /etc/cluster/cluster.conf 192.168.1.140:/etc/cluster/

    4. Start the cman service on each node

    Run on all five compute nodes:

    # service cman start

    The cluster configuration is now complete. Before moving on to CLVM, it is worth confirming that every node joined and the cluster is quorate, as shown below.
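
    cman ships two query tools for this check:

    # cman_tool status    # check the "Nodes:" and "Quorum:" lines
    # cman_tool nodes     # each member should show status "M"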

    1.5.      Configure CLVM

    1. Enable clustered LVM

      Run the following command on every node in the cluster to enable clustered LVM:

      # lvmconf --enable-cluster

      Verify that clustered LVM is enabled:

      # cat /etc/lvm/lvm.conf | grep "locking_type = 3"

      locking_type = 3

      If the command returns locking_type = 3, clustered LVM is enabled.

    2. Start the clvmd service

      Start the clvmd service on each node:

      # service clvmd start

    3. Create the LVM volumes on a cluster node

      This step only needs to be run on one node, for example on 192.168.1.130:

      Inspect the shared storage:

      # fdisk -l

       

      Disk /dev/sda: 599.0 GB, 598999040000 bytes
      255 heads, 63 sectors/track, 72824 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x000de0e7

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *          1          66      524288   83  Linux
      Partition 1 does not end on cylinder boundary.
      /dev/sda2              66       72825   584434688   8e  Linux LVM

      Disk /dev/mapper/vg01-lv01: 53.7 GB, 53687091200 bytes
      255 heads, 63 sectors/track, 6527 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

      Disk /dev/mapper/vg01-lv_swap: 537.7 GB, 537676218368 bytes
      255 heads, 63 sectors/track, 65368 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdb: 1073.7 GB, 1073741824000 bytes
      255 heads, 63 sectors/track, 130541 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

      (/dev/sdc, /dev/sdd, /dev/sde, /dev/sdf and /dev/sdg report identical geometry: 1073.7 GB, 1073741824000 bytes each.)

      Disk /dev/mapper/vg01-lv_bmc: 5368 MB, 5368709120 bytes
      255 heads, 63 sectors/track, 652 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

      There are six 1 TB LUNs in total (/dev/sdb through /dev/sdg).

      Create the clustered physical volumes:

      root@t-lg-kvm-001:/root# pvcreate /dev/sdb
      root@t-lg-kvm-001:/root# pvcreate /dev/sdc
      root@t-lg-kvm-001:/root# pvcreate /dev/sdd
      root@t-lg-kvm-001:/root# pvcreate /dev/sde
      root@t-lg-kvm-001:/root# pvcreate /dev/sdf
      root@t-lg-kvm-001:/root# pvcreate /dev/sdg

      Create the clustered volume group:

      root@t-lg-kvm-001:/root# vgcreate kvmvg /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
        Clustered volume group "kvmvg" successfully created

      root@t-lg-kvm-001:/root# vgs
        VG    #PV #LV #SN Attr   VSize   VFree
        kvmvg   6   0   0 wz--nc   5.86t  5.86t
        vg01    1   3   0 wz--n- 557.36g  1.61g

      Create the clustered logical volume:

      root@t-lg-kvm-001:/root# lvcreate -L 5998G -n kvmlv kvmvg
        Logical volume "kvmlv" created

      root@t-lg-kvm-001:/root# lvs
        LV      VG    Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
        kvmlv   kvmvg -wi-a-----   5.86t
        lv01    vg01  -wi-ao----  50.00g
        lv_bmc  vg01  -wi-ao----   5.00g
        lv_swap vg01  -wi-ao---- 500.75g

      The clustered logical volume is now created. Once it has been created on one node, it is visible on every other node; log in to any other node and run lvs to verify.
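
      For example, assuming passwordless ssh between the nodes (not otherwise required by this procedure), the check can be run from 192.168.1.130 in one pass:

      # for h in t-lg-kvm-002 t-lg-kvm-003 t-lg-kvm-005 t-lg-kvm-006; do ssh $h lvs kvmvg; done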

    1.6.      Configure GFS2

    1. Format the logical volume with the cluster file system

    This only needs to be run on one machine, for example on 192.168.1.130. Note that the value after -t is the cluster name (which must match cluster.conf) followed by a file-system name of your choosing:

    root@t-lg-kvm-001:/root# mkfs.gfs2 -j 7 -p lock_dlm -t kvmcluster:sharedstorage /dev/kvmvg/kvmlv

    This will destroy any data on /dev/kvmvg/kvmlv.
    It appears to contain: symbolic link to `../dm-3'

    Are you sure you want to proceed? [y/n] y

     

    Device:                    /dev/kvmvg/kvmlv
    Blocksize:                 4096
    Device Size                5998.00 GB (1572339712 blocks)
    Filesystem Size:           5998.00 GB (1572339710 blocks)
    Journals:                  7
    Resource Groups:           7998
    Locking Protocol:          "lock_dlm"
    Lock Table:                "kvmcluster:sharedstorage"
    UUID:                      39f35f4a-e42a-164f-9438-967679e48f9f
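
    A note on -j 7: GFS2 requires one journal per node that mounts the file system, so five journals cover the current nodes and two are spares for growth. If the cluster later exceeds seven nodes, journals can be added to the mounted file system with gfs2_jadd (standard gfs2-utils, not part of the original steps):

    # gfs2_jadd -j 2 /openstack/instances    # add two more journals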

    2. Mount the cluster file system at /openstack/instances

       The mount command must be run on every node in the cluster:

    # mount -t gfs2 /dev/kvmvg/kvmlv /openstack/instances/

    Check the mount:

    # df -h

    Filesystem               Size  Used Avail Use% Mounted on
    /dev/mapper/vg01-lv01     50G   12G   35G  26% /
    tmpfs                    379G   29M  379G   1% /dev/shm
    /dev/mapper/vg01-lv_bmc  5.0G  138M  4.6G   3% /bmc
    /dev/sda1                504M   47M  433M  10% /boot
    /dev/mapper/kvmvg-kvmlv  5.9T  906M  5.9T   1% /openstack/instances

    Configure automatic mounting at boot:

    # echo "/dev/kvmvg/kvmlv /openstack/instances gfs2 defaults 0 0" >> /etc/fstab

    Start the rgmanager service:

    # service rgmanager start

    Enable the services at boot:

    # chkconfig clvmd on
    # chkconfig cman on
    # chkconfig rgmanager on
    # chkconfig gfs2 on
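
    A quick way to confirm the runlevel settings took effect:

    # chkconfig --list | egrep 'cman|clvmd|rgmanager|gfs2'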

    3. Set ownership of the mount directory

    Because the mounted directory is used by OpenStack to store virtual machines, its ownership must be set to nova:nova.

    Run on any node in the cluster:

    # chown -R nova:nova /openstack/instances/

    Check on each node that the ownership is correct:

    # ls -lh /openstack/

    total 4.0K
    drwxr-xr-x 7 nova nova 3.8K May 26 14:12 instances



    Reposted from zsaisai's 51CTO blog; original link: http://blog.51cto.com/3402313/1656136
