  • Configuring software RAID on CentOS

    Create a new CentOS 7.3 server in VirtualBox.

    Add four 8 GB disks to the VM. (I added NVMe SSDs, but ordinary disks work just as well; the bigger the disks, the longer the array takes to build, so small disks are best for a lab.)

    Creating the RAID

    [root@bogon ~]# lsblk
    NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    nvme0n1     259:0    0  8G  0 disk
    nvme0n2     259:1    0  8G  0 disk
    nvme0n3     259:2    0  8G  0 disk
    nvme0n4     259:3    0  8G  0 disk
    [root@bogon ~]# yum -y install mdadm
    [root@bogon ~]# mdadm -C /dev/md5 -a yes -l 5 -n 3 -x 1 /dev/nvme0n[1,2,3,4]
    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md5 started.
    Notes:
    -C: create mode
    -a {yes|no}: whether to create the device node automatically; yes means the RAID device is created under /dev
    -l #: RAID level to create (-l 0 would create a RAID0)
    -n #: number of active member devices (-n 3 builds the array from 3 disks; of the 4 disks here, 3 form the RAID5, so usable capacity is 2/3 of those 3 disks, and the remaining disk serves as a hot spare)
    -x #: number of hot-spare disks in the array (-x 1 means one hot spare)
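    The capacity arithmetic implied by -n 3 (two disks of usable space, with one disk's worth going to parity) can be sanity-checked with a quick shell calculation:

```shell
# RAID5 usable capacity = (active member disks - 1) * per-disk size
disks=3      # the -n 3 active members
size_gb=8    # each lab disk is 8 GB
usable=$(( (disks - 1) * size_gb ))
echo "usable: ${usable}G"   # matches the 16G md5 device reported by lsblk
```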
    [root@bogon ~]# lsblk
    NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
    nvme0n1     259:0    0    8G  0 disk
    └─md5         9:5    0   16G  0 raid5
    nvme0n2     259:1    0    8G  0 disk
    └─md5         9:5    0   16G  0 raid5
    nvme0n3     259:2    0    8G  0 disk
    └─md5         9:5    0   16G  0 raid5
    nvme0n4     259:3    0    8G  0 disk
    └─md5         9:5    0   16G  0 raid5
    [root@bogon ~]# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md5 : active raid5 nvme0n3[4] nvme0n4[3](S) nvme0n2[1] nvme0n1[0]
          16758784 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
    unused devices: <none>
    [root@bogon ~]# mdadm -D /dev/md5
    /dev/md5:
               Version : 1.2
         Creation Time :
            Raid Level : raid5
            Array Size : 16758784 (15.98 GiB 17.16 GB)
         Used Dev Size : 8379392 (7.99 GiB 8.58 GB)
          Raid Devices : 3
         Total Devices : 4
           Persistence : Superblock is persistent
           Update Time :
                 State : clean
        Active Devices : 3
       Working Devices : 4
        Failed Devices : 0
         Spare Devices : 1
                Layout : left-symmetric
            Chunk Size : 512K
    Consistency Policy : unknown
                  Name : bogon:5  (local to host bogon)
                  UUID : 3ff040bd:c1ad0eb3:d98015e1:e53b682c
                Events : 18
        Number   Major   Minor   RaidDevice State
           0     259        0        0      active sync   /dev/nvme0n1
           1     259        1        1      active sync   /dev/nvme0n2
           4     259        2        2      active sync   /dev/nvme0n3
       3     259        3        -      spare   /dev/nvme0n4

    The lab disks are small, so the build finishes almost immediately; the spare line shows that /dev/nvme0n4 is the hot spare.

    Save the array to the mdadm config file /etc/mdadm.conf (the file does not exist by default):

    [root@bogon ~]# echo DEVICE /dev/nvme0n[1,2,3,4] >> /etc/mdadm.conf
    [root@bogon ~]# mdadm -Ds >> /etc/mdadm.conf

    Then create a filesystem on the RAID device and mount it:

    [root@bogon ~]# mkfs.xfs /dev/md5
    meta-data=/dev/md5               isize=512    agcount=16, agsize=261760 blks
             =                       sectsz=512   attr=2, projid32bit=1
             =                       crc=1        finobt=0, sparse=0
    data     =                       bsize=4096   blocks=4188160, imaxpct=25
             =                       sunit=128    swidth=256 blks
    naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
    log      =internal log           bsize=4096   blocks=2560, version=2
             =                       sectsz=512   sunit=8 blks, lazy-count=1
    realtime =none                   extsz=4096   blocks=0, rtextents=0
    [root@bogon ~]# mkdir /mnt/data
    [root@bogon ~]# mount /dev/md5 /mnt/data/
    [root@bogon ~]# echo "/dev/md5 /mnt/data xfs defaults 0 0" >> /etc/fstab
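    The sunit/swidth values that mkfs.xfs chose can be checked against the RAID geometry: with a 512K chunk and 3 members (2 data + 1 parity), the stripe unit should be one chunk and the stripe width two chunks, both expressed in 4096-byte filesystem blocks:

```shell
# mkfs.xfs reports sunit/swidth in filesystem blocks (bsize=4096)
chunk_kb=512     # RAID5 chunk size
bsize=4096       # xfs block size
data_disks=2     # 3-disk RAID5 = 2 data disks + 1 parity
sunit=$(( chunk_kb * 1024 / bsize ))
swidth=$(( sunit * data_disks ))
echo "sunit=${sunit} swidth=${swidth} blks"   # agrees with the mkfs.xfs output
```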

    Simulating a disk failure

    [root@bogon ~]# mdadm /dev/md5 -f /dev/nvme0n3
    mdadm: set /dev/nvme0n3 faulty in /dev/md5
    [root@bogon ~]# cat /proc/mdstat
    md5 : active raid5 nvme0n3[4](F) nvme0n4[3] nvme0n2[1] nvme0n1[0]
          16758784 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
          [===>.................]  recovery = 16.8% (1413632/8379392) finish=0.5min speed=201947K/sec
    unused devices: <none>

    When a disk fails, the corresponding device is marked with (F) in the mdstat output, as in nvme0n3[4](F) above.
    In [3/2], the first number is how many devices the array should have and the second is how many are currently active; with one device failed the second number drops to 2, and the array runs in degraded mode: still usable, but with no data redundancy.
    [UU_] shows that /dev/nvme0n1 and /dev/nvme0n2 are the healthy members; if /dev/nvme0n2 had failed instead, it would read [U_U].
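    The [n/m] counter can be pulled out of /proc/mdstat with a one-line grep, which is handy in monitoring scripts (a sketch, run here against the degraded sample line from above rather than the live file):

```shell
# Extract the [raid devices/active devices] counter from an mdstat status line
line='16758784 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]'
status=$(echo "$line" | grep -o '\[[0-9]*/[0-9]*\]')
echo "devices/active: $status"
```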

    After the rebuild finishes, check the array status again; the RAID is healthy once more (though the failed disk has not yet been replaced):

    [root@bogon ~]# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md5 : active raid5 nvme0n3[4](F) nvme0n4[3] nvme0n2[1] nvme0n1[0]
          16758784 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
    unused devices: <none>
    [root@bogon ~]# mdadm -D /dev/md5
    /dev/md5:
               Version : 1.2
         Creation Time : Thu Nov 29 01:03:42 2018
            Raid Level : raid5
            Array Size : 16758784 (15.98 GiB 17.16 GB)
         Used Dev Size : 8379392 (7.99 GiB 8.58 GB)
          Raid Devices : 3
         Total Devices : 4
           Persistence : Superblock is persistent
           Update Time : Thu Nov 29 01:09:26 2018
                 State : clean
        Active Devices : 3
       Working Devices : 3
        Failed Devices : 1
         Spare Devices : 0
                Layout : left-symmetric
            Chunk Size : 512K
    Consistency Policy : unknown
                  Name : bogon:5  (local to host bogon)
                  UUID : 3ff040bd:c1ad0eb3:d98015e1:e53b682c
                Events : 37
        Number   Major   Minor   RaidDevice State
           0     259        0        0      active sync   /dev/nvme0n1
           1     259        1        1      active sync   /dev/nvme0n2
           3     259        3        2      active sync   /dev/nvme0n4
           4     259        2        -      faulty   /dev/nvme0n3

     Next, remove the failed disk

    [root@bogon ~]# mdadm /dev/md5 -r /dev/nvme0n3
    mdadm: hot removed /dev/nvme0n3 from /dev/md5
    [root@bogon ~]# mdadm -D /dev/md5
    /dev/md5:
               Version : 1.2
         Creation Time : Thu Nov 29 01:03:42 2018
            Raid Level : raid5
            Array Size : 16758784 (15.98 GiB 17.16 GB)
         Used Dev Size : 8379392 (7.99 GiB 8.58 GB)
          Raid Devices : 3
         Total Devices : 3
           Persistence : Superblock is persistent
           Update Time : Thu Nov 29 01:12:10 2018
                 State : clean
        Active Devices : 3
       Working Devices : 3
        Failed Devices : 0
         Spare Devices : 0
                Layout : left-symmetric
            Chunk Size : 512K
    Consistency Policy : unknown
                  Name : bogon:5  (local to host bogon)
                  UUID : 3ff040bd:c1ad0eb3:d98015e1:e53b682c
                Events : 38
        Number   Major   Minor   RaidDevice State
           0     259        0        0      active sync   /dev/nvme0n1
           1     259        1        1      active sync   /dev/nvme0n2
           3     259        3        2      active sync   /dev/nvme0n4

     Since the simulated failure consumed the hot spare, the array currently has none, so a new spare should be added. For convenience, I simply add the disk that was marked failed back into the RAID5:

    [root@bogon ~]# mdadm /dev/md5 -a /dev/nvme0n3
    mdadm: added /dev/nvme0n3
    [root@bogon ~]# mdadm -D /dev/md5
    /dev/md5:
               Version : 1.2
         Creation Time : Thu Nov 29 01:03:42 2018
            Raid Level : raid5
            Array Size : 16758784 (15.98 GiB 17.16 GB)
         Used Dev Size : 8379392 (7.99 GiB 8.58 GB)
          Raid Devices : 3
         Total Devices : 4
           Persistence : Superblock is persistent
           Update Time : Thu Nov 29 01:14:30 2018
                 State : clean
        Active Devices : 3
       Working Devices : 4
        Failed Devices : 0
         Spare Devices : 1
                Layout : left-symmetric
            Chunk Size : 512K
    Consistency Policy : unknown
                  Name : bogon:5  (local to host bogon)
                  UUID : 3ff040bd:c1ad0eb3:d98015e1:e53b682c
                Events : 39
        Number   Major   Minor   RaidDevice State
           0     259        0        0      active sync   /dev/nvme0n1
           1     259        1        1      active sync   /dev/nvme0n2
           3     259        3        2      active sync   /dev/nvme0n4
           4     259        2        -      spare   /dev/nvme0n3

    /dev/nvme0n3 is now the hot spare again, and the failure test is complete.

    Adding a disk to the software RAID

     After a software RAID has been in use for a while, it may run out of space; at that point a new disk can be added to the RAID to grow its capacity.

    Shut down the system and add a disk. Since this is a VM lab, I just attached a new disk after powering off the VM. I had cheekily used the NVMe controller, which supports at most 4 disks here, so I had to fall back to adding a SAS disk instead.

    [root@localhost ~]# lsblk
    NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
    sdb           8:16   0    8G  0 disk
    nvme0n1     259:0    0    8G  0 disk
    └─md5         9:5    0   16G  0 raid5 /mnt/data
    nvme0n2     259:1    0    8G  0 disk
    └─md5         9:5    0   16G  0 raid5 /mnt/data
    nvme0n3     259:2    0    8G  0 disk
    └─md5         9:5    0   16G  0 raid5 /mnt/data
    nvme0n4     259:3    0    8G  0 disk
    └─md5         9:5    0   16G  0 raid5 /mnt/data
    [root@localhost ~]# mdadm /dev/md5 -a /dev/sdb
    mdadm: added /dev/sdb
    [root@localhost ~]# mdadm -D /dev/md5
    /dev/md5:
               Version : 1.2
         Creation Time : Thu Nov 29 01:03:42 2018
            Raid Level : raid5
            Array Size : 16758784 (15.98 GiB 17.16 GB)
         Used Dev Size : 8379392 (7.99 GiB 8.58 GB)
          Raid Devices : 3
         Total Devices : 5
           Persistence : Superblock is persistent
           Update Time : Thu Nov 29 01:56:18 2018
                 State : clean
        Active Devices : 3
       Working Devices : 5
        Failed Devices : 0
         Spare Devices : 2
                Layout : left-symmetric
            Chunk Size : 512K
    Consistency Policy : unknown
                  Name : bogon:5
                  UUID : 3ff040bd:c1ad0eb3:d98015e1:e53b682c
                Events : 43
        Number   Major   Minor   RaidDevice State
           0     259        0        0      active sync   /dev/nvme0n1
           1     259        1        1      active sync   /dev/nvme0n2
           3     259        3        2      active sync   /dev/nvme0n4
           4     259        2        -      spare   /dev/nvme0n3
           5       8       16        -      spare   /dev/sdb

     A disk added to the RAID this way becomes a hot spare; it still needs to be promoted into the set of active devices:

    [root@localhost ~]# mdadm -G /dev/md5 -n4
    [root@localhost ~]# mdadm -D /dev/md5
    /dev/md5:
               Version : 1.2
         Creation Time : Thu Nov 29 01:03:42 2018
            Raid Level : raid5
            Array Size : 16758784 (15.98 GiB 17.16 GB)
         Used Dev Size : 8379392 (7.99 GiB 8.58 GB)
          Raid Devices : 4
         Total Devices : 5
           Persistence : Superblock is persistent
           Update Time : Thu Nov 29 01:57:41 2018
                 State : clean, reshaping
        Active Devices : 4
       Working Devices : 5
        Failed Devices : 0
         Spare Devices : 1
                Layout : left-symmetric
            Chunk Size : 512K
    Consistency Policy : unknown
        Reshape Status : 8% complete
         Delta Devices : 1, (3->4)
                  Name : bogon:5
                  UUID : 3ff040bd:c1ad0eb3:d98015e1:e53b682c
                Events : 62
        Number   Major   Minor   RaidDevice State
           0     259        0        0      active sync   /dev/nvme0n1
           1     259        1        1      active sync   /dev/nvme0n2
           3     259        3        2      active sync   /dev/nvme0n4
           5       8       16        3      active sync   /dev/sdb
           4     259        2        -      spare   /dev/nvme0n3

    The output shows that the newly added /dev/sdb is now an active device, but Array Size : 16758784 (15.98 GiB 17.16 GB) has not grown yet, because the reshape has not finished.
    Once cat /proc/mdstat shows the reshape is complete, the RAID capacity becomes (4-1) x 8G. The output below was taken after the reshape finished, and Array Size : 25138176 (23.97 GiB 25.74 GB) has indeed grown.
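    mdadm reports Array Size in 1 KiB blocks, so the GiB figure can be double-checked with a little arithmetic:

```shell
# Convert the mdadm "Array Size" block count (1 KiB units) to GiB
blocks=25138176
gib=$(awk -v b="$blocks" 'BEGIN { printf "%.2f", b / 1048576 }')
echo "${gib} GiB"   # the 23.97 GiB shown by mdadm -D
```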

    [root@localhost ~]# mdadm -D /dev/md5
    /dev/md5:
               Version : 1.2
         Creation Time : Thu Nov 29 01:03:42 2018
            Raid Level : raid5
            Array Size : 25138176 (23.97 GiB 25.74 GB)
         Used Dev Size : 8379392 (7.99 GiB 8.58 GB)
          Raid Devices : 4
         Total Devices : 5
           Persistence : Superblock is persistent
           Update Time : Thu Nov 29 02:01:12 2018
                 State : clean
        Active Devices : 4
       Working Devices : 5
        Failed Devices : 0
         Spare Devices : 1
                Layout : left-symmetric
            Chunk Size : 512K
    Consistency Policy : unknown
                  Name : bogon:5
                  UUID : 3ff040bd:c1ad0eb3:d98015e1:e53b682c
                Events : 75
        Number   Major   Minor   RaidDevice State
           0     259        0        0      active sync   /dev/nvme0n1
           1     259        1        1      active sync   /dev/nvme0n2
           3     259        3        2      active sync   /dev/nvme0n4
           5       8       16        3      active sync   /dev/sdb
           4     259        2        -      spare   /dev/nvme0n3
    [root@localhost ~]# df -Th
    Filesystem          Type      Size  Used Avail Use% Mounted on
    /dev/md5            xfs        16G   34M   16G   1% /mnt/data

    But df -Th above shows that the filesystem size has not changed, so the filesystem itself still needs to be grown. Growing takes time proportional to the disk size, so be patient while it runs.

    A note on the failure below: it is not caused by mixing NVMe and SAS disks. resize2fs only understands ext2/3/4 superblocks, and this array is formatted with XFS, which is why it reports "Bad magic number".

    [root@localhost ~]# resize2fs /dev/md5
    resize2fs 1.42.9 (28-Dec-2013)
    resize2fs: Bad magic number in super-block while trying to open /dev/md5
    Couldn't find valid filesystem superblock.

    On an ext4-formatted array, resize2fs /dev/md5 would grow the filesystem and df -Th would then show the new size. For the XFS filesystem used here, the equivalent is xfs_growfs /mnt/data, run against the mount point while the filesystem is mounted.

    Then update the RAID config file (vi /etc/mdadm.conf):

    DEVICE /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme0n4 /dev/xxxx
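    For reference, a complete /etc/mdadm.conf after the grow might look like the fragment below; the ARRAY line is the kind produced by mdadm -Ds and uses the UUID shown earlier, while the exact DEVICE list should match your own disks:

```
DEVICE /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme0n4 /dev/sdb
ARRAY /dev/md5 metadata=1.2 name=bogon:5 UUID=3ff040bd:c1ad0eb3:d98015e1:e53b682c
```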

    Finally, reboot the system and confirm the RAID comes back up correctly.

    Deleting a RAID

    (This example uses a different array: four 500G disks assembled as /dev/md127.)

    [root@bogon ~]# lsblk
    NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
    nvme0n1     259:0    0   500G  0 disk
    └─md127       9:127  0 999.8G  0 raid5
    nvme0n2     259:1    0   500G  0 disk
    └─md127       9:127  0 999.8G  0 raid5
    nvme0n3     259:2    0   500G  0 disk
    └─md127       9:127  0 999.8G  0 raid5
    nvme0n4     259:3    0   500G  0 disk
    └─md127       9:127  0 999.8G  0 raid5
    [root@bogon ~]# mdadm /dev/md127 --fail /dev/nvme0n1 --remove /dev/nvme0n1
    mdadm: set /dev/nvme0n1 faulty in /dev/md127
    mdadm: hot removed /dev/nvme0n1 from /dev/md127
    [root@bogon ~]# mdadm /dev/md127 --fail /dev/nvme0n2 --remove /dev/nvme0n2
    mdadm: set /dev/nvme0n2 faulty in /dev/md127
    mdadm: hot removed /dev/nvme0n2 from /dev/md127
    [root@bogon ~]# mdadm /dev/md127 --fail /dev/nvme0n3 --remove /dev/nvme0n3
    mdadm: set /dev/nvme0n3 faulty in /dev/md127
    mdadm: hot removed /dev/nvme0n3 from /dev/md127
    [root@bogon ~]# mdadm /dev/md127 --fail /dev/nvme0n4 --remove /dev/nvme0n4
    mdadm: set /dev/nvme0n4 faulty in /dev/md127
    mdadm: hot removed /dev/nvme0n4 from /dev/md127
    [root@bogon ~]# lsblk
    NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
    md127       9:127  0 999.8G  0 raid5
    nvme0n1     259:0    0   500G  0 disk
    nvme0n2     259:1    0   500G  0 disk
    nvme0n3     259:2    0   500G  0 disk
    nvme0n4     259:3    0   500G  0 disk
    [root@bogon ~]# mdadm -S /dev/md127
    mdadm: stopped /dev/md127
    [root@bogon ~]# mdadm --misc --zero-superblock /dev/nvme0n1
    [root@bogon ~]# mdadm --misc --zero-superblock /dev/nvme0n2
    [root@bogon ~]# mdadm --misc --zero-superblock /dev/nvme0n3
    [root@bogon ~]# mdadm --misc --zero-superblock /dev/nvme0n4

     Other useful commands

    mdadm --stop /dev/md0
    mdadm --remove /dev/md0
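    After tearing an array down, also remove its entries from /etc/fstab and /etc/mdadm.conf, or the next boot may stall waiting for the missing device. A sketch of the fstab edit, demonstrated on a sample file before touching the real one (the sample content is made up for illustration):

```shell
# Try the deletion pattern on sample content first, then apply it to /etc/fstab
printf '/dev/md5 /mnt/data xfs defaults 0 0\nUUID=abcd / xfs defaults 0 0\n' > /tmp/fstab.sample
sed -i '\|^/dev/md5[[:space:]]|d' /tmp/fstab.sample   # delete the md5 mount line
cat /tmp/fstab.sample                                  # only the root line remains
```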

  • Original article: https://www.cnblogs.com/tcicy/p/10028891.html