  • RAID Disk Arrays

    RAID types:

    | Level  | Description                            | Min. disks | Space utilization | Pros / cons                                                     |
    |--------|----------------------------------------|------------|-------------------|-----------------------------------------------------------------|
    | RAID0  | Striped volume                         | 2+         | 100%              | Fast reads and writes; no fault tolerance                       |
    | RAID1  | Mirrored volume                        | 2          | 50%               | Average read/write speed; fault tolerant                        |
    | RAID5  | Striped volume with distributed parity | 3+         | (n-1)/n           | Fast reads and writes; fault tolerant; survives one failed disk |
    | RAID10 | RAID1 safety plus RAID0 speed          | 4          | 50%               | Fast reads and writes; fault tolerant                           |
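
    As a quick check on the space-utilization column (illustrative numbers only, not taken from the lab machine): with 1 TB disks,
    RAID0 across four disks gives 4 x 1 TB = 4 TB usable (100%), RAID1 across two disks gives 1 TB (50%),
    RAID5 across four disks gives (4-1)/4 x 4 TB = 3 TB, and RAID10 across four disks gives 2 TB (50%).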

    The mdadm Command Explained

    Common parameters:

    | Parameter | Function                                                             |
    |-----------|----------------------------------------------------------------------|
    | -a        | Auto-create the device file (create mode) / add a disk (manage mode) |
    | -n        | Specify the number of devices                                        |
    | -l        | Specify the RAID level                                               |
    | -C        | Create an array                                                      |
    | -v        | Show the process (verbose)                                           |
    | -f        | Simulate a device failure                                            |
    | -r        | Remove a device                                                      |
    | -Q        | Show summary information                                             |
    | -D        | Show detailed information                                            |
    | -S        | Stop a RAID array                                                    |
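
    These flags combine into a handful of recurring command shapes. The sketch below is a generic summary rather than output from the lab machine; <count>, <level> and the trailing disk names are placeholders:

    # Create an array (verbose), auto-creating the device node
    mdadm -Cv /dev/md0 -a yes -n <count> -l <level> /dev/sdb /dev/sdc ...

    # Query it
    mdadm -Q /dev/md0     # one-line summary
    mdadm -D /dev/md0     # full details

    # Manage members
    mdadm /dev/md0 -f /dev/sdb    # mark a member as faulty
    mdadm /dev/md0 -r /dev/sdb    # remove it from the array
    mdadm /dev/md0 -a /dev/sdb    # add a member

    # Stop the array
    mdadm -S /dev/md0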

    Building a RAID 10 Array

    Step 1: Check the disks
    [root@ken ~]# ls /dev/sd*
    /dev/sda  /dev/sda1  /dev/sda2  /dev/sdb  /dev/sdc  /dev/sdd  /dev/sde
     
    Step 2: Install mdadm
    [root@ken ~]# yum install mdadm -y
     
    Step 3: Create the RAID 10 array
    [root@ken ~]# mdadm -Cv /dev/md0 -a yes -n 4 -l 10 /dev/sd{b,c,d,e}
    (-C: create the array; v: show the creation process; /dev/md0: array name; -a: auto-create the device file if needed; -n: number of disks; -l: RAID level; /dev/sd{b,c,d,e}: disks used to build the array)
    mdadm: layout defaults to n2
    mdadm: layout defaults to n2
    mdadm: chunk size defaults to 512K
    mdadm: size set to 20954112K
    mdadm: Fail create md0 when using /sys/module/md_mod/parameters/new_array
    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md0 started.
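
    One step this walkthrough skips: without a config entry the array may be reassembled after a reboot under a different name (for example /dev/md127). A commonly used addition, assuming the default /etc/mdadm.conf location on CentOS 7, is:
    mdadm --detail --scan >> /etc/mdadm.conf    # record the array so it reassembles as /dev/md0 at boot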
     
    Step 4: Format the array as ext4
    [root@ken ~]# mkfs.ext4 /dev/md0 
    mke2fs 1.42.9 (28-Dec-2013)
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=128 blocks, Stripe width=256 blocks
    2621440 inodes, 10477056 blocks
    523852 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=2157969408
    320 block groups
    32768 blocks per group, 32768 fragments per group
    8192 inodes per group
    Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000, 7962624
    
    Allocating group tables: done                            
    Writing inode tables: done                            
    Creating journal (32768 blocks): done
    Writing superblocks and filesystem accounting information: done   
     
    Step 5: Mount the array
    [root@ken ~]# mkdir /raid10
    [root@ken ~]# mount /dev/md0 /raid10
    [root@ken ~]# df -h
    Filesystem               Size  Used Avail Use% Mounted on
    /dev/mapper/centos-root   17G  1.2G   16G   7% /
    devtmpfs                 224M     0  224M   0% /dev
    tmpfs                    236M     0  236M   0% /dev/shm
    tmpfs                    236M  5.6M  230M   3% /run
    tmpfs                    236M     0  236M   0% /sys/fs/cgroup
    /dev/sda1               1014M  130M  885M  13% /boot
    tmpfs                     48M     0   48M   0% /run/user/0
    /dev/md0                  40G   49M   38G   1% /raid10
     
    Step 6: View detailed information about /dev/md0
    [root@ken ~]# mdadm -D /dev/md0
    /dev/md0:
               Version : 1.2
         Creation Time : Thu Feb 28 19:08:25 2019
            Raid Level : raid10
            Array Size : 41908224 (39.97 GiB 42.91 GB)
         Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
          Raid Devices : 4
         Total Devices : 4
           Persistence : Superblock is persistent
    
           Update Time : Thu Feb 28 19:11:41 2019
                 State : clean, resyncing 
        Active Devices : 4
       Working Devices : 4
        Failed Devices : 0
         Spare Devices : 0
    
                Layout : near=2
            Chunk Size : 512K
    
    Consistency Policy : resync
    
         Resync Status : 96% complete
    
                  Name : ken:0  (local to host ken)
                  UUID : c5df1175:a6b1ad23:f3d7e80b:6b56fe98
                Events : 26
    
        Number   Major   Minor   RaidDevice State
           0       8       16        0      active sync set-A   /dev/sdb
           1       8       32        1      active sync set-B   /dev/sdc
           2       8       48        2      active sync set-A   /dev/sdd
           3       8       64        3      active sync set-B   /dev/sde
     
    Step 7: Write the mount into the configuration file (/etc/fstab)
    [root@ken ~]# echo "/dev/md0 /raid10 ext4 defaults 0 0" >> /etc/fstab
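    Before relying on that line, a quick sanity check is worthwhile; these two commands are a generic verification step, not part of the original session:
    blkid /dev/md0    # prints the filesystem UUID, which can be used in fstab instead of the /dev/md0 name
    mount -a          # re-reads /etc/fstab, so a typo shows up now instead of at the next boot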

    Array Failure and Recovery

    Step 1: Simulate a device failure
    [root@ken ~]# mdadm /dev/md0 -f /dev/sdb
    mdadm: set /dev/sdb faulty in /dev/md0
    [root@ken ~]# mdadm -D /dev/md0
    /dev/md0:
               Version : 1.2
         Creation Time : Thu Feb 28 19:08:25 2019
            Raid Level : raid10
            Array Size : 41908224 (39.97 GiB 42.91 GB)
         Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
          Raid Devices : 4
         Total Devices : 4
           Persistence : Superblock is persistent
    
           Update Time : Thu Feb 28 19:15:59 2019
                 State : clean, degraded 
        Active Devices : 3
       Working Devices : 3
        Failed Devices : 1
         Spare Devices : 0
    
                Layout : near=2
            Chunk Size : 512K
    
    Consistency Policy : resync
    
                  Name : ken:0  (local to host ken)
                  UUID : c5df1175:a6b1ad23:f3d7e80b:6b56fe98
                Events : 30
    
        Number   Major   Minor   RaidDevice State
           -       0        0        0      removed
           1       8       32        1      active sync set-B   /dev/sdc
           2       8       48        2      active sync set-A   /dev/sdd
           3       8       64        3      active sync set-B   /dev/sde
    
           0       8       16        -      faulty   /dev/sdb
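
    On real hardware the faulty member would normally be detached before a replacement goes in, using the -r flag from the parameter table above; a minimal sketch with the same device names as this lab:
    mdadm /dev/md0 -r /dev/sdb    # remove the faulty member from the array
    Here the failure is only simulated, so the walkthrough instead reboots and re-adds the same virtual disk in the next step.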
     
    Step 2: Add a new disk
    In a RAID 10 array, one failed disk inside a RAID 1 mirror pair does not stop the RAID 10 array from working. Once a new disk has been bought, it can simply be swapped in with mdadm, and in the meantime files can still be created and deleted in /raid10 as usual. Because the disks here are simulated inside a virtual machine, the system is rebooted first and the new disk is then added to the array.
    [root@ken ~]# reboot
    [root@ken ~]# umount /raid10
    [root@ken ~]# mdadm /dev/md0 -a /dev/sdb
    mdadm: added /dev/sdb
    [root@ken ~]# mdadm -D  /dev/md0
    /dev/md0:
               Version : 1.2
         Creation Time : Thu Feb 28 19:08:25 2019
            Raid Level : raid10
            Array Size : 41908224 (39.97 GiB 42.91 GB)
         Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
          Raid Devices : 4
         Total Devices : 4
           Persistence : Superblock is persistent
    
           Update Time : Thu Feb 28 19:19:14 2019
                 State : clean, degraded, recovering 
        Active Devices : 3
       Working Devices : 4
        Failed Devices : 0
         Spare Devices : 1
    
                Layout : near=2
            Chunk Size : 512K
    
    Consistency Policy : resync
    
        Rebuild Status : 7% complete                                      # rebuild progress shown here
    
                  Name : ken:0  (local to host ken)
                  UUID : c5df1175:a6b1ad23:f3d7e80b:6b56fe98
                Events : 35
    
        Number   Major   Minor   RaidDevice State
           4       8       16        0      spare rebuilding   /dev/sdb    # rebuilding
           1       8       32        1      active sync set-B   /dev/sdc
           2       8       48        2      active sync set-A   /dev/sdd
           3       8       64        3      active sync set-B   /dev/sde
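
    While the rebuild is running, /proc/mdstat gives a compact progress view; this is a standard md kernel interface rather than something shown in the original session:
    cat /proc/mdstat              # lists md0's members and a rebuild progress bar
    watch -n 5 cat /proc/mdstat   # refreshes every 5 seconds until the recovery finishes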
     
    Checking again shows that the rebuild has finished
    [root@ken ~]# mdadm -D  /dev/md0
    /dev/md0:
               Version : 1.2
         Creation Time : Thu Feb 28 19:08:25 2019
            Raid Level : raid10
            Array Size : 41908224 (39.97 GiB 42.91 GB)
         Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
          Raid Devices : 4
         Total Devices : 4
           Persistence : Superblock is persistent
    
           Update Time : Thu Feb 28 19:20:52 2019
                 State : clean 
        Active Devices : 4
       Working Devices : 4
        Failed Devices : 0
         Spare Devices : 0
    
                Layout : near=2
            Chunk Size : 512K
    
    Consistency Policy : resync
    
                  Name : ken:0  (local to host ken)
                  UUID : c5df1175:a6b1ad23:f3d7e80b:6b56fe98
                Events : 51
    
        Number   Major   Minor   RaidDevice State
           4       8       16        0      active sync set-A   /dev/sdb
           1       8       32        1      active sync set-B   /dev/sdc
           2       8       48        2      active sync set-A   /dev/sdd
           3       8       64        3      active sync set-B   /dev/sde
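
    The next section reuses the same four disks for a RAID 5 array under the same /dev/md0 name, so the RAID 10 array has to be dismantled first. The walkthrough does not show that teardown; a minimal sketch, assuming the fstab line from step 7 is also removed by hand, would be:
    umount /raid10                             # unmount the filesystem
    mdadm -S /dev/md0                          # stop the array
    mdadm --zero-superblock /dev/sd{b,c,d,e}   # wipe the md metadata so the disks can be reused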

    Building a RAID 5 Array with a Spare Disk

    Step 1: Check the disks
    [root@ken ~]# ls /dev/sd*
    /dev/sda  /dev/sda1  /dev/sda2  /dev/sdb  /dev/sdc  /dev/sdd  /dev/sde
     
    Step 2: Create the RAID 5 array
    [root@ken ~]# mdadm  -Cv /dev/md0 -n 3 -l 5 -x 1 /dev/sd{b,c,d,e}
    (-x: number of spare disks)
    mdadm: layout defaults to left-symmetric
    mdadm: layout defaults to left-symmetric
    mdadm: chunk size defaults to 512K
    mdadm: size set to 20954112K
    mdadm: Fail create md0 when using /sys/module/md_mod/parameters/new_array
    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md0 started.
     
    Step 3: Format as ext4
    [root@ken ~]# mkfs.ext4 /dev/md0 
    mke2fs 1.42.9 (28-Dec-2013)
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=128 blocks, Stripe width=256 blocks
    2621440 inodes, 10477056 blocks
    523852 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=2157969408
    320 block groups
    32768 blocks per group, 32768 fragments per group
    8192 inodes per group
    Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000, 7962624
    
    Allocating group tables: done                            
    Writing inode tables: done                            
    Creating journal (32768 blocks): done
    Writing superblocks and filesystem accounting information: done  
     
    Step 4: Mount the array (create the /raid5 mount point with mkdir first if it does not already exist)
    [root@ken ~]# mount /dev/md0 /raid5
    [root@ken ~]# df -h
    Filesystem               Size  Used Avail Use% Mounted on
    /dev/mapper/centos-root   17G  1.2G   16G   7% /
    devtmpfs                 476M     0  476M   0% /dev
    tmpfs                    488M     0  488M   0% /dev/shm
    tmpfs                    488M  7.7M  480M   2% /run
    tmpfs                    488M     0  488M   0% /sys/fs/cgroup
    /dev/sda1               1014M  130M  885M  13% /boot
    tmpfs                     98M     0   98M   0% /run/user/0
    /dev/md0                  40G   49M   38G   1% /raid5
     
    Step 5: View the array information
    Note that /dev/sde is listed as a spare disk
    [root@ken ~]# mdadm -D /dev/md0
    /dev/md0:
               Version : 1.2
         Creation Time : Thu Feb 28 19:35:10 2019
            Raid Level : raid5
            Array Size : 41908224 (39.97 GiB 42.91 GB)
         Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
          Raid Devices : 3
         Total Devices : 4
           Persistence : Superblock is persistent
    
           Update Time : Thu Feb 28 19:37:11 2019
                 State : active 
        Active Devices : 3
       Working Devices : 4
        Failed Devices : 0
         Spare Devices : 1  (the spare disk)
    
                Layout : left-symmetric
            Chunk Size : 512K
    
    Consistency Policy : resync
    
                  Name : ken:0  (local to host ken)
                  UUID : b693fe72:4452bd3f:4d995779:ee33bc77
                Events : 76
    
        Number   Major   Minor   RaidDevice State
           0       8       16        0      active sync   /dev/sdb
           1       8       32        1      active sync   /dev/sdc
           4       8       48        2      active sync   /dev/sdd
    
           3       8       64        -      spare   /dev/sde
     
    Step 6: Simulate a failure of /dev/sdb
    Note that the spare /dev/sde immediately starts rebuilding
    [root@ken ~]# mdadm /dev/md0 -f /dev/sdb
    mdadm: set /dev/sdb faulty in /dev/md0
    [root@ken ~]# mdadm -D /dev/md0
    /dev/md0:
               Version : 1.2
         Creation Time : Thu Feb 28 19:35:10 2019
            Raid Level : raid5
            Array Size : 41908224 (39.97 GiB 42.91 GB)
         Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
          Raid Devices : 3
         Total Devices : 4
           Persistence : Superblock is persistent
    
           Update Time : Thu Feb 28 19:38:41 2019
                 State : active, degraded, recovering 
        Active Devices : 2
       Working Devices : 3
        Failed Devices : 1
         Spare Devices : 1
    
                Layout : left-symmetric
            Chunk Size : 512K
    
    Consistency Policy : resync
    
        Rebuild Status : 2% complete
    
                  Name : ken:0  (local to host ken)
                  UUID : b693fe72:4452bd3f:4d995779:ee33bc77
                Events : 91
    
        Number   Major   Minor   RaidDevice State
           3       8       64        0      spare rebuilding   /dev/sde
           1       8       32        1      active sync   /dev/sdc
           4       8       48        2      active sync   /dev/sdd
    
           0       8       16        -      faulty   /dev/sdb
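
    Once the spare has finished rebuilding, the array is back to full redundancy, but the failed disk is still attached as a faulty member and there is no hot spare left. A hedged follow-up, where /dev/sdf stands for a hypothetical replacement disk that is not present in this lab:
    mdadm /dev/md0 -r /dev/sdb    # detach the faulty member
    mdadm /dev/md0 -a /dev/sdf    # add the replacement, which becomes the new hot spare (hypothetical device)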