  • Linux Advanced Filesystem Management

    This series is simply my own study notes. The text is distilled from key content in books such as 《Linux鸟哥私房菜》 and 《Linux运维之道》, simplified so the essentials can be reviewed quickly at work. It does not represent a personal position; if you repost, please credit the source and cite the references.

    If your Linux server has many users frequently reading and writing data, disk quotas (Quota) are a very useful tool for keeping hard-disk usage fair among all users. And if your users often complain that disk space is never enough, more advanced storage layers are worth learning: this chapter introduces disk arrays (RAID) and the Logical Volume Manager (LVM), tools that help you manage and maintain the disk capacity available to users.

    Quota: Disk Quota Configuration

    Taken literally, Quota simply means an allowance or limit. Applied to pocket money it would mean "so much allowance per month"; applied to disk usage on a computer, in Linux it means a cap on how much capacity may be consumed. We can use quota to share disk capacity more fairly. Below we explain what quota is and then walk through a complete example of using it.

    Linux is a multi-user operating system, and by default it places no limit on how much disk space each user may consume. If a user carelessly or maliciously fills the disk, the system disk can become unwritable or even crash. To guarantee the system disk keeps enough free space, we need to limit the disk usage of users and groups.

    Quota limit types:

    ⦁ Limit how much disk space a user or group may consume
    ⦁ Limit how many files a user or group may create on the disk

    Quota limit levels:

    ⦁ Soft limit: the lower limit. It may be exceeded, but the user is warned, and the excess is only tolerated for the grace period; the soft limit cannot exceed the hard limit
    ⦁ Hard limit: the absolute limit. It can never be exceeded; once it is reached, no more space can be used
    ⦁ Grace period: once usage exceeds the soft limit, a timer starts; when the grace period (7 days by default) expires, the soft limit is enforced like a hard limit and the user cannot write more until usage drops back below it
    Note: quotas are configured per partition (filesystem). You cannot express "this user may use only 50MB across the whole system"; you can only say, for example, that a user may use 30MB on the /home partition. Remember: quotas are per partition!
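
    The grace period itself is edited with edquota -t. A minimal sketch of what the editor shows (the exact layout varies with the quota-tools version):

    [root@localhost ~]# edquota -t
    Grace period before enforcing soft limits for users:
    Time units may be: days, hours, minutes, or seconds
      Filesystem             Block grace period     Inode grace period
      /dev/sdb1                     7days                  7days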

    A minimal install does not include these commands; run yum install -y quota to install them

    ◆ Check whether the kernel supports quotas ◆

    [root@localhost ~]# cat /boot/config-3.10.0-693.el7.x86_64 |grep "CONFIG_QUOTA"
    CONFIG_QUOTA=y
    CONFIG_QUOTA_NETLINK_INTERFACE=y
    # CONFIG_QUOTA_DEBUG is not set
    CONFIG_QUOTA_TREE=y
    CONFIG_QUOTACTL=y
    CONFIG_QUOTACTL_COMPAT=y
    

    ◆ Check whether the target partition's mount options meet the requirements ◆

    [root@localhost ~]# dumpe2fs -h /dev/vdb |grep "Default mount options"
    dumpe2fs 1.42.9 (28-Dec-2013)
    Default mount options:    user_xattr acl
     
    #Check whether the output includes the usrquota and grpquota mount options
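
    If the two options are missing you do not have to reformat; the filesystem can be remounted with them added (the experiment below does exactly this). A sketch, assuming /dev/vdb is already mounted (the /data mount point here is hypothetical):

    [root@localhost ~]# mount -o remount,usrquota,grpquota /dev/vdb
    [root@localhost ~]# cat /proc/mounts |grep "/dev/vdb"
    /dev/vdb /data ext4 rw,relatime,quota,usrquota,grpquota,data=ordered 0 0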
    

    ◆ quotacheck: generate quota files for users and groups ◆

    [root@localhost ~]# quotacheck --help
    Utility for checking and repairing quota files.
    quotacheck [-gucbfinvdmMR] [-F <quota-format>] filesystem|-a
    
    Syntax: [ quotacheck [options] [partition] ]
    
            -a      #Scan every quota-enabled partition listed in /etc/mtab; with this option no partition name is needed
            -u      #Build the user quota file, i.e. generate aquota.user
            -g      #Build the group quota file, i.e. aquota.group
            -v      #Show the scan progress
            -c      #Discard the existing quota files and build new ones
    

    ◆ edquota: edit the quota files and set the limits ◆

    [root@localhost ~]# edquota --help
    edquota: Usage:
            edquota [-rm] [-u] [-F formatname] [-p username] [-f filesystem] username ...
            edquota [-rm] -g [-F formatname] [-p groupname] [-f filesystem] groupname ...
            edquota [-u|g] [-F formatname] [-f filesystem] -t
            edquota [-u|g] [-F formatname] [-f filesystem] -T username|groupname ...
    
    Syntax: [ edquota [options] [user or group name] ]
    
            -u      #User name
            -g      #Group name
            -t      #Set the grace period
            -p      #Copy quota rules from a template, so you don't have to configure every user or group by hand
                    #edquota        -p template_user     -u target_user
     
    #Note: sizes written in the quota file default to KB
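
    For example, to copy lyshark's quota rules to a second user (the user tom here is hypothetical):

    [root@localhost ~]# edquota -p lyshark -u tom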
    

    ◆ quotaon: turn quota management on ◆

    [root@localhost ~]# quotaon --help
    quotaon: Usage:
            quotaon [-guvp] [-F quotaformat] [-x state] -a
            quotaon [-guvp] [-F quotaformat] [-x state] filesys ...
    
    Syntax: [ quotaon [options] [partition] ]
    
            -a      #Enable quotas on every partition listed in /etc/mtab (no partition name needed)
            -u      #Enable user quotas
            -g      #Enable group quotas
            -v      #Show progress information
    

    ◆ quotaoff: turn quota management off ◆

    [root@localhost ~]# quotaoff --help
    quotaoff: Usage:
            quotaoff [-guvp] [-F quotaformat] [-x state] -a
            quotaoff [-guvp] [-F quotaformat] [-x state] filesys ...
    
    Syntax: [ quotaoff [options] [partition] ]
    
            -a      #Disable quotas on every partition listed in /etc/mtab (no partition name needed)
            -u      #Disable user quotas
            -g      #Disable group quotas
            -v      #Show progress information
    

    ◆ quota: show quota information for a given user or group ◆

    [root@localhost ~]# quota --help
    quota: Usage: quota [-guqvswim] [-l | [-Q | -A]] [-F quotaformat]
            quota [-qvswim] [-l | [-Q | -A]] [-F quotaformat] -u username ...
            quota [-qvswim] [-l | [-Q | -A]] [-F quotaformat] -g groupname ...
            quota [-qvswugQm] [-F quotaformat] -f filesystem ...
    
    Syntax: [ quota [options] [user name] ]
    
            -u      #User name
            -g      #Group name
            -v      #Show details
            -s      #Show sizes in human-readable units
    

    ◆ repquota: report quotas for a whole partition ◆

    [root@localhost ~]# repquota --help
    repquota: Utility for reporting quotas.
    Usage:
    repquota [-vugsi] [-c|C] [-t|n] [-F quotaformat] (-a | mntpoint)
    
    Syntax: [ repquota [options] [partition] ]
    
            -u      #Report user quotas
            -g      #Report group quotas
            -v      #Show details
            -s      #Show sizes in human-readable units
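
    A sketch of what the user part of a report looks like (the numbers here are illustrative, not taken from the experiment below):

    [root@localhost ~]# repquota -ugv /dev/sdb1
    *** Report for user quotas on device /dev/sdb1
    Block grace time: 7days; Inode grace time: 7days
                            Block limits                File limits
    User            used    soft    hard  grace    used  soft  hard  grace
    ----------------------------------------------------------------------
    root      --      20       0       0              2     0     0
    lyshark   --       0  204800  512000              0     0     0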
    

    ◆ setquota: set quotas non-interactively ◆

    [root@localhost ~]# setquota --help
    setquota: Usage:
      setquota [-u|-g] [-rm] [-F quotaformat] <user|group>
            <block-softlimit> <block-hardlimit> <inode-softlimit> <inode-hardlimit> -a|<filesystem>...
      setquota [-u|-g] [-rm] [-F quotaformat] <-p protouser|protogroup> <user|group> -a|<filesystem>...
      setquota [-u|-g] [-rm] [-F quotaformat] -b [-c] -a|<filesystem>...
      setquota [-u|-g] [-F quotaformat] -t <blockgrace> <inodegrace> -a|<filesystem>...
      setquota [-u|-g] [-F quotaformat] <user|group> -T <blockgrace> <inodegrace> -a|<filesystem>...
    
    setquota -u <user> <block-soft> <block-hard> <inode-soft> <inode-hard> <partition>
     
    Note: a non-interactive command like this is better suited to scripts, and when many users need identical quota settings, the settings can also be copied from a template.
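
    For example, the lyshark limits configured interactively further below could be set in one line (sizes in KB: 204800KB = 200M, 512000KB = 500M):

    [root@localhost ~]# setquota -u lyshark 204800 512000 0 0 /dev/sdb1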
    

    ◆ A small quota experiment ◆

    ⦁ Take an unpartitioned disk, /dev/sdb, partition it by hand and format it.
    ⦁ Enable disk quotas and add them to the boot-time mount configuration.
    ⦁ Create the user lyshark and the group temp.
    ⦁ Give lyshark a 200M soft limit and a 500M hard limit; give temp a 100M soft limit and a 200M hard limit.

    1. Check that the system supports quotas

    [root@localhost ~]# cat /boot/config-3.10.0-862.el7.x86_64 |grep "CONFIG_QUOTA"
    CONFIG_QUOTA=y
    CONFIG_QUOTA_NETLINK_INTERFACE=y
    # CONFIG_QUOTA_DEBUG is not set
    CONFIG_QUOTA_TREE=y
    CONFIG_QUOTACTL=y
    CONFIG_QUOTACTL_COMPAT=y
    

    2. Inspect the disks

    [root@localhost ~]# ll /dev/sd*
    brw-rw---- 1 root disk 8,  0 Jun 24 09:14 /dev/sda
    brw-rw---- 1 root disk 8,  1 Jun 24 09:14 /dev/sda1
    brw-rw---- 1 root disk 8,  2 Jun 24 09:14 /dev/sda2
    brw-rw---- 1 root disk 8, 16 Jun 24 09:14 /dev/sdb
    

    3. Partition /dev/sdb and format it as ext4

    [root@localhost ~]# parted /dev/sdb
    GNU Parted 3.1
    Using /dev/sdb
    Welcome to GNU Parted! Type 'help' to view a list of commands.
    (parted) mkpart
    Partition name?  []? sdb1
    File system type?  [ext2]? ext2
    Start? 1M
    End? 10000M
    (parted) p
    Model: VMware, VMware Virtual S (scsi)
    Disk /dev/sdb: 10.7GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    Disk Flags:
    
    Number  Start   End     Size    File system  Name  Flags
     1      1049kB  10.0GB  9999MB  ext4         sdb1
    
    (parted) q
    Information: You may need to update /etc/fstab.
    
    [root@localhost ~]# mkfs.ext4 /dev/sdb1
    
    mke2fs 1.42.9 (28-Dec-2013)
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=0 blocks, Stripe width=0 blocks
    610800 inodes, 2441216 blocks
    122060 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=2151677952
    75 block groups
    32768 blocks per group, 32768 fragments per group
    8144 inodes per group
    Superblock backups stored on blocks:
            32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
    
    Allocating group tables: done
    Writing inode tables: done
    Creating journal (32768 blocks): done
    Writing superblocks and filesystem accounting information: done
    

    4. Create a mount point and mount the device

    [root@localhost ~]# mkdir /sdb1
    [root@localhost ~]# mount /dev/sdb1 /sdb1/
    
    [root@localhost ~]# df -h
    Filesystem               Size  Used Avail Use% Mounted on
    /dev/mapper/centos-root  8.0G  1.4G  6.7G  17% /
    devtmpfs                  98M     0   98M   0% /dev
    tmpfs                    110M     0  110M   0% /dev/shm
    tmpfs                    110M  5.5M  104M   6% /run
    tmpfs                    110M     0  110M   0% /sys/fs/cgroup
    /dev/sda1               1014M  130M  885M  13% /boot
    tmpfs                     22M     0   22M   0% /run/user/0
    /dev/sr0                 4.2G  4.2G     0 100% /mnt
    /dev/sdb1                9.1G   37M  8.6G   1% /sdb1
    

    5. Check that the partition supports quotas (look for usrquota and grpquota)

    [root@localhost ~]# dumpe2fs -h /dev/sdb1 |grep "Default mount options"
    dumpe2fs 1.42.9 (28-Dec-2013)
    Default mount options:    user_xattr acl
    
    [root@localhost ~]# cat /proc/mounts |grep "/dev/sdb1"
    /dev/sdb1 /sdb1 ext4 rw,relatime,data=ordered 0 0
    
    #The options above are missing, so remount the filesystem with the quota options added
    
    [root@localhost ~]# mount -o remount,usrquota,grpquota /dev/sdb1
    
    [root@localhost ~]# cat /proc/mounts |grep "/dev/sdb1"
    /dev/sdb1 /sdb1 ext4 rw,relatime,quota,usrquota,grpquota,data=ordered 0 0
    

    6. Make the partition mount automatically at boot, with quotas enabled

    [root@localhost ~]# ls -l /dev/disk/by-uuid/
    total 0
    lrwxrwxrwx 1 root root 10 Sep 21 20:07 13d5ccc2-52db-4aec-963a-f88e8edcf01c -> ../../sda1
    lrwxrwxrwx 1 root root  9 Sep 21 20:07 2018-05-03-20-55-23-00 -> ../../sr0
    lrwxrwxrwx 1 root root 10 Sep 21 20:07 4604dcf2-da39-455a-9719-e7c5833e566c -> ../../dm-0
    lrwxrwxrwx 1 root root 10 Sep 21 20:47 939cbeb8-bc88-44aa-9221-50672111e123 -> ../../sdb1
    lrwxrwxrwx 1 root root 10 Sep 21 20:07 f6a4b420-aa6a-4e66-bbb3-c8e8280a099f -> ../../dm-1
    
    
    [root@localhost ~]# cat /etc/fstab
    
    #
    # /etc/fstab
    # Created by anaconda on Tue Sep 18 09:05:06 2018
    #
    # Accessible filesystems, by reference, are maintained under '/dev/disk'
    # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
    #
    /dev/mapper/centos-root /                       xfs     defaults        0 0
    UUID=13d5ccc2-52db-4aec-963a-f88e8edcf01c /boot                   xfs     defaults        0 0
    /dev/mapper/centos-swap swap                    swap    defaults        0 0
    
    UUID=7d7f22ed-466e-4205-8efe-1b6184dc5e1b swap swap defaults 0 0
    UUID=939cbeb8-bc88-44aa-9221-50672111e123 /sdb1   ext4   defaults,usrquota,grpquota  0 0
    
    [root@localhost ~]# mount -o remount,usrquota,grpquota /dev/sdb1
    

    7. Generate the quota files: quotacheck -ugv [partition]

    [root@localhost ~]# quotacheck -ugv /dev/sdb1
    
    quotacheck: Your kernel probably supports journaled quota but you are not using it. Consider switching to journaled quota to avoid running quotacheck after an unclean shutdown.
    quotacheck: Scanning /dev/sdb1 [/sdb1] done
    quotacheck: Cannot stat old user quota file /sdb1/aquota.user: No such file or directory. Usage will not be subtracted.
    quotacheck: Cannot stat old group quota file /sdb1/aquota.group: No such file or directory. Usage will not be subtracted.
    quotacheck: Cannot stat old user quota file /sdb1/aquota.user: No such file or directory. Usage will not be subtracted.
    quotacheck: Cannot stat old group quota file /sdb1/aquota.group: No such file or directory. Usage will not be subtracted.
    quotacheck: Checked 3 directories and 0 files
    quotacheck: Old file not found.
    quotacheck: Old file not found.
    

    8. Edit the limits: edquota -ugtp [user/group name]

    Give lyshark a soft limit of 200M and a hard limit of 500M

    [root@localhost ~]# edquota -u lyshark
    
    Disk quotas for user lyshark (uid 1000):
    
         ↓filesystem                  soft(space) hard(space)  inodes   soft(files) hard(files)
      Filesystem              blocks       soft       hard     inodes     soft     hard
      /dev/sdb1                 0          200M       500M          0        0        0
    

    Give the temp group a soft limit of 100M and a hard limit of 200M.

    [root@localhost ~]# edquota -g temp
    
    Disk quotas for group temp (gid 1001):
      Filesystem                   blocks       soft       hard     inodes     soft     hard
      /dev/sdb1                         0     102400     204800          0        0        0
    

    9. Turn quotas on: quotaon -augv (quotaoff turns them off)

    [root@localhost ~]# quotaon -augv
    /dev/sdb1 [/sdb1]: group quotas turned on
    /dev/sdb1 [/sdb1]: user quotas turned on
    

    10. Check a given user's or group's quota: quota -ugvs

    [root@localhost ~]# quota -ugvs
    
    Disk quotas for user root (uid 0):
         Filesystem   space   quota   limit   grace   files   quota   limit   grace
          /dev/sdb1     20K      0K      0K               2       0       0
    Disk quotas for group root (gid 0):
         Filesystem   space   quota   limit   grace   files   quota   limit   grace
          /dev/sdb1     20K      0K      0K               2       0       0
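
    As a final check, you can write test data as the restricted user and watch the limits bite; a hedged sketch, assuming lyshark has been given write permission on /sdb1:

    [root@localhost ~]# chmod 777 /sdb1                   #for the test only: open up the mount point
    [root@localhost ~]# su - lyshark
    [lyshark@localhost ~]$ dd if=/dev/zero of=/sdb1/test.img bs=1M count=600
    sdb1: warning, user block quota exceeded.
    sdb1: write failed, user block limit reached.
    dd: error writing '/sdb1/test.img': Disk quota exceeded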
    

    LVM: Logical Volume Manager

    LVM (Logical Volume Manager) is a mechanism for managing disk partitions under Linux. With ordinary partitioning, a partition's size cannot be changed once it has been created; when a partition can no longer hold a file, the usual workarounds are symbolic links or partition-resizing tools, but these only postpone the problem rather than solve it. Put simply, LVM merges physical disks into one or more large virtual storage pools and lets us carve space out of the pool as needed; because the pool is virtual, the allocations can be resized freely.

    LVM components:

    ⦁ Physical Volume (PV): a disk or partition converted for LVM use
    ⦁ Volume Group (VG): several physical volumes combined into one group; the members may be different partitions of the same disk or partitions from different disks. A volume group is conveniently thought of as a disk.
    ⦁ Logical Volume (LV): if the volume group is the disk, a logical volume is a partition on it; a logical volume can be formatted and hold data.
    ⦁ Physical Extent (PE): the smallest allocation unit of a volume group, so a PE is to a VG roughly what a sector is to a disk. The default size is 4MB and it is configurable (see the sketch after this list).
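
    Because LVs are carved out in whole PEs, an LV's size is always a multiple of the PE size, and lvcreate can take an extent count (-l) instead of a byte size (-L). A small sketch, assuming a VG with the default 4MB PE:

    #2560 extents x 4MB = 10GB, so this is equivalent to lvcreate -L 10G
    [root@localhost ~]# lvcreate -l 2560 -n my_lv my_vg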

    Prepare four disks here; no partitioning or formatting is needed.

    [root@localhost ~]# ll /dev/sd[b-z]
    
    brw-rw---- 1 root disk 8, 16 Sep 21 22:04 /dev/sdb
    brw-rw---- 1 root disk 8, 32 Sep 21 22:04 /dev/sdc
    brw-rw---- 1 root disk 8, 48 Sep 21 22:04 /dev/sdd
    brw-rw---- 1 root disk 8, 64 Sep 21 22:04 /dev/sde
    

    ◆ PV: creating and removing physical volumes ◆

    Creating a PV

    pvcreate [device] [device] [......]
    
    [root@localhost ~]# ll /dev/sd[b-z]
    brw-rw---- 1 root disk 8, 16 Sep 21 22:04 /dev/sdb
    brw-rw---- 1 root disk 8, 32 Sep 21 22:04 /dev/sdc
    brw-rw---- 1 root disk 8, 48 Sep 21 22:04 /dev/sdd
    brw-rw---- 1 root disk 8, 64 Sep 21 22:04 /dev/sde
    
    [root@localhost ~]# pvcreate /dev/sdb /dev/sdc /dev/sdd  #create PVs on 3 disks here
      Physical volume "/dev/sdb" successfully created.
      Physical volume "/dev/sdc" successfully created.
      Physical volume "/dev/sdd" successfully created.
    
    [root@localhost ~]# pvs                                  #list the newly created PVs
      PV         VG     Fmt  Attr PSize  PFree
      /dev/sda2  centos lvm2 a--  <9.00g     0
      /dev/sdb          lvm2 ---  10.00g 10.00g
      /dev/sdc          lvm2 ---  10.00g 10.00g
      /dev/sdd          lvm2 ---  10.00g 10.00g
    

    Removing a PV

    pvremove [device]
    
    [root@localhost ~]# pvs
      PV         VG     Fmt  Attr PSize  PFree
      /dev/sda2  centos lvm2 a--  <9.00g     0
      /dev/sdb          lvm2 ---  10.00g 10.00g
      /dev/sdc          lvm2 ---  10.00g 10.00g
      /dev/sdd          lvm2 ---  10.00g 10.00g
    
    [root@localhost ~]# pvremove /dev/sdd                       #remove /dev/sdd
      Labels on physical volume "/dev/sdd" successfully wiped.
    
    [root@localhost ~]# pvs
      PV         VG     Fmt  Attr PSize  PFree
      /dev/sda2  centos lvm2 a--  <9.00g     0
      /dev/sdb          lvm2 ---  10.00g 10.00g
      /dev/sdc          lvm2 ---  10.00g 10.00g
    

    ◆ VG: creating and removing volume groups ◆

    Create a VG; a volume group is built from PVs

    vgcreate -s [PE size] [VG name] [device] [device][.....]
    
    [root@localhost ~]# pvs
      PV         VG     Fmt  Attr PSize  PFree
      /dev/sda2  centos lvm2 a--  <9.00g     0
      /dev/sdb          lvm2 ---  10.00g 10.00g
      /dev/sdc          lvm2 ---  10.00g 10.00g
    
    [root@localhost ~]# vgcreate -s 4M my_vg /dev/sdb /dev/sdc        #create a VG here
      Volume group "my_vg" successfully created
    
    [root@localhost ~]# vgs
      VG     #PV #LV #SN Attr   VSize  VFree
      centos   1   2   0 wz--n- <9.00g     0
      my_vg    2   0   0 wz--n- 19.99g 19.99g                         #this is the VG, named my_vg
    

    Add another PV to the my_vg volume group, i.e. extend the VG

    vgextend [VG name] [PV device]
    
    [root@localhost ~]# pvs
      PV         VG     Fmt  Attr PSize   PFree
      /dev/sda2  centos lvm2 a--   <9.00g      0
      /dev/sdb   my_vg  lvm2 a--  <10.00g <10.00g
      /dev/sdc   my_vg  lvm2 a--  <10.00g <10.00g
      /dev/sdd          lvm2 ---   10.00g  10.00g               #this PV belongs to no VG yet
    
    [root@localhost ~]# vgextend my_vg /dev/sdd                 #add a PV to the given VG
      Volume group "my_vg" successfully extended
    
    [root@localhost ~]# pvs
      PV         VG     Fmt  Attr PSize   PFree
      /dev/sda2  centos lvm2 a--   <9.00g      0
      /dev/sdb   my_vg  lvm2 a--  <10.00g <10.00g
      /dev/sdc   my_vg  lvm2 a--  <10.00g <10.00g
      /dev/sdd   my_vg  lvm2 a--  <10.00g <10.00g               #now assigned to the my_vg VG
    

    Remove a single PV from a VG

    vgreduce [VG name] [PV device]
    
    [root@localhost ~]# pvs
      PV         VG     Fmt  Attr PSize   PFree
      /dev/sda2  centos lvm2 a--   <9.00g      0
      /dev/sdb   my_vg  lvm2 a--  <10.00g <10.00g
      /dev/sdc   my_vg  lvm2 a--  <10.00g <10.00g
      /dev/sdd   my_vg  lvm2 a--  <10.00g <10.00g
    
    [root@localhost ~]# vgreduce my_vg /dev/sdd                #remove /dev/sdd from the my_vg VG
      Removed "/dev/sdd" from volume group "my_vg"
    
    [root@localhost ~]# pvs
      PV         VG     Fmt  Attr PSize   PFree
      /dev/sda2  centos lvm2 a--   <9.00g      0
      /dev/sdb   my_vg  lvm2 a--  <10.00g <10.00g
      /dev/sdc   my_vg  lvm2 a--  <10.00g <10.00g
      /dev/sdd          lvm2 ---   10.00g  10.00g
    

    Remove an entire VG

    vgremove [VG name]
    
    [root@localhost ~]# vgs
      VG     #PV #LV #SN Attr   VSize  VFree
      centos   1   2   0 wz--n- <9.00g     0
      my_vg    2   0   0 wz--n- 19.99g 19.99g
    
    [root@localhost ~]# vgremove my_vg                    #remove the whole VG
      Volume group "my_vg" successfully removed
    
    [root@localhost ~]# vgs
      VG     #PV #LV #SN Attr   VSize  VFree
      centos   1   2   0 wz--n- <9.00g    0
    [root@localhost ~]#
    

    Remove all empty PVs from a VG

    vgreduce -a [VG name]
    
    [root@localhost ~]# vgs
      VG     #PV #LV #SN Attr   VSize   VFree
      centos   1   2   0 wz--n-  <9.00g      0
      my_vg    3   0   0 wz--n- <29.99g <29.99g
    
    [root@localhost ~]# vgreduce -a my_vg                 #remove only PVs with no allocated extents
      Removed "/dev/sdb" from volume group "my_vg"
      Removed "/dev/sdc" from volume group "my_vg"
    
    [root@localhost ~]# vgs
      VG     #PV #LV #SN Attr   VSize   VFree
      centos   1   2   0 wz--n-  <9.00g      0
      my_vg    1   0   0 wz--n- <10.00g <10.00g
    
    

    ◆ LV: creating and removing logical volumes ◆

    Creating an LV

    lvcreate -L [size] -n [LV name] [VG to allocate from]
    
    [root@localhost ~]# lvs
      LV   VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
      root centos -wi-ao---- <8.00g
      swap centos -wi-ao----  1.00g
    
    [root@localhost ~]# lvcreate -L 10G -n my_lv my_vg            #create the LV
      Logical volume "my_lv" created.
    
    [root@localhost ~]# lvs
      LV    VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
      root  centos -wi-ao---- <8.00g
      swap  centos -wi-ao----  1.00g
      my_lv my_vg  -wi-a----- 10.00g
    

    Format it, then mount and use it

    [root@localhost ~]# mkdir /LVM                            #first create a mount point
    [root@localhost ~]#
    [root@localhost ~]# mkfs.ext4 /dev/my_vg/my_lv            #format the LV
    mke2fs 1.42.9 (28-Dec-2013)
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=0 blocks, Stripe width=0 blocks
    655360 inodes, 2621440 blocks
    131072 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=2151677952
    80 block groups
    32768 blocks per group, 32768 fragments per group
    8192 inodes per group
    Superblock backups stored on blocks:
            32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
    
    Allocating group tables: done
    Writing inode tables: done
    Creating journal (32768 blocks): done
    Writing superblocks and filesystem accounting information: done
    
    [root@localhost ~]# mount /dev/my_vg/my_lv /LVM/                  #mount the LV
    [root@localhost ~]#
    [root@localhost ~]# df -h                                         #check the result
    Filesystem               Size  Used Avail Use% Mounted on
    /dev/mapper/centos-root  8.0G  1.2G  6.9G  15% /
    devtmpfs                  98M     0   98M   0% /dev
    tmpfs                    110M     0  110M   0% /dev/shm
    tmpfs                    110M  5.5M  104M   5% /run
    tmpfs                    110M     0  110M   0% /sys/fs/cgroup
    /dev/sda1               1014M  130M  885M  13% /boot
    tmpfs                     22M     0   22M   0% /run/user/0
    /dev/mapper/my_vg-my_lv  9.8G   37M  9.2G   1% /LVM                ← mounted successfully
    

    ◆ Increasing LV capacity (grow the LV by 5G) ◆

    Note: when growing, extend the LV first, then grow the filesystem

    [root@localhost ~]# df -h
    Filesystem               Size  Used Avail Use% Mounted on
    /dev/mapper/centos-root  8.0G  1.2G  6.9G  15% /
    devtmpfs                  98M     0   98M   0% /dev
    tmpfs                    110M     0  110M   0% /dev/shm
    tmpfs                    110M  5.5M  104M   5% /run
    tmpfs                    110M     0  110M   0% /sys/fs/cgroup
    /dev/sda1               1014M  130M  885M  13% /boot
    tmpfs                     22M     0   22M   0% /run/user/0
    /dev/mapper/my_vg-my_lv  9.8G   37M  9.2G   1% /LVM                  ← currently 10G
    
    [root@localhost ~]# lvextend -L +5G /dev/my_vg/my_lv                 #grow the LV, allocating 5G from the VG
      Size of logical volume my_vg/my_lv changed from 10.00 GiB (2560 extents) to 15.00 GiB (3840 extents).
      Logical volume my_vg/my_lv successfully resized.
    
    [root@localhost ~]# resize2fs -f /dev/my_vg/my_lv                    #grow the filesystem
    resize2fs 1.42.9 (28-Dec-2013)
    Filesystem at /dev/my_vg/my_lv is mounted on /LVM; on-line resizing required
    old_desc_blocks = 2, new_desc_blocks = 2
    The filesystem on /dev/my_vg/my_lv is now 3932160 blocks long.
    
    [root@localhost ~]# df -h                                            #verify the result
    Filesystem               Size  Used Avail Use% Mounted on
    /dev/mapper/centos-root  8.0G  1.2G  6.9G  15% /
    devtmpfs                  98M     0   98M   0% /dev
    tmpfs                    110M     0  110M   0% /dev/shm
    tmpfs                    110M  5.5M  104M   5% /run
    tmpfs                    110M     0  110M   0% /sys/fs/cgroup
    /dev/sda1               1014M  130M  885M  13% /boot
    tmpfs                     22M     0   22M   0% /run/user/0
    /dev/mapper/my_vg-my_lv   15G   41M   14G   1% /LVM                  ← grown from 10G to 15G
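
    As an aside, newer LVM versions can do both steps at once: lvextend -r (--resizefs) resizes the filesystem right after extending the LV. A hedged one-line equivalent of the above:

    [root@localhost ~]# lvextend -r -L +5G /dev/my_vg/my_lv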
    

    ◆ Shrinking LV capacity (shrink the LV by 5G) ◆

    Note: when shrinking, unmount the filesystem, check it, shrink the filesystem first, and only then shrink the LV. (This procedure applies to ext4; XFS filesystems cannot be shrunk.)

    [root@localhost ~]# df -h
    Filesystem               Size  Used Avail Use% Mounted on
    /dev/mapper/centos-root  8.0G  1.2G  6.9G  15% /
    devtmpfs                  98M     0   98M   0% /dev
    tmpfs                    110M     0  110M   0% /dev/shm
    tmpfs                    110M  5.5M  104M   5% /run
    tmpfs                    110M     0  110M   0% /sys/fs/cgroup
    /dev/sda1               1014M  130M  885M  13% /boot
    tmpfs                     22M     0   22M   0% /run/user/0
    /dev/mapper/my_vg-my_lv   15G   41M   14G   1% /LVM                 ← 15G at this point
    
    [root@localhost ~]# umount /dev/my_vg/my_lv                         #unmount the LV
    
    [root@localhost ~]# e2fsck -f /dev/my_vg/my_lv                      #check the filesystem
    e2fsck 1.42.9 (28-Dec-2013)
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information
    /dev/my_vg/my_lv: 11/983040 files (0.0% non-contiguous), 104724/3932160 blocks
    
    [root@localhost ~]# resize2fs -f /dev/my_vg/my_lv 10G               #shrink the filesystem to the new, smaller size
    resize2fs 1.42.9 (28-Dec-2013)
    Resizing the filesystem on /dev/my_vg/my_lv to 2621440 (4k) blocks.
    The filesystem on /dev/my_vg/my_lv is now 2621440 blocks long.
    
    [root@localhost ~]# lvreduce -L 10G /dev/my_vg/my_lv                 #shrink the LV
      WARNING: Reducing active logical volume to 10.00 GiB.
      THIS MAY DESTROY YOUR DATA (filesystem etc.)
    Do you really want to reduce my_vg/my_lv? [y/n]: y                   #answer y
      Size of logical volume my_vg/my_lv changed from 15.00 GiB (3840 extents) to 10.00 GiB (2560).
      Logical volume my_vg/my_lv successfully resized.
    
    [root@localhost ~]# mount /dev/my_vg/my_lv /LVM/                    #mount it again
    
    [root@localhost ~]# df -h                                           #check the change
    Filesystem               Size  Used Avail Use% Mounted on
    /dev/mapper/centos-root  8.0G  1.2G  6.9G  15% /
    devtmpfs                  98M     0   98M   0% /dev
    tmpfs                    110M     0  110M   0% /dev/shm
    tmpfs                    110M  5.5M  104M   5% /run
    tmpfs                    110M     0  110M   0% /sys/fs/cgroup
    /dev/sda1               1014M  130M  885M  13% /boot
    tmpfs                     22M     0   22M   0% /run/user/0
    /dev/mapper/my_vg-my_lv  9.8G   37M  9.2G   1% /LVM                 ← back from 15G to 10G
    

    ◆ LV snapshots ◆

    Taking a snapshot

    lvcreate -s -n [snapshot name] -L [snapshot size] [origin LV]
    
    [root@localhost LVM]# ls
    1    12  16  2   23  27  30  34  38  41  45  49  52  56  6   63  67  70  74  78  81  85  89  92  96
    10   13  17  20  24  28  31  35  39  42  46  5   53  57  60  64  68  71  75  79  82  86  9   93  97
    100  14  18  21  25  29  32  36  4   43  47  50  54  58  61  65  69  72  76  8   83  87  90  94  98
    11   15  19  22  26  3   33  37  40  44  48  51  55  59  62  66  7   73  77  80  84  88  91  95  99
    
    [root@localhost LVM]# lvcreate -s -n mylv_back -L 200M /dev/my_vg/my_lv            #snapshot the LV mounted at /LVM
      Logical volume "mylv_back" created.
    
    [root@localhost LVM]# lvs                                                          #list the snapshot
      LV        VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
      root      centos -wi-ao----  <8.00g
      swap      centos -wi-ao----   1.00g
      my_lv     my_vg  owi-aos---  10.00g
      mylv_back my_vg  swi-a-s--- 200.00m      my_lv  0.01                             ←this is the snapshot
    

    Restoring from a snapshot

    [root@localhost LVM]# ls
    1    12  16  2   23  27  30  34  38  41  45  49  52  56  6   63  67  70  74  78  81  85  89  92  96
    10   13  17  20  24  28  31  35  39  42  46  5   53  57  60  64  68  71  75  79  82  86  9   93  97
    100  14  18  21  25  29  32  36  4   43  47  50  54  58  61  65  69  72  76  8   83  87  90  94  98
    11   15  19  22  26  3   33  37  40  44  48  51  55  59  62  66  7   73  77  80  84  88  91  95  99
    
    [root@localhost LVM]# rm -fr *                                #simulate deleting the data
    [root@localhost LVM]# mkdir /back                             #create a mount point
    [root@localhost LVM]# mount /dev/my_vg/mylv_back /back/       #mount the snapshot
    [root@localhost LVM]# cp -a /back/* ./                        #copy the backed-up files back
    
    [root@localhost LVM]# ls
    1    12  16  2   23  27  30  34  38  41  45  49  52  56  6   63  67  70  74  78  81  85  89  92  96
    10   13  17  20  24  28  31  35  39  42  46  5   53  57  60  64  68  71  75  79  82  86  9   93  97
    100  14  18  21  25  29  32  36  4   43  47  50  54  58  61  65  69  72  76  8   83  87  90  94  98
    11   15  19  22  26  3   33  37  40  44  48  51  55  59  62  66  7   73  77  80  84  88  91  95  99
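
    Once the data has been copied back, the snapshot has done its job and can be unmounted and removed (the confirmation prompt below is approximate):

    [root@localhost LVM]# umount /back
    [root@localhost LVM]# lvremove /dev/my_vg/mylv_back
    Do you really want to remove active logical volume my_vg/mylv_back? [y/n]: y
      Logical volume "mylv_back" successfully removed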
    

    RAID: Redundant Array of Independent Disks

    Definition: an array built from independent disks that provides redundancy.

    What RAID provides:

    1. Organizes several disks into one logical volume, spanning the disks
    2. Splits data into blocks written to and read from several disks in parallel, improving throughput
    3. Provides fault tolerance through mirroring or parity

    Note: a RAID array mainly ensures the service survives a hardware failure; it cannot protect against mistaken operations

    RAID implementations fall into three categories:

    1. External RAID enclosures
    2. Internal RAID controller cards
    3. Software emulation
    Note: in production, RAID is usually done in hardware; the software RAID below is for learning purposes only

    Overview of RAID levels

    RAID 0, no parity (striped volume)
    RAID 0 improves performance by spreading consecutive data across several disks, so each data request can be served by multiple disks in parallel, with every disk handling its own part of the request

    RAID 1 (mirrored volume)
    RAID 1 provides redundancy by mirroring data onto pairs of independent disks. When the original copy is busy, data can be read directly from the mirror, so RAID 1 can also improve read performance.

    RAID 10 (mirrored stripes)
    RAID 10 combines RAID 1 and RAID 0: it stripes across mirrored pairs (no parity is involved), so it inherits RAID 0's speed and RAID 1's safety.

    RAID 5, distributed parity (at least 3 disks)
    RAID 5 balances storage performance, data safety, and cost; it can be thought of as a compromise between RAID 0 and RAID 1.
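
    As a rule of thumb for usable capacity: with N disks of size S, RAID 0 gives N×S, RAID 1 gives S, RAID 10 gives (N/2)×S, and RAID 5 gives (N-1)×S, since one disk's worth of space holds parity. That matches the array built below: three active 10GB members in RAID 5 yield roughly 20GB.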

    ◆ mdadm command breakdown ◆

    [root@localhost ~]# mdadm --help
    mdadm is used for building, managing, and monitoring
    Linux md devices (aka RAID arrays)
    Usage: mdadm 
    
    mdadm --create --auto=yes /dev/md[0-9] --raid-devices=[0-n] \
    --level=[015] --spare-devices=[0-n] /dev/sd[a-z]
    
    		--create      		 #create a new RAID array
    		--auto=yes    		 #use the default configuration
    		--raid-devices=N     #number of active array disks
    		--spare-devices=N 	 #number of spare disks
    		--level [015] 		 #RAID level
    		mdadm --detail       #show array details
    

    ◆ Building a RAID 5 ◆

    Note: a minimal install lacks this command; run yum install -y mdadm to install it

    [root@localhost ~]# ls -l /dev/sd[b-z]
    brw-rw---- 1 root disk 8, 16 Sep 21 23:06 /dev/sdb
    brw-rw---- 1 root disk 8, 32 Sep 21 23:06 /dev/sdc
    brw-rw---- 1 root disk 8, 48 Sep 21 23:06 /dev/sdd
    brw-rw---- 1 root disk 8, 64 Sep 21 23:04 /dev/sde
    
    [root@localhost ~]# mdadm --create --auto=yes /dev/md0 --level=5 \
    > --raid-devices=3 --spare-devices=1 /dev/sd{b,c,d,e}                  #create a RAID at /dev/md0, level RAID5,
    mdadm: Defaulting to version 1.2 metadata                              #3 active disks and 1 spare from sd{b,c,d,e}
    mdadm: array /dev/md0 started.
    
    [root@localhost ~]# mdadm --detail /dev/md0                            #show array details
    /dev/md0:   ←device file name
               Version : 1.2
         Creation Time : Fri Sep 21 23:19:09 2018   ←creation time
            Raid Level : raid5                      ←RAID level
            Array Size : 20953088 (19.98 GiB 21.46 GB)  ←usable capacity
         Used Dev Size : 10476544 (9.99 GiB 10.73 GB)   ←usable space per device
          Raid Devices : 3       ←number of RAID devices
         Total Devices : 4       ←total number of devices
           Persistence : Superblock is persistent
    
           Update Time : Fri Sep 21 23:19:26 2018
                 State : clean, degraded, recovering
        Active Devices : 3   ←active disks
       Working Devices : 4   ←working disks
        Failed Devices : 0   ←failed disks
         Spare Devices : 1   ←spare disks
    
                Layout : left-symmetric
            Chunk Size : 512K
    
    Consistency Policy : resync
    
        Rebuild Status : 34% complete
    
                  Name : localhost.localdomain:0  (local to host localhost.localdomain)
                  UUID : 2ee2bcd5:c5189354:d3810252:23c2d5a8   ←the array's UUID
                Events : 6
    
        Number   Major   Minor   RaidDevice State
           0       8       16        0      active sync   /dev/sdb
           1       8       32        1      active sync   /dev/sdc
           4       8       48        2      spare rebuilding   /dev/sdd
    
           3       8       64        -      spare   /dev/sde
    

    Format /dev/md0, then mount and use it

    [root@localhost ~]# mkfs -t ext4 /dev/md0             #format
    mke2fs 1.42.9 (28-Dec-2013)
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=128 blocks, Stripe width=256 blocks
    1310720 inodes, 5238272 blocks
    261913 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=2153775104
    160 block groups
    32768 blocks per group, 32768 fragments per group
    8192 inodes per group
    Superblock backups stored on blocks:
            32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
            4096000
    
    Allocating group tables: done
    Writing inode tables: done
    Creating journal (32768 blocks): done
    Writing superblocks and filesystem accounting information: done
    
    [root@localhost ~]# mkdir /RAID              #create a mount point
    [root@localhost ~]#
    [root@localhost ~]# mount /dev/md0 /RAID/    #mount the device
    [root@localhost ~]#
    [root@localhost ~]# df -h
    Filesystem               Size  Used Avail Use% Mounted on
    /dev/mapper/centos-root  8.0G  1.2G  6.9G  15% /
    devtmpfs                  98M     0   98M   0% /dev
    /dev/sr0                 4.2G  4.2G     0 100% /mnt
    /dev/md0                  20G   45M   19G   1% /RAID    ←mounted successfully
    

    ◆ RAID failure simulation and rescue ◆

    mdadm --manage /dev/md[0-9] --add device --remove device --fail device
    
    	--add     #add the given device to the md array
    	--remove  #remove the device
    	--fail    #mark a disk as faulty
    ------------------------------------------------------------
    [Experiment]
    
    
    [root@localhost /]# mdadm --manage /dev/md0 --fail /dev/sdb         #mark /dev/sdb as faulty
    mdadm: set /dev/sdb faulty in /dev/md0
    
    [root@localhost /]# mdadm --detail /dev/md0                         #check the status
    /dev/md0:
               Version : 1.2
         Creation Time : Fri Sep 21 23:19:09 2018
            Raid Level : raid5
            Array Size : 20953088 (19.98 GiB 21.46 GB)
         Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
          Raid Devices : 3
         Total Devices : 4
           Persistence : Superblock is persistent
    
           Update Time : Fri Sep 21 23:50:12 2018
                 State : clean, degraded, recovering
        Active Devices : 2
       Working Devices : 3
        Failed Devices : 1  ← one failed disk
         Spare Devices : 1
    
                Layout : left-symmetric
            Chunk Size : 512K
    
    Consistency Policy : resync
    
        Rebuild Status : 5% complete     ←note: the array is rebuilding; once it reaches 100% it works normally again
    
                  Name : localhost.localdomain:0  (local to host localhost.localdomain)
                  UUID : 2ee2bcd5:c5189354:d3810252:23c2d5a8
                Events : 20
    
        Number   Major   Minor   RaidDevice State
           3       8       64        0      spare rebuilding   /dev/sde
           1       8       32        1      active sync   /dev/sdc
           4       8       48        2      active sync   /dev/sdd
    
           0       8       16        -      faulty   /dev/sdb   ← the failed disk
    
    
    [root@localhost /]# mdadm --manage /dev/md0 --remove /dev/sdb            #remove the failed disk
    mdadm: hot removed /dev/sdb from /dev/md0
    
    [root@localhost /]# mdadm --manage /dev/md0 --add /dev/sdb               #add a new disk
    mdadm: added /dev/sdb
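
    One step worth adding once the array is healthy again: persist the array definition so it assembles automatically at boot. A minimal sketch:

    [root@localhost /]# mdadm --detail --scan >> /etc/mdadm.conf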
    

    References: 《Linux鸟哥私房菜》, 《Linux运维之道》
