  • Mounting and managing GlusterFS volumes from the client

    I. Client-Side Mounting

    High concurrency, good performance, and transparent failover can be achieved on GNU/Linux clients with the Gluster Native Client. Volumes can also be accessed over NFS v3; the NFS implementations on GNU/Linux and on other operating systems, such as FreeBSD, Mac OS X, Windows 7 (Professional and up), and Windows Server 2003, have been tested extensively, and other NFS client implementations can also work with the Gluster NFS server. From Microsoft Windows or any Samba client, volumes can be accessed over CIFS; for this access method the Samba packages must be installed on the client.

    In short, GlusterFS supports three client types: the Gluster Native Client, NFS, and CIFS. The Gluster Native Client is a FUSE-based client that runs in user space; it is the officially recommended client and exposes the full feature set of GlusterFS.

    1.1 Mounting with the Gluster Native Client

    The Gluster Native Client is FUSE-based, so FUSE must be available on the client. It is the officially recommended client and supports high concurrency and efficient write performance.

    Before installing the Gluster Native Client, verify that the FUSE module is loaded on the client and that the required modules are accessible, as follows:

    $ modprobe fuse  # add the FUSE loadable kernel module (LKM) to the Linux kernel
    $ dmesg | grep -i fuse  # verify that the FUSE module is loaded
    [  569.630373] fuse init (API version 7.22)
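    To have the module available after every reboot as well, it can be listed in a modules-load.d drop-in (a configuration sketch; the file name is a convention on systemd-based distributions, not something from the original steps):

    ```
    # /etc/modules-load.d/fuse.conf -- load the FUSE module at boot
    fuse
    ```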
    

    1.2 Installing the Gluster Native Client

    $ yum -y install glusterfs-client  # install the glusterfs-client package
    $ mkdir /mnt/glusterfs  # create the mount point
    $ mount.glusterfs 192.168.56.11:/gv1 /mnt/glusterfs/  # mount /gv1
    $ df -h
    Filesystem          Size  Used Avail Use% Mounted on
    /dev/sda2            20G  1.4G   19G   7% /
    devtmpfs            231M     0  231M   0% /dev
    tmpfs               241M     0  241M   0% /dev/shm
    tmpfs               241M  4.6M  236M   2% /run
    tmpfs               241M     0  241M   0% /sys/fs/cgroup
    /dev/sda1           197M   97M  100M  50% /boot
    tmpfs                49M     0   49M   0% /run/user/0
    192.168.56.11:/gv1  4.0G  312M  3.7G   8% /mnt/glusterfs
    $ ll /mnt/glusterfs/  # list the contents of the mount point
    total 100000
    -rw-r--r-- 1 root root 102400000 Aug  7 04:30 100M.file
    $ mount  # check the mount information
    sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
    proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
    ......
    192.168.99.251:/gv1 on /mnt/glusterfs type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
    

    1.3 Manual mount options

    The following options can be specified with the mount -t glusterfs command. Note that all options must be separated by commas.

    backupvolfile-server=server-name  # if this option is given when mounting the FUSE client, the server named here is used as the volfile server to mount the client when the first volfile server fails
    
    volfile-max-fetch-attempts=number of attempts  # number of attempts to fetch the volume file while mounting the volume
    
    log-level=loglevel  # log level
    
    log-file=logfile    # log file
    
    transport=transport-type  # transport protocol
    
    direct-io-mode=[enable|disable]  # enable or disable direct I/O mode
    
    use-readdirp=[yes|no]  # when set to yes, forces readdirp mode in the FUSE kernel module
    
    For example:
    
    # mount -t glusterfs -o backupvolfile-server=volfile_server2,use-readdirp=no,volfile-max-fetch-attempts=2,log-level=WARNING,log-file=/var/log/gluster.log server1:/test-volume /mnt/glusterfs
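    The comma-separated option string gets unwieldy as options accumulate. A small helper that joins options into the string expected by `-o` (a sketch; the option values in the demo call are the placeholders from the example above):

    ```shell
    # build_gluster_opts: joins its arguments with commas into the string
    # expected by `mount -t glusterfs -o`. POSIX sh: "$*" joins positional
    # parameters with the first character of IFS.
    build_gluster_opts() {
      old_ifs=$IFS
      IFS=,
      printf '%s\n' "$*"
      IFS=$old_ifs
    }

    build_gluster_opts \
      backupvolfile-server=volfile_server2 \
      use-readdirp=no \
      volfile-max-fetch-attempts=2 \
      log-level=WARNING \
      log-file=/var/log/gluster.log
    ```

    The printed string can then be passed verbatim after `-o` in the mount command.
    
    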
    

    1.4 Automatic mounting

    Besides mounting with the mount command, volumes can be mounted automatically via /etc/fstab.

    Syntax: HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR glusterfs defaults,_netdev 0 0
    
    For example:
    192.168.99.251:/gv1 /mnt/glusterfs glusterfs defaults,_netdev 0 0
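    The fstab entry follows a fixed template, so it can be generated rather than typed by hand. A minimal sketch (the demo arguments reproduce the example entry above):

    ```shell
    # fstab_line: prints an /etc/fstab entry for a GlusterFS volume.
    # Usage: fstab_line SERVER VOLNAME MOUNTDIR
    fstab_line() {
      printf '%s:/%s %s glusterfs defaults,_netdev 0 0\n' "$1" "$2" "$3"
    }

    fstab_line 192.168.99.251 gv1 /mnt/glusterfs
    ```

    `_netdev` matters here: it delays the mount until the network is up, which a GlusterFS mount always needs.
    
    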
    

    II. Managing GlusterFS Volumes

    2.1 Stopping a volume

    $ gluster volume stop gv1
    

    2.2 Deleting a volume

    $ gluster volume delete gv1
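    A volume must be stopped before it can be deleted, and both commands prompt for confirmation. A sketch that only prints the required sequence (safe to run anywhere; execute the printed lines by hand on a real cluster):

    ```shell
    # print_delete_sequence: prints the stop-then-delete command sequence
    # for a volume. Printing instead of executing keeps this sketch
    # side-effect free.
    print_delete_sequence() {
      vol="$1"
      echo "gluster volume stop $vol"
      echo "gluster volume delete $vol"
    }

    print_delete_sequence gv1
    ```
    
    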
    

    2.3 Expanding a volume

    GlusterFS supports expanding volumes online.

    If the node being added is not yet part of the trusted pool, add it first with the following command.

    Syntax: gluster peer probe <SERVERNAME>

    Expansion syntax: gluster volume add-brick <VOLNAME> <NEW-BRICK>

    $ gluster peer probe gluster-node3  # add gluster-node3 to the pool
    peer probe: success.
    
    $ gluster volume add-brick test-volume gluster-node3:/storage/brick1 force  # expand the test-volume volume
    volume add-brick: success
    $ gluster volume info
     
    Volume Name: test-volume
    Type: Distribute
    Volume ID: 26a625bb-301c-4730-a382-0a838ee63935
    Status: Started
    Snapshot Count: 0
    Number of Bricks: 3
    Transport-type: tcp
    Bricks:
    Brick1: gluster-node1:/storage/brick1
    Brick2: gluster-node2:/storage/brick1
    Brick3: gluster-node3:/storage/brick1      # the newly added brick
    Options Reconfigured:
    transport.address-family: inet
    nfs.disable: on
    
    $ gluster volume rebalance test-volume start  # after adding a brick, rebalance the volume so that files are distributed onto it
    volume rebalance: test-volume: success: Rebalance on test-volume has been started successfully. Use rebalance status command to check status of the rebalance process.
    ID: ca58bd21-11a5-4018-bb2a-8f9079982394
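    The rebalance runs in the background, and `gluster volume rebalance test-volume status` reports its progress. A tiny helper that checks a captured status dump for completion (a sketch; the sample line only mirrors the general shape of the status table):

    ```shell
    # rebalance_done: succeeds when a captured
    # `gluster volume rebalance VOL status` dump (fed on stdin)
    # reports the "completed" state.
    rebalance_done() {
      grep -q 'completed'
    }

    # Example against a captured status line:
    printf 'gluster-node3  35  0Bytes  35  0  0  completed  0:00:00\n' |
      rebalance_done && echo "rebalance finished"
    ```

    On a real cluster this could be polled in a loop, sleeping between checks, before taking any follow-up action.
    
    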
    

    2.4 Shrinking a volume

    Shrinking a volume works much like expanding one: it is done brick by brick.

    Syntax: gluster volume remove-brick <VOLNAME> <BRICKNAME> start

    $ gluster volume remove-brick test-volume gluster-node3:/storage/brick1 start  # remove the brick
    volume remove-brick start: success
    ID: dd0004f0-b3e6-45d6-80ed-90506dc16159
    $ gluster volume remove-brick test-volume gluster-node3:/storage/brick1 status  # check the status of the remove-brick operation
                                        Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
                                   ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               gluster-node3               35        0Bytes            35             0             0            completed        0:00:00
    $ gluster volume remove-brick test-volume gluster-node3:/storage/brick1 commit  # once the status shows completed, commit the remove-brick operation
    volume remove-brick commit: success
    $ gluster volume info
     
    Volume Name: test-volume
    Type: Distribute
    Volume ID: 26a625bb-301c-4730-a382-0a838ee63935
    Status: Started
    Snapshot Count: 0
    Number of Bricks: 2
    Transport-type: tcp
    Bricks:
    Brick1: gluster-node1:/storage/brick1
    Brick2: gluster-node2:/storage/brick1
    Options Reconfigured:
    performance.client-io-threads: on
    transport.address-family: inet
    nfs.disable: on
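    The shrink is always the same three-phase start/status/commit sequence. A sketch that prints the sequence for any volume and brick (printing keeps it safe; on a real cluster, run each line and wait for "completed" before committing):

    ```shell
    # shrink_volume: prints the three remove-brick phases for one brick.
    # Usage: shrink_volume VOLNAME BRICKNAME
    shrink_volume() {
      vol="$1"
      brick="$2"
      for phase in start status commit; do
        echo "gluster volume remove-brick $vol $brick $phase"
      done
    }

    shrink_volume test-volume gluster-node3:/storage/brick1
    ```
    
    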
    

    2.5 Migrating a volume

    To replace a brick of a distributed volume, add a new brick and then remove the brick to be replaced. The removal triggers a rebalance that migrates the data from the removed brick to the newly added one.

    Note: the "replace-brick" command can only be used to replace bricks on replicated or distributed-replicated volumes.

    (1) Initial configuration of the test-volume volume
    $ gluster volume info
     
    Volume Name: test-volume
    Type: Distribute
    Volume ID: 26a625bb-301c-4730-a382-0a838ee63935
    Status: Started
    Snapshot Count: 0
    Number of Bricks: 2
    Transport-type: tcp
    Bricks:
    Brick1: gluster-node1:/storage/brick1
    Brick2: gluster-node2:/storage/brick1
    Options Reconfigured:
    performance.client-io-threads: on
    transport.address-family: inet
    nfs.disable: on
    (2) Files in the test-volume mount directory, and where they are actually stored on the bricks
    $ ll
    total 0
    -rw-r--r-- 1 root root 0 Aug 13 22:22 file1
    -rw-r--r-- 1 root root 0 Aug 13 22:22 file2
    -rw-r--r-- 1 root root 0 Aug 13 22:22 file3
    -rw-r--r-- 1 root root 0 Aug 13 22:22 file4
    -rw-r--r-- 1 root root 0 Aug 13 22:22 file5
    
    $ ll /storage/brick1/  # on gluster-node1
    total 0
    -rw-r--r-- 2 root root 0 Aug 13 22:22 file1
    -rw-r--r-- 2 root root 0 Aug 13 22:22 file2
    -rw-r--r-- 2 root root 0 Aug 13 22:22 file5
    
    $ ll /storage/brick1/  # on gluster-node2
    total 0
    -rw-r--r-- 2 root root 0 Aug 13  2018 file3
    -rw-r--r-- 2 root root 0 Aug 13  2018 file4
    (3) Add the new brick gluster-node3:/storage/brick1
    $ gluster volume add-brick test-volume gluster-node3:/storage/brick1/ force
    volume add-brick: success
    (4) Start the remove-brick operation
    $ gluster volume remove-brick test-volume gluster-node2:/storage/brick1 start
    volume remove-brick start: success
    ID: 2acdaebb-25a9-477c-807e-980a6086796e
    (5) Check that the remove-brick status shows completed
    $ gluster volume remove-brick test-volume gluster-node2:/storage/brick1 status
                                        Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
                                   ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               gluster-node2                2        0Bytes             2             0             0            completed        0:00:00
    (6) Commit the removal of the old brick
    $ gluster volume remove-brick test-volume gluster-node2:/storage/brick1 commit
    volume remove-brick commit: success
    (7) Latest configuration of test-volume
    $ gluster volume info
     
    Volume Name: test-volume
    Type: Distribute
    Volume ID: 26a625bb-301c-4730-a382-0a838ee63935
    Status: Started
    Snapshot Count: 0
    Number of Bricks: 2
    Transport-type: tcp
    Bricks:
    Brick1: gluster-node1:/storage/brick1
    Brick2: gluster-node3:/storage/brick1
    Options Reconfigured:
    performance.client-io-threads: on
    transport.address-family: inet
    nfs.disable: on
    (8) Check the files stored on the new brick: the files previously held on gluster-node2 have moved to gluster-node3
    [root@gluster-node3 ~]# ll /storage/brick1/
    total 0
    -rw-r--r-- 2 root root 0 Aug 13  2018 file3
    -rw-r--r-- 2 root root 0 Aug 13  2018 file4
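    For replicated (or distributed-replicated) volumes, the add-then-remove dance above is unnecessary: a brick can be swapped directly with "replace-brick ... commit force", after which self-heal copies the data onto the new brick. A sketch that only prints the command (the volume name rep-volume is a placeholder, not a volume from this article):

    ```shell
    # replace_brick_cmd: prints the one-step brick replacement command
    # for a replicated volume.
    # Usage: replace_brick_cmd VOLNAME OLD-BRICK NEW-BRICK
    replace_brick_cmd() {
      echo "gluster volume replace-brick $1 $2 $3 commit force"
    }

    replace_brick_cmd rep-volume \
      gluster-node2:/storage/brick1 gluster-node3:/storage/brick1
    ```
    
    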
    

    2.6 Quotas

    $ gluster volume quota test-volume enable    # enable quotas
    volume quota : success
    
    $ gluster volume quota test-volume disable    # disable quotas
    volume quota : success
    
    $ mount -t glusterfs 127.0.0.1:/test-volume /gv1  # mount the test-volume volume
    $ mkdir /gv1/quota  # create the directory to be limited
    $ gluster volume quota test-volume limit-usage /quota 10MB    # set a limit on the /gv1/quota directory
    
    $ gluster volume quota test-volume list  # list the directory limits
                      Path                   Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
    -------------------------------------------------------------------------------------------------------------------------------
    /quota                                    10.0MB     80%(8.0MB)   0Bytes  10.0MB              No                   No
    
    $ gluster volume set test-volume features.quota-timeout 5      # set the quota information timeout
    
    [root@gluster-node1 quota]# cp /gv1/20M.file .  # copying a 20M file into /gv1/quota already exceeds the limit, yet it succeeds; with such a small limit, enforcement lag can allow this
    [root@gluster-node1 quota]# cp /gv1/20M.file ./20Mb.file  # copying another 20M file now fails with a quota error
    cp: cannot create regular file ‘./20Mb.file’: Disk quota exceeded
    
    $ gluster volume quota test-volume remove /quota  # remove the quota from a directory
    volume quota : success
    

    Note:

    The quota feature limits the space of a specific directory under the mount point, such as /mnt/glusterfs/data; it does not limit the space of the volume itself.
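    The "80%(8.0MB)" soft limit in the quota list output is simply a percentage of the hard limit (80% by default). A sketch that reproduces that figure with integer-only sh arithmetic:

    ```shell
    # soft_limit_mb: computes the soft limit (in MB, one decimal place)
    # from a hard limit in MB and a percentage (default 80).
    soft_limit_mb() {
      hard_mb="$1"
      pct="${2:-80}"
      # work in tenths of an MB so plain integer arithmetic keeps
      # one decimal place
      tenths=$(( hard_mb * pct / 10 ))
      echo "$(( tenths / 10 )).$(( tenths % 10 ))MB"
    }

    soft_limit_mb 10   # the 10MB hard limit from the example above
    ```
    
    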

    2.7 Viewing I/O information

    The profile command provides an interface for viewing the I/O statistics of every brick in a volume.

    $ gluster volume profile test-volume start  # start profiling; I/O information can then be queried
    Starting volume profile on test-volume has been successful 
    $ gluster volume profile test-volume info  # show the I/O statistics of every brick
    Brick: gluster-node1:/storage/brick1
    ------------------------------------
    Cumulative Stats:
       Block Size:              32768b+              131072b+ 
     No. of Reads:                    0                     0 
    No. of Writes:                    2                   312 
     %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
     ---------   -----------   -----------   -----------   ------------        ----
          0.00       0.00 us       0.00 us       0.00 us            122      FORGET
          0.00       0.00 us       0.00 us       0.00 us            160     RELEASE
          0.00       0.00 us       0.00 us       0.00 us             68  RELEASEDIR
     
        Duration: 250518 seconds
       Data Read: 0 bytes
    Data Written: 40960000 bytes
     
    Interval 1 Stats:
     
        Duration: 27 seconds
       Data Read: 0 bytes
    Data Written: 0 bytes
     
    Brick: gluster-node3:/storage/brick1
    ------------------------------------
    Cumulative Stats:
       Block Size:               1024b+                2048b+                4096b+ 
     No. of Reads:                    0                     0                     0 
    No. of Writes:                    3                     1                    10 
     
       Block Size:               8192b+               16384b+               32768b+ 
     No. of Reads:                    0                     0                     1 
    No. of Writes:                  291                   516                    68 
     
       Block Size:              65536b+              131072b+ 
     No. of Reads:                    0                   156 
    No. of Writes:                    6                    20 
     %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
     ---------   -----------   -----------   -----------   ------------        ----
          0.00       0.00 us       0.00 us       0.00 us              3     RELEASE
          0.00       0.00 us       0.00 us       0.00 us             31  RELEASEDIR
     
        Duration: 76999 seconds
       Data Read: 20480000 bytes
    Data Written: 20480000 bytes
     
    Interval 1 Stats:
     
        Duration: 26 seconds
       Data Read: 0 bytes
    Data Written: 0 bytes
    $ gluster volume profile test-volume stop  # stop profiling when done
    Stopping volume profile on test-volume has been successful 
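    The profile info dump is plain text, so per-brick counters can be aggregated with standard tools. A sketch that sums the "Data Written" counters from a captured dump (the sample values are the two cumulative figures shown above):

    ```shell
    # total_written: sums the per-brick "Data Written" counters from a
    # captured `gluster volume profile VOL info` dump fed on stdin.
    total_written() {
      awk '/Data Written:/ { sum += $3 } END { print sum+0 " bytes" }'
    }

    # Against the cumulative figures above (40960000 + 20480000 bytes):
    printf 'Data Written: 40960000 bytes\nData Written: 20480000 bytes\n' |
      total_written
    ```
    
    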
    

    2.8 Top monitoring

    The top command shows brick performance metrics such as read, write, file open calls, file read calls, file write calls, directory open calls, and directory read calls.

    Each view accepts a list count; the default is 100.

    # gluster volume top VOLNAME open [brick BRICK-NAME] [list-cnt]    // view open fds
    
    $ gluster volume top test-volume open brick gluster-node1:/storage/brick1 list-cnt 3
    Brick: gluster-node1:/storage/brick1
    Current open fds: 0, Max open fds: 4, Max openfd time: 2018-08-13 11:53:24.099217
    Count        filename
    =======================
    1        /98.txt
    1        /95.txt
    1        /87.txt
    
    
    # gluster volume top VOLNAME read [brick BRICK-NAME] [list-cnt]    // view the files with the most read calls
    
    $ gluster volume top test-volume read brick gluster-node3:/storage/brick1 
    Brick: gluster-node3:/storage/brick1
    Count        filename
    =======================
    157        /20M.file
    
    
    # gluster volume top VOLNAME write [brick BRICK-NAME] [list-cnt]    // view the files with the most write calls
    
    $ gluster volume top test-volume write brick gluster-node3:/storage/brick1 
    Brick: gluster-node3:/storage/brick1
    Count        filename
    =======================
    915        /20M.file
    
    
    # gluster volume top VOLNAME opendir [brick BRICK-NAME] [list-cnt]
    
    # gluster volume top VOLNAME readdir [brick BRICK-NAME] [list-cnt]    // view the directories with the most calls
    
    $ gluster volume top test-volume opendir brick gluster-node3:/storage/brick1 
    Brick: gluster-node3:/storage/brick1
    Count        filename
    =======================
    7        /quota
    
    $ gluster volume top test-volume readdir brick gluster-node3:/storage/brick1 
    Brick: gluster-node3:/storage/brick1
    Count        filename
    =======================
    7        /quota
    
    
    # gluster volume top VOLNAME read-perf [bs blk-size count count] [brick BRICK-NAME] [list-cnt]    // view the read performance of each brick
    
    $ gluster volume top test-volume read-perf bs 256 count 1 brick gluster-node3:/storage/brick1 
    Brick: gluster-node3:/storage/brick1
    Throughput 42.67 MBps time 0.0000 secs
    MBps Filename                                        Time                      
    ==== ========                                        ====                      
       0 /20M.file                                       2018-08-14 03:32:24.7443  
    
    
    # gluster volume top VOLNAME write-perf [bs blk-size count count] [brick BRICK-NAME] [list-cnt]    // view the write performance of each brick
    
    $ gluster volume top test-volume write-perf bs 256 count 1 brick gluster-node1:/storage/brick1 
    Brick: gluster-node1:/storage/brick1
    Throughput 16.00 MBps time 0.0000 secs
    MBps Filename                                        Time                      
    ==== ========                                        ====                      
       0 /quota/20Mb.file                                2018-08-14 11:34:21.957635
       0 /quota/20M.file                                 2018-08-14 11:31:02.767068
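    Each top command samples one brick at a time, so checking a whole volume means repeating the command per brick. A sketch that prints one read-perf command per brick (printing keeps it safe to run anywhere; the brick names are the ones used throughout this article):

    ```shell
    # top_all_bricks: prints a `volume top ... read-perf` command for each
    # brick so every brick's read throughput can be sampled in turn.
    # Usage: top_all_bricks VOLNAME BRICK...
    top_all_bricks() {
      vol="$1"
      shift
      for brick in "$@"; do
        echo "gluster volume top $vol read-perf bs 256 count 1 brick $brick"
      done
    }

    top_all_bricks test-volume \
      gluster-node1:/storage/brick1 gluster-node3:/storage/brick1
    ```
    
    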
    
  • Original article: https://www.cnblogs.com/lvzhenjiang/p/14199516.html