  • Learning GlusterFS (Part 2)

    Environment preparation


     

    Three machines, each with dual NICs; each machine also needs an extra 10 GB disk added for testing.

    The machines run CentOS 6.6.
    [root@gluster-1-1 ~]# uname -rm
    2.6.32-504.el6.x86_64 x86_64
    [root@gluster-1-1 ~]# cat /etc/redhat-release
    CentOS release 6.6 (Final)
    [root@gluster-1-1 ~]#

    The three machines map to hostnames as follows:
    10.0.1.151 gluster-1-1
    10.0.1.152 gluster-1-2
    10.0.1.153 gluster-1-3

    This lab runs everything over eth0; in production the three gluster nodes should talk to each other over eth1.

    Add resolution entries to the hosts file by running the following command:
    cat >>/etc/hosts<<EOF
    10.0.1.151 gluster-1-1
    10.0.1.152 gluster-1-2
    10.0.1.153 gluster-1-3
    EOF

     

    Download the gluster packages:
    mkdir -p /tools
    cd /tools
    wget http://bits.gluster.org/pub/gluster/glusterfs/3.4.2/x86_64/glusterfs-3.4.2-1.el6.x86_64.rpm
    wget http://bits.gluster.org/pub/gluster/glusterfs/3.4.2/x86_64/glusterfs-api-3.4.2-1.el6.x86_64.rpm
    wget http://bits.gluster.org/pub/gluster/glusterfs/3.4.2/x86_64/glusterfs-cli-3.4.2-1.el6.x86_64.rpm
    wget http://bits.gluster.org/pub/gluster/glusterfs/3.4.2/x86_64/glusterfs-fuse-3.4.2-1.el6.x86_64.rpm
    wget http://bits.gluster.org/pub/gluster/glusterfs/3.4.2/x86_64/glusterfs-libs-3.4.2-1.el6.x86_64.rpm
    wget http://bits.gluster.org/pub/gluster/glusterfs/3.4.2/x86_64/glusterfs-server-3.4.2-1.el6.x86_64.rpm

    Check the downloaded packages:
    [root@gluster-1-1 tools]# ls -al
    total 1924
    drwxr-xr-x   3 root root   4096 Feb 11 01:20 .
    dr-xr-xr-x. 26 root root   4096 Feb 11 01:12 ..
    -rw-r--r--   1 root root 997688 Jan  3  2014 glusterfs-3.4.2-1.el6.x86_64.rpm
    -rw-r--r--   1 root root  56728 Jan  3  2014 glusterfs-api-3.4.2-1.el6.x86_64.rpm
    -rw-r--r--   1 root root  98904 Jan  3  2014 glusterfs-cli-3.4.2-1.el6.x86_64.rpm
    -rw-r--r--   1 root root  80980 Jan  3  2014 glusterfs-fuse-3.4.2-1.el6.x86_64.rpm
    -rw-r--r--   1 root root 217380 Jan  3  2014 glusterfs-libs-3.4.2-1.el6.x86_64.rpm
    -rw-r--r--   1 root root 492268 Jan  3  2014 glusterfs-server-3.4.2-1.el6.x86_64.rpm

      

    Install the gluster RPM packages and start the service


     

    Install the dependency packages first:
    yum install rpcbind libaio lvm2-devel -y

    Install the gluster packages:
    [root@gluster-1-2 tools]# rpm  -ivh  gluster*.rpm
    Preparing...                ########################################### [100%]
       1:glusterfs-libs         ########################################### [ 17%]
       2:glusterfs              ########################################### [ 33%]
       3:glusterfs-cli          ########################################### [ 50%]
       4:glusterfs-fuse         ########################################### [ 67%]
       5:glusterfs-server       ########################################### [ 83%]
    error reading information on service glusterfsd: No such file or directory
       6:glusterfs-api          ########################################### [100%]
    [root@gluster-1-2 tools]#

    Start the glusterd service:
    [root@gluster-1-1 tools]# /etc/init.d/glusterd start
    Starting glusterd:                                         [  OK  ]
    [root@gluster-1-1 tools]# /etc/init.d/glusterd status
    glusterd (pid  3139) is running...

    Install disk performance test and monitoring tools


     

    The following tools are also needed, to make it easy to test disk I/O and other metrics later:
    [root@gluster-1-1 tools]# ll
    total 3160
    -rw-r--r-- 1 root root 112206 Jul 23  2012 atop-1.27-3.x86_64.rpm
    -rw-r--r-- 1 root root 274992 Apr  7  2014 fio-2.1.7-1.el6.rf.x86_64.rpm
    -rw-r--r-- 1 root root 735060 Feb 10 19:27 iozone-3.394-1.el6.rf.x86_64.rpm
    -rw-r--r-- 1 root root  54380 Feb 11 03:07 iperf-2.0.5-11.el6.x86_64.rpm
    -rw-r--r-- 1 1001 ftp   47000 Mar  3  2002 postmark-1.51.c

    Install the tools. Note that fio depends on the libibverbs package (see the dependency error below), so that has to be installed first:
    [root@gluster-1-1 tools]# rpm -ivh atop*.rpm
    Preparing...                ########################################### [100%]
        package atop-1.27-3.x86_64 is already installed
    [root@gluster-1-1 tools]# rpm -ivh fio*.rpm
    warning: fio-2.1.7-1.el6.rf.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 6b8d79e6: NOKEY
    error: Failed dependencies:
        libibverbs.so.1()(64bit) is needed by fio-2.1.7-1.el6.rf.x86_64
    [root@gluster-1-1 tools]# yum install libibverbs -y

    Install the four tools: atop, fio, iozone and iperf:
    rpm -ivh atop*.rpm
    rpm -ivh fio*.rpm
    rpm -ivh iozone*.rpm
    rpm -ivh  iperf*.rpm

    Build and set up the postmark tool:
    [root@gluster-1-3 tools]# yum install gcc -y
    [root@gluster-1-3 tools]# gcc -o postmark postmark-1.51.c
    /tmp/cco7XMjr.o: In function `cli_show':
    postmark-1.51.c:(.text+0x26f9): warning: the `getwd' function is dangerous and should not be used.
    [root@gluster-1-3 tools]# ls
    atop-1.27-3.x86_64.rpm                glusterfs-fuse-3.4.2-1.el6.x86_64.rpm    no_use_rpm
    fio-2.1.7-1.el6.rf.x86_64.rpm         glusterfs-libs-3.4.2-1.el6.x86_64.rpm    postmark
    glusterfs-3.4.2-1.el6.x86_64.rpm      glusterfs-server-3.4.2-1.el6.x86_64.rpm  postmark-1.51.c
    glusterfs-api-3.4.2-1.el6.x86_64.rpm  iozone-3.394-1.el6.rf.x86_64.rpm
    glusterfs-cli-3.4.2-1.el6.x86_64.rpm  iperf-2.0.5-11.el6.x86_64.rpm
    [root@gluster-1-3 tools]#
     
    [root@gluster-1-3 tools]# cp postmark /usr/bin/
    [root@gluster-1-3 tools]#
    [root@gluster-1-1 tools]# postmark
    PostMark v1.51 : 8/14/01
    pm>quit
    [root@gluster-1-1 tools]#
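
    postmark can also be driven non-interactively from a small command file; a minimal sketch (the directory and the file/transaction counts are only illustrative values):

    mkdir -p /tmp/pmtest
    cat > pm.conf <<EOF
    set location /tmp/pmtest
    set number 500
    set transactions 2000
    run
    quit
    EOF
    postmark pm.conf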

    Configure the Gluster service and add peer nodes


     

    Configure the time service:
    [root@gluster-1-2 tools]# /etc/init.d/ntpd start
    Starting ntpd:                                             [  OK  ]
    [root@gluster-1-2 tools]# date
    Sat Feb 11 03:37:12 CST 2017
    [root@gluster-1-2 tools]# date
    Fri Feb 10 19:38:13 CST 2017
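
    The two date calls above show the clock jumping back once ntpd corrects it. For an immediate one-shot sync plus starting ntpd at boot, something along these lines is commonly used (the NTP server name is just an example):

    /etc/init.d/ntpd stop
    ntpdate pool.ntp.org      # one-shot sync; replace with your own NTP server
    /etc/init.d/ntpd start
    chkconfig ntpd on         # make ntpd start automatically at boot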

    Format the disk and mount it:
    # format the partition and mount it (run on every node)
    echo "mount /dev/sdb  /brick1" >>/etc/rc.local
    mkdir /brick1
    source /etc/rc.local
    df -h
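
    The mkfs step itself is not shown above; on each node the full sequence looks roughly like this (a sketch assuming the extra 10 GB disk is /dev/sdb and ext4 is used, which matches the df and mount output later):

    mkfs.ext4 /dev/sdb                               # format the test disk
    mkdir -p /brick1
    mount /dev/sdb /brick1                           # mount it now
    echo "mount /dev/sdb  /brick1" >>/etc/rc.local   # re-mount it on boot
    df -h /brick1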

      

    The environment is ready; on to the gluster operations. Add the gluster peer nodes (the first probe below fails because gluster2 is not a resolvable host name here):
    [root@gluster-1-1 tools]# gluster peer probe gluster2
    peer probe: failed: Probe returned with unknown errno 107
    [root@gluster-1-1 tools]# gluster peer probe gluster-1-2
    peer probe: success
    [root@gluster-1-1 tools]# gluster peer probe gluster-1-3
    peer probe: success
    [root@gluster-1-1 tools]#

    After adding the nodes, check the peer status:
    Check on machine 1:
     
    [root@gluster-1-1 tools]# gluster peer status
    Number of Peers: 2
     
    Hostname: gluster-1-2
    Port: 24007
    Uuid: 34fb23b8-dcb6-4009-abad-14f47a7a9481
    State: Peer in Cluster (Connected)
     
    Hostname: gluster-1-3
    Port: 24007
    Uuid: 2b3b3536-0ca8-44a6-908c-e32e4b80f28e
    State: Peer in Cluster (Connected)
    [root@gluster-1-1 tools]#
     
     
    Check on machine 2:
    [root@gluster-1-2 tools]# gluster peer status
    Number of Peers: 2
     
    Hostname: 10.0.1.151
    Port: 24007
    Uuid: 0c058823-7956-4761-b56f-84ba85f528b8
    State: Peer in Cluster (Connected)
     
    Hostname: gluster-1-3
    Uuid: 2b3b3536-0ca8-44a6-908c-e32e4b80f28e
    State: Peer in Cluster (Connected)
     
     
    Check on machine 3:
    [root@gluster-1-3 tools]# gluster peer status
    Number of Peers: 2
     
    Hostname: 10.0.1.151
    Port: 24007
    Uuid: 0c058823-7956-4761-b56f-84ba85f528b8
    State: Peer in Cluster (Connected)
     
    Hostname: gluster-1-2
    Uuid: 34fb23b8-dcb6-4009-abad-14f47a7a9481
    State: Peer in Cluster (Connected)
    [root@gluster-1-3 tools]#

    Create a distributed (hash) volume


     

    Create the volume.
    This is run on the gluster-1-3 machine; by default the volume created is a distributed (hash) volume.
    Gluster automatically creates the b1 directory under /brick1 on gluster-1-1.
    [root@gluster-1-3 tools]# gluster volume create testvol gluster-1-1:/brick1/b1
    volume create: testvol: success: please start the volume to access data
    [root@gluster-1-3 tools]# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sda3        35G  4.1G   30G  13% /
    tmpfs           996M     0  996M   0% /dev/shm
    /dev/sda1       380M   33M  327M  10% /boot
    /dev/sdb        9.8G   23M  9.2G   1% /brick1
    [root@gluster-1-3 tools]#
     
    [root@gluster-1-3 tools]# ls /brick1/
    lost+found
    [root@gluster-1-3 tools]#
    [root@gluster-1-1 tools]# ls /brick1/
    b1  lost+found
    [root@gluster-1-1 tools]#

    Look at the gluster volume help:
    [root@gluster-1-3 tools]# gluster volume help
    volume info [all|<VOLNAME>] - list information of all volumes
    volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>] [device vg] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> ... [force] - create a new volume of specified type with mentioned bricks
    volume delete <VOLNAME> - delete volume specified by <VOLNAME>
    volume start <VOLNAME> [force] - start volume specified by <VOLNAME>
    volume stop <VOLNAME> [force] - stop volume specified by <VOLNAME>
    volume add-brick <VOLNAME> [<stripe|replica> <COUNT>] <NEW-BRICK> ... [force] - add brick to volume <VOLNAME>
    volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... {start|stop|status|commit|force} - remove brick from volume <VOLNAME>
    volume rebalance <VOLNAME> [fix-layout] {start|stop|status} [force] - rebalance operations
    volume replace-brick <VOLNAME> <BRICK> <NEW-BRICK> {start [force]|pause|abort|status|commit [force]} - replace-brick operations
    volume set <VOLNAME> <KEY> <VALUE> - set options for volume <VOLNAME>
    volume help - display help for the volume command
    volume log rotate <VOLNAME> [BRICK] - rotate the log file for corresponding volume/brick
    volume sync <HOSTNAME> [all|<VOLNAME>] - sync the volume information from a peer
    volume reset <VOLNAME> [option] [force] - reset all the reconfigured options
    volume profile <VOLNAME> {start|info|stop} [nfs] - volume profile operations
    volume quota <VOLNAME> <enable|disable|limit-usage|list|remove> [path] [value] - quota translator specific operations
    volume top <VOLNAME> {[open|read|write|opendir|readdir [nfs]] |[read-perf|write-perf [nfs|{bs <size> count <count>}]]|[clear [nfs]]} [brick <brick>] [list-cnt <count>] - volume top operations
    volume status [all | <VOLNAME> [nfs|shd|<BRICK>]] [detail|clients|mem|inode|fd|callpool] - display status of all or specified volume(s)/brick
    volume heal <VOLNAME> [{full | info {healed | heal-failed | split-brain}}] - self-heal commands on volume specified by <VOLNAME>
    volume statedump <VOLNAME> [nfs] [all|mem|iobuf|callpool|priv|fd|inode|history]... - perform statedump on bricks
    volume list - list all volumes in cluster
    volume clear-locks <VOLNAME> <path> kind {blocked|granted|all}{inode [range]|entry [basename]|posix [range]} - Clear locks held on path
    [root@gluster-1-3 tools]#

      

    Start the volume and check its status.
    From now on data is stored under b1 in /brick1 on gluster-1-1:
    [root@gluster-1-3 tools]# gluster volume start testvol
    volume start: testvol: success
    [root@gluster-1-3 tools]# gluster volume info testvol
      
    Volume Name: testvol
    Type: Distribute
    Volume ID: ec60d25c-06f6-4174-aa42-b00709d20e19
    Status: Started
    Number of Bricks: 1
    Transport-type: tcp
    Bricks:
    Brick1: gluster-1-1:/brick1/b1
    [root@gluster-1-3 tools]#

      

    Mount the volume.
    If you have a client, mount it on the client; if not, you can practice the mount on one of the nodes:
    [root@gluster-1-3 tools]# mount -t glusterfs gluster-1-1:/testvol /mnt
    [root@gluster-1-3 tools]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda3              35G  4.1G   30G  13% /
    tmpfs                 996M     0  996M   0% /dev/shm
    /dev/sda1             380M   33M  327M  10% /boot
    /dev/sdb              9.8G   23M  9.2G   1% /brick1
    gluster-1-1:/testvol  9.8G   23M  9.2G   1% /mnt
    [root@gluster-1-3 tools]#
    Check with mount:
    [root@gluster-1-3 tools]# mount
    /dev/sda3 on / type ext4 (rw)
    proc on /proc type proc (rw)
    sysfs on /sys type sysfs (rw)
    devpts on /dev/pts type devpts (rw,gid=5,mode=620)
    tmpfs on /dev/shm type tmpfs (rw)
    /dev/sda1 on /boot type ext4 (rw)
    none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
    vmware-vmblock on /var/run/vmblock-fuse type fuse.vmware-vmblock (rw,nosuid,nodev,default_permissions,allow_other)
    /dev/sdb on /brick1 type ext4 (rw)
    gluster-1-1:/testvol on /mnt type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
    [root@gluster-1-3 tools]#
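
    To make the mount persistent across reboots, the usual approach is an fstab entry; a sketch using the same server and mount point as above (the _netdev option delays the mount until networking is up):

    # /etc/fstab
    gluster-1-1:/testvol  /mnt  glusterfs  defaults,_netdev  0 0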

    Create and delete files at the mount point; everything works normally:
    [root@gluster-1-3 tools]# cd /mnt/
    [root@gluster-1-3 mnt]# ls
    [root@gluster-1-3 mnt]# touch ab  c
    [root@gluster-1-3 mnt]# mkdir ddd
    [root@gluster-1-3 mnt]# rm -f c
    [root@gluster-1-3 mnt]# ls
    ab  ddd
    [root@gluster-1-3 mnt]#

    Detaching peer nodes. A node cannot detach itself:
    [root@gluster-1-3 mnt]# gluster peer detach gluster-1-2
    peer detach: success
    [root@gluster-1-3 mnt]# gluster peer detach gluster-1-3
    peer detach: failed: gluster-1-3 is localhost
    [root@gluster-1-3 mnt]# gluster peer status
    peer status: No peers present
    [root@gluster-1-3 mnt]#
    This has to be done from gluster-1-1:
    [root@gluster-1-1 tools]# gluster peer detach gluster-1-3
    peer detach: success
    [root@gluster-1-1 tools]# gluster peer status
    peer status: No peers present
    [root@gluster-1-1 tools]#

    The volume still exists:
    [root@gluster-1-1 tools]# gluster volume info
      
    Volume Name: testvol
    Type: Distribute
    Volume ID: ec60d25c-06f6-4174-aa42-b00709d20e19
    Status: Started
    Number of Bricks: 1
    Transport-type: tcp
    Bricks:
    Brick1: gluster-1-1:/brick1/b1
    [root@gluster-1-1 tools]#

    Add the nodes back:
    [root@gluster-1-1 tools]# gluster peer probe gluster-1-2
    peer probe: success
    [root@gluster-1-1 tools]# gluster peer probe gluster-1-3
    peer probe: success
    [root@gluster-1-1 tools]# gluster peer status
    Number of Peers: 2
     
    Hostname: gluster-1-2
    Port: 24007
    Uuid: 34fb23b8-dcb6-4009-abad-14f47a7a9481
    State: Peer in Cluster (Connected)
     
    Hostname: gluster-1-3
    Port: 24007
    Uuid: 2b3b3536-0ca8-44a6-908c-e32e4b80f28e
    State: Peer in Cluster (Connected)
    [root@gluster-1-1 tools]#

    Add bricks. The brick path must include a subdirectory, otherwise it fails like the first command below:
    [root@gluster-1-1 tools]# gluster volume add-brick testvol gluster-1-2:/brick1/
    volume add-brick: failed:
    [root@gluster-1-1 tools]# gluster volume add-brick testvol gluster-1-2:/brick1/b2
    volume add-brick: success
    [root@gluster-1-1 tools]# gluster volume add-brick testvol gluster-1-3:/brick1/b3
    volume add-brick: success

    The capacity of the client-side mount point changes accordingly:
    [root@gluster-1-3 mnt]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda3              35G  4.2G   30G  13% /
    tmpfs                 996M     0  996M   0% /dev/shm
    /dev/sda1             380M   33M  327M  10% /boot
    /dev/sdb              9.8G   23M  9.2G   1% /brick1
    gluster-1-1:/testvol  9.8G   23M  9.2G   1% /mnt
    [root@gluster-1-3 mnt]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda3              35G  4.2G   30G  13% /
    tmpfs                 996M     0  996M   0% /dev/shm
    /dev/sda1             380M   33M  327M  10% /boot
    /dev/sdb              9.8G   23M  9.2G   1% /brick1
    gluster-1-1:/testvol   30G   68M   28G   1% /mnt
    [root@gluster-1-3 mnt]#

    Create a directory on the client and it appears on every node:
    [root@gluster-1-3 mnt]# cd /mnt/
    [root@gluster-1-3 mnt]# ls
    ab  ddd
    [root@gluster-1-3 mnt]# mkdir tools
    [root@gluster-1-3 mnt]# ls
    ab  ddd  tools
    [root@gluster-1-3 mnt]# ls /brick1/b3/
    ddd  tools
    [root@gluster-1-3 mnt]#
     
    [root@gluster-1-2 tools]# ls /brick1/b2/
    ddd  tools
    [root@gluster-1-2 tools]#
     
    [root@gluster-1-1 tools]# ls /brick1/b1/
    ab  ddd  tools
    [root@gluster-1-1 tools]#

    Copy files into the client mount point; you can see the files hash-distributed across the bricks of the three nodes:
    [root@gluster-1-3 mnt]# cp -rf /tools/* /mnt/tools/
    [root@gluster-1-3 mnt]# ll /mnt/tools/
    total 3143
    -rw-r--r-- 1 root root 112206 Feb 10 20:43 atop-1.27-3.x86_64.rpm
    -rw-r--r-- 1 root root 274992 Feb 10 20:43 fio-2.1.7-1.el6.rf.x86_64.rpm
    -rw-r--r-- 1 root root 997688 Feb 10 20:43 glusterfs-3.4.2-1.el6.x86_64.rpm
    -rw-r--r-- 1 root root  56728 Feb 10 20:43 glusterfs-api-3.4.2-1.el6.x86_64.rpm
    -rw-r--r-- 1 root root  98904 Feb 10 20:43 glusterfs-cli-3.4.2-1.el6.x86_64.rpm
    -rw-r--r-- 1 root root  80980 Feb 10 20:43 glusterfs-fuse-3.4.2-1.el6.x86_64.rpm
    -rw-r--r-- 1 root root 217380 Feb 10 20:43 glusterfs-libs-3.4.2-1.el6.x86_64.rpm
    -rw-r--r-- 1 root root 492268 Feb 10 20:43 glusterfs-server-3.4.2-1.el6.x86_64.rpm
    -rw-r--r-- 1 root root 735060 Feb 10 20:43 iozone-3.394-1.el6.rf.x86_64.rpm
    -rw-r--r-- 1 root root  54380 Feb 10 20:43 iperf-2.0.5-11.el6.x86_64.rpm
    drwxr-xr-x 2 root root  12288 Feb 10 20:43 no_use_rpm
    -rwxr-xr-x 1 root root  34789 Feb 10 20:43 postmark
    -rw-r--r-- 1 root root  47000 Feb 10 20:43 postmark-1.51.c
     
    The files are hash-distributed across the three nodes' bricks:
    [root@gluster-1-3 mnt]# ll /brick1/b3/tools/
    total 108
    -rw-r--r-- 2 root root 54380 Feb 10 20:43 iperf-2.0.5-11.el6.x86_64.rpm
    drwxr-xr-x 2 root root  4096 Feb 10 20:43 no_use_rpm
    -rw-r--r-- 2 root root 47000 Feb 10 20:43 postmark-1.51.c
    [root@gluster-1-3 mnt]#
     
     
    [root@gluster-1-2 tools]# ll /brick1/b2/tools/
    total 2788
    -rw-r--r-- 2 root root 274992 Feb 10 20:43 fio-2.1.7-1.el6.rf.x86_64.rpm
    -rw-r--r-- 2 root root 997688 Feb 10 20:43 glusterfs-3.4.2-1.el6.x86_64.rpm
    -rw-r--r-- 2 root root  80980 Feb 10 20:43 glusterfs-fuse-3.4.2-1.el6.x86_64.rpm
    -rw-r--r-- 2 root root 217380 Feb 10 20:43 glusterfs-libs-3.4.2-1.el6.x86_64.rpm
    -rw-r--r-- 2 root root 492268 Feb 10 20:43 glusterfs-server-3.4.2-1.el6.x86_64.rpm
    -rw-r--r-- 2 root root 735060 Feb 10 20:43 iozone-3.394-1.el6.rf.x86_64.rpm
    drwxr-xr-x 2 root root   4096 Feb 10 20:43 no_use_rpm
    -rwxr-xr-x 2 root root  34789 Feb 10 20:43 postmark
    [root@gluster-1-2 tools]#
     
     
    [root@gluster-1-1 mnt]# ll /brick1/b1/tools/
    total 272
    -rw-r--r-- 2 root root 112206 Feb 10 20:43 atop-1.27-3.x86_64.rpm
    -rw-r--r-- 2 root root  56728 Feb 10 20:43 glusterfs-api-3.4.2-1.el6.x86_64.rpm
    -rw-r--r-- 2 root root  98904 Feb 10 20:43 glusterfs-cli-3.4.2-1.el6.x86_64.rpm
    drwxr-xr-x 2 root root   4096 Feb 10 20:43 no_use_rpm
    [root@gluster-1-1 mnt]#

    Removing a brick causes some data loss, because the removed brick holds data:
    [root@gluster-1-3 mnt]# ll /brick1/b3/tools/
    total 108
    -rw-r--r-- 2 root root 54380 Feb 10 20:43 iperf-2.0.5-11.el6.x86_64.rpm
    drwxr-xr-x 2 root root  4096 Feb 10 20:43 no_use_rpm
    -rw-r--r-- 2 root root 47000 Feb 10 20:43 postmark-1.51.c
     
     
    [root@gluster-1-1 mnt]# gluster volume remove-brick testvol gluster-1-3:/brick1/b3
    Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
    volume remove-brick commit force: success
    [root@gluster-1-1 mnt]#
     
     
    [root@gluster-1-3 mnt]# ls /mnt/tools/
    atop-1.27-3.x86_64.rpm                 glusterfs-libs-3.4.2-1.el6.x86_64.rpm
    fio-2.1.7-1.el6.rf.x86_64.rpm          glusterfs-server-3.4.2-1.el6.x86_64.rpm
    glusterfs-3.4.2-1.el6.x86_64.rpm       iozone-3.394-1.el6.rf.x86_64.rpm
    glusterfs-api-3.4.2-1.el6.x86_64.rpm   no_use_rpm
    glusterfs-cli-3.4.2-1.el6.x86_64.rpm   postmark
    glusterfs-fuse-3.4.2-1.el6.x86_64.rpm
    [root@gluster-1-3 mnt]#
     
    Check the volume info; the mount point capacity has dropped as well:
    [root@gluster-1-1 mnt]# gluster volume info
      
    Volume Name: testvol
    Type: Distribute
    Volume ID: ec60d25c-06f6-4174-aa42-b00709d20e19
    Status: Started
    Number of Bricks: 2
    Transport-type: tcp
    Bricks:
    Brick1: gluster-1-1:/brick1/b1
    Brick2: gluster-1-2:/brick1/b2
    [root@gluster-1-1 mnt]#
     
    [root@gluster-1-3 mnt]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda3              35G  4.2G   30G  13% /
    tmpfs                 996M     0  996M   0% /dev/shm
    /dev/sda1             380M   33M  327M  10% /boot
    /dev/sdb              9.8G   29M  9.2G   1% /brick1
    gluster-1-1:/testvol   20G   49M   19G   1% /mnt
     
    [root@gluster-1-3 mnt]# gluster volume info
      
    Volume Name: testvol
    Type: Distribute
    Volume ID: ec60d25c-06f6-4174-aa42-b00709d20e19
    Status: Started
    Number of Bricks: 2
    Transport-type: tcp
    Bricks:
    Brick1: gluster-1-1:/brick1/b1
    Brick2: gluster-1-2:/brick1/b2
    [root@gluster-1-3 mnt]#
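
    To avoid losing data this way, the remove-brick syntax in the help output above also supports a staged workflow that migrates files off the brick before removing it; a sketch with the same volume and brick names:

    gluster volume remove-brick testvol gluster-1-3:/brick1/b3 start    # start migrating data off the brick
    gluster volume remove-brick testvol gluster-1-3:/brick1/b3 status   # wait until it reports completed
    gluster volume remove-brick testvol gluster-1-3:/brick1/b3 commit   # then drop the brick from the volume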

    Adding the brick back fails, because the old brick directory still carries extended attributes with the old volume's information:
    [root@gluster-1-1 mnt]# gluster volume add-brick  testvol gluster-1-3:/brick1/b3
    volume add-brick: failed:
    [root@gluster-1-1 mnt]#
     
    The directory has to be deleted before it can be added back:
    [root@gluster-1-3 mnt]# rm -rf /brick1/b3/
    [root@gluster-1-3 mnt]# ls /brick1/
    lost+found
    [root@gluster-1-3 mnt]#
     
    Now it is added back:
    [root@gluster-1-1 mnt]# gluster volume add-brick  testvol gluster-1-3:/brick1/b3
    volume add-brick: success
    [root@gluster-1-1 mnt]# gluster volume info
      
    Volume Name: testvol
    Type: Distribute
    Volume ID: ec60d25c-06f6-4174-aa42-b00709d20e19
    Status: Started
    Number of Bricks: 3
    Transport-type: tcp
    Bricks:
    Brick1: gluster-1-1:/brick1/b1
    Brick2: gluster-1-2:/brick1/b2
    Brick3: gluster-1-3:/brick1/b3
    [root@gluster-1-1 mnt]#
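
    Instead of deleting the directory, the extended attributes mentioned above can be stripped off the brick directly; a sketch using the attribute names glusterfs 3.x stores on a brick root (getfattr/setfattr come from the attr package):

    getfattr -d -m . -e hex /brick1/b3                   # inspect the gluster xattrs on the brick
    setfattr -x trusted.glusterfs.volume-id /brick1/b3   # remove the old volume id
    setfattr -x trusted.gfid /brick1/b3                  # remove the gfid marker
    rm -rf /brick1/b3/.glusterfs                         # drop gluster's internal metadata directory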

    A rebalance redistributes files according to the hash layout.

    Run a rebalance and the newly added brick receives its share of files and directories.
    In production, rebalance is best run when the servers are idle:
    [root@gluster-1-3 mnt]# ls /brick1/b3/
    [root@gluster-1-3 mnt]# gluster volume rebalance testvol start
    volume rebalance: testvol: success: Starting rebalance on volume testvol has been successful.
    ID: 649f717b-406f-4a32-a385-cc614cda5fbd
    [root@gluster-1-3 mnt]# ls /brick1/b3/
    ddd  tools
    [root@gluster-1-3 mnt]# ls /brick1/b3/tools/
    atop-1.27-3.x86_64.rpm                glusterfs-cli-3.4.2-1.el6.x86_64.rpm
    glusterfs-api-3.4.2-1.el6.x86_64.rpm  no_use_rpm
    [root@gluster-1-3 mnt]#

    The capacity is back as well:
    [root@gluster-1-3 mnt]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda3              35G  4.2G   30G  13% /
    tmpfs                 996M     0  996M   0% /dev/shm
    /dev/sda1             380M   33M  327M  10% /boot
    /dev/sdb              9.8G   24M  9.2G   1% /brick1
    gluster-1-1:/testvol   20G   54M   19G   1% /mnt
    [root@gluster-1-3 mnt]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda3              35G  4.2G   30G  13% /
    tmpfs                 996M     0  996M   0% /dev/shm
    /dev/sda1             380M   33M  327M  10% /boot
    /dev/sdb              9.8G   24M  9.2G   1% /brick1
    gluster-1-1:/testvol   30G   77M   28G   1% /mnt
    [root@gluster-1-3 mnt]#
    You can also check the rebalance status.
    [root@gluster-1-3 mnt]# gluster volume rebalance testvol status
                                        Node Rebalanced-files          size       scanned      failures       skipped         status run time in secs
                                   ---------      -----------   -----------   -----------   -----------   -----------   ------------   --------------
                                   localhost                0        0Bytes             1             0             0      completed             0.00
                                   localhost                0        0Bytes             1             0             0      completed             0.00
                                   localhost                0        0Bytes             1             0             0      completed             0.00
                                 gluster-1-2                0        0Bytes            20             0             0      completed             0.00
    volume rebalance: testvol: success:
    [root@gluster-1-3 mnt]#

    Unmount the mount point and stop the volume.
    This operation is dangerous, but even after the volume is deleted the data underneath the bricks is still there:
    [root@gluster-1-3 mnt]# umount /mnt -lf
    [root@gluster-1-3 mnt]# gluster volume stop testvol
    Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
    volume stop: testvol: success
    [root@gluster-1-3 mnt]# gluster volume delete testvol
    Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
    volume delete: testvol: success
    [root@gluster-1-3 mnt]# ls /brick1/b3/tools/
    atop-1.27-3.x86_64.rpm                glusterfs-cli-3.4.2-1.el6.x86_64.rpm
    glusterfs-api-3.4.2-1.el6.x86_64.rpm  no_use_rpm
    [root@gluster-1-3 mnt]# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sda3        35G  4.2G   30G  13% /
    tmpfs           996M     0  996M   0% /dev/shm
    /dev/sda1       380M   33M  327M  10% /boot
    /dev/sdb        9.8G   24M  9.2G   1% /brick1
    [root@gluster-1-3 mnt]#
    The volume info is blank now:
    [root@gluster-1-3 mnt]# gluster volume  info
    No volumes present
    [root@gluster-1-3 mnt]#

    To wipe the data, log in to each node and delete the data under its brick:
    [root@gluster-1-3 mnt]# rm -rf /brick1/b3/
    [root@gluster-1-3 mnt]#
     
    [root@gluster-1-2 tools]# rm -rf /brick1/b2/
    [root@gluster-1-2 tools]#
     
    [root@gluster-1-1 mnt]# rm -rf /brick1/b1/
    [root@gluster-1-1 mnt]#

      

     Create a replicated volume


     The commands are as follows; replica 2 is what makes it a replicated volume:
    [root@gluster-1-1 mnt]# gluster volume create testvol replica 2 gluster-1-1:/brick1/b1  gluster-1-2:/brick1/b2
    volume create: testvol: success: please start the volume to access data
    [root@gluster-1-1 mnt]# gluster volume info
      
    Volume Name: testvol
    Type: Replicate
    Volume ID: 1fa96ed0-a062-4ccf-9e4f-07cb9d84296f
    Status: Created
    Number of Bricks: 1 x 2 = 2
    Transport-type: tcp
    Bricks:
    Brick1: gluster-1-1:/brick1/b1
    Brick2: gluster-1-2:/brick1/b2
    [root@gluster-1-1 mnt]#
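
    With more bricks the same replica syntax gives a distributed-replicated volume: bricks are grouped into replica pairs in the order they are listed. A sketch with a hypothetical volume name and brick paths (each pair should sit on different hosts):

    gluster volume create dhtrepvol replica 2 \
      gluster-1-1:/brick1/r1 gluster-1-2:/brick1/r1 \
      gluster-1-2:/brick1/r2 gluster-1-3:/brick1/r2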

    Start the volume and check its status:
    [root@gluster-1-1 mnt]# gluster volume start testvol
    volume start: testvol: success
    [root@gluster-1-1 mnt]# gluster volume info
      
    Volume Name: testvol
    Type: Replicate
    Volume ID: 1fa96ed0-a062-4ccf-9e4f-07cb9d84296f
    Status: Started
    Number of Bricks: 1 x 2 = 2
    Transport-type: tcp
    Bricks:
    Brick1: gluster-1-1:/brick1/b1
    Brick2: gluster-1-2:/brick1/b2
    [root@gluster-1-1 mnt]#

     

    After mounting, create some files.
    The capacity is only a single node's worth, because this is a replicated volume:
    [root@gluster-1-1 mnt]# mount -t glusterfs gluster-1-3:/testvol /mnt
    [root@gluster-1-1 mnt]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda3              35G  4.2G   30G  13% /
    tmpfs                 996M     0  996M   0% /dev/shm
    /dev/sda1             380M   33M  327M  10% /boot
    /dev/sdb              9.8G   23M  9.2G   1% /brick1
    gluster-1-3:/testvol  9.8G   23M  9.2G   1% /mnt
    [root@gluster-1-1 mnt]# touch /mnt/{a,b,c}
    [root@gluster-1-1 mnt]# ls /mnt/
    a  b  c
    [root@gluster-1-1 mnt]# ls /brick1/b1/
    a  b  c
    [root@gluster-1-1 mnt]#
     
    Node 2 looks the same; that is what a replicated volume does:
    [root@gluster-1-2 tools]# ls /brick1/b2/
    a  b  c
    [root@gluster-1-2 tools]#

      

    Simulate a fault: volume metadata accidentally deleted


     

    Delete the volume metadata, which lives under the following path:
    [root@gluster-1-2 tools]# ll /var/lib/glusterd/vols/
    total 4
    drwxr-xr-x 4 root root 4096 Feb 10 21:19 testvol
    [root@gluster-1-2 tools]# ll /var/lib/glusterd/vols/testvol/
    total 40
    drwxr-xr-x 2 root root 4096 Feb 10 21:19 bricks
    -rw------- 1 root root   16 Feb 10 21:19 cksum
    -rw------- 1 root root  328 Feb 10 21:19 info
    -rw------- 1 root root   34 Feb 10 21:19 node_state.info
    -rw------- 1 root root   12 Feb 10 21:19 rbstate
    drwxr-xr-x 2 root root 4096 Feb 10 21:19 run
    -rw------- 1 root root 1302 Feb 10 21:18 testvol-fuse.vol
    -rw------- 1 root root 1332 Feb 10 21:18 testvol.gluster-1-1.brick1-b1.vol
    -rw------- 1 root root 1332 Feb 10 21:18 testvol.gluster-1-2.brick1-b2.vol
    -rw------- 1 root root 1530 Feb 10 21:18 trusted-testvol-fuse.vol
    [root@gluster-1-2 tools]# rm -f  /var/lib/glusterd/vols/testvol/
    rm: cannot remove `/var/lib/glusterd/vols/testvol/': Is a directory
    [root@gluster-1-2 tools]# rm -rf  /var/lib/glusterd/vols/testvol/
     
     
    [root@gluster-1-2 tools]# ll /var/lib/glusterd/vols/testvol/
    ls: cannot access /var/lib/glusterd/vols/testvol/: No such file or directory

    Restore the volume metadata

    Sync the volume information back over; the copy on the gluster-1-1 node is still intact.
    The all in the command below means sync information for all volumes; you could also name just testvol.

    This kind of volume metadata should be backed up regularly.
    [root@gluster-1-2 tools]# gluster volume sync gluster-1-1  all
    Sync volume may make data inaccessible while the sync is in progress. Do you want to continue? (y/n) y
    volume sync: success
    [root@gluster-1-2 tools]# ll /var/lib/glusterd/vols/testvol/
    total 36
    drwxr-xr-x 2 root root 4096 Feb 10 21:45 bricks
    -rw------- 1 root root   16 Feb 10 21:45 cksum
    -rw------- 1 root root  328 Feb 10 21:45 info
    -rw------- 1 root root   34 Feb 10 21:45 node_state.info
    -rw------- 1 root root   12 Feb 10 21:45 rbstate
    -rw------- 1 root root 1302 Feb 10 21:45 testvol-fuse.vol
    -rw------- 1 root root 1332 Feb 10 21:45 testvol.gluster-1-1.brick1-b1.vol
    -rw------- 1 root root 1332 Feb 10 21:45 testvol.gluster-1-2.brick1-b2.vol
    -rw------- 1 root root 1530 Feb 10 21:45 trusted-testvol-fuse.vol
    [root@gluster-1-2 tools]#
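
    A simple periodic backup of the glusterd state directory covers this; a sketch (the backup path and schedule are just examples):

    # daily tarball of the glusterd configuration, including all volume definitions
    tar czf /root/glusterd-backup-$(date +%F).tar.gz /var/lib/glusterd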

     

    Gluster: allow only trusted client IPs


     

    Some option settings.
    Allow access only from 192.168.1.*:
    [root@gluster-1-1 mnt]# gluster volume set testvol auth.allow 192.168.1.*
    volume set: success
    [root@gluster-1-1 mnt]# gluster volume info
      
    Volume Name: testvol
    Type: Replicate
    Volume ID: 1fa96ed0-a062-4ccf-9e4f-07cb9d84296f
    Status: Started
    Number of Bricks: 1 x 2 = 2
    Transport-type: tcp
    Bricks:
    Brick1: gluster-1-1:/brick1/b1
    Brick2: gluster-1-2:/brick1/b2
    Options Reconfigured:
    auth.allow: 192.168.1.*
    [root@gluster-1-1 mnt]#
     
    Mounting now fails:
    [root@gluster-1-3 mnt]# mount -t glusterfs gluster-1-1:/testvol /mnt
    Mount failed. Please check the log file for more details.

    After changing it to all, the mount still failed in this test; it only worked after changing it to the 10.x network, which is odd. The clients here connect from the 10.0.1.x network, so the allow list has to cover the addresses the clients actually come from:
    [root@gluster-1-1 mnt]# gluster volume info
      
    Volume Name: testvol
    Type: Replicate
    Volume ID: 1fa96ed0-a062-4ccf-9e4f-07cb9d84296f
    Status: Started
    Number of Bricks: 1 x 2 = 2
    Transport-type: tcp
    Bricks:
    Brick1: gluster-1-1:/brick1/b1
    Brick2: gluster-1-2:/brick1/b2
    Options Reconfigured:
    auth.allow: all
    [root@gluster-1-1 mnt]# gluster volume set testvol auth.allow 10.0.*
    volume set: success
    [root@gluster-1-1 mnt]# mount -t glusterfs gluster-1-3:/testvol /mnt
    [root@gluster-1-1 mnt]#
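
    To drop the restriction again, the volume reset subcommand listed in the help output can clear a reconfigured option; a sketch:

    gluster volume reset testvol auth.allow    # put auth.allow back to its default (allow everyone)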

      

    About gluster's built-in NFS


     

    The system starts an NFS process:
    [root@gluster-1-1 mnt]# ps -ef | grep nfs
    root      31399      1  0 21:19 ?        00:00:00 /usr/sbin/glusterfs -s localhost --volfile-id glusternfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S /var/run/f3d908eb4b03aac9575816a.socket
    root      31612   2556  0 22:00 pts/0    00:00:00 grep --colour=auto nfs
    [root@gluster-1-1 mnt]#
     
    Turn off the built-in NFS:
    [root@gluster-1-1 mnt]# gluster volume set testvol nfs.disable on
    volume set: success
    [root@gluster-1-1 mnt]# ps -ef | grep nfs
    root      31636   2556  0 22:01 pts/0    00:00:00 grep --colour=auto nfs
    [root@gluster-1-1 mnt]#
     
    The volume info also shows it is disabled:
    [root@gluster-1-1 mnt]# gluster volume info
      
    Volume Name: testvol
    Type: Replicate
    Volume ID: 1fa96ed0-a062-4ccf-9e4f-07cb9d84296f
    Status: Started
    Number of Bricks: 1 x 2 = 2
    Transport-type: tcp
    Bricks:
    Brick1: gluster-1-1:/brick1/b1
    Brick2: gluster-1-2:/brick1/b2
    Options Reconfigured:
    nfs.disable: on
    auth.allow: 10.0.*
    [root@gluster-1-1 mnt]#
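
    For reference, while the built-in NFS server was still running the volume could also have been mounted from an ordinary NFS client; a sketch (gluster's NFS server speaks NFSv3, so the version has to be forced):

    mount -t nfs -o vers=3,nolock gluster-1-1:/testvol /mnt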

      

    Review of glusterfs operations


     

     System validation tests
     Storage configuration tests
     Network configuration tests
     Volume configuration tests
     System performance tests

     

     Create another volume:
    [root@gluster-1-1 mnt]# gluster volume create dhtvol gluster-1-1:/brick1/dht
    volume create: dhtvol: success: please start the volume to access data
    [root@gluster-1-1 mnt]# gluster volume start dhtvol
    volume start: dhtvol: success
    [root@gluster-1-1 mnt]# gluster volume info dhtvol
      
    Volume Name: dhtvol
    Type: Distribute
    Volume ID: 7884976b-9470-44df-af40-a3ee8a7bd021
    Status: Started
    Number of Bricks: 1
    Transport-type: tcp
    Bricks:
    Brick1: gluster-1-1:/brick1/dht
    [root@gluster-1-1 mnt]#

    Mount it on the client:
    [root@gluster-1-3 mnt]# mount -t glusterfs gluster-1-1:/dhtvol /mnt
    [root@gluster-1-3 mnt]#
     
    [root@gluster-1-3 mnt]# df -h
    Filesystem           Size  Used Avail Use% Mounted on
    /dev/sda3             35G  4.2G   30G  13% /
    tmpfs                996M     0  996M   0% /dev/shm
    /dev/sda1            380M   33M  327M  10% /boot
    /dev/sdb             9.8G   23M  9.2G   1% /brick1
    gluster-1-1:/dhtvol  9.8G   23M  9.2G   1% /mnt
    [root@gluster-1-3 mnt]#

    Expand the volume by adding bricks:
    [root@gluster-1-3 mnt]# gluster volume add-brick dhtvol gluster-1-2:/brick1/dht 
    volume add-brick: success
    [root@gluster-1-3 mnt]# df -h
    Filesystem           Size  Used Avail Use% Mounted on
    /dev/sda3             35G  4.2G   30G  13% /
    tmpfs                996M     0  996M   0% /dev/shm
    /dev/sda1            380M   33M  327M  10% /boot
    /dev/sdb             9.8G   23M  9.2G   1% /brick1
    gluster-1-1:/dhtvol  9.8G   23M  9.2G   1% /mnt
    [root@gluster-1-3 mnt]# df -h
    Filesystem           Size  Used Avail Use% Mounted on
    /dev/sda3             35G  4.2G   30G  13% /
    tmpfs                996M     0  996M   0% /dev/shm
    /dev/sda1            380M   33M  327M  10% /boot
    /dev/sdb             9.8G   23M  9.2G   1% /brick1
    gluster-1-1:/dhtvol   20G   45M   19G   1% /mnt
    [root@gluster-1-3 mnt]#
    Keep adding:
    [root@gluster-1-3 mnt]# gluster volume add-brick dhtvol gluster-1-3:/brick1/dht 
    volume add-brick: success
    [root@gluster-1-3 mnt]# df -h
    Filesystem           Size  Used Avail Use% Mounted on
    /dev/sda3             35G  4.2G   30G  13% /
    tmpfs                996M     0  996M   0% /dev/shm
    /dev/sda1            380M   33M  327M  10% /boot
    /dev/sdb             9.8G   23M  9.2G   1% /brick1
    gluster-1-1:/dhtvol   20G   45M   19G   1% /mnt
    [root@gluster-1-3 mnt]# df -h
    Filesystem           Size  Used Avail Use% Mounted on
    /dev/sda3             35G  4.2G   30G  13% /
    tmpfs                996M     0  996M   0% /dev/shm
    /dev/sda1            380M   33M  327M  10% /boot
    /dev/sdb             9.8G   23M  9.2G   1% /brick1
    gluster-1-1:/dhtvol   30G   68M   28G   1% /mnt
    [root@gluster-1-3 mnt]#

    Check the volume info:
    [root@gluster-1-3 mnt]# gluster volume info dhtvol
      
    Volume Name: dhtvol
    Type: Distribute
    Volume ID: 7884976b-9470-44df-af40-a3ee8a7bd021
    Status: Started
    Number of Bricks: 3
    Transport-type: tcp
    Bricks:
    Brick1: gluster-1-1:/brick1/dht
    Brick2: gluster-1-2:/brick1/dht
    Brick3: gluster-1-3:/brick1/dht
    [root@gluster-1-3 mnt]#

      

    glusterfs performance testing tools


     

    Start the iperf server on one node:
    [root@gluster-1-1 mnt]# iperf -s
    ------------------------------------------------------------
    Server listening on TCP port 5001
    TCP window size: 85.3 KByte (default)
    ------------------------------------------------------------

    Connect from a client to the server to test network throughput:
    [root@gluster-1-2 tools]# iperf -c gluster-1-1
    ------------------------------------------------------------
    Client connecting to gluster-1-1, TCP port 5001
    TCP window size: 42.5 KByte (default)
    ------------------------------------------------------------
    [  3] local 10.0.1.152 port 59526 connected with 10.0.1.151 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  3]  0.0-10.0 sec  6.63 GBytes  5.69 Gbits/sec
    [root@gluster-1-2 tools]#

    The server side shows the result as well.
    Because this is a virtual machine environment, the figures are inflated:
    [root@gluster-1-1 mnt]# iperf -s
    ------------------------------------------------------------
    Server listening on TCP port 5001
    TCP window size: 85.3 KByte (default)
    ------------------------------------------------------------
    [  4] local 10.0.1.151 port 5001 connected with 10.0.1.152 port 59526
    [ ID] Interval       Transfer     Bandwidth
    [  4]  0.0-10.0 sec  6.63 GBytes  5.69 Gbits/sec

    If you feel the load is not high enough, the client can send with several streams in parallel using the -P option.
    Client results:
    [root@gluster-1-2 tools]# iperf -c gluster-1-1 -P 4
    ------------------------------------------------------------
    Client connecting to gluster-1-1, TCP port 5001
    TCP window size: 19.3 KByte (default)
    ------------------------------------------------------------
    [  5] local 10.0.1.152 port 59529 connected with 10.0.1.151 port 5001
    [  4] local 10.0.1.152 port 59528 connected with 10.0.1.151 port 5001
    [  3] local 10.0.1.152 port 59527 connected with 10.0.1.151 port 5001
    [  6] local 10.0.1.152 port 59530 connected with 10.0.1.151 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  6]  0.0-10.0 sec  1.31 GBytes  1.13 Gbits/sec
    [  5]  0.0-10.0 sec   933 MBytes   783 Mbits/sec
    [  4]  0.0-10.0 sec  1.21 GBytes  1.04 Gbits/sec
    [  3]  0.0-10.0 sec  2.64 GBytes  2.27 Gbits/sec
    [SUM]  0.0-10.0 sec  6.07 GBytes  5.22 Gbits/sec
    [root@gluster-1-2 tools]#

    Server results:
    [  5] local 10.0.1.151 port 5001 connected with 10.0.1.152 port 59527
    [  4] local 10.0.1.151 port 5001 connected with 10.0.1.152 port 59528
    [  6] local 10.0.1.151 port 5001 connected with 10.0.1.152 port 59529
    [  7] local 10.0.1.151 port 5001 connected with 10.0.1.152 port 59530
    [  5]  0.0-10.0 sec  2.64 GBytes  2.27 Gbits/sec
    [  4]  0.0-10.0 sec  1.21 GBytes  1.03 Gbits/sec
    [  6]  0.0-10.0 sec   933 MBytes   782 Mbits/sec
    [  7]  0.0-10.0 sec  1.31 GBytes  1.13 Gbits/sec
    [SUM]  0.0-10.0 sec  6.07 GBytes  5.21 Gbits/sec

    What the gluster volume status command shows:
    [root@gluster-1-2 tools]# gluster volume status
    Status of volume: testvol
    Gluster process                     Port    Online  Pid
    ------------------------------------------------------------------------------
    Brick gluster-1-1:/brick1/b1                49153   Y   31387
    Brick gluster-1-2:/brick1/b2                N/A N   N/A
    Self-heal Daemon on localhost               N/A Y   31262
    Self-heal Daemon on 10.0.1.151              N/A Y   31403
    Self-heal Daemon on gluster-1-3             N/A Y   31378
      
    There are no active volume tasks
    Status of volume: dhtvol
    Gluster process                     Port    Online  Pid
    ------------------------------------------------------------------------------
    Brick gluster-1-1:/brick1/dht               49154   Y   31668
    Brick gluster-1-2:/brick1/dht               49154   Y   31351
    Brick gluster-1-3:/brick1/dht               49154   Y   31667
    NFS Server on localhost                 2049    Y   31373
    NFS Server on gluster-1-3               2049    Y   31677
    NFS Server on 10.0.1.151                2049    Y   31705
      
    There are no active volume tasks

    You can also check the status of a single volume by name:
    [root@gluster-1-2 tools]# gluster volume  status dhtvol
    Status of volume: dhtvol
    Gluster process                     Port    Online  Pid
    ------------------------------------------------------------------------------
    Brick gluster-1-1:/brick1/dht               49154   Y   31668
    Brick gluster-1-2:/brick1/dht               49154   Y   31351
    Brick gluster-1-3:/brick1/dht               49154   Y   31667
    NFS Server on localhost                 2049    Y   31373
    NFS Server on 10.0.1.151                2049    Y   31705
    NFS Server on gluster-1-3               2049    Y   31677
      
    There are no active volume tasks
    [root@gluster-1-2 tools]#
     
    [root@gluster-1-2 tools]# netstat -antpu | grep 10.0.1.151
    tcp        0      0 10.0.1.152:24007            10.0.1.151:1020             ESTABLISHED 3139/glusterd      
    tcp        0      0 10.0.1.152:1023             10.0.1.151:24007            ESTABLISHED 3139/glusterd      
    tcp        0      0 10.0.1.152:49153            10.0.1.151:1016             ESTABLISHED 31245/glusterfsd   
    tcp        0      0 10.0.1.152:1007             10.0.1.151:49154            ESTABLISHED 31373/glusterfs    
    tcp        0      0 10.0.1.152:1019             10.0.1.151:49153            ESTABLISHED 31262/glusterfs    
    tcp        0      0 10.0.1.152:49153            10.0.1.151:1015             ESTABLISHED 31245/glusterfsd   
    tcp        0      0 10.0.1.152:49154            10.0.1.151:1005             ESTABLISHED 31351/glusterfsd   
    [root@gluster-1-2 tools]#

      

    Performance tests

    Basic performance
     dd if=/dev/zero of=dd.dat bs=1M count=1k
     dd if=dd.dat of=/dev/null bs=1M count=1k
     

    Bandwidth test
     iozone -r 1m -s 128m -t 4 -i 0 -i 1
     IOPS test
     Fio
     OPS test
     Postmark

    Testing with dd

    Test client write and read speed:
    [root@gluster-1-3 mnt]# dd if=/dev/zero of=dd.dat bs=1M count=1k
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 36.2987 s, 29.6 MB/s
     
    [root@gluster-1-3 mnt]# dd if=dd.dat of=/dev/null bs=1M count=1k
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 2.86975 s, 374 MB/s
    [root@gluster-1-3 mnt]# ls
    a  b  c  dd.dat
    [root@gluster-1-3 mnt]# ls -lh
    total 1.0G
    -rw-r--r-- 1 root root    0 Feb 10 21:26 a
    -rw-r--r-- 1 root root    0 Feb 10 21:26 b
    -rw-r--r-- 1 root root    0 Feb 10 21:26 c
    -rw-r--r-- 1 root root 1.0G Feb 10 23:03 dd.dat
    [root@gluster-1-3 mnt]#

    Test again; speeds are not very stable on a VM:
    [root@gluster-1-3 mnt]# dd if=/dev/zero of=/mnt/dd.dat1 bs=1M count=1k
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 17.6375 s, 60.9 MB/s
    [root@gluster-1-3 mnt]# ls
    aaa  dd.dat1
    [root@gluster-1-3 mnt]# dd if=/mnt/dd.dat1 of=/dev/null bs=1M count=1k
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 3.12335 s, 344 MB/s
    [root@gluster-1-3 mnt]#
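
    Note that without direct I/O the read-back figure largely measures the page cache rather than the volume; a hedged variant that bypasses the cache (the file name is just an example):

    dd if=/dev/zero of=/mnt/dd.direct bs=1M count=1k oflag=direct   # write, bypassing the page cache
    dd if=/mnt/dd.direct of=/dev/null bs=1M count=1k iflag=direct   # read back, bypassing the page cache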

    Bandwidth test with the iozone tool:
    Parameter explanation
    -i #  Test to run (0=write/rewrite, 1=read/re-read, 2=random-read/write
                     3=Read-backwards, 4=Re-write-record, 5=stride-read, 6=fwrite/re-fwrite
                     7=fread/Re-fread, 8=random_mix, 9=pwrite/Re-pwrite, 10=pread/Re-pread
                     11=pwritev/Re-pwritev, 12=preadv/Re-preadv)
    2 is random read/write
     
     -r #  record size in Kb
                  or -r #k .. size in Kb
                  or -r #m .. size in Mb
                  or -r #g .. size in Gb
    -s #  file size in Kb
                  or -s #k .. size in Kb
                  or -s #m .. size in Mb
                  or -s #g .. size in Gb
    -t #  Number of threads or processes to use in throughput test

    Bandwidth test
    iozone -r 1m -s 128m -t 4 -i 0 -i 1

    Test results:
    [root@gluster-1-1 mnt]# iozone -r 1m -s 128m -t 4 -i 0 -i 1
        Iozone: Performance Test of File I/O
                Version $Revision: 3.394 $
            Compiled for 64 bit mode.
            Build: linux
     
        Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
                     Al Slater, Scott Rhine, Mike Wisner, Ken Goss
                     Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                     Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
                     Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
                     Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
                     Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer.
                     Ben England.
     
        Run began: Fri Feb 10 23:13:31 2017
     
        Record Size 1024 KB
        File size set to 131072 KB
        Command line used: iozone -r 1m -s 128m -t 4 -i 0 -i 1
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
        Throughput test with 4 processes
        Each process writes a 131072 Kbyte file in 1024 Kbyte records
     
        Children see throughput for  4 initial writers  =  232821.23 KB/sec
        Parent sees throughput for  4 initial writers   =   73057.30 KB/sec
        Min throughput per process          =   25442.28 KB/sec
        Max throughput per process          =   98973.52 KB/sec
        Avg throughput per process          =   58205.31 KB/sec
        Min xfer                    =   68608.00 KB
     
        Children see throughput for  4 rewriters    =  329695.09 KB/sec
        Parent sees throughput for  4 rewriters     =  104885.55 KB/sec
        Min throughput per process          =   65601.57 KB/sec
        Max throughput per process          =  106135.20 KB/sec
        Avg throughput per process          =   82423.77 KB/sec
        Min xfer                    =   82944.00 KB
     
        Children see throughput for  4 readers      = 11668653.50 KB/sec
        Parent sees throughput for  4 readers       = 11282677.82 KB/sec
        Min throughput per process          = 2576575.00 KB/sec
        Max throughput per process          = 3230440.75 KB/sec
        Avg throughput per process          = 2917163.38 KB/sec
        Min xfer                    =  103424.00 KB
     
        Children see throughput for 4 re-readers    = 9956129.00 KB/sec
        Parent sees throughput for 4 re-readers     = 9397866.98 KB/sec
        Min throughput per process          = 1464638.00 KB/sec
        Max throughput per process          = 3228923.00 KB/sec
        Avg throughput per process          = 2489032.25 KB/sec
        Min xfer                    =   48128.00 KB
     
     
     
    iozone test complete.
    [root@gluster-1-1 mnt]#

     The FIO tool

    Explanation of the config file:
    [global]
    ioengine=libaio
    direct=1
    thread=1
    norandommap=1
    randrepeat=0
    filename=/mnt/fio.dat  the test file is written here
    [rr]
    stonewall
    group_reporting
    bs=4k  block size
    rw=randread  random read
    numjobs=8  8 jobs
    iodepth=4
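
    The same job can also be expressed directly on the fio command line instead of a job file; a rough equivalent (fio 2.x option names; size= is included because fio refuses to lay out the file without it, as the first run further below shows):

    fio --name=rr --ioengine=libaio --direct=1 --thread --norandommap \
        --randrepeat=0 --filename=/mnt/fio.dat --size=100m \
        --bs=4k --rw=randread --numjobs=8 --iodepth=4 --group_reporting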

     Write a config file with the contents above:
    [root@gluster-1-3 ~]# vim fio.conf
    [root@gluster-1-3 ~]# cat fio.conf
    [global]
    ioengine=libaio
    direct=1
    thread=1
    norandommap=1
    randrepeat=0
    filename=/mnt/fio.dat
    [rr]
    stonewall
    group_reporting
    bs=4k
    rw=randread
    numjobs=8
    iodepth=4
    [root@gluster-1-3 ~]#
     
    This run reports an error:
    [root@gluster-1-3 ~]# fio fio.conf
    rr: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=4
    ...
    fio-2.1.7
    Starting 8 threads
    rr: you need to specify size=
    fio: pid=0, err=22/file:filesetup.c:793, func=total_file_size, error=Invalid argument
    rr: you need to specify size=
    fio: pid=0, err=22/file:filesetup.c:793, func=total_file_size, error=Invalid argument
    rr: you need to specify size=
    fio: pid=0, err=22/file:filesetup.c:793, func=total_file_size, error=Invalid argument
    rr: you need to specify size=
    fio: pid=0, err=22/file:filesetup.c:793, func=total_file_size, error=Invalid argument
    rr: you need to specify size=
    fio: pid=0, err=22/file:filesetup.c:793, func=total_file_size, error=Invalid argument
    rr: you need to specify size=
    fio: pid=0, err=22/file:filesetup.c:793, func=total_file_size, error=Invalid argument
    rr: you need to specify size=
    fio: pid=0, err=22/file:filesetup.c:793, func=total_file_size, error=Invalid argument
    rr: you need to specify size=
    fio: pid=0, err=22/file:filesetup.c:793, func=total_file_size, error=Invalid argument
     
     
    Run status group 0 (all jobs):
    [root@gluster-1-3 ~]#

    Change the config again, as follows.

    Add size=100m:
    [root@gluster-1-3 ~]# cat fio.conf
    [global]
    ioengine=libaio
    direct=1
    thread=1
    norandommap=1
    randrepeat=0
    filename=/mnt/fio.dat
    size=100m
    [rr]
    stonewall
    group_reporting
    bs=4k
    rw=randread
    numjobs=8
    iodepth=4
    [root@gluster-1-3 ~]#
    Run it.
    [root@gluster-1-3 ~]# fio fio.conf
    rr: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=4
    ...
    fio-2.1.7
    Starting 8 threads
    rr: Laying out IO file(s) (1 file(s) / 100MB)
    Jobs: 8 (f=8): [rrrrrrrr] [16.2% done] [1878KB/0KB/0KB /s] [469/0/0 iops] [eta 06m:03s]
     
    Jobs: 8 (f=8): [rrrrrrrr] [16.6% done] [1828KB/0KB/0KB /s] [457/0/0 iops] [eta 06m:02s]
    Jobs: 8 (f=8): [rrrrrrrr] [16.9% done] [2125KB/0KB/0KB /s] [531/0/0 iops] [eta 06m:00s]
    Jobs: 8 (f=8): [rrrrrrrr] [17.6% done] [2097KB/0KB/0KB /s] [524/0/0 iops] [eta 05m:56s]
    Jobs: 8 (f=8): [rrrrrrrr] [18.1% done] [1784KB/0KB/0KB /s] [446/0/0 iops] [eta 05m:53s]
     
    Jobs: 8 (f=8): [rrrrrrrr] [18.3% done] [1850KB/0KB/0KB /s] [462/0/0 iops] [eta 05m:52s]
     
    Jobs: 8 (f=8): [rrrrrrrr] [18.6% done] [2031KB/0KB/0KB /s] [507/0/0 iops] [eta 05m:51s]
     
     
     
    Stop it with Ctrl+C first:
     
    ^Cbs: 8 (f=8): [rrrrrrrr] [44.7% done] [2041KB/0KB/0KB /s] [510/0/0 iops] [eta 04m:00s]
    fio: terminating on signal 2
     
    rr: (groupid=0, jobs=8): err= 0: pid=31821: Fri Feb 10 23:29:04 2017
      read : io=375372KB, bw=1937.8KB/s, iops=484, runt=193716msec
        slat (usec): min=334, max=266210, avg=16504.09, stdev=19351.15
        clat (usec): min=1, max=324872, avg=49528.02, stdev=33714.50
         lat (msec): min=2, max=352, avg=66.03, stdev=39.05
        clat percentiles (msec):
         |  1.00th=[    4],  5.00th=[   10], 10.00th=[   15], 20.00th=[   22],
         | 30.00th=[   29], 40.00th=[   36], 50.00th=[   43], 60.00th=[   51],
         | 70.00th=[   61], 80.00th=[   74], 90.00th=[   95], 95.00th=[  115],
         | 99.00th=[  159], 99.50th=[  180], 99.90th=[  225], 99.95th=[  245],
         | 99.99th=[  281]
        bw (KB  /s): min=  106, max=  488, per=12.51%, avg=242.37, stdev=54.98
        lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%
        lat (msec) : 2=0.04%, 4=0.96%, 10=4.39%, 20=12.12%, 50=42.19%
        lat (msec) : 100=32.04%, 250=8.21%, 500=0.04%
      cpu          : usr=0.01%, sys=0.35%, ctx=93892, majf=0, minf=48
      IO depths    : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         issued    : total=r=93843/w=0/d=0, short=r=0/w=0/d=0
         latency   : target=0, window=0, percentile=100.00%, depth=4
     
    Run status group 0 (all jobs):
       READ: io=375372KB, aggrb=1937KB/s, minb=1937KB/s, maxb=1937KB/s, mint=193716msec, maxt=193716msec
    [root@gluster-1-3 ~]#

    Read test:
    [root@gluster-1-3 ~]# cat fio.conf
    [global]
    ioengine=libaio
    direct=1
    thread=1
    norandommap=1
    randrepeat=0
    filename=/mnt/fio.dat
    size=100m
    [rr]
    stonewall
    group_reporting
    bs=4k
    rw=read
    numjobs=8
    iodepth=4
    [root@gluster-1-3 ~]#
     
    This configuration does sequential reads:
    [root@gluster-1-3 ~]# fio fio.conf
    rr: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=4
    ...
    fio-2.1.7
    Starting 8 threads
    fio: pid=31840, err=9/file:ioengines.c:395, func=io commit, error=Bad file descriptor
    Jobs: 7 (f=7): [RRXRRRRR] [97.0% done] [19752KB/0KB/0KB /s] [4938/0/0 iops] [eta 00m:01s]
    rr: (groupid=0, jobs=8): err= 9 (file:ioengines.c:395, func=io commit, error=Bad file descriptor): pid=31838: Fri Feb 10 23:30:55 2017
      read : io=716800KB, bw=23025KB/s, iops=5756, runt= 31131msec
        slat (usec): min=372, max=63173, avg=1198.24, stdev=1656.16
        clat (usec): min=2, max=71395, avg=3602.91, stdev=3045.37
         lat (usec): min=517, max=73189, avg=4801.87, stdev=3615.83
        clat percentiles (usec):
         |  1.00th=[ 2008],  5.00th=[ 2352], 10.00th=[ 2448], 20.00th=[ 2608],
         | 30.00th=[ 2704], 40.00th=[ 2800], 50.00th=[ 2896], 60.00th=[ 2992],
         | 70.00th=[ 3120], 80.00th=[ 3344], 90.00th=[ 3984], 95.00th=[ 9792],
         | 99.00th=[18304], 99.50th=[23424], 99.90th=[32640], 99.95th=[37632],
         | 99.99th=[49408]
        bw (KB  /s): min= 1840, max= 6016, per=14.42%, avg=3321.34, stdev=374.60
        lat (usec) : 4=0.01%, 750=0.01%, 1000=0.01%
        lat (msec) : 2=0.97%, 4=89.11%, 10=5.00%, 20=4.18%, 50=0.73%
        lat (msec) : 100=0.01%
      cpu          : usr=0.24%, sys=2.07%, ctx=179303, majf=0, minf=38
      IO depths    : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         issued    : total=r=179201/w=0/d=0, short=r=0/w=0/d=0
         latency   : target=0, window=0, percentile=100.00%, depth=4
     
    Run status group 0 (all jobs):
       READ: io=716800KB, aggrb=23025KB/s, minb=23025KB/s, maxb=23025KB/s, mint=31131msec, maxt=31131msec
    [root@gluster-1-3 ~]#

    Now change rw to write and run it again. Note that these figures are inflated because this is a virtual machine; a typical SATA disk manages around 80 MB/s and a SAS disk around 150 MB/s.

    [root@gluster-1-3 ~]# cat fio.conf
    [global]
    ioengine=libaio
    direct=1
    thread=1
    norandommap=1
    randrepeat=0
    filename=/mnt/fio.dat
    size=100m
    [rr]
    stonewall
    group_reporting
    bs=4k
    rw=write
    numjobs=8
    iodepth=4
    [root@gluster-1-3 ~]# fio fio.conf
    rr: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=4
    ...
    fio-2.1.7
    Starting 8 threads
    ^Cbs: 5 (f=5): [W_W_W_WW] [8.5% done] [0KB/27744KB/0KB /s] [0/6936/0 iops] [eta 01m:48s]
    fio: terminating on signal 2
     
    rr: (groupid=0, jobs=8): err= 0: pid=31851: Fri Feb 10 23:32:42 2017
      write: io=401704KB, bw=49036KB/s, iops=12259, runt=  8192msec
        slat (usec): min=42, max=2211.3K, avg=555.55, stdev=22865.93
        clat (usec): min=1, max=3136.8K, avg=1671.44, stdev=43867.76
         lat (usec): min=47, max=3138.5K, avg=2227.89, stdev=51372.35
        clat percentiles (usec):
         |  1.00th=[  139],  5.00th=[  141], 10.00th=[  143], 20.00th=[  149],
         | 30.00th=[  165], 40.00th=[  183], 50.00th=[  199], 60.00th=[  223],
         | 70.00th=[  258], 80.00th=[  338], 90.00th=[  442], 95.00th=[  556],
         | 99.00th=[17280], 99.50th=[28032], 99.90th=[60160], 99.95th=[528384],
         | 99.99th=[2211840]
        bw (KB  /s): min=    2, max=61688, per=23.77%, avg=11655.21, stdev=20767.75
        lat (usec) : 2=0.01%, 4=0.01%, 50=0.01%, 100=0.01%, 250=68.15%
        lat (usec) : 500=25.00%, 750=3.28%, 1000=0.20%
        lat (msec) : 2=0.16%, 4=0.49%, 10=1.02%, 20=0.88%, 50=0.67%
        lat (msec) : 100=0.08%, 750=0.01%, 2000=0.03%, >=2000=0.01%
      cpu          : usr=0.55%, sys=4.00%, ctx=301450, majf=0, minf=7
      IO depths    : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         issued    : total=r=0/w=100426/d=0, short=r=0/w=0/d=0
         latency   : target=0, window=0, percentile=100.00%, depth=4
     
    Run status group 0 (all jobs):
      WRITE: io=401704KB, aggrb=49036KB/s, minb=49036KB/s, maxb=49036KB/s, mint=8192msec, maxt=8192msec
    [root@gluster-1-3 ~]#

      

    The postmark tool

    Parameter explanation

    set size 1k   each file is 1 KB
    set number 10000   write 10,000 files
    set location /mnt/
    set subdirectories 100    spread the writes across 100 subdirectories
    set read 1k
    set write 1k
    run 60  run for 60 seconds

    Run it for 200 seconds this time and let it exit on its own:

    [root@gluster-1-3 ~]# cat postmark.conf
    set size 1k
    set number 10000
    set location /mnt/
    set subdirectories 100
    set read 1k
    set write 1k
    run 200
    quit
    [root@gluster-1-3 ~]#
     
    In a new terminal you can watch the directories being created: 100 subdirectories, each holding 100 files
    [root@gluster-1-3 mnt]# ls
    aaa      s11  s17  s22  s28  s33  s39  s44  s5   s55  s60  s66  s71  s77  s82  s88  s93  s99
    dd.dat1  s12  s18  s23  s29  s34  s4   s45  s50  s56  s61  s67  s72  s78  s83  s89  s94
    fio.dat  s13  s19  s24  s3   s35  s40  s46  s51  s57  s62  s68  s73  s79  s84  s9   s95
    s0       s14  s2   s25  s30  s36  s41  s47  s52  s58  s63  s69  s74  s8   s85  s90  s96
    s1       s15  s20  s26  s31  s37  s42  s48  s53  s59  s64  s7   s75  s80  s86  s91  s97
    s10      s16  s21  s27  s32  s38  s43  s49  s54  s6   s65  s70  s76  s81  s87  s92  s98
    [root@gluster-1-3 mnt]#

    You can view the built-in help:

    [root@gluster-1-3 ~]# postmark
    PostMark v1.51 : 8/14/01
    pm>help
    set size - Sets low and high bounds of files
    set number - Sets number of simultaneous files
    set seed - Sets seed for random number generator
    set transactions - Sets number of transactions
    set location - Sets location of working files
    set subdirectories - Sets number of subdirectories
    set read - Sets read block size
    set write - Sets write block size
    set buffering - Sets usage of buffered I/O
    set bias read - Sets the chance of choosing read over append
    set bias create - Sets the chance of choosing create over delete
    set report - Choose verbose or terse report format
    run - Runs one iteration of benchmark
    load - Read configuration file
    show - Displays current configuration
    help - Prints out available commands
    quit - Exit program
    pm>

    You can also type the commands here interactively, one line at a time, and then run them:

    pm>set size 1k
    pm>set number 10000
    pm>set location /mnt
    pm>set subdirectories 100
    pm>run 200
    Creating subdirectories...Done
    Creating files...

    When the run finishes, it writes a report file in the current directory named after the value given to run; here the file is 200.

    [root@gluster-1-3 ~]# cat 200
    Time:
        35 seconds total
        1 seconds of transactions (500 per second)
     
    Files:
        10258 created (293 per second)
            Creation alone: 10000 files (400 per second)
            Mixed with transactions: 258 files (258 per second)
        244 read (244 per second)
        0 appended (0 per second)
        10258 deleted (293 per second)
            Deletion alone: 10016 files (1112 per second)
            Mixed with transactions: 242 files (242 per second)
     
    Data:
        0.24 kilobytes read (0.01 kilobytes per second)
        10.02 kilobytes written (0.29 kilobytes per second)
    Time:
        33 seconds total
        1 seconds of transactions (500 per second)
     
    Files:
        10258 created (310 per second)   # files created per second
            Creation alone: 10000 files (416 per second)
            Mixed with transactions: 258 files (258 per second)
        244 read (244 per second)
        0 appended (0 per second)
        10258 deleted (310 per second)   # files deleted per second
            Deletion alone: 10016 files (1252 per second)
            Mixed with transactions: 242 files (242 per second)
     
    Data:
        0.24 kilobytes read (0.01 kilobytes per second)
        10.02 kilobytes written (0.30 kilobytes per second)
    [root@gluster-1-3 ~]#

      

    System monitoring


     

     System load
     Storage space
     GlusterFS status
     System logs

     

    The atop tool

    It shows the system's current metrics in real time:

    PRC |  sys    4m19s |  user  48.84s  |  #proc    140 |  #tslpu     0  |  #zombie    0 |  #exit      0  |
    CPU |  sys   1% |  user  0%  |  irq   0% |  idle    398%  |  wait      1% |  curscal   ?%  |
    cpu |  sys   0% |  user  0%  |  irq   0% |  idle     99%  |  cpu000 w  0% |  curscal   ?%  |
    cpu |  sys   0% |  user  0%  |  irq   0% |  idle     99%  |  cpu003 w  0% |  curscal   ?%  |
    cpu |  sys   0% |  user  0%  |  irq   0% |  idle    100%  |  cpu002 w  0% |  curscal   ?%  |
    cpu |  sys   0% |  user  0%  |  irq   0% |  idle    100%  |  cpu001 w  0% |  curscal   ?%  |
    CPL |  avg1    0.00 |  avg5    0.00  |  avg15   0.00 |  csw  6281744  |  intr 4782073 |  numcpu     4  |
    MEM |  tot     1.9G |  free    1.1G  |  cache 376.0M |  dirty   0.0M  |  buff   96.3M |  slab  118.6M  |
    SWP |  tot     4.0G |  free    4.0G  |               |                |  vmcom 618.2M |  vmlim   5.0G  |
    PAG |  scan  119296 |  steal 116494  |  stall      0 |                |  swin       0 |  swout      0  |
    DSK |           sda |  busy  1%  |  read   42778 |  write  27793  |  MBw/s   0.03 |  avio 2.56 ms  |
    DSK |           sdb |  busy  0%  |  read     556 |  write   2746  |  MBw/s   0.02 |  avio 6.05 ms  |
    NET |  transport    |  tcpi 1136211  |  tcpo 1329001 |  udpi     657  |  udpo     848 |  tcpao   1507  |
    NET |  network      |  ipi  1141696  |  ipo  1329910 |  ipfrw      0  |  deliv 1142e3 |  icmpo     23  |
    NET |  eth0  0% |  pcki 2815911  |  pcko 3522252 |  si 1127 Kbps  |  so 1275 Kbps |  erro       0  |
    NET |  eth1  0% |  pcki    1000  |  pcko      14 |  si    0 Kbps  |  so    0 Kbps |  erro       0  |
    NET |  lo      ---- |  pcki  258711  |  pcko  258711 |  si   21 Kbps  |  so   21 Kbps |  erro       0  |
                                 *** system and process activity since boot ***
      PID   TID RUID      THR  SYSCPU  USRCPU  VGROW  RGROW  RDDSK  WRDSK ST EXC S CPUNR  CPU CMD       1/16
    31779     - root        7   2m07s  15.85s 296.0M 46900K  1016K   400K N-   - S     0   1% glusterfs
       19     - root        1  46.08s   0.00s     0K     0K     0K     0K N-   - S     0   0% events/0
     1390     - root        2  19.36s  20.70s 172.9M  7740K  3952K    24K N-   - S     3   0% vmtoolsd
    31667     - root        8  16.96s   0.81s 471.9M 26964K    12K 41104K N-   - S     0   0% glusterfsd
     1569     - root        3   4.92s   8.32s 199.2M  5232K   896K    16K N-   - S     1   0% ManagementAgen
     1863     - root        1   5.37s   0.63s 18372K   720K     0K     4K N-   - S     0   0% irqbalance
       36     - root        1   4.87s   0.00s     0K     0K     0K     0K N-   - S     2   0% kblockd/2
       21     - root        1   3.47s   0.00s     0K     0K     0K     0K N-   - S     2   0% events/2
      330     - root        1   3.41s   0.00s     0K     0K     0K 38820K N-   - S     3   0% jbd2/sda3-8

    You can run postmark first and then watch it with atop here, comparing the two.
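    For an apples-to-apples comparison, atop can also record samples to a raw file while postmark runs and replay them afterwards; a sketch, with an illustrative file path:

    atop -w /tmp/postmark.atop 5 &     # sample every 5 seconds into a raw file
    postmark postmark.conf             # run the benchmark (in another terminal if preferred)
    atop -r /tmp/postmark.atop         # replay the recording later; t/T steps forward/backward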

     

    GlusterFS configuration and logs


     


    3.2.x versions
    Configuration: /etc/glusterd/
    Logs: /var/log/glusterfs
    3.4 versions
    Configuration: /var/lib/glusterd
    Logs: /var/log/glusterfs
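    As a quick orientation, the 3.4 configuration directory holds the volume definitions and peer information; a sketch only, since exact contents vary by version and deployment:

    ls /var/lib/glusterd/                 # typically contains vols/, peers/, glusterd.info, ...
    cat /var/lib/glusterd/glusterd.info   # this node's UUID
    ls /var/lib/glusterd/vols/            # one subdirectory per volume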

     Viewing the logs

    [root@gluster-1-3 ~]# cd /var/log/glusterfs/
    [root@gluster-1-3 glusterfs]# ls
    bricks  cli.log  etc-glusterfs-glusterd.vol.log  glustershd.log  mnt.log  nfs.log  testvol-rebalance.log
    [root@gluster-1-3 glusterfs]# cd bricks/
    [root@gluster-1-3 bricks]# ls
    brick1-b3.log  brick1-dht.log
     
    The capital letter I below marks an info-level log entry
    [root@gluster-1-3 bricks]# tail -10 brick1-dht.log
     
    +------------------------------------------------------------------------------+
    [2017-02-10 14:44:32.137312] I [server-handshake.c:567:server_setvolume] 0-dhtvol-server: accepted client from gluster-1-3-31676-2017/02/10-14:44:32:63735-dhtvol-client-2-0 (version: 3.4.2)
    [2017-02-10 14:44:33.227536] I [server-handshake.c:567:server_setvolume] 0-dhtvol-server: accepted client from gluster-1-2-31372-2017/02/10-14:44:33:130279-dhtvol-client-2-0 (version: 3.4.2)
    [2017-02-10 14:44:33.349065] I [server-handshake.c:567:server_setvolume] 0-dhtvol-server: accepted client from gluster-1-3-31632-2017/02/10-14:42:40:76203-dhtvol-client-2-2 (version: 3.4.2)
    [2017-02-10 14:44:33.358599] I [server-handshake.c:567:server_setvolume] 0-dhtvol-server: accepted client from gluster-1-1-31704-2017/02/10-14:44:33:289254-dhtvol-client-2-0 (version: 3.4.2)
    [2017-02-10 15:06:44.426731] I [server-handshake.c:567:server_setvolume] 0-dhtvol-server: accepted client from gluster-1-3-31777-2017/02/10-15:06:44:351469-dhtvol-client-2-0 (version: 3.4.2)
    [2017-02-10 15:06:48.693234] I [server.c:762:server_rpc_notify] 0-dhtvol-server: disconnecting connectionfrom gluster-1-3-31632-2017/02/10-14:42:40:76203-dhtvol-client-2-2
    [2017-02-10 15:06:48.693299] I [server-helpers.c:729:server_connection_put] 0-dhtvol-server: Shutting down connection gluster-1-3-31632-2017/02/10-14:42:40:76203-dhtvol-client-2-2
    [2017-02-10 15:06:48.693351] I [server-helpers.c:617:server_connection_destroy] 0-dhtvol-server: destroyed connection of gluster-1-3-31632-2017/02/10-14:42:40:76203-dhtvol-client-2-2
    [root@gluster-1-3 bricks]#
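    When a problem needs more detail, the log level can be raised per volume; a sketch, assuming the volume is named testvol (check the option names against your GlusterFS version, and set them back afterwards since DEBUG is very verbose):

    gluster volume set testvol diagnostics.brick-log-level DEBUG    # brick-side logs
    gluster volume set testvol diagnostics.client-log-level DEBUG   # client/mount logs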

      

     Handling typical GlusterFS cluster failures


     

    1. Replicated volume data inconsistency
    Symptom: the data on the two replicas of a volume no longer matches
    Simulation: delete the data on one of the bricks
    Fix
     Trigger self-heal: traverse and access the files
     find /mnt -type f -print0 | xargs -0 head -c1

     Mount the replicated volume

    [root@gluster-1-3 bricks]# mkdir /rep
    [root@gluster-1-3 bricks]# mount -t glusterfs gluster-1-1:/testvol /rep
    [root@gluster-1-3 bricks]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda3              35G  4.2G   30G  13% /
    tmpfs                 996M     0  996M   0% /dev/shm
    /dev/sda1             380M   33M  327M  10% /boot
    /dev/sdb              9.8G   62M  9.2G   1% /brick1
    gluster-1-1:/dhtvol    30G  3.3G   25G  12% /mnt
    gluster-1-1:/testvol  9.8G  2.2G  7.1G  24% /rep
    [root@gluster-1-3 bricks]#
    Simulate the problem: delete a file on gluster-1-1
    [root@gluster-1-1 ~]# cd /brick1/b1/
    [root@gluster-1-1 b1]# ls
    a  b  c  dd.dat
    [root@gluster-1-1 b1]# rm -f a
    [root@gluster-1-1 b1]# ls
    b  c  dd.dat
    [root@gluster-1-1 b1]#
    The file is still present on gluster-1-2
    [root@gluster-1-2 ~]# ls /brick1/b2/
    a  b  c  dd.dat
    [root@gluster-1-2 ~]#
     
    Accessing the file triggers its automatic repair
    [root@gluster-1-3 bricks]# ls /rep/
    a  b  c  dd.dat
    [root@gluster-1-3 bricks]# cat /rep/a
    [root@gluster-1-3 bricks]# cat /rep/a
    [root@gluster-1-3 bricks]#
    It has been repaired automatically
    [root@gluster-1-1 b1]# ls
    b  c  dd.dat
    [root@gluster-1-1 b1]# ls
    a  b  c  dd.dat
    [root@gluster-1-1 b1]#

    Traversing the mount point and accessing every file can also trigger healing; the command below does that in bulk. In my test, however, this command did not do the trick.

    find /rep -type f -print0 | xargs -0 head -c1
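    As an alternative to walking the mount point, the self-heal daemon can be driven directly; a sketch, assuming the replicated volume is named testvol:

    gluster volume heal testvol full    # queue a full self-heal of the volume
    gluster volume heal testvol info    # list entries that are still pending heal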

     

    2. Incorrect configuration on a GlusterFS cluster node
    Simulation
     Delete part of server2's configuration
     Configuration location: /var/lib/glusterd/
    Fix
     Trigger a repair: synchronize the configuration with the gluster tool
     gluster volume sync server1 all
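    A sketch of that recovery, run on the node whose configuration was damaged (server1 is assumed to be the node that still holds a good copy):

    gluster volume sync server1 all    # pull the volume definitions from server1
    gluster volume info                # verify that the volumes are back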

    Restoring a replicated-volume brick
     Symptom: one brick of a two-replica volume is damaged
     Recovery procedure

    1. Recreate the failed brick directory
    setfattr -n trusted.gfid -v 0x00000000000000000000000000000001 /data2
    setfattr -n trusted.glusterfs.dht -v 0x000000010000000000000000ffffffff /data2
    setfattr -n trusted.glusterfs.volume-id -v 0xcc51d546c0af4215a72077ad9378c2ac /data2
    (replace the -v values and the /data2 path with the ones for your own volume)
    2. Set the extended attributes (copy them from the surviving replica brick)
    3. Restart the glusterd service
    4. Trigger data self-heal
    find /mntpoint -type f -print0 | xargs -0 head -c1 >/dev/null

     Simulating the deletion of a brick

     

    [root@gluster-1-1 b1]# gluster volume info testvol
      
    Volume Name: testvol
    Type: Replicate
    Volume ID: 1fa96ed0-a062-4ccf-9e4f-07cb9d84296f
    Status: Started
    Number of Bricks: 1 x 2 = 2
    Transport-type: tcp
    Bricks:
    Brick1: gluster-1-1:/brick1/b1
    Brick2: gluster-1-2:/brick1/b2
    Options Reconfigured:
    nfs.disable: on
    auth.allow: 10.0.*
     
    [root@gluster-1-2 ~]# ls /brick1/b2/
    a  b  c  dd.dat
    [root@gluster-1-2 ~]# rm -rf /brick1/b2/
    [root@gluster-1-2 ~]# ls /brick1/b2
    ls: cannot access /brick1/b2: No such file or directory
    [root@gluster-1-2 ~]#

    1. On node 1, get the extended attributes

    [root@gluster-1-1 b1]# cd /brick1/
    [root@gluster-1-1 brick1]# getfattr -d -m . -e hex b1/
    # file: b1/
    trusted.afr.testvol-client-0=0x000000000000000000000000
    trusted.afr.testvol-client-1=0x000000000000000000000001
    trusted.gfid=0x00000000000000000000000000000001
    trusted.glusterfs.dht=0x000000010000000000000000ffffffff
    trusted.glusterfs.volume-id=0x1fa96ed0a0624ccf9e4f07cb9d84296f
     
    [root@gluster-1-1 brick1]#

    2. Recreate the failed brick directory
    For the recovery, the extended attribute values can be copied from what was read on gluster-1-1; the order in which they are set does not matter.

    [root@gluster-1-2 brick1]# mkdir b2
    [root@gluster-1-2 brick1]#  getfattr -d -m . -e hex b2
    [root@gluster-1-2 brick1]# setfattr -n trusted.glusterfs.volume-id -v 0x1fa96ed0a0624ccf9e4f07cb9d84296f b2
    [root@gluster-1-2 brick1]#  getfattr -d -m . -e hex b2
    # file: b2
    trusted.glusterfs.volume-id=0x1fa96ed0a0624ccf9e4f07cb9d84296f
     
    [root@gluster-1-2 brick1]# setfattr -n trusted.gfid -v 0x00000000000000000000000000000001  b2
    [root@gluster-1-2 brick1]# setfattr -n trusted.glusterfs.dht -v 0x000000010000000000000000ffffffff b2
    [root@gluster-1-2 brick1]#  getfattr -d -m . -e hex b2
    # file: b2
    trusted.gfid=0x00000000000000000000000000000001
    trusted.glusterfs.dht=0x000000010000000000000000ffffffff
    trusted.glusterfs.volume-id=0x1fa96ed0a0624ccf9e4f07cb9d84296f
     
    [root@gluster-1-2 brick1]#
     
    Restart the service
    [root@gluster-1-2 brick1]# /etc/init.d/glusterd restart
    Starting glusterd:                                         [  OK  ]
    [root@gluster-1-2 brick1]# ls /brick1/b2/
    [root@gluster-1-2 brick1]# ls /brick1/b2/

     

    3. In my test, restarting glusterd alone did not take effect: even after reading files at the mount point, the files on gluster-1-1 did not show up on gluster-1-2.
    It was finally resolved by restarting the testvol volume.

    [root@gluster-1-2 b2]# gluster volume stop testvol
    Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
    volume stop: testvol: success
    [root@gluster-1-2 b2]# gluster volume start testvol
    volume start: testvol: success
    [root@gluster-1-2 b2]# ls
    44  a  b  c  f
    [root@gluster-1-2 b2]#

    GlusterFS production tuning essentials

    Key system considerations
     Performance requirements
     Read/write mix
     Throughput / IOPS / availability
     Workload
     What application?
     Large files?
     Small files?
     Requirements beyond throughput?
     
     
    System scale and architecture
     In theory, performance is determined by the hardware configuration
     CPU / memory / disk / network
     System scale is determined by performance and capacity requirements
     2U/4U storage servers plus JBODs are well suited for building bricks
     Three typical deployment profiles
     Capacity-driven applications
     2U/4U storage server + many JBODs
     Low CPU/RAM/network requirements
     Mixed performance-and-capacity applications
     2U/4U storage server + a few JBODs
     High CPU/RAM, modest network
     Performance-driven applications
     1U/2U storage server (no JBOD)
     High CPU/RAM, fast disks and network
     
     
    System configuration
     Choose an appropriate volume type for the workload
     Volume types
     DHT (distribute) – high performance, no redundancy
     AFR (replicate) – high availability, good read performance
     STP (stripe) – high concurrent reads, low write performance, no redundancy
     Protocol / performance
     Native (FUSE) – best overall performance
     NFS – can perform best for specific applications
     CIFS – Windows clients only
     Data flow
     The data flow differs between access protocols
     
    In production, distribution (hash) combined with replication is a must; see the volume-creation example below, which clients then mount over the native protocol.
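    For example, a distributed-replicated volume built from four bricks spread over two servers; a sketch only, with illustrative volume and brick names (with replica 2, bricks are grouped into replica pairs in the order listed, so each pair here spans both hosts):

    gluster volume create dr-vol replica 2 \
        gluster-1-1:/brick1/dr1 gluster-1-2:/brick1/dr1 \
        gluster-1-1:/brick1/dr2 gluster-1-2:/brick1/dr2
    gluster volume start dr-vol
    mount -t glusterfs gluster-1-1:/dr-vol /mnt/dr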
     
     
    System hardware configuration
     Node and cluster configuration
     More CPUs – support more concurrent threads
     More memory – support a larger cache
     More network ports – support higher throughput
     A dedicated back-end network for intra-cluster communication
     NFS/CIFS access requires a dedicated back-end network
     At least 10GbE is recommended
     The native protocol is used for inter-node communication
     
    Performance lessons learned
     GlusterFS performance depends heavily on the hardware
     Size the hardware based on a thorough understanding of the application
     The default parameters are intended for general-purpose use
     GlusterFS exposes a number of performance tuning parameters
     When chasing a performance problem, rule out disk and network faults first
     
    A RAID set of 6 to 8 disks per group is recommended
     
     
    System tuning
     Key tuning parameters (a CLI sketch follows this list)
     performance.write-behind-window-size 65535 (bytes)
     performance.cache-refresh-timeout 1 (seconds)
     performance.cache-size 1073741824 (bytes)
     performance.read-ahead off (1GbE only)
     performance.io-thread-count 24 (number of CPU cores)
     performance.client-io-threads on (on the client)
     performance.write-behind on
     performance.flush-behind on
     cluster.stripe-block-size 4MB (default 128KB)
     nfs.disable off (NFS is enabled by default)
     The default settings are intended for mixed workloads
     Tune per application
     Understand the hardware/firmware configuration and its impact on performance
     e.g. CPU frequency, InfiniBand, 10GbE, TCP offload
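    A sketch of how the parameters above would be applied with the gluster CLI, assuming a volume named testvol (verify each option name against your GlusterFS version before relying on it):

    gluster volume set testvol performance.write-behind-window-size 65535
    gluster volume set testvol performance.cache-refresh-timeout 1
    gluster volume set testvol performance.cache-size 1073741824
    gluster volume set testvol performance.io-thread-count 24
    gluster volume set testvol performance.client-io-threads on
    gluster volume set testvol performance.write-behind on
    gluster volume set testvol performance.flush-behind on
    gluster volume info testvol    # "Options Reconfigured" shows what has been set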
     
     
    KVM optimization (a sketch of the sysfs tweaks follows this list)
     Use the QEMU-GlusterFS (libgfapi) integration
     gluster volume set <volume> group virt
     tuned-adm profile rhs-virtualization
     KVM host: tuned-adm profile virtual-host
     Use separate volumes for images and for application data
     No more than 2 KVM hosts per gluster node (16 guests per host)
     To improve response time
     Lower /sys/block/vda/queue/nr_requests
     Server/Guest: 128/8 (defaults 256/128)
     To improve read bandwidth
     Raise /sys/block/vda/queue/read_ahead_kb
     VM readahead: 4096 (default 128)
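    A minimal sketch of the block-queue tweaks above, assuming the server data disk is sdb and the guest disk shows up as vda (device names will differ per machine):

    # on the gluster server: shorten the request queue of the data disk
    echo 128 > /sys/block/sdb/queue/nr_requests
    # inside the KVM guest: shorten its queue and enlarge readahead
    echo 8 > /sys/block/vda/queue/nr_requests
    echo 4096 > /sys/block/vda/queue/read_ahead_kb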

      

     

  • Original article: https://www.cnblogs.com/wuhg/p/10077447.html