  • GlusterFS: creating a distributed replicated volume

    1. Create the bricks

    Create the following directories on both nodes (the transcript below shows hadoop4; run the same commands on k8s-node2):

    [root@hadoop4 ~]# mkdir /data/brick1/brick2
    [root@hadoop4 ~]# mkdir /data/brick1/brick3
    

    2. Create the distributed replicated volume

    gluster volume create drv1 replica 2 hadoop4:/data/brick1/brick2/ hadoop4:/data/brick1/brick3/ k8s-node2:/data/brick1/brick2/ k8s-node2:/data/brick1/brick3/
    Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
    Do you still want to continue?
     (y/n) y
    volume create: drv1: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal. Bricks should be on different nodes to have best fault tolerant configuration. Use 'force' at the end of the command if you want to override this behavior. 
    

    The command failed: gluster complains that multiple bricks of the replicate volume sit on the same server. This is a side effect of simulating a distributed replicated volume with only two nodes. The workaround is to append `force` and re-run the command:

    [root@k8s-node2 gluster_client]# gluster volume create drv1 replica 2 hadoop4:/data/brick1/brick2/ hadoop4:/data/brick1/brick3/ k8s-node2:/data/brick1/brick2/ k8s-node2:/data/brick1/brick3/ force
    volume create: drv1: success: please start the volume to access data
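If you want each replica pair to span both nodes even with only two machines, one option (a sketch, reordering the same brick paths used above) is to interleave the hosts on the command line, since consecutive bricks form a replica set:

```shell
# Consecutive bricks (here, pairs) form each replica set, so interleaving
# hosts puts the two copies of every file on different nodes.
# `force` may still be required because each host contributes two bricks.
gluster volume create drv1 replica 2 \
  hadoop4:/data/brick1/brick2/ k8s-node2:/data/brick1/brick2/ \
  hadoop4:/data/brick1/brick3/ k8s-node2:/data/brick1/brick3/ force
```

This requires a running gluster cluster, so it is shown here only as an alternative ordering, not a tested transcript.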
    

    3. Check the volume information

    3.1 List and inspect

    [root@k8s-node2 ~]# gluster volume list
    drv1
    rv0
    rv1
    [root@k8s-node2 ~]# gluster volume info drv1
     
    Volume Name: drv1
    Type: Distributed-Replicate
    Volume ID: 426732fb-9c04-4831-b401-db7cac41a0e3
    Status: Created
    Snapshot Count: 0
    Number of Bricks: 2 x 2 = 4
    Transport-type: tcp
    Bricks:
    Brick1: hadoop4:/data/brick1/brick2
    Brick2: hadoop4:/data/brick1/brick3
    Brick3: k8s-node2:/data/brick1/brick2
    Brick4: k8s-node2:/data/brick1/brick3
    Options Reconfigured:
    cluster.granular-entry-heal: on
    storage.fips-mode-rchecksum: on
    transport.address-family: inet
    nfs.disable: on
    performance.client-io-threads: off
    

    The Type field confirms the volume is Distributed-Replicate.
    3.2 Start the volume

    # gluster volume start drv1 
    volume start: drv1: success
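After starting, it is worth confirming that all brick processes came up before any client mounts the volume (a routine check not shown in the original transcript):

```shell
# Each of the four bricks should report Online = Y,
# along with the self-heal daemon on every node.
gluster volume status drv1
```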
    

    4. Use the volume from a client

    4.1 Create a mount point and mount the volume

    [root@k8s-node2 ~]# mkdir /gluster_client_dis
    [root@k8s-node2 ~]# mount -t glusterfs hadoop4:/drv1 /gluster_client_dis/
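To confirm the mount and watch replication in action, you can write a file through the mount and then look inside the brick directories (a sketch; run the `ls` step on both hadoop4 and k8s-node2):

```shell
# On the client: confirm the gluster mount and write a test file
df -hT /gluster_client_dis
echo hello > /gluster_client_dis/testfile

# On each server: the file should appear in the bricks of exactly
# one replica set (both bricks of that set hold a full copy)
ls -l /data/brick1/brick2 /data/brick1/brick3
```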
    
    

    Note: because this simulation puts both bricks of a replica set on the same node, the two copies of a file land on a single server, so losing that server loses the file. In production, use four machines so every replica pair spans different nodes.
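The brick-ordering pitfall can be made concrete with a small shell sketch. The grouping rule (consecutive bricks on the `volume create` line form a replica set) is gluster's documented behavior; the loop itself is only an illustration:

```shell
# With `replica 2`, bricks 1-2 form replica set 0 and bricks 3-4 form
# replica set 1. The ordering used above therefore pairs the two
# hadoop4 bricks together, leaving both copies on one host.
replica=2
i=0
for b in hadoop4:/data/brick1/brick2 hadoop4:/data/brick1/brick3 \
         k8s-node2:/data/brick1/brick2 k8s-node2:/data/brick1/brick3; do
  echo "replica set $((i / replica)): $b"
  i=$((i + 1))
done
```

Running it prints set 0 containing only hadoop4 bricks and set 1 containing only k8s-node2 bricks, which is exactly the layout the note above warns about.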

  • Original post: https://www.cnblogs.com/yjt1993/p/14772712.html