  • Redis 6.2.3 cluster setup

    Building a Redis cluster with redis-cli (the current approach)

    Starting with Redis 5, a cluster can be created directly with the redis-cli command; there is no longer any need to go through the hassle of installing a Ruby environment.

    Redis install directory: /usr/local/redis

    Redis server version: 6.2.3

    https://redis.io/topics/cluster-tutorial — the official cluster tutorial

    https://redis.io/download — the official installation instructions (scroll down)

    Installing Redis

    Download it from https://redis.io/download:

    wget https://download.redis.io/releases/redis-6.2.5.tar.gz
    tar xzf redis-6.2.5.tar.gz
    cd redis-6.2.5
    make
    

    The server and client binaries end up under src:

    $ src/redis-server
    
    $ src/redis-cli
    redis> set foo bar
    OK
    redis> get foo
    "bar"
    

    Copy redis-server, redis-cli, and the other binaries into /usr/local/redis/bin.

    Copy redis.conf into /usr/local/redis/etc.
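
    A minimal sketch of those two copy steps, run from the extracted redis-6.2.5 source directory (the exact file list is an assumption; copy whatever binaries you need):

    sudo mkdir -p /usr/local/redis/bin /usr/local/redis/etc
    sudo cp src/redis-server src/redis-cli /usr/local/redis/bin/
    sudo cp redis.conf /usr/local/redis/etc/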

    Start it:

     /usr/local/redis/bin/redis-server /usr/local/redis/etc/redis.conf 
    

    Begin the cluster setup

    Create the directories

    sudo mkdir /usr/local/cluster && cd /usr/local/cluster
    
    sudo mkdir 7000 7001 7002 7003 7004 7005
    sudo mkdir -p /usr/local/cluster/log/
    
    

    Create a redis.conf to use as a template

    sudo vim redis.conf
    

    With the following content:

    bind  0.0.0.0
    protected-mode no 
    daemonize yes
    cluster-enabled yes
    cluster-node-timeout 15000
    appendonly yes
    
    port 7000
    pidfile /var/run/redis_7000.pid
    dbfilename dump_7000.rdb
    appendfilename "appendonly_7000.aof"
    cluster-config-file nodes_7000.conf
    logfile "/usr/local/cluster/log/redis_7000.log"
    

    Copy it into each port's directory

    sudo cp -f /usr/local/cluster/redis.conf   /usr/local/cluster/7000/
    sudo cp -f /usr/local/cluster/redis.conf   /usr/local/cluster/7001/
    sudo cp -f /usr/local/cluster/redis.conf   /usr/local/cluster/7002/
    sudo cp -f /usr/local/cluster/redis.conf   /usr/local/cluster/7003/
    sudo cp -f /usr/local/cluster/redis.conf   /usr/local/cluster/7004/
    sudo cp -f /usr/local/cluster/redis.conf   /usr/local/cluster/7005/
    

    Edit each copy to match its port

    sudo vim /usr/local/cluster/7001/redis.conf
    :%s/7000/7001/g
    
    sudo vim /usr/local/cluster/7002/redis.conf
    :%s/7000/7002/g
    
    sudo vim /usr/local/cluster/7003/redis.conf
    :%s/7000/7003/g
    
    sudo vim /usr/local/cluster/7004/redis.conf
    :%s/7000/7004/g
    
    sudo vim /usr/local/cluster/7005/redis.conf
    :%s/7000/7005/g
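
    The copy-and-edit steps above can also be collapsed into one loop, if you prefer (a sketch; sed -i does the same port substitution in place, and is a no-op for 7000):

    for port in 7000 7001 7002 7003 7004 7005; do
        sudo cp -f /usr/local/cluster/redis.conf /usr/local/cluster/${port}/
        sudo sed -i "s/7000/${port}/g" /usr/local/cluster/${port}/redis.conf
    done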
    
    

    Create a startup script

    sudo ln -s /usr/local/redis/bin/redis-server /usr/local/bin/redis-server
    
    sudo vim start.sh
    
    #!/bin/bash
    /usr/local/bin/redis-server /usr/local/cluster/7000/redis.conf
    /usr/local/bin/redis-server /usr/local/cluster/7001/redis.conf
    /usr/local/bin/redis-server /usr/local/cluster/7002/redis.conf
    /usr/local/bin/redis-server /usr/local/cluster/7003/redis.conf
    /usr/local/bin/redis-server /usr/local/cluster/7004/redis.conf
    /usr/local/bin/redis-server /usr/local/cluster/7005/redis.conf
    
    sudo chmod +x start.sh && sudo  ./start.sh
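
    A matching stop.sh can be useful when tearing the test cluster down later (a sketch, assuming the six ports above; SHUTDOWN NOSAVE skips the final RDB save):

    #!/bin/bash
    # stop.sh - shut down all six test instances
    for port in 7000 7001 7002 7003 7004 7005; do
        /usr/local/redis/bin/redis-cli -p ${port} shutdown nosave
    done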
    
    

    Check that everything started

    ps -ef|grep redis |grep -v "grep"
    
    ╭─deepin@deepin-PC /usr/local/cluster 
    ╰─$ ps -ef|grep redis|grep -v "grep"                                 
    root     21471     1  0 08:56 ?        00:00:00 /usr/local/bin/redis-server 0.0.0.0:7000 [cluster]
    root     21473     1  0 08:56 ?        00:00:00 /usr/local/bin/redis-server 0.0.0.0:7001 [cluster]
    root     21475     1  0 08:56 ?        00:00:00 /usr/local/bin/redis-server 0.0.0.0:7002 [cluster]
    root     21477     1  0 08:56 ?        00:00:00 /usr/local/bin/redis-server 0.0.0.0:7003 [cluster]
    root     21479     1  0 08:56 ?        00:00:00 /usr/local/bin/redis-server 0.0.0.0:7004 [cluster]
    root     21481     1  0 08:56 ?        00:00:00 /usr/local/bin/redis-server 0.0.0.0:7005 [cluster]
    

    All six Redis instances are running.

    Create the cluster

    redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 --cluster-replicas 1
    

    It was created successfully:

    ╭─deepin@deepin-PC /usr/local/cluster 
    ╰─$ sudo redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 --cluster-replicas 1                                         1 ↵
    >>> Performing hash slots allocation on 6 nodes...
    Master[0] -> Slots 0 - 5460
    Master[1] -> Slots 5461 - 10922
    Master[2] -> Slots 10923 - 16383
    Adding replica 127.0.0.1:7004 to 127.0.0.1:7000
    Adding replica 127.0.0.1:7005 to 127.0.0.1:7001
    Adding replica 127.0.0.1:7003 to 127.0.0.1:7002
    >>> Trying to optimize slaves allocation for anti-affinity
    [WARNING] Some slaves are in the same host as their master
    M: 128a61043843bd93b28212cb8db09eb29b53410a 127.0.0.1:7000                                                                                                                                     
       slots:[0-5460] (5461 slots) master
    M: 954afe83d6bf58a9a7b9357827febf44524b38ea 127.0.0.1:7001
       slots:[5461-10922] (5462 slots) master
    M: 38488cf23d8ba899ac018d8b3a7bb252011fd75f 127.0.0.1:7002
       slots:[10923-16383] (5461 slots) master
    S: d882c3ab7f37613663d7f063c4d7eae5cefd7fb7 127.0.0.1:7003
       replicates 38488cf23d8ba899ac018d8b3a7bb252011fd75f
    S: fe79965b96d639f5ddd928754a61f6a543903a63 127.0.0.1:7004
       replicates 128a61043843bd93b28212cb8db09eb29b53410a
    S: 7f264ab64929710f859a1206a6ea202267810029 127.0.0.1:7005
       replicates 954afe83d6bf58a9a7b9357827febf44524b38ea
    Can I set the above configuration? (type 'yes' to accept): yes
    >>> Nodes configuration updated
    >>> Assign a different config epoch to each node
    >>> Sending CLUSTER MEET messages to join the cluster
    Waiting for the cluster to join
    .
    >>> Performing Cluster Check (using node 127.0.0.1:7000)
    M: 128a61043843bd93b28212cb8db09eb29b53410a 127.0.0.1:7000
       slots:[0-5460] (5461 slots) master
       1 additional replica(s)
    S: d882c3ab7f37613663d7f063c4d7eae5cefd7fb7 127.0.0.1:7003
       slots: (0 slots) slave
       replicates 38488cf23d8ba899ac018d8b3a7bb252011fd75f
    M: 954afe83d6bf58a9a7b9357827febf44524b38ea 127.0.0.1:7001
       slots:[5461-10922] (5462 slots) master
       1 additional replica(s)
    S: fe79965b96d639f5ddd928754a61f6a543903a63 127.0.0.1:7004
       slots: (0 slots) slave
       replicates 128a61043843bd93b28212cb8db09eb29b53410a
    M: 38488cf23d8ba899ac018d8b3a7bb252011fd75f 127.0.0.1:7002
       slots:[10923-16383] (5461 slots) master
       1 additional replica(s)
    S: 7f264ab64929710f859a1206a6ea202267810029 127.0.0.1:7005
       slots: (0 slots) slave
       replicates 954afe83d6bf58a9a7b9357827febf44524b38ea
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...                                                                                                                                                                    
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    
    

    Check the cluster status

    redis-cli -h 127.0.0.1 -c -p 7000 info replication
    
    redis-cli -h 127.0.0.1 -c -p 7001 info replication
    
    redis-cli -h 127.0.0.1 -c -p 7002 info replication
    
    redis-cli -h 127.0.0.1 -c -p 7003 info replication
    
    redis-cli -h 127.0.0.1 -c -p 7004 info replication
    
    redis-cli -h 127.0.0.1 -c -p 7005 info replication
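
    The same check can be written as a loop over all six ports (a small convenience sketch):

    for port in 7000 7001 7002 7003 7004 7005; do
        echo "--- ${port} ---"
        redis-cli -h 127.0.0.1 -c -p ${port} info replication | grep -E 'role|master_host|master_port'
    done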
    

    Connect to the cluster

    redis-cli -h 127.0.0.1 -c -p 7000 (the -c flag connects in cluster mode; since redis.conf sets an explicit bind address, the -h option is not omitted here)
    

    When setting a value, you can see it is redirected to whichever master owns the key's hash slot:

    ╰─$ redis-cli -h 127.0.0.1 -c -p 7000
    127.0.0.1:7000> keys *
    (empty array)
    127.0.0.1:7000> set name wang
    -> Redirected to slot [5798] located at 127.0.0.1:7001
    OK
    127.0.0.1:7001> get name
    "wang"
    127.0.0.1:7001> 
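
    The redirect is driven by the key's hash slot rather than by chance; any node will tell you which slot a key maps to (the slot matches the redirect above):

    redis-cli -h 127.0.0.1 -c -p 7000 cluster keyslot name
    (integer) 5798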
    
    

    Cluster-related commands

    https://www.cnblogs.com/brady-wang/p/15096756.html

    redis-cli --cluster help
    Cluster Manager Commands:
      create         host1:port1 ... hostN:portN   # create a cluster
                     --cluster-replicas <arg>      # number of replicas per master
      check          host:port                     # check the cluster
                     --cluster-search-multiple-owners # check whether any slot is assigned to more than one node
      info           host:port                     # show cluster status
      fix            host:port                     # repair the cluster
                     --cluster-search-multiple-owners # also fix slots that are assigned to more than one node
      reshard        host:port                     # reshard slots, starting from any node in the cluster
                     --cluster-from <arg>          # source node IDs to take slots from (comma-separated); pass all to use every master as a source; if omitted you are prompted
                     --cluster-to <arg>            # node ID of the single destination node; if omitted you are prompted
                     --cluster-slots <arg>         # number of slots to move; if omitted you are prompted
                     --cluster-yes                 # answer the migration confirmation automatically
                     --cluster-timeout <arg>       # timeout for the MIGRATE command
                     --cluster-pipeline <arg>      # keys fetched per CLUSTER GETKEYSINSLOT call (default 10)
                     --cluster-replace             # use REPLACE when migrating keys to the destination node
      rebalance      host:port                                      # rebalance slot counts, starting from any node in the cluster
                     --cluster-weight <node1=w1...nodeN=wN>         # per-node weights
                     --cluster-use-empty-masters                    # let masters without any slots take part (not allowed by default)
                     --cluster-timeout <arg>                        # timeout for the MIGRATE command
                     --cluster-simulate                             # simulate the rebalance without actually moving anything
                     --cluster-pipeline <arg>                       # keys fetched per CLUSTER GETKEYSINSLOT call (default 10)
                     --cluster-threshold <arg>                      # only rebalance when the slot imbalance exceeds this threshold
                     --cluster-replace                              # use REPLACE when migrating keys to the destination node
      add-node       new_host:new_port existing_host:existing_port  # add a node to the given cluster (as a master by default)
                     --cluster-slave                                # add the new node as a replica (of a random master by default)
                     --cluster-master-id <arg>                      # specify the master for the new replica
      del-node       host:port node_id                              # remove the given node; the node is then shut down
      call           host:port command arg arg .. arg               # run a command on every node in the cluster
      set-timeout    host:port milliseconds                         # set cluster-node-timeout
      import         host:port                                      # import data from an external Redis instance into the cluster
                     --cluster-from <arg>                           # the source instance to import from
                     --cluster-copy                                 # use COPY when migrating
                     --cluster-replace                              # use REPLACE when migrating
      help           
    
    For check, fix, reshard, del-node, set-timeout you can specify the host and port of any working node in the cluster.
    

    Commands you can try right now

    redis-cli --cluster check 127.0.0.1:7000
    

    The output shows the current layout: 7000, 7001, and 7002 are masters, and the other three are replicas.

    ╭─deepin@deepin-PC /usr/local/cluster 
    ╰─$ redis-cli --cluster check 127.0.0.1:7000
    127.0.0.1:7000 (128a6104...) -> 0 keys | 5461 slots | 1 slaves.
    127.0.0.1:7001 (954afe83...) -> 1 keys | 5462 slots | 1 slaves.
    127.0.0.1:7002 (38488cf2...) -> 0 keys | 5461 slots | 1 slaves.
    [OK] 1 keys in 3 masters.
    0.00 keys per slot on average.                                                                                                                                                                 
    >>> Performing Cluster Check (using node 127.0.0.1:7000)
    M: 128a61043843bd93b28212cb8db09eb29b53410a 127.0.0.1:7000
       slots:[0-5460] (5461 slots) master
       1 additional replica(s)
    S: d882c3ab7f37613663d7f063c4d7eae5cefd7fb7 127.0.0.1:7003
       slots: (0 slots) slave
       replicates 38488cf23d8ba899ac018d8b3a7bb252011fd75f
    M: 954afe83d6bf58a9a7b9357827febf44524b38ea 127.0.0.1:7001
       slots:[5461-10922] (5462 slots) master
       1 additional replica(s)
    S: fe79965b96d639f5ddd928754a61f6a543903a63 127.0.0.1:7004
       slots: (0 slots) slave
       replicates 128a61043843bd93b28212cb8db09eb29b53410a
    M: 38488cf23d8ba899ac018d8b3a7bb252011fd75f 127.0.0.1:7002
       slots:[10923-16383] (5461 slots) master
       1 additional replica(s)
    S: 7f264ab64929710f859a1206a6ea202267810029 127.0.0.1:7005
       slots: (0 slots) slave
       replicates 954afe83d6bf58a9a7b9357827febf44524b38ea
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...                                                                                                                                                                    
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    

    redis-cli --cluster info 127.0.0.1:7000

    ╭─deepin@deepin-PC /usr/local/cluster 
    ╰─$ redis-cli --cluster info  127.0.0.1:7000
    127.0.0.1:7000 (128a6104...) -> 0 keys | 5461 slots | 1 slaves.
    127.0.0.1:7001 (954afe83...) -> 1 keys | 5462 slots | 1 slaves.
    127.0.0.1:7002 (38488cf2...) -> 0 keys | 5461 slots | 1 slaves.
    [OK] 1 keys in 3 masters.
    0.00 keys per slot on average.  
    

    Tip: after connecting with redis-cli, you can run info replication to inspect the master/replica relationships.

    Cluster client commands (redis-cli -c -p port)

    Cluster
    cluster info : print cluster information
    cluster nodes : list every node the cluster currently knows about, along with its details
    Nodes
    cluster meet <ip> <port> : add the node at ip:port to the cluster, making it part of the cluster
    cluster forget <node_id> : remove the node identified by node_id from the cluster
    cluster replicate <node_id> : make the current node a replica of the node identified by node_id
    cluster saveconfig : save the node's cluster configuration file to disk
    Slots
    cluster addslots <slot> [slot ...] : assign one or more slots to the current node
    cluster delslots <slot> [slot ...] : remove the assignment of one or more slots from the current node
    cluster flushslots : remove every slot assigned to the current node, leaving it with none
    cluster setslot <slot> node <node_id> : assign the slot to the node identified by node_id; if the slot is already assigned to another node, that node drops it first and then the assignment is made
    cluster setslot <slot> migrating <node_id> : migrate the slot from the current node to the node identified by node_id
    cluster setslot <slot> importing <node_id> : import the slot from the node identified by node_id into the current node
    cluster setslot <slot> stable : cancel an in-progress import or migration of the slot
    Keys
    cluster keyslot <key> : compute which slot the key should be placed in
    cluster countkeysinslot <slot> : return the number of keys the slot currently contains
    cluster getkeysinslot <slot> <count> : return up to count keys from the slot
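
    For example, applying the key-related commands to the name key stored earlier (slot 5798 belongs to 7001, per the redirect above):

    redis-cli -h 127.0.0.1 -c -p 7001 cluster keyslot name            # 5798
    redis-cli -h 127.0.0.1 -c -p 7001 cluster countkeysinslot 5798    # how many keys 7001 holds in that slot
    redis-cli -h 127.0.0.1 -c -p 7001 cluster getkeysinslot 5798 10   # list up to 10 of them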
    

    Verify that the cluster fails over automatically

    Check the replica on port 7003

    ╭─deepin@deepin-PC /usr/local/cluster 
    ╰─$ redis-cli -h 127.0.0.1 -c -p 7003        
    127.0.0.1:7003> info replication
    # Replication
    role:slave
    master_host:127.0.0.1
    master_port:7002
    master_link_status:up
    master_last_io_seconds_ago:5
    master_sync_in_progress:0
    slave_repl_offset:1246
    slave_priority:100
    slave_read_only:1
    replica_announced:1
    connected_slaves:0
    master_failover_state:no-failover
    master_replid:a5cf8f6adfda3cd57e7ca94ad5806649e72fa8b1
    master_replid2:0000000000000000000000000000000000000000
    master_repl_offset:1246
    second_repl_offset:-1
    repl_backlog_active:1
    repl_backlog_size:1048576
    repl_backlog_first_byte_offset:1
    repl_backlog_histlen:1246
    

    You can see it is a replica whose master is 7002. Now stop the 7002 server.
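
    One way to stop it (a sketch; sending SIGTERM to the pid recorded in /var/run/redis_7002.pid works just as well):

    redis-cli -h 127.0.0.1 -p 7002 shutdown nosave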

    A little later, 7003 has been promoted to master:

    127.0.0.1:7003> info replication
    # Replication
    role:master
    connected_slaves:0
    master_failover_state:no-failover
    master_replid:845d4450d9f373c9addf5021d25dde7a872807fb
    master_replid2:a5cf8f6adfda3cd57e7ca94ad5806649e72fa8b1
    master_repl_offset:1722
    second_repl_offset:1723
    repl_backlog_active:1
    repl_backlog_size:1048576
    repl_backlog_first_byte_offset:1
    repl_backlog_histlen:1722
    127.0.0.1:7003> 
    

    Start 7002 again and check 7003: it has gained a replica.
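
    Restarting it reuses the same command as the start script:

    sudo /usr/local/bin/redis-server /usr/local/cluster/7002/redis.conf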

    127.0.0.1:7003> info replication
    # Replication
    role:master
    connected_slaves:1
    slave0:ip=127.0.0.1,port=7002,state=online,offset=1778,lag=1
    master_failover_state:no-failover
    master_replid:845d4450d9f373c9addf5021d25dde7a872807fb
    master_replid2:a5cf8f6adfda3cd57e7ca94ad5806649e72fa8b1
    master_repl_offset:1778
    second_repl_offset:1723
    repl_backlog_active:1
    repl_backlog_size:1048576
    repl_backlog_first_byte_offset:1
    repl_backlog_histlen:1778
    127.0.0.1:7003> 
    

    Check 7002: it has become a replica.

    ╭─deepin@deepin-PC /usr/local/cluster 
    ╰─$ redis-cli -h 127.0.0.1 -c -p 7002                 
    127.0.0.1:7002> info replication
    # Replication
    role:slave
    master_host:127.0.0.1
    master_port:7003
    master_link_status:up
    master_last_io_seconds_ago:3
    master_sync_in_progress:0
    slave_repl_offset:1848
    slave_priority:100
    slave_read_only:1
    replica_announced:1
    connected_slaves:0
    master_failover_state:no-failover
    master_replid:845d4450d9f373c9addf5021d25dde7a872807fb
    master_replid2:0000000000000000000000000000000000000000
    master_repl_offset:1848
    second_repl_offset:-1
    repl_backlog_active:1
    repl_backlog_size:1048576
    repl_backlog_first_byte_offset:1723
    repl_backlog_histlen:126
    127.0.0.1:7002> 
    

    Verify removing a replica node

    redis-cli --cluster del-node 127.0.0.1:7002 38488cf23d8ba899ac018d8b3a7bb252011fd75f
    
    ╭─deepin@deepin-PC /usr/local/cluster 
    ╰─$ redis-cli --cluster del-node 127.0.0.1:7002 38488cf23d8ba899ac018d8b3a7bb252011fd75f
    >>> Removing node 38488cf23d8ba899ac018d8b3a7bb252011fd75f from cluster 127.0.0.1:7002
    >>> Sending CLUSTER FORGET messages to the cluster...
    >>> Sending CLUSTER RESET SOFT to the deleted node.
    ╭─deepin@deepin-PC /usr/local/cluster 
    ╰─$ redis-cli -h 127.0.0.1 -c -p 7002                                                   
    127.0.0.1:7002> keys *
    (empty array)
    127.0.0.1:7002> cluster info
    cluster_state:fail
    cluster_slots_assigned:0
    cluster_slots_ok:0
    cluster_slots_pfail:0
    cluster_slots_fail:0
    cluster_known_nodes:1
    cluster_size:0
    cluster_current_epoch:7
    cluster_my_epoch:3
    cluster_stats_messages_ping_sent:400
    cluster_stats_messages_pong_sent:376
    cluster_stats_messages_sent:776
    cluster_stats_messages_ping_received:376
    cluster_stats_messages_pong_received:400
    cluster_stats_messages_received:776
    127.0.0.1:7002> cluster nodes
    38488cf23d8ba899ac018d8b3a7bb252011fd75f 127.0.0.1:7002@17002 myself,master - 0 1628040890000 3 connected
    127.0.0.1:7002> 
    
    ╭─deepin@deepin-PC /usr/local/cluster 
    ╰─$ redis-cli -h 127.0.0.1 -c -p 7000
    127.0.0.1:7000> cluster nodes 
    128a61043843bd93b28212cb8db09eb29b53410a 127.0.0.1:7000@17000 myself,master - 0 1628040988000 1 connected 0-5460
    d882c3ab7f37613663d7f063c4d7eae5cefd7fb7 127.0.0.1:7003@17003 master - 0 1628040989396 7 connected 10923-16383
    954afe83d6bf58a9a7b9357827febf44524b38ea 127.0.0.1:7001@17001 master - 0 1628040988393 2 connected 5461-10922
    fe79965b96d639f5ddd928754a61f6a543903a63 127.0.0.1:7004@17004 slave 128a61043843bd93b28212cb8db09eb29b53410a 0 1628040988000 1 connected
    7f264ab64929710f859a1206a6ea202267810029 127.0.0.1:7005@17005 slave 954afe83d6bf58a9a7b9357827febf44524b38ea 0 1628040987000 2 connected
    127.0.0.1:7000> 
    
    

    You can see that 7002 is no longer part of the cluster.

    Note: a node is deleted by giving an IP, port, and node_id. A replica can be deleted directly, but a master that still has slots assigned cannot. After deletion the node is shut down (in this run the tool sent CLUSTER RESET SOFT to it instead, so 7002 kept running as a standalone node).

    Note: once the deleted node comes back up it cannot rejoin the cluster by itself; it is now an independent master (the remaining healthy nodes no longer hold any information about the del-node'd node).

    To bring it back into the cluster, first run cluster reset on that node, then add it again with add-node; it will then resynchronize from its master.
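
    In short, the rejoin comes down to two commands (the add-node form is demonstrated in full further below; the node ID is 7003's):

    redis-cli -h 127.0.0.1 -p 7002 cluster reset
    redis-cli --cluster add-node 127.0.0.1:7002 127.0.0.1:7000 --cluster-slave --cluster-master-id d882c3ab7f37613663d7f063c4d7eae5cefd7fb7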

    7002 will be added back to the cluster later as a replica of 7003. The current state:

    ╭─deepin@deepin-PC /usr/local/cluster 
    ╰─$ redis-cli --cluster check  127.0.0.1:7000                                           
    127.0.0.1:7000 (128a6104...) -> 0 keys | 5461 slots | 1 slaves.
    127.0.0.1:7003 (d882c3ab...) -> 0 keys | 5461 slots | 0 slaves.
    127.0.0.1:7001 (954afe83...) -> 1 keys | 5462 slots | 1 slaves.
    [OK] 1 keys in 3 masters.
    0.00 keys per slot on average.                                                                    
    >>> Performing Cluster Check (using node 127.0.0.1:7000)
    M: 128a61043843bd93b28212cb8db09eb29b53410a 127.0.0.1:7000
       slots:[0-5460] (5461 slots) master
       1 additional replica(s)
    M: d882c3ab7f37613663d7f063c4d7eae5cefd7fb7 127.0.0.1:7003
       slots:[10923-16383] (5461 slots) master
    M: 954afe83d6bf58a9a7b9357827febf44524b38ea 127.0.0.1:7001
       slots:[5461-10922] (5462 slots) master
       1 additional replica(s)
    S: fe79965b96d639f5ddd928754a61f6a543903a63 127.0.0.1:7004
       slots: (0 slots) slave
       replicates 128a61043843bd93b28212cb8db09eb29b53410a
    S: 7f264ab64929710f859a1206a6ea202267810029 127.0.0.1:7005
       slots: (0 slots) slave
       replicates 954afe83d6bf58a9a7b9357827febf44524b38ea
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...                                                                       
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    
    

    Next, test adding a new master node

    Copy an existing directory for it

    sudo cp -R 7000 7006
    
    

    sudo vim 7006/redis.conf

    :%s/7000/7006/g
    

    Start it:

    sudo /usr/local/bin/redis-server /usr/local/cluster/7006/redis.conf
    

    Add it to the cluster

    redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000 
    
    ╭─deepin@deepin-PC /usr/local/cluster 
    ╰─$ redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000 
    >>> Adding node 127.0.0.1:7006 to cluster 127.0.0.1:7000
    >>> Performing Cluster Check (using node 127.0.0.1:7000)
    M: 128a61043843bd93b28212cb8db09eb29b53410a 127.0.0.1:7000
       slots:[0-5460] (5461 slots) master
       1 additional replica(s)
    M: d882c3ab7f37613663d7f063c4d7eae5cefd7fb7 127.0.0.1:7003
       slots:[10923-16383] (5461 slots) master
       1 additional replica(s)
    M: 954afe83d6bf58a9a7b9357827febf44524b38ea 127.0.0.1:7001
       slots:[5461-10922] (5462 slots) master
       1 additional replica(s)
    S: fe79965b96d639f5ddd928754a61f6a543903a63 127.0.0.1:7004
       slots: (0 slots) slave
       replicates 128a61043843bd93b28212cb8db09eb29b53410a
    S: 38488cf23d8ba899ac018d8b3a7bb252011fd75f 127.0.0.1:7002
       slots: (0 slots) slave
       replicates d882c3ab7f37613663d7f063c4d7eae5cefd7fb7
    S: 7f264ab64929710f859a1206a6ea202267810029 127.0.0.1:7005
       slots: (0 slots) slave
       replicates 954afe83d6bf58a9a7b9357827febf44524b38ea
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...                                                                       
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    >>> Send CLUSTER MEET to node 127.0.0.1:7006 to make it join the cluster.                         
    [OK] New node added correctly.
    

    Check the result

    ╭─deepin@deepin-PC /usr/local/cluster 
    ╰─$ redis-cli --cluster check  127.0.0.1:7000                  
    127.0.0.1:7000 (128a6104...) -> 0 keys | 5461 slots | 1 slaves.
    127.0.0.1:7003 (d882c3ab...) -> 0 keys | 5461 slots | 1 slaves.
    127.0.0.1:7001 (954afe83...) -> 1 keys | 5462 slots | 1 slaves.
    127.0.0.1:7006 (05be533e...) -> 0 keys | 0 slots | 0 slaves.
    [OK] 1 keys in 4 masters.
    0.00 keys per slot on average.                                                                    
    >>> Performing Cluster Check (using node 127.0.0.1:7000)
    M: 128a61043843bd93b28212cb8db09eb29b53410a 127.0.0.1:7000
       slots:[0-5460] (5461 slots) master
       1 additional replica(s)
    M: d882c3ab7f37613663d7f063c4d7eae5cefd7fb7 127.0.0.1:7003
       slots:[10923-16383] (5461 slots) master
       1 additional replica(s)
    M: 954afe83d6bf58a9a7b9357827febf44524b38ea 127.0.0.1:7001
       slots:[5461-10922] (5462 slots) master
       1 additional replica(s)
    S: fe79965b96d639f5ddd928754a61f6a543903a63 127.0.0.1:7004
       slots: (0 slots) slave
       replicates 128a61043843bd93b28212cb8db09eb29b53410a
    M: 05be533e1bf54fef8473af29b98aa45a2507388b 127.0.0.1:7006
       slots: (0 slots) master
    S: 38488cf23d8ba899ac018d8b3a7bb252011fd75f 127.0.0.1:7002
       slots: (0 slots) slave
       replicates d882c3ab7f37613663d7f063c4d7eae5cefd7fb7
    S: 7f264ab64929710f859a1206a6ea202267810029 127.0.0.1:7005
       slots: (0 slots) slave
       replicates 954afe83d6bf58a9a7b9357827febf44524b38ea
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...                                                                       
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    

    You can see that 7006 has no slots yet, so it cannot store data; slots have to be assigned to it.

    The command to use:

    redis-cli --cluster reshard 127.0.0.1:7006
    

    The process asks for the ID of the receiving node (enter 7006's ID) and then for the source nodes; choosing all borrows slots from every existing master to make up 7006's share.
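
    The same move can also be run non-interactively with the flags from the help output above (a sketch; 7006's node ID is taken from the check output):

    redis-cli --cluster reshard 127.0.0.1:7006 \
        --cluster-to 05be533e1bf54fef8473af29b98aa45a2507388b \
        --cluster-from all \
        --cluster-slots 4000 \
        --cluster-yes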

    The final check shows that the other three masters each handed over some slots, adding up to the 4000 I requested:

    >>> Performing Cluster Check (using node 127.0.0.1:7000)
    M: 128a61043843bd93b28212cb8db09eb29b53410a 127.0.0.1:7000
       slots:[1333-5460] (4128 slots) master
       1 additional replica(s)
    M: d882c3ab7f37613663d7f063c4d7eae5cefd7fb7 127.0.0.1:7003
       slots:[12256-16383] (4128 slots) master
       1 additional replica(s)
    M: 954afe83d6bf58a9a7b9357827febf44524b38ea 127.0.0.1:7001
       slots:[6795-10922] (4128 slots) master
       1 additional replica(s)
    S: fe79965b96d639f5ddd928754a61f6a543903a63 127.0.0.1:7004
       slots: (0 slots) slave
       replicates 128a61043843bd93b28212cb8db09eb29b53410a
    M: 05be533e1bf54fef8473af29b98aa45a2507388b 127.0.0.1:7006
       slots:[0-1332],[5461-6794],[10923-12255] (4000 slots) master
    S: 38488cf23d8ba899ac018d8b3a7bb252011fd75f 127.0.0.1:7002
       slots: (0 slots) slave
       replicates d882c3ab7f37613663d7f063c4d7eae5cefd7fb7
    S: 7f264ab64929710f859a1206a6ea202267810029 127.0.0.1:7005
       slots: (0 slots) slave
       replicates 954afe83d6bf58a9a7b9357827febf44524b38ea
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...                                                                                                                 
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    
    

    Add a replica node

    redis-cli --cluster add-node 127.0.0.1:7002 127.0.0.1:7000 --cluster-slave --cluster-master-id d882c3ab7f37613663d7f063c4d7eae5cefd7fb7
    

    Explanation: this adds node 7002 to the cluster that 7000 belongs to, as a replica of the node whose node_id is d882c3ab7f37613663d7f063c4d7eae5cefd7fb7. If --cluster-master-id is omitted, the new replica is attached to a randomly chosen master.

    Result

    ╭─deepin@deepin-PC /usr/local/cluster 
    ╰─$ redis-cli --cluster add-node 127.0.0.1:7002 127.0.0.1:7000 --cluster-slave --cluster-master-id d882c3ab7f37613663d7f063c4d7eae5cefd7fb7
    >>> Adding node 127.0.0.1:7002 to cluster 127.0.0.1:7000
    >>> Performing Cluster Check (using node 127.0.0.1:7000)
    M: 128a61043843bd93b28212cb8db09eb29b53410a 127.0.0.1:7000
       slots:[0-5460] (5461 slots) master
       1 additional replica(s)
    M: d882c3ab7f37613663d7f063c4d7eae5cefd7fb7 127.0.0.1:7003
       slots:[10923-16383] (5461 slots) master
    M: 954afe83d6bf58a9a7b9357827febf44524b38ea 127.0.0.1:7001
       slots:[5461-10922] (5462 slots) master
       1 additional replica(s)
    S: fe79965b96d639f5ddd928754a61f6a543903a63 127.0.0.1:7004
       slots: (0 slots) slave
       replicates 128a61043843bd93b28212cb8db09eb29b53410a
    S: 7f264ab64929710f859a1206a6ea202267810029 127.0.0.1:7005
       slots: (0 slots) slave
       replicates 954afe83d6bf58a9a7b9357827febf44524b38ea
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...                                                                       
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    >>> Send CLUSTER MEET to node 127.0.0.1:7002 to make it join the cluster.                         
    Waiting for the cluster to join
    
    >>> Configure node as replica of 127.0.0.1:7003.
    [OK] New node added correctly.
    

    Checking again, 7002 has rejoined the cluster:

    ╰─$ redis-cli --cluster check  127.0.0.1:7000
    127.0.0.1:7000 (128a6104...) -> 0 keys | 5461 slots | 1 slaves.
    127.0.0.1:7003 (d882c3ab...) -> 0 keys | 5461 slots | 1 slaves.
    127.0.0.1:7001 (954afe83...) -> 1 keys | 5462 slots | 1 slaves.
    [OK] 1 keys in 3 masters.
    0.00 keys per slot on average.                                                                    
    >>> Performing Cluster Check (using node 127.0.0.1:7000)
    M: 128a61043843bd93b28212cb8db09eb29b53410a 127.0.0.1:7000
       slots:[0-5460] (5461 slots) master
       1 additional replica(s)
    M: d882c3ab7f37613663d7f063c4d7eae5cefd7fb7 127.0.0.1:7003
       slots:[10923-16383] (5461 slots) master
       1 additional replica(s)
    M: 954afe83d6bf58a9a7b9357827febf44524b38ea 127.0.0.1:7001
       slots:[5461-10922] (5462 slots) master
       1 additional replica(s)
    S: fe79965b96d639f5ddd928754a61f6a543903a63 127.0.0.1:7004
       slots: (0 slots) slave
       replicates 128a61043843bd93b28212cb8db09eb29b53410a
    S: 38488cf23d8ba899ac018d8b3a7bb252011fd75f 127.0.0.1:7002
       slots: (0 slots) slave
       replicates d882c3ab7f37613663d7f063c4d7eae5cefd7fb7
    S: 7f264ab64929710f859a1206a6ea202267810029 127.0.0.1:7005
       slots: (0 slots) slave
       replicates 954afe83d6bf58a9a7b9357827febf44524b38ea
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...                                                                       
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    
    

    CLUSTER RESET resets a cluster node in a more or less drastic way depending on whether you choose a hard or a soft reset. Note that the command does not work on a master that holds one or more keys; in that case, to completely reset the master, remove all of its keys first (for example with FLUSHALL) and then run CLUSTER RESET.

    Its effects on the node are as follows:

    1. Every other node in the cluster is forgotten
    2. All assigned/open slots are reset, so the slots-to-nodes mapping is completely cleared
    3. If the node is a replica, it is turned into an (empty) master; its dataset is discarded, so it ends up as an empty master
    4. Hard reset only: a new node ID is generated
    5. Hard reset only: the currentEpoch and configEpoch variables are set to 0
    6. The new configuration is persisted to the node's cluster configuration file on disk

    Use this command when a node needs to be provisioned for a new or a different cluster. It is also used extensively by the Redis Cluster test framework, which relies on it to reset cluster state at the start of every new test unit.
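
    For example, to hard-reset a master that still holds keys (illustrative only; <port> is a placeholder, and this wipes that node's data):

    redis-cli -h 127.0.0.1 -p <port> flushall
    redis-cli -h 127.0.0.1 -p <port> cluster reset hard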

    Further tests

    Once the cluster is built you can run a few more tests of your own, for example: store data via master A and check whether it stays on node A or gets routed to whichever master owns the key's slot.

    Or take master A down and see whether A's replica takes over its role, and so on.

    https://www.jianshu.com/p/a1e62e78667c

    https://segmentfault.com/a/1190000017151802
