  • NoSQL Databases: Redis Cluster

    Redis Cluster

    Redis Cluster 工作原理

    The Sentinel mechanism solves Redis high availability: when a master fails, a slave is
    automatically promoted to master, so the Redis service keeps working. Sentinel does not,
    however, address the single-node write bottleneck: write throughput is still limited by a
    single machine's memory size, concurrency limits, NIC speed, and similar factors.
    To overcome the single-node bottleneck and scale Redis performance, a distributed cluster can be used.
    Early Redis distributed deployment schemes:
    1. Client-side partitioning: the client program decides which redis node each key is written
    to, but the client must implement write distribution, high-availability management, and failover itself.
    2. Proxy-based: a third-party redis proxy sits between clients and servers; clients connect to
    the proxy layer, which handles key distribution. This is simpler for the client, but adding or removing cluster nodes is relatively cumbersome, and the proxy itself is a single point of failure and a performance bottleneck.
    
    Redis 3.0 introduced the decentralized redis cluster mechanism. In this masterless-coordination
    design, each node stores its own data plus the state of the whole cluster, and every node is connected to all other nodes.
    

    Redis Cluster has the following characteristics:

    1. All Redis nodes are interconnected and probe each other with a PING mechanism
    2. A node is considered failed only when more than half of the nodes in the cluster detect it as failed
    3. Clients connect to redis directly without a proxy, but the application should be configured with the IPs of all redis servers
    4. redis cluster maps all keys across the slots numbered 0-16383; reads and writes must go to the redis
    node that owns the slot, so adding N redis nodes scales write capacity roughly N times, each redis node being responsible for about 16384/N slots
    5. Redis cluster pre-allocates the 16384 slots; when a key-value pair is written to the cluster, the
    value of CRC16(key) mod 16384 determines the slot, and therefore the Redis node, the key is written to, effectively removing the single-machine bottleneck.
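The CRC16-mod-16384 mapping can be sketched in a few lines of Python. This is a from-scratch sketch of the CRC16 variant Redis Cluster uses (CRC16-CCITT/XMODEM); hash tags (`{...}`) are deliberately ignored here, and the function names are illustrative:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM): polynomial 0x1021, initial value 0."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
        crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Slot a key maps to; hash tags ({...}) are ignored in this sketch."""
    return crc16(key.encode()) % 16384

print(key_slot("hello"))  # matches CLUSTER KEYSLOT hello shown later in this walkthrough
```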
    

    Redis cluster basic architecture
    Suppose there are three master nodes A, B, and C, and the 16384 slots are distributed among them using hash slots.
    The slot ranges they cover could be:

    Node A covers 0-5460
    Node B covers 5461-10922
    Node C covers 10923-16383
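Given those ranges, routing a slot to its node is a simple range lookup. A minimal sketch (node names A/B/C and the helper name are illustrative; the slot numbers come from CLUSTER KEYSLOT examples later in this walkthrough):

```python
import bisect

# (range_end, node) pairs for the A/B/C layout above, sorted by range end
SLOT_RANGES = [(5460, "A"), (10922, "B"), (16383, "C")]

def node_for_slot(slot: int) -> str:
    """Return the node owning a slot: the first range whose end is >= slot."""
    ends = [end for end, _ in SLOT_RANGES]
    return SLOT_RANGES[bisect.bisect_left(ends, slot)][1]

# 866, 5798, 12299 are the slots of hello, name, linux later in this walkthrough
print(node_for_slot(866), node_for_slot(5798), node_for_slot(12299))  # A B C
```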
    

    Redis cluster master/slave architecture

    Although the Redis cluster architecture solves the write-scaling problem, it introduces a new one: how
    is each Redis master kept highly available? The answer is to set up master/slave replication for every master node, which gives the redis cluster high availability
    

    Redis Cluster deployment architectures

    Environment A: 3 servers, each running two redis service instances on ports 6379 and 6380; suitable for test environments
    
    Environment B: 6 servers forming three master/slave pairs; suitable for production
    
    #Cluster nodes
    172.31.0.8
    172.31.0.18
    172.31.0.28
    172.31.0.38
    172.31.0.48
    172.31.0.58
    #Reserved for future expansion
    172.31.0.68
    172.31.0.78
    

    Note: Redis 5.x changed considerably compared with earlier versions; the configuration of both 5.x and 4.x is covered separately below

    Deployment methods

    There are several ways to deploy a redis cluster:
    1. Native commands: good for understanding the Redis Cluster architecture; not used in production
    2. Official tooling: efficient and accurate; usable in production
    3. In-house tooling: can provide visual, automated deployment
    

    Manual deployment with native commands

    Manual deployment procedure

    Install redis on all nodes and enable the cluster feature
    Run MEET on the nodes so that all nodes can communicate with each other
    Assign slot ranges to the master nodes
    Define the master/slave relationship for each node
    

    Case study: deploying a redis cluster manually with native commands

    Install redis on all nodes and enable the cluster feature

    #Run the same steps below on all 6 nodes
    [root@centos8 ~]# yum -y install redis
    #Edit the configuration file manually
    [root@centos8 ~]# vim /etc/redis.conf
    bind 0.0.0.0
    masterauth 123456 #Recommended; without it, later master/slave replication fails and this would have to be configured afterwards
    requirepass 123456
    cluster-enabled yes #Uncomment this line; the cluster must be enabled, after which the redis process carries a cluster tag
    cluster-config-file nodes-6379.conf #Uncomment this line; this cluster state file records master/slave relationships and slot ranges and is created and maintained automatically by redis cluster
    cluster-require-full-coverage no #Defaults to yes; setting it to no prevents a single unavailable node from making the whole cluster unavailable
    #Or modify the configuration file in one batch
    [root@centos8 ~]# sed -i.bak -e 's/bind 127.0.0.1/bind 0.0.0.0/' -e '/masterauth/a masterauth 123456' -e '/# requirepass/a requirepass 123456' -e '/# cluster-enabled yes/a cluster-enabled yes' -e '/# cluster-config-file nodes-6379.conf/a cluster-config-file nodes-6379.conf' -e '/cluster-require-full-coverage yes/c cluster-require-full-coverage no' /etc/redis.conf
    
    [root@centos8 ~]# systemctl enable --now redis
    

    Run MEET so the nodes can communicate with each other

    #From any one node, run MEET against every other node
    [root@centos8 ~]# redis-cli -h 172.31.0.8 -a 123456 --no-auth-warning cluster meet 172.31.0.18 6379
    [root@centos8 ~]# redis-cli -h 172.31.0.8 -a 123456 --no-auth-warning cluster meet 172.31.0.28 6379
    [root@centos8 ~]# redis-cli -h 172.31.0.8 -a 123456 --no-auth-warning cluster meet 172.31.0.38 6379
    [root@centos8 ~]# redis-cli -h 172.31.0.8 -a 123456 --no-auth-warning cluster meet 172.31.0.48 6379
    [root@centos8 ~]# redis-cli -h 172.31.0.8 -a 123456 --no-auth-warning cluster meet 172.31.0.58 6379
    #All nodes can now see and talk to each other
    [root@centos8 ~]# redis-cli -h 172.31.0.8 -a 123456 --no-auth-warning cluster nodes
    a177c5cbc2407ebb6230ea7e2a7de914bf8c2dab 172.31.0.8:6379@16379 myself,master - 0
    1602515365000 3 connected
    ...
    #Keys cannot be created yet because no slots are assigned
    [root@centos8 ~]# redis-cli -a 123456 --no-auth-warning set name long
    (error) CLUSTERDOWN Hash slot not served
    #Check the current state
    [root@centos8 ~]# redis-cli -h 172.31.0.8 -a 123456 --no-auth-warning cluster info
    cluster_state:fail
    cluster_slots_assigned:0  #no slots assigned yet
    cluster_slots_ok:0
    cluster_slots_pfail:0
    cluster_slots_fail:0
    cluster_known_nodes:6
    cluster_size:0  #no cluster members yet
    ...
    

    Assign slot ranges to the master nodes

    #Create a script that adds the slots
    [root@centos8 ~]# cat addslot.sh
    #!/bin/bash
    #
    # Author: xuanlv
    # Date: 2021-07-03
    host=$1
    port=$2
    start=$3
    end=$4
    pass=123456
    
    for slot in `seq ${start} ${end}`;do
        echo slot:$slot
        redis-cli -h ${host} -p $port -a ${pass} --no-auth-warning cluster addslots ${slot}
    done
    
    #Assign slots to the three masters; 16384/3 = 5461.33, so one master gets 5462 slots and the other two get 5461
    [root@centos8 ~]# bash addslot.sh 172.31.0.8 6379 0 5461
    [root@centos8 ~]# bash addslot.sh 172.31.0.18 6379 5462 10922
    [root@centos8 ~]# bash addslot.sh 172.31.0.28 6379 10923 16383
    
    #After the first master has been assigned its slots, the following can be seen
    [root@centos8 ~]# redis-cli -a 123456 --no-auth-warning cluster info
    cluster_state:ok
    cluster_slots_assigned:5462 #number of slots assigned
    cluster_slots_ok:5462
    cluster_slots_pfail:0
    cluster_slots_fail:0
    cluster_known_nodes:6
    cluster_size:1 #one master has joined the cluster
    ...
    #cluster nodes shows the same after the first master's slots are assigned
    [root@centos8 ~]# redis-cli -a 123456 --no-auth-warning cluster nodes
    a177c5cbc2407ebb6230ea7e2a7de914bf8c2dab 172.31.0.8:6379@16379 myself,master - 0
    1602516039000 3 connected 0-5461
    ...
    #Once all three nodes have slots assigned, keys can be created
    [root@centos8 ~]# redis-cli -a 123456 --no-auth-warning set name long
    (error) MOVED 5798 172.31.0.18:6379
    [root@centos8 ~]# redis-cli -h 172.31.0.18 -a 123456 --no-auth-warning set name king
    OK
    [root@centos8 ~]# redis-cli -h 172.31.0.18 -a 123456 --no-auth-warning get name
    "king"
    #After all three masters have their slots, the output below still shows every node as a master
    [root@centos8 ~]# redis-cli -h 172.31.0.8 -a 123456 --no-auth-warning cluster nodes
    a177c5cbc2407ebb6230ea7e2a7de914bf8c2dab 172.31.0.8:6379@16379 myself,master - 0
    1602516633000 3 connected 0-5461
    97c5dcc3f33c2fc75c7fdded25d05d2930a312c0 172.31.0.18:6379@16379 master - 0
    1602516635862 1 connected 5462-10922
    
    #After all three masters have their slots, the cluster is up
    [root@centos8 ~]# redis-cli -h 172.31.0.8 -a 123456 --no-auth-warning cluster info
    cluster_state:ok
    cluster_slots_assigned:16384
    cluster_slots_ok:16384
    cluster_slots_pfail:0
    cluster_slots_fail:0
    cluster_known_nodes:6
    cluster_size:3 #three masters
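The 5462/5461/5461 split used above generalizes to any number of masters. A small Python sketch of computing contiguous slot ranges, spreading the remainder over the first masters (the function name is illustrative):

```python
def slot_ranges(n_masters: int, total: int = 16384):
    """Split slots 0..total-1 into n contiguous ranges of near-equal size."""
    base, rem = divmod(total, n_masters)
    ranges, start = [], 0
    for i in range(n_masters):
        size = base + (1 if i < rem else 0)  # first `rem` masters get one extra slot
        ranges.append((start, start + size - 1))
        start += size
    return ranges

print(slot_ranges(3))  # [(0, 5461), (5462, 10922), (10923, 16383)]
```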
    

    Define the master/slave relationship for each node

    #Using the master IDs shown by cluster nodes above, run the commands below to attach each slave to its
    master, creating three master/slave pairs
    #Make 172.31.0.38 a slave of 172.31.0.8, referenced by 172.31.0.8's node ID
    [root@centos8 ~]# redis-cli -h 172.31.0.38 -a 123456 --no-auth-warning cluster replicate a177c5cbc2407ebb6230ea7e2a7de914bf8c2dab
    OK
    #Make 172.31.0.48 a slave of 172.31.0.18, referenced by 172.31.0.18's node ID
    [root@centos8 ~]# redis-cli -h 172.31.0.48 -a 123456 --no-auth-warning cluster replicate 97c5dcc3f33c2fc75c7fdded25d05d2930a312c0
    OK
    #Make 172.31.0.58 a slave of 172.31.0.28, referenced by 172.31.0.28's node ID
    [root@centos8 ~]# redis-cli -h 172.31.0.58 -a 123456 --no-auth-warning cluster replicate 4f146b1ac51549469036a272c60ea97f065ef832
    OK
    #After the first master/slave pair is created, the following can be seen
    [root@centos8 ~]# redis-cli -h 172.31.0.8 -a 123456 --no-auth-warning cluster nodes
    a177c5cbc2407ebb6230ea7e2a7de914bf8c2dab 172.31.0.8:6379@16379 myself,master - 0
    1602517124000 3 connected 0-5461
    97c5dcc3f33c2fc75c7fdded25d05d2930a312c0 172.31.0.18:6379@16379 master - 0
    1602517123000 1 connected 5462-10922
    cb20d58870fe05de8462787cf9947239f4bc5629 172.31.0.38:6379@16379 slave
    a177c5cbc2407ebb6230ea7e2a7de914bf8c2dab 0 1602517125709 3 connected
    779a24884dbe1ceb848a685c669ec5326e6c8944 172.31.0.48:6379@16379 master - 0
    1602517124689 4 connected
    07231a50043d010426c83f3b0788e6b92e62050f 172.31.0.58:6379@16379 master - 0
    1602517123676 5 connected
    4f146b1ac51549469036a272c60ea97f065ef832 172.31.0.28:6379@16379 master - 0
    1602517123000 2 connected 10923-16383
    
    #Replication status after the first pair is created
    [root@centos8 ~]# redis-cli -h 172.31.0.8 -a 123456 --no-auth-warning info replication
    # Replication
    role:master
    connected_slaves:1
    slave0:ip=172.31.0.38,port=6379,state=online,offset=322,lag=1
    
    [root@centos8 ~]# redis-cli -h 172.31.0.38 -a 123456 --no-auth-warning info replication
    # Replication
    role:slave
    master_host:172.31.0.8
    master_port:6379
    master_link_status:up
    ...
    #After all three master/slave pairs are created, the final result looks like this
    [root@centos8 ~]# redis-cli -h 172.31.0.8 -a 123456 --no-auth-warning cluster nodes
    
    [root@centos8 ~]# redis-cli -h 172.31.0.8 -a 123456 --no-auth-warning cluster info
    cluster_state:ok
    ...
    [root@centos8 ~]# redis-cli -h 172.31.0.8 -a 123456 --no-auth-warning info replication
    # Replication
    role:master
    connected_slaves:1
    slave0:ip=172.31.0.38,port=6379,state=online,offset=1022,lag=1
    ...
    [root@centos8 ~]# redis-cli -h 172.31.0.18 -a 123456 --no-auth-warning info replication
    # Replication
    role:master
    connected_slaves:1
    slave0:ip=172.31.0.48,port=6379,state=online,offset=182,lag=1
    ...
    [root@centos8 ~]# redis-cli -h 172.31.0.28 -a 123456 --no-auth-warning info replication
    # Replication
    role:master
    connected_slaves:1
    slave0:ip=172.31.0.58,port=6379,state=online,offset=252,lag=0
    ...
    #Check the master/slave relationships and slot assignments
    [root@centos8 ~]# redis-cli -h 172.31.0.28 -a 123456 --no-auth-warning cluster slots
    1) 1) (integer) 10923
       2) (integer) 16383
       3) 1) "172.31.0.28"
          2) (integer) 6379
          3) "4f146b1ac51549469036a272c60ea97f065ef832"
       4) 1) "172.31.0.58"
          2) (integer) 6379
          3) "07231a50043d010426c83f3b0788e6b92e62050f"
    2) 1) (integer) 0
       2) (integer) 5461
       3) 1) "172.31.0.8"
          2) (integer) 6379
          3) "a177c5cbc2407ebb6230ea7e2a7de914bf8c2dab"
       4) 1) "172.31.0.38"
          2) (integer) 6379
          3) "cb20d58870fe05de8462787cf9947239f4bc5629"
    3) 1) (integer) 5462
       2) (integer) 10922
       3) 1) "172.31.0.18"
          2) (integer) 6379
          3) "97c5dcc3f33c2fc75c7fdded25d05d2930a312c0"
       4) 1) "172.31.0.48"
          2) (integer) 6379
          3) "779a24884dbe1ceb848a685c669ec5326e6c8944"
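A cluster-aware client turns exactly this nested CLUSTER SLOTS reply into a routing table. A sketch using the reply above, hand-encoded as the nested arrays a Redis client library would return (the helper name is illustrative):

```python
# The CLUSTER SLOTS reply above, as nested arrays: [start, end, master, slave, ...]
reply = [
    [10923, 16383, ["172.31.0.28", 6379, "4f146b1ac51549469036a272c60ea97f065ef832"],
                   ["172.31.0.58", 6379, "07231a50043d010426c83f3b0788e6b92e62050f"]],
    [0, 5461,      ["172.31.0.8", 6379, "a177c5cbc2407ebb6230ea7e2a7de914bf8c2dab"],
                   ["172.31.0.38", 6379, "cb20d58870fe05de8462787cf9947239f4bc5629"]],
    [5462, 10922,  ["172.31.0.18", 6379, "97c5dcc3f33c2fc75c7fdded25d05d2930a312c0"],
                   ["172.31.0.48", 6379, "779a24884dbe1ceb848a685c669ec5326e6c8944"]],
]

def build_routing_table(slots_reply):
    """Map each (start, end) slot range to its master address and slave addresses."""
    table = {}
    for start, end, master, *slaves in slots_reply:
        table[(start, end)] = {
            "master": f"{master[0]}:{master[1]}",
            "slaves": [f"{s[0]}:{s[1]}" for s in slaves],
        }
    return table

table = build_routing_table(reply)
print(table[(5462, 10922)]["master"])  # 172.31.0.18:6379
```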
    

    Verify redis cluster access

    #The -c option connects in cluster mode
    [root@centos8 ~]# redis-cli -c -h 172.31.0.8 -a 123456 --no-auth-warning set name long
    OK
    [root@centos8 ~]# redis-cli -c -h 172.31.0.8 -a 123456 --no-auth-warning get name
    "long"
    
    #Connecting without cluster mode
    [root@centos8 ~]# redis-cli -h 172.31.0.8 -a 123456 --no-auth-warning get name
    (error) MOVED 5798 172.31.0.18:6379
    [root@centos8 ~]# redis-cli -h 172.31.0.18 -a 123456 --no-auth-warning get name
    "long"
    [root@centos8 ~]# redis-cli -h 172.31.0.28 -a 123456 --no-auth-warning get name
    (error) MOVED 5798 172.31.0.18:6379
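A non-cluster client has to handle that MOVED redirection itself. A minimal sketch of parsing the error into something actionable (the function name is illustrative; real client libraries receive the message without the `(error)` prefix, which this sketch also accepts):

```python
def parse_moved(err: str):
    """Parse a 'MOVED <slot> <host>:<port>' reply into (slot, host, port)."""
    parts = err.replace("(error) ", "").split()
    assert parts[0] == "MOVED", "not a MOVED redirection"
    host, port = parts[2].rsplit(":", 1)
    return int(parts[1]), host, int(port)

print(parse_moved("(error) MOVED 5798 172.31.0.18:6379"))  # (5798, '172.31.0.18', 6379)
```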
    

    Case study: redis cluster deployment with Redis 5

    Official documentation: https://redis.io/topics/cluster-tutorial
    

    redis cluster related commands

    Example: help for the --cluster option

    [root@centos8 ~]# redis-cli --cluster help
    

    Example: help for the CLUSTER command

    [root@centos8 ~]# redis-cli CLUSTER HELP
    

    Environment preparation for creating a redis cluster

    Every redis node should use the same redis version, the same password, and the same hardware configuration
    All redis servers must hold no data

    Prepare six hosts with the following addresses:

    172.31.0.8
    172.31.0.18
    172.31.0.28
    172.31.0.38
    172.31.0.48
    172.31.0.58
    

    Enable the redis cluster configuration

    Run the following on all 6 hosts

    [root@centos8 ~]# yum -y install redis
    

    On every node, edit the redis configuration; the cluster parameters must be enabled

    #Edit the configuration file manually
    [root@redis-node1 ~]# vim /etc/redis.conf
    bind 0.0.0.0
    masterauth 123456 #Recommended; without it, later master/slave replication fails and this would have to be configured afterwards
    requirepass 123456
    cluster-enabled yes #Uncomment this line; the cluster must be enabled, after which the redis process carries a cluster tag
    cluster-config-file nodes-6379.conf #Uncomment this line; this cluster state file records master/slave
    relationships and slot ranges and is created and maintained automatically by redis cluster
    cluster-require-full-coverage no #Defaults to yes; setting it to no prevents a single unavailable node
    from making the whole cluster unavailable
    #Or run the following command to modify everything in one batch
    [root@redis-node1 ~]# sed -i.bak -e 's/bind 127.0.0.1/bind 0.0.0.0/' -e '/masterauth/a masterauth 123456' -e '/# requirepass/a requirepass 123456' -e '/# cluster-enabled yes/a cluster-enabled yes' -e '/# cluster-config-file nodes-6379.conf/a cluster-config-file nodes-6379.conf' -e '/cluster-require-full-coverage yes/c cluster-require-full-coverage no' /etc/redis.conf
    
    [root@redis-node1 ~]# systemctl enable --now redis
    

    Verify the current Redis service state:

    #The cluster bus port 16379 is now open; the bus port = redis port + 10000
    [root@centos8 ~]# ss -ntl
    
    #Note the [cluster] tag on the process
    [root@centos8 ~]# ps -ef|grep redis
    redis 1939 1 0 10:54 ? 00:00:00 /usr/bin/redis-server 0.0.0.0:6379 [cluster]
    

    Create the cluster (recommended method)

    #The redis-cli option --cluster-replicas 1 means each master gets one slave node
    [root@redis-node1 ~]# redis-cli -a 123456 --cluster create 172.31.0.8:6379 172.31.0.18:6379 172.31.0.28:6379 172.31.0.38:6379 172.31.0.48:6379 172.31.0.58:6379 --cluster-replicas 1
    Warning: Using a password with '-a' or '-u' option on the command line interface
    may not be safe.
    >>> Performing hash slots allocation on 6 nodes...
    Master[0] -> Slots 0 - 5460
    Master[1] -> Slots 5461 - 10922
    Master[2] -> Slots 10923 - 16383
    ...
    Can I set the above configuration? (type 'yes' to accept): yes #type yes to create the cluster automatically
    ...
    [OK] All nodes agree about slots configuration. #slot allocation completed on all nodes
    >>> Check for open slots...
    
    #The output above shows the 3 master/slave pairs
    master:172.31.0.8---slave:172.31.0.38
    master:172.31.0.18---slave:172.31.0.48
    master:172.31.0.28---slave:172.31.0.58
    

    Check the replication status

    [root@redis-node1 ~]# redis-cli -a 123456 -c INFO replication
    Warning: Using a password with '-a' or '-u' option on the command line interface
    may not be safe.
    # Replication
    role:master
    connected_slaves:1
    slave0:ip=172.31.0.38,port=6379,state=online,offset=896,lag=1
    
    [root@redis-node2 ~]# redis-cli -a 123456 INFO replication
    Warning: Using a password with '-a' or '-u' option on the command line interface
    may not be safe.
    # Replication
    role:master
    connected_slaves:1
    slave0:ip=172.31.0.48,port=6379,state=online,offset=980,lag=1
    

    Example: view the slave information of a given master node

    [root@centos8 ~]# redis-cli -a 123456 cluster nodes
    
    #The following command lists the slaves of the given master;
    #a177c5cbc2407ebb6230ea7e2a7de914bf8c2dab is the master's node ID
    [root@centos8 ~]# redis-cli -a 123456 cluster slaves a177c5cbc2407ebb6230ea7e2a7de914bf8c2dab
    

    Verify the cluster state

    [root@redis-node1 ~]# redis-cli -a 123456 CLUSTER INFO
    Warning: Using a password with '-a' or '-u' option on the command line interface
    may not be safe.
    cluster_state:ok
    cluster_slots_assigned:16384
    cluster_slots_ok:16384
    cluster_slots_pfail:0
    cluster_slots_fail:0
    cluster_known_nodes:6 #number of nodes
    cluster_size:3 #three master/slave groups
    ...
    #Check the cluster state from any node
    [root@redis-node1 ~]# redis-cli -a 123456 --cluster info 172.31.0.38:6379
    Warning: Using a password with '-a' or '-u' option on the command line interface
    may not be safe.
    172.31.0.18:6379 (99720241...) -> 0 keys | 5462 slots | 1 slaves.
    172.31.0.28:6379 (d34da866...) -> 0 keys | 5461 slots | 1 slaves.
    172.31.0.8:6379 (cb028b83...) -> 0 keys | 5461 slots | 1 slaves.
    [OK] 0 keys in 3 masters.
    0.00 keys per slot on average.
    

    View the cluster node mapping

    [root@redis-node1 ~]# redis-cli -a 123456 CLUSTER NODES
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    
    [root@redis-node1 ~]# redis-cli -a 123456 --cluster check 172.31.0.38:6379
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    172.31.0.18:6379 (99720241...) -> 0 keys | 5462 slots | 1 slaves.
    172.31.0.28:6379 (d34da866...) -> 0 keys | 5461 slots | 1 slaves.
    172.31.0.8:6379 (cb028b83...) -> 0 keys | 5461 slots | 1 slaves.
    

    Verify writing keys to the cluster

    Writing a key to the redis cluster

    #The hash algorithm maps this key to a slot owned by a different node
    [root@redis-node1 ~]# redis-cli -a 123456 -h 172.31.0.8 SET key1 values1
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    (error) MOVED 9189 172.31.0.18:6379
    
    #The slot is not on the current node, so the write is refused
    #Writing to the node that owns the slot succeeds
    [root@redis-node1 ~]# redis-cli -a 123456 -h 172.31.0.18 SET key1 values1
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    OK
    [root@redis-node1 ~]# redis-cli -a 123456 -h 172.31.0.18 GET key1
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    "values1"
    
    #On the corresponding slave, KEYS * works, but GET key1 fails; GET key1 must be run on the master
    [root@redis-node1 ~]# redis-cli -a 123456 -h 172.31.0.48 KEYS "*"
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    1) "key1"
    [root@redis-node1 ~]# redis-cli -a 123456 -h 172.31.0.48 GET key1
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    (error) MOVED 9189 172.31.0.18:6379
    

    Computing which slot a key belongs to

    [root@centos8 ~]# redis-cli -h 172.31.0.8 -a 123456 --no-auth-warning cluster nodes
    
    #Compute the slot the key hello maps to
    [root@centos8 ~]# redis-cli -h 172.31.0.8 -a 123456 --no-auth-warning cluster keyslot hello
    (integer) 866
    [root@centos8 ~]# redis-cli -h 172.31.0.8 -a 123456 --no-auth-warning set hello king
    OK
    [root@centos8 ~]# redis-cli -h 172.31.0.8 -a 123456 --no-auth-warning cluster keyslot name
    (integer) 5798
    [root@centos8 ~]# redis-cli -h 172.31.0.8 -a 123456 --no-auth-warning set name long
    (error) MOVED 5798 172.31.0.18:6379
    [root@centos8 ~]# redis-cli -h 172.31.0.18 -a 123456 --no-auth-warning set name long
    OK
    [root@centos8 ~]# redis-cli -h 172.31.0.18 -a 123456 --no-auth-warning get name
    "long"
    
    #Use the -c option to connect in cluster mode
    [root@centos8 ~]# redis-cli -c -h 172.31.0.8 -a 123456 --no-auth-warning
    172.31.0.8:6379> cluster keyslot linux
    (integer) 12299
    172.31.0.8:6379> set linux love
    -> Redirected to slot [12299] located at 172.31.0.28:6379
    OK
    172.31.0.28:6379> get linux
    "love"
    172.31.0.28:6379> exit
    [root@centos8 ~]# redis-cli -h 172.31.0.28 -a 123456 --no-auth-warning get linux
    "love"
    

    Writing to a Redis Cluster from a Python script

    Project page:

    https://github.com/Grokzen/redis-py-cluster
    

    Example:

    [root@redis-node1 ~]# yum -y install python3
    [root@redis-node1 ~]# pip3 install redis-py-cluster
    [root@redis-node1 ~]# vim redis_cluster_test.py
    #!/usr/bin/env python3
    from rediscluster import RedisCluster
    # Any subset of cluster nodes works as startup nodes; the client discovers the rest
    startup_nodes = [
        {"host":"172.31.0.8", "port":6379},
        {"host":"172.31.0.18", "port":6379},
        {"host":"172.31.0.28", "port":6379},
        {"host":"172.31.0.38", "port":6379},
        {"host":"172.31.0.48", "port":6379},
        {"host":"172.31.0.58", "port":6379}
    ]
    # decode_responses=True makes GET return str instead of bytes
    redis_conn = RedisCluster(startup_nodes=startup_nodes, password='123456', decode_responses=True)
    # Write 10000 keys; the client routes each key to the node that owns its slot
    for i in range(0, 10000):
        redis_conn.set('key'+str(i),'value'+str(i))
        print('key'+str(i)+':',redis_conn.get('key'+str(i)))
    
    [root@redis-node1 ~]# chmod +x redis_cluster_test.py
    [root@redis-node1 ~]# ./redis_cluster_test.py
    ......
    key9998: value9998
    key9999: value9999
    
    #Verify the data
    [root@redis-node1 ~]# redis-cli -a 123456 -h 172.31.0.8
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    172.31.0.8:6379> DBSIZE
    (integer) 3331
    172.31.0.8:6379> GET key1
    (error) MOVED 9189 172.31.0.18:6379
    172.31.0.8:6379> GET key2
    "value2"
    172.31.0.8:6379> GET key3
    "value3"
    172.31.0.8:6379> KEYS *
    ......
    3329) "key7832"
    3330) "key2325"
    3331) "key2880"
    172.31.0.8:6379>
    
    [root@redis-node1 ~]# redis-cli -a 123456 -h 172.31.0.18 DBSIZE
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    (integer) 3340
    [root@redis-node1 ~]# redis-cli -a 123456 -h 172.31.0.18 GET key1
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    "value1"
    [root@redis-node1 ~]# redis-cli -a 123456 -h 172.31.0.28 DBSIZE
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    (integer) 3329
    [root@redis-node1 ~]# redis-cli -a 123456 -h 172.31.0.18 GET key5
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    "value5"
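The roughly even DBSIZE split seen above follows directly from the slot math. A self-contained sketch (re-implementing Redis's CRC16-CCITT/XMODEM checksum) that counts how keys key0..key9999 fall across the three ranges assigned by `redis-cli --cluster create`; the exact counts depend on the checksum, so only the shape is claimed here:

```python
def crc16(data: bytes) -> int:
    # CRC16-CCITT (XMODEM): the checksum Redis Cluster uses for key slots
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
        crc &= 0xFFFF
    return crc

# Slot ranges assigned by `redis-cli --cluster create` to the three masters
RANGES = [(0, 5460), (5461, 10922), (10923, 16383)]

counts = [0, 0, 0]
for i in range(10000):
    slot = crc16(f"key{i}".encode()) % 16384
    for idx, (start, end) in enumerate(RANGES):
        if start <= slot <= end:
            counts[idx] += 1
            break

print(counts)  # roughly a third of the keys per master, matching the DBSIZE outputs above
```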
    

    Simulate a master failure: the corresponding slave is automatically promoted to the new master

    #Simulate a failure on node2; the failover takes a few seconds
    [root@redis-node2 ~]# tail -f /var/log/redis/redis.log
    [root@redis-node2 ~]# redis-cli -a 123456
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    127.0.0.1:6379> shutdown
    not connected> exit
    [root@redis-node2 ~]# ss -ntl
    
    [root@redis-node2 ~]# redis-cli -a 123456 --cluster info 172.31.0.8:6379
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    Could not connect to Redis at 172.31.0.18:6379: Connection refused
    172.31.0.8:6379 (cb028b83...) -> 3331 keys | 5461 slots | 1 slaves.
    172.31.0.48:6379 (d04e524d...) -> 3340 keys | 5462 slots | 0 slaves. #172.31.0.48 is the new master
    172.31.0.28:6379 (d34da866...) -> 3329 keys | 5461 slots | 1 slaves.
    [OK] 10000 keys in 3 masters.
    0.61 keys per slot on average.
    
    [root@redis-node2 ~]# redis-cli -a 123456 --cluster check 172.31.0.8:6379
    
    [root@redis-node2 ~]# redis-cli -a 123456 -h 172.31.0.48
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    172.31.0.48:6379> INFO replication
    # Replication
    role:master
    connected_slaves:0
    ...
    #Bring the failed node2 back; it automatically becomes a slave node
    [root@redis-node2 ~]# systemctl start redis
    
    #The auto-generated cluster state file shows node2 is now a slave
    [root@redis-node2 ~]# cat /var/lib/redis/nodes-6379.conf
    
    [root@redis-node2 ~]# redis-cli -a 123456 -h 172.31.0.48
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    172.31.0.48:6379> INFO replication
    # Replication
    role:master
    connected_slaves:1
    slave0:ip=172.31.0.18,port=6379,state=online,offset=2912564,lag=1
    

    Case study: redis cluster deployment with Redis 4

    Prepare the basic redis cluster configuration

    1. Every redis node uses the same hardware configuration, the same password, and the same redis version
    2. All redis servers must hold no data
    3. Prepare three CentOS 7 hosts with redis compiled and installed; each host runs two redis instances, on ports 6379 and 6380, simulating 6 redis instances
    
    172.31.0.7:6379|6380
    172.31.0.17:6379|6380
    172.31.0.27:6379|6380
    

    Redis cluster node maintenance

    Once a redis cluster is running, hardware failures, network re-planning, business growth and other
    factors will sooner or later require adjustments, such as adding redis nodes, removing nodes, migrating nodes, or replacing servers. Adding and removing nodes involves redistributing the existing slots and migrating data.

    Cluster maintenance: dynamic scale-out

    Case study: business is growing rapidly, and the existing three-master/three-slave redis cluster may no
    longer satisfy the required write concurrency, so the company urgently purchased two servers, 172.31.0.68 and 172.31.0.78, which must be added to the cluster dynamically without affecting service or losing data.

    Note: in production an odd number of master nodes (e.g. 3, 5, 7) is generally recommended to prevent split-brain

    Preparing the nodes to be added

    The new redis nodes must run the same redis version with the same configuration as the existing nodes; then start redis on both new machines, which will become one master and one slave.

    #Configure the node7 host
    [root@redis-node7 ~]# yum -y install redis
    [root@redis-node7 ~]# sed -i.bak -e 's/bind 127.0.0.1/bind 0.0.0.0/' -e '/masterauth/a masterauth 123456' -e '/# requirepass/a requirepass 123456' -e '/# cluster-enabled yes/a cluster-enabled yes' -e '/# cluster-config-file nodes-6379.conf/a cluster-config-file nodes-6379.conf' -e '/cluster-require-full-coverage yes/c cluster-require-full-coverage no' /etc/redis.conf
    
    [root@redis-node7 ~]# systemctl enable --now redis
    
    #Configure the node8 host
    [root@redis-node8 ~]# yum -y install redis
    [root@redis-node8 ~]# sed -i.bak -e 's/bind 127.0.0.1/bind 0.0.0.0/' -e '/masterauth/a masterauth 123456' -e '/# requirepass/a requirepass 123456' -e '/# cluster-enabled yes/a cluster-enabled yes' -e '/# cluster-config-file nodes-6379.conf/a cluster-config-file nodes-6379.conf' -e '/cluster-require-full-coverage yes/c cluster-require-full-coverage no' /etc/redis.conf
    
    [root@redis-node8 ~]# systemctl enable --now redis
    

    Add the new master node to the cluster

    Use the command below; it takes the new redis node's IP and port plus the IP:port of any node already in the cluster

    add-node new_host:new_port existing_host:existing_port [--slave --master-id
    <arg>]
    #Explanation:
    new_host:new_port #IP and port of the host being added
    existing_host:existing_port #IP and port of any node already in the cluster
    

    Redis 3/4 syntax:

    Redis 5 syntax (the target server must be clean, holding no data):

    #Add the new host 172.31.0.68 to the cluster; 172.31.0.58 below can be any node already in the cluster
    [root@redis-node1 ~]# redis-cli -a 123456 --cluster add-node 172.31.0.68:6379 172.31.0.58:6379
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    >>> Adding node 172.31.0.68:6379 to cluster 172.31.0.58:6379
    
    #The node has joined successfully, but it has no slots and no slave yet; note the new node joins as a master
    [root@redis-node1 ~]# redis-cli -a 123456 --cluster info 172.31.0.8:6379
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    172.31.0.8:6379 (cb028b83...) -> 6672 keys | 5461 slots | 1 slaves.
    172.31.0.68:6379 (d6e2eca6...) -> 0 keys | 0 slots | 0 slaves.
    172.31.0.48:6379 (d04e524d...) -> 6679 keys | 5462 slots | 1 slaves.
    172.31.0.28:6379 (d34da866...) -> 6649 keys | 5461 slots | 1 slaves.
    [OK] 20000 keys in 5 masters.
    1.22 keys per slot on average.
    
    [root@redis-node1 ~]# redis-cli -a 123456 --cluster check 172.31.0.8:6379
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    172.31.0.8:6379 (cb028b83...) -> 6672 keys | 5461 slots | 1 slaves.
    172.31.0.68:6379 (d6e2eca6...) -> 0 keys | 0 slots | 0 slaves.
    172.31.0.48:6379 (d04e524d...) -> 6679 keys | 5462 slots | 1 slaves.
    172.31.0.28:6379 (d34da866...) -> 6649 keys | 5461 slots | 1 slaves.
    [OK] 20000 keys in 5 masters.
    1.22 keys per slot on average.
    
    [root@redis-node1 ~]# cat /var/lib/redis/nodes-6379.conf
    d6e2eca6b338b717923f64866bd31d42e52edc98 172.31.0.68:6379@16379 master - 0 1582356107260 8 connected
    
    #Same result as shown above
    [root@redis-node1 ~]# redis-cli -a 123456 CLUSTER NODES
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    d6e2eca6b338b717923f64866bd31d42e52edc98 172.31.0.68:6379@16379 master - 0 1582356313200 8 connected
    
    #Check the cluster state
    [root@redis-node1 ~]# redis-cli -a 123456 CLUSTER INFO
    Warning: Using a password with '-a' or '-u' option on the command line interface
    may not be safe.
    cluster_state:ok
    cluster_slots_assigned:16384
    cluster_slots_ok:16384
    cluster_slots_pfail:0
    cluster_slots_fail:0
    cluster_known_nodes:7
    cluster_size:3
    

    Reassign slots to the new master

    A node newly added to the cluster is a master by default but owns no slots, so slots must be reassigned to it
    After adding the host, the cluster must be resharded to include it; without any slots the new host cannot store data.

    Note: it is advisable to back up the data before resharding, so it can be restored if the expansion goes wrong

    Redis 3/4 syntax:

    Redis 5 syntax:

    [root@redis-node1 ~]# redis-cli -a 123456 --cluster reshard <any-cluster-node>:6379
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    ...
    [OK] All 16384 slots covered.
    How many slots do you want to move (from 1 to 16384)? 4096 #number of slots to move = 16384 / number of masters
    What is the receiving node ID? d6e2eca6b338b717923f64866bd31d42e52edc98 #ID of the new master
    Please enter all the source node IDs.
    Type 'all' to use all the nodes as source nodes for the hash slots.
    Type 'done' once you entered all the source nodes IDs.
    Source node #1: all #Type all to take slots from every existing redis node automatically; when removing a host from the cluster, this prompt can instead be used to move all of that host's slots to other redis hosts
    ......
    Do you want to proceed with the proposed reshard plan (yes/no)? yes #confirm the assignment
    ...
    #Confirm the slot assignment succeeded
    [root@redis-node1 ~]# redis-cli -a 123456 --cluster check 172.31.0.8:6379
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    172.31.0.8:6379 (cb028b83...) -> 5019 keys | 4096 slots | 1 slaves.
    172.31.0.68:6379 (d6e2eca6...) -> 4948 keys | 4096 slots | 0 slaves.
    172.31.0.48:6379 (d04e524d...) -> 5033 keys | 4096 slots | 1 slaves.
    172.31.0.28:6379 (d34da866...) -> 5000 keys | 4096 slots | 1 slaves.
    

    Add a new slave node for the new master

    A further Redis server, 172.31.0.78, needs to be added to the cluster to cover the risk of 172.31.0.68
    running as a single machine, i.e. to give it high availability. There are two ways:

    Method 1: set the node as a slave directly when adding it to the cluster (recommended)

    Redis 3/4 syntax:

    Redis 5 syntax:

    redis-cli -a 123456 --cluster add-node 172.31.0.78:6379 <any-cluster-node>:6379 --cluster-slave --cluster-master-id <master ID>
    

    Example:

    #Check the current state
    [root@redis-node1 ~]# redis-cli -a 123456 --cluster check 172.31.0.8:6379
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    172.31.0.8:6379 (cb028b83...) -> 5019 keys | 4096 slots | 1 slaves.
    172.31.0.68:6379 (d6e2eca6...) -> 4948 keys | 4096 slots | 0 slaves.
    172.31.0.48:6379 (d04e524d...) -> 5033 keys | 4096 slots | 1 slaves.
    172.31.0.28:6379 (d34da866...) -> 5000 keys | 4096 slots | 1 slaves.
    [OK] 20000 keys in 4 masters.
    
    #Add it directly as a slave node
    [root@redis-node1 ~]# redis-cli -a 123456 --cluster add-node 172.31.0.78:6379 172.31.0.8:6379 --cluster-slave --cluster-master-id d6e2eca6b338b717923f64866bd31d42e52edc98
    
    #Verify it worked
    [root@redis-node1 ~]# redis-cli -a 123456 --cluster check 172.31.0.8:6379
    
    [root@centos8 ~]# redis-cli -a 123456 -h 172.31.0.8 --no-auth-warning cluster info
    cluster_state:ok
    cluster_slots_assigned:16384
    cluster_slots_ok:16384
    cluster_slots_pfail:0
    cluster_slots_fail:0
    cluster_known_nodes:8 #8 nodes
    cluster_size:4 #4 master/slave groups
    

    Method 2: add the new node to the cluster first, then convert it to a slave

    Redis 3/4:

    Redis 5:

    #Add 172.31.0.78:6379 to the cluster:
    [root@redis-node1 ~]# redis-cli -a 123456 --cluster add-node 172.31.0.78:6379 172.31.0.8:6379
    

    Change the new node's role to slave:

    It must be manually assigned to a master; otherwise its default role stays master.

    [root@redis-node1 ~]# redis-cli -h 172.31.0.78 -p 6379 -a 123456 #log in to the newly added node
    172.31.0.78:6379> CLUSTER NODES #list the cluster nodes and find the target master's ID
    172.31.0.78:6379> CLUSTER REPLICATE 886338acd50c3015be68a760502b239f4509881c #make it a slave; the syntax is CLUSTER REPLICATE <master-ID>
    
    172.31.0.78:6379> CLUSTER NODES #check the node list again to verify the node is now a slave of the chosen master
    

    Cluster maintenance: dynamic scale-in

    Case study:

    Server 172.31.0.8 has been in use for over three years, is past the vendor's warranty, and its disk is raising alerts. After the operations architect submitted a plan and discussed it with the development team, it was decided to temporarily take master 172.31.0.8 and its slave 172.31.0.38 out of the existing 8-node Redis cluster; the write capacity of the remaining servers is enough to support the business for the next 1-2 years

    Removal procedure:

    Adding a node means joining it to the cluster first and then assigning slots; removing a node works the
    other way around: first migrate the slots on the Redis node to be removed to other Redis nodes in the cluster, then remove the node itself. If a node's slots have not been fully migrated, deleting it fails with a message that it still holds data.

    Migrate the master's slots to the other masters

    Note: the Redis master being drained must end up with no slots or data left, otherwise the migration reports an error and is aborted.

    Redis 5 rule of thumb for moving a master's slots: enter the receiving node's ID first, then the ID of the node being drained

    Redis 3/4:

    Redis 5:

    #Check the current state
    [root@redis-node1 ~]# redis-cli -a 123456 --cluster check 172.31.0.8:6379
    
    #Connect to any cluster node; first move 1365 slots from 172.31.0.8 to the first master, 172.31.0.28
    [root@redis-node1 ~]# redis-cli -a 123456 --cluster reshard 172.31.0.18:6379
    ...
    [OK] All 16384 slots covered.
    How many slots do you want to move (from 1 to 16384)? 1365 #the 4096 slots are split roughly into thirds, one share per remaining master
    What is the receiving node ID? d34da8666a6f587283a1c2fca5d13691407f9462 #master 172.31.0.28
    Please enter all the source node IDs.
    Type 'all' to use all the nodes as source nodes for the hash slots.
    Type 'done' once you entered all the source nodes IDs.
    Source node #1: cb028b83f9dc463d732f6e76ca6bbcd469d948a7 #ID of the node being removed, 172.31.0.8
    Source node #2: done
    ...
    Do you want to proceed with the proposed reshard plan (yes/no)? yes #confirm
    
    #Non-interactive form
    #Move another 1365 slots from 172.31.0.8 to the second master, 172.31.0.48
    [root@redis-node1 ~]# redis-cli -a 123456 --cluster reshard 172.31.0.18:6379 --cluster-slots 1365 --cluster-from cb028b83f9dc463d732f6e76ca6bbcd469d948a7 --cluster-to d04e524daec4d8e22bdada7f21a9487c2d3e1057 --cluster-yes
    
    #Move the remaining 1366 slots from 172.31.0.8 to the third master, 172.31.0.68
    [root@redis-node1 ~]# redis-cli -a 123456 --cluster reshard 172.31.0.18:6379 --cluster-slots 1366 --cluster-from cb028b83f9dc463d732f6e76ca6bbcd469d948a7 --cluster-to d6e2eca6b338b717923f64866bd31d42e52edc98 --cluster-yes
    
    #Confirm that all slots have been moved off 172.31.0.8; its slave is reassigned automatically and becomes a slave of another master
    [root@redis-node1 ~]# redis-cli -a 123456 --cluster check 172.31.0.8:6379
    
    #The former slave 172.31.0.38 automatically becomes a slave of 172.31.0.68
    [root@redis-node1 ~]# redis-cli -a 123456 -h 172.31.0.68 INFO replication
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    # Replication
    role:master
    connected_slaves:2
    slave0:ip=172.31.0.78,port=6379,state=online,offset=129390,lag=0
    slave1:ip=172.31.0.38,port=6379,state=online,offset=129390,lag=0
    
    [root@centos8 ~]# redis-cli -a 123456 -h 172.31.0.8 --no-auth-warning cluster info
    cluster_state:ok
    cluster_slots_assigned:16384
    cluster_slots_ok:16384
    cluster_slots_pfail:0
    cluster_slots_fail:0
    cluster_known_nodes:8 #still 8 nodes in the cluster
    cluster_size:3 #one master/slave group fewer holding slots
    

    Remove the server from the cluster

    Although the slot migration is complete, the server's IP information is still present in the cluster, so it must also be deleted from the cluster.
    Note: the slots on a host must be cleared before deleting it, otherwise the deletion fails.

    Redis 3/4:

    Redis 5:

    redis-cli -a 123456 --cluster del-node <IP:port of any cluster node> <ID of the node to delete>
    

    Example (the master node must be deleted first, then its bound slave node):

    [root@redis-node1 ~]# redis-cli -a 123456 --cluster del-node 172.31.0.8:6379
    cb028b83f9dc463d732f6e76ca6bbcd469d948a7
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    >>> Removing node cb028b83f9dc463d732f6e76ca6bbcd469d948a7 from cluster
    172.31.0.8:6379
    >>> Sending CLUSTER FORGET messages to the cluster...
    >>> SHUTDOWN the node.
    
    #After the node is deleted, its redis process shuts down automatically
    #Delete the node's cluster state file
    [root@redis-node1 ~]# rm -f /var/lib/redis/nodes-6379.conf
    

    Delete the surplus slave node and verify the result

    #Verify the deletion succeeded
    [root@redis-node1 ~]# ss -ntl
    State Recv-Q Send-Q Local Address:Port Peer Address:Port
    LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
    LISTEN 0 128 [::]:22 [::]:*
    [root@redis-node1 ~]# redis-cli -a 123456 --cluster check 172.31.0.18:6379
    
    #Delete the surplus slave node
    [root@redis-node1 ~]# redis-cli -a 123456 --cluster del-node 172.31.0.18:6379 f9adcfb8f5a037b257af35fa548a26ffbadc852d
    
    #Delete the cluster state file
    [root@redis-node4 ~]# rm -f /var/lib/redis/nodes-6379.conf
    [root@redis-node1 ~]# redis-cli -a 123456 --cluster check 172.31.0.18:6379
    
    [root@redis-node1 ~]# redis-cli -a 123456 --cluster info 172.31.0.18:6379
    #View the cluster information
    [root@redis-node1 ~]# redis-cli -a 123456 -h 172.31.0.18 CLUSTER INFO
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    cluster_state:ok
    cluster_slots_assigned:16384
    cluster_slots_ok:16384
    cluster_slots_pfail:0
    cluster_slots_fail:0
    cluster_known_nodes:6 #only 6 nodes now
    cluster_size:3
    

    Cluster Maintenance: Importing Existing Redis Data into the Cluster

    The official tool migrates data into the cluster offline; some companies have also developed their own offline migration tools.
    Official tool: redis-cli --cluster import
    Third-party online migration tools (implemented by emulating a slave node): e.g. Vipshop's redis-migrate-tool, Wandoujia's redis-port

    Case:

    After the redis cluster is deployed, the previously existing data needs to be imported into it. Because Redis cluster shards keys across nodes, loading a traditional AOF file or RDB snapshot cannot meet this need, so the cluster data-import command is used instead.

    Note: the redis cluster must not already contain keys with the same names as the data being imported, otherwise the import fails or is interrupted.

    Prepare the base environment

    Before importing, disable password authentication on all redis servers, both the cluster nodes and the source Redis server, to avoid authentication mismatches that would block the import. The --cluster-replace option can be added to forcibly overwrite keys that already exist in the Redis cluster.

    #Disable password authentication on all nodes, both masters and slaves
    [root@redis ~]# redis-cli -h 172.31.0.18 -p 6379 -a 123456 --no-auth-warning
    CONFIG SET requirepass ""
    OK
    

    Perform the data import

    This imports the source Redis server's data directly into the redis cluster; use this method with caution!

    Redis 3/4:

    Redis 5:

    [root@redis ~]# redis-cli --cluster import <cluster node IP:PORT> --cluster-from <external Redis node IP:PORT> --cluster-copy --cluster-replace
    #With --cluster-copy alone, the keys being imported must not already exist in the cluster
    #If the cluster already holds the same keys and they should be replaced, combine --cluster-copy with --cluster-replace so the cluster's keys are overwritten by the external data
    
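    The interaction of --cluster-copy and --cluster-replace can be modeled with plain dictionaries (a simplified toy sketch; the real tool copies or moves each key over the network, it does not work on dicts):

    ```python
    def cluster_import(cluster, source, copy=False, replace=False):
        """Toy model of redis-cli --cluster import key handling."""
        for key, value in list(source.items()):
            if key in cluster and not replace:
                # Without --cluster-replace, a pre-existing key aborts the import.
                raise KeyError("BUSYKEY: key '%s' already exists in the cluster" % key)
            cluster[key] = value
        if not copy:
            source.clear()   # without --cluster-copy the keys are moved, not copied
        return cluster

    cluster, source = {"a": "1"}, {"a": "old", "b": "2"}
    cluster_import(cluster, source, copy=True, replace=True)
    print(cluster)   # {'a': 'old', 'b': '2'}
    print(source)    # unchanged, because copy=True
    ```

    This mirrors the two comments above: copy alone fails on duplicate keys, while copy plus replace overwrites them.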

    Example: import data from a non-cluster node into the redis cluster

    #Generate data on the non-cluster node 172.31.0.78
    [root@centos8 ~]# hostname -I
    172.31.0.78
    [root@centos8 ~]# cat redis_test.sh
    #!/bin/bash
    #
    # For a real test, write 100k or 1M keys
    NUM=10
    PASS=123456
    for i in `seq $NUM`;do
        redis-cli -h 127.0.0.1 -a "$PASS" --no-auth-warning set testkey${i} testvalue${i}
        echo "testkey${i} testvalue${i} written"
    done
    echo "wrote $NUM keys to Redis"
    
    [root@centos8 ~]# bash redis_test.sh
    OK
    
    #Remove the password from the source host whose data will be imported
    [root@centos8 ~]# redis-cli -h 172.31.0.78 -p 6379 -a 123456 --no-auth-warning CONFIG SET requirepass ""
    
    #Remove the passwords from all cluster servers
    [root@centos8 ~]# redis-cli -h 172.31.0.8 -p 6379 -a 123456 --no-auth-warning CONFIG SET requirepass ""
    [root@centos8 ~]# redis-cli -h 172.31.0.18 -p 6379 -a 123456 --no-auth-warning CONFIG SET requirepass ""
    [root@centos8 ~]# redis-cli -h 172.31.0.28 -p 6379 -a 123456 --no-auth-warning CONFIG SET requirepass ""
    [root@centos8 ~]# redis-cli -h 172.31.0.38 -p 6379 -a 123456 --no-auth-warning CONFIG SET requirepass ""
    [root@centos8 ~]# redis-cli -h 172.31.0.48 -p 6379 -a 123456 --no-auth-warning CONFIG SET requirepass ""
    [root@centos8 ~]# redis-cli -h 172.31.0.58 -p 6379 -a 123456 --no-auth-warning CONFIG SET requirepass ""
    
    #Import the data into the cluster
    [root@centos8 ~]# redis-cli --cluster import 172.31.0.8:6379 --cluster-from 172.31.0.78:6379 --cluster-copy --cluster-replace
    >>> Importing data from 172.31.0.78:6379 to cluster 172.31.0.8:6379
    
    #Verify the data
    [root@centos8 ~]# redis-cli -h 172.31.0.8 keys '*'
    1) "testkey5"
    

    Cluster Skew

    After a redis cluster has been running for a while, it may become skewed: some node holds more data, consumes more memory, or receives more client requests than the others.
    Possible causes of skew:

    Uneven allocation of nodes and slots
    Large differences in the number of keys per slot
    bigkeys are present (use them sparingly)
    Inconsistent memory-related configuration across nodes
    Unbalanced hot data: when strict consistency is not required, use a local cache plus MQ
    
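    Skew caused by uneven key-to-slot distribution can be reasoned about with the cluster's own hashing rule, CRC16(key) mod 16384. A self-contained sketch (the CRC16 variant is CCITT/XModem, the one Redis Cluster uses; hash tags are ignored here for simplicity, and the three slot ranges match the A/B/C example earlier in this article):

    ```python
    def crc16(data: bytes) -> int:
        """CRC16-CCITT (XModem): poly 0x1021, init 0, no reflection."""
        crc = 0
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
        return crc

    def key_slot(key: str) -> int:
        """Slot a key maps to, ignoring hash tags for simplicity."""
        return crc16(key.encode()) % 16384

    # Count how many of 1000 test keys land on each of three masters.
    ranges = {"A": range(0, 5461), "B": range(5461, 10923), "C": range(10923, 16384)}
    load = {name: 0 for name in ranges}
    for i in range(1000):
        slot = key_slot("testkey%d" % i)
        for name, r in ranges.items():
            if slot in r:
                load[name] += 1
    print(load)   # roughly even counts when key names are well distributed
    ```

    When key names hash unevenly (or many keys share a hash tag), one range's count dominates and that master becomes the skewed node.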

    Get the number of keys stored in a given slot

    #redis-cli cluster countkeysinslot {slot number}
    

    Example: get the number of keys in specific slots

    [root@centos8 ~]# redis-cli -a 123456 cluster countkeysinslot 1
    (integer) 0
    [root@centos8 ~]# redis-cli -a 123456 cluster countkeysinslot 2
    (integer) 0
    [root@centos8 ~]# redis-cli -a 123456 cluster countkeysinslot 3
    (integer) 1
    

    Perform an automatic rebalancing of the slot distribution; this affects client access, so use with caution

    #redis-cli --cluster rebalance <cluster node IP:PORT>
    

    Example: perform automatic slot rebalancing

    [root@centos8 ~]# redis-cli -a 123456 --cluster rebalance 172.31.0.8:6379
    >>> Performing Cluster Check (using node 172.31.0.8:6379)
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    *** No rebalancing needed! All nodes are within the 2.00% threshold.
    
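    The 2.00% threshold mentioned in the rebalance output can be illustrated with a small check (an illustration of the idea only, not redis-cli's exact algorithm):

    ```python
    def needs_rebalance(slot_counts, threshold_pct=2.0, total_slots=16384):
        """True if any master deviates from the even share by more than threshold_pct."""
        expected = total_slots / len(slot_counts)
        return any(abs(count - expected) / expected * 100 > threshold_pct
                   for count in slot_counts)

    print(needs_rebalance([5461, 5461, 5462]))   # False - already balanced
    print(needs_rebalance([8192, 4096, 4096]))   # True  - one master holds half the slots
    ```

    Within the threshold, rebalance does nothing ("No rebalancing needed!"); beyond it, slots are moved toward the even share.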

    Find bigkeys; it is recommended to run this on a slave node

    #redis-cli --bigkeys
    

    Example: find the biggest keys

    [root@centos8 ~]# redis-cli -a 123456 --bigkeys
    # Scanning the entire keyspace to find biggest keys as well as
    # average sizes per key type. You can use -i 0.1 to sleep 0.1 sec
    # per 100 SCAN commands (not usually needed).
    [00.00%] Biggest string found so far 'key8811' with 9 bytes
    [26.42%] Biggest string found so far 'testkey1' with 10 bytes
    
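    What --bigkeys does can be mimicked over a plain dictionary (a toy sketch: real Redis scans incrementally with SCAN and sizes strings in bytes, containers by element count):

    ```python
    def find_bigkeys(db):
        """Track the biggest key per value type, like the redis-cli --bigkeys report."""
        biggest = {}
        for key, value in db.items():
            kind = type(value).__name__
            size = len(value)   # string length / container element count
            if kind not in biggest or size > biggest[kind][1]:
                biggest[kind] = (key, size)
        return biggest

    db = {"key8811": "123456789", "testkey1": "testvalue1", "tags": {"a", "b", "c"}}
    print(find_bigkeys(db))   # {'str': ('testkey1', 10), 'set': ('tags', 3)}
    ```

    Running it on a slave avoids the scan competing with production reads and writes on the master.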

    Limitations of redis cluster

    Client performance usually "decreases" somewhat
    Commands cannot span nodes: mget, keys, scan, flush, sinter, etc.
    Client maintenance is more complex: SDK and application overhead (e.g. more connection pools)
    Multiple databases are not supported: cluster mode has only db 0
    Replication supports only one level: no tree-shaped replication topology, no cascading replication
    Key transactions and Lua support are limited: the keys involved must live on one node; Lua scripts and transactions cannot span nodes
    

    Example: the cross-slot limitation

    [root@centos8 ~]#redis-cli -a 123456 mget key1 key2 key3
    Warning: Using a password with '-a' or '-u' option on the command line interface
    may not be safe.
    (error) CROSSSLOT Keys in request don't hash to the same slot
    
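    The usual workaround for CROSSSLOT errors is hash tags: when a key contains a non-empty {...} section, Redis Cluster hashes only that section, so keys sharing a tag are guaranteed to land in the same slot and can appear together in one MGET or transaction. A sketch of the tag-extraction rule:

    ```python
    def hash_tag(key: str) -> str:
        """Return the part of the key Redis Cluster actually hashes (hash tag rule)."""
        start = key.find("{")
        if start != -1:
            end = key.find("}", start + 1)
            if end > start + 1:      # the tag must be non-empty
                return key[start + 1:end]
        return key                   # no usable tag: the whole key is hashed

    print(hash_tag("{user:1}name"))   # user:1
    print(hash_tag("{user:1}age"))    # user:1  -> same slot, so MGET on both works
    print(hash_tag("key1"))           # key1    -> plain keys may land on different slots
    ```

    Naming the keys above {k}key1, {k}key2, {k}key3 would make that mget succeed, at the cost of concentrating those keys on a single node.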

    Redis cluster extension solutions

    Before the official Redis cluster feature appeared, several open-source cluster solutions were available for reference.
    Codis is a distributed Redis proxy solution developed by Wandoujia and implemented in Go. To upstream applications, connecting to Codis Proxy is essentially no different from connecting to a native Redis Server (apart from a list of unsupported commands); applications can use it just like a standalone Redis. Codis handles request forwarding, zero-downtime data migration, and everything else behind the scenes, all of it transparent to the client, which can simply treat the backend as a Redis service with unlimited memory.
    To clients, codis proxy behaves exactly like redis. The proxy itself is stateless and does not record where keys are stored; that mapping is kept in zookeeper. Codis proxy queries zookeeper for a key's location and forwards the request to a group for processing, where each group consists of one master and one or more slaves. Codis uses 1024 slots by default (redis cluster uses 16384) and places different slots' contents in different groups.

    Github: https://github.com/CodisLabs/codis/
    

    twemproxy

    Twemproxy is a proxy-based sharding solution open-sourced by Twitter. As a proxy, it accepts requests from multiple programs, forwards them to the backend Redis servers according to the configured algorithm, and returns results along the same path, solving the capacity limits of a single Redis instance. Twemproxy itself is a single point of failure and needs Keepalived for high availability. With Twemproxy, multiple servers can scale the redis service horizontally, effectively avoiding single points of failure. Although Twemproxy requires more hardware resources, costs some redis performance (about 20% in Twitter's tests), and does not support data migration, it improves the overall HA of the system; twemproxy also implements the memcached protocol.
    
    Github: https://github.com/twitter/twemproxy
    

    Errors

    Error when creating the cluster:

    [root@localhost ~]# redis-cli -a 123456 --cluster create 172.31.0.8:6379 172.31.0.18:6379 172.31.0.28:6379 172.31.0.38:6379 172.31.0.48:6379 172.31.0.58:6379 --cluster-replicas 1
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    [ERR] Node 172.31.0.38:6379 is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0.
    

    The cause is leftover data (and possibly cluster state) from a previous lab; the node must be emptied before retrying.
    Solution:

    # Log in to the node, clear its data, then retry creating the cluster
    127.0.0.1:6379> FLUSHDB
    OK
    # If the node already knows other nodes, also clear its cluster state
    127.0.0.1:6379> CLUSTER RESET
    OK
    

    Error when scaling out

    [root@localhost ~]# redis-cli -a 123456 --cluster add-node 172.31.0.168:6379 172.31.0.8:6379
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    >>> Adding node 172.31.0.168:6379 to cluster 172.31.0.8:6379
    >>> Performing Cluster Check (using node 172.31.0.8:6379)
    S: 5506cc259ad099a3d132bf268715fbc02ba8095c 172.31.0.108:6379
       slots: (0 slots) slave
       replicates c0bb83d04850d8dc17244beccb4add9b857719d8
    S: 726bd0b53df2907877a42aa5586ec606b26ddb85 172.31.0.48:6379
       slots: (0 slots) slave
       replicates f931d3f9c2b0faedddb73c918958189153e331bb
    S: 6255d89a023b99fed2b274a6d495f6836b1ca908 172.31.0.58:6379
       slots: (0 slots) slave
       replicates 411ac2cb1216131424b10288fca069487d5e2638
    M: 411ac2cb1216131424b10288fca069487d5e2638 172.31.0.28:6379
       slots:[10923-16383] (5461 slots) master
       1 additional replica(s)
    M: c0bb83d04850d8dc17244beccb4add9b857719d8 172.31.0.38:6379
       slots:[0-5460] (5461 slots) master
       1 additional replica(s)
    M: f931d3f9c2b0faedddb73c918958189153e331bb 172.31.0.18:6379
       slots:[5461-10922] (5462 slots) master
       1 additional replica(s)
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    [ERR] Node 172.31.0.168:6379 is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0.
    

    The cause is leftover data or cluster state on the node; it must be emptied before retrying.
    Solution:

    # Log in to the node, clear its data, then retry adding it
    127.0.0.1:6379> FLUSHDB
    OK
    # If the node already knows other nodes, also clear its cluster state
    127.0.0.1:6379> CLUSTER RESET
    OK
    
  • Original link: https://www.cnblogs.com/xuanlv-0413/p/15085225.html