  • Redis cross-instance migration & migrating Redis to the cloud

    1) Cross-instance migration: copy db 11 of the source instance to db 30 of the target instance

    root@fe2e836e4470:/data# redis-cli -a pwd1 -n 11 keys '*' |while read key
    > do
    > echo "Copying $key"
    > redis-cli -a pwd1 -n 11 --raw dump $key |head -c -1 |
    > redis-cli -h <dst_ip> -p 6379 -a pwd2 -n 30 -x restore $key 0
    > done
    
    
    ## As a one-liner:
    root@fe2e836e4470:/data# redis-cli -a pwd1 -n 11 keys '*' |while read key; do echo "Copying $key"; redis-cli -a pwd1 -n 11 --raw dump $key |head -c -1 |redis-cli -h <dst_ip> -p 6379 -a pwd2 -n 30 -x restore $key 0; done
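    The `head -c -1` in the pipeline is essential: `redis-cli --raw dump` appends a trailing newline to the binary DUMP payload, and `head -c -1` strips that final byte so `restore` receives the exact serialized value. A quick illustration:

```shell
# --raw output ends with a newline; head -c -1 drops the final byte
# (negative byte counts are a GNU coreutils extension of head)
printf 'payload\n' | head -c -1          # emits: payload  (no trailing newline)
printf 'payload\n' | head -c -1 | wc -c  # 7 bytes: the newline is gone
```

    Two caveats worth noting: `keys '*'` blocks the server while it scans the whole keyspace, so on large production instances `redis-cli --scan` (incremental, non-blocking) is safer; and `restore $key 0` sets no expiration on the target, so keys with TTLs lose them unless you pass the remaining TTL as the second argument.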

    2) Migrating Redis to the cloud: moving to Alibaba Cloud

    Reference documentation

    a. Follow the redis-shake link in the reference documentation and download redis-shake.tar.gz locally

    b. Upload the downloaded redis-shake.tar.gz to the ECS instance hosting Redis, then copy it into the Redis container

        docker cp /tmp/redis-shake.tar.gz docker_redis_1:/data/

    c. Extract redis-shake.tar.gz

    leyao-slb02 docker # docker-compose exec redis bash
    root@fe2e836e4470:/data# tar -xvf redis-shake.tar.gz
    root@fe2e836e4470:/data# ls -ahl
    drwxr-xr-x 3 redis root  4.0K Jun 21 07:37 .
    drwxr-xr-x 1 root  root  4.0K Jun 10 07:45 ..
    -rw-r--r-- 1 redis users 2.4K Jun 13 15:48 ChangeLog
    -rw-r--r-- 1 redis root  8.6K Jun 21 06:44 redis-shake.conf
    -rwxr-xr-x 1 redis users  11M Jun 13 15:48 redis-shake.linux64
    -rw-r--r-- 1 redis root  3.7M Jun 21 06:01 redis-shake.tar.gz

    d. Edit the redis-shake configuration file

    leyao-slb02 docker # docker-compose exec redis bash
    root@fe2e836e4470:/data# vim redis-shake.conf
    
    ...
    source.address = localhost:6379
    source.password_raw = localRedisPwd
    target.address = r-uf65427cede42c14.redis.rds.aliyuncs.com:6379
    target.password_raw = yourALIredisPwd
    ...
    # leave the remaining parameters at their defaults
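    For orientation, the entries around the four edited lines look roughly like the sketch below. The key names match what the startup log echoes back (e.g. SourceType, TargetDBString, FilterDB), but treat the values as assumptions to adapt for your own setup:

```ini
# sketch of the relevant redis-shake.conf entries (1.6.x key names; values are examples)
source.type = standalone
source.address = localhost:6379
source.password_raw = localRedisPwd
target.type = standalone
target.address = r-uf65427cede42c14.redis.rds.aliyuncs.com:6379
target.password_raw = yourALIredisPwd
# -1 keeps each source db number on the target; set a db index to merge into one db
target.db = -1
# empty = sync all source dbs; set e.g. 11 to sync only that db
filter.db =
```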

    e. Start the migration with the following command

    leyao-slb02 docker # docker-compose exec redis bash
    root@fe2e836e4470:/data# ./redis-shake.linux64 -type=sync -conf=redis-shake.conf
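    For an unattended run, redis-shake can also be started in the background with its output redirected to a file, and the log polled until the full sync completes. A minimal sketch (the shake.log file name is an assumption, not from the original steps):

```shell
# Poll a redis-shake log file until the full (RDB) sync phase has finished.
wait_for_rdb_done() {
  # $1 = log file to watch; returns once "sync rdb done" appears in it
  until grep -q "sync rdb done" "$1"; do sleep 1; done
  echo "full sync finished, incremental phase running"
}

# Usage against a real run:
#   nohup ./redis-shake.linux64 -type=sync -conf=redis-shake.conf > shake.log 2>&1 &
#   wait_for_rdb_done shake.log
```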

    f. Watch the sync log to confirm progress: once "sync rdb done" appears, the full (RDB) sync has completed and replication enters the incremental phase.

    root@fe2e836e4470:/data# ./redis-shake.linux64 -type=sync -conf=redis-shake.conf
    2019/06/27 06:53:56 [WARN]
    ______________________________
                                            _         ______ |
                                          /   \___-=O'/|O'/__|
        redis-shake, here we go !! \_______          / | /    )
      /                             /        '/-==__ _/__|/__=-|  -GM
     /                             /         *              | |
    /                             /                        (o)
    ------------------------------
    if you have any problem, please visit https://github.com/alibaba/RedisShake/wiki/FAQ
    
    2019/06/27 06:53:56 [INFO] redis-shake configuration: {"Id":"redis-shake","LogFile":"","LogLevel":"info","SystemProfile":9310,"HttpProfile":9320,"NCpu":0,"Parallel":32,"SourceType":"standalone","SourceAddress":"localhost:6379","SourcePasswordRaw":"***","SourcePasswordEncoding":"","SourceVersion":0,"SourceAuthType":"auth","SourceParallel":1,"SourceTLSEnable":false,"TargetAddress":"r-uf65427cede42c14.redis.rds.aliyuncs.com:6379","TargetPasswordRaw":"***","TargetPasswordEncoding":"","TargetVersion":0,"TargetDBString":"-1","TargetAuthType":"auth","TargetType":"standalone","TargetTLSEnable":false,"RdbInput":["local"],"RdbOutput":"local_dump","RdbParallel":1,"RdbSpecialCloud":"","FakeTime":"","Rewrite":true,"FilterDB":"","FilterKey":[],"FilterSlot":[],"BigKeyThreshold":524288000,"Psync":false,"Metric":true,"MetricPrintLog":false,"HeartbeatUrl":"","HeartbeatInterval":3,"HeartbeatExternal":"test external","HeartbeatNetworkInterface":"","SenderSize":104857600,"SenderCount":5000,"SenderDelayChannelSize":65535,"KeepAlive":0,"PidPath":"","ScanKeyNumber":50,"ScanSpecialCloud":"","ScanKeyFile":"","Qps":200000,"ReplaceHashTag":false,"ExtraInfo":false,"SockFileName":"","SockFileSize":0,"SourceAddressList":["localhost:6379"],"TargetAddressList":["r-uf65427cede42c14.redis.rds.aliyuncs.com:6379"],"HeartbeatIp":"127.0.0.1","ShiftTime":0,"TargetRedisVersion":"4.0.11","TargetReplace":true,"TargetDB":-1,"Version":"improve-1.6.7,678f43481a4826764ed71fedd744a7ee23736536,go1.10.3,2019-06-13_23:48:39"}
    2019/06/27 06:53:56 [INFO] routine[0] starts syncing data from localhost:6379 to [r-uf65427cede42c14.redis.rds.aliyuncs.com:6379] with http[9321]
    2019/06/27 06:53:57 [INFO] dbSyncer[0] rdb file size = 3429472
    2019/06/27 06:53:57 [INFO] Aux information key:redis-ver value:5.0.5
    2019/06/27 06:53:57 [INFO] Aux information key:redis-bits value:64
    2019/06/27 06:53:57 [INFO] Aux information key:ctime value:1561618436
    2019/06/27 06:53:57 [INFO] Aux information key:used-mem value:27379792
    2019/06/27 06:53:57 [INFO] Aux information key:repl-stream-db value:0
    2019/06/27 06:53:57 [INFO] Aux information key:repl-id value:6641200d52e448927a79ce3e0a3cec641302da7f
    2019/06/27 06:53:57 [INFO] Aux information key:repl-offset value:0
    2019/06/27 06:53:57 [INFO] Aux information key:aof-preamble value:0
    2019/06/27 06:53:57 [INFO] db_size:1 expire_size:1
    2019/06/27 06:53:57 [INFO] db_size:3 expire_size:1
    2019/06/27 06:53:57 [INFO] db_size:9 expire_size:9
    2019/06/27 06:53:57 [INFO] db_size:7 expire_size:4
    2019/06/27 06:53:57 [INFO] db_size:6 expire_size:0
    2019/06/27 06:53:57 [INFO] db_size:6 expire_size:0
    2019/06/27 06:53:57 [INFO] Aux information key:lua value:-- Pop the first job off of the queue...
    local job = redis.call('lpop', KEYS[1])
    local reserved = false
    
    if(job ~= false) then
        -- Increment the attempt count and place job on the reserved queue...
        reserved = cjson.decode(job)
        reserved['attempts'] = reserved['attempts'] + 1
        reserved = cjson.encode(reserved)
        redis.call('zadd', KEYS[2], ARGV[1], reserved)
    end
    
    return {job, reserved}
    2019/06/27 06:53:57 [INFO] Aux information key:lua value:-- Get all of the jobs with an expired "score"...
    local val = redis.call('zrangebyscore', KEYS[1], '-inf', ARGV[1])
    
    -- If we have values in the array, we will remove them from the first queue
    -- and add them onto the destination queue in chunks of 100, which moves
    -- all of the appropriate jobs onto the destination queue very safely.
    if(next(val) ~= nil) then
        redis.call('zremrangebyrank', KEYS[1], 0, #val - 1)
    
        for i = 1, #val, 100 do
            redis.call('rpush', KEYS[2], unpack(val, i, math.min(i+99, #val)))
        end
    end
    
    return val
    2019/06/27 06:53:57 [INFO] Aux information key:lua value:return redis.call('exists',KEYS[1])<1 and redis.call('setex',KEYS[1],ARGV[2],ARGV[1])
    2019/06/27 06:53:57 [INFO] dbSyncer[0] total=3429472 -      3429472 [100%]  entry=35
    2019/06/27 06:53:57 [INFO] dbSyncer[0] sync rdb done
    2019/06/27 06:53:57 [WARN] dbSyncer[0] GetFakeSlaveOffset not enable when psync == false
    2019/06/27 06:53:57 [INFO] dbSyncer[0] Event:IncrSyncStart      Id:redis-shake
    2019/06/27 06:53:58 [INFO] dbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
    2019/06/27 06:53:59 [INFO] dbSyncer[0] sync:  +forwardCommands=7      +filterCommands=0      +writeBytes=34
    2019/06/27 06:54:00 [INFO] dbSyncer[0] sync:  +forwardCommands=6      +filterCommands=0      +writeBytes=27

    g. Log in to the Alibaba Cloud Redis instance and verify that the data has been synced
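    A simple sanity check is to compare key counts between source and target databases. A minimal sketch (the hosts, passwords, and db numbers in the usage comment are the placeholders from step 1, not real values):

```shell
# Compare two key counts and report; feed it the dbsize of each side.
check_counts() {
  # $1 = source key count, $2 = target key count
  if [ "$1" -eq "$2" ]; then
    echo "key counts match: $1"
  else
    echo "MISMATCH: source=$1 target=$2"
  fi
}

# Usage against live instances (repeat per migrated db):
#   check_counts "$(redis-cli -a pwd1 -n 11 dbsize)" \
#                "$(redis-cli -h <dst_ip> -p 6379 -a pwd2 -n 30 dbsize)"
```

    Matching counts do not prove the values are identical, but a mismatch immediately flags dropped keys or an incomplete sync.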

  • Original article: https://www.cnblogs.com/karl-python/p/11096096.html