Redis Use Cases
1. Performance: cache data that rarely changes
When a SQL query takes a particularly long time to run and its result changes infrequently, it is a perfect candidate for caching. Put the result into the cache, and later requests read from the cache instead, so they get an almost instant response.
2. Concurrency: use a cache under high concurrency
Under heavy concurrency, if every request goes straight to the database, the database runs into connection errors. In that situation Redis acts as a buffer: requests hit Redis first rather than the database directly (see the sketch below).
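Both points come down to the cache-aside pattern: read Redis first, and only fall back to the database on a miss. A minimal Python sketch, assuming the redis-py client installed later in this document and a hypothetical query_db() helper that runs the expensive SQL:

import json
import redis

r = redis.Redis(host='127.0.0.1', port=6379, db=0)

def query_db(sql):
    # hypothetical helper: runs the slow, rarely-changing SQL against the database
    raise NotImplementedError

def cached_query(sql, ttl=300):
    key = 'sql_cache:' + sql
    cached = r.get(key)
    if cached is not None:                  # cache hit: no database access at all
        return json.loads(cached)
    result = query_db(sql)                  # cache miss: query the database once
    r.set(key, json.dumps(result), ex=ttl)  # keep the result for ttl seconds
    return result

The TTL keeps stale results from living forever; tune it to how often the underlying data actually changes.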
Install Redis
wget http://download.redis.io/releases/redis-5.0.5.tar.gz
tar xzf redis-5.0.5.tar.gz
cd redis-5.0.5
make MALLOC=libc
make PREFIX=/usr/local/redis/ install
tree /usr/local/redis/bin/
/usr/local/redis/bin/
├── redis-benchmark // Redis benchmarking tool
├── redis-check-aof // checks the append-only log (appendonly.aof) for corruption
├── redis-check-rdb
├── redis-cli // Redis command-line client
├── redis-sentinel -> redis-server
└── redis-server // the Redis server daemon
0 directories, 6 files
Configure and Start
echo 'PATH=/usr/local/redis/bin/:$PATH' >> /etc/profile
source /etc/profile
which redis-server
cp /usr/local/lib/redis-5.0.5/redis.conf /usr/local/redis/conf/
redis-server /usr/local/redis/conf/redis.conf &  # start the server
redis-cli shutdown  # shut it down
In production, Redis should run as a daemon process and start together with the system on every boot.
Go into the Redis source directory and then into its utils directory; there you will find a redis_init_script file.
Copy this file to /etc/init.d/redis_6379:
cp /usr/local/lib/redis-5.0.5/utils/redis_init_script /etc/init.d/redis_6379
vim /usr/local/redis/conf/redis.6379.conf
daemonize yes # run redis as a daemon process
dir /usr/local/redis/var/ # where the persistence files are written
Start Redis with the system
At the top of the /etc/init.d/redis_6379 script, add two comment lines so that chkconfig can manage the service:
# chkconfig: 2345 90 10
# description: Redis is a persistent key-value database
Then register it for boot with chkconfig --add redis_6379.
Authentication
Modify the configuration file and restart:
vim redis.conf
requirepass password123
redis-cli shutdown
redis-server /usr/local/redis/conf/redis.conf &
[root@caoc-1 conf]# redis-cli
127.0.0.1:6379> set k v   # rejected: authentication is required
(error) NOAUTH Authentication required.
127.0.0.1:6379> auth password123   # authenticate
OK
127.0.0.1:6379> set k v   # now the command succeeds
OK
127.0.0.1:6379> get k
"v"
Alternatively, pass the password on the command line:
redis-cli -a password123
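The same password can be supplied from redis-py as well (the Python client installed later in this document). A minimal check, using the password123 value from the example above:

import redis

r = redis.Redis(host='127.0.0.1', port=6379, password='password123')
print(r.ping())  # True once the password is accepted; without it, commands are refused with NOAUTH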
Renaming and Disabling Commands
rename-command CONFIG "" # disable the CONFIG command
rename-command set sets # rename the SET command to SETS
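After a rename, clients must send the new name; convenience wrappers such as r.set() keep sending SET and will fail. A hedged redis-py sketch, assuming the rename above has been applied and the server restarted:

import redis

r = redis.Redis(host='127.0.0.1', port=6379, password='password123')
# r.set('k', 'v') would now fail with an unknown-command error, since SET was renamed
r.execute_command('SETS', 'k', 'v')  # send the renamed command verbatim
# r.config_get() would likewise fail, because CONFIG is disabled entirely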
Operating Redis from Python
pip install redis
# python
Python 2.7.5 (default, Oct 30 2018, 23:45:53)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import redis
>>> r = redis.Redis(host='127.0.0.1', port=6379, password='luciencao', db=0)
>>> pipe = r.pipeline()
>>> pipe.set('name','orange')
>>> pipe.execute()
[True]
>>> print(r.get('name'))
orange
>>>
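pipeline() queues commands on the client and ships them to the server in a single round trip; execute() returns one reply per queued command, in order. A short sketch (not a captured session), reusing the connection r from above:

pipe = r.pipeline()
pipe.set('counter', 0)
pipe.incr('counter')
pipe.incr('counter')
print(pipe.execute())  # [True, 1, 2]: one reply per command, one network round trip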
Redis Master-Slave Replication
Master:
bind 0.0.0.0
port 6379
appendonly yes
...
Slave:
bind 0.0.0.0
port 6379
appendonly yes
replicaof 192.168.1.19 6379
masterauth password
...
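After both nodes are restarted, replication can be verified with INFO replication (redis-cli info replication, or the equivalent redis-py call). A minimal sketch run on the slave host, using the placeholder password from the config above:

import redis

r = redis.Redis(host='127.0.0.1', port=6379, password='password')
repl = r.info('replication')
print(repl['role'])                # 'slave'
print(repl['master_host'])         # '192.168.1.19'
print(repl['master_link_status'])  # 'up' once the initial sync has completed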
Redis Cluster Configuration
The cluster uses six nodes: 3 masters, each with one corresponding slave. Create the matching 7000-7005 directories:
mkdir -p /home/redis-clusters/{7000,7001,7002,7003,7004,7005}/{conf,var,run,log}
[root@redis redis-clusters]# tree .
.
├── 7000
│ ├── conf
│ │ └── redis.7000.conf
│ ├── log
│ │ └── redis.7000.log
│ ├── run
│ └── var
├── 7001
│ ├── conf
│ │ └── redis.7001.conf
│ ├── log
│ │ └── redis.7001.log
│ ├── run
│ └── var
...
Cluster configuration file redis.7000.conf
Settings that differ per node:
port 7000
pidfile /home/redis-clusters/7000/run/redis_7000.pid
logfile /home/redis-clusters/7000/log/redis.7000.log
dir /home/redis-clusters/7000/var/
masterauth luciencao
requirepass luciencao
appendonly yes
Full configuration:
[root@caoc-1 redis-clusters]# egrep -v "^$|^#" 7000/conf/redis.7000.conf
bind 0.0.0.0
protected-mode yes
port 7000
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile /home/redis-clusters/7000/run/redis_7000.pid
loglevel notice
logfile /home/redis-clusters/7000/log/redis.7000.log
databases 16
always-show-logo yes
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /home/redis-clusters/7000/var/
masterauth luciencao
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
replica-priority 100
requirepass luciencao
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
cluster-enabled yes
cluster-config-file nodes-7000.conf
cluster-node-timeout 15000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
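Since the six node configurations differ only in the port number embedded in port, pidfile, logfile, dir and cluster-config-file, the remaining five files can be derived from the 7000 one. A small Python sketch, assuming the directory tree and the 7000 config created above:

base = '/home/redis-clusters'
with open('%s/7000/conf/redis.7000.conf' % base) as f:
    template = f.read()

for port in range(7001, 7006):
    # every literal occurrence of 7000 in this file is a port reference
    # (port, pidfile, logfile, dir, nodes file), so a plain replace is enough
    conf = template.replace('7000', str(port))
    with open('%s/%d/conf/redis.%d.conf' % (base, port, port), 'w') as f:
        f.write(conf)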
Start the services:
redis-server 7000/conf/redis.7000.conf
redis-server 7001/conf/redis.7001.conf
redis-server 7002/conf/redis.7002.conf
redis-server 7003/conf/redis.7003.conf
redis-server 7004/conf/redis.7004.conf
redis-server 7005/conf/redis.7005.conf
Create the cluster:
redis-cli --cluster create 0.0.0.0:7000 0.0.0.0:7001 0.0.0.0:7002 0.0.0.0:7003 0.0.0.0:7004 0.0.0.0:7005 --cluster-replicas 1 -a luciencao
Type yes at the prompt and the cluster is created; the output is shown below. The final line, [OK] All 16384 slots covered, means that every one of the 16384 hash slots is served by at least one master node, i.e. the cluster is operating normally.
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 0.0.0.0:7004 to 0.0.0.0:7000
Adding replica 0.0.0.0:7005 to 0.0.0.0:7001
Adding replica 0.0.0.0:7003 to 0.0.0.0:7002
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 814ff7a5c1e1affa45ff741018d38cc504114650 0.0.0.0:7000
   slots:[0-5460] (5461 slots) master
M: 9a46b15737ba55d8c09cb776d6f82796aca35296 0.0.0.0:7001
   slots:[5461-10922] (5462 slots) master
M: 9042f31ec13b6ace52833e2a5c83aa0178f7019e 0.0.0.0:7002
   slots:[10923-16383] (5461 slots) master
S: 59d4a5f73ffb82d294e684ac07700eb5e0cd8b0a 0.0.0.0:7003
   replicates 9a46b15737ba55d8c09cb776d6f82796aca35296
S: 03bfaa6794ef1157b6f6b8e071d0575646105abc 0.0.0.0:7004
   replicates 9042f31ec13b6ace52833e2a5c83aa0178f7019e
S: db800ec138a395f05d68895ee3301835a412c314 0.0.0.0:7005
   replicates 814ff7a5c1e1affa45ff741018d38cc504114650
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join ..
>>> Performing Cluster Check (using node 0.0.0.0:7000)
M: 814ff7a5c1e1affa45ff741018d38cc504114650 0.0.0.0:7000
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 9a46b15737ba55d8c09cb776d6f82796aca35296 127.0.0.1:7001
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 59d4a5f73ffb82d294e684ac07700eb5e0cd8b0a 127.0.0.1:7003
   slots: (0 slots) slave
   replicates 9a46b15737ba55d8c09cb776d6f82796aca35296
S: db800ec138a395f05d68895ee3301835a412c314 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 814ff7a5c1e1affa45ff741018d38cc504114650
S: 03bfaa6794ef1157b6f6b8e071d0575646105abc 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 9042f31ec13b6ace52833e2a5c83aa0178f7019e
M: 9042f31ec13b6ace52833e2a5c83aa0178f7019e 127.0.0.1:7002
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
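To use the cluster, redis-cli needs the -c option so it follows MOVED redirections (for example redis-cli -c -p 7000 -a luciencao), and application code needs a cluster-aware client. A minimal Python sketch, assuming the third-party redis-py-cluster package (pip install redis-py-cluster); recent redis-py releases (4.0 and later) ship an equivalent redis.cluster.RedisCluster class instead:

from rediscluster import RedisCluster

# any reachable node works as a startup node; the client discovers the rest of the cluster
startup_nodes = [{"host": "127.0.0.1", "port": "7000"}]
rc = RedisCluster(startup_nodes=startup_nodes, password='luciencao', decode_responses=True)

rc.set('hello', 'cluster')  # routed to whichever master owns the key's hash slot
print(rc.get('hello'))      # 'cluster'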