Hi everyone, I'm 狂野君. Starting today I'm switching to daily-post mode: as long as the grind doesn't kill me, I'll grind myself to death.
Who told us to set foot on this one-way road called technology? There's simply no stopping now.
It suddenly reminds me of a line from the song 《野子》: grind on, grind on, my proud indulgence!!!
Alright, let's get down to business.
As everyone knows, any system these days comes with the whole distributed, microservices package, and the buzzwords never stop: high concurrency, high availability, high performance.
If those fashionable terms still aren't clear to you, then you really do need to start grinding.
Redis, the most celebrated "three highs" artist of modern systems, has made an outstanding contribution to system architecture and will be remembered forever by the people of the Java kingdom!! It lives forever in our hearts.
So today let's go through some practical Redis high-availability material: master-slave replication, Sentinel, Cluster, sharding and so on.
There's too much to cover in one breath, so just keep reading.
Oh, right: if you find this useful, remember to give 狂野君 a like and a bookmark. Thanks in advance.
See you around~~~
Performance benchmarking
The mainstream tool for benchmarking Redis today is redis-benchmark.
4.1. redis-benchmark
Redis officially ships the redis-benchmark tool, which simulates N clients concurrently issuing M requests, making it easy to stress-test a server's read and write performance.
4.2. Syntax
The basic command for a Redis performance test is:
redis-benchmark [option] [option value]
The optional parameters of the Redis benchmarking tool are listed below:
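The list below covers the most commonly used options (not exhaustive; defaults are those of a stock redis-benchmark build):
-h <hostname>   server host (default 127.0.0.1)
-p <port>       server port (default 6379)
-c <clients>    number of parallel clients (default 50)
-n <requests>   total number of requests (default 100000)
-d <size>       data size of SET/GET values in bytes (default 3)
-t <tests>      comma-separated list of tests to run, e.g. set,get,incr
-P <numreq>     pipeline requests, sending <numreq> commands at a time (default 1, no pipelining)
-q              quiet mode, only print the requests-per-second summary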
4.3. Quick test
redis-benchmark
On the server where Redis is installed, just run the command with no arguments to start a test. The results look like this:
====== PING_INLINE ======
  100000 requests completed in 1.18 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
84388.19 requests per second

====== PING_BULK ======
  100000 requests completed in 1.17 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
85106.38 requests per second

====== SET ======
  100000 requests completed in 1.18 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.95% <= 1 milliseconds
99.95% <= 2 milliseconds
99.95% <= 3 milliseconds
100.00% <= 3 milliseconds
85034.02 requests per second

====== GET ======
  100000 requests completed in 1.17 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.95% <= 1 milliseconds
99.99% <= 2 milliseconds
100.00% <= 2 milliseconds
85106.38 requests per second

====== INCR ======
  100000 requests completed in 1.19 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.95% <= 2 milliseconds
99.96% <= 3 milliseconds
100.00% <= 3 milliseconds
84317.03 requests per second

====== LPUSH ======
  100000 requests completed in 1.17 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
85763.29 requests per second

====== RPUSH ======
  100000 requests completed in 1.15 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
87260.03 requests per second

====== LPOP ======
  100000 requests completed in 1.17 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
85689.80 requests per second

====== RPOP ======
  100000 requests completed in 1.16 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
86281.27 requests per second

====== SADD ======
  100000 requests completed in 1.17 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.95% <= 2 milliseconds
99.96% <= 3 milliseconds
100.00% <= 3 milliseconds
85106.38 requests per second

====== HSET ======
  100000 requests completed in 1.14 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
87719.30 requests per second

====== SPOP ======
  100000 requests completed in 1.16 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
85836.91 requests per second

====== LPUSH (needed to benchmark LRANGE) ======
  100000 requests completed in 1.15 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.92% <= 1 milliseconds
100.00% <= 1 milliseconds
86805.56 requests per second

====== LRANGE_100 (first 100 elements) ======
  100000 requests completed in 2.03 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.95% <= 1 milliseconds
99.95% <= 2 milliseconds
99.96% <= 3 milliseconds
99.99% <= 4 milliseconds
100.00% <= 4 milliseconds
49261.09 requests per second

====== LRANGE_300 (first 300 elements) ======
  100000 requests completed in 4.58 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

6.06% <= 1 milliseconds
99.78% <= 2 milliseconds
99.94% <= 3 milliseconds
99.98% <= 4 milliseconds
100.00% <= 5 milliseconds
100.00% <= 5 milliseconds
21815.01 requests per second

====== LRANGE_500 (first 450 elements) ======
  100000 requests completed in 6.51 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

0.04% <= 1 milliseconds
83.91% <= 2 milliseconds
99.93% <= 3 milliseconds
99.97% <= 4 milliseconds
99.98% <= 5 milliseconds
99.99% <= 6 milliseconds
100.00% <= 7 milliseconds
100.00% <= 7 milliseconds
15372.79 requests per second

====== LRANGE_600 (first 600 elements) ======
  100000 requests completed in 8.66 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

0.03% <= 1 milliseconds
62.47% <= 2 milliseconds
98.11% <= 3 milliseconds
99.86% <= 4 milliseconds
99.94% <= 5 milliseconds
99.97% <= 6 milliseconds
99.98% <= 7 milliseconds
100.00% <= 8 milliseconds
100.00% <= 8 milliseconds
11551.35 requests per second

====== MSET (10 keys) ======
  100000 requests completed in 1.11 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.95% <= 2 milliseconds
99.96% <= 3 milliseconds
100.00% <= 3 milliseconds
90009.01 requests per second
As you can see, the common commands such as GET/SET/INCR all come in above 80K QPS.
4.4. Targeted test
redis-benchmark -t set,get,incr -n 1000000 -q
The -t option restricts the test to the SET/GET/INCR commands only.
The -n option makes each test run 1000000 operations.
The -q option switches to quiet output.
The result looks like this:
[root@iZuf6hci646px19gg3hpuwZ ~]# redis-benchmark -t set,get,incr -n 100000 -q
SET: 85888.52 requests per second
GET: 85881.14 requests per second
INCR: 86722.75 requests per second

# benchmark the performance of a Lua script
redis-benchmark -q script load "redis.call('set','foo','bar')"
4.5 Hands-on: AOF on vs. AOF off
Let's look at a real case: benchmarking Redis with AOF enabled versus disabled.
1) Turn off auth, turn on AOF with the always fsync policy; the configuration is as follows
#redis.conf
appendonly yes
appendfsync always
#requirepass abc    # auth disabled

# kill the old process, then restart redis
[root@iZ8vb3a9qxofwannyywl6zZ aof]# pwd
/opt/redis/latest/aof
[root@iZ8vb3a9qxofwannyywl6zZ aof]# ../src/redis-server redis.conf
2) Benchmark the performance with AOF on, using get and set as the test cases, and keep the results for comparison later
[root@iZ8vb3a9qxofwannyywl6zZ aof]# redis-benchmark -t set,get -n 1000000 -q
SET: 62274.25 requests per second, p50=0.687 msec
GET: 88739.02 requests per second, p50=0.399 msec
3) Change appendonly to no in the config file to turn AOF off, restart Redis, and run the same benchmark again
[root@iZ8vb3a9qxofwannyywl6zZ aof]# ../redis-6.2.4/src/redis-benchmark -t set,get -n 1000000 -q
SET: 91575.09 requests per second, p50=0.391 msec
GET: 90950.43 requests per second, p50=0.391 msec
4) Analysis
For read operations (get, spop, list range and the like), the difference is negligible.
Write operations take a real hit: with appendfsync always, SET drops from roughly 91K to about 62K requests per second.
5) Takeaways
If your project has high data-safety requirements and the workload is read-heavy with few writes, enabling AOF is a reasonable choice.
If you want maximum performance, use Redis purely as a cache, and can tolerate losing data, keep AOF off.
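There is also a middle ground worth knowing about: the everysec fsync policy (the default once AOF is enabled) flushes to disk once per second, so at most about one second of writes can be lost on a crash while most of the write throughput is preserved. A minimal redis.conf sketch:
appendonly yes
appendfsync everysec   # fsync once per second: a compromise between always and no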
5. Redis High Availability
5.1 Master-slave replication
5.1.1 The problem
Redis has two different persistence mechanisms. Through persistence, the Redis server writes the data it holds in memory to disk, so that when Redis goes down and we restart the server, the in-memory data can be rebuilt from the RDB file or the AOF file.
Problem 1: the persisted data still lives on a single machine. When the hardware fails, say the motherboard or the CPU dies, the server cannot be restarted at all. What can we do to keep the data safe when the server itself fails, or to recover it quickly?
Problem 2: capacity bottleneck. The memory of a single machine puts a hard ceiling on how much data one Redis instance can hold.
5.1.2 The solution
To address these problems, Redis provides replication. Through "master-slave" (one master, many slaves) and "cluster" (many masters, many slaves) setups, the Redis service can be scaled out horizontally, with multiple Redis servers together forming one highly available Redis system.
5.1.3 Master-slave replication
Master-slave replication means copying the data of one Redis server onto other Redis servers. The former is called the master node (master), the latter the slave nodes (slave). Replication is strictly one way, from master to slave.
5.1.4 Common topologies
Strategy 1: one master, many slaves. The master takes the writes, the slaves serve the reads.
Strategy 2: chained replication ("passing the torch"). A slave can in turn act as the master of further slaves, relaying the data down the chain.
5.1.5 How replication works
Redis replication is asynchronous, in two senses: the master syncs data to its slaves asynchronously, so it can keep serving other requests while doing so, and the slave also applies the data it receives asynchronously.
Replication modes
redis-cli -p 6379 info | grep run    # show this instance's run_id
Full resynchronization
The master sends its own RDB file to the slave to synchronize the data set, records any writes that arrive during the transfer, and then forwards those to the slave as well, so the slave ends up completely in sync. This is called full resynchronization.
增量复制
因为各种原因master
服务器与slave
服务器断开后,slave
服务器在重新连上maste
r服务器时会尝试重新获取断开后未同步的数据即部分同步,或者称为部分复制。
How it works
The master keeps a pseudo-random string called the replication ID, which identifies the current version of the data set, along with an offset into that data set. The replication ID and offset are always recorded as a pair, whether or not any slave is configured. You can inspect them with the following command:
> info replication
Running this command through redis-cli on a master or a slave prints something like the following (the exact values differ from server to server):
connected_slaves:1
slave0:ip=127.0.0.1,port=6380,state=online,offset=9472,lag=1
master_replid:2cbd65f847c0acd608c69f93010dcaa6dd551cee
master_repl_offset:9472
When master and slave are connected normally, the slave uses the PSYNC command to send the master the replication ID and offset it recorded for its old master. The master works out the data gap between the two and streams just that missing range from its replication backlog buffer to the slave, after which master and slave are consistent again.
If the replication ID the slave refers to is too old, or the gap between master and slave is too large to still be in the backlog, the two fall back to a full resynchronization instead.
5.1.6 Configuring replication
Note: replication is initiated entirely from the slave side; nothing needs to be done on the master.
A slave can enable replication in three ways:
(1) Configuration file: add slaveof <masterip> <masterport> to the slave's config file
(2) Startup argument: append --slaveof <masterip> <masterport> to the redis-server start command
(3) At runtime: once the Redis server is up, run slaveof <masterip> <masterport> from a client, and that instance becomes a slave
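For reference, the three forms might look like this (the 127.0.0.1:6379 master and the 6380redis.conf file are the ones used in the demo below; on Redis 5+ replicaof is the preferred spelling of slaveof):
# (1) in the slave's redis.conf
slaveof 127.0.0.1 6379
# (2) on the command line when starting the slave
redis-server 6380redis.conf --slaveof 127.0.0.1 6379
# (3) at runtime, from the slave's redis-cli
slaveof 127.0.0.1 6379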
Demo:
①、Check the role of the three nodes with the info replication command
In the initial state, all three nodes are masters.
②、Set up the master-slave relationship by running this on each slave: SLAVEOF 127.0.0.1 6379
Then look at the master's info again.
Setting the relationship with a command like this does not survive a restart: once the service restarts, the roles are gone. To make the relationship permanent, configure it in the redis.conf file instead.
5.1.7 Testing the master-slave relationship
①、Incremental replication
Write on the master:
Read it back on the slave:
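A minimal sanity check might look like this (assuming the master on 6379 and the slave on 6380, as in the demo):
redis-cli -p 6379 set hello world    # write on the master
redis-cli -p 6380 get hello          # read it back from the slave, returns "world"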
②、Full replication
When you run SLAVEOF 127.0.0.1 6379, any keys that already existed on the 6379 master are copied over to the slave as well, not just the writes that happen afterwards.
③、Read-write separation
Try a write on the slave: it is rejected.
The reason is the slave-read-only setting in the slave's config file 6380redis.conf.
If we change it to no, write commands do succeed on the slave, but data written this way is never replicated back to the master and can be wiped out on the next sync, so neither the slave nor the master can be relied on to return it.
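A sketch of what that looks like from the command line (the CONFIG SET parameter name and exact error text can vary between Redis versions; this is the general shape, not a transcript):
redis-cli -p 6380 set k1 v1                       # rejected with a READONLY error by default
redis-cli -p 6380 config set slave-read-only no   # allow writes on the slave (not recommended)
redis-cli -p 6380 set k1 v1                       # now succeeds, but only locally on the slave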
④、Master goes down
If the master is killed, do the roles of the two slaves change?
As the output shows, after the master goes down, the slaves keep their slave role; nothing is promoted automatically.
⑤、Master comes back
After the master has gone down and is started again right away, does it still play the master role?
It does: when the crashed master is restarted, it takes back its role as master.
5.2 Sentinel mode
With the setup so far there is only one master. Once that master goes down, no slave can take over its duties, and the whole system stops working.
If a slave could automatically become the master when the master dies, the problem would be solved. That is exactly why Sentinel mode exists.
Sentinel mode is a special mode: Redis provides a sentinel program that runs as its own independent process. It works by sending commands to the Redis instances and waiting for their responses, and in this way it monitors any number of running Redis instances.
Steps to set up Sentinel mode:
①、In the configuration directory, create a file named sentinel.conf (the name must be exactly this), then fill in the corresponding settings
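A minimal sentinel.conf might look like the following (the master name, address and quorum are illustrative):
# sentinel monitor <master-name> <ip> <port> <quorum>
sentinel monitor mymaster 127.0.0.1 6379 1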
Configure the name of the monitored master, its IP address and port, and the quorum. The quorum of 1 above is the number of sentinels that must agree the master is unreachable before a failover is started; once it is triggered, one of the slaves is elected and promoted to be the new master.
②、Start the sentinel
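Assuming the sentinel.conf above, the sentinel can be started in either of these equivalent ways:
redis-sentinel sentinel.conf
redis-server sentinel.conf --sentinel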
Next, let's kill the master on 6379 and watch what happens to the slaves.
After the master is killed, the sentinel's log output shows that 6380 wins the vote and becomes the new master.
PS: Sentinel mode has its own single point of failure: if the machine running the sentinel dies, monitoring stops. The answer is to run the sentinels as a cluster as well; Redis Sentinel supports this.
6. Redis Cluster
Introduction
6.1 Problems with master-slave + Sentinel
(1) In the master-slave + Sentinel setup there is still only one master node. When the volume of concurrent writes is high, Sentinel does nothing to relieve the write pressure.
(2) In Redis Sentinel mode, every node has to hold the full data set, which means a lot of redundancy.
6.2 What Cluster is
Since version 3.0, Redis has offered Redis Cluster. Its main purpose is data sharding, but it also provides HA, and it is the officially recommended solution today.
1. Redis Cluster uses a decentralized architecture with no central node.
2. The cluster as a whole only fails when the majority of its nodes fail at the same time.
3. The cluster has 16384 slots in total. When a key-value pair is stored in the cluster, the value of CRC16(key) mod 16384 decides which slot, and therefore which node, the key lands in. Reading a key uses the same calculation.
4. When a master fails, one of its slaves is promoted to master; when the failed master comes back online, it automatically becomes a slave.
6.3 Failover
The master nodes of a Redis Cluster have built-in failure detection and automatic failover, much like Redis Sentinel. When one of the cluster's masters goes offline, the other online masters notice it and carry out a failover for the offline master.
6.4 Cluster sharding strategy
The Redis Cluster sharding strategy answers one question: where should each key be stored?
Common data-distribution schemes include sequential partitioning, hash partitioning, hashing modulo the node count, consistent hashing, and so on.
6.5 Data sharding in Redis Cluster
Redis Cluster does not use consistent hashing; instead it introduces the concept of hash slots.
A fixed set of virtual slots is predefined; each slot is just a number within a fixed range.
In Redis Cluster the virtual slots range from 0 to 16383.
Steps:
1. The 16384 slots are divided evenly among the nodes, and each node manages its own share.
2. For every key, a hash is computed with the CRC16 algorithm.
3. The hash result is taken modulo 16384 to get the slot number.
4. The command is sent to a Redis node.
5. The node that receives it checks whether that slot falls within the range of slots it manages:
if the slot is within its own range, it stores the data in that slot and returns the result;
if the slot is outside its range, it hands the request to the correct node, and that node stores the data in the corresponding slot.
Note that the nodes of a Redis Cluster share this information with each other, so every node knows which node is responsible for which range of slots.
With this virtual-slot scheme, each node manages a portion of the slots and the data lives in those slots. When nodes are added or removed, the slots are simply reassigned and migrated, and no data is lost. You can check which slot any given key maps to, as shown below.
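The CLUSTER KEYSLOT command exposes the CRC16-based mapping directly; a quick sketch (the key name is just an example):
redis-cli -c cluster keyslot czbk    # prints the slot number (0-16383) this key hashes to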
6.6 Building a Redis Cluster
Overview of the steps:
Start the nodes: start every node in cluster mode; at this point the nodes are still independent of each other.
Node handshake: connect the independent nodes into a single network.
Slot assignment: assign the 16384 slots to the master nodes, so the key space is sharded across them.
Replication: assign a master to each slave node.
Implementation
Start the nodes
(1) Create the directories and prepare a configuration file for each of the 6 nodes
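A sketch of that setup (the directory names 9001 to 9006 are assumed from the 900X naming used below):
mkdir -p 9001 9002 9003 9004 9005 9006    # one directory per node; redis.conf goes into each in the next step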
(2) Copy redis.conf into each 900X directory in turn, then modify the redis.conf inside each 900X directory:
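The per-node settings typically look something like this (shown for the 9001 node; the values are illustrative and the same pattern repeats for 9002 to 9006):
port 9001
daemonize yes
cluster-enabled yes                    # run this instance in cluster mode
cluster-config-file nodes-9001.conf    # file the node uses to persist cluster state
cluster-node-timeout 5000              # milliseconds before a node is considered failing
appendonly yes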
(3) Start the 6 Redis instances
Check the processes:
Node handshake & slot assignment & replication
Since Redis 5.0, redis-cli itself (implemented in C) is the tool used to create a cluster; the old Ruby tooling is no longer needed.
1) Once the instances are running, building the cluster is very simple: one redis-cli command does it
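Assuming six local instances on ports 9001 to 9006, the command might look like this:
redis-cli --cluster create 127.0.0.1:9001 127.0.0.1:9002 127.0.0.1:9003 127.0.0.1:9004 127.0.0.1:9005 127.0.0.1:9006 --cluster-replicas 1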
Parameter explanation: --cluster-replicas 1 means you want one slave created for every master in the cluster (one master, one slave). --cluster-replicas 2 means you want two slaves per master (one master, two slaves).
2) Note: if a node already holds data, you may get an error message at this point:
Delete dump.rdb and nodes.conf, log in with redis-cli and run FLUSHDB, then try again.
3) If everything is fine, you will get a message saying the cluster was created successfully:
Verifying the cluster
Use redis-cli to SET several keys against the cluster (for example czbk); they end up on different instances, which shows the sharding works.
1) Verify with cluster commands
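A sketch of the kind of check this step performs (the port is illustrative):
redis-cli -p 9001 cluster info     # cluster_state:ok and 16384 slots assigned means the cluster is healthy
redis-cli -p 9001 cluster nodes    # lists every node with its role and the slot ranges it owns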
2) Verify with actual keys and data
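And a sketch of checking with real keys (the -c flag is needed so redis-cli follows MOVED redirects; key names are illustrative):
redis-cli -c -p 9001 set czbk hello    # may be redirected to whichever node owns the key's slot
redis-cli -c -p 9001 set k1 v1
redis-cli -c -p 9001 get czbk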
Scaling out
1) Following the same steps as above, bring up one more Redis instance, on port 8084
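Joining the new node to the existing cluster might look like this (the first address is the new node, the second is any node already in the cluster; ports are illustrative):
redis-cli --cluster add-node 127.0.0.1:8084 127.0.0.1:9001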
2) Log into any node with redis-cli and run cluster nodes to view the new cluster topology
3) Reshard
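Resharding is driven interactively by redis-cli; a sketch (any live node works as the entry point):
redis-cli --cluster reshard 127.0.0.1:9001
# it then asks how many slots to move, the ID of the receiving node,
# and which source nodes (or "all") to take the slots from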
4) Rebalance the hash slots
To keep the hash slots evenly distributed across all nodes, they need to be rebalanced
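A sketch of the rebalance command (add --cluster-use-empty-masters if the newly added master does not own any slots yet):
redis-cli --cluster rebalance 127.0.0.1:9001 --cluster-use-empty-masters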
Spring Boot
Given how the cluster works, it is easy to see that when Spring Boot connects to a Redis Cluster you can point it at any single node, or list all of them.
Spring Boot 1.x uses Jedis as the default client; 2.x has switched the default to Lettuce. Jedis connects directly to the Redis server, while Lettuce's connections are built on Netty, and their configuration properties differ slightly. For the basics, see the Spring Data section of the Spring Boot documentation.
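A sketch of the relevant Spring Boot 2.x configuration (application.properties; the node addresses are illustrative, and listing a few is enough for the client to discover the rest of the cluster):
spring.redis.cluster.nodes=127.0.0.1:9001,127.0.0.1:9002,127.0.0.1:9003
spring.redis.cluster.max-redirects=3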
Under Cluster, a slave normally just serves as a backup of its own master's data. It can be turned into a read node by putting connections into READONLY mode, but that is rarely done. If you only use Redis as a cache, don't care about losing data, and feel the slaves are a waste of resources, you can even set the number of slaves to 0 and use the cluster purely for sharding.
Whether you use Jedis or Lettuce, reads against a redis-cluster deployment follow the official Redis recommendation by default, so neither supports read-write splitting under redis-cluster out of the box.
If you insist on configuring only the slave addresses and not the masters (not a very smart move), reads still work: internally the request is redirected from the slave to the relevant master, which fetches the result and returns it.
If you have read all the way to here, you are truly no ordinary person, because ordinary people never make it this far.
And since you did, I'm sure you got something out of it, so go ahead and give 狂野君 a like. It costs you nothing and can't hurt you.
Your approval is my biggest motivation.
Below are some earlier posts that may help you:
I grabbed a bottle of Moutai with a Redis distributed lock, and then it all went wrong~~
New guy, tell me about Redis persistence: which mechanism solves this business problem of ours?
How do you become an architect, fast?
Original post: https://juejin.cn/post/7099356518419529741