
Redis Read-Write Splitting with Lettuce


Problem

In practice, Redis workloads are usually read-heavy and write-light, yet in master-replica, sentinel, and cluster deployments the replica nodes often serve only as backups. To get the most value out of those nodes, we want replicas to serve reads and take load off the master. Following on from the previous chapter on read-write splitting with Jedis, and since Spring Boot now uses Lettuce as the default Redis client, this article covers read-write splitting with Lettuce.

Read-Write Splitting

Master-Replica Read-Write Splitting

Start by building a master-replica setup with one master and three replicas. In most cases the only configuration needed is something like the following:

spring:
  redis:
    host: redisMastHost
    port: 6379
    lettuce:
      pool:
        max-active: 512
        max-idle: 256
        min-idle: 256
        max-wait: -1

With this you can inject redisTemplate directly and read and write data, but by default everything goes to the master. To set readFrom you need a custom connection factory; two options are given below.
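As a minimal sketch of what "inject and use" looks like with the default auto-configured factory (the service and key names here are illustrative):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Service;

@Service
public class DemoService {

    // Auto-configured by Spring Boot from the spring.redis.* properties above
    @Autowired
    private StringRedisTemplate redisTemplate;

    public String writeThenRead() {
        // With the default factory both the write and the read go to the master
        redisTemplate.opsForValue().set("demo:key", "hello");
        return redisTemplate.opsForValue().get("demo:key");
    }
}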

Option 1 (for non-AWS deployments)

Only the master node needs to be configured; replica information is discovered automatically from the master.

@Configuration
class WriteToMasterReadFromReplicaConfiguration {

  @Bean
  public LettuceConnectionFactory redisConnectionFactory() {

    LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
      .readFrom(ReadFrom.SLAVE_PREFERRED)
      .build();

    RedisStandaloneConfiguration serverConfig = new RedisStandaloneConfiguration("server", 6379);

    return new LettuceConnectionFactory(serverConfig, clientConfig);
  }
}
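Note that in recent Lettuce versions the SLAVE* constants are deprecated in favor of REPLICA*. A sketch of the same factory using the newer naming, assuming Lettuce 5.2+ and a recent Spring Data Redis:

import io.lettuce.core.ReadFrom;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisStandaloneConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

@Configuration
class WriteToMasterReadFromReplicaConfiguration {

  @Bean
  public LettuceConnectionFactory redisConnectionFactory() {
    LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
        // REPLICA_PREFERRED is the newer name for SLAVE_PREFERRED
        .readFrom(ReadFrom.REPLICA_PREFERRED)
        .build();

    // "server" is the same placeholder master host as in the sample above
    RedisStandaloneConfiguration serverConfig = new RedisStandaloneConfiguration("server", 6379);
    return new LettuceConnectionFactory(serverConfig, clientConfig);
  }
}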

Option 2 (cloud-hosted Redis, e.g. AWS)

Here is a demo:

import io.lettuce.core.ReadFrom;
import io.lettuce.core.models.role.RedisNodeDescription;
import org.apache.commons.lang3.StringUtils;
import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisStaticMasterReplicaConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.connection.lettuce.LettucePoolingClientConfiguration;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.StringRedisSerializer;

import java.time.Duration;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import java.util.stream.Stream;

@Configuration
public class RedisConfig {
    @Value("${spring.redis1.master}")
    private String master;
    @Value("${spring.redis1.slaves:}")
    private String slaves;

    @Value("${spring.redis1.port}")
    private int port;

    @Value("${spring.redis1.timeout:200}")
    private long timeout;

    @Value("${spring.redis1.lettuce.pool.max-idle:256}")
    private int maxIdle;

    @Value("${spring.redis1.lettuce.pool.min-idle:256}")
    private int minIdle;

    @Value("${spring.redis1.lettuce.pool.max-active:512}")
    private int maxActive;

    @Value("${spring.redis1.lettuce.pool.max-wait:-1}")
    private long maxWait;

    private static Logger logger = LoggerFactory.getLogger(RedisConfig.class);
    private final AtomicInteger index = new AtomicInteger(-1);
    @Bean(value = "lettuceConnectionFactory1")
    LettuceConnectionFactory lettuceConnectionFactory1(GenericObjectPoolConfig genericObjectPoolConfig) {
        RedisStaticMasterReplicaConfiguration configuration = new RedisStaticMasterReplicaConfiguration(
                this.master, this.port);
        if(StringUtils.isNotBlank(slaves)){
            String[] slaveHosts=slaves.split(",");
            for (int i = 0; i < slaveHosts.length; i++) {
                configuration.addNode(slaveHosts[i], this.port);
            }
        }
        LettuceClientConfiguration clientConfig = LettucePoolingClientConfiguration.builder()
                .readFrom(ReadFrom.SLAVE)
                .commandTimeout(Duration.ofMillis(timeout))
                .poolConfig(genericObjectPoolConfig)
                .build();
        return new LettuceConnectionFactory(configuration, clientConfig);
    }

    /**
     * GenericObjectPoolConfig connection pool configuration
     * @return
     */
    @Bean
    public GenericObjectPoolConfig genericObjectPoolConfig() {
        GenericObjectPoolConfig genericObjectPoolConfig = new GenericObjectPoolConfig();
        genericObjectPoolConfig.setMaxIdle(maxIdle);
        genericObjectPoolConfig.setMinIdle(minIdle);
        genericObjectPoolConfig.setMaxTotal(maxActive);
        genericObjectPoolConfig.setMaxWaitMillis(maxWait);
        return genericObjectPoolConfig;
    }

    @Bean(name = "redisTemplate1")
    RedisTemplate<String, String> redisTemplate(@Qualifier("lettuceConnectionFactory1") LettuceConnectionFactory connectionFactory) {
        RedisTemplate<String, String> template = new RedisTemplate<String, String>();
        template.setConnectionFactory(connectionFactory);
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new StringRedisSerializer());
        template.setHashKeySerializer(new StringRedisSerializer());
        template.setHashValueSerializer(new StringRedisSerializer());
        logger.info("redis 连接成功");
        return template;
    }


}
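For reference, the @Value placeholders above assume application properties roughly along these lines (the spring.redis1 prefix comes from the code; the host names are illustrative):

spring:
  redis1:
    master: masterHost                               # writer endpoint
    slaves: replicaHost1,replicaHost2,replicaHost3   # comma-separated reader endpoints
    port: 6379
    timeout: 200
    lettuce:
      pool:
        max-active: 512
        max-idle: 256
        min-idle: 256
        max-wait: -1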

The core of this code is the readFrom setting. Lettuce provides five options:

  • MASTER
  • MASTER_PREFERRED
  • SLAVE_PREFERRED
  • SLAVE
  • NEAREST

In the latest versions the SLAVE constants have been renamed to ReadFrom.REPLICA. With readFrom set to SLAVE here, all read requests go to the replicas, but there is a bug: every request lands on the last replica in the list, and the other replicas receive no traffic. Tracing the source code shows the node order is fixed, yet getConnection returns the last node every time. The command monitoring below shows the effect:
[Monitoring screenshots: all read commands landing on a single replica]

The fix is to define a custom readFrom, as follows:

LettuceClientConfiguration clientConfig =
                LettucePoolingClientConfiguration.builder().readFrom(new ReadFrom() {
                    @Override
                    public List<RedisNodeDescription> select(Nodes nodes) {
                        List<RedisNodeDescription> allNodes = nodes.getNodes();
                        int ind = Math.abs(index.incrementAndGet() % allNodes.size());
                        RedisNodeDescription selected = allNodes.get(ind);
                        logger.info("Selected node {} with uri {}", ind, selected.getUri());
                        List<RedisNodeDescription> remaining = IntStream.range(0, allNodes.size())
                                .filter(i -> i != ind)
                                .mapToObj(allNodes::get).collect(Collectors.toList());
                        return Stream.concat(
                                Stream.of(selected),
                                remaining.stream()
                        ).collect(Collectors.toList());
                    }
                }).commandTimeout(Duration.ofMillis(timeout))
                        .poolConfig(genericObjectPoolConfig).build();
        return new LettuceConnectionFactory(configuration, clientConfig);

This manually rotates reads across the replicas in order. After the change, the call distribution looks as follows; since other applications also connect to this Redis, the monitoring graph is not perfectly balanced:

[Monitoring screenshots: read commands spread across all replicas]

Sentinel Mode

Here is a simple demo:

@Configuration
@ComponentScan("com.redis")
public class RedisConfig {

    @Bean
    public LettuceConnectionFactory redisConnectionFactory() {
//        return new LettuceConnectionFactory(new RedisStandaloneConfiguration("192.168.80.130", 6379));
        RedisSentinelConfiguration sentinelConfig = new RedisSentinelConfiguration()
                .master("mymaster")
                // sentinel addresses
                .sentinel("192.168.80.130", 26379)
                .sentinel("192.168.80.130", 26380)
                .sentinel("192.168.80.130", 26381);

        LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
                .readFrom(ReadFrom.SLAVE_PREFERRED).build();
        return new LettuceConnectionFactory(sentinelConfig, clientConfig);
    }

    @Bean
    public RedisTemplate redisTemplate(RedisConnectionFactory redisConnectionFactory) {
        RedisTemplate redisTemplate = new RedisTemplate();
        redisTemplate.setConnectionFactory(redisConnectionFactory);
        // Conversion rules for values can be configured here, e.g. storing objects as JSON.
        // Object --> serialize --> byte stream --> stored on redis-server
        redisTemplate.setKeySerializer(new StringRedisSerializer());
        redisTemplate.setValueSerializer(new JdkSerializationRedisSerializer());
        return redisTemplate;
    }

}
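If you prefer to supply the sentinel topology through Spring Boot configuration properties instead of hard-coding addresses, the equivalent settings look roughly like this (addresses are the same placeholders as in the demo; note that readFrom still has to be set programmatically as above, and in Spring Boot 3.x the prefix becomes spring.data.redis):

spring:
  redis:
    sentinel:
      master: mymaster
      nodes: 192.168.80.130:26379,192.168.80.130:26380,192.168.80.130:26381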

Cluster Mode

Cluster mode is simpler; you can use the demo below directly:

import io.lettuce.core.ReadFrom;
import io.lettuce.core.resource.ClientResources;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisClusterConfiguration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.RedisNode;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.connection.lettuce.LettucePoolingClientConfiguration;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.StringRedisSerializer;

import java.time.Duration;
import java.util.HashSet;
import java.util.Set;

@Slf4j
@Configuration
public class Redis2Config {

    @Value("${spring.redis2.cluster.nodes: com:9736}")
    public String REDIS_HOST;

    @Value("${spring.redis2.cluster.port:9736}")
    public int REDIS_PORT;

    @Value("${spring.redis2.cluster.type:}")
    public String REDIS_TYPE;
    @Value("${spring.redis2.cluster.read-from:master}")
    public String READ_FROM;
    @Value("${spring.redis2.cluster.max-redirects:1}")
    public int REDIS_MAX_REDIRECTS;

    @Value("${spring.redis2.cluster.share-native-connection:true}")
    public boolean REDIS_SHARE_NATIVE_CONNECTION;

    @Value("${spring.redis2.cluster.validate-connection:false}")
    public boolean VALIDATE_CONNECTION;

    @Value("${spring.redis2.cluster.shutdown-timeout:100}")
    public long SHUTDOWN_TIMEOUT;

    @Bean(value = "myRedisConnectionFactory")
    public RedisConnectionFactory connectionFactory(ClientResources clientResources) {
        RedisClusterConfiguration clusterConfiguration = new RedisClusterConfiguration();
        if (StringUtils.isNotEmpty(REDIS_HOST)) {
            String[] serverArray = REDIS_HOST.split(",");
            Set<RedisNode> nodes = new HashSet<RedisNode>();
            for (String ipPort : serverArray) {
                String[] ipAndPort = ipPort.split(":");
                nodes.add(new RedisNode(ipAndPort[0].trim(), Integer.valueOf(ipAndPort[1])));
            }
            clusterConfiguration.setClusterNodes(nodes);
        }
        if (REDIS_MAX_REDIRECTS > 0) {
            clusterConfiguration.setMaxRedirects(REDIS_MAX_REDIRECTS);
        }

        LettucePoolingClientConfiguration.LettucePoolingClientConfigurationBuilder clientConfigurationBuilder = LettucePoolingClientConfiguration.builder()
                .clientResources(clientResources).shutdownTimeout(Duration.ofMillis(SHUTDOWN_TIMEOUT));
        if (READ_FROM.equals("slave")) {
            clientConfigurationBuilder.readFrom(ReadFrom.SLAVE_PREFERRED);
        } else if (READ_FROM.equals("nearest")) {
            clientConfigurationBuilder.readFrom(ReadFrom.NEAREST);
        } else if (READ_FROM.equals("master")) {
            clientConfigurationBuilder.readFrom(ReadFrom.MASTER_PREFERRED);
        }
        LettuceConnectionFactory lettuceConnectionFactory = new LettuceConnectionFactory(clusterConfiguration, clientConfigurationBuilder.build());
        lettuceConnectionFactory.afterPropertiesSet();
        return lettuceConnectionFactory;
    }

    @Bean(name = "myRedisTemplate")
    public RedisTemplate myRedisTemplate(@Qualifier("myRedisConnectionFactory") RedisConnectionFactory connectionFactory) {
        RedisTemplate template = new RedisTemplate();
        template.setConnectionFactory(connectionFactory);
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new StringRedisSerializer());
        return template;
    }


}
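The @Value placeholders in Redis2Config read custom keys (they are not Spring Boot's built-in cluster properties), so the backing configuration is assumed to look roughly like this; the prefix comes from the code and the node addresses are illustrative:

spring:
  redis2:
    cluster:
      nodes: host1:9736,host2:9736,host3:9736
      read-from: slave        # slave | nearest | master
      max-redirects: 3
      shutdown-timeout: 100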

That said, reading from replicas is not recommended in cluster mode: in production, a single shard going down can render the entire cluster unavailable. Consider deploying multiple replicas per shard first, and only then configuring read-write splitting.
