Message loss comes in two flavors: both the producer and the consumer can lose messages if they are handled improperly.

On the producer side, with acks=all the broker only acknowledges a send once all in-sync replicas (at least min.insync.replicas of them, the configured replica count) have successfully written the message to their logs. This strategy guarantees no data loss as long as a single replica survives, and is the strongest guarantee available. It is generally reserved for finance-grade scenarios, or anything that touches money. Note that if min.insync.replicas is configured as 1, messages can still be lost, much like the acks=1 case.

On the consumer side, with auto-commit enabled, the offset may be committed automatically before the fetched records have been fully processed. If the consumer then crashes, the unprocessed records are lost and will never be consumed again.
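As a sketch, the loss-resistant combination described above could look like the following raw Kafka client/topic properties (the values of 3 retries and 2 replicas are illustrative assumptions, not from the original repo):

```properties
# Producer side: wait for all in-sync replicas to persist the record
acks=all
retries=3
# Topic/broker side: require at least 2 in-sync replicas per write;
# with acks=all, a value of 1 degrades to roughly the acks=1 guarantee
min.insync.replicas=2
# Consumer side: disable auto-commit so offsets are committed only after processing
enable.auto.commit=false
```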
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>
spring:
  # Kafka settings, bound to the KafkaProperties configuration class
  kafka:
    bootstrap-servers: 192.168.126.140:9092 # Kafka broker address; multiple brokers may be listed, comma-separated
    # Kafka producer settings
    producer:
      acks: 1 # 0 - no acknowledgement; 1 - leader acknowledges; all - leader and all in-sync replicas acknowledge
      retries: 3 # number of retries when a send fails
      key-serializer: org.apache.kafka.common.serialization.StringSerializer # serializer for message keys
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer # serializer for message values
      batch-size: 16384 # maximum size of one batch, in bytes (default 16 KB)
      buffer-memory: 33554432 # total memory for buffering unsent records, in bytes (default 32 MB)
      properties:
        linger:
          ms: 10000 # upper bound on batching delay. [In practice you would not configure it this long; 10 * 1000 ms is used here for testing.] Once this delay elapses, a request is sent regardless of whether batch-size or buffer-memory has been reached.
    # Kafka consumer settings
    consumer:
      auto-offset-reset: earliest # start a new consumer group from the earliest offset
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
      properties:
        spring:
          json:
            trusted:
              packages: com.artisan.springkafka.domain
      enable-auto-commit: false # disable auto-commit
    # Kafka consumer listener settings
    listener:
      missing-topics-fatal: false # by default the listener fails to start when a subscribed topic does not exist; set to false to suppress the error
      ack-mode: manual # manual offset commit

logging:
  level:
    org:
      springframework:
        kafka: ERROR # spring-kafka
      apache:
        kafka: ERROR # kafka
The key parameter changes:

spring.kafka.consumer.enable-auto-commit: false
Hands offset committing over to Spring-Kafka's own commit mechanism (which treats this as false by default when the listener container manages commits).

spring.kafka.listener.ack-mode: manual
In MANUAL mode, calling acknowledge() only marks the offset for commit; the marked offsets are committed once the current batch of records has been processed.
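If you prefer Java config over YAML, the same ack mode can be set on the listener container factory. A minimal sketch (bean name and generics are assumptions, not taken from the original repo):

```java
@Bean
public ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactory(
        ConsumerFactory<Object, Object> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // same effect as spring.kafka.listener.ack-mode: manual
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
    return factory;
}
```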
package com.artisan.springkafka.producer;

import com.artisan.springkafka.constants.TOPIC;
import com.artisan.springkafka.domain.MessageMock;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Component;
import org.springframework.util.concurrent.ListenableFuture;

import java.util.Random;
import java.util.concurrent.ExecutionException;

/**
 * @author 小工匠
 * @version 1.0
 * @date 2021/2/17 22:25
 * @mark: show me the code , change the world
 */
@Component
public class ArtisanProducerMock {

    @Autowired
    private KafkaTemplate<Object, Object> kafkaTemplate;

    /**
     * Synchronous send: blocks until the broker acknowledges the record.
     */
    public SendResult sendMsgSync() throws ExecutionException, InterruptedException {
        // mock a message to send
        Integer id = new Random().nextInt(100);
        MessageMock messageMock = new MessageMock(id, "artisanTestMessage-" + id);
        // wait synchronously for the send result
        return kafkaTemplate.send(TOPIC.TOPIC, messageMock).get();
    }

    /**
     * Asynchronous send: returns a future the caller can attach callbacks to.
     */
    public ListenableFuture<SendResult<Object, Object>> sendMsgASync() {
        // mock a message to send
        Integer id = new Random().nextInt(100);
        MessageMock messageMock = new MessageMock(id, "messageSendByAsync-" + id);
        // send the message asynchronously
        return kafkaTemplate.send(TOPIC.TOPIC, messageMock);
    }
}
package com.artisan.springkafka.consumer;

import com.artisan.springkafka.constants.TOPIC;
import com.artisan.springkafka.domain.MessageMock;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

import java.util.concurrent.TimeUnit;

/**
 * @author 小工匠
 * @version 1.0
 * @date 2021/2/17 22:33
 * @mark: show me the code , change the world
 */
@Component
public class ArtisanCosumerMock {

    private Logger logger = LoggerFactory.getLogger(getClass());

    private static final String CONSUMER_GROUP_PREFIX = "MANUAL_ACK_";

    @KafkaListener(topics = TOPIC.TOPIC, groupId = CONSUMER_GROUP_PREFIX + TOPIC.TOPIC)
    public void onMessage(MessageMock messageMock, Acknowledgment acknowledgment) throws InterruptedException {
        logger.info("[received message][thread: {} payload: {}]", Thread.currentThread().getName(), messageMock);
        // mock business processing
        TimeUnit.SECONDS.sleep(1);
        // manually commit the consumption offset
        acknowledgment.acknowledge();
    }
}
The listener method gains an Acknowledgment parameter. By calling its #acknowledge() method, you manually commit the consumption offset for the message's topic partition, ensuring the message is not lost.
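One caveat worth adding: if the business logic throws, you generally do not want to acknowledge. A sketch of that pattern (process() is a hypothetical placeholder for the business step, not a method from the original repo):

```java
@KafkaListener(topics = TOPIC.TOPIC, groupId = "MANUAL_ACK_" + TOPIC.TOPIC)
public void onMessage(MessageMock messageMock, Acknowledgment acknowledgment) {
    try {
        process(messageMock);          // hypothetical business step
        acknowledgment.acknowledge();  // commit only after processing succeeds
    } catch (Exception e) {
        // no acknowledge: the offset stays uncommitted, so the record is
        // re-delivered after a restart or rebalance; recent spring-kafka
        // versions also offer Acknowledgment#nack for an immediate re-seek
    }
}
```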
package com.artisan.springkafka.produceTest;

import com.artisan.springkafka.SpringkafkaApplication;
import com.artisan.springkafka.producer.ArtisanProducerMock;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.kafka.support.SendResult;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.util.concurrent.ListenableFutureCallback;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutionException;

/**
 * @author 小工匠
 * @version 1.0
 * @date 2021/2/17 22:40
 * @mark: show me the code , change the world
 */
@RunWith(SpringRunner.class)
@SpringBootTest(classes = SpringkafkaApplication.class)
public class ProduceMockTest {

    private Logger logger = LoggerFactory.getLogger(getClass());

    @Autowired
    private ArtisanProducerMock artisanProducerMock;

    @Test
    public void testAsynSend() throws ExecutionException, InterruptedException {
        logger.info("start sending");
        for (int i = 0; i < 10; i++) {
            artisanProducerMock.sendMsgASync().addCallback(new ListenableFutureCallback<SendResult<Object, Object>>() {
                @Override
                public void onFailure(Throwable throwable) {
                    logger.error("send failed", throwable);
                }

                @Override
                public void onSuccess(SendResult<Object, Object> sendResult) {
                    logger.info("callback result = topic:[{}], partition:[{}], offset:[{}]",
                            sendResult.getRecordMetadata().topic(),
                            sendResult.getRecordMetadata().partition(),
                            sendResult.getRecordMetadata().offset());
                }
            });
        }
        // block and wait, so the async callbacks and the consumer get a chance to run
        new CountDownLatch(1).await();
    }
}
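Note that new CountDownLatch(1).await() blocks the test forever. A plain-JDK sketch of a friendlier alternative (no Kafka involved; the thread pool merely stands in for the async send callbacks): size the latch to the number of sends, count down in each callback, and wait with a timeout.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class LatchDemo {

    // Returns true once all "callbacks" have fired, false on timeout.
    static boolean awaitAllCallbacks(int messages) {
        CountDownLatch latch = new CountDownLatch(messages);
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < messages; i++) {
            // stands in for the async send callback counting down on completion
            pool.submit(latch::countDown);
        }
        boolean done;
        try {
            // bounded wait instead of blocking forever
            done = latch.await(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            done = false;
        }
        pool.shutdown();
        return done;
    }

    public static void main(String[] args) {
        System.out.println("all callbacks fired: " + awaitAllCallbacks(10));
    }
}
```

With this shape the test exits as soon as the last callback fires, instead of hanging until the JUnit runner is killed.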
Got it, buddy? ~
https://github.com/yangshangwei/boot2/tree/master/springkafkaACK