I recently started exploring Apache Kafka. I configured ZooKeeper and a Kafka instance (broker), and everything worked fine.
Yesterday I sent a large number of messages with a producer (using the default partitioner), and this activity created many log folders named in the format topic-name-partition-number, e.g. Ajinkya-0, Ajinkya-10, Ajinkya-12, and so on.
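For context, a minimal sketch of such a producer is shown below. The topic name Ajinkya comes from the question; the broker address and message count are assumptions, so adjust them to your setup.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AjinkyaProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker address is an assumption; adjust to your environment.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 1000; i++) {
                // No key is set, so the default partitioner spreads records across
                // the topic's partitions; each partition is stored in its own
                // on-disk log directory such as Ajinkya-0, Ajinkya-10, ...
                producer.send(new ProducerRecord<>("Ajinkya", "message-" + i));
            }
        }
    }
}
```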
Today, when I restarted Apache Kafka, I saw a lot of log output like this:
[2018-10-27 15:09:19,917] INFO [Log partition=__consumer_offsets-39, dir=/home/ajinkya/software/Kaftka/kafka_2.11-2.0.0/Kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2018-10-27 15:09:19,917] INFO [Log partition=__consumer_offsets-39, dir=/home/ajinkya/software/Kaftka/kafka_2.11-2.0.0/Kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2018-10-27 15:09:19,918] INFO [Log partition=__consumer_offsets-39, dir=/home/ajinkya/software/Kaftka/kafka_2.11-2.0.0/Kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2018-10-27 15:09:19,919] INFO [Log partition=__consumer_offsets-21, dir=/home/ajinkya/software/Kaftka/kafka_2.11-2.0.0/Kafka-logs] Recovering unflushed segment 0 (kafka.log.Log)
[2018-10-27 15:09:19,919] INFO [Log partition=__consumer_offsets-21, dir=/home/ajinkya/software/Kaftka/kafka_2.11-2.0.0/Kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2018-10-27 15:09:19,920] INFO [Log partition=__consumer_offsets-21, dir=/home/ajinkya/software/Kaftka/kafka_2.11-2.0.0/Kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2018-10-27 15:09:19,920] INFO [Log partition=__consumer_offsets-21, dir=/home/ajinkya/software/Kaftka/kafka_2.11-2.0.0/Kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-10-27 15:09:19,922] INFO [Log partition=Ajinkya-74, dir=/home/ajinkya/software/Kaftka/kafka_2.11-2.0.0/Kafka-logs] Recovering unflushed segment 0 (kafka.log.Log)
Why do I see log entries such as
Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
Since the messages have already been consumed, why do I still see these load logs?
Also, I see these load logs for every partition, i.e. all 50 partitions, and as more partitions are added, these log messages will only increase.
Posted on 2018-10-27 17:17:45
See the Kafka documentation. Kafka is not like traditional message brokers (JMS, RabbitMQ, etc.). Records are retained in the log for 7 days by default; see log.retention.hours and log.retention.minutes. Consumers can "rewind" and re-read anything within the retention period, which is why the log segments are still on disk and are loaded again when the broker restarts, even though the messages were already consumed.
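To illustrate what that "rewind" means in practice, here is a minimal consumer sketch. The topic name Ajinkya is taken from the question; the broker address and group id are assumptions. It seeks back to the earliest retained offset and re-reads records that were already consumed:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class RewindConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker address and group id are assumptions; adjust to your environment.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "rewind-demo");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("Ajinkya"));
            // Poll once so partitions are assigned, then seek back to the
            // earliest retained offset: already-consumed records stay on disk
            // until the retention period expires.
            consumer.poll(Duration.ofSeconds(1));
            consumer.seekToBeginning(consumer.assignment());
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("partition=%d offset=%d value=%s%n",
                        record.partition(), record.offset(), record.value());
            }
        }
    }
}
```

Because the data is retained regardless of whether it has been consumed, the broker has to load and recover each partition's log segments at startup, which is exactly what the INFO messages above describe.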
I removed the spring-kafka tag since this question has nothing to do with Spring.
https://stackoverflow.com/questions/53021153