
Kafka Environment Setup

Author: 无涯WuYa · Published 2021-03-30 10:22:17 · Column: Python自动化测试 (Python Automated Testing)

In the asynchronous interaction model we often talk about the producer/consumer pattern, which is implemented with mainstream MQ middleware, chiefly Kafka and RabbitMQ; these are also simply called message queues. Synchronous interaction suffers from latency, so in high-concurrency scenarios it is clearly unsuitable, and an asynchronous message queue is needed to resolve message blocking and backlog. For example, when a large number of requests hit the underlying DB, the excess connections hold resources that are not released, producing "Too many connections" and other errors. Scenarios like this are common, so a buffering mechanism is required, and a message queue handles such blocking and backlog well; more precisely, a message queue relieves pressure on the system by processing requests asynchronously. A message queue is first-in, first-out and serves mainly as a communication mechanism between processes or threads for handling incoming requests. Under asynchronous communication, the client and the server do not need to know of each other's existence; each side only concerns itself with the messages in the MQ.

Kafka is a distributed real-time data streaming platform that originated at LinkedIn. Early on, LinkedIn needed to collect performance metrics from the systems and application services of its various business lines for analysis, which involved a very large volume of data. As the business expanded, the data volume grew until the internally built system could no longer keep up, so LinkedIn developed Kafka in-house, which is why high throughput is one of Kafka's defining characteristics. Kafka is now a top-level open-source project of the Apache Software Foundation. It provides publish and subscribe capabilities: applications send data to a Kafka cluster (which can also run as a single node) and read data back from it, so Kafka's working model is essentially the producer/consumer pattern, where producers are responsible for writing data into the Kafka cluster for storage and consumers are responsible for reading it.

Kafka is a distributed system whose broker nodes are managed and coordinated by Zookeeper, so Zookeeper must be installed before Kafka. Download Zookeeper from the official Apache website and extract it, then create a data directory under the extracted directory to store state data.
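A minimal sketch of these steps (the archive name assumes a 3.6.x release and the install path follows this article's layout; adjust both to your environment):

tar -zxvf apache-zookeeper-3.6.2-bin.tar.gz
mv apache-zookeeper-3.6.2-bin /Applications/devOps/bigData/zookeeper
mkdir /Applications/devOps/bigData/zookeeper/data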

In the conf directory, copy zoo_sample.cfg to zoo.cfg (run from the Zookeeper root directory):
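cp conf/zoo_sample.cfg conf/zoo.cfg

Then edit the Zookeeper configuration; the main contents are: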

# The number of milliseconds of each tick
# Length of one tick in milliseconds, used as the heartbeat interval between servers and clients
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
# Maximum number of ticks a Follower may take to connect and sync to the Leader during initialization
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
# Maximum number of ticks allowed between sending a request and receiving an acknowledgement during synchronization
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# Paths where Zookeeper stores snapshot data and transaction logs
dataDir=/Applications/devOps/bigData/zookeeper/data
dataLogDir=/Applications/devOps/bigData/zookeeper/log
# the port at which the clients will connect
# Port that clients connect to
clientPort=2181
admin.serverPort=9091
# the maximum number of client connections.
# increase this if you need to handle more clients
# Maximum number of client connections
maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
# Number of snapshot files to retain
autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
# Purge frequency in hours; 0 disables the auto-purge mechanism
autopurge.purgeInterval=1
## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true

After editing the configuration file, add Zookeeper's bin directory to the PATH environment variable so its scripts can be run from anywhere.
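A minimal sketch for a Unix-like shell, assuming the install path used above (add the lines to your shell profile):

export ZOOKEEPER_HOME=/Applications/devOps/bigData/zookeeper
export PATH="$PATH:$ZOOKEEPER_HOME/bin"

Then start the server with zkServer.sh start, which outputs the following: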

ZooKeeper JMX enabled by default
Using config: /Applications/devOps/bigData/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

Run zkServer.sh status to check whether startup succeeded and which mode the server is running in; the output is:

ZooKeeper JMX enabled by default
Using config: /Applications/devOps/bigData/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: standalone
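Optionally, verify that the server is reachable with the bundled CLI client:

zkCli.sh -server 127.0.0.1:2181

Once connected, running ls / inside the client shell should list the root znodes (initially just /zookeeper).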

Now for the Kafka deployment. Likewise, download the Kafka package from the official Apache website, extract it, and add it to the PATH environment variable. In the config directory under the extracted Kafka directory, edit server.properties; the file's main contents are:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
# Set a unique ID for this broker
broker.id=0
############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092
# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3
# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
# Path where the message logs are stored
# log.dirs=/tmp/kafka-logs
log.dirs=/Applications/devOps/bigData/kafka/data

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1
############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# Allow topics to be deleted
delete.topic.enable=true

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
# Zookeeper connection address
zookeeper.connect=localhost:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000
############################# Group Coordinator Settings #############################
# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

The key setting is the Zookeeper connection address. Once configuration is complete, start Kafka with:

kafka-server-start.sh ./config/server.properties

This starts the Kafka broker in the foreground.
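To run the broker in the background instead, the startup script also accepts a -daemon flag:

kafka-server-start.sh -daemon ./config/server.properties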

Once the broker is up, you can simulate the data exchange between a producer and a consumer. The console producer will create the topic automatically on first use (auto.create.topics.enable defaults to true), but it is cleaner to create it explicitly first, as sketched below.
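A sketch using the bundled topic tool (the --bootstrap-server form assumes a reasonably recent Kafka release; older releases use --zookeeper localhost:2181 instead):

kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic login
kafka-topics.sh --list --bootstrap-server localhost:9092

Then start a console producer against the login topic: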

kafka-console-producer.sh --broker-list localhost:9092 --topic login

This enters producer mode. In another terminal, run the following command to enter consumer mode:

kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic login --from-beginning

Type Hello Kafka into the producer console, and it appears in the consumer console. An illustrative session:
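# producer console (input)
>Hello Kafka

# consumer console (output)
Hello Kafka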

This demonstrates Kafka's data exchange based on the producer/consumer model.

Thanks for reading. Follow-up posts will continue to cover Kafka applications and hands-on practice.
