Deploying a Kafka Cluster on CentOS 7


First, note that Kafka depends on ZooKeeper. The Kafka distribution used here bundles ZooKeeper; download it from http://ftp.cuhk.edu.hk/pub/packages/apache.org/kafka/2.5.0/kafka_2.12-2.5.0.tgz (the downloaded file is shown below). The environment also needs Java; see https://www.wulaoer.org/?p=487 for that setup. With those in place, deployment can begin.
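
If you download directly on the server, wget can fetch it. A minimal sketch, assuming the CUHK mirror above is still serving this release:

[root@www.wulaoer.org ~]# wget http://ftp.cuhk.edu.hk/pub/packages/apache.org/kafka/2.5.0/kafka_2.12-2.5.0.tgz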

[root@www.wulaoer.org ~]#  ll kafka_2.12-2.5.0.tgz 
-rw-r--r--. 1 root root 61604633 Apr 25  2020 kafka_2.12-2.5.0.tgz
[root@www.wulaoer.org ~]# tar -zxf kafka_2.12-2.5.0.tgz 
[root@www.wulaoer.org ~]# mv kafka_2.12-2.5.0 /usr/local/kafka

The files must end up on all of the servers; I won't detail that step here, since downloading on each host or copying with scp both work (see the scp sketch below).
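
A quick scp sketch, assuming the other two nodes are the 10.16.201.158 and 10.16.200.67 hosts from the cluster configuration below and that root SSH access is available:

[root@www.wulaoer.org ~]# scp kafka_2.12-2.5.0.tgz root@10.16.201.158:~/
[root@www.wulaoer.org ~]# scp kafka_2.12-2.5.0.tgz root@10.16.200.67:~/

Then extract and move the archive on each node exactly as above.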

Configuring the ZooKeeper cluster

Kafka ships with its own ZooKeeper, so there is no need to install ZooKeeper separately. Look at the zookeeper.properties file in Kafka's config directory: this is the file that configures ZooKeeper. (If it is missing, your Kafka package does not include the bundled ZooKeeper.)

[root@www.wulaoer.org ~]# ll /usr/local/kafka/config/
total 72
-rw-r--r--. 1 root root  906 Apr  8 09:13 connect-console-sink.properties
-rw-r--r--. 1 root root  909 Apr  8 09:13 connect-console-source.properties
-rw-r--r--. 1 root root 5321 Apr  8 09:13 connect-distributed.properties
-rw-r--r--. 1 root root  883 Apr  8 09:13 connect-file-sink.properties
-rw-r--r--. 1 root root  881 Apr  8 09:13 connect-file-source.properties
-rw-r--r--. 1 root root 2247 Apr  8 09:13 connect-log4j.properties
-rw-r--r--. 1 root root 2540 Apr  8 09:13 connect-mirror-maker.properties
-rw-r--r--. 1 root root 2262 Apr  8 09:13 connect-standalone.properties
-rw-r--r--. 1 root root 1221 Apr  8 09:13 consumer.properties
-rw-r--r--. 1 root root 4675 Apr  8 09:13 log4j.properties
-rw-r--r--. 1 root root 1925 Apr  8 09:13 producer.properties
-rw-r--r--. 1 root root 6849 Apr  8 09:13 server.properties
-rw-r--r--. 1 root root 1032 Apr  8 09:13 tools-log4j.properties
-rw-r--r--. 1 root root 1169 Apr  8 09:13 trogdor.conf
-rw-r--r--. 1 root root 1205 Apr  8 09:13 zookeeper.properties

Let's take a look at the zookeeper.properties configuration file.

[root@www.wulaoer.org ~]#  cat /usr/local/kafka/config/zookeeper.properties
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
# 
#    http://www.apache.org/licenses/LICENSE-2.0
# 
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# the directory where the snapshot is stored.
dataDir=/opt/data/zook   # data directory
dataLogDir=/opt/log/zook  # transaction log directory
# the port at which the clients will connect
clientPort=2181   # client port
# disable the per-ip limit on the number of connections since this is a non-production config
#maxClientCnxns=0   # uncommented by default; commented out here
# Disable the adminserver by default to avoid port conflicts.
# Set the port to something non-conflicting if choosing to enable this
admin.enableServer=false
# admin.serverPort=8080
tickTime=2000
initLimit=10
syncLimit=5
quorumListenOnAllIPs=true
server.1=10.16.202.197:2888:3888   # the 1, 2, 3 after server. must match each node's myid file
server.2=10.16.201.158:2888:3888
server.3=10.16.200.67:2888:3888

Once the file is configured, create the directories it references, then create a myid file on each of the servers listed in the server.N entries above; apart from this, the configuration is the same as any standalone ZooKeeper setup. Two notes: the trailing # annotations above are explanatory only, since Java properties files treat an inline # as part of the value, so leave them out of the actual file; and ZooKeeper expects myid inside dataDir, i.e. /opt/data/zook/myid here.

[root@www.wulaoer.org ~]# mkdir /opt/{data,log}
[root@www.wulaoer.org ~]# mkdir /opt/data/zook
[root@www.wulaoer.org ~]# mkdir /opt/log/zook
[root@www.wulaoer.org ~]# echo "1" > /opt/data/myid #myid中的值要和上面配置的server.1=ip中的ip要对应。

My other two servers are named DevOps and Kubernetes; their setup follows.

[root@DevOps ~]# mkdir /opt/{data,log}
[root@DevOps ~]# mkdir /opt/data/zook
[root@DevOps ~]# mkdir /opt/log/zook
[root@DevOps ~]# echo "2" > /opt/data/myid 

[root@Kubernetes ~]# mkdir /opt/{data,log}
[root@Kubernetes ~]# mkdir /opt/data/zook
[root@Kubernetes ~]# mkdir /opt/log/zook
[root@Kubernetes ~]# echo "3" > /opt/data/myid 

The ZooKeeper cluster configuration is now complete; next comes the Kafka cluster configuration.

Kafka configuration

Only a few settings in the Kafka configuration differ between nodes, namely broker.id and advertised.listeners; everything else is identical (the per-node values are shown after the listing below).

[root@www.wulaoer.org ~]# cat /usr/local/kafka/config/server.properties
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1   # this can be kept in step with the node's myid value; in any case it must be unique within the cluster

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from 
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://10.16.202.197:9092

# Hostname and port the broker will advertise to producers and consumers. If not set, 
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://10.16.202.197:9092  # this host's own IP; different on every node

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/apps/work/log/kafka

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=3   # default number of partitions per topic; set here to match the number of brokers

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=10.16.202.197:2181,10.16.201.158:2181,10.16.200.67:2181  # identical on all nodes

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
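
On the other two nodes only those two per-node settings change. Assuming the same host-to-ID mapping as the myid files above, the lines would be:

# on DevOps (10.16.201.158)
broker.id=2
advertised.listeners=PLAINTEXT://10.16.201.158:9092

# on Kubernetes (10.16.200.67)
broker.id=3
advertised.listeners=PLAINTEXT://10.16.200.67:9092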

The Kafka cluster configuration is done, so let's try starting it. I start the services in the foreground first, so that any problems show up on the console and are easy to debug; once things look healthy, run them in the background instead. Start ZooKeeper first, then Kafka.

[root@www.wulaoer.org ~]# /usr/local/kafka/bin/zookeeper-server-start.sh /usr/local/kafka/config/zookeeper.properties 

[root@www.wulaoer.org ~]# /usr/local/kafka/bin/zookeeper-server-start.sh /usr/local/kafka/config/zookeeper.properties > /dev/null 2>&1 &
[root@www.wulaoer.org ~]# /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties > /dev/null 2>&1 &
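
To confirm both processes are actually running on each node, jps (bundled with the JDK installed earlier) should list the ZooKeeper quorum peer and the Kafka broker; the output looks along these lines (PIDs will differ):

[root@www.wulaoer.org ~]# jps
12705 QuorumPeerMain
13012 Kafka
13290 Jps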

Start the other nodes the same way. With Kafka running everywhere, verify the cluster by creating a topic; once it exists, it can be used to move data through the cluster.

[root@www.wulaoer.org ~]# /usr/local/kafka/bin/kafka-topics.sh --create --zookeeper 10.16.202.197:2181,10.16.201.158:2181,10.16.200.67:2181 --replication-factor 3 --partitions 3 --topic test
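
To inspect the topic's replica placement and push a test message through, the console tools bundled with Kafka can be used; a sketch, where any broker address in the cluster works:

[root@www.wulaoer.org ~]# /usr/local/kafka/bin/kafka-topics.sh --describe --zookeeper 10.16.202.197:2181 --topic test
[root@www.wulaoer.org ~]# /usr/local/kafka/bin/kafka-console-producer.sh --broker-list 10.16.202.197:9092 --topic test
[root@www.wulaoer.org ~]# /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 10.16.202.197:9092 --topic test --from-beginning

Type a few lines into the producer; a consumer started in another terminal (or on another node) should print them back.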

That completes the Kafka cluster. I will put together a summary focused on actually using Kafka later; for now this is as far as the introduction goes.
