
A Docker-Based Distributed Kafka Cluster

Contents

1. Overview

2. Server Plan

3. docker-compose Files

kafka{i}.yaml

kafka-ui.yaml

4. Configuring Cluster Monitoring in kafka-ui

5. Parameter Reference

6. Test Scripts

Producer (asynchronous): AsyncKafkaProducer1.py

Consumer (asynchronous): AsyncKafkaConsumer1.py

7. References


1. Overview

  • Build the Kafka cluster needed for a local development environment
  • Spread the brokers across 3 virtual machines, interconnected as Docker containers

2. Server Plan

Host             Ports              Notes
host001.dev.sb   9092, 9093, 9081   kafka0 node; kafka-ui is reached on 9081
host002.dev.sb   9092, 9093         kafka1 node
host003.dev.sb   9092, 9093         kafka2 node
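
(Inside the containers, the broker hostnames kafka0, kafka1 and kafka2 are mapped to 172.16.20.60-62 through the extra_hosts entries in the compose files below.)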

3. docker-compose Files

kafka{i}.yaml

- Here {i} stands for 0, 1, 2, one file per broker node; a small rendering sketch follows the compose file below

- The usernames and passwords are configured directly in the file

services:
  kafka:
    image: 'bitnami/kafka:3.6.2'
    container_name: kafka{i}
    hostname: kafka{i}
    restart: always
    ports:
      - 9092:9092
      - 9093:9093
    environment:
      # KRaft
      - KAFKA_CFG_NODE_ID={i}
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka0:9093,1@kafka1:9093,2@kafka2:9093
      - KAFKA_KRAFT_CLUSTER_ID=sbcluster01-mnopqrstuv
      # Listeners
      - KAFKA_CFG_LISTENERS=INTERNAL://:9094,CLIENT://:9095,CONTROLLER://:9093,EXTERNAL://:9092
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:SASL_PLAINTEXT,CLIENT:SASL_PLAINTEXT,CONTROLLER:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
      - KAFKA_CFG_ADVERTISED_LISTENERS=INTERNAL://kafka{i}:9094,CLIENT://:9095,EXTERNAL://kafka{i}:9092
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_NUM_PARTITIONS=3
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INTERNAL
      # Clustering
      - KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR=3
      - KAFKA_CFG_TRANSACTION_STATE_LOG_REPLICATION_FACTOR=3
      - KAFKA_CFG_TRANSACTION_STATE_LOG_MIN_ISR=2
      # Log
      - KAFKA_CFG_LOG_RETENTION_HOURS=72
      # SASL
      - KAFKA_CFG_SASL_MECHANISM_CONTROLLER_PROTOCOL=PLAIN
      - KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
      - KAFKA_CFG_SASL_ENABLED_MECHANISMS=PLAIN
      - KAFKA_CONTROLLER_USER=kfkuser
      - KAFKA_CONTROLLER_PASSWORD=youknow
      - KAFKA_INTER_BROKER_USER=kfkuser
      - KAFKA_INTER_BROKER_PASSWORD=youknow
      - KAFKA_CLIENT_USERS=kfkuser
      - KAFKA_CLIENT_PASSWORDS=youknow
      # Others
      - TZ=Asia/Shanghai
    volumes:
      - '/data0/Server/Db/kafka{i}:/bitnami/kafka'
    extra_hosts:
      - "kafka0:172.16.20.60"
      - "kafka1:172.16.20.61"
      - "kafka2:172.16.20.62"
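
Since only the {i} placeholder differs between the three node files, they can be generated from a single template instead of being edited by hand. A minimal sketch, assuming the file above is saved as kafka-template.yaml (a hypothetical name):

# Minimal sketch: expand kafka-template.yaml (hypothetical file name) into
# kafka0.yaml, kafka1.yaml and kafka2.yaml by substituting the {i} placeholder.
from pathlib import Path

template = Path("kafka-template.yaml").read_text(encoding="utf-8")
for i in range(3):
    out = Path(f"kafka{i}.yaml")
    out.write_text(template.replace("{i}", str(i)), encoding="utf-8")
    print(f"wrote {out}")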
kafka-ui.yaml
services:
  kafka-ui:
    image: 'provectuslabs/kafka-ui:master'
    container_name: kafka-ui
    restart: always
    ports:
      - 9081:8080
    environment:
      - KAFKA_CLUSTERS_0_NAME=local
      - DYNAMIC_CONFIG_ENABLED=true
      - AUTH_TYPE=LOGIN_FORM
      - SPRING_SECURITY_USER_NAME=admin
      - SPRING_SECURITY_USER_PASSWORD=youknow
    extra_hosts:
      - "kafka0:172.16.20.60"
      - "kafka1:172.16.20.61"
      - "kafka2:172.16.20.62"
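
With the files distributed to their hosts, each broker is started on its own machine with docker compose -f kafka{i}.yaml up -d, and kafka-ui is started the same way from kafka-ui.yaml on host001.dev.sb.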

4. Configuring Cluster Monitoring in kafka-ui
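
Because kafka-ui runs with DYNAMIC_CONFIG_ENABLED=true and form login, the cluster is added through the web UI rather than via environment variables: open http://host001.dev.sb:9081, log in with the admin/youknow account from kafka-ui.yaml, and walk through the configuration wizard, entering a cluster name, the three bootstrap servers (host001/002/003.dev.sb:9092), and the SASL_PLAINTEXT / PLAIN credentials from section 3 (see the wizard document in the references). Before wiring up the UI it is worth confirming that the brokers answer on those endpoints; below is a minimal sketch using confluent_kafka's AdminClient, reusing the credentials configured above.

# Minimal connectivity check: list the brokers and the active controller via
# the Admin API, with the same bootstrap servers and SASL/PLAIN credentials
# as the compose files above.
from confluent_kafka.admin import AdminClient

conf = {
    "bootstrap.servers": "host001.dev.sb:9092,host002.dev.sb:9092,host003.dev.sb:9092",
    "security.protocol": "SASL_PLAINTEXT",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "kfkuser",
    "sasl.password": "youknow",
}

admin = AdminClient(conf)
md = admin.list_topics(timeout=10)  # raises KafkaException if no broker answers
print("controller id:", md.controller_id)
for broker_id, broker in sorted(md.brokers.items()):
    print(f"broker {broker_id}: {broker.host}:{broker.port}")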

5. Parameter Reference

Parameter: Description

- KAFKA_CFG_PROCESS_ROLES: the KRaft roles this node takes on, broker and/or controller. Example: KAFKA_CFG_PROCESS_ROLES=controller,broker
- KAFKA_KRAFT_CLUSTER_ID: cluster ID; all nodes of the same cluster must use the same value
- KAFKA_CFG_CONTROLLER_QUORUM_VOTERS: list of controller quorum voters
- KAFKA_CFG_CONTROLLER_LISTENER_NAMES: name of the controller listener
- KAFKA_CFG_NUM_PARTITIONS: default number of partitions for new topics
- KAFKA_CFG_LISTENERS: addresses and ports the listeners bind to
- KAFKA_CFG_ADVERTISED_LISTENERS: listener addresses and ports advertised to clients
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: security protocol for each listener; SASL_PLAINTEXT here means authentication only, the transport itself is not encrypted
- KAFKA_CLIENT_USERS: SASL client account(s)
- KAFKA_CLIENT_PASSWORDS: SASL client password(s)

Clustering:

- KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR: replication factor of Kafka's internal __consumer_offsets topic, which stores consumer offsets
- KAFKA_CFG_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: replication factor of the internal __transaction_state topic, which stores the transaction log
- KAFKA_CFG_TRANSACTION_STATE_LOG_MIN_ISR: minimum number of in-sync replicas (ISR) for the internal __transaction_state topic; the ISR is the set of replicas that stay in sync with the leader

Log:

- KAFKA_CFG_LOG_DIRS: log (data) directories
- KAFKA_CFG_LOG_RETENTION_HOURS: maximum time data is retained; once exceeded, data is handled according to the policy set in log.cleanup.policy. The default is 168 hours, i.e. one week
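
To see the partition and replication defaults in action, the short sketch below creates the test topic used in section 6 with 3 partitions and replication factor 3; the bootstrap servers and credentials are the ones configured above.

# Minimal sketch: create the section-6 test topic with 3 partitions and
# replication factor 3, matching KAFKA_CFG_NUM_PARTITIONS and the
# replication settings configured above.
from confluent_kafka.admin import AdminClient, NewTopic

conf = {
    "bootstrap.servers": "host001.dev.sb:9092,host002.dev.sb:9092,host003.dev.sb:9092",
    "security.protocol": "SASL_PLAINTEXT",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "kfkuser",
    "sasl.password": "youknow",
}

admin = AdminClient(conf)
topic = NewTopic("zx001.msg.user", num_partitions=3, replication_factor=3)
for name, future in admin.create_topics([topic]).items():
    try:
        future.result()  # block until the broker confirms creation
        print(f"created {name}")
    except Exception as exc:  # e.g. the topic already exists
        print(f"create {name} failed: {exc}")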

6. Test Scripts

Producer (asynchronous): AsyncKafkaProducer1.py
from confluent_kafka import Producer
import json


def delivery_report(err, msg):
    """Called once for each message produced to indicate delivery result.
    Triggered by poll() or flush()."""
    if err is not None:
        print(f"Message delivery failed: {err}")
    else:
        print(f"Message delivered to {msg.topic()} [{msg.partition()}]")


def create_async_producer(config):
    """Creates an instance of an asynchronous Kafka producer."""
    return Producer(config)


def produce_messages(producer, topic, messages):
    """Asynchronously produces messages to a Kafka topic."""
    for message in messages:
        # Trigger any available delivery report callbacks from previous produce() calls
        producer.poll(0)
        # Asynchronously produce a message; the delivery report callback
        # will be triggered from poll() above, or flush() below, when the message
        # has been successfully delivered or failed permanently.
        producer.produce(topic, json.dumps(message).encode("utf-8"), callback=delivery_report)
    # Wait for any outstanding messages to be delivered and delivery report
    # callbacks to be triggered.
    producer.flush()


if __name__ == "__main__":
    # Kafka configuration
    # Replace these with your server's configuration
    conf = {
        "bootstrap.servers": "host001.dev.sb:9092,host002.dev.sb:9092,host003.dev.sb:9092",
        "client.id": "PythonProducer",
        "security.protocol": "SASL_PLAINTEXT",
        "sasl.mechanisms": "PLAIN",
        "sasl.username": "kfkuser",
        "sasl.password": "youknow",
    }

    # Create an asynchronous Kafka producer
    async_producer = create_async_producer(conf)

    # Messages to send to Kafka
    messages_to_send = [{"key": "value1a"}, {"key": "value2a"}, {"key": "value3a"}]

    # Produce messages
    produce_messages(async_producer, "zx001.msg.user", messages_to_send)
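
Assuming the confluent-kafka package is installed (pip install confluent-kafka), running python AsyncKafkaProducer1.py should print one delivery report per message, naming the topic and the partition each record landed in.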
Consumer (asynchronous): AsyncKafkaConsumer1.py
from confluent_kafka import Consumer, KafkaError, KafkaException
import asyncio
import json
import logging
from datetime import datetime

# Configure the log format; the '%(...)s' placeholders are logging parameters
log_format = "%(message)s"
logging.basicConfig(
    filename="logs/kafka_messages1.log", format=log_format, level=logging.INFO
)


async def consume_loop(consumer, topics):
    try:
        # Subscribe to the topics
        consumer.subscribe(topics)
        while True:
            # Poll for a message
            msg = consumer.poll(timeout=1.0)
            if msg is None:
                continue
            if msg.error():
                if msg.error().code() == KafkaError._PARTITION_EOF:
                    # End-of-partition event
                    print(
                        "%% %s [%d] reached end at offset %d\n"
                        % (msg.topic(), msg.partition(), msg.offset())
                    )
                elif msg.error():
                    raise KafkaException(msg.error())
            else:
                # Normal message
                raw_message = msg.value()
                # print(f"Raw message: {raw_message}")
                str_msg = raw_message.decode("utf-8")
                parsed_message = json.loads(str_msg)
                parsed_message["time"] = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
                print(f"Received message: {type(parsed_message)} : {parsed_message}")
                json_data = json.dumps(parsed_message, ensure_ascii=False)
                logging.info("{}".format(json_data))
            await asyncio.sleep(0.01)  # sleep briefly to yield control
    finally:
        # Close the consumer
        consumer.close()


async def consume():
    # Consumer configuration
    conf = {
        "bootstrap.servers": "host001.dev.sb:9092,host002.dev.sb:9092,host003.dev.sb:9092",
        "group.id": "MsgGroup2",
        "auto.offset.reset": "earliest",
        "client.id": "PythonConsumer",
        "security.protocol": "SASL_PLAINTEXT",
        "sasl.mechanisms": "PLAIN",
        "sasl.username": "kfkuser",
        "sasl.password": "youknow",
    }

    # Create the consumer
    consumer = Consumer(conf)
    await consume_loop(consumer, ["zx001.msg.user"])


if __name__ == "__main__":
    asyncio.run(consume())
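
Note that the consumer writes its log to logs/kafka_messages1.log; the logs/ directory must exist before the script starts, otherwise logging.basicConfig raises FileNotFoundError when it opens the file.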

7. References

- Apache Kafka® Quick Start - Local Install With Docker
- kafka-ui-docs/configuration/configuration-wizard.md at main · provectus/kafka-ui-docs · GitHub
- https://juejin.cn/post/7187301063832109112
