Writing Prometheus Monitoring Data Remotely to a Kafka Cluster
Table of Contents
- Preface
- 1. Environment Overview
- 1.1 System Environment
- 1.2 Deployment List
- 1.3 Component Versions
- 2. Deployment Steps
- 2.1 Prometheus Deployment
- 2.2 Kafka Cluster Deployment
- 2.3 prometheus-kafka-adapter Deployment
- 3. Data Verification
- 4. Summary
Preface
The project required storing Prometheus monitoring data in Kafka. For convenience, I initially stood up a single-node Kafka to verify the pipeline, but no data ever showed up in Kafka; the problem went away only after I deployed a proper Kafka cluster.
Most setups write Prometheus data to Kafka through the prometheus-kafka-adapter plugin; if time and resources allow, you can also develop your own adapter.
1. Environment Overview
1.1 System Environment
#1 OS version
[root@es2][/opt]
$ cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)
#2 Internet connectivity check
[root@es2][/usr/local/prometheus]
$ ping -c 3 www.baidu.com
PING www.a.shifen.com (220.181.38.150) 56(84) bytes of data.
64 bytes from 220.181.38.150 (220.181.38.150): icmp_seq=1 ttl=51 time=6.65 ms
64 bytes from 220.181.38.150 (220.181.38.150): icmp_seq=2 ttl=51 time=6.48 ms
64 bytes from 220.181.38.150 (220.181.38.150): icmp_seq=3 ttl=51 time=5.95 ms

--- www.a.shifen.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 5.950/6.363/6.652/0.306 ms
1.2 Deployment List

No. | IP | Hostname | Deployed Services |
---|---|---|---|
1 | 192.168.56.108 | es2 | prometheus |
2 | 192.168.56.116 | cydocker | docker, prometheus-kafka-adapter |
3 | 192.168.56.101 (Kafka cluster) | cnode1 | zk, kafka |
1.3 组件版本
prometheus | 2.29.1 |
---|---|
docker | 20.10.14 |
prometheus-kafka-adapter | 1.7.0 |
kafka | 2.12-3.6.0 |
2. Deployment Steps
2.1 Prometheus Deployment
#1 Extract the archive
[root@node1 package]# tar -zxvf prometheus-2.29.1.linux-amd64.tar.gz
#2 Move and rename
[root@node1 package]# mv prometheus-2.29.1.linux-amd64 /usr/local/prometheus
#3 Create the data directory
[root@es2][/usr/local/prometheus]
$ mkdir -p /data/prometheus
#4 Edit prometheus.yml
[root@es2][/usr/local/prometheus]
$ vim prometheus.yml
  - job_name: "prometheus"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["localhost:9090"]

# Remote-write to prometheus-kafka-adapter
remote_write:
  - url: "http://192.168.56.116:10401/receive"  # replace with the IP of the node running prometheus-kafka-adapter
#5 Start the service (for testing only; register a proper service in production)
/usr/local/prometheus/prometheus \
  --config.file=/usr/local/prometheus/prometheus.yml \
  --storage.tsdb.path=/data/prometheus \
  --storage.tsdb.retention.time=365d \
  --web.listen-address=0.0.0.0:9090 \
  --web.enable-admin-api \
  --web.enable-lifecycle
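Running the binary in the foreground is fine for testing; for production, the service should be registered with systemd. A minimal unit-file sketch, assuming the install and data paths used above (unit name and restart policy are my own choices, not from the original setup):

```ini
# /etc/systemd/system/prometheus.service -- minimal sketch
[Unit]
Description=Prometheus
After=network-online.target

[Service]
ExecStart=/usr/local/prometheus/prometheus \
  --config.file=/usr/local/prometheus/prometheus.yml \
  --storage.tsdb.path=/data/prometheus \
  --storage.tsdb.retention.time=365d \
  --web.listen-address=0.0.0.0:9090 \
  --web.enable-admin-api \
  --web.enable-lifecycle
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After writing the file, enable it with `systemctl daemon-reload` followed by `systemctl enable --now prometheus`.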
2.2 Kafka Cluster Deployment
See my previous post, "Kafka Cluster Deployment (Detailed Edition)", which I won't repeat here. Note that the topic must be created first:
[root@cnode1][/opt/kafka_2.12-3.6.0/bin]
$ ./kafka-topics.sh --create --topic test1 --bootstrap-server cnode1:9092 --partitions 1 --replication-factor 3
2.3 prometheus-kafka-adapter Deployment
Installing Docker itself is not covered here; plenty of other guides exist.
#1 Pull the image
[root@cydocker][~]
$ docker pull telefonica/prometheus-kafka-adapter:1.7.0
#2 Write the startup script
[root@cydocker][~/adapoter]
$ cat adapoter.sh
#!/usr/bin/env bash
docker run -d --name prometheus-kafka-adapter --restart=always -m 2g \
-e KAFKA_BROKER_LIST=192.168.56.101:9092 \
-e KAFKA_TOPIC=test1 \
-e PORT=10401 \
-e SERIALIZATION_FORMAT=json \
-e GIN_MODE=release \
-e LOG_LEVEL=debug \
-p 10401:10401 \
telefonica/prometheus-kafka-adapter:1.7.0
#3 Check the running status
[root@cydocker][~/adapoter]
$ docker ps |grep kafka
8d47b0210004 telefonica/prometheus-kafka-adapter:1.7.0 "/bin/sh -c /prometh…" 4 days ago Up 35 minutes 0.0.0.0:10401->10401/tcp, :::10401->10401/tcp prometheus-kafka-adapter
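The `docker run` flags in the startup script map one-to-one onto a compose file, which is easier to keep under version control. A sketch of an equivalent docker-compose.yml, using the same image, environment variables, memory limit, and port mapping as the script above (the compose translation itself is my own, not part of the original setup):

```yaml
# docker-compose.yml -- equivalent to adapoter.sh above
version: "2.4"            # 2.x supports mem_limit directly
services:
  prometheus-kafka-adapter:
    image: telefonica/prometheus-kafka-adapter:1.7.0
    container_name: prometheus-kafka-adapter
    restart: always
    mem_limit: 2g
    environment:
      KAFKA_BROKER_LIST: "192.168.56.101:9092"
      KAFKA_TOPIC: "test1"
      PORT: "10401"
      SERIALIZATION_FORMAT: "json"
      GIN_MODE: "release"
      LOG_LEVEL: "debug"
    ports:
      - "10401:10401"
```

Start it with `docker compose up -d` from the directory containing the file.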
3. Data Verification
Log in to a Kafka node and consume the topic to verify; the results are as follows:
[root@cnode1][/opt/kafka_2.12-3.6.0/bin]
$ ./kafka-console-consumer.sh --bootstrap-server cnode1:9092 --topic test1
{"labels":{"__name__":"prometheus_tsdb_reloads_total","instance":"127.0.0.1:9090","job":"prometheus"},"name":"prometheus_tsdb_reloads_total","timestamp":"2024-11-14T04:39:38Z","value":"7"}
{"labels":{"__name__":"prometheus_tsdb_retention_limit_bytes","instance":"127.0.0.1:9090","job":"prometheus"},"name":"prometheus_tsdb_retention_limit_bytes","timestamp":"2024-11-14T04:39:38Z","value":"0"}
{"labels":{"__name__":"prometheus_tsdb_size_retentions_total","instance":"127.0.0.1:9090","job":"prometheus"},"name":"prometheus_tsdb_size_retentions_total","timestamp":"2024-11-14T04:39:38Z","value":"0"}
{"labels":{"__name__":"prometheus_tsdb_storage_blocks_bytes","instance":"127.0.0.1:9090","job":"prometheus"},"name":"prometheus_tsdb_storage_blocks_bytes","timestamp":"2024-11-14T04:39:38Z","value":"1239229"}
{"labels":{"__name__":"prometheus_tsdb_symbol_table_size_bytes","instance":"127.0.0.1:9090","job":"prometheus"},"name":"prometheus_tsdb_symbol_table_size_bytes","timestamp":"2024-11-14T04:39:38Z","value":"392"}
{"labels":{"__name__":"prometheus_tsdb_time_retentions_total","instance":"127.0.0.1:9090","job":"prometheus"},"name":"prometheus_tsdb_time_retentions_total","timestamp":"2024-11-14T04:39:38Z","value":"0"}
{"labels":{"__name__":"prometheus_tsdb_tombstone_cleanup_seconds_bucket","instance":"127.0.0.1:9090","job":"prometheus","le":"0.005"},"name":"prometheus_tsdb_tombstone_cleanup_seconds_bucket","timestamp":"2024-11-14T04:39:38Z","value":"0"}
{"labels":{"__name__":"prometheus_tsdb_tombstone_cleanup_seconds_bucket","instance":"127.0.0.1:9090","job":"prometheus","le":"0.01"},"name":"prometheus_tsdb_tombstone_cleanup_seconds_bucket","timestamp":"2024-11-14T04:39:38Z","value":"0"}
{"labels":{"__name__":"prometheus_tsdb_tombstone_cleanup_seconds_bucket","instance":"127.0.0.1:9090","job":"prometheus","le":"0.025"},"name":"prometheus_tsdb_tombstone_cleanup_seconds_bucket","timestamp":"2024-11-14T04:39:38Z","value":"0"}
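With `SERIALIZATION_FORMAT=json`, each Kafka message is a single JSON object with `name`, `labels`, `timestamp`, and `value` fields, as the output above shows. A minimal sketch of parsing one such message on the consumer side (stdlib only; the sample payload is copied from the first message above):

```python
import json

# One message as emitted by prometheus-kafka-adapter in JSON mode
raw = ('{"labels":{"__name__":"prometheus_tsdb_reloads_total",'
       '"instance":"127.0.0.1:9090","job":"prometheus"},'
       '"name":"prometheus_tsdb_reloads_total",'
       '"timestamp":"2024-11-14T04:39:38Z","value":"7"}')

msg = json.loads(raw)
# Note: "value" arrives as a string, so convert before numeric processing
value = float(msg["value"])
print(msg["name"], msg["labels"]["job"], value)
# → prometheus_tsdb_reloads_total prometheus 7.0
```

The same parsing applies to every record a real consumer pulls from the `test1` topic; only the `raw` string would come from the Kafka message value instead of a literal.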
4. Summary
This completes the verification of pushing Prometheus monitoring data to a Kafka cluster. If you have any questions, feel free to reach out and discuss.