
Elasticsearch + Logstash + Kibana + Filebeat for nginx log collection (log content not filtered)

Single-node deployment

Environment preparation

Based on a Rocky Linux 9 virtual machine with 4 GB of RAM.

yum -y install lrzsz
useradd elkf
passwd elkf    # any password will do
su - elkf
rz             # upload the packages; version 7.17.8 is used here

Download: https://www.elastic.co/downloads/past-releases/

tar -xf elasticsearch-7.17.8-linux-x86_64.tar.gz 
tar -xf filebeat-7.17.8-linux-x86_64.tar.gz 
tar -xf kibana-7.17.8-linux-x86_64.tar.gz  
tar -xf logstash-7.17.8-linux-x86_64.tar.gz 

Configure Elasticsearch

Installation path: /home/elkf/elasticsearch-7.17.8

# Set the following environment variables
vim ~/.bash_profile
export ES_JAVA_HOME=/home/elkf/elasticsearch-7.17.8/jdk
export ES_HOME=/home/elkf/elasticsearch-7.17.8
source ~/.bash_profile

# Configure the JVM heap allocation
vim config/jvm.options
-Xms1g
-Xmx1g    # Xms and Xmx should match and stay well below the VM's 4 GB of RAM

# Configure the Elasticsearch settings
vim config/elasticsearch.yml 
network.host: 0.0.0.0
discovery.type: single-node
xpack.security.enabled: false

# Start Elasticsearch
bin/elasticsearch -d

# If it started successfully:
curl 127.0.0.1:9200
{"name" : "maxscale","cluster_name" : "elasticsearch","cluster_uuid" : "g6ZSGcSuTzSkthyWX5W90w","version" : {"number" : "7.17.8","build_flavor" : "default","build_type" : "tar","build_hash" : "120eabe1c8a0cb2ae87cffc109a5b65d213e9df1","build_date" : "2022-12-02T17:33:09.727072865Z","build_snapshot" : false,"lucene_version" : "8.11.1","minimum_wire_compatibility_version" : "6.8.0","minimum_index_compatibility_version" : "6.0.0-beta1"},"tagline" : "You Know, for Search"
}
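Optionally, the cluster health API gives another quick sanity check from the shell (a single node normally reports green or yellow):

curl '127.0.0.1:9200/_cluster/health?pretty'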

Configure Kibana

Installation path: /home/elkf/kibana-7.17.8-linux-x86_64

vim config/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]
server.name: "kibana"
kibana.index: ".kibana"
i18n.locale: "zh-CN"    # use the Chinese UI

# Start Kibana
nohup bin/kibana &

Open http://ip:5601 in a browser (ip is the VM's IP; on a cloud host use its public IP).
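Kibana also exposes a status endpoint, so an optional command-line check looks like this (ip is the same placeholder as above):

curl http://ip:5601/api/status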

Configure Logstash

Installation path: /home/elkf/logstash-7.17.8

# Create the file whose contents Logstash will collect
mkdir test
touch test/file.txt

# Define the Logstash collection rules
vim config/pipelines.yml
input {
  file {
    path => "/home/elkf/logstash-7.17.8/test/file.txt"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "system-log-%{+YYYY.MM.dd}"
  }
  stdout {
    codec => rubydebug
  }
}

# Start Logstash (an absolute path to the config file also works)
nohup bin/logstash -f config/pipelines.yml &

# From another terminal, append some test lines to check that collection works
echo 15 >> /home/elkf/logstash-7.17.8/test/file.txt
echo alpha >> /home/elkf/logstash-7.17.8/test/file.txt

In Kibana, check whether a system-log-<date> index has appeared.
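If you prefer checking from the shell instead of Kibana, the _cat indices API lists the matching indices (optional):

curl '127.0.0.1:9200/_cat/indices/system-log-*?v'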

Because Logstash alone can handle collection, filtering, and output in the single-node deployment, Filebeat is not deployed here.

Cluster deployment

Node allocation for nginx log analysis based on ELKF:

192.168.25.101:elasticsearch

192.168.25.102:kibana

192.168.25.103:logstash

192.168.25.104:nginx+filebeat

Deploy Elasticsearch

At least 4 GB of RAM is recommended for this node; with less it may fail to run.

# For importing the packages, see the single-node deployment
# For easier management, extract the software under /usr/local/
hostnamectl set-hostname ElasticSearch
tar -xf elasticsearch-7.17.8-linux-x86_64.tar.gz -C /usr/local
cd /usr/local/elasticsearch-7.17.8
# Configure the JVM heap and the kernel vm.max_map_count setting that Elasticsearch requires
vim config/jvm.options
-Xms4g
-Xmx4g
vim /etc/sysctl.conf
vm.max_map_count=262144
sysctl -p    # apply the kernel setting

# Configure the cluster settings; note the differences from the single-node config
mkdir /var/lib/elasticsearch/
mkdir /var/log/elasticsearch/
vim config/elasticsearch.yml
cluster.name: elkf
node.name: es1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch/
cluster.initial_master_nodes: ["es1"]
network.host: 0.0.0.0    # bind to all interfaces so other nodes can reach 192.168.25.101:9200

# For security reasons Elasticsearch refuses to run as root, so create a dedicated user for it
useradd elastic
chown -R elastic:elastic /usr/local/elasticsearch-7.17.8/
chown -R elastic:elastic /var/lib/elasticsearch /var/log/elasticsearch
su elastic
bin/elasticsearch -d

curl 192.168.25.101:9200
{"name" : "es1","cluster_name" : "elkf","cluster_uuid" : "uKedNB_VR8e90JOzgjrctg","version" : {"number" : "7.17.8","build_flavor" : "default","build_type" : "tar","build_hash" : "120eabe1c8a0cb2ae87cffc109a5b65d213e9df1","build_date" : "2022-12-02T17:33:09.727072865Z","build_snapshot" : false,"lucene_version" : "8.11.1","minimum_wire_compatibility_version" : "6.8.0","minimum_index_compatibility_version" : "6.0.0-beta1"},"tagline" : "You Know, for Search"}

Configure Kibana

This node needs at least 1 GB of RAM.

# For importing the package, see the single-node deployment
hostnamectl set-hostname kibana
tar -xf kibana-7.17.8-linux-x86_64.tar.gz  -C /usr/local
cd /usr/local/kibana-7.17.8-linux-x86_64/
vim config/kibana.yml
server.port: 5601
server.host: "192.168.25.102"
elasticsearch.hosts: ["http://192.168.25.101:9200"]
server.name: "kibana"
kibana.index: ".kibana"
i18n.locale: "zh-CN"
# Started as root here; in production, create a dedicated user and enable authentication
nohup bin/kibana --allow-root &

Configure Logstash

This node needs at least 1 GB of RAM.

hostnamectl set-hostname logstash
tar -xf logstash-7.17.8-linux-x86_64.tar.gz -C /usr/local/
cd /usr/local/logstash-7.17.8

# Test that Logstash itself can start
bin/logstash -e 'input { stdin {} } output { stdout {} }'

# After writing config/pipelines.yml (see below), validate it
bin/logstash -f config/pipelines.yml --config.test_and_exit

# Once the test passes, start Logstash
nohup bin/logstash -f config/pipelines.yml &
  • Configure pipelines.yml

To test the connection between Logstash and Elasticsearch, collect the Logstash host's own system log:

input {
  file {
    path => "/var/log/messages"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.25.101:9200"]
    index => "system-log-%{+YYYY.MM.dd}"
  }
  stdout {
    codec => rubydebug
  }
}
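Once this pipeline is running, the index should start filling; an optional check of the document count from the Logstash host:

curl '192.168.25.101:9200/system-log-*/_count?pretty'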

Configure Filebeat + nginx

hostnamectl set-hostname filebeatnginx
yum -y install nginx
systemctl start nginx

# For importing the package, see the single-node deployment
# Deploy Filebeat
tar -xf filebeat-7.17.8-linux-x86_64.tar.gz  -C /usr/local
cd /usr/local/filebeat-7.17.8-linux-x86_64/

Hands-on nginx log collection

Configure Logstash

mv /usr/local/logstash-7.17.8/config/pipelines.yml /usr/local/logstash-7.17.8/config/pipelines.yml.bak
vim /usr/local/logstash-7.17.8/config/pipelines.yml
input {
  beats {
    port => 5004
  }
}
output {
  elasticsearch {
    hosts => ["192.168.25.101:9200"]
    index => "name1-nginx-access-%{+YYYY.MM.dd}"    # Elasticsearch index names must be lowercase
  }
  stdout {
    codec => rubydebug
  }
}

Test and start

/usr/local/logstash-7.17.8/bin/logstash -f /usr/local/logstash-7.17.8/config/pipelines.yml --config.test_and_exit
nohup /usr/local/logstash-7.17.8/bin/logstash -f /usr/local/logstash-7.17.8/config/pipelines.yml &
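Because the pipeline also prints events with the rubydebug codec and nohup redirects stdout to nohup.out, you can watch events arrive (assuming Logstash was started from the current directory as above):

tail -f nohup.out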

Configure Filebeat

Config file: /usr/local/filebeat-7.17.8-linux-x86_64/filebeat.yml

mv /usr/local/filebeat-7.17.8-linux-x86_64/filebeat.yml /usr/local/filebeat-7.17.8-linux-x86_64/filebeat.yml.bak
vim /usr/local/filebeat-7.17.8-linux-x86_64/filebeat.yml
# ============================== Filebeat inputs ===============================
filebeat.inputs:
- type: filestream
  id: Name1-nginx-monitor
  enabled: true
  paths:
    - /var/log/nginx/access.log
    - /var/log/nginx/error.log
# ================================== General ===================================
tags: ["name1", "nginx"]
# ------------------------------ Logstash Output -------------------------------
output.logstash:
  hosts: ["192.168.25.103:5004"]
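Before starting, Filebeat's test subcommands can optionally check the configuration file and the connection to the Logstash output:

cd /usr/local/filebeat-7.17.8-linux-x86_64/
./filebeat test config
./filebeat test output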
# Start Filebeat in the background
nohup ./filebeat &
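To verify the whole chain, generate some nginx access traffic and check whether the index shows up in Elasticsearch (an optional sketch; adjust the IPs to your environment):

curl http://192.168.25.104/
curl '192.168.25.101:9200/_cat/indices/name1-nginx-access-*?v'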
