day04 - Logstash Extensions
1. Logstash performance can be unstable (for example, after a shutdown one day, the next startup may be extremely slow), which is why we switch to Filebeat later on.
2. Disable the geoip filter first:
# geoip {
#   source => "clientip"
# }
3. In production, if you are only running an nginx or tomcat service, the EFK architecture is sufficient.
Troubleshooting tips: what to observe
Logstash error analysis
Java log searching
Logstash
Data mapping: bandwidth statistics case
See yesterday's notes.
Multiple instances
# Supplementary command:
nc (netcat) is a powerful network utility for reading from and writing to network connections. It works over TCP or UDP and is widely used for network debugging, port scanning, data transfer, and many other tasks.
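Under the hood, `echo data | nc host port` simply opens a TCP connection, writes the bytes, and closes. A minimal Python sketch of that round trip, using a throwaway local listener as a stand-in for a Logstash tcp input (no real Logstash is assumed running):

```python
import socket
import threading

def nc_send(host: str, port: int, message: str) -> None:
    """Rough equivalent of `echo <message> | nc <host> <port>`:
    open a TCP connection, write one line, close."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall((message + "\n").encode("utf-8"))

# Throwaway local listener standing in for a Logstash tcp input.
received = []

def listener(server: socket.socket) -> None:
    conn, _ = server.accept()
    with conn:
        received.append(conn.recv(1024).decode("utf-8"))

server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0 = pick any free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=listener, args=(server,))
t.start()
nc_send("127.0.0.1", port, "www.baidu.com")
t.join()
server.close()
print(received[0].strip())      # -> www.baidu.com
```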
1. Start the first instance
[11:29:55 root@elk2:/etc/logstash/conf.d]#logstash -f 01-stdin-to-stdout.conf

2. Start the second instance
[10:36:25 root@elk2:/etc/logstash/conf.d]#logstash -f 02-file-to-stdout.conf --path.data /tmp/logstash-data

# Notes on this experiment
1. If no data path is specified, Logstash defaults to the data directory under its installation directory, e.g. "/usr/share/logstash/data/".
2. If multiple Logstash instances are started on the same node without separate data paths, startup fails with: Logstash could not be started because there is already another instance using the configured data directory. If you wish to run multiple instances, you must change the "path.data" setting.

`------------------------------ Experiment start -----------------------------------------`
1. Start Logstash instance 1
[11:31:36 root@elk2:/etc/logstash/conf.d]#vim 09-multiple-tcp-to-es.conf
input {
  # Listen on a local TCP port to receive data
  tcp {
    port => 7777
  }
}

filter {
  mutate {
    remove_field => ["@version"]
  }
}

output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["10.0.0.91:9200","10.0.0.92:9200","10.0.0.93:9200"]
    index => "linux-elfk-multiple-tcp-%{+yyyy.MM.dd}"
  }
}
[11:35:07 root@elk2:/etc/logstash/conf.d]#logstash -rf 09-multiple-tcp-to-es.conf

2. Start Logstash instance 2
[11:35:00 root@elk2:/etc/logstash/conf.d]#vim 10-multiple-beats-to-es.conf
input {
  beats {
    port => 8888
  }
}

filter {
  mutate {
    remove_field => ["@version","log","tags","agent","ecs","input"]
  }
}

output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["10.0.0.91:9200","10.0.0.92:9200","10.0.0.93:9200"]
    index => "linux-elfk-multiple-beats-%{+yyyy.MM.dd}"
  }
}
[11:36:43 root@elk2:/etc/logstash/conf.d]#logstash -rf 10-multiple-beats-to-es.conf --path.data /tmp/logstash-data

3. Start filebeat
[11:28:53 root@elk3:/etc/filebeat]#vim 10-tcp-to-logstash.yaml
filebeat.inputs:
- type: tcp
  host: "0.0.0.0:9000"

output.logstash:
  hosts: ["10.0.0.92:8888"]
[11:50:12 root@elk3:/etc/filebeat]#rm -rf /var/lib/filebeat/
[11:50:15 root@elk3:/etc/filebeat]#filebeat -e -c 10-tcp-to-logstash.yaml

4. Send test data to instance 1
[11:28:44 root@elk1:~]#echo www.baidu.com | nc 10.0.0.92 7777

5. Send test data to filebeat
[11:38:17 root@elk1:~]#echo www.jingdong.com | nc 10.0.0.93 9000
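The "already another instance using the configured data directory" error mentioned above comes from Logstash taking a lock inside path.data. A hypothetical Python sketch of the same single-owner idea (the `acquire_data_dir` helper and lock-file layout are invented for illustration; fcntl makes it Unix-only):

```python
import fcntl
import os
import tempfile

def acquire_data_dir(path: str) -> int:
    """Take exclusive ownership of a data directory via a lock file,
    in the spirit of Logstash's guard on path.data (simplified sketch)."""
    os.makedirs(path, exist_ok=True)
    fd = os.open(os.path.join(path, ".lock"), os.O_CREAT | os.O_RDWR)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd  # lock is held until this fd is closed
    except BlockingIOError:
        os.close(fd)
        raise RuntimeError("another instance is already using the configured "
                           "data directory; change the path.data setting")

data_dir = tempfile.mkdtemp()
first = acquire_data_dir(data_dir)             # instance 1: succeeds
try:
    acquire_data_dir(data_dir)                 # instance 2, same dir: refused
except RuntimeError as err:
    print("second instance:", err)
second = acquire_data_dir(tempfile.mkdtemp())  # separate --path.data: fine
```

This is why passing a distinct `--path.data` is enough to run a second instance on the same node.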
Conditional branching
1. Start the Logstash instance
[12:00:26 root@elk2:/etc/logstash/conf.d]#vim 11-if-logstash.conf
input {
  tcp {
    type => "tcp"
    port => 7777
  }
  beats {
    port => 8888
    type => "beats"
  }
}

filter {
  if [type] == "tcp" {
    mutate {
      remove_field => ["@version"]
    }
  } else {
    mutate {
      remove_field => ["@version","log","tags","agent","ecs","input"]
    }
  }
}

output {
  stdout {
    codec => rubydebug
  }
  if [type] == "tcp" {
    elasticsearch {
      hosts => ["10.0.0.91:9200","10.0.0.92:9200","10.0.0.93:9200"]
      index => "linux-elfk-if-tcp-%{+yyyy.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["10.0.0.91:9200","10.0.0.92:9200","10.0.0.93:9200"]
      index => "linux-elfk-if-beats-%{+yyyy.MM.dd}"
    }
  }
}
[12:04:48 root@elk2:/etc/logstash/conf.d]#logstash -rf 11-if-logstash.conf

2. Start the filebeat instance
[11:50:12 root@elk3:/etc/filebeat]#rm -rf /var/lib/filebeat/
[11:50:15 root@elk3:/etc/filebeat]#filebeat -e -c 10-tcp-to-logstash.yaml

3. Send test data
[11:28:44 root@elk1:~]#echo 11 | nc 10.0.0.92 7777
[11:38:17 root@elk1:~]#echo 22 | nc 10.0.0.93 9000
echo "study hard" | nc 10.0.0.92 7777
echo "k8s every day" | nc 10.0.0.93 9000
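The routing in 11-if-logstash.conf reads as an ordinary if/else over the event's "type" field. A small Python sketch of the same decision (index names are taken from the config above; `route_event` itself is an invented helper):

```python
from datetime import date

def route_event(event: dict, day: date) -> str:
    """Return the target index for an event, mirroring the if/else
    output branches in 11-if-logstash.conf."""
    stamp = day.strftime("%Y.%m.%d")
    if event.get("type") == "tcp":
        return f"linux-elfk-if-tcp-{stamp}"
    return f"linux-elfk-if-beats-{stamp}"

d = date(2024, 10, 30)
print(route_event({"type": "tcp",   "message": "11"}, d))  # linux-elfk-if-tcp-2024.10.30
print(route_event({"type": "beats", "message": "22"}, d))  # linux-elfk-if-beats-2024.10.30
```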
pipeline
1. Modify the Logstash configuration files
[14:59:26 root@elk2:/etc/logstash/conf.d]#vim 12-pipeline-tcp-to-es.conf
input {
  tcp {
    port => 7777
  }
}

filter {
  mutate {
    remove_field => ["@version"]
  }
}

output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["10.0.0.91:9200","10.0.0.92:9200","10.0.0.93:9200"]
    index => "linux-elfk-pipeline-tcp-%{+yyyy.MM.dd}"
  }
}
[14:59:48 root@elk2:/etc/logstash/conf.d]#vim 13-pipeline-beats-to-es.conf
input {
  beats {
    port => 8888
  }
}

filter {
  mutate {
    remove_field => ["@version","log","tags","agent","ecs","input"]
  }
}

output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["10.0.0.91:9200","10.0.0.92:9200","10.0.0.93:9200"]
    index => "linux-elfk-pipeline-beats-%{+yyyy.MM.dd}"
  }
}

2. Modify the pipelines configuration file
[14:44:49 root@elk2:/etc/logstash]#vim pipelines.yml
- pipeline.id: tcp
  path.config: /etc/logstash/conf.d/12-pipeline-tcp-to-es.conf
- pipeline.id: beats
  path.config: /etc/logstash/conf.d/13-pipeline-beats-to-es.conf

3. Create a symlink for pipelines.yml
[15:01:26 root@elk2:/etc/logstash]#mkdir /usr/share/logstash/config
[15:01:33 root@elk2:/etc/logstash]#ln -svf /etc/logstash/pipelines.yml /usr/share/logstash/config/

4. Start Logstash
[15:01:35 root@elk2:/etc/logstash]#logstash -r

5. Start filebeat
[15:03:06 root@elk3:/etc/filebeat]#rm -rf /var/lib/filebeat/
[15:04:04 root@elk3:/etc/filebeat]#filebeat -e -c 10-tcp-to-logstash.yaml

6. Send test data
[12:14:09 root@elk1:~]#echo "study hard" | nc 10.0.0.92 7777
[14:59:20 root@elk1:~]#echo "k8s every day" | nc 10.0.0.93 9000
Custom grok patterns
1. Prepare the custom patterns
[16:09:58 root@elk2:/etc/logstash/conf.d]#mkdir patterns
[16:10:01 root@elk2:/etc/logstash/conf.d]#vim patterns/linux
YEAR \d{4}
SCHOOL [a-z]{9}
CLASS [a-z0-9]{7}

2. Write the configuration file
[root@elk92 conf.d]# cat 14-tcp-grok_custom_patterns-es.conf
input {
  tcp {
    port => 7777
  }
}

filter {
  grok {
    # Point to the custom patterns directory; every file in it is loaded automatically
    patterns_dir => ["/etc/logstash/conf.d/patterns"]
    match => { "message" => "%{YEAR:year}北京欢迎您: https://www.%{SCHOOL:School}.com 班级: %{CLASS:cLaSs}" }
  }
}

output {
  stdout {
    codec => rubydebug
  }
  # elasticsearch {
  #   hosts => ["10.0.0.91:9200","10.0.0.92:9200","10.0.0.93:9200"]
  #   index => "linux-elfk-grok-custom-%{+yyyy.MM.dd}"
  # }
}
[16:12:10 root@elk2:/etc/logstash/conf.d]#logstash -rf 14-tcp-grok_custom_patterns-es.conf

3. Send test data
[root@elk93 ~]# echo "北京欢迎您: https://www.beijing.com 班级: linux" | nc 10.0.0.92 7777

4. The received data looks like this:
{
          "port" => 48110,
    "@timestamp" => 2024-10-30T08:13:44.725Z,
          "host" => "10.0.0.93",
      "@version" => "1",
          "tags" => [
        [0] "_grokparsefailure"
    ],
       "message" => "北京欢迎您: https://www.beijing.com 班级: linux"
}
# Note: the "_grokparsefailure" tag means the grok expression did not match:
# the test message has no leading four-digit year for %{YEAR}, "beijing" is
# only 7 letters (SCHOOL requires 9), and "linux" is only 5 characters
# (CLASS requires 7).
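One way to see why the message above was tagged `_grokparsefailure` is to substitute the custom patterns into an ordinary regex (YEAR → \d{4}, SCHOOL → [a-z]{9}, CLASS → [a-z0-9]{7}); the "oldboyedu"/"linux89" values below are invented purely to satisfy the patterns:

```python
import re

# The grok expression with the custom patterns substituted in:
pattern = re.compile(
    r"(?P<year>\d{4})北京欢迎您: https://www\.(?P<School>[a-z]{9})\.com "
    r"班级: (?P<cLaSs>[a-z0-9]{7})"
)

# The message that was actually sent: no leading year, a 7-letter school,
# a 5-character class -> no match, hence "_grokparsefailure".
failing = "北京欢迎您: https://www.beijing.com 班级: linux"
print(pattern.search(failing))   # None

# A message shaped to fit the patterns ("oldboyedu" = 9 letters,
# "linux89" = 7 characters) does match:
m = pattern.search("2024北京欢迎您: https://www.oldboyedu.com 班级: linux89")
print(m.groupdict())  # {'year': '2024', 'School': 'oldboyedu', 'cLaSs': 'linux89'}
```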
JVM tuning
1. Check the heap size the Logstash instance starts with
[16:15:05 root@elk2:~]#ps -ef | grep logstash
root 2026 1053 99 16:15 pts/0 00:00:36 /usr/share/elasticsearch/jdk/bin/java -Xms1g -Xmx1g

2. Change the Logstash heap size
[16:15:02 root@elk2:~]#vim /etc/logstash/jvm.options
## JVM configuration

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
#-Xms1g
#-Xmx1g
-Xms256m
-Xmx256m

3. Start a Logstash instance to test
[16:14:53 root@elk2:/etc/logstash/conf.d]#logstash -rf 14-tcp-grok_custom_patterns-es.conf

4. Check the JVM memory usage
[16:16:57 root@elk2:~]#ps -ef | grep logstash
root 2123 1053 99 16:17 pts/0 00:00:05 /usr/share/elasticsearch/jdk/bin/java -Xms256m -Xmx256m

# Note: java.lang.OutOfMemoryError: Java heap space
means the heap is too small; simply increase the heap size. In production, if you do not set the Logstash heap size, it defaults to 1GB.
Custom JSON log format for Nginx
1. Truncate the nginx access log
[16:38:12 root@elk3:~]# > /var/log/nginx/access.log

2. Modify the nginx configuration file
[16:38:13 root@elk3:~]#vim /etc/nginx/nginx.conf
...
    # Comment out the original access_log line
    # access_log /var/log/nginx/access.log;
    log_format nginx_json '{"@timestamp":"$time_iso8601",'
                          '"host":"$server_addr",'
                          '"clientip":"$remote_addr",'
                          '"SendBytes":$body_bytes_sent,'
                          '"responsetime":$request_time,'
                          '"upstreamtime":"$upstream_response_time",'
                          '"upstreamhost":"$upstream_addr",'
                          '"http_host":"$host",'
                          '"uri":"$uri",'
                          '"domain":"$host",'
                          '"xff":"$http_x_forwarded_for",'
                          '"referer":"$http_referer",'
                          '"tcp_xff":"$proxy_protocol_addr",'
                          '"http_user_agent":"$http_user_agent",'
                          '"status":"$status"}';
    access_log /var/log/nginx/access.log nginx_json;
...
[16:39:49 root@elk3:~]#nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[16:39:56 root@elk3:~]#systemctl reload nginx

3. Test with a request
[15:52:19 root@elk1:~]#curl 10.0.0.93
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
    width: 35em;
    margin: 0 auto;
    font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

4. Check the log format
[16:40:12 root@elk3:~]#tail -f /var/log/nginx/access.log
{"@timestamp":"2024-10-30T16:40:26+08:00","host":"10.0.0.93","clientip":"10.0.0.91","SendBytes":612,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"10.0.0.93","uri":"/index.nginx-debian.html","domain":"10.0.0.93","xff":"-","referer":"-","tcp_xff":"-","http_user_agent":"curl/7.81.0","status":"200"}

5. Collect the nginx access log with filebeat
[16:42:49 root@elk3:/etc/filebeat]#vim 11-log-to-es.yaml
filebeat.inputs:
- type: log
  paths:
  - /var/log/nginx/access.log
  # Parse the JSON in the message field and place the key/value pairs at the top level
  json.keys_under_root: true

#output.console:
#  pretty: true

output.elasticsearch:
  hosts: ["http://10.0.0.91:9200","http://10.0.0.92:9200","http://10.0.0.93:9200"]
  index: "linux-nginx-json-%{+yyyy.MM.dd}"

setup.ilm.enabled: false
setup.template.name: "linux-nginx"
setup.template.pattern: "linux-nginx*"
setup.template.overwrite: false
setup.template.settings:
  index.number_of_shards: 3
  index.number_of_replicas: 0
[16:43:54 root@elk3:/etc/filebeat]#rm -rf /var/lib/filebeat/
[16:44:18 root@elk3:/etc/filebeat]#filebeat -e -c 11-log-to-es.yaml

6. View the data in Kibana
[16:40:26 root@elk1:~]#curl 10.0.0.93/beijing.html
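Because each access-log line is now valid JSON, every consumer (filebeat's json.keys_under_root, Kibana, or a quick script) can lift the fields without any grok parsing. A quick Python check against the sample line shown above:

```python
import json

# One line from /var/log/nginx/access.log, copied from the output above.
line = ('{"@timestamp":"2024-10-30T16:40:26+08:00","host":"10.0.0.93",'
        '"clientip":"10.0.0.91","SendBytes":612,"responsetime":0.000,'
        '"upstreamtime":"-","upstreamhost":"-","http_host":"10.0.0.93",'
        '"uri":"/index.nginx-debian.html","domain":"10.0.0.93","xff":"-",'
        '"referer":"-","tcp_xff":"-","http_user_agent":"curl/7.81.0",'
        '"status":"200"}')

# json.loads gives every field directly, e.g. for bandwidth statistics
# the numeric SendBytes field is immediately usable.
event = json.loads(line)
print(event["clientip"], event["status"], event["SendBytes"])  # 10.0.0.91 200 612
```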
Program terminology
- SDK: "Software Development Kit" — the toolkit a programmer needs for a given development environment, e.g. the environments that Java, Go, C++, Python, Ruby, and PHP developers depend on.
- JDK: "Java Development Kit" — the toolkit for Java developers; the JDK is the typical SDK used by Java programmers. It is used to write "*.java" files, which are compiled with the "javac" tool.
- JRE: "Java Runtime Environment" — the Java runtime. It is lighter-weight than the JDK with a smaller package; it only guarantees that programs can run and lacks the development tools.
- JVM: "Java Virtual Machine" — executes the bytecode files "*.class" produced by compiling Java programs; the corresponding runtime program is the "java" tool.

# The JVM is further divided into several memory regions:
- Off-heap memory
- Method area
- ...
- Heap memory
  - Eden space: where new objects are allocated.
  - Survivor space: objects in Eden that survive a GC pass through the survivor space on their way to the old generation.
  - Old generation: long-lived objects.
Project: ElasticStack RBAC in practice
2. Configure encryption for the ES cluster
2.1 Generate the certificate file
[17:01:21 root@elk1:~]#/usr/share/elasticsearch/bin/elasticsearch-certutil cert -out /etc/elasticsearch/elastic-certificates.p12 -pass ""
........
Note: Generating certificates without providing a CA certificate is deprecated.
A CA certificate will become mandatory in the next major release.
Certificates written to /etc/elasticsearch/elastic-certificates.p12

This file should be properly secured as it contains the private key for
your instance.
This file is a self contained file and can be copied and used 'as is'
For each Elastic product that you wish to configure, you should copy
this '.p12' file to the relevant configuration directory
and then follow the SSL configuration instructions in the product guide.
[17:33:28 root@elk1:~]#ll /etc/elasticsearch/elastic-certificates.p12
-rw------- 1 root elasticsearch 3596 Oct 30 17:33 /etc/elasticsearch/elastic-certificates.p12

2.2 Sync the certificate file to the other nodes
[17:33:52 root@elk1:~]#scp /etc/elasticsearch/elastic-certificates.p12 10.0.0.92:/etc/elasticsearch/
[17:34:10 root@elk1:~]#scp /etc/elasticsearch/elastic-certificates.p12 10.0.0.93:/etc/elasticsearch/

2.3 Modify the ES cluster configuration file
[17:34:59 root@elk1:~]#vim /etc/elasticsearch/elasticsearch.yml
........
# Append the following at the end of the file
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12

2.4 Sync the ES configuration file to the other nodes
[17:35:25 root@elk1:~]#scp /etc/elasticsearch/elasticsearch.yml 10.0.0.92:/etc/elasticsearch/
[17:35:44 root@elk1:~]#scp /etc/elasticsearch/elasticsearch.yml 10.0.0.93:/etc/elasticsearch/

2.5 On ALL ES cluster nodes, fix the ownership of the certificate file
chown elasticsearch:elasticsearch /etc/elasticsearch/elastic-certificates.p12

2.6 On ALL ES cluster nodes, restart the ES service
systemctl restart elasticsearch.service

2.7 Generate random passwords (keep the passwords; the next step needs them!)
[17:37:53 root@elk1:~]#/usr/share/elasticsearch/bin/elasticsearch-setup-passwords auto
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
The passwords will be randomly generated and printed to the console.
Please confirm that you would like to continue [y/N]y
Changed password for user apm_system
PASSWORD apm_system = 4Am23FzpVOxMeKK4NVYd
Changed password for user kibana_system
PASSWORD kibana_system = CJlMQpD1MxNNK6m4FVI9
Changed password for user kibana
PASSWORD kibana = CJlMQpD1MxNNK6m4FVI9
Changed password for user logstash_system
PASSWORD logstash_system = nvSvRzHYsQbhQJu2pi3J
Changed password for user beats_system
PASSWORD beats_system = tGiTVdjelFHGnzo8e0Zf
Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = tSW5yxDEo1pnn3lLLJgh
Changed password for user elastic
PASSWORD elastic = OSUzxjrIEq9HRcbj4bKB

2.8 Verify the cluster is healthy (note: "elastic" is the ES cluster admin user; use the password generated on your own system)
[17:43:06 root@elk2:~]#curl -u elastic:OSUzxjrIEq9HRcbj4bKB 10.0.0.91:9200/_cat/nodes
10.0.0.92 36 83 10 0.15 0.24 0.16 cdfhilmrstw * elk2
10.0.0.93 27 82 9 0.04 0.13 0.09 cdfhilmrstw - elk3
10.0.0.91 13 96 5 0.00 0.08 0.13 cdfhilmrstw - elk1

# Configure Kibana to access the encrypted ES cluster
1. Modify the kibana configuration file
[17:45:03 root@elk1:~]#vim /etc/kibana/kibana.yml
...........
# Change this to the "kibana_system" password generated in your own environment
elasticsearch.username: "kibana_system"
elasticsearch.password: "CJlMQpD1MxNNK6m4FVI9"

2. Restart the kibana service
[17:51:28 root@elk1:~]#systemctl restart kibana
[17:51:48 root@elk1:~]#ss -tnl|grep 5601
LISTEN 0 511 0.0.0.0:5601 0.0.0.0:*

3. Open the Kibana WebUI
10.0.0.91:5601
Log in and change the password of the "elastic" user!

Changing a user's password
Summary
- Data mapping in the ELFK architecture: bandwidth statistics case *
- Multiple Logstash instances ***
- Logstash conditional (if/else) syntax **
- Logstash pipelines *****
- Custom grok matching patterns *
- Logstash JVM tuning ***
- Custom nginx access log format **
- ES cluster encryption setup *****
- Kibana access to the encrypted ES cluster and changing the elastic admin password *****
- Troubleshooting tips *****
  - 1. Log levels: INFO, WARN, FATAL
  - 2. Search for the keyword ERROR
  - 3. For Java logs, read from the bottom up: above the "at ..." stack lines, the first line without "at" is the error message
  - 4. Where to find the logs
    - Log directory: /var/log/elasticsearch/<ES cluster name>.log
    - Check the unit file: systemctl cat <service name>