The ELK Stack

nginx access logs

Sample entries from the nginx access log:

172.30.0.8 - - [26/Jun/2020:14:39:30 +0800] "GET //app/app/access_token?app_id=ce571941c2b7e4fb&rand=IRWDg_qd8LQk7ovExvLR8h8dBntkwYEW&signature=cf28da70ed09fff4d6d4e72ffe5baa0a56df2695 HTTP/1.1" 500 87 "-" "Yii2-Curl-Agent" "-"
172.30.0.5 - - [26/Jun/2020:14:39:30 +0800] "GET /devops/app/main?app_id=ce571941c2b7e4fb HTTP/1.0" 302 0 "http://local.fn.wiiqq.com/fn//devops/app/list?auth_code=zgqFYeJ6iuN2zoocxal7Cr6oPCQNz__Z&state=wii" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36" "172.30.0.1"
These entries are produced by the following log_format directive in the nginx configuration:
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
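As a sketch of the field extraction Logstash will perform later, the main format above can be parsed with a regular expression whose named groups mirror the nginx variables (the regex and function name here are illustrative, not part of ELK):

```python
import re

# Regex mirroring the nginx "main" log_format; group names match the
# nginx variables used in the log_format directive above.
LOG_PATTERN = re.compile(
    r'(?P<remote_addr>\S+) - (?P<remote_user>\S+) '
    r'\[(?P<time_local>[^\]]+)\] '
    r'"(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<body_bytes_sent>\d+) '
    r'"(?P<http_referer>[^"]*)" '
    r'"(?P<http_user_agent>[^"]*)" '
    r'"(?P<http_x_forwarded_for>[^"]*)"'
)

def parse_access_line(line: str) -> dict:
    """Split one access-log line into its log_format fields."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else {}

line = ('172.30.0.8 - - [26/Jun/2020:14:39:30 +0800] '
        '"GET //app/app/access_token?app_id=ce571941c2b7e4fb HTTP/1.1" '
        '500 87 "-" "Yii2-Curl-Agent" "-"')
fields = parse_access_line(line)
print(fields["remote_addr"], fields["status"], fields["body_bytes_sent"])
```

Grok patterns in Logstash do essentially the same thing: each pattern is a named regular expression applied to the raw message.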
Logstash

Logstash reads entries from the nginx access log, filters them and extracts fields according to a grok pattern, and indexes the resulting events in Elasticsearch.

Grok patterns

The Logstash distribution already includes a set of common grok patterns; they can be browsed on GitHub:
https://github.com/logstash-plugins/logstash-patterns-core/tree/master/patterns

A configuration using the grok pattern for Apache common-format logs (which the nginx main format above extends) looks like this:

input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "beginning"
  }
}

filter {
  grok {
    match => { "message" => "%{COMMONAPACHELOG}" }
  }
  date {
    match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
  }
  mutate {
    convert => ["response", "integer"]
    convert => ["bytes", "integer"]
  }
}

output {
  elasticsearch {
    hosts => ["localhost"]
  }
}
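The effect of the date and mutate filters can be mimicked in Python to show what they do to an event (a rough sketch; Logstash performs this internally):

```python
from datetime import datetime

def apply_filters(event: dict) -> dict:
    """Rough Python equivalent of the date and mutate filters above."""
    # date filter: parse "dd/MMM/yyyy:HH:mm:ss Z" into the event timestamp
    event["@timestamp"] = datetime.strptime(
        event["timestamp"], "%d/%b/%Y:%H:%M:%S %z"
    )
    # mutate filter: convert the string fields grok extracted to integers
    for field in ("response", "bytes"):
        event[field] = int(event[field])
    return event

event = apply_filters(
    {"timestamp": "26/Jun/2020:14:39:30 +0800", "response": "500", "bytes": "87"}
)
print(event)
```

After these steps the event carries a real timestamp and numeric fields, which is what makes date-based and numeric aggregations possible in Kibana later.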
This configuration matches each message against the grok pattern, assigns the parsed timestamp to the event, and converts selected fields to the appropriate data types. Start Logstash with the configuration file:

bin/logstash -f logstash.conf

Once logstash is running, its startup output appears on the console.

Kibana visualization

Install Kibana, start it, and open http://localhost:5601 in a browser:

bin/kibana
Create visualizations in Kibana from the indexed log data, for example:

- Requests over time: the Count metric on the Y-axis over a Date Histogram bucket on the X-axis.
- Average response size: the Average metric with Field set to bytes, again over a Date Histogram.
- Requests per client: the Count metric over a Date Histogram, with a Split Series sub-aggregation on the clientip field.
- A Gauge visualization of the Count metric.

The visualizations can then be combined into a dashboard.
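Under the hood, the "Count over a Date Histogram, split by clientip" visualization corresponds to an Elasticsearch date_histogram aggregation with a terms sub-aggregation. A sketch of the kind of request body Kibana builds (the field names and interval are assumptions about the index mapping and Elasticsearch version):

```python
import json

query = {
    "size": 0,  # aggregation results only, no individual hits
    "aggs": {
        "requests_over_time": {
            "date_histogram": {
                "field": "@timestamp",
                "calendar_interval": "hour",  # named "interval" on ES < 7.2
            },
            # Split Series in Kibana: one bucket per client IP
            "aggs": {"per_client": {"terms": {"field": "clientip"}}},
        }
    },
}
print(json.dumps(query, indent=2))
```

Sending this body to the index's _search endpoint returns hourly request counts broken down by client, the same data the Kibana chart renders.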