
ELK Installation

Author: dogfei
Published 2020-07-31 11:27:29 · 6070 views
From the column: devops探索

ELK Installation

All components here are installed from RPM packages.

  • OS: CentOS 7.4
  • JDK: 1.8.0_171

```shell
# 1. Find the JDK package
yum list | grep jdk
# 2. Install OpenJDK 1.8
yum -y install java-1.8.0-openjdk.x86_64
# 3. Verify
java -version
openjdk version "1.8.0_171"
OpenJDK Runtime Environment (build 1.8.0_171-b10)
OpenJDK 64-Bit Server VM (build 25.171-b10, mixed mode)
```

  • Kibana: 6.2.4

```shell
# 1. Import the Elastic GPG key
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
# 2. Create the repo file
vim /etc/yum.repos.d/kibana.repo
[kibana-6.x]
name=Kibana repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
# 3. Install
yum -y install kibana
# 4. Start and enable the service
systemctl start kibana.service
systemctl enable kibana.service
```

  • Elasticsearch: 6.2.4

```shell
# 1. Import the Elastic GPG key
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
# 2. Create the repo file
vim /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
# 3. Install
yum install elasticsearch
# 4. Start and enable the service
systemctl daemon-reload
systemctl enable elasticsearch
systemctl start elasticsearch
```

  • Logstash: 6.2.4

```shell
# 1. Import the Elastic GPG key
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
# 2. Create the repo file
vim /etc/yum.repos.d/logstash.repo
[logstash-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
# 3. Install
yum install logstash
```

Installing Nginx

```shell
yum -y install epel-release
yum -y install nginx httpd-tools
```

Nginx configuration file

```shell
cat /etc/nginx/conf.d/kibana.conf
server {
    listen 80;
    server_name 192.168.159.128;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://192.168.159.128:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
```

Set a password

```shell
htpasswd -c /etc/nginx/htpasswd.users admin
# After setting the password, allow Nginx to make network connections under SELinux:
setsebool -P httpd_can_network_connect 1
```

Modifying the configuration files

1. Kibana

```shell
cat /etc/kibana/kibana.yml | egrep -v "^$|^#"
server.host: "192.168.159.128"
elasticsearch.url: "http://192.168.159.128:9200"
```

2. Elasticsearch

```shell
cat /etc/elasticsearch/elasticsearch.yml | egrep -v "^$|^#"
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.159.128
#cluster.name: my-application   # cluster name; nodes discover and join the cluster by this name
#node.name: node-1              # name of each individual node
```

Test:

```shell
curl 'http://192.168.159.128:9200/?pretty'
{
  "name" : "nQHnUOC",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "yQ-XQMR6STuOd-MI4gWflw",
  "version" : {
    "number" : "6.2.4",
    "build_hash" : "ccec39f",
    "build_date" : "2018-04-12T20:37:28.497551Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
```

3. Logstash: configure the input, filter, and output separately

```shell
cat /etc/logstash/conf.d/02-beats-input.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

cat /etc/logstash/conf.d/10-syslog-filter.conf
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

cat /etc/logstash/conf.d/30-elasticsearch-output.conf
output {
  elasticsearch {
    hosts => ["192.168.159.128:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
```
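As a quick offline sanity check of what the syslog grok filter extracts, here is a rough Python equivalent. The regex is an illustrative approximation (grok's SYSLOGTIMESTAMP, SYSLOGHOST, and DATA library patterns are more permissive than this), and the sample log line is invented:

```python
import re

# Approximate Python translation of the grok pattern:
# %{SYSLOGTIMESTAMP} %{SYSLOGHOST} %{DATA}(?:\[%{POSINT}\])?: %{GREEDYDATA}
SYSLOG_RE = re.compile(
    r'(?P<syslog_timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) '
    r'(?P<syslog_hostname>\S+) '
    r'(?P<syslog_program>[\w./-]+)(?:\[(?P<syslog_pid>\d+)\])?: '
    r'(?P<syslog_message>.*)'
)

line = "May 22 11:29:31 web01 sshd[1234]: Accepted publickey for root"
m = SYSLOG_RE.match(line)
print(m.group('syslog_program'), m.group('syslog_pid'))  # sshd 1234
```

A line that fails to match would, in Logstash, be tagged with _grokparsefailure instead of producing these fields.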

Configuring the Kibana dashboard and installing plugins

```shell
# 1. Install the Elasticsearch ingest plugins
cd /usr/share/elasticsearch
bin/elasticsearch-plugin install ingest-user-agent
bin/elasticsearch-plugin install ingest-geoip

# 2. Install Filebeat
rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
cat /etc/yum.repos.d/elastic.repo
[elastic-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
yum -y install filebeat

# 3. Configure Filebeat
vim /etc/filebeat/filebeat.yml
#=========================== Filebeat prospectors =============================
filebeat.prospectors:
- type: log
  paths:
    - /var/log/*.log
#============================== Kibana =====================================
setup.kibana:
  host: "192.168.159.128:5601"
#================================ Outputs =====================================
output.elasticsearch:
  hosts: ["192.168.159.128:9200"]

# 4. Load the dashboards and restart
filebeat setup
systemctl restart filebeat
```

Test by visiting http://192.168.159.128 in a browser.

Note:

It is still recommended to install from the tar packages instead. The detailed steps are not repeated here; see, for example:

https://blog.csdn.net/magerguo/article/details/79637646

Note that Elasticsearch must be started as an ordinary (non-root) user, or it will fail to start.

1. Create a regular user

```shell
groupadd elk
useradd -g elk elk
```

2. Extract the tar package and move it to /usr/local

```shell
tar -zxf elasticsearch-6.2.4.tar.gz
mv elasticsearch-6.2.4 /usr/local/elasticsearch
chown -R elk:elk /usr/local/elasticsearch
```

3. Edit /etc/sysctl.conf (this step is critical)

```shell
vim /etc/sysctl.conf
# Raise the maximum number of memory map areas a process may use
vm.max_map_count = 655360
# Apply the change
sysctl -p
```

4. Configure resource limits

```shell
vim /etc/security/limits.conf
* soft nofile 65536     # soft limit on open files for all users
* hard nofile 131072    # hard limit on open files for all users
* soft nproc 65536      # soft limit on processes for all users
* hard nproc 131072     # hard limit on processes for all users
```

5. Set the limits for the elk user

```shell
vim /etc/security/limits.d/20-nproc.conf
elk soft nproc 65536
```

6. Start Elasticsearch

```shell
# Run as the elk user, not root; -d starts Elasticsearch as a daemon
./bin/elasticsearch -d
```

Logstash and Kibana can be installed from tar packages in the same way, and they do not need to be started as a non-root user.

Now let's look at how to write a Logstash pipeline:

```shell
input {
  file {
    path => ["/home/wwwlogs/novel3_https.log", "/home/wwwlogs/pay.log"]
    start_position => "beginning"
    type => "nginx_access"
  }
}
filter {
  grok {
    match => { "message" => "%{IP:remote_ip} \- \- \[%{HTTPDATE:timestamp}\] \"(%{WORD:verb}) %{NOTSPACE:request} (HTTP/%{NUMBER:httpversion})\" %{NUMBER:response} %{NUMBER:bytes} %{QS:http_referer} %{QS:http_user_agent}" }
    remove_field => ["_score","_type","auth","bytes","fromhost","httpversion","message","path","type"]
  }
}
output {
  if "_grokparsefailure" not in [tags] {
    elasticsearch {
      hosts => ["172.17.24.149:9200"]
      index => "cl-2-web-access-%{+YYYY-MM}"
    }
    stdout { codec => rubydebug }
  }
}
```
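To see what this access-log grok pattern captures without running Logstash, it can be approximated in Python. This is an illustrative sketch only: the regex simplifies grok's IP, HTTPDATE, and QS library patterns, and the sample log line is invented:

```python
import re

# Approximate Python translation of the access-log grok pattern above
ACCESS_RE = re.compile(
    r'(?P<remote_ip>\d+\.\d+\.\d+\.\d+) - - \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\w+) (?P<request>\S+) HTTP/(?P<httpversion>[\d.]+)" '
    r'(?P<response>\d+) (?P<bytes>\d+) '
    r'"(?P<http_referer>[^"]*)" "(?P<http_user_agent>[^"]*)"'
)

line = ('140.205.9.55 - - [22/May/2018:11:29:31 +0800] '
        '"GET /static/a.jpg HTTP/1.1" 200 512 '
        '"https://xs2.bfnet.cn/" "Mozilla/5.0"')
m = ACCESS_RE.match(line)
print(m.group('verb'), m.group('response'))  # GET 200
```

The captured names map one-to-one onto the grok field names, which makes it easier to decide what to drop in remove_field.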

This handles Nginx access logs. Nginx error logs cannot be matched with Logstash's built-in patterns, so a custom regular expression is needed. The rule is as follows:

```
"(?<datetime>\d\d\d\d/\d\d/\d\d \d\d:\d\d:\d\d) \[(?<errtype>\w+)\] \S+: \*\d+ (?<errmsg>[^,]+), (?<errinfo>.*)$"
```

Test it with the Grok Debugger:

Error log:

```
2018/05/22 11:29:31 [error] 26385#0: *879988 open() "/home/wwwroot/bug83/public/static/book_img/20180514/53f2be51-7da7-445a-a328-3abfda79e951.jpg" failed (2: No such file or directory), client: 140.205.9.55, server: xs2.bfnet.cn, request: "GET /static/book_img/20180514/53f2be51-7da7-445a-a328-3abfda79e951.jpg HTTP/1.1", host: "xs2.bfnet.cn", referrer: "https://xs2.bfnet.cn/xssangeng/?"
```

Pattern:

```
(?<datetime>\d\d\d\d/\d\d/\d\d \d\d:\d\d:\d\d) \[(?<errtype>\w+)\] \S+: \*\d+ (?<errmsg>[^,]+), (?<errinfo>.*)$
```

Result:

```json
{
  "datetime": [["2018/05/22 11:29:31"]],
  "errtype": [["error"]],
  "errmsg": [["open() \"/home/wwwroot/bug83/public/static/book_img/20180514/53f2be51-7da7-445a-a328-3abfda79e951.jpg\" failed (2: No such file or directory)"]],
  "errinfo": [["client: 140.205.9.55, server: xs2.bfnet.cn, request: \"GET /static/book_img/20180514/53f2be51-7da7-445a-a328-3abfda79e951.jpg HTTP/1.1\", host: \"xs2.bfnet.cn\", referrer: \"https://xs2.bfnet.cn/xssangeng/?\""]]
}
```
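The same check can be run offline in Python. The custom pattern uses Oniguruma-style (?&lt;name&gt;...) named groups; Python's re module spells this (?P&lt;name&gt;...), but the pattern body is otherwise identical. The shortened sample line below is illustrative:

```python
import re

# The custom nginx error-log pattern, with (?<name>) rewritten as (?P<name>)
ERROR_RE = re.compile(
    r'(?P<datetime>\d\d\d\d/\d\d/\d\d \d\d:\d\d:\d\d) '
    r'\[(?P<errtype>\w+)\] \S+: \*\d+ '
    r'(?P<errmsg>[^,]+), (?P<errinfo>.*)$'
)

line = ('2018/05/22 11:29:31 [error] 26385#0: *879988 '
        'open() "/x.jpg" failed (2: No such file or directory), '
        'client: 140.205.9.55, server: xs2.bfnet.cn')
m = ERROR_RE.match(line)
print(m.group('errtype'))  # error
print(m.group('errmsg'))   # open() "/x.jpg" failed (2: No such file or directory)
```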

What remains is to put the access-log and error-log handling together in a single configuration file.

Finally, old Elasticsearch indices need to be cleaned up regularly; otherwise they will consume more and more disk space.
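With the monthly index pattern used above (cl-2-web-access-%{+YYYY-MM}), one simple cleanup policy is to delete indices whose month suffix falls outside a retention window. A minimal sketch with a hypothetical stale_indices helper; in practice you would list indices via GET /_cat/indices and DELETE each stale one (for example with curl in a cron job, or with the Elasticsearch Curator tool):

```python
from datetime import datetime

def stale_indices(names, now, keep_months=3):
    """Return index names ending in a 'YYYY-MM' suffix older than keep_months."""
    stale = []
    for name in names:
        try:
            # "cl-2-web-access-2018-05" -> ("cl-2-web-access", "2018", "05")
            _, year, month = name.rsplit('-', 2)
            d = datetime(int(year), int(month), 1)
        except ValueError:
            continue  # not a dated index; leave it alone
        age_months = (now.year - d.year) * 12 + (now.month - d.month)
        if age_months > keep_months:
            stale.append(name)
    return stale

now = datetime(2018, 9, 1)
print(stale_indices(['cl-2-web-access-2018-05',
                     'cl-2-web-access-2018-08'], now))
# ['cl-2-web-access-2018-05']
```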

References:

https://blog.csdn.net/qq_23598037/article/details/79563923

http://blog.51cto.com/nosmoking/1852115

https://blog.csdn.net/lzw_2006/article/details/51280058

Shared from the author's personal blog; originally published 2018-05-14.
