ElasticSearch + Logstash + Kibana Log Collection

This article is excerpted from 《Netkiller Monitoring 手札》.

One-click installation of ElasticSearch + Logstash + Kibana

Configure logstash to import local log files into elasticsearch

input {
  file {
    type => "syslog"
    path => [ "/var/log/maillog", "/var/log/messages", "/var/log/secure" ]
    start_position => "beginning"
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch { 
    hosts => ["127.0.0.1:9200"] 
  }
}		
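Save this as a file under the logstash configuration directory, for example /etc/logstash/conf.d/file.conf (the file name here is only an illustration), then check the syntax and restart logstash. The binary path below assumes a package (RPM/DEB) installation:

# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/file.conf --config.test_and_exit
# systemctl restart logstash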

19.3. Receiving logs over TCP/UDP and writing to elasticsearch

input {
  file {
    type => "syslog"
    path => [ "/var/log/auth.log", "/var/log/messages", "/var/log/syslog" ]
  }
  tcp {
    port => "5145"
    type => "syslog-network"
  }
  udp {
    port => "5145"
    type => "syslog-network"
  }
}
output {
  elasticsearch { 
    hosts => ["127.0.0.1:9200"] 
  }
}		
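With logstash restarted, the TCP listener can be exercised by hand, for example with nc (the message text is arbitrary):

# echo "test message from nc" | nc 127.0.0.1 5145

To forward real syslog traffic from other hosts, point rsyslog at this port with a rule such as *.* @@logstash-host:5145 (two @ for TCP, a single @ for UDP); logstash-host stands for the address of the machine running this configuration.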

19.4. Configuring a Broker (Redis)

19.4.1. indexer

/etc/logstash/conf.d/indexer.conf

input {
  redis {
    host => "127.0.0.1"
    port => "6379" 
    key => "logstash:demo"
    data_type => "list"
    codec  => "json"
    type => "logstash-redis-demo"
    tags => ["logstashdemo"]
  }
}

output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
  }
}	

Testing

# redis-cli
127.0.0.1:6379> RPUSH logstash:demo "{\"time\": \"2012-01-01T10:20:00\", \"message\": \"logstash demo message\"}"
(integer) 1
127.0.0.1:6379> exit			

If it runs successfully, the logstash log will look like the following:

# cat /var/log/logstash/logstash-plain.log
[2017-03-22T15:54:36,491][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://127.0.0.1:9200/]}}
[2017-03-22T15:54:36,496][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://127.0.0.1:9200/, :path=>"/"}
[2017-03-22T15:54:36,600][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x20dae6aa URL:http://127.0.0.1:9200/>}
[2017-03-22T15:54:36,601][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-03-22T15:54:36,686][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-03-22T15:54:36,693][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
[2017-03-22T15:54:36,780][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x2f9efc89 URL://127.0.0.1>]}
[2017-03-22T15:54:36,787][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1000}
[2017-03-22T15:54:36,792][INFO ][logstash.inputs.redis    ] Registering Redis {:identity=>"redis://@127.0.0.1:6379/0 list:logstash:demo"}
[2017-03-22T15:54:36,793][INFO ][logstash.pipeline        ] Pipeline main started
[2017-03-22T15:54:36,838][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2017-03-22T15:55:10,018][WARN ][logstash.runner          ] SIGTERM received. Shutting down the agent.
[2017-03-22T15:55:10,024][WARN ][logstash.agent           ] stopping pipeline {:id=>"main"}			
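After the indexer consumes the entry from the Redis list, the event should be searchable in elasticsearch. A quick check against the default logstash-* index pattern used by the elasticsearch output:

# curl 'http://localhost:9200/logstash-*/_search?q=message:demo&pretty'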

19.4.2. shipper

input {
  file {
    path => [ "/var/log/nginx/access.log" ]
    start_position => "beginning"
  }
}

filter {
  grok {
    match => { "message" => "%{NGINXACCESS}" }
    add_field => { "type" => "access" }
  }
  date {
    match => [ "timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]
  }
  geoip {
    source => "clientip"
  }
}

output {
  redis {
    host => "127.0.0.1"
    port => 6379
    data_type => "list"
    key => "logstash:demo"
  }
}			
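Note that %{NGINXACCESS} is not one of the grok patterns shipped with logstash; it has to be defined in a patterns file and loaded via the grok filter's patterns_dir option. A commonly used definition for nginx's default combined log format, saved for example as /etc/logstash/patterns/nginx (the path is an assumption), looks roughly like this; the clientip and timestamp fields are what the geoip and date filters above rely on:

NGINXACCESS %{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent}

The grok filter above would then also carry patterns_dir => ["/etc/logstash/patterns"] so that the custom pattern is found.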

19.5. Kafka

input {

  kafka {
   zk_connect => "kafka:2181"
   group_id => "logstash"
   topic_id => "apache_logs"
   consumer_threads => 16
  }
}		
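The zk_connect and topic_id options belong to older releases of the kafka input plugin; the plugin shipped with Logstash 5.x talks to the Kafka brokers directly and uses bootstrap_servers and topics instead. A roughly equivalent configuration (the broker address kafka:9092 is an assumption) would be:

input {
  kafka {
    bootstrap_servers => "kafka:9092"    # Kafka broker list, not ZooKeeper
    group_id => "logstash"
    topics => ["apache_logs"]
    consumer_threads => 16
  }
}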

19.8. FAQ

19.8.1. Viewing the Kibana database

# curl 'http://localhost:9200/_search?pretty'
{
  "took" : 1,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : ".kibana",
        "_type" : "config",
        "_id" : "5.2.2",
        "_score" : 1.0,
        "_source" : {
          "buildNum" : 14723
        }
      }
    ]
  }
}			
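To list all indices (including the logstash-* indices created by the examples above) instead of searching across them, use the _cat API:

# curl 'http://localhost:9200/_cat/indices?v'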

19.8.2. logstash cannot write to elasticsearch

Do not omit port 9200 in the elasticsearch output configuration; otherwise logstash will be unable to connect to elasticsearch.

  elasticsearch {
    hosts => ["127.0.0.1:9200"]
  }			

Originally published on the WeChat public account Netkiller (netkiller-ebook).

Original publication date: 2017-03-23
