
ELK Study Notes: Collecting OpenStack Node Logs with ELK

Author: Jetpropelledsnake21
Published: 2019-04-25 09:44:08

The templates below were taken from the web. Do not copy them verbatim; paste them into Notepad++ first and tidy up the formatting, paying particular attention to indentation.

Deployment Architecture

The control node acts as the log server and stores all OpenStack and related logs. Logstash runs on every node, collecting that node's logs and shipping them over the network (node/http) to Elasticsearch on the control node; Kibana serves as the web portal that presents the log data.

Log Format

To provide fast, intuitive search, we want every OpenStack log entry to carry the following attributes for retrieval and filtering:

  • Host: e.g. controller01, compute01
  • Service Name: e.g. nova-api, neutron-server
  • Module: e.g. nova.filters
  • Log Level: e.g. DEBUG, INFO, ERROR
  • Log date
  • Request ID: the Request ID of a given request

All of these attributes can be produced by Logstash: it extracts the key fields from each log line to obtain the attributes above, which are then indexed in Elasticsearch.
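As an illustration, here is a rough Python equivalent of the grok pattern used in the configurations below, showing which named fields would be pulled out of an oslo-format log line. This is a simplification of grok's semantics, and the sample line and request ID are made up:

```python
import re

# Simplified Python mirror of the oslo grok pattern used later in this post:
# timestamp, optional pid, log level, module name, then the free-form message.
OSLO_LINE = re.compile(
    r"^(?P<logdate>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)\s+"
    r"(?P<pid>\d+)?\s*"
    r"(?P<loglevel>AUDIT|CRITICAL|DEBUG|INFO|TRACE|WARNING|ERROR)\s+"
    r"\[?(?P<module>[\w.]+)\]?\s+"
    r"(?P<logmessage>.*)$"
)

# Made-up sample line in the usual oslo.log format.
line = ("2019-04-17 10:23:45.678 2914 INFO nova.filters "
        "[req-3a4b5c6d 7f2e] Filter RamFilter returned 2 host(s)")
fields = OSLO_LINE.match(line).groupdict()
# fields now holds logdate, pid, loglevel, module, logmessage.
```

In the actual pipeline this extraction is done by the grok filter itself; the regex above only mirrors its named captures to make the field list concrete.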

Logstash Configuration on the Control Node

input {
  file {
    path => ['/var/log/nova/nova-api.log']
    tags => ['nova', 'oslofmt']
    type => "nova-api"
  }
  file {
    path => ['/var/log/nova/nova-conductor.log']
    tags => ['nova-conductor', 'oslofmt']
    type => "nova"
  }
  file {
    path => ['/var/log/nova/nova-manage.log']
    tags => ['nova-manage', 'oslofmt']
    type => "nova"
  }
  file {
    path => ['/var/log/nova/nova-scheduler.log']
    tags => ['nova-scheduler', 'oslofmt']
    type => "nova"
  }
  file {
    path => ['/var/log/nova/nova-spicehtml5proxy.log']
    tags => ['nova-spice', 'oslofmt']
    type => "nova"
  }
  file {
    path => ['/var/log/keystone/keystone-all.log']
    tags => ['keystone', 'keystonefmt']
    type => "keystone"
  }
  file {
    path => ['/var/log/keystone/keystone-manage.log']
    tags => ['keystone', 'keystonefmt']
    type => "keystone"
  }
  file {
    path => ['/var/log/glance/api.log']
    tags => ['glance', 'oslofmt']
    type => "glance-api"
  }
  file {
    path => ['/var/log/glance/registry.log']
    tags => ['glance', 'oslofmt']
    type => "glance-registry"
  }
  file {
    path => ['/var/log/glance/scrubber.log']
    tags => ['glance', 'oslofmt']
    type => "glance-scrubber"
  }
  file {
    path => ['/var/log/heat/heat.log']
    tags => ['heat', 'oslofmt']
    type => "heat"
  }
  file {
    path => ['/var/log/neutron/neutron-server.log']
    tags => ['neutron', 'oslofmt']
    type => "neutron-server"
  }
  file {
    # <%= @hostname %> is an ERB/Puppet template variable; substitute the
    # actual hostname if you are not deploying this file via Puppet.
    path => ['/var/log/rabbitmq/rabbit@<%= @hostname %>.log']
    tags => ['rabbitmq', 'oslofmt']
    type => "rabbitmq"
  }
  file {
    path => ['/var/log/httpd/access_log']
    tags => ['horizon']
    type => "horizon"
  }
  file {
    path => ['/var/log/httpd/error_log']
    tags => ['horizon']
    type => "horizon"
  }
  file {
    path => ['/var/log/httpd/horizon_access_log']
    tags => ['horizon']
    type => "horizon"
  }
  file {
    path => ['/var/log/httpd/horizon_error_log']
    tags => ['horizon']
    type => "horizon"
  }
}
filter {
  if "oslofmt" in [tags] {
    multiline {
      negate => true
      pattern => "^%{TIMESTAMP_ISO8601} "
      what => "previous"
    }
    multiline {
      negate => false
      pattern => "^%{TIMESTAMP_ISO8601}%{SPACE}%{NUMBER}?%{SPACE}?TRACE"
      what => "previous"
    }
    grok {
      #  Do multiline matching as the above multiline filter may add newlines
      #  to the log messages.
      #  TODO move the LOGLEVELs into a proper grok pattern.
      match => { "message" => "(?m)^%{TIMESTAMP_ISO8601:logdate}%{SPACE}%{NUMBER:pid}?%{SPACE}?(?<loglevel>AUDIT|CRITICAL|DEBUG|INFO|TRACE|WARNING|ERROR) \[?\b%{NOTSPACE:module}\b\]?%{SPACE}?%{GREEDYDATA:logmessage}?" }
      add_field => { "received_at" => "%{@timestamp}" }
    }
  } else if "keystonefmt" in [tags] {
    grok {
      #  Do multiline matching as the above multiline filter may add newlines
      #  to the log messages.
      #  TODO move the LOGLEVELs into a proper grok pattern.
      match => { "message" => "(?m)^%{TIMESTAMP_ISO8601:logdate}%{SPACE}%{NUMBER:pid}?%{SPACE}?(?<loglevel>AUDIT|CRITICAL|DEBUG|INFO|TRACE|WARNING|ERROR) \[?\b%{NOTSPACE:module}\b\]?%{SPACE}?%{GREEDYDATA:logmessage}?" }
      add_field => { "received_at" => "%{@timestamp}" }
    }
    if [module] == "iso8601.iso8601" {
      # log message for each part of the date?  Really?
      drop {}
    }
  } else if "libvirt" in [tags] {
    grok {
       match => { "message" => "(?m)^%{TIMESTAMP_ISO8601:logdate}:%{SPACE}%{NUMBER:code}:?%{SPACE}\[?\b%{NOTSPACE:loglevel}\b\]?%{SPACE}?:?%{SPACE}\[?\b%{NOTSPACE:module}\b\]?%{SPACE}?%{GREEDYDATA:logmessage}?" }
       add_field => { "received_at" => "%{@timestamp}"}
    }
    mutate {
       uppercase => [ "loglevel" ]
    }
  } else if [type] == "syslog" {
     grok {
        match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:logmessage}" }
        add_field => [ "received_at", "%{@timestamp}" ]
     }
     syslog_pri {
        severity_labels => ["ERROR", "ERROR", "ERROR", "ERROR", "WARNING", "INFO", "INFO", "DEBUG" ]
     }
     date {
        match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
     }
     if !("_grokparsefailure" in [tags]) {
        mutate {
           replace => [ "@source_host", "%{syslog_hostname}" ]
        }
     }
     mutate {
        remove_field => [ "syslog_hostname", "syslog_timestamp" ]
        add_field => [ "loglevel", "%{syslog_severity}" ]
        add_field => [ "module", "%{syslog_program}" ]
     }
  }
}
output {
  elasticsearch {
    hosts => ["172.26.13.3:9200","172.26.13.4:9200","172.26.13.5:9200"]
    index => "log-controller-%{+YYYY-MM-dd}"
  }
}
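The two multiline filters in the config above fold continuation lines (e.g. Python tracebacks) into the preceding log event. A minimal Python sketch of that folding logic, assuming continuation lines never begin with an ISO8601 timestamp (the sample lines are made up):

```python
import re

# Lines that do NOT start with a timestamp are glued onto the previous
# event, mimicking `negate => true ... what => "previous"`.
TIMESTAMP = re.compile(r"^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}")

def fold_multiline(lines):
    """Fold non-timestamped lines into the preceding event."""
    events = []
    for line in lines:
        if TIMESTAMP.match(line) or not events:
            events.append(line)          # a new event starts here
        else:
            events[-1] += "\n" + line    # continuation of the previous event
    return events

sample = [
    "2019-04-17 10:00:00.123 1234 ERROR nova.api [req-1] Unexpected error",
    "Traceback (most recent call last):",
    '  File "api.py", line 10, in handle',
    "2019-04-17 10:00:01.456 1234 INFO nova.api [req-2] recovered",
]
events = fold_multiline(sample)
# The traceback lines end up inside the first event, not as separate events.
```

This is why the later grok patterns start with `(?m)`: by the time grok runs, a single "message" may already contain embedded newlines.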

Logstash Configuration on the Compute Nodes

input {
  file {
    path => ['/var/log/messages']
    tags => ['system_messages']
    type => "system_messages"
  }
  file {
    path => ['/var/log/secure']
    tags => ['system_secure']
    type => "system_secure"
  }
}
output {
  elasticsearch {
    hosts => ["172.26.13.3:9200","172.26.13.4:9200","172.26.13.5:9200"]
    index => "log-compute-%{+YYYY-MM-dd}"
  }
}
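Both outputs use Logstash's sprintf date reference (`%{+YYYY-MM-dd}`) in the index name, so one index is created per day. A small Python sketch of the resulting naming scheme (the helper below is illustrative, not a Logstash API):

```python
from datetime import datetime

def daily_index(prefix, when):
    # Mirrors Logstash's %{+YYYY-MM-dd} date reference in the index name:
    # the prefix plus the event's date, producing one index per day.
    return "{}-{}".format(prefix, when.strftime("%Y-%m-%d"))

name = daily_index("log-compute", datetime(2019, 4, 17))
# e.g. "log-compute-2019-04-17"
```

Note that Logstash derives the date from each event's `@timestamp` (UTC by default), not from the wall clock at indexing time.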

Logstash Configuration on the neutron-server Node

input {
  file {
    type => "neutron"
    path => "/var/log/neutron/server.log"
  }
}

output {
  elasticsearch {
    hosts => ["172.26.13.3:9200","172.26.13.4:9200","172.26.13.5:9200"]
    # The original post breaks off mid-output here; the index name below is
    # assumed, following the naming pattern of the other nodes.
    index => "log-neutron-server-%{+YYYY-MM-dd}"
  }
}

Logstash Configuration on the Network Node

input {
  file {
    type => "neutron"
    path => "/var/log/neutron/openvswitch-agent.log"
  }
  file {
    type => "neutron"
    path => "/var/log/neutron/metadata-agent.log"
  }
  file {
    type => "neutron"
    path => "/var/log/neutron/metering-agent.log"
  }
  file {
    type => "neutron"
    path => "/var/log/neutron/dhcp-agent.log"
  }
  file {
    type => "neutron"
    # "vpn" restored here; the hosting site's word filter masked it as "V**".
    path => "/var/log/neutron/vpn-agent.log"
  }
  file {
    type => "neutron"
    path => "/var/log/neutron/lbaas-agent.log"
  }
  file {
    type => "neutron"
    path => "/var/log/neutron/ha-agent.log"
  }
}

output {
  elasticsearch {
    hosts => ["172.26.13.3:9200","172.26.13.4:9200","172.26.13.5:9200"]
    # The original post breaks off mid-output here; the index name below is
    # assumed, following the naming pattern of the other nodes.
    index => "log-network-%{+YYYY-MM-dd}"
  }
}

Logstash Configuration on the nova-api Node

input {
  file {
    type => "nova"
    # Repeated `path` settings in one file block do not reliably accumulate,
    # so the files are listed as an array instead.
    path => [
      "/var/log/nova/nova-scheduler.log",
      "/var/log/nova/nova-novncproxy.log",
      "/var/log/nova/nova-consoleauth.log",
      "/var/log/nova/nova-conductor.log",
      "/var/log/nova/nova-cert.log",
      "/var/log/nova/nova-api.log"
    ]
  }
}

output {
  elasticsearch {
    hosts => ["172.26.13.3:9200","172.26.13.4:9200","172.26.13.5:9200"]
    # The original post breaks off mid-output here; the index name below is
    # assumed, following the naming pattern of the other nodes.
    index => "log-nova-api-%{+YYYY-MM-dd}"
  }
}
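As an aside, the file input's `path` setting also accepts glob patterns, so the individual nova log files could be matched with a single wildcard (a sketch; adjust it if other files under /var/log/nova should be excluded):

```
input {
  file {
    type => "nova"
    path => "/var/log/nova/nova-*.log"
  }
}
```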
Originally published 2019-04-17 on the author's personal site/blog.