
Using telegraf to collect metrics and feed them into Prometheus

用户6792968
Published 2022-06-27 15:08:17
Column: fred 随笔

Prometheus has many official exporters, but each service needs its own exporter, which becomes painful to manage once you have many projects. So I use telegraf, from the InfluxDB ecosystem, as the client-side metrics collector instead.

Prometheus official exporters: https://prometheus.io/download/
telegraf official download: https://github.com/influxdata/telegraf

1. Install the telegraf client

# Download the binary release (the unit file below expects the binary
# tarball's usr/bin layout; the GitHub archive/ URL only contains source code)
wget https://ghproxy.com/https://github.com/influxdata/telegraf/releases/download/v1.22.4/telegraf-1.22.4_linux_amd64.tar.gz
tar xf telegraf-1.22.4_linux_amd64.tar.gz -C /opt
mv /opt/telegraf-1.22.4 /opt/telegraf
# Register telegraf with systemd
cat << 'eof' > /etc/systemd/system/telegraf.service
[Unit]
Description="telegraf"
After=network.target

[Service]
Type=simple

ExecStart=/opt/telegraf/usr/bin/telegraf --config /opt/telegraf/etc/telegraf/telegraf.conf --config-directory /opt/telegraf/etc/telegraf/telegraf.d/
WorkingDirectory=/opt/telegraf

SuccessExitStatus=0
LimitNOFILE=65536
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=telegraf
KillMode=process
KillSignal=SIGQUIT
TimeoutStopSec=5
Restart=always


[Install]
WantedBy=multi-user.target
eof
systemctl daemon-reload

2. Configure the telegraf collector config files

# Create the drop-in config directory
mkdir /opt/telegraf/etc/telegraf/telegraf.d
cd /opt/telegraf/etc/telegraf/telegraf.d
  • ActiveMQ monitoring
cat << 'eof' > inputs.activemq.conf.disable
[[inputs.activemq]]
  ## ActiveMQ WebConsole URL
  url = "http://127.0.0.1:8161"

  username = "admin"
  password = "admin"

  ## Required ActiveMQ webadmin root path
  # webadmin = "admin"

[inputs.activemq.tags]
  # Routing key; do not change this option
  _router_key = "activemq"
  # Service group, usually the owning project
  group = "naoms"
  # Service name; when a project has several instances,
  # this identifies which instance the metrics belong to
  service = "external"

[[outputs.prometheus_client]]
  listen = ":19510"
  collectors_exclude = ["gocollector", "process"]
[outputs.prometheus_client.tagpass]
  _router_key = ["activemq"]
eof
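Every drop-in in this section follows the same routing pattern as the ActiveMQ file above: the input stamps its metrics with a `_router_key` tag, and the `tagpass` filter on the paired `prometheus_client` output lets only metrics carrying that tag through, so each service ends up on its own scrape port. As a generic sketch (the plugin name and port below are placeholders, not part of the original configs):

```toml
[[inputs.example]]               # any input plugin
[inputs.example.tags]
  _router_key = "example"        # stamp every metric from this input

[[outputs.prometheus_client]]
  listen = ":19999"              # dedicated scrape port for this service
[outputs.prometheus_client.tagpass]
  _router_key = ["example"]      # only metrics stamped "example" pass through
```

Without the tagpass filter, every prometheus_client output would expose every input's metrics, duplicating them across all ports.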
  • Apache monitoring
cat << 'eof' > inputs.apache.conf.disable
# # Read Apache status information (mod_status)
# [[inputs.apache]]
#   ## An array of URLs to gather from, must be directed at the machine
#   ## readable version of the mod_status page including the auto query string.
#   ## Default is "http://localhost/server-status?auto".
#   urls = ["http://localhost/server-status?auto"]
#
#   ## Credentials for basic HTTP authentication.
#   # username = "myuser"
#   # password = "mypassword"
#
#   ## Maximum time to receive response.
#   # response_timeout = "5s"
#
#   ## Optional TLS Config
#   # tls_ca = "/etc/telegraf/ca.pem"
#   # tls_cert = "/etc/telegraf/cert.pem"
#   # tls_key = "/etc/telegraf/key.pem"
#   ## Use TLS but skip chain & host verification
#   # insecure_skip_verify = false
[inputs.apache.tags]
  _router_key = "apache"

[[outputs.prometheus_client]]
  listen = ":19350"
  collectors_exclude = ["gocollector", "process"]
[outputs.prometheus_client.tagpass]
  _router_key = ["apache"]
eof
  • Elasticsearch monitoring
cat << 'eof' > inputs.elasticsearch.conf.disable
# # Read stats from one or more Elasticsearch servers or clusters
# [[inputs.elasticsearch]]
#   ## specify a list of one or more Elasticsearch servers
#   # you can add username and password to your url to use basic authentication:
#   # servers = ["http://user:pass@localhost:9200"]
#   servers = ["http://localhost:9200"]
#
#   ## Timeout for HTTP requests to the elastic search server(s)
#   http_timeout = "5s"
#
#   ## When local is true (the default), the node will read only its own stats.
#   ## Set local to false when you want to read the node stats from all nodes
#   ## of the cluster.
#   local = true
#
#   ## Set cluster_health to true when you want to also obtain cluster health stats
#   cluster_health = false
#
#   ## Adjust cluster_health_level when you want to also obtain detailed health stats
#   ## The options are
#   ##  - indices (default)
#   ##  - cluster
#   # cluster_health_level = "indices"
#
#   ## Set cluster_stats to true when you want to also obtain cluster stats.
#   cluster_stats = false
#
#   ## Only gather cluster_stats from the master node. To work this require local = true
#   cluster_stats_only_from_master = true
#
#   ## Indices to collect; can be one or more indices names or _all
#   ## Use of wildcards is allowed. Use a wildcard at the end to retrieve index names that end with a changing value, like a date.
#   indices_include = ["_all"]
#
#   ## One of "shards", "cluster", "indices"
#   indices_level = "shards"
#
#   ## node_stats is a list of sub-stats that you want to have gathered. Valid options
#   ## are "indices", "os", "process", "jvm", "thread_pool", "fs", "transport", "http",
#   ## "breaker". Per default, all stats are gathered.
#   # node_stats = ["jvm", "http"]
#
#   ## HTTP Basic Authentication username and password.
#   # username = ""
#   # password = ""
#
#   ## Optional TLS Config
#   # tls_ca = "/etc/telegraf/ca.pem"
#   # tls_cert = "/etc/telegraf/cert.pem"
#   # tls_key = "/etc/telegraf/key.pem"
#   ## Use TLS but skip chain & host verification
#   # insecure_skip_verify = false

[inputs.elasticsearch.tags]
  _router_key = "elasticsearch"
  # Service group, usually the owning project
  group = "naoms"
  # Service name; when a project has several instances,
  # this identifies which instance the metrics belong to
  service = "external"


[[outputs.prometheus_client]]
  listen = ":19440"
  collectors_exclude = ["gocollector", "process"]
[outputs.prometheus_client.tagpass]
  _router_key = ["elasticsearch"]
eof
  • Kafka monitoring (via Burrow)
cat << 'eof' > inputs.kafka.conf.disable
# # Collect Kafka topics and consumers status from Burrow HTTP API.
# [[inputs.burrow]]
#   ## Burrow API endpoints in format "schema://host:port".
#   ## Default is "http://localhost:8000".
#   servers = ["http://localhost:8000"]
#
#   ## Override Burrow API prefix.
#   ## Useful when Burrow is behind reverse-proxy.
#   # api_prefix = "/v3/kafka"
#
#   ## Maximum time to receive response.
#   # response_timeout = "5s"
#
#   ## Limit per-server concurrent connections.
#   ## Useful in case of large number of topics or consumer groups.
#   # concurrent_connections = 20
#
#   ## Filter clusters, default is no filtering.
#   ## Values can be specified as glob patterns.
#   # clusters_include = []
#   # clusters_exclude = []
#
#   ## Filter consumer groups, default is no filtering.
#   ## Values can be specified as glob patterns.
#   # groups_include = []
#   # groups_exclude = []
#
#   ## Filter topics, default is no filtering.
#   ## Values can be specified as glob patterns.
#   # topics_include = []
#   # topics_exclude = []
#
#   ## Credentials for basic HTTP authentication.
#   # username = ""
#   # password = ""
#
#   ## Optional SSL config
#   # ssl_ca = "/etc/telegraf/ca.pem"
#   # ssl_cert = "/etc/telegraf/cert.pem"
#   # ssl_key = "/etc/telegraf/key.pem"
#   # insecure_skip_verify = false

[inputs.burrow.tags]
  _router_key = "kafka"

[[outputs.prometheus_client]]
  listen = ":19530"
  collectors_exclude = ["gocollector", "process"]
[outputs.prometheus_client.tagpass]
  _router_key = ["kafka"]
eof
  • MongoDB monitoring
cat << 'eof' > inputs.mongodb.conf.disable
# # Read metrics from one or many MongoDB servers
# [[inputs.mongodb]]
#   ## An array of URLs of the form:
#   ##   "mongodb://" [user ":" pass "@"] host [ ":" port]
#   ## For example:
#   ##   mongodb://user:auth_key@10.10.3.30:27017,
#   ##   mongodb://10.10.3.33:18832,
#   servers = ["mongodb://127.0.0.1:27017"]
#
#   ## When true, collect cluster status
#   ## Note that the query that counts jumbo chunks triggers a COLLSCAN, which
#   ## may have an impact on performance.
#   # gather_cluster_status = true
#
#   ## When true, collect per database stats
#   # gather_perdb_stats = false
#
#   ## When true, collect per collection stats
#   # gather_col_stats = false
#
#   ## When true, collect usage statistics for each collection
#   ## (insert, update, queries, remove, getmore, commands etc...).
#   # gather_top_stat = false
#
#   ## List of db where collections stats are collected
#   ## If empty, all db are concerned
#   # col_stats_dbs = ["local"]
#
#   ## Optional TLS Config
#   # tls_ca = "/etc/telegraf/ca.pem"
#   # tls_cert = "/etc/telegraf/cert.pem"
#   # tls_key = "/etc/telegraf/key.pem"
#   ## Use TLS but skip chain & host verification
#   # insecure_skip_verify = false

[inputs.mongodb.tags]
  _router_key = "mongodb"

[[outputs.prometheus_client]]
  listen = ":19430"
  collectors_exclude = ["gocollector", "process"]
[outputs.prometheus_client.tagpass]
  _router_key = ["mongodb"]
eof
  • MySQL monitoring
cat << 'eof' > inputs.mysql.conf
# # Read metrics from one or many mysql servers
[[inputs.mysql]]
#   ## specify servers via a url matching:
#   ##  [username[:password]@][protocol[(address)]]/[?tls=[true|false|skip-verify|custom]]
#   ##  see https://github.com/go-sql-driver/mysql#dsn-data-source-name
#   ##  e.g.
#   ##    servers = ["user:passwd@tcp(127.0.0.1:3306)/?tls=false"]
#   ##    servers = ["user@tcp(127.0.0.1:3306)/?tls=false"]
#   #
#   ## If no servers are specified, then localhost is used as the host.
#   servers = ["tcp(127.0.0.1:3306)/"]
#
    servers = ["root:password@tcp(172.21.16.3:3306)/?tls=false"]
#   ## Selects the metric output format.
#   ##
#   ## This option exists to maintain backwards compatibility, if you have
#   ## existing metrics do not set or change this value until you are ready to
#   ## migrate to the new format.
#   ##
#   ## If you do not have existing metrics from this plugin set to the latest
#   ## version.
#   ##
#   ## Telegraf >=1.6: metric_version = 2
#   ##           <1.6: metric_version = 1 (or unset)
#   metric_version = 2
#
#   ## if the list is empty, then metrics are gathered from all database tables
#   # table_schema_databases = []
#
#   ## gather metrics from INFORMATION_SCHEMA.TABLES for databases provided above list
#   # gather_table_schema = false
#
#   ## gather thread state counts from INFORMATION_SCHEMA.PROCESSLIST
#   # gather_process_list = false
#
#   ## gather user statistics from INFORMATION_SCHEMA.USER_STATISTICS
#   # gather_user_statistics = false
#
#   ## gather auto_increment columns and max values from information schema
#   # gather_info_schema_auto_inc = false
#
#   ## gather metrics from INFORMATION_SCHEMA.INNODB_METRICS
#   # gather_innodb_metrics = false
#
#   ## gather metrics from SHOW SLAVE STATUS command output
#   # gather_slave_status = false
#
#   ## gather metrics from all channels from SHOW SLAVE STATUS command output
#   # gather_all_slave_channels = false
#
#   ## use MariaDB dialect for all channels SHOW SLAVE STATUS
#   # mariadb_dialect = false
#
#   ## gather metrics from SHOW BINARY LOGS command output
#   # gather_binary_logs = false
#
#   ## gather metrics from PERFORMANCE_SCHEMA.GLOBAL_VARIABLES
#   # gather_global_variables = true
#
#   ## gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMARY_BY_TABLE
#   # gather_table_io_waits = false
#
#   ## gather metrics from PERFORMANCE_SCHEMA.TABLE_LOCK_WAITS
#   # gather_table_lock_waits = false
#
#   ## gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMARY_BY_INDEX_USAGE
#   # gather_index_io_waits = false
#
#   ## gather metrics from PERFORMANCE_SCHEMA.EVENT_WAITS
#   # gather_event_waits = false
#
#   ## gather metrics from PERFORMANCE_SCHEMA.FILE_SUMMARY_BY_EVENT_NAME
#   # gather_file_events_stats = false
#
#   ## gather metrics from PERFORMANCE_SCHEMA.EVENTS_STATEMENTS_SUMMARY_BY_DIGEST
#   # gather_perf_events_statements = false
#
#   ## the limits for metrics form perf_events_statements
#   # perf_events_statements_digest_text_limit = 120
#   # perf_events_statements_limit = 250
#   # perf_events_statements_time_limit = 86400
#
#   ## gather metrics from PERFORMANCE_SCHEMA.EVENTS_STATEMENTS_SUMMARY_BY_ACCOUNT_BY_EVENT_NAME
#   # gather_perf_sum_per_acc_per_event         = false
#
#   ## list of events to be gathered for gather_perf_sum_per_acc_per_event
#   ## in case of empty list all events will be gathered
#   # perf_summary_events                       = []
#
#   ## Some queries we may want to run less often (such as SHOW GLOBAL VARIABLES)
#   ##   example: interval_slow = "30m"
#   # interval_slow = ""
#
#   ## Optional TLS Config (will be used if tls=custom parameter specified in server uri)
#   # tls_ca = "/etc/telegraf/ca.pem"
#   # tls_cert = "/etc/telegraf/cert.pem"
#   # tls_key = "/etc/telegraf/key.pem"
#   ## Use TLS but skip chain & host verification
#   # insecure_skip_verify = false

[inputs.mysql.tags]
  _router_key = "mysql"

[[outputs.prometheus_client]]
  listen = ":19410"
  collectors_exclude = ["gocollector", "process"]
[outputs.prometheus_client.tagpass]
  _router_key = ["mysql"]
eof
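The example DSN above connects as root; in practice the mysql input only needs a few read privileges. A minimal dedicated account might look like this (the user name and password are placeholders, and the exact grants needed depend on which gather_* options you enable):

```sql
-- Hypothetical least-privilege monitoring account for the telegraf mysql input
CREATE USER 'telegraf'@'localhost' IDENTIFIED BY 'choose-a-strong-password';
GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'telegraf'@'localhost';
FLUSH PRIVILEGES;
```

Then point the `servers` DSN at this account instead of root.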
  • Nginx monitoring
cat << 'eof' > inputs.nginx.conf
# # Read Nginx's basic status information (ngx_http_stub_status_module)
[[inputs.nginx]]
#   # An array of Nginx stub_status URI to gather stats.
  urls = ["http://10.0.24.7/nginx_status"]
#
#   ## Optional TLS Config
#   tls_ca = "/etc/telegraf/ca.pem"
#   tls_cert = "/etc/telegraf/cert.cer"
#   tls_key = "/etc/telegraf/key.key"
#   ## Use TLS but skip chain & host verification
#   insecure_skip_verify = false
#
#   # HTTP response timeout (default: 5s)
#   response_timeout = "5s"

[inputs.nginx.tags]
  _router_key = "nginx"


[[inputs.tail]]
  name_override = "nginxlog"
  files = ["/usr/local/tengine/logs/grafana_alialili.log", "/usr/local/tengine/logs/spug.alialili.log", "/usr/local/tengine/logs/access.log"]
  from_beginning = true
  pipe = false
  data_format = "grok"
  grok_patterns = ["%{COMBINED_LOG_FORMAT}"]

[inputs.tail.tags]
  _router_key = "nginx"

[[outputs.prometheus_client]]
  listen = ":19360"
  collectors_exclude = ["gocollector", "process"]
[outputs.prometheus_client.tagpass]
  _router_key = ["nginx"]
eof
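The nginx input reads the page served by ngx_http_stub_status_module, so nginx itself must expose it. A minimal snippet for the relevant `server` block (the allow rule assumes telegraf scrapes from the same host, which may differ in your setup):

```nginx
location = /nginx_status {
    stub_status;          # requires nginx built with ngx_http_stub_status_module
    allow 127.0.0.1;      # assumption: telegraf runs locally
    deny all;
}
```

Reload nginx and confirm the URL in `urls` above returns the stub_status text before enabling the input.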
  • Ping probe monitoring
cat << 'eof' > inputs.ping.conf
[[inputs.ping]]
#   ## Hosts to send ping packets to.
  urls = ["10.0.24.7","172.21.16.17"]
#
#   ## Method used for sending pings, can be either "exec" or "native".  When set
#   ## to "exec" the systems ping command will be executed.  When set to "native"
#   ## the plugin will send pings directly.
#   ##
#   ## While the default is "exec" for backwards compatibility, new deployments
#   ## are encouraged to use the "native" method for improved compatibility and
#   ## performance.
#   # method = "exec"
#
#   ## Number of ping packets to send per interval.  Corresponds to the "-c"
#   ## option of the ping command.
#   # count = 1
#
#   ## Time to wait between sending ping packets in seconds.  Operates like the
#   ## "-i" option of the ping command.
#   # ping_interval = 1.0
#
#   ## If set, the time to wait for a ping response in seconds.  Operates like
#   ## the "-W" option of the ping command.
#   # timeout = 1.0
#
#   ## If set, the total ping deadline, in seconds.  Operates like the -w option
#   ## of the ping command.
#   # deadline = 10
#
#   ## Interface or source address to send ping from.  Operates like the -I or -S
#   ## option of the ping command.
#   # interface = ""
#
#   ## Percentiles to calculate. This only works with the native method.
#   # percentiles = [50, 95, 99]
#
#   ## Specify the ping executable binary.
#   # binary = "ping"
#
#   ## Arguments for ping command. When arguments is not empty, the command from
#   ## the binary option will be used and other options (ping_interval, timeout,
#   ## etc) will be ignored.
#   # arguments = ["-c", "3"]
#
#   ## Use only IPv6 addresses when resolving a hostname.
#   # ipv6 = false
#
#   ## Number of data bytes to be sent. Corresponds to the "-s"
#   ## option of the ping command. This only works with the native method.
#   # size = 56
[inputs.ping.tags]
  _router_key = "ping"

[[outputs.prometheus_client]]
  listen = ":19310"
  collectors_exclude = ["gocollector", "process"]
[outputs.prometheus_client.tagpass]
  _router_key = ["ping"]
eof
  • RabbitMQ monitoring
cat << 'eof' > inputs.rabbitmq.conf.disable
# # Reads metrics from RabbitMQ servers via the Management Plugin
# [[inputs.rabbitmq]]
#   ## Management Plugin url. (default: http://localhost:15672)
#   # url = "http://localhost:15672"
#   ## Tag added to rabbitmq_overview series; deprecated: use tags
#   # name = "rmq-server-1"
#   ## Credentials
#   # username = "guest"
#   # password = "guest"
#
#   ## Optional TLS Config
#   # tls_ca = "/etc/telegraf/ca.pem"
#   # tls_cert = "/etc/telegraf/cert.pem"
#   # tls_key = "/etc/telegraf/key.pem"
#   ## Use TLS but skip chain & host verification
#   # insecure_skip_verify = false
#
#   ## Optional request timeouts
#   ##
#   ## ResponseHeaderTimeout, if non-zero, specifies the amount of time to wait
#   ## for a server's response headers after fully writing the request.
#   # header_timeout = "3s"
#   ##
#   ## client_timeout specifies a time limit for requests made by this client.
#   ## Includes connection time, any redirects, and reading the response body.
#   # client_timeout = "4s"
#
#   ## A list of nodes to gather as the rabbitmq_node measurement. If not
#   ## specified, metrics for all nodes are gathered.
#   # nodes = ["rabbit@node1", "rabbit@node2"]
#
#   ## A list of queues to gather as the rabbitmq_queue measurement. If not
#   ## specified, metrics for all queues are gathered.
#   # queues = ["telegraf"]
#
#   ## A list of exchanges to gather as the rabbitmq_exchange measurement. If not
#   ## specified, metrics for all exchanges are gathered.
#   # exchanges = ["telegraf"]
#
#   ## Metrics to include and exclude. Globs accepted.
#   ## Note that an empty array for both will include all metrics
#   ## Currently the following metrics are supported: "exchange", "federation", "node", "overview", "queue"
#   # metric_include = []
#   # metric_exclude = []
#
#   ## Queues to include and exclude. Globs accepted.
#   ## Note that an empty array for both will include all queues
#   queue_name_include = []
#   queue_name_exclude = []
#
#   ## Federation upstreams include and exclude when gathering the rabbitmq_federation measurement.
#   ## If neither are specified, metrics for all federation upstreams are gathered.
#   ## Federation link metrics will only be gathered for queues and exchanges
#   ## whose non-federation metrics will be collected (e.g a queue excluded
#   ## by the 'queue_name_exclude' option will also be excluded from federation).
#   ## Globs accepted.
#   # federation_upstream_include = ["dataCentre-*"]
#   # federation_upstream_exclude = []
[inputs.rabbitmq.tags]
  _router_key = "rabbitmq"

[[outputs.prometheus_client]]
  listen = ":19520"
  collectors_exclude = ["gocollector", "process"]
[outputs.prometheus_client.tagpass]
  _router_key = ["rabbitmq"]
eof
  • Redis monitoring
cat << 'eof' > inputs.redis.conf.disable
# # Read metrics from one or many redis servers
# [[inputs.redis]]
#   ## specify servers via a url matching:
#   ##  [protocol://][:password]@address[:port]
#   ##  e.g.
#   ##    tcp://localhost:6379
#   ##    tcp://:password@192.168.99.100
#   ##    unix:///var/run/redis.sock
#   ##
#   ## If no servers are specified, then localhost is used as the host.
#   ## If no port is specified, 6379 is used
#   servers = ["tcp://localhost:6379"]
#
#   ## Optional. Specify redis commands to retrieve values
#   # [[inputs.redis.commands]]
#   #   # The command to run where each argument is a separate element
#   #   command = ["get", "sample-key"]
#   #   # The field to store the result in
#   #   field = "sample-key-value"
#   #   # The type of the result
#   #   # Can be "string", "integer", or "float"
#   #   type = "string"
#
#   ## specify server password
#   # password = "s#cr@t%"
#
#   ## Optional TLS Config
#   # tls_ca = "/etc/telegraf/ca.pem"
#   # tls_cert = "/etc/telegraf/cert.pem"
#   # tls_key = "/etc/telegraf/key.pem"
#   ## Use TLS but skip chain & host verification
#   # insecure_skip_verify = true
[inputs.redis.tags]
  _router_key = "redis"

[[outputs.prometheus_client]]
  listen = ":19450"
  collectors_exclude = ["gocollector", "process"]
[outputs.prometheus_client.tagpass]
  _router_key = ["redis"]
eof
  • System monitoring
cat << 'eof' > inputs.system.conf
[[inputs.cpu]]
  percpu = true
  totalcpu = true
  collect_cpu_time = false
  report_active = false
[inputs.cpu.tags]
    _router_key = "system"

[[inputs.disk]]
  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]
[inputs.disk.tags]
  _router_key = "system"

[[inputs.diskio]]
[inputs.diskio.tags]
  _router_key = "system"

[[inputs.kernel]]
[inputs.kernel.tags]
  _router_key = "system"

[[inputs.mem]]
[inputs.mem.tags]
  _router_key = "system"

[[inputs.processes]]
[inputs.processes.tags]
  _router_key = "system"

[[inputs.swap]]
[inputs.swap.tags]
  _router_key = "system"

[[inputs.system]]
  fielddrop = ["uptime_format"]
[inputs.system.tags]
  _router_key = "system"

[[inputs.net]]
  ignore_protocol_stats = true
  interfaces = ["eth*", "cni-podman0", "veth*", "lo"]
[inputs.net.tags]
  _router_key = "system"

[[inputs.netstat]]
[inputs.netstat.tags]
  _router_key = "system"


[[outputs.prometheus_client]]
  listen = ":19200"
  collectors_exclude = ["gocollector", "process"]
[outputs.prometheus_client.tagpass]
  _router_key = ["system"]
eof
  • ZooKeeper monitoring
cat << 'eof' > inputs.zookeeper.conf.disable
# # Reads 'mntr' stats from one or many zookeeper servers
# [[inputs.zookeeper]]
#   ## An array of address to gather stats about. Specify an ip or hostname
#   ## with port. ie localhost:2181, 10.0.0.1:2181, etc.
#
#   ## If no servers are specified, then localhost is used as the host.
#   ## If no port is specified, 2181 is used
#   servers = [":2181"]
#
#   ## Timeout for metric collections from all servers.  Minimum timeout is "1s".
#   # timeout = "5s"
#
#   ## Optional TLS Config
#   # enable_tls = true
#   # tls_ca = "/etc/telegraf/ca.pem"
#   # tls_cert = "/etc/telegraf/cert.pem"
#   # tls_key = "/etc/telegraf/key.pem"
#   ## If false, skip chain & host verification
#   # insecure_skip_verify = true
[inputs.zookeeper.tags]
  _router_key = "zookeeper"

[[outputs.prometheus_client]]
  listen = ":19610"
  collectors_exclude = ["gocollector", "process"]
[outputs.prometheus_client.tagpass]
  _router_key = ["zookeeper"]
eof
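With one prometheus_client output per drop-in, it is easy to lose track of which ports are in use. A small pipeline can list them; sketched here against a temporary directory standing in for /opt/telegraf/etc/telegraf/telegraf.d, with two sample drop-ins written by the script itself:

```shell
# Stand-in for the real telegraf.d directory, so this sketch is safe to run anywhere
dir=$(mktemp -d)
printf '[[outputs.prometheus_client]]\n  listen = ":19410"\n' > "$dir/inputs.mysql.conf"
printf '[[outputs.prometheus_client]]\n  listen = ":19200"\n' > "$dir/inputs.system.conf"
# Extract every configured listen port from the enabled .conf files
grep -h '^  listen' "$dir"/*.conf | sed 's/[^0-9]//g'
# -> 19410 (from inputs.mysql.conf) and 19200 (from inputs.system.conf)
```

Run the same grep against the real directory to get the list of ports Prometheus must scrape.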

To enable one of these monitors, rename its config file to drop the .disable suffix, uncomment the options you need inside it, and restart the telegraf service.
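The enable-by-rename step, sketched in a scratch directory (the real path from step 2 is /opt/telegraf/etc/telegraf/telegraf.d). It works because telegraf's --config-directory only loads files whose names end in .conf, so the extra .disable suffix keeps a file inert:

```shell
d=$(mktemp -d)                     # stand-in for the real telegraf.d directory
touch "$d/inputs.redis.conf.disable"
# Dropping the suffix makes the file visible to --config-directory
mv "$d/inputs.redis.conf.disable" "$d/inputs.redis.conf"
ls "$d"
# -> inputs.redis.conf
# on the real host, follow up with: systemctl restart telegraf
```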

3. Configure Prometheus to scrape this data

# Edit prometheus.yml and add the following under scrape_configs
scrape_configs:
···
  - job_name: 'telegraf'
    file_sd_configs:
      #- files: ['/etc/prometheus/telegraf/*.yml']
      - files: ['/opt/prometheus/telegraf/*.yml']
        refresh_interval: 5s
···
# Create the target files directory
mkdir /opt/prometheus/telegraf/
cd /opt/prometheus/telegraf/
cat << 'eof' > localhost.yml
- targets: [ "localhost:19200" ]  # points at the telegraf ports configured above
  labels:
    group: "telegraf"
    kind: "system"
- targets: [ "localhost:19410" ]
  labels:
    group: "telegraf"
    kind: "mysql"
- targets: [ "localhost:19360" ]
  labels:
    group: "telegraf"
    kind: "nginx"
- targets: [ "localhost:19310" ]
  labels:
    group: "telegraf"
    kind: "ping"
eof
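Because file_sd re-reads the directory every refresh_interval, adding a new machine is just dropping another targets file; Prometheus does not need a restart. A sketch using a temporary directory standing in for /opt/prometheus/telegraf/ (the host name web01 is hypothetical):

```shell
d=$(mktemp -d)                     # stand-in for /opt/prometheus/telegraf/
cat << 'EOF' > "$d/web01.yml"
- targets: [ "web01:19200" ]   # system metrics port on the new host
  labels:
    group: "telegraf"
    kind: "system"
EOF
cat "$d/web01.yml"
```

One file per host keeps targets easy to add and remove independently.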

4. Configure the Grafana dashboard

You can import the dashboard template I use: https://grafana.alialili.cn/d/b4FAZ7Mmz/system-metrics-single?orgId=1&refresh=5m&from=1654250864881&to=1654251764881


Paste the link above into Grafana to import the template in one click. It shows all the metrics collected by inputs.system.conf.

5. Results


For other monitoring targets, suitable dashboard templates can be found at https://grafana.com/grafana/dashboards/ and imported directly.


Originally published 2022-06-03 on the author's personal site/blog.
