Prometheus: Deployment and Installation

Author: 行 者 · Published 2020-02-10 10:54:04 · Column: 运维技术迷

Running Environment

OS: CentOS 7; Server version: prometheus-2.15.1; Node exporter version: node_exporter-0.18.1

Required Ports

Server: 9090; node_exporter: 9100
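
If firewalld is enabled on CentOS 7 and the server and node run on different hosts, these ports need to be reachable before scraping will work. A minimal sketch, assuming the default zone (skip this if your firewall is disabled):

firewall-cmd --permanent --add-port=9090/tcp   # Prometheus server web UI / API
firewall-cmd --permanent --add-port=9100/tcp   # node_exporter metrics endpoint
firewall-cmd --reload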

Deployment and Installation

Software download: https://prometheus.io/download/

Running node_exporter

[root@devops opt]# wget https://github.com/prometheus/node_exporter/releases/download/v0.18.1/node_exporter-0.18.1.linux-amd64.tar.gz
[root@devops opt]# tar -zxvf node_exporter-0.18.1.linux-amd64.tar.gz
[root@devops opt]# cd node_exporter-0.18.1.linux-amd64
[root@devops node_exporter-0.18.1.linux-amd64]# ./node_exporter  # start node_exporter; it listens on port 9100
INFO[0000] Starting node_exporter (version=0.18.1, branch=HEAD, revision=3db77732e925c08f675d7404a8c46466b2ece83e)  source="node_exporter.go:156"
INFO[0000] Build context (go=go1.12.5, user=root@b50852a1acba, date=20190604-16:41:18)  source="node_exporter.go:157"
INFO[0000] Enabled collectors:                           source="node_exporter.go:97"
INFO[0000]  - arp                                        source="node_exporter.go:104"
INFO[0000]  - bcache                                     source="node_exporter.go:104"
INFO[0000]  - bonding                                    source="node_exporter.go:104"
INFO[0000]  - conntrack                                  source="node_exporter.go:104"
INFO[0000]  - cpu                                        source="node_exporter.go:104"
INFO[0000]  - cpufreq                                    source="node_exporter.go:104"
INFO[0000]  - diskstats                                  source="node_exporter.go:104"
INFO[0000]  - edac                                       source="node_exporter.go:104"
INFO[0000]  - entropy                                    source="node_exporter.go:104"
INFO[0000]  - filefd                                     source="node_exporter.go:104"
INFO[0000]  - filesystem                                 source="node_exporter.go:104"
INFO[0000]  - hwmon                                      source="node_exporter.go:104"
INFO[0000]  - infiniband                                 source="node_exporter.go:104"
INFO[0000]  - ipvs                                       source="node_exporter.go:104"
INFO[0000]  - loadavg                                    source="node_exporter.go:104"
INFO[0000]  - mdadm                                      source="node_exporter.go:104"
INFO[0000]  - meminfo                                    source="node_exporter.go:104"
INFO[0000]  - netclass                                   source="node_exporter.go:104"
INFO[0000]  - netdev                                     source="node_exporter.go:104"
INFO[0000]  - netstat                                    source="node_exporter.go:104"
INFO[0000]  - nfs                                        source="node_exporter.go:104"
INFO[0000]  - nfsd                                       source="node_exporter.go:104"
INFO[0000]  - pressure                                   source="node_exporter.go:104"
INFO[0000]  - sockstat                                   source="node_exporter.go:104"
INFO[0000]  - stat                                       source="node_exporter.go:104"
INFO[0000]  - textfile                                   source="node_exporter.go:104"
INFO[0000]  - time                                       source="node_exporter.go:104"
INFO[0000]  - timex                                      source="node_exporter.go:104"
INFO[0000]  - uname                                      source="node_exporter.go:104"
INFO[0000]  - vmstat                                     source="node_exporter.go:104"
INFO[0000]  - xfs                                        source="node_exporter.go:104"
INFO[0000]  - zfs                                        source="node_exporter.go:104"
INFO[0000] Listening on :9100                            source="node_exporter.go:170"
[root@devops node_exporter-0.18.1.linux-amd64]# ss -anptu | grep 9100
tcp    ESTAB      0      0      192.168.119.119:59576              192.168.119.136:9100                users:(("prometheus",pid=12267,fd=9))
tcp    LISTEN     0      128      :::9100                 :::*                   users:(("node_exporter",pid=11797,fd=3))
tcp    ESTAB      0      0       ::1:49606               ::1:9100                users:(("prometheus",pid=12267,fd=15))
tcp    ESTAB      0      0       ::1:9100                ::1:49606               users:(("node_exporter",pid=11797,fd=5))
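
To confirm the exporter is actually serving data, you can also fetch its /metrics endpoint directly; a quick check from the same host might look like this:

curl -s http://localhost:9100/metrics | head -n 5                          # first few metric lines
curl -s http://localhost:9100/metrics | grep -m 1 node_cpu_seconds_total   # confirm the cpu collector works

Note that ./node_exporter above runs in the foreground; for anything beyond a quick test you would normally run it in the background or under a service manager such as systemd.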

Running the Server

[root@devops opt]# wget https://github.com/prometheus/prometheus/releases/download/v2.15.1/prometheus-2.15.1.linux-amd64.tar.gz
[root@devops opt]# tar -zxvf prometheus-2.15.1.linux-amd64.tar.gz
[root@devops opt]# cd prometheus-2.15.1.linux-amd64
[root@devops prometheus-2.15.1.linux-amd64]# ls
console_libraries  consoles  data  LICENSE  NOTICE  prometheus  prometheus.yml  prometheus.yml.bak  promtool  tsdb
[root@devops prometheus-2.15.1.linux-amd64]# vim prometheus.yml  # edit the config file and add the node_exporter job
# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
    - targets: ['localhost:9090']

  - job_name: 'node_exporter' # newly added node_exporter job
    static_configs:
      - targets:
        - 'localhost:9100'

[root@devops prometheus-2.15.1.linux-amd64]# ./prometheus  # start the server
level=info ts=2019-12-30T17:27:35.125Z caller=main.go:294 msg="no time or size retention was set so using the default time retention" duration=15d
level=info ts=2019-12-30T17:27:35.125Z caller=main.go:330 msg="Starting Prometheus" version="(version=2.15.1, branch=HEAD, revision=8744510c6391d3ef46d8294a7e1f46e57407ab13)"
level=info ts=2019-12-30T17:27:35.125Z caller=main.go:331 build_context="(go=go1.13.5, user=root@4b1e33c71b9d, date=20191225-01:04:15)"
level=info ts=2019-12-30T17:27:35.125Z caller=main.go:332 host_details="(Linux 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 devops (none))"
level=info ts=2019-12-30T17:27:35.125Z caller=main.go:333 fd_limits="(soft=65536, hard=65536)"
level=info ts=2019-12-30T17:27:35.125Z caller=main.go:334 vm_limits="(soft=unlimited, hard=unlimited)"
level=info ts=2019-12-30T17:27:35.126Z caller=main.go:648 msg="Starting TSDB ..."
level=info ts=2019-12-30T17:27:35.126Z caller=web.go:506 component=web msg="Start listening for connections" address=0.0.0.0:9090
level=info ts=2019-12-30T17:27:35.130Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"
level=info ts=2019-12-30T17:27:35.133Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=5
level=info ts=2019-12-30T17:27:35.141Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=1 maxSegment=5
level=info ts=2019-12-30T17:27:35.145Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=2 maxSegment=5
level=info ts=2019-12-30T17:27:35.153Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=3 maxSegment=5
level=info ts=2019-12-30T17:27:35.172Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=4 maxSegment=5
level=info ts=2019-12-30T17:27:35.172Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=5 maxSegment=5
level=info ts=2019-12-30T17:27:35.175Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC
level=info ts=2019-12-30T17:27:35.175Z caller=main.go:664 msg="TSDB started"
level=info ts=2019-12-30T17:27:35.175Z caller=main.go:734 msg="Loading configuration file" filename=prometheus.yml
level=info ts=2019-12-30T17:27:35.207Z caller=main.go:762 msg="Completed loading of configuration file" filename=prometheus.yml
level=info ts=2019-12-30T17:27:35.207Z caller=main.go:617 msg="Server is ready to receive web requests."
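
Before starting (or restarting) the server it is worth validating the edited configuration with the promtool binary shipped in the same tarball; a minimal sketch:

./promtool check config prometheus.yml      # syntax-check prometheus.yml and any rule files it references
./prometheus --config.file=prometheus.yml   # the config path can also be passed explicitly (prometheus.yml is the default)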

Viewing the Results

If both the server and node_exporter start without errors, open http://<your IP>:9090 in a browser to reach the web UI.

(Screenshot: Prometheus web UI on port 9090)

Click Status -> Targets in the navigation bar to see the newly added node.

(Screenshot: the node_exporter target on the Targets page)
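
The same target information is available from the server's HTTP API, which is convenient on a headless machine; a minimal sketch (the pretty-printing step assumes python is installed):

curl -s http://localhost:9090/api/v1/targets | python -m json.tool | grep -E '"job"|"health"'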

Click Graph in the navigation bar and try querying some data.

(Screenshot: querying a metric on the Graph page)
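
A few node_exporter expressions that are handy for a first test in the Graph page (common examples, not taken from the screenshots):

up{job="node_exporter"}           # 1 = target is being scraped successfully, 0 = down
node_memory_MemAvailable_bytes    # available memory in bytes
100 - avg by (instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100   # CPU usage %, per instance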