
A Hands-On Guide to Deploying the Latest TiDB 4.0 Cluster

Original

Author: 杨漆

Last modified: 2021-03-18 14:21:00

**Foreword**

> Author: 杨漆

> 16 years of relational database administration, from Oracle 9i, 10g, 11g, and 12c to MySQL 5.5, 5.6, 5.7, and 8.0, and on to TiDB, earning 3 OCP and 2 OCM certifications along the way. The ops road is never smooth: I have fallen into plenty of pits and pulled many all-nighters. I am sharing my working notes here in the hope of helping others take fewer detours and lose less sleep.

Deploying a TiDB Cluster with TiUP

TiUP is the cluster operations tool introduced with TiDB 4.0. TiUP cluster is a cluster management component, written in Golang, provided by TiUP. With the TiUP cluster component you can handle day-to-day operations: deploying, starting, stopping, and destroying a TiDB cluster, elastic scale-out and scale-in, upgrading the cluster, and managing cluster parameters.

TiUP can currently deploy TiDB, TiFlash, TiDB Binlog, TiCDC, and the monitoring system.

Step 1: Software and hardware requirements and pre-deployment checks

Note:

TiDB and PD can be deployed and run on the same server in production, but if you have higher requirements for performance and reliability, deploy them separately whenever possible.

Higher-spec hardware is strongly recommended for production environments.

For TiKV disk sizing, keep a PCIe SSD under 2 TB and an ordinary SSD under 1.5 TB.

TiFlash supports multi-disk deployment.

For the first disk in the TiFlash data directory, use a high-performance SSD to buffer the real-time writes of data replicated from TiKV. Its performance should be no lower than that of the TiKV disks (for example a PCIe SSD), and its capacity should be no less than 10% of the total capacity, otherwise it may become the bottleneck for how much data the node can hold. The other disks can be multiple ordinary SSDs as needed, though better PCIe SSDs will of course deliver better performance.

TiFlash is best deployed on nodes separate from TiKV. If you must co-locate TiFlash and TiKV on the same node, increase the CPU cores and memory accordingly, and try to put TiFlash and TiKV on different disks to avoid interference.

The total TiFlash disk capacity is roughly: (data volume to be replicated from the whole TiKV cluster / number of TiKV replicas) * number of TiFlash replicas. For example, if the planned TiKV capacity is 1 TB, the TiKV replica count is 3, and the TiFlash replica count is 2, the recommended total TiFlash capacity is 1024 GB / 3 * 2, roughly 683 GB. You can choose to replicate only some tables rather than all of them, in which case size the capacity according to the data volume of those tables.
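
As a quick sanity check on the sizing formula above, the example numbers can be plugged into a one-line calculation (a minimal sketch; 1024 GB, 3 TiKV replicas, and 2 TiFlash replicas are the example values from the text):

  awk -v tikv_gb=1024 -v tikv_replicas=3 -v tiflash_replicas=2 \
      'BEGIN { printf "Recommended TiFlash capacity: %.0f GB\n", tikv_gb / tikv_replicas * tiflash_replicas }'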

For TiCDC, a PCIe SSD of 200 GB or more is recommended.

Mount the data disks on the target TiKV machines with the ext4 options

For production deployments, store TiKV data files on NVMe SSDs formatted with the ext4 file system. This is the best-practice setup; its reliability, security, and stability have been proven in a large number of production scenarios.

Log in to the target machine as root, format the data disk as ext4, and add the nodelalloc and noatime options when mounting. nodelalloc is mandatory, otherwise the TiUP pre-deployment check fails; noatime is optional but recommended.

Note:

If the data disk is already formatted as ext4 and mounted, first run umount /dev/nvme0n1p1 to unmount it, then start from the step of editing /etc/fstab, add the mount options, and remount.

Taking the /dev/nvme0n1 data disk as an example, the steps are as follows:

1. Check the data disk:

fdisk -l

Disk /dev/nvme0n1: 1000 GB

2. Create the partition:

parted -s -a optimal /dev/nvme0n1 mklabel gpt -- mkpart primary ext4 1 -1

Note:

Use the lsblk command to check the device name of the new partition: for an NVMe disk the partition device is usually nvme0n1p1; for a regular disk (for example /dev/sdb) it is usually sdb1.

3. Format the file system:

  mkfs.ext4 /dev/nvme0n1p1

4. Check the UUID of the data-disk partition, add it to /etc/fstab with the nodelalloc,noatime mount options, and remount, as sketched below:
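
The original stops at this step, so here is a minimal sketch of the remaining commands (the UUID and the /data1 mount point are placeholders for your own values):

  lsblk -f                                   # note the UUID of /dev/nvme0n1p1
  vi /etc/fstab                              # add the line below, substituting your UUID
  # UUID=<your-partition-uuid> /data1 ext4 defaults,nodelalloc,noatime 0 2
  mkdir -p /data1 && mount -a                # create the mount point and mount everything in fstab
  mount -t ext4                              # verify that nodelalloc,noatime appear in the mount options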

Check and disable system swap

TiDB needs enough memory to run, and using swap as a buffer for memory pressure is not recommended because it degrades performance. Disable swap permanently; do not rely on swapoff -a alone, because that setting is lost after a reboot. A sketch of the commands follows.
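
A minimal sketch of disabling swap permanently (the sed line simply comments out the swap entry in /etc/fstab; adjust it to your environment):

  echo "vm.swappiness = 0" >> /etc/sysctl.conf
  sysctl -p
  swapoff -a                                  # turn swap off now
  sed -i '/ swap / s/^/#/' /etc/fstab         # keep it off after a reboot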

Check and disable the firewall on the target machines

In a TiDB cluster, the ports used between nodes must be open so that read/write requests, data heartbeats, and other traffic can flow normally. In most production scenarios, traffic between the database and the application services, and between database nodes, stays inside a security zone. If there is no special security requirement, simply turn the firewall off on the target nodes; otherwise, add the required ports to the firewall allow list according to the port usage rules.
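
A minimal sketch of checking and disabling the firewall, assuming a CentOS host running firewalld:

  sudo firewall-cmd --state
  sudo systemctl status firewalld.service
  sudo systemctl stop firewalld.service
  sudo systemctl disable firewalld.service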

Check and install the NTP service

TiDB is a distributed database system and requires clock synchronization between nodes to guarantee the linear consistency of transactions under the ACID model. The common solution today is NTP: you can synchronize against the pool.ntp.org service on the internet, or against an NTP server you run yourself in an offline environment.

Use the following steps to check whether the NTP service is installed and synchronizing with an NTP server:

1. Run the following command; if it prints running, the NTP service is running:

sudo systemctl status ntpd.service

2. Run the ntpstat command to check whether the node is synchronized with an NTP server:
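
A minimal sketch of this check, plus installation commands in case NTP is missing (assuming a CentOS/RHEL host; adjust the package manager for your distribution):

  ntpstat
  # If ntpd is not installed:
  sudo yum install -y ntp ntpdate
  sudo systemctl start ntpd.service
  sudo systemctl enable ntpd.service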

Manually configure SSH mutual trust and passwordless sudo

If you need to configure mutual trust manually from the control machine to the target nodes, refer to this section. In most cases the TiUP deployment tool configures SSH mutual trust and passwordless login automatically, and you can skip this section; a sketch of the manual procedure follows.
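
A minimal sketch of the manual procedure (user names and IP addresses follow the examples used elsewhere in this article):

  # On each target machine, as root: create the tidb user and grant passwordless sudo
  useradd tidb && passwd tidb
  visudo                        # append:  tidb ALL=(ALL) NOPASSWD: ALL
  # On the control machine, as the tidb user: generate a key and copy it to every node
  ssh-keygen -t rsa
  ssh-copy-id -i ~/.ssh/id_rsa.pub tidb@10.0.1.1
  ssh 10.0.1.1                  # should log in without a password
  sudo -su root                 # should switch to root without a password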

Step 2: Install the TiUP component on the control machine

Log in to the control machine as a regular user (the tidb user in this example); all later TiUP installation and cluster management operations are done as this user:

1. Run the following command to install the TiUP tool:

curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
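
After the installer finishes, a few follow-up commands are usually run (a minimal sketch; the profile file name depends on your shell):

  source ~/.bash_profile                        # pick up the tiup binary on PATH
  tiup cluster                                  # install the cluster component on first use
  tiup update --self && tiup update cluster     # keep TiUP and the cluster component current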

Step 3: Edit the initialization configuration file

Edit the cluster initialization configuration file required by TiUP according to your cluster topology.

Six common scenarios are shown here. Based on the topology descriptions and the configuration templates below, create a configuration file named topology.yaml. For other combined scenarios, adapt it from several of the templates as needed.

A. Minimal topology

The most basic cluster topology, consisting of tidb-server, tikv-server, and pd-server; suitable for OLTP workloads.

1. Minimal configuration:

cat  simple-mini.yaml

# # Global variables are applied to all deployments and used as the default value of 

# # the deployments if a specific deployment value is missing. 

global: 

  user: "tidb" 

  ssh_port: 22 

  deploy_dir: "/tidb-deploy" 

  data_dir: "/tidb-data" 

pd_servers: 

  - host: 10.0.1.4 

  - host: 10.0.1.5 

  - host: 10.0.1.6 

tidb_servers: 

  - host: 10.0.1.1 

  - host: 10.0.1.2 

  - host: 10.0.1.3 

tikv_servers: 

  - host: 10.0.1.7 

  - host: 10.0.1.8 

  - host: 10.0.1.9 

monitoring_servers: 

  - host: 10.0.1.10 

grafana_servers: 

  - host: 10.0.1.10 

alertmanager_servers: 

  - host: 10.0.1.10 

2. Detailed configuration:

cat complex-mini.yaml 

## Global variables are applied to all deployments and used as the default value of 

## the deployments if a specific deployment value is missing. 

global: 

 user: "tidb" 

 ssh_port: 22 

 deploy_dir: "/tidb-deploy" 

 data_dir: "/tidb-data" 

## Monitored variables are applied to all the machines. 

monitored: 

  node_exporter_port: 9100 

  blackbox_exporter_port: 9115 

  # deploy_dir: "/tidb-deploy/monitored-9100" 

  # data_dir: "/tidb-data/monitored-9100" 

  # log_dir: "/tidb-deploy/monitored-9100/log" 

# # Server configs are used to specify the runtime configuration of TiDB components. 

# # All configuration items can be found in TiDB docs: 

# # - TiDB: https://pingcap.com/docs/stable/reference/configuration/tidb-server/configuration-file/ 

## - TiKV: https://pingcap.com/docs/stable/reference/configuration/tikv-server/configuration-file/ 

# # - PD: https://pingcap.com/docs/stable/reference/configuration/pd-server/configuration-file/ 

# # All configuration items use points to represent the hierarchy, e.g: 

# #   readpool.storage.use-unified-pool 

# #       

# # You can overwrite this configuration via the instance-level `config` field. 

server_configs: 

  tidb: 

    log.slow-threshold: 300 

    binlog.enable: false 

    binlog.ignore-error: false 

  tikv: 

    # server.grpc-concurrency: 4 

    # raftstore.apply-pool-size: 2 

    # raftstore.store-pool-size: 2 

    # rocksdb.max-sub-compactions: 1 

    # storage.block-cache.capacity: "16GB" 

    # readpool.unified.max-thread-count: 12 

    readpool.storage.use-unified-pool: false 

    readpool.coprocessor.use-unified-pool: true 

  pd: 

    schedule.leader-schedule-limit: 4 

    schedule.region-schedule-limit: 2048 

    schedule.replica-schedule-limit: 64 

pd_servers: 

  - host: 10.0.1.4 

    # ssh_port: 22 

    # name: "pd-1" 

    # client_port: 2379 

    # peer_port: 2380 

    # deploy_dir: "/tidb-deploy/pd-2379" 

    # data_dir: "/tidb-data/pd-2379" 

    # log_dir: "/tidb-deploy/pd-2379/log" 

    # numa_node: "0,1" 

    # # The following configs are used to overwrite the `server_configs.pd` values. 

    # config: 

    #   schedule.max-merge-region-size: 20 

    #   schedule.max-merge-region-keys: 200000 

  - host: 10.0.1.5 

  - host: 10.0.1.6 

tidb_servers: 

  - host: 10.0.1.1 

    # ssh_port: 22 

    # port: 4000 

    # status_port: 10080 

    # deploy_dir: "/tidb-deploy/tidb-4000" 

    # log_dir: "/tidb-deploy/tidb-4000/log" 

    # numa_node: "0,1" 

    # # The following configs are used to overwrite the `server_configs.tidb` values. 

    # config: 

    #   log.slow-query-file: tidb-slow-overwrited.log 

  - host: 10.0.1.2 

  - host: 10.0.1.3 

tikv_servers: 

  - host: 10.0.1.7 

    # ssh_port: 22 

    # port: 20160 

    # status_port: 20180 

    # deploy_dir: "/tidb-deploy/tikv-20160" 

    # data_dir: "/tidb-data/tikv-20160" 

    # log_dir: "/tidb-deploy/tikv-20160/log" 

    # numa_node: "0,1" 

    # # The following configs are used to overwrite the `server_configs.tikv` values. 

    # config: 

    #   server.grpc-concurrency: 4 

    #   server.labels: { zone: "zone1", dc: "dc1", host: "host1" } 

  - host: 10.0.1.8 

  - host: 10.0.1.9 

monitoring_servers: 

  - host: 10.0.1.10 

    # ssh_port: 22 

    # port: 9090 

    # deploy_dir: "/tidb-deploy/prometheus-8249" 

    # data_dir: "/tidb-data/prometheus-8249" 

    # log_dir: "/tidb-deploy/prometheus-8249/log" 

grafana_servers: 

  - host: 10.0.1.10 

    # port: 3000 

    # deploy_dir: /tidb-deploy/grafana-3000 

alertmanager_servers: 

  - host: 10.0.1.10 

   # ssh_port: 22 

    # web_port: 9093 

    # cluster_port: 9094 

    # deploy_dir: "/tidb-deploy/alertmanager-9093" 

    # data_dir: "/tidb-data/alertmanager-9093" 

    # log_dir: "/tidb-deploy/alertmanager-9093/log" 

B. Topology with TiFlash

On top of the minimal topology, TiFlash is deployed as well.

TiFlash is a columnar storage engine and is gradually becoming a standard part of the cluster topology. It suits real-time HTAP workloads.

1. Minimal configuration:

cat  simple-tiflash.yaml

# # Global variables are applied to all deployments and used as the default value of 

# # the deployments if a specific deployment value is missing. 

global: 

 user: "tidb" 

 ssh_port: 22 

 deploy_dir: "/tidb-deploy" 

 data_dir: "/tidb-data" 

server_configs: 

  pd: 

    replication.enable-placement-rules: true 

pd_servers: 

  - host: 10.0.1.4 

  - host: 10.0.1.5 

  - host: 10.0.1.6 

tidb_servers: 

  - host: 10.0.1.7 

  - host: 10.0.1.8 

  - host: 10.0.1.9 

tikv_servers: 

  - host: 10.0.1.1 

  - host: 10.0.1.2 

  - host: 10.0.1.3 

tiflash_servers: 

  - host: 10.0.1.11 

    data_dir: /tidb-data/tiflash-9000 

    deploy_dir: /tidb-deploy/tiflash-9000 

monitoring_servers: 

  - host: 10.0.1.10 

grafana_servers: 

  - host: 10.0.1.10 

alertmanager_servers: 

  - host: 10.0.1.10 

2. Detailed configuration:

cat  complex-tiflash.yaml

# # Global variables are applied to all deployments and used as the default value of 

# # the deployments if a specific deployment value is missing. 

global: 

  user: "tidb" 

  ssh_port: 22 

  deploy_dir: "/tidb-deploy" 

  data_dir: "/tidb-data" 

# # Monitored variables are applied to all the machines. 

monitored: 

 node_exporter_port: 9100 

 blackbox_exporter_port: 9115 

 # deploy_dir: "/tidb-deploy/monitored-9100" 

 # data_dir: "/tidb-data/monitored-9100" 

 # log_dir: "/tidb-deploy/monitored-9100/log" 

# # Server configs are used to specify the runtime configuration of TiDB components. 

# # All configuration items can be found in TiDB docs: 

# # - TiDB: https://pingcap.com/docs/stable/reference/configuration/tidb-server/configuration-file/ 

# # - TiKV: https://pingcap.com/docs/stable/reference/configuration/tikv-server/configuration-file/ 

# # - PD: https://pingcap.com/docs/stable/reference/configuration/pd-server/configuration-file/ 

# # All configuration items use points to represent the hierarchy, e.g: 

# #   readpool.storage.use-unified-pool 

# #       

# # You can overwrite this configuration via the instance-level `config` field. 

server_configs: 

 tidb: 

   log.slow-threshold: 300 

 tikv: 

   # server.grpc-concurrency: 4 

   # raftstore.apply-pool-size: 2 

   # raftstore.store-pool-size: 2 

   # rocksdb.max-sub-compactions: 1 

   # storage.block-cache.capacity: "16GB" 

   # readpool.unified.max-thread-count: 12 

   readpool.storage.use-unified-pool: false 

   readpool.coprocessor.use-unified-pool: true 

 pd: 

   schedule.leader-schedule-limit: 4 

   schedule.region-schedule-limit: 2048 

   schedule.replica-schedule-limit: 64 

   replication.enable-placement-rules: true 

 tiflash: 

   # Maximum memory usage for processing a single query. Zero means unlimited. 

   profiles.default.max_memory_usage: 10000000000 

   # Maximum memory usage for processing all concurrently running queries on the server. Zero means unlimited. 

   profiles.default.max_memory_usage_for_all_queries: 0 

pd_servers: 

 - host: 10.0.1.4 

   # ssh_port: 22 

   # name: "pd-1" 

   # client_port: 2379 

   # peer_port: 2380 

   # deploy_dir: "/tidb-deploy/pd-2379" 

   # data_dir: "/tidb-data/pd-2379" 

   # log_dir: "/tidb-deploy/pd-2379/log" 

   # numa_node: "0,1" 

   # # The following configs are used to overwrite the `server_configs.pd` values. 

   # config: 

   #   schedule.max-merge-region-size: 20 

   #   schedule.max-merge-region-keys: 200000 

 - host: 10.0.1.5 

 - host: 10.0.1.6 

tidb_servers: 

 - host: 10.0.1.7 

   # ssh_port: 22 

   # port: 4000 

   # status_port: 10080 

   # deploy_dir: "/tidb-deploy/tidb-4000" 

   # log_dir: "/tidb-deploy/tidb-4000/log" 

   # numa_node: "0,1" 

   # # The following configs are used to overwrite the `server_configs.tidb` values. 

   # config: 

   #   log.slow-query-file: tidb-slow-overwrited.log 

 - host: 10.0.1.8 

 - host: 10.0.1.9 

tikv_servers: 

 - host: 10.0.1.1 

   # ssh_port: 22 

   # port: 20160 

   # status_port: 20180 

   # deploy_dir: "/tidb-deploy/tikv-20160" 

   # data_dir: "/tidb-data/tikv-20160" 

   # log_dir: "/tidb-deploy/tikv-20160/log" 

   # numa_node: "0,1" 

   # # The following configs are used to overwrite the `server_configs.tikv` values. 

   # config: 

   #   server.grpc-concurrency: 4 

   #   server.labels: { zone: "zone1", dc: "dc1", host: "host1" } 

 - host: 10.0.1.2 

 - host: 10.0.1.3 

tiflash_servers: 

 - host: 10.0.1.11 

   data_dir: /tidb-data/tiflash-9000 

   deploy_dir: /tidb-deploy/tiflash-9000 

   # ssh_port: 22 

    # tcp_port: 9000 

    # http_port: 8123 

    # flash_service_port: 3930 

    # flash_proxy_port: 20170 

    # flash_proxy_status_port: 20292 

    # metrics_port: 8234 

    # deploy_dir: /tidb-deploy/tiflash-9000 

    # numa_node: "0,1" 

    # # The following configs are used to overwrite the `server_configs.tiflash` values. 

    # config: 

    #   logger.level: "info" 

    # learner_config: 

    #   log-level: "info" 

  # - host: 10.0.1.12 

  # - host: 10.0.1.13 

monitoring_servers: 

  - host: 10.0.1.10 

    # ssh_port: 22 

    # port: 9090 

    # deploy_dir: "/tidb-deploy/prometheus-8249" 

    # data_dir: "/tidb-data/prometheus-8249" 

    # log_dir: "/tidb-deploy/prometheus-8249/log" 

grafana_servers: 

  - host: 10.0.1.10 

    # port: 3000 

    # deploy_dir: /tidb-deploy/grafana-3000 

alertmanager_servers: 

  - host: 10.0.1.10 

    # ssh_port: 22 

    # web_port: 9093 

    # cluster_port: 9094 

    # deploy_dir: "/tidb-deploy/alertmanager-9093" 

    # data_dir: "/tidb-data/alertmanager-9093" 

    # log_dir: "/tidb-deploy/alertmanager-9093/log" 

Key parameters

Set replication.enable-placement-rules to true in the configuration template to enable PD's Placement Rules feature, which TiFlash requires.

In the instance-level configuration under tiflash_servers, the "host" field currently only accepts an IP address, not a domain name.
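
Once the cluster is up with these settings, TiFlash replicas are created per table through SQL. A hedged example using the TiDB host from this topology (the table name test.t is a placeholder):

  mysql -h 10.0.1.7 -P 4000 -u root -e "ALTER TABLE test.t SET TIFLASH REPLICA 2;"
  mysql -h 10.0.1.7 -P 4000 -u root -e "SELECT TABLE_SCHEMA, TABLE_NAME, REPLICA_COUNT, AVAILABLE FROM information_schema.tiflash_replica;"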

TiFlash configuration parameters:

cat tiflash.toml

tmp_path = path for TiFlash temporary files

path = TiFlash data storage path     # if there are multiple directories, separate them with commas

path_realtime_mode = false # defaults to false; if set to true and path lists multiple directories, the latest data is stored on the first directory and older data on the others

listen_host = host the TiFlash service listens on # usually set to 0.0.0.0

tcp_port = TiFlash TCP service port

http_port = TiFlash HTTP service port

mark_cache_size = 5368709120 # memory cache size limit for data-block metadata; usually no need to change

minmax_index_cache_size = 5368709120 # memory cache size limit for data-block min-max indexes; usually no need to change

[flash]

   tidb_status_addr = TiDB status port address # separate multiple addresses with commas

   service_addr = listening address of the TiFlash Raft service and coprocessor service

Multiple TiFlash nodes elect a master that is responsible for adding and removing placement rules in PD; this is controlled by three parameters:

[flash.flash_cluster]

   refresh_interval = interval at which the master refreshes its validity period

   update_rule_interval = interval at which the master fetches TiFlash replica status from TiDB and interacts with PD

   master_ttl = validity period of a newly elected master

   cluster_manager_path = absolute path of the directory containing pd buddy

   log = pd buddy log path

[flash.proxy]

   addr = proxy listening address

   advertise-addr = externally reachable address of the proxy

   data-dir = proxy data storage path

   config = proxy configuration file path

   log-file = proxy log path

[logger]

   level = log level (supports trace, debug, information, warning, error)

   log = TiFlash log path

   errorlog = TiFlash error log path

   size = size of a single log file

   count = maximum number of log files to keep

[raft]

   kvstore_path = kvstore data storage path # defaults to "{the first directory in path}/kvstore"

   pd_addr = PD service address # separate multiple addresses with commas

[status]

   metrics_port = port from which Prometheus pulls metrics

[profiles]

[profiles.default]

   dt_enable_logical_split = true # whether the storage engine uses logical split when splitting segments; logical split reduces write amplification and improves write speed, at the cost of some wasted space; defaults to true

   max_memory_usage = 10000000000 # memory limit for intermediate data in a single coprocessor query, in bytes; defaults to 10000000000; 0 means no limit

   max_memory_usage_for_all_queries = 0 # memory limit for intermediate data across all running queries, in bytes; defaults to 0, which means no limit

C. Topology with TiCDC

On top of the minimal topology, TiCDC is deployed as well. TiCDC is the incremental data replication tool for TiDB introduced in version 4.0; it supports multiple downstream targets (TiDB / MySQL / MQ). Compared with TiDB Binlog, TiCDC offers lower latency and is natively highly available. After deployment, start TiCDC and create replication tasks with cdc cli, as sketched below.
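
A hedged example of creating and listing a changefeed with cdc cli once the cluster is running (the PD endpoint matches this topology; the downstream MySQL URI and password are placeholders):

  cdc cli changefeed create --pd=http://10.0.1.4:2379 --sink-uri="mysql://root:password@10.0.1.12:3306/"
  cdc cli changefeed list --pd=http://10.0.1.4:2379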

1. Minimal configuration:

  cat simple-cdc.yaml

# # Global variables are applied to all deployments and used as the default value of 

# # the deployments if a specific deployment value is missing. 

global: 

 user: "tidb" 

 ssh_port: 22 

 deploy_dir: "/tidb-deploy" 

 data_dir: "/tidb-data" 

pd_servers: 

  - host: 10.0.1.4 

  - host: 10.0.1.5 

  - host: 10.0.1.6 

tidb_servers: 

  - host: 10.0.1.1 

  - host: 10.0.1.2 

  - host: 10.0.1.3 

tikv_servers: 

  - host: 10.0.1.7 

  - host: 10.0.1.8 

  - host: 10.0.1.9 

cdc_servers: 

  - host: 10.0.1.7 

  - host: 10.0.1.8 

  - host: 10.0.1.9 

monitoring_servers: 

  - host: 10.0.1.10 

grafana_servers: 

  - host: 10.0.1.10 

alertmanager_servers: 

  - host: 10.0.1.10 

2. Detailed configuration:

cat  complex-cdc.yaml

# # Global variables are applied to all deployments and used as the default value of 

# # the deployments if a specific deployment value is missing. 

global: 

  user: "tidb" 

  ssh_port: 22 

  deploy_dir: "/tidb-deploy" 

  data_dir: "/tidb-data" 

# # Monitored variables are applied to all the machines. 

monitored: 

 node_exporter_port: 9100 

 blackbox_exporter_port: 9115 

 # deploy_dir: "/tidb-deploy/monitored-9100" 

 # data_dir: "/tidb-data/monitored-9100" 

 # log_dir: "/tidb-deploy/monitored-9100/log" 

# # Server configs are used to specify the runtime configuration of TiDB components. 

# # All configuration items can be found in TiDB docs: 

# # - TiDB: https://pingcap.com/docs/stable/reference/configuration/tidb-server/configuration-file/ 

# # - TiKV: https://pingcap.com/docs/stable/reference/configuration/tikv-server/configuration-file/ 

# # - PD: https://pingcap.com/docs/stable/reference/configuration/pd-server/configuration-file/ 

# # All configuration items use points to represent the hierarchy, e.g: 

# #   readpool.storage.use-unified-pool 

# #       

# # You can overwrite this configuration via the instance-level `config` field. 

server_configs: 

 tidb: 

   log.slow-threshold: 300 

 tikv: 

   # server.grpc-concurrency: 4 

   # raftstore.apply-pool-size: 2 

   # raftstore.store-pool-size: 2 

   # rocksdb.max-sub-compactions: 1 

   # storage.block-cache.capacity: "16GB" 

   # readpool.unified.max-thread-count: 12 

   readpool.storage.use-unified-pool: false 

   readpool.coprocessor.use-unified-pool: true 

 pd: 

   schedule.leader-schedule-limit: 4 

   schedule.region-schedule-limit: 2048 

   schedule.replica-schedule-limit: 64 

pd_servers: 

 - host: 10.0.1.4 

   # ssh_port: 22 

   # name: "pd-1" 

   # client_port: 2379 

   # peer_port: 2380 

   # deploy_dir: "/tidb-deploy/pd-2379" 

   # data_dir: "/tidb-data/pd-2379" 

   # log_dir: "/tidb-deploy/pd-2379/log" 

   # numa_node: "0,1" 

   # # The following configs are used to overwrite the `server_configs.pd` values. 

   # config: 

   #   schedule.max-merge-region-size: 20 

   #   schedule.max-merge-region-keys: 200000 

 - host: 10.0.1.5 

 - host: 10.0.1.6 

tidb_servers: 

 - host: 10.0.1.1 

   # ssh_port: 22 

   # port: 4000 

   # status_port: 10080 

   # deploy_dir: "/tidb-deploy/tidb-4000" 

   # log_dir: "/tidb-deploy/tidb-4000/log" 

   # numa_node: "0,1" 

   # # The following configs are used to overwrite the `server_configs.tidb` values. 

   # config: 

   #   log.slow-query-file: tidb-slow-overwrited.log 

 - host: 10.0.1.2 

 - host: 10.0.1.3 

tikv_servers: 

 - host: 10.0.1.7 

   # ssh_port: 22 

   # port: 20160 

   # status_port: 20180 

   # deploy_dir: "/tidb-deploy/tikv-20160" 

   # data_dir: "/tidb-data/tikv-20160" 

   # log_dir: "/tidb-deploy/tikv-20160/log" 

   # numa_node: "0,1" 

   # # The following configs are used to overwrite the `server_configs.tikv` values. 

   # config: 

   #   server.grpc-concurrency: 4 

   #   server.labels: { zone: "zone1", dc: "dc1", host: "host1" } 

 - host: 10.0.1.8 

 - host: 10.0.1.9 

cdc_servers: 

 - host: 10.0.1.1 

   port: 8300 

   deploy_dir: "/tidb-deploy/cdc-8300" 

   log_dir: "/tidb-deploy/cdc-8300/log" 

 - host: 10.0.1.2 

   port: 8300 

   deploy_dir: "/tidb-deploy/cdc-8300" 

   log_dir: "/tidb-deploy/cdc-8300/log" 

 - host: 10.0.1.3 

   port: 8300 

   deploy_dir: "/tidb-deploy/cdc-8300" 

   log_dir: "/tidb-deploy/cdc-8300/log" 

monitoring_servers: 

  - host: 10.0.1.10 

    # ssh_port: 22 

    # port: 9090 

    # deploy_dir: "/tidb-deploy/prometheus-8249" 

    # data_dir: "/tidb-data/prometheus-8249" 

    # log_dir: "/tidb-deploy/prometheus-8249/log" 

grafana_servers: 

  - host: 10.0.1.10 

    # port: 3000 

    # deploy_dir: /tidb-deploy/grafana-3000 

alertmanager_servers: 

  - host: 10.0.1.10 

    # ssh_port: 22 

    # web_port: 9093 

    # cluster_port: 9094 

    # deploy_dir: "/tidb-deploy/alertmanager-9093" 

    # data_dir: "/tidb-data/alertmanager-9093" 

    # log_dir: "/tidb-deploy/alertmanager-9093/log" 

D. Topology with TiDB Binlog

On top of the minimal topology, TiDB Binlog is deployed as well. TiDB Binlog is a widely used incremental replication component that provides near-real-time backup and replication; a quick status check is sketched below.
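
After deployment, the Pump and Drainer status can be checked from any TiDB server with SQL (a hedged example; host and port follow this topology):

  mysql -h 10.0.1.1 -P 4000 -u root -e "SHOW PUMP STATUS;"
  mysql -h 10.0.1.1 -P 4000 -u root -e "SHOW DRAINER STATUS;"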

1. Minimal configuration:

  cat  simple-tidb-binlog.yaml

# # Global variables are applied to all deployments and used as the default value of 

# # the deployments if a specific deployment value is missing. 

global: 

 user: "tidb" 

 ssh_port: 22 

 deploy_dir: "/tidb-deploy" 

 data_dir: "/tidb-data" 

server_configs: 

  tidb: 

    binlog.enable: true 

    binlog.ignore-error: true 

pd_servers: 

  - host: 10.0.1.4 

  - host: 10.0.1.5 

  - host: 10.0.1.6 

tidb_servers: 

  - host: 10.0.1.1 

  - host: 10.0.1.2 

  - host: 10.0.1.3 

tikv_servers: 

  - host: 10.0.1.7 

  - host: 10.0.1.8 

  - host: 10.0.1.9 

pump_servers: 

  - host: 10.0.1.1 

  - host: 10.0.1.2 

  - host: 10.0.1.3 

drainer_servers: 

  - host: 10.0.1.12 

    config: 

      syncer.db-type: "tidb" 

      syncer.to.host: "10.0.1.12" 

      syncer.to.user: "root" 

      syncer.to.password: "" 

      syncer.to.port: 4000 

monitoring_servers: 

  - host: 10.0.1.10 

grafana_servers: 

  - host: 10.0.1.10 

alertmanager_servers: 

  - host: 10.0.1.10 

2. Detailed configuration:

  cat  complex-tidb-binlog.yaml

# # Global variables are applied to all deployments and used as the default value of 

# # the deployments if a specific deployment value is missing. 

global: 

  user: "tidb" 

  ssh_port: 22 

  deploy_dir: "/tidb-deploy" 

  data_dir: "/tidb-data" 

# # Monitored variables are applied to all the machines. 

monitored: 

 node_exporter_port: 9100 

 blackbox_exporter_port: 9115 

 # deploy_dir: "/tidb-deploy/monitored-9100" 

 # data_dir: "/tidb-data/monitored-9100" 

 # log_dir: "/tidb-deploy/monitored-9100/log" 

# # Server configs are used to specify the runtime configuration of TiDB components. 

# # All configuration items can be found in TiDB docs: 

# # - TiDB: https://pingcap.com/docs/stable/reference/configuration/tidb-server/configuration-file/ 

# # - TiKV: https://pingcap.com/docs/stable/reference/configuration/tikv-server/configuration-file/ 

# # - PD: https://pingcap.com/docs/stable/reference/configuration/pd-server/configuration-file/ 

# # All configuration items use points to represent the hierarchy, e.g: 

# #   readpool.storage.use-unified-pool 

# #       

# # You can overwrite this configuration via the instance-level `config` field. 

server_configs: 

 tidb: 

   log.slow-threshold: 300 

   binlog.enable: true 

   binlog.ignore-error: true 

 tikv: 

   # server.grpc-concurrency: 4 

   # raftstore.apply-pool-size: 2 

   # raftstore.store-pool-size: 2 

   # rocksdb.max-sub-compactions: 1 

   # storage.block-cache.capacity: "16GB" 

   # readpool.unified.max-thread-count: 12 

   readpool.storage.use-unified-pool: false 

   readpool.coprocessor.use-unified-pool: true 

 pd: 

   schedule.leader-schedule-limit: 4 

   schedule.region-schedule-limit: 2048 

   schedule.replica-schedule-limit: 64 

pd_servers: 

 - host: 10.0.1.4 

   # ssh_port: 22 

   # name: "pd-1" 

   # client_port: 2379 

   # peer_port: 2380 

   # deploy_dir: "/tidb-deploy/pd-2379" 

   # data_dir: "/tidb-data/pd-2379" 

   # log_dir: "/tidb-deploy/pd-2379/log" 

   # numa_node: "0,1" 

   # # The following configs are used to overwrite the `server_configs.pd` values. 

   # config: 

   #   schedule.max-merge-region-size: 20 

   #   schedule.max-merge-region-keys: 200000 

 - host: 10.0.1.5 

 - host: 10.0.1.6 

tidb_servers: 

 - host: 10.0.1.1 

   # ssh_port: 22 

   # port: 4000 

   # status_port: 10080 

   # deploy_dir: "/tidb-deploy/tidb-4000" 

   # log_dir: "/tidb-deploy/tidb-4000/log" 

   # numa_node: "0,1" 

   # # The following configs are used to overwrite the `server_configs.tidb` values. 

   # config: 

   #   log.slow-query-file: tidb-slow-overwrited.log 

 - host: 10.0.1.2 

 - host: 10.0.1.3 

tikv_servers: 

 - host: 10.0.1.7 

   # ssh_port: 22 

   # port: 20160 

   # status_port: 20180 

   # deploy_dir: "/tidb-deploy/tikv-20160" 

   # data_dir: "/tidb-data/tikv-20160" 

   # log_dir: "/tidb-deploy/tikv-20160/log" 

   # numa_node: "0,1" 

   # # The following configs are used to overwrite the `server_configs.tikv` values. 

   # config: 

   #   server.grpc-concurrency: 4 

   #   server.labels: { zone: "zone1", dc: "dc1", host: "host1" } 

 - host: 10.0.1.8 

 - host: 10.0.1.9 

pump_servers: 

 - host: 10.0.1.1 

   ssh_port: 22 

   port: 8250 

   deploy_dir: "/tidb-deploy/pump-8249" 

   data_dir: "/tidb-data/pump-8249" 

   # The following configs are used to overwrite the `server_configs.drainer` values. 

   config: 

     gc: 7 

 - host: 10.0.1.2 

   ssh_port: 22 

   port: 8250 

   deploy_dir: "/tidb-deploy/pump-8249" 

   data_dir: "/tidb-data/pump-8249" 

   # The following configs are used to overwrite the `server_configs.drainer` values. 

   config: 

     gc: 7 

 - host: 10.0.1.3 

   ssh_port: 22 

   port: 8250 

   deploy_dir: "/tidb-deploy/pump-8249" 

   data_dir: "/tidb-data/pump-8249" 

   # The following configs are used to overwrite the `server_configs.drainer` values. 

   config: 

     gc: 7 

drainer_servers: 

  - host: 10.0.1.12 

    port: 8249 

    data_dir: "/tidb-data/drainer-8249" 

    # If drainer doesn't have a checkpoint, use initial commitTS as the initial checkpoint. 

    # Will get a latest timestamp from pd if commit_ts is set to -1 (the default value). 

    commit_ts: -1 

    deploy_dir: "/tidb-deploy/drainer-8249" 

    # The following configs are used to overwrite the `server_configs.drainer` values. 

    config: 

      syncer.db-type: "tidb" 

      syncer.to.host: "10.0.1.12" 

      syncer.to.user: "root" 

      syncer.to.password: "" 

      syncer.to.port: 4000 

monitoring_servers: 

  - host: 10.0.1.10 

    # ssh_port: 22 

    # port: 9090 

    # deploy_dir: "/tidb-deploy/prometheus-8249" 

    # data_dir: "/tidb-data/prometheus-8249" 

    # log_dir: "/tidb-deploy/prometheus-8249/log" 

grafana_servers: 

  - host: 10.0.1.10 

    # port: 3000 

    # deploy_dir: /tidb-deploy/grafana-3000 

alertmanager_servers: 

  - host: 10.0.1.10 

    # ssh_port: 22 

    # web_port: 9093 

    # cluster_port: 9094 

    # deploy_dir: "/tidb-deploy/alertmanager-9093" 

    # data_dir: "/tidb-data/alertmanager-9093" 

    # log_dir: "/tidb-deploy/alertmanager-9093/log" 

E. Topology with TiSpark

On top of the minimal topology, the TiSpark component is deployed as well. TiSpark is a product built by PingCAP to answer users' complex OLAP needs. Running on the Spark platform and drawing on the strengths of the TiKV distributed cluster, it works together with TiDB to give users a one-stop solution for HTAP (Hybrid Transactional/Analytical Processing). TiSpark support in the TiUP cluster component is currently an experimental feature; a quick verification sketch follows.
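
A hedged verification sketch once TiSpark is running (the spark-shell path is an assumption derived from the default deploy_dir below; database and table names are placeholders):

  /tidb-deploy/tispark-master-7077/bin/spark-shell
  # then, inside the Spark shell:
  #   spark.sql("select ti_version()").show
  #   spark.sql("use tidb_database")
  #   spark.sql("select count(*) from example_table").show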

1. Minimal configuration:

  cat  simple-tispark.yaml

# # Global variables are applied to all deployments and used as the default value of 

# # the deployments if a specific deployment value is missing. 

global: 

 user: "tidb" 

 ssh_port: 22 

 deploy_dir: "/tidb-deploy" 

 data_dir: "/tidb-data" 

pd_servers: 

  - host: 10.0.1.4 

  - host: 10.0.1.5 

  - host: 10.0.1.6 

tidb_servers: 

  - host: 10.0.1.1 

  - host: 10.0.1.2 

  - host: 10.0.1.3 

tikv_servers: 

  - host: 10.0.1.7 

  - host: 10.0.1.8 

  - host: 10.0.1.9 

# NOTE: TiSpark support is an experimental feature, it's not recommend to be used in 

# production at present. 

# To use TiSpark, you need to manually install Java Runtime Environment (JRE) 8 on the 

# host, see the OpenJDK doc for a reference: https://openjdk.java.net/install/ 

# NOTE: Only 1 master node is supported for now 

tispark_masters: 

  - host: 10.0.1.21 

# NOTE: multiple worker nodes on the same host is not supported by Spark 

tispark_workers: 

  - host: 10.0.1.22 

  - host: 10.0.1.23 

monitoring_servers: 

  - host: 10.0.1.10 

grafana_servers: 

  - host: 10.0.1.10 

alertmanager_servers: 

  - host: 10.0.1.10 

2. Detailed configuration:

  cat  complex-tispark.yaml

# # Global variables are applied to all deployments and used as the default value of 

# # the deployments if a specific deployment value is missing. 

global: 

 user: "tidb" 

 ssh_port: 22 

 deploy_dir: "/tidb-deploy" 

 data_dir: "/tidb-data" 

# # Monitored variables are applied to all the machines. 

monitored: 

 node_exporter_port: 9100 

 blackbox_exporter_port: 9115 

 # deploy_dir: "/tidb-deploy/monitored-9100" 

 # data_dir: "/tidb-data/monitored-9100" 

 # log_dir: "/tidb-deploy/monitored-9100/log" 

# # Server configs are used to specify the runtime configuration of TiDB components. 

# # All configuration items can be found in TiDB docs: 

# # - TiDB: https://pingcap.com/docs/stable/reference/configuration/tidb-server/configuration-file/ 

# # - TiKV: https://pingcap.com/docs/stable/reference/configuration/tikv-server/configuration-file/ 

# # - PD: https://pingcap.com/docs/stable/reference/configuration/pd-server/configuration-file/ 

# # All configuration items use points to represent the hierarchy, e.g: 

# #   readpool.storage.use-unified-pool 

# # 

# # You can overwrite this configuration via the instance-level `config` field. 

server_configs: 

 tidb: 

   log.slow-threshold: 300 

 tikv: 

   # server.grpc-concurrency: 4 

   # raftstore.apply-pool-size: 2 

   # raftstore.store-pool-size: 2 

   # rocksdb.max-sub-compactions: 1 

   # storage.block-cache.capacity: "16GB" 

   # readpool.unified.max-thread-count: 12 

   readpool.storage.use-unified-pool: false 

   readpool.coprocessor.use-unified-pool: true 

 pd: 

   schedule.leader-schedule-limit: 4 

   schedule.region-schedule-limit: 2048 

   schedule.replica-schedule-limit: 64 

pd_servers: 

 - host: 10.0.1.4 

   # ssh_port: 22 

   # name: "pd-1" 

   # client_port: 2379 

   # peer_port: 2380 

   # deploy_dir: "/tidb-deploy/pd-2379" 

   # data_dir: "/tidb-data/pd-2379" 

   # log_dir: "/tidb-deploy/pd-2379/log" 

   # numa_node: "0,1" 

   # # The following configs are used to overwrite the `server_configs.pd` values. 

   # config: 

   #   schedule.max-merge-region-size: 20 

   #   schedule.max-merge-region-keys: 200000 

 - host: 10.0.1.5 

 - host: 10.0.1.6 

tidb_servers: 

 - host: 10.0.1.1 

   # ssh_port: 22 

   # port: 4000 

   # status_port: 10080 

   # deploy_dir: "/tidb-deploy/tidb-4000" 

   # log_dir: "/tidb-deploy/tidb-4000/log" 

   # numa_node: "0,1" 

   # # The following configs are used to overwrite the `server_configs.tidb` values. 

   # config: 

   #   log.slow-query-file: tidb-slow-overwrited.log 

 - host: 10.0.1.2 

 - host: 10.0.1.3 

tikv_servers: 

 - host: 10.0.1.7 

   # ssh_port: 22 

   # port: 20160 

   # status_port: 20180 

   # deploy_dir: "/tidb-deploy/tikv-20160" 

   # data_dir: "/tidb-data/tikv-20160" 

   # log_dir: "/tidb-deploy/tikv-20160/log" 

   # numa_node: "0,1" 

   # # The following configs are used to overwrite the `server_configs.tikv` values. 

   # config: 

   #   server.grpc-concurrency: 4 

   #   server.labels: { zone: "zone1", dc: "dc1", host: "host1" } 

 - host: 10.0.1.8 

 - host: 10.0.1.9 

# NOTE: TiSpark support is an experimental feature, it's not recommend to be used in 

# production at present. 

# To use TiSpark, you need to manually install Java Runtime Environment (JRE) 8 on the 

# host, see the OpenJDK doc for a reference: https://openjdk.java.net/install/ 

# If you have already installed JRE 1.8 at a location other than the default of system's 

# package management system, you may use the "java_home" field to set the JAVA_HOME variable. 

# NOTE: Only 1 master node is supported for now 

tispark_masters: 

  - host: 10.0.1.21 

    # ssh_port: 22 

    # port: 7077 

    # web_port: 8080 

    # deploy_dir: "/tidb-deploy/tispark-master-7077" 

    # java_home: "/usr/local/bin/java-1.8.0" 

    # spark_config: 

    #   spark.driver.memory: "2g" 

    #   spark.eventLog.enabled: "False" 

    #   spark.tispark.grpc.framesize: 268435456 

    #   spark.tispark.grpc.timeout_in_sec: 100 

    #   spark.tispark.meta.reload_period_in_sec: 60 

    #   spark.tispark.request.command.priority: "Low" 

    #   spark.tispark.table.scan_concurrency: 256 

    # spark_env: 

    #   SPARK_EXECUTOR_CORES: 5 

    #   SPARK_EXECUTOR_MEMORY: "10g" 

    #   SPARK_WORKER_CORES: 5 

    #   SPARK_WORKER_MEMORY: "10g" 

# NOTE: multiple worker nodes on the same host is not supported by Spark 

tispark_workers: 

  - host: 10.0.1.22 

    # ssh_port: 22 

    # port: 7078 

    # web_port: 8081 

    # deploy_dir: "/tidb-deploy/tispark-worker-7078" 

    # java_home: "/usr/local/bin/java-1.8.0" 

  - host: 10.0.1.23 

monitoring_servers: 

  - host: 10.0.1.10 

    # ssh_port: 22 

    # port: 9090 

    # deploy_dir: "/tidb-deploy/prometheus-8249" 

    # data_dir: "/tidb-data/prometheus-8249" 

    # log_dir: "/tidb-deploy/prometheus-8249/log" 

grafana_servers: 

  - host: 10.0.1.10 

    # port: 3000 

    # deploy_dir: /tidb-deploy/grafana-3000 

alertmanager_servers: 

  - host: 10.0.1.10 

    # ssh_port: 22 

    # web_port: 9093 

    # cluster_port: 9094 

    # deploy_dir: "/tidb-deploy/alertmanager-9093" 

    # data_dir: "/tidb-data/alertmanager-9093" 

    # log_dir: "/tidb-deploy/alertmanager-9093/log" 


F. Hybrid deployment topology

This applies when multiple instances are mixed onto a single machine, including several instances of the same component on one host; it requires extra configuration for directories, ports, resource ratios, labels, and so on.

This is the topology and the key parameters for a mixed TiDB and TiKV deployment. A common scenario: the deployment machines have multi-socket CPUs and plenty of memory, so to raise physical resource utilization you run multiple instances per machine, binding TiDB and TiKV to NUMA nodes to isolate their CPU resources. PD and Prometheus are co-deployed, but their data directories must sit on separate file systems. The placeholder values in the templates below come from the sizing formulas in the multi-instance topology document, sketched after this paragraph.
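
A hedged sketch of how those placeholder values are usually derived (the coefficients are approximate; take the exact formulas from the official multi-instance topology document):

  # Assuming, per host: N TiKV instances, C physical cores, M GiB of memory, D GiB of data disk
  #   readpool.unified.max-thread-count  ~ C * 0.8 / N
  #   storage.block-cache.capacity       ~ M * 0.45 / N   (GiB)
  #   raftstore.capacity                 ~ D / N          (GiB)
  N=2; C=32; M=128; D=2000    # placeholder host spec
  awk -v n=$N -v c=$C -v m=$M -v d=$D 'BEGIN {
    printf "max-thread-count ~ %d, block-cache ~ %dGiB, capacity ~ %dGiB\n", c*0.8/n, m*0.45/n, d/n }'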

1. Minimal configuration:

cat simple-multi-instance.yaml

# # Global variables are applied to all deployments and used as the default value of 

# # the deployments if a specific deployment value is missing. 

global: 

 user: "tidb" 

 ssh_port: 22 

 deploy_dir: "/tidb-deploy" 

 data_dir: "/tidb-data" 

server_configs: 

  tikv: 

    readpool.unified.max-thread-count: <The value refers to the calculation formula result of the multi-instance topology document.> 

    readpool.storage.use-unified-pool: false 

    readpool.coprocessor.use-unified-pool: true 

    storage.block-cache.capacity: "<The value refers to the calculation formula result of the multi-instance topology document.>" 

    raftstore.capacity: "<The value refers to the calculation formula result of the multi-instance topology document.>" 

  pd: 

    replication.location-labels: ["host"] 

pd_servers: 

  - host: 10.0.1.4 

  - host: 10.0.1.5 

  - host: 10.0.1.6 

tidb_servers: 

  - host: 10.0.1.1 

    port: 4000 

    status_port: 10080 

    numa_node: "0" 

  - host: 10.0.1.1 

    port: 4001 

    status_port: 10081 

    numa_node: "1" 

  - host: 10.0.1.2 

    port: 4000 

    status_port: 10080 

    numa_node: "0" 

  - host: 10.0.1.2 

    port: 4001 

    status_port: 10081 

    numa_node: "1" 

  - host: 10.0.1.3 

    port: 4000 

    status_port: 10080 

    numa_node: "0" 

  - host: 10.0.1.3 

    port: 4001 

    status_port: 10081 

    numa_node: "1" 

tikv_servers: 

  - host: 10.0.1.7 

    port: 20160 

    status_port: 20180 

    numa_node: "0" 

    config: 

      server.labels: { host: "tikv1" } 

  - host: 10.0.1.7 

    port: 20161 

    status_port: 20181 

    numa_node: "1" 

    config: 

      server.labels: { host: "tikv1" } 

  - host: 10.0.1.8 

    port: 20160 

    status_port: 20180 

    numa_node: "0" 

    config: 

      server.labels: { host: "tikv2" } 

  - host: 10.0.1.8 

    port: 20161 

    status_port: 20181 

    numa_node: "1" 

    config: 

      server.labels: { host: "tikv2" } 

  - host: 10.0.1.9 

    port: 20160 

    status_port: 20180 

    numa_node: "0" 

    config: 

      server.labels: { host: "tikv3" } 

  - host: 10.0.1.9 

    port: 20161 

    status_port: 20181 

    numa_node: "1" 

    config: 

      server.labels: { host: "tikv3" } 

monitoring_servers: 

  - host: 10.0.1.10 

grafana_servers: 

  - host: 10.0.1.10 

alertmanager_servers: 

  - host: 10.0.1.10 

2. Detailed configuration:

  cat  complex-multi-instance.yaml

# # Global variables are applied to all deployments and used as the default value of 

# # the deployments if a specific deployment value is missing. 

global: 

  user: "tidb" 

  ssh_port: 22 

  deploy_dir: "/tidb-deploy" 

  data_dir: "/tidb-data" 

monitored: 

 node_exporter_port: 9100 

 blackbox_exporter_port: 9115 

 deploy_dir: "/tidb-deploy/monitored-9100" 

 data_dir: "/tidb-data-monitored-9100" 

 log_dir: "/tidb-deploy/monitored-9100/log" 

server_configs: 

 tidb: 

   log.slow-threshold: 300 

 tikv: 

   readpool.unified.max-thread-count: <The value refers to the calculation formula result of the multi-instance topology document.> 

   readpool.storage.use-unified-pool: false 

   readpool.coprocessor.use-unified-pool: true 

   storage.block-cache.capacity: "<The value refers to the calculation formula result of the multi-instance topology document.>" 

   raftstore.capacity: "<The value refers to the calculation formula result of the multi-instance topology document.>" 

 pd: 

   replication.location-labels: ["host"] 

   schedule.leader-schedule-limit: 4 

   schedule.region-schedule-limit: 2048 

   schedule.replica-schedule-limit: 64 

pd_servers: 

 - host: 10.0.1.4 

 - host: 10.0.1.5 

 - host: 10.0.1.6 

tidb_servers: 

 - host: 10.0.1.1 

   port: 4000 

   status_port: 10080 

   deploy_dir: "/tidb-deploy/tidb-4000" 

   log_dir: "/tidb-deploy/tidb-4000/log" 

   numa_node: "0" 

 - host: 10.0.1.1 

   port: 4001 

   status_port: 10081 

   deploy_dir: "/tidb-deploy/tidb-4001" 

   log_dir: "/tidb-deploy/tidb-4001/log" 

   numa_node: "1" 

 - host: 10.0.1.2 

   port: 4000 

   status_port: 10080 

   deploy_dir: "/tidb-deploy/tidb-4000" 

   log_dir: "/tidb-deploy/tidb-4000/log" 

   numa_node: "0" 

 - host: 10.0.1.2 

   port: 4001 

   status_port: 10081 

   deploy_dir: "/tidb-deploy/tidb-4001" 

   log_dir: "/tidb-deploy/tidb-4001/log" 

   numa_node: "1" 

 - host: 10.0.1.3 

   port: 4000 

   status_port: 10080 

   deploy_dir: "/tidb-deploy/tidb-4000" 

   log_dir: "/tidb-deploy/tidb-4000/log" 

   numa_node: "0" 

 - host: 10.0.1.3 

   port: 4001 

   status_port: 10081 

   deploy_dir: "/tidb-deploy/tidb-4001" 

   log_dir: "/tidb-deploy/tidb-4001/log" 

   numa_node: "1" 

tikv_servers: 

 - host: 10.0.1.7 

   port: 20160 

   status_port: 20180 

   deploy_dir: "/tidb-deploy/tikv-20160" 

   data_dir: "/tidb-data/tikv-20160" 

   log_dir: "/tidb-deploy/tikv-20160/log" 

   numa_node: "0" 

   config: 

     server.labels: { host: "tikv1" } 

 - host: 10.0.1.7 

   port: 20161 

   status_port: 20181 

   deploy_dir: "/tidb-deploy/tikv-20161" 

   data_dir: "/tidb-data/tikv-20161" 

   log_dir: "/tidb-deploy/tikv-20161/log" 

   numa_node: "1" 

   config: 

     server.labels: { host: "tikv1" } 

 - host: 10.0.1.8 

   port: 20160 

   status_port: 20180 

   deploy_dir: "/tidb-deploy/tikv-20160" 

   data_dir: "/tidb-data/tikv-20160" 

   log_dir: "/tidb-deploy/tikv-20160/log" 

   numa_node: "0" 

   config: 

     server.labels: { host: "tikv2" } 

 - host: 10.0.1.8 

   port: 20161 

   status_port: 20181 

   deploy_dir: "/tidb-deploy/tikv-20161" 

   data_dir: "/tidb-data/tikv-20161" 

   log_dir: "/tidb-deploy/tikv-20161/log" 

   numa_node: "1" 

   config: 

     server.labels: { host: "tikv2" } 

 - host: 10.0.1.9 

   port: 20160 

   status_port: 20180 

   deploy_dir: "/tidb-deploy/tikv-20160" 

   data_dir: "/tidb-data/tikv-20160" 

   log_dir: "/tidb-deploy/tikv-20160/log" 

   numa_node: "0" 

   config: 

     server.labels: { host: "tikv3" } 

 - host: 10.0.1.9 

   port: 20161 

   status_port: 20181 

   deploy_dir: "/tidb-deploy/tikv-20161" 

   data_dir: "/tidb-data/tikv-20161" 

   log_dir: "/tidb-deploy/tikv-20161/log" 

   numa_node: "1" 

   config: 

     server.labels: { host: "tikv3" } 

monitoring_servers: 

  - host: 10.0.1.10 

    # ssh_port: 22 

    # port: 9090 

    # deploy_dir: "/tidb-deploy/prometheus-8249" 

    # data_dir: "/tidb-data/prometheus-8249" 

    # log_dir: "/tidb-deploy/prometheus-8249/log" 

grafana_servers: 

  - host: 10.0.1.10 

    # port: 3000 

    # deploy_dir: /tidb-deploy/grafana-3000 

alertmanager_servers: 

  - host: 10.0.1.10 

    # ssh_port: 22 

    # web_port: 9093 

    # cluster_port: 9094 

    # deploy_dir: "/tidb-deploy/alertmanager-9093" 

    # data_dir: "/tidb-data/alertmanager-9093" 

    # log_dir: "/tidb-deploy/alertmanager-9093/log" 

G. Cross-data-center deployment topology (for real production environments)

Three-data-centers-in-two-regions cross-DC deployment architecture

Parameter configuration:

 cat  geo-redundancy-deployment.yaml

# Tip: PD priority needs to be manually set using the PD-ctl client tool. such as, member Leader_priority PD-name numbers. 

# Global variables are applied to all deployments and used as the default value of 

# the deployments if a specific deployment value is missing. 

global: 

user: "tidb" 

ssh_port: 22 

deploy_dir: "/tidb-deploy" 

data_dir: "/tidb-data" 

monitored: 

 node_exporter_port: 9100 

 blackbox_exporter_port: 9115 

 deploy_dir: "/tidb-deploy/monitored-9100" 

server_configs: 

 tidb: 

   log.level: debug 

   log.slow-query-file: tidb-slow.log 

 tikv: 

   server.grpc-compression-type: gzip 

   readpool.storage.use-unified-pool: true 

   readpool.storage.low-concurrency: 8 

 pd: 

   replication.location-labels: ["zone","dc","rack","host"] 

   replication.max-replicas: 5 

   label-property: 

     reject-leader: 

       - key: "dc" 

         value: "sha" 

pd_servers: 

- host: 10.0.1.6 

- host: 10.0.1.7 

- host: 10.0.1.8 

- host: 10.0.1.9 

- host: 10.0.1.10 

tidb_servers: 

- host: 10.0.1.1 

- host: 10.0.1.2 

- host: 10.0.1.3 

- host: 10.0.1.4 

- host: 10.0.1.5 

tikv_servers: 

- host: 10.0.1.11  

  ssh_port: 22 

  port: 20160 

  status_port: 20180 

  deploy_dir: "/tidb-deploy/tikv-20160" 

  data_dir: "/tidb-data/tikv-20160" 

  config: 

    server.labels: 

      zone: bj 

      dc: bja 

      rack: rack1 

      host: host1 

- host: 10.0.1.12 

  ssh_port: 22 

  port: 20161 

  status_port: 20181 

  deploy_dir: "/tidb-deploy/tikv-20161" 

  data_dir: "/tidb-data/tikv-20161" 

  config: 

    server.labels: 

      zone: bj 

      dc: bja 

      rack: rack1 

      host: host2 

- host: 10.0.1.13 

  ssh_port: 22 

  port: 20160 

  status_port: 20180 

  deploy_dir: "/tidb-deploy/tikv-20160" 

  data_dir: "/tidb-data/tikv-20160" 

  config: 

    server.labels: 

      zone: bj 

      dc: bjb 

      rack: rack1 

      host: host1 

- host: 10.0.1.14 

  ssh_port: 22 

  port: 20161 

  status_port: 20181 

  deploy_dir: "/tidb-deploy/tikv-20161" 

  data_dir: "/tidb-data/tikv-20161" 

  config: 

    server.labels: 

      zone: bj 

      dc: bjb 

      rack: rack1 

      host: host2 

- host: 10.0.1.15 

  ssh_port: 22 

  port: 20160 

  deploy_dir: "/tidb-deploy/tikv-20160" 

  data_dir: "/tidb-data/tikv-20160" 

  config: 

    server.labels: 

      zone: sh 

      dc: sha 

      rack: rack1 

      host: host1 

    readpool.storage.use-unified-pool: true 

    readpool.storage.low-concurrency: 10 

    raftstore.raft-min-election-timeout-ticks: 1000 

    raftstore.raft-max-election-timeout-ticks: 1020 

monitoring_servers: 

 - host: 10.0.1.16 

grafana_servers: 

 - host: 10.0.1.16 

Notes:

1. Parameters under server_configs for a component take effect globally for that component.

2. Parameters under config take effect only for that particular node.

3. The configuration hierarchy is expressed with dots, for example log.slow-threshold.

Step 4: Run the deployment

tiup cluster deploy tidb-test v4.0.0 ./topology.yaml --user root [-p] [-i /home/root/.ssh/gcp_rsa]
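
A minimal sketch of the usual follow-up commands (the cluster name tidb-test and the 10.0.1.1 TiDB host come from this article; whether you are prompted for a password depends on the -p/-i option used above):

  tiup cluster list                   # confirm the cluster is registered
  tiup cluster start tidb-test        # start all components
  tiup cluster display tidb-test      # check that every node reports Up
  mysql -u root -h 10.0.1.1 -P 4000   # connect to TiDB (empty password by default)

After connecting, select tidb_version()\G should return output like the following: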

*************************** 1. row ***************************

tidb_version(): Release Version: v4.0.0

Edition: Community

Git Commit Hash: 689a6b6439ae7835947fcaccf329a3fc303986cb

Git Branch: HEAD

UTC Build Time: 2020-09-17 11:09:45

GoVersion: go1.13.4

Race Enabled: false

TiKV Min Version: v3.0.0-60965b006877ca7234adaced7890d7b029ed1306

Check Table Before Drop: false

1 row in set (0.00 sec)

Create a database named tidb_database and query the TiKV store status:

Create database tidb_database;

Query OK, 0 rows affected (0.10 sec)

Use tidb_database;

Select STORE_ID, ADDRESS, STORE_STATE, STORE_STATE_NAME, CAPACITY, AVAILABLE, UPTIME from INFORMATION_SCHEMA.TIKV_STORE_STATUS;

Original statement: This article was published on the Tencent Cloud Developer Community with the author's authorization and may not be reproduced without permission.

If there is any infringement, please contact cloudcommunity@tencent.com for removal.
