
Deploy TiDB on AWS EKS

Original
Author: 杨漆
Last modified: 2021-03-09 14:25:16
Published in the TiDB column

**Introduction**

> Author: 杨漆

> 16 years of relational database administration, from Oracle 9i, 10g, 11g, and 12c to MySQL 5.5, 5.6, 5.7, and 8.0, and on to TiDB; holder of 3 OCP and 2 OCM certifications. The ops road is not a smooth one: I have fallen into plenty of pits and pulled many all-nighters. I am sharing my work notes here in the hope of helping others take fewer detours and lose less sleep.

This document describes how to deploy a TiDB cluster on AWS Elastic Kubernetes Service (EKS).

**Prerequisites**

Before deploying a TiDB cluster on AWS EKS, make sure the following requirements are satisfied:

- Install Helm: used for deploying TiDB Operator.
- Complete all operations in Getting started with eksctl. This guide includes the following contents:
  - Install and configure awscli.
  - Install and configure eksctl, which is used for creating Kubernetes clusters.
  - Install kubectl.

Note:

The operations described in this document require at least the minimum privileges needed by eksctl and the service privileges needed to create a Linux bastion host.

**Deploy**

This section describes how to deploy EKS, TiDB Operator, the TiDB cluster, and the monitoring component.

**Create EKS and the node pool**

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: <clusterName>
  region: us-west-2
nodeGroups:
  - name: admin
    desiredCapacity: 1
    labels:
      dedicated: admin
  - name: tidb
    desiredCapacity: 2
    labels:
      dedicated: tidb
    taints:
      dedicated: tidb:NoSchedule
  - name: pd
    desiredCapacity: 3
    labels:
      dedicated: pd
    taints:
      dedicated: pd:NoSchedule
  - name: tikv
    desiredCapacity: 3
    labels:
      dedicated: tikv
    taints:
      dedicated: tikv:NoSchedule
```

Save the configuration above as cluster.yaml, and replace <clusterName> with your desired cluster name. Execute the following command to create the cluster:

```shell
eksctl create cluster -f cluster.yaml
```

Note:

After executing the command above, you need to wait until the EKS cluster is successfully created and the node group is created and added to the EKS cluster. This process might take 5 to 10 minutes.

For more cluster configuration, refer to eksctl documentation.
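As a quick sanity check before running eksctl, the node-group layout above can be verified programmatically. The sketch below is illustrative and not part of any official tooling; the group definitions are transcribed by hand from cluster.yaml above and must be kept in sync if you edit the file:

```python
# Sanity-check the node-group layout before running `eksctl create cluster`.
# The entries below are transcribed from cluster.yaml above (hypothetical
# helper, not part of eksctl or tidb-operator).
node_groups = [
    {"name": "admin", "desiredCapacity": 1, "dedicated": "admin", "taint": None},
    {"name": "tidb",  "desiredCapacity": 2, "dedicated": "tidb",  "taint": "tidb:NoSchedule"},
    {"name": "pd",    "desiredCapacity": 3, "dedicated": "pd",    "taint": "pd:NoSchedule"},
    {"name": "tikv",  "desiredCapacity": 3, "dedicated": "tikv",  "taint": "tikv:NoSchedule"},
]

def total_nodes(groups):
    """Total EC2 instances the cluster starts with."""
    return sum(g["desiredCapacity"] for g in groups)

def taints_match_labels(groups):
    """Every tainted group should taint the same role its label dedicates."""
    for g in groups:
        if g["taint"] is not None and not g["taint"].startswith(g["dedicated"] + ":"):
            return False
    return True

print(total_nodes(node_groups))          # 9
print(taints_match_labels(node_groups))  # True
```

The taint/label pairing matters: TiDB Operator schedules each component onto its dedicated group via these labels, and the matching taints keep unrelated pods off those nodes.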

**Deploy TiDB Operator**

To deploy TiDB Operator in the Kubernetes cluster, refer to the Deploy TiDB Operator section in Getting Started.

**Deploy TiDB cluster and monitor**

1. Prepare the TidbCluster and TidbMonitor CR files:

```shell
curl -LO https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/aws/tidb-cluster.yaml && \
curl -LO https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/aws/tidb-monitor.yaml
```

2. Create a namespace:

```shell
kubectl create namespace tidb-cluster
```

Note:

A namespace is a virtual cluster backed by the same physical cluster. This document takes tidb-cluster as an example. If you want to use another namespace, modify the corresponding -n or --namespace arguments.

3. Deploy the TiDB cluster:

```shell
kubectl create -f tidb-cluster.yaml -n tidb-cluster && \
kubectl create -f tidb-monitor.yaml -n tidb-cluster
```

4. View the startup status of the TiDB cluster:

```shell
kubectl get pods -n tidb-cluster
```

When all the Pods are in the Running or Ready state, the TiDB cluster is successfully started. For example:

```
NAME                              READY   STATUS    RESTARTS   AGE
tidb-discovery-5cb8474d89-n8cxk   1/1     Running   0          47h
tidb-monitor-6fbcc68669-dsjlc     3/3     Running   0          47h
tidb-pd-0                         1/1     Running   0          47h
tidb-pd-1                         1/1     Running   0          46h
tidb-pd-2                         1/1     Running   0          46h
tidb-tidb-0                       2/2     Running   0          47h
tidb-tidb-1                       2/2     Running   0          46h
tidb-tikv-0                       1/1     Running   0          47h
tidb-tikv-1                       1/1     Running   0          47h
tidb-tikv-2                       1/1     Running   0          47h
```
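Checking the listing by eye gets tedious on larger clusters. The helper below is a hypothetical convenience, not part of tidb-operator: it parses the plain-text kubectl output (a trimmed sample of the listing above is embedded for illustration; in practice you would capture the live output, for example via subprocess):

```python
# Check that every pod listed by `kubectl get pods -n tidb-cluster` is up:
# STATUS must be Running and the READY column must read n/n.
# Sample output trimmed from the listing above.
KUBECTL_OUTPUT = """\
NAME                              READY  STATUS   RESTARTS  AGE
tidb-discovery-5cb8474d89-n8cxk   1/1    Running  0         47h
tidb-monitor-6fbcc68669-dsjlc     3/3    Running  0         47h
tidb-pd-0                         1/1    Running  0         47h
tidb-tikv-0                       1/1    Running  0         47h
"""

def all_pods_ready(output: str) -> bool:
    rows = output.strip().splitlines()[1:]  # skip the header row
    for row in rows:
        name, ready, status = row.split()[:3]
        up, total = ready.split("/")
        if status != "Running" or up != total:
            return False
    return True

print(all_pods_ready(KUBECTL_OUTPUT))  # True
```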

**Access the database**

After you deploy a TiDB cluster, you can access the TiDB database via a MySQL client.

**Prepare a host that can access the cluster**

The LoadBalancer created for your TiDB cluster is an intranet LoadBalancer. You can create a bastion host in the cluster VPC to access the database. To create a bastion host on the AWS console, refer to AWS documentation.

Select the cluster's VPC and Subnet, and verify whether the cluster name is correct in the dropdown box. You can view the cluster's VPC and Subnet by running the following command:

```shell
eksctl get cluster -n <clusterName>
```

Allow the bastion host to access the Internet. Select the correct key pair so that you can log in to the host via SSH.

Note:

In addition to using a bastion host, you can also connect an existing machine to the cluster VPC by VPC Peering.

If the EKS cluster is created in an existing VPC, you can use the host inside the VPC.

**Install the MySQL client and connect**

After the bastion host is created, you can connect to the bastion host via SSH and access the TiDB cluster via the MySQL client.

1. Connect to the bastion host via SSH:

```shell
ssh [-i /path/to/your/private-key.pem] ec2-user@<bastion-public-dns-name>
```

2. Install the MySQL client:

```shell
sudo yum install mysql -y
```

3. Connect the client to the TiDB cluster:

```shell
mysql -h <tidb-nlb-dnsname> -P 4000 -u root
```

For example:

```shell
$ mysql -h abfc623004ccb4cc3b363f3f37475af1-9774d22c27310bc1.elb.us-west-2.amazonaws.com -P 4000 -u root
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 1189
Server version: 5.7.25-TiDB-v4.0.2 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show status;
+--------------------+--------------------------------------+
| Variable_name      | Value                                |
+--------------------+--------------------------------------+
| Ssl_cipher         |                                      |
| Ssl_cipher_list    |                                      |
| Ssl_verify_mode    | 0                                    |
| Ssl_version        |                                      |
| ddl_schema_version | 22                                   |
| server_id          | ed4ba88b-436a-424d-9087-977e897cf5ec |
+--------------------+--------------------------------------+
6 rows in set (0.00 sec)
```

<tidb-nlb-dnsname> is the LoadBalancer domain name of the TiDB service. You can view the domain name in the EXTERNAL-IP field by executing kubectl get svc basic-tidb -n tidb-cluster.
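Before installing the client, you may want to confirm that the NLB endpoint is reachable on port 4000 from the bastion host. A minimal sketch using only the Python standard library; the commented-out host argument is a placeholder for your actual EXTERNAL-IP value:

```python
# Quick TCP reachability check for the TiDB endpoint (illustrative helper,
# not part of any official tooling). Substitute the EXTERNAL-IP reported by
# `kubectl get svc basic-tidb -n tidb-cluster` for the placeholder host.
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# port_open("<tidb-nlb-dnsname>", 4000)
```

A failed check usually points at security-group rules or at running the check from outside the cluster VPC, since the LoadBalancer is intranet-only.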

Note:

By default, TiDB (starting from v4.0.2) periodically shares usage details with PingCAP to help understand how to improve the product. For details about what is shared and how to disable the sharing, see Telemetry.

**Monitor**

Obtain the LoadBalancer domain name of Grafana:

```shell
kubectl -n tidb-cluster get svc basic-grafana
```

In the output above, the EXTERNAL-IP column is the LoadBalancer domain name.

You can access the <grafana-lb>:3000 address using your web browser to view monitoring metrics. Replace <grafana-lb> with the LoadBalancer domain name.
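If you want to script this lookup, the EXTERNAL-IP column can be pulled out of the kubectl output directly. A small sketch; the embedded sample line and its domain name are hypothetical stand-ins for your real output:

```python
# Extract the LoadBalancer domain name (EXTERNAL-IP column) from the output
# of `kubectl -n tidb-cluster get svc basic-grafana`.
# The sample row below is hypothetical.
SVC_OUTPUT = """\
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)          AGE
basic-grafana   LoadBalancer   10.100.199.42   a806cfe84c12a4831aa3313e792e3eed-1964630135.us-west-2.elb.amazonaws.com   3000:30761/TCP   47h
"""

def external_ip(output: str, svc_name: str) -> str:
    """Return the EXTERNAL-IP field for the named service."""
    header, *rows = output.strip().splitlines()
    col = header.split().index("EXTERNAL-IP")
    for row in rows:
        fields = row.split()
        if fields[0] == svc_name:
            return fields[col]
    raise ValueError(f"service {svc_name!r} not found")

grafana_lb = external_ip(SVC_OUTPUT, "basic-grafana")
print(f"http://{grafana_lb}:3000")
```

Locating the column by header name rather than by position keeps the sketch working even if kubectl adds or reorders columns.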

The initial Grafana login credentials are:

- User: admin
- Password: admin

**Upgrade**

To upgrade the TiDB cluster, edit the spec.version by executing kubectl edit tc basic -n tidb-cluster.

The upgrade process does not finish immediately. You can watch the upgrade progress by executing kubectl get pods -n tidb-cluster --watch.

**Scale out**

Before scaling out the cluster, you need to scale out the corresponding node group so that the new instances have enough resources for operation.

The following example shows how to scale out the tikv group of the <clusterName> cluster to 4 nodes:

```shell
eksctl scale nodegroup --cluster <clusterName> --name tikv --nodes 4 --nodes-min 4 --nodes-max 4
```

After that, execute kubectl edit tc basic -n tidb-cluster, and modify each component's replicas to the desired number of replicas. The scaling-out process is then completed.

For more information on managing node groups, refer to eksctl documentation.

**Deploy TiFlash/TiCDC**

**Add node groups**

In the eksctl configuration file (cluster.yaml), add the following two items to add one node group each for TiFlash and TiCDC. desiredCapacity is the number of nodes you desire.

```yaml
  - name: tiflash
    desiredCapacity: 3
    labels:
      role: tiflash
    taints:
      dedicated: tiflash:NoSchedule
  - name: ticdc
    desiredCapacity: 1
    labels:
      role: ticdc
    taints:
      dedicated: ticdc:NoSchedule
```

If the cluster is not created, execute eksctl create cluster -f cluster.yaml to create the cluster and node groups.

If the cluster is already created, execute eksctl create nodegroup -f cluster.yaml to create the node groups. The existing node groups are ignored and will not be created again.

**Configure and deploy**

If you want to deploy TiFlash, configure spec.tiflash in tidb-cluster.yaml:

```yaml
spec:
  ...
  tiflash:
    baseImage: pingcap/tiflash
    replicas: 1
    storageClaims:
      - resources:
          requests:
            storage: 100Gi
    tolerations:
      - effect: NoSchedule
        key: dedicated
        operator: Equal
        value: tiflash
```

If you want to deploy TiCDC, configure spec.ticdc in tidb-cluster.yaml:

```yaml
spec:
  ...
  ticdc:
    baseImage: pingcap/ticdc
    replicas: 1
    tolerations:
      - effect: NoSchedule
        key: dedicated
        operator: Equal
        value: ticdc
```

Modify replicas according to your needs.

Finally, execute kubectl -n tidb-cluster apply -f tidb-cluster.yaml to update the TiDB cluster configuration.

For detailed CR configuration, refer to API references and Configure a TiDB Cluster.

**Deploy TiDB Enterprise Edition**

If you need to deploy the TiDB/PD/TiKV/TiFlash/TiCDC Enterprise Edition, configure spec.<tidb/pd/tikv/tiflash/ticdc>.baseImage in tidb-cluster.yaml as the enterprise image. The image format is pingcap/<tidb/pd/tikv/tiflash/ticdc>-enterprise.

For example:

```yaml
spec:
  ...
  pd:
    baseImage: pingcap/pd-enterprise
  ...
  tikv:
    baseImage: pingcap/tikv-enterprise
```

Original statement: This article was published on the Tencent Cloud Developer Community with the author's authorization and may not be reproduced without permission.

For infringement concerns, contact cloudcommunity@tencent.com for removal.
