# Local clone: use the CLONE statement with the LOCAL keyword and specify a directory on the local MySQL Server in which to store the data. If no storage directory is specified, the local clone operation is not allowed.
CLONE LOCAL DATA DIRECTORY [=] 'clone_dir';
# Remote clone: use the CLONE statement with the INSTANCE keyword and specify the connection information of the remote (donor) MySQL Server, plus a local directory in which to store the data. If no storage directory is specified, the remote MySQL Server's data overwrites the local MySQL Server's data.
CLONE INSTANCE FROM 'user'@'host':port IDENTIFIED BY 'password' [DATA DIRECTORY [=] 'clone_dir'];
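For illustration, the CLONE INSTANCE form can be assembled from its parts. A minimal Python sketch (`build_clone_instance_sql` is a hypothetical helper, not part of the clone plugin):

```python
def build_clone_instance_sql(user, host, port, password, data_dir=None):
    """Build a CLONE INSTANCE statement for a remote (donor) MySQL Server.

    When data_dir is None, the cloned data overwrites the recipient's
    data directory and the recipient restarts itself afterwards (as the
    comments above describe).
    """
    sql = "CLONE INSTANCE FROM '{}'@'{}':{} IDENTIFIED BY '{}'".format(
        user, host, port, password)
    if data_dir is not None:
        # With an explicit directory the local instance keeps running and
        # the clone is written alongside the existing data.
        sql += " DATA DIRECTORY = '{}'".format(data_dir)
    return sql + ";"

print(build_clone_instance_sql("clone_user", "10.211.55.11", 3306, "LXBlxb!1"))
```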
[mysqld]
plugin-load-add=mysql_clone.so
mysql> INSTALL PLUGIN clone SONAME 'mysql_clone.so';
mysql> SELECT PLUGIN_NAME, PLUGIN_STATUS FROM INFORMATION_SCHEMA.PLUGINS WHERE PLUGIN_NAME = 'clone';
+-------------+---------------+
| PLUGIN_NAME | PLUGIN_STATUS |
+-------------+---------------+
| clone       | ACTIVE        |
+-------------+---------------+
[mysqld]
plugin-load-add=mysql_clone.so
clone=FORCE_PLUS_PERMANENT
# To prevent MySQL Server from running without the clone plugin, set the --clone option to FORCE or FORCE_PLUS_PERMANENT so that server startup fails when plugin initialization fails
admin@localhost : (none) 03:10:09> SELECT PLUGIN_NAME, PLUGIN_STATUS FROM INFORMATION_SCHEMA.PLUGINS WHERE PLUGIN_NAME = 'clone';
+-------------+---------------+
| PLUGIN_NAME | PLUGIN_STATUS |
+-------------+---------------+
| clone | ACTIVE |
+-------------+---------------+
1 row in set (0.00 sec)
root@localhost : (none) 04:45:56> create user clone_user identified with mysql_native_password by 'LXBlxb!1';
Query OK, 0 rows affected (0.01 sec)
root@localhost : (none) 04:47:13> grant replication slave,backup_admin on *.* to clone_user;
Query OK, 0 rows affected (0.00 sec)
root@localhost : (none) 04:47:18> show grants for clone_user;
+----------------------------------------------------+
| Grants for clone_user@%                            |
+----------------------------------------------------+
| GRANT REPLICATION SLAVE ON *.* TO `clone_user`@`%` |
| GRANT BACKUP_ADMIN ON *.* TO `clone_user`@`%`      |
+----------------------------------------------------+
2 rows in set (0.00 sec)
# Use the system variable clone_valid_donor_list to specify the IP and port of the donor instances that are allowed to serve as data sources for remote clone operations
admin@localhost : (none) 04:51:41> SET GLOBAL clone_valid_donor_list = '10.211.55.11:3306';
Query OK, 0 rows affected (0.00 sec)
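The server performs the donor check internally; the sketch below only illustrates the semantics of the comma-separated host:port list (`donor_allowed` is an illustrative function, not the server's implementation):

```python
def donor_allowed(valid_donor_list, host, port):
    """Return True if host:port appears in the comma-separated donor list,
    mimicking the membership check implied by clone_valid_donor_list."""
    donors = {d.strip() for d in valid_donor_list.split(",") if d.strip()}
    return "{}:{}".format(host, port) in donors

# A CLONE INSTANCE against a donor not in the list would be rejected.
print(donor_allowed("10.211.55.11:3306", "10.211.55.11", 3306))
```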
# Load the group replication plugin (it only needs to be loaded here; it does not need to be enabled or configured further)
admin@localhost : (none) 10:11:48> INSTALL PLUGIN group_replication SONAME 'group_replication.so';
Query OK, 0 rows affected (0.01 sec)
# Execute the remote clone operation
admin@localhost : (none) 04:53:03> CLONE INSTANCE FROM 'clone_user'@'10.211.55.11':3306 IDENTIFIED BY 'LXBlxb!1';
Query OK, 0 rows affected (15.04 sec)
# For a remote clone operation, if no local storage path is specified for the cloned data copy, all files under the local instance's data directory are overwritten; after the remote clone finishes, the local instance is automatically restarted on the new data copy
admin@localhost : (none) 04:53:38> show databases;
ERROR 2006 (HY000): MySQL server has gone away # As you can see here, the session that ran the remote clone has been disconnected (the local instance restarted, so all connections to it were dropped)
No connection. Trying to reconnect...
Connection id: 7
Current database: *** NONE ***
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sbtest |
| sbtest2 |
| sbtest3 |
| sbtest4 |
| sbtest5 |
| sys |
+--------------------+
9 rows in set (0.01 sec)
# GTID information on the master
root@localhost : (none) 05:05:07> show master status\G
*************************** 1. row ***************************
File: mysql-bin.000054
Position: 2076
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-454723:1386710-1412416:2398244-2400644,
ee0905d8-a4a5-11ea-aa76-001c42789047:1-187
1 row in set (0.00 sec)
# GTID information on slave 1
root@localhost : (none) 05:05:07> show master status\G
*************************** 1. row ***************************
File: mysql-bin.000054
Position: 2076
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-454723:1386710-1412416:2398244-2400644,
ee0905d8-a4a5-11ea-aa76-001c42789047:1-187
1 row in set (0.00 sec)
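The two Executed_Gtid_Set values above can be compared programmatically. A minimal Python sketch (`parse_gtid_set` is an illustrative helper, not part of MySQL) that parses the set notation and checks the two instances for equality:

```python
def parse_gtid_set(text):
    """Parse an Executed_Gtid_Set value such as
    'uuid:1-10:20-30,\nuuid2:1-5' into {uuid: [(start, end), ...]}."""
    result = {}
    for part in text.replace("\n", "").split(","):
        part = part.strip()
        if not part:
            continue
        uuid, *ranges = part.split(":")
        intervals = []
        for r in ranges:
            lo, _, hi = r.partition("-")
            intervals.append((int(lo), int(hi or lo)))  # single GTID -> (n, n)
        result[uuid] = intervals
    return result

# The master's and slave 1's values from the output above.
master = parse_gtid_set(
    "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-454723:1386710-1412416:2398244-2400644,\n"
    "ee0905d8-a4a5-11ea-aa76-001c42789047:1-187")
slave = parse_gtid_set(
    "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-454723:1386710-1412416:2398244-2400644,\n"
    "ee0905d8-a4a5-11ea-aa76-001c42789047:1-187")
assert master == slave  # identical coordinates: the cloned copy is consistent
```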
# Comparing the binary log file, position, and GTID information between the master and slave 1 shows that the coordinates are identical. This means the clone operation captured, along with the data snapshot, replication coordinates consistent with that data; and because those coordinates are recorded in the instance itself rather than in some separate file, nothing needs to be set in the database afterwards. Configuring replication on slave 1 therefore becomes very simple: a single CHANGE MASTER statement with the master's connection information, as follows
admin@localhost : (none) 05:50:43> change master to master_host='10.211.55.11',master_user='clone_user',master_password='LXBlxb!1',master_auto_position=1 for channel 'ms_replication_clone_test';
# Start the replication threads
admin@localhost : (none) 05:51:43> start slave for channel 'ms_replication_clone_test'\G
Query OK, 0 rows affected (0.06 sec)
# Check the replication status
admin@localhost : (none) 05:52:16> show slave status for channel 'ms_replication_clone_test'\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 10.211.55.11
Master_User: clone_user
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000054
Read_Master_Log_Pos: 2076
Relay_Log_File: mysql-relay-bin-ms_replication_clone_test.000002
Relay_Log_Pos: 402
Relay_Master_Log_File: mysql-bin.000054
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
......
Exec_Master_Log_Pos: 2076
Relay_Log_Space: 629
......
Seconds_Behind_Master: 0
......
Executed_Gtid_Set: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-454723:1386710-1412416:2398244-2400644,
ee0905d8-a4a5-11ea-aa76-001c42789047:1-187
Auto_Position: 1
Replicate_Rewrite_DB:
Channel_Name: ms_replication_clone_test
Master_TLS_Version:
Master_public_key_path:
Get_master_public_key: 0
Network_Namespace:
1 row in set (0.00 sec)
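Output of SHOW SLAVE STATUS\G such as the above can be checked mechanically. A small Python sketch (`parse_vertical` and `replication_healthy` are hypothetical helper names) that parses the vertical format and applies the usual health criteria:

```python
def parse_vertical(output):
    """Parse mysql \\G (vertical) output into a field -> value dict."""
    fields = {}
    for line in output.splitlines():
        if ":" in line and not line.startswith("*"):
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

def replication_healthy(fields):
    """A channel is healthy when both threads run and there is no lag."""
    return (fields.get("Slave_IO_Running") == "Yes"
            and fields.get("Slave_SQL_Running") == "Yes"
            and int(fields.get("Seconds_Behind_Master", "0") or 0) == 0)

sample = """\
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0"""
assert replication_healthy(parse_vertical(sample))
```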
# Note: we assume here that nothing is being written to the master, in order to demonstrate that the data snapshot taken by the clone plugin is consistent with the snapshot coordinates it records. It is precisely this property that lets the master keep accepting writes while a clone operation runs, i.e. the master stays online and business reads/writes against the database are not blocked
# Write test data on the master
root@localhost : (none) 06:35:35> create database sbtest6;
Query OK, 1 row affected (0.00 sec)
# Check on slave 1 whether the data has been replicated
admin@localhost : (none) 06:25:04> show databases;
+--------------------+
| Database |
+--------------------+
......
| sbtest6 |
| sys |
+--------------------+
10 rows in set (0.00 sec)
# To configure slave 2 with a full data snapshot taken via the clone plugin from slave 1, only the following statements are needed; no additional configuration steps are required (because the replication configuration already exists on slave 1)
admin@localhost : (none) 04:51:41> SET GLOBAL clone_valid_donor_list = '10.211.55.12:3306';
Query OK, 0 rows affected (0.00 sec)
admin@localhost : (none) 10:11:48> INSTALL PLUGIN group_replication SONAME 'group_replication.so';
Query OK, 0 rows affected (0.01 sec)
admin@localhost : (none) 10:15:10> CLONE INSTANCE FROM 'clone_user'@'10.211.55.12':3306 IDENTIFIED BY 'LXBlxb!1';
Query OK, 0 rows affected (10.83 sec)
# After the clone statement completes, query the replication configuration directly with the following statement: slave 2's replication configuration is already in place and its replication threads have started normally, which shows that slave 2 has successfully joined the master-slave replication topology
admin@localhost : (none) 10:15:54> show slave status\G
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 25
Current database: *** NONE ***
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 10.211.55.11
Master_User: clone_user
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000055
Read_Master_Log_Pos: 232
Relay_Log_File: mysql-relay-bin-ms_replication_clone_test.000002
Relay_Log_Pos: 367
Relay_Master_Log_File: mysql-bin.000055
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
......
Exec_Master_Log_Pos: 232
Relay_Log_Space: 602
......
Seconds_Behind_Master: 0
......
Executed_Gtid_Set: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-454723:1386710-1412416:2398244-2400644,
ee0905d8-a4a5-11ea-aa76-001c42789047:1-188
Auto_Position: 1
Replicate_Rewrite_DB:
Channel_Name: ms_replication_clone_test
......
1 row in set (0.00 sec)
# Master
root@localhost : (none) 05:05:10> select * from performance_schema.log_status\G
*************************** 1. row ***************************
SERVER_UUID: ee0905d8-a4a5-11ea-aa76-001c42789047
LOCAL: {"gtid_executed": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-454723:1386710-1412416:2398244-2400644,\nee0905d8-a4a5-11ea-aa76-001c42789047:1-187", "binary_log_file": "mysql-bin.000054", "binary_log_position": 2076}
REPLICATION: {"channels": [{"channel_name": "group_replication_applier", "relay_log_file": "mysql-relay-bin-group_replication_applier.000023", "relay_log_position": 152}, {"channel_name": "group_replication_recovery", "relay_log_file": "mysql-relay-bin-group_replication_recovery.000006", "relay_log_position": 152}]}
STORAGE_ENGINES: {"InnoDB": {"LSN": 3915886924, "LSN_checkpoint": 3915886924}}
1 row in set (0.01 sec)
# Slave 1
admin@localhost : (none) 05:33:13> select * from performance_schema.log_status\G
*************************** 1. row ***************************
SERVER_UUID: fdb2b4cd-a4a5-11ea-916d-001c42c65c10
LOCAL: {"gtid_executed": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-454723:1386710-1412416:2398244-2400644,\nee0905d8-a4a5-11ea-aa76-001c42789047:1-187", "binary_log_file": "mysql-bin.000002", "binary_log_position": 152}
REPLICATION: {"channels": [{"channel_name": "group_replication_applier", "relay_log_file": "mysql-relay-bin-group_replication_applier.000012", "relay_log_position": 152}, {"channel_name": "group_replication_recovery", "relay_log_file": "mysql-relay-bin-group_replication_recovery.000006", "relay_log_position": 152}]}
STORAGE_ENGINES: {"InnoDB": {"LSN": 3915933877, "LSN_checkpoint": 3915933877}}
1 row in set (0.00 sec)
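The LOCAL column of performance_schema.log_status is a JSON document, so the coordinates it carries can be extracted directly. A sketch (`local_coordinates` is an illustrative helper; the sample JSON below is shortened from the master's output above):

```python
import json

def local_coordinates(local_json):
    """Extract binlog coordinates and the GTID set from the LOCAL column
    of performance_schema.log_status (a JSON document)."""
    doc = json.loads(local_json)
    return (doc["binary_log_file"],
            doc["binary_log_position"],
            doc["gtid_executed"].replace("\n", ""))

local = ('{"gtid_executed": "ee0905d8-a4a5-11ea-aa76-001c42789047:1-187", '
         '"binary_log_file": "mysql-bin.000054", "binary_log_position": 2076}')
print(local_coordinates(local))
```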
# Node 1
[root@localhost ~]# cat /etc/my.cnf
[mysqld]
......
super_read_only=1
server_id=330611
sync_binlog=10000
innodb_flush_log_at_trx_commit = 2
binlog-checksum=NONE
binlog_row_image=full
log_slave_updates=ON
binlog_format=ROW
master_info_repository=TABLE
relay_log_info_repository=TABLE
gtid_mode=ON
enforce_gtid_consistency=ON
transaction-write-set-extraction=XXHASH64
auto_increment_increment=3
auto_increment_offset=1
plugin_load_add='group_replication.so;mysql_clone.so'
loose-group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
loose-group_replication_single_primary_mode=OFF
loose-group_replication_enforce_update_everywhere_checks=ON
loose-group_replication_start_on_boot=OFF
loose-group_replication_ip_whitelist='0.0.0.0/0'
loose-group_replication_local_address='10.211.55.11:24901'
loose-group_replication_group_seeds='10.211.55.11:24901,10.211.55.12:24901,10.211.55.13:24901'
loose-group_replication_bootstrap_group=OFF
report_host='10.211.55.11'
# Node 2
[root@localhost ~]# cat /etc/my.cnf
[mysqld]
......
super_read_only=1
server_id=330612
sync_binlog=10000
innodb_flush_log_at_trx_commit = 2
binlog-checksum=NONE
binlog_row_image=full
log_slave_updates=ON
binlog_format=ROW
master_info_repository=TABLE
relay_log_info_repository=TABLE
gtid_mode=ON
enforce_gtid_consistency=ON
transaction-write-set-extraction=XXHASH64
auto_increment_increment=3
auto_increment_offset=2
plugin_load_add='group_replication.so;mysql_clone.so'
loose-group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
loose-group_replication_single_primary_mode=OFF
loose-group_replication_enforce_update_everywhere_checks=ON
loose-group_replication_start_on_boot=OFF
loose-group_replication_ip_whitelist='0.0.0.0/0'
loose-group_replication_local_address='10.211.55.12:24901'
loose-group_replication_group_seeds='10.211.55.11:24901,10.211.55.12:24901,10.211.55.13:24901'
loose-group_replication_bootstrap_group=OFF
report_host='10.211.55.12'
# Node 3
[root@localhost ~]# cat /etc/my.cnf
[mysqld]
......
super_read_only=1
server_id=330613
sync_binlog=10000
innodb_flush_log_at_trx_commit = 2
binlog-checksum=NONE
binlog_row_image=full
log_slave_updates=ON
binlog_format=ROW
master_info_repository=TABLE
relay_log_info_repository=TABLE
gtid_mode=ON
enforce_gtid_consistency=ON
transaction-write-set-extraction=XXHASH64
auto_increment_increment=3
auto_increment_offset=3
plugin_load_add='group_replication.so;mysql_clone.so'
loose-group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
loose-group_replication_single_primary_mode=OFF
loose-group_replication_enforce_update_everywhere_checks=ON
loose-group_replication_start_on_boot=OFF
loose-group_replication_ip_whitelist='0.0.0.0/0'
loose-group_replication_local_address='10.211.55.13:24901'
loose-group_replication_group_seeds='10.211.55.11:24901,10.211.55.12:24901,10.211.55.13:24901'
loose-group_replication_bootstrap_group=OFF
report_host='10.211.55.13'
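The three my.cnf fragments differ only in a handful of node-specific values. As a sketch, those lines can be generated from a template (`render_node_config` and the NODES list are illustrative, assuming the three IPs used in this walkthrough; the shared options are omitted):

```python
NODES = ["10.211.55.11", "10.211.55.12", "10.211.55.13"]
SEEDS = ",".join("{}:24901".format(ip) for ip in NODES)

def render_node_config(index):
    """Render the node-specific my.cnf lines for node `index` (1-based)."""
    ip = NODES[index - 1]
    return "\n".join([
        "server_id=33061{}".format(index),
        "auto_increment_offset={}".format(index),  # one distinct offset per node
        "loose-group_replication_local_address='{}:24901'".format(ip),
        "loose-group_replication_group_seeds='{}'".format(SEEDS),
        "report_host='{}'".format(ip),
    ])

print(render_node_config(1))
```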
# After setting the options above in the configuration files, restart MySQL Server on the three nodes one by one
[root@localhost ~]# service mysqld restart
Shutting down MySQL.. SUCCESS!
Starting MySQL... SUCCESS!
# First check the status of the group replication plugin and the clone plugin; the output below shows that both plugins are currently ACTIVE on node 1
root@localhost : (none) 11:12:55> SELECT PLUGIN_NAME, PLUGIN_STATUS FROM INFORMATION_SCHEMA.PLUGINS WHERE PLUGIN_NAME in ('clone','group_replication');
+-------------------+---------------+
| PLUGIN_NAME | PLUGIN_STATUS |
+-------------------+---------------+
| group_replication | ACTIVE |
| clone | ACTIVE |
+-------------------+---------------+
2 rows in set (0.00 sec)
# Create the replication user and grant it the required privileges
root@localhost : (none) 11:08:11> set global read_only=0;set global super_read_only=0;
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
root@localhost : (none) 11:08:11> set sql_log_bin=0;
Query OK, 0 rows affected (0.00 sec)
root@localhost : (none) 11:08:55> CREATE USER group_repl@'%' IDENTIFIED WITH mysql_native_password BY 'password';
Query OK, 0 rows affected (0.00 sec)
root@localhost : (none) 11:09:50> GRANT REPLICATION SLAVE,BACKUP_ADMIN ON *.* TO group_repl@'%' ;
Query OK, 0 rows affected (0.00 sec)
root@localhost : (none) 11:10:23> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
root@localhost : (none) 11:08:11> set sql_log_bin=1;
Query OK, 0 rows affected (0.00 sec)
# Configure group replication and bootstrap the group
root@localhost : (none) 11:11:27> set global group_replication_bootstrap_group=ON;
Query OK, 0 rows affected (0.00 sec)
root@localhost : (none) 11:11:35> set global group_replication_start_on_boot=ON;
Query OK, 0 rows affected (0.00 sec)
root@localhost : (none) 11:11:40> CHANGE MASTER TO MASTER_USER='group_repl', MASTER_PASSWORD='password' FOR CHANNEL 'group_replication_recovery';
Query OK, 0 rows affected, 2 warnings (0.01 sec)
root@localhost : (none) 11:11:50> start group_replication;
Query OK, 0 rows affected (3.18 sec)
root@localhost : (none) 11:12:43> set global group_replication_bootstrap_group=OFF;
Query OK, 0 rows affected (0.00 sec)
# After the group is bootstrapped successfully, member status can be checked via the performance_schema.replication_group_members table; MEMBER_STATE being ONLINE means node 1 has successfully joined the replication group
root@localhost : (none) 11:12:49> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| group_replication_applier | dfb1c0c1-d07e-11ea-98e7-001c42789047 | 10.211.55.11 | 3306 | ONLINE | PRIMARY | 8.0.20 |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
1 row in set (0.01 sec)
# Populate test data with sysbench
root@localhost : (none) 11:14:55> create database sbtest;
Query OK, 1 row affected (0.01 sec)
[root@localhost ~]# sysbench --db-driver=mysql --time=180 --report-interval=1 --mysql-host=10.211.55.11 --mysql-port=3306 --mysql-user=qbench --mysql-password=qbench --mysql-db=sbtest --tables=8 --table-size=50000 --db-ps-mode=disable oltp_read_write prepare --threads=8
......
# Clean up the binary logs
root@localhost : (none) 11:32:19> flush binary logs;
Query OK, 0 rows affected (0.00 sec)
root@localhost : (none) 11:32:33> show binary logs;
+------------------+-----------+-----------+
| Log_name | File_size | Encrypted |
+------------------+-----------+-----------+
| mysql-bin.000001 | 179 | No |
| mysql-bin.000002 | 179 | No |
| mysql-bin.000003 | 171 | No |
| mysql-bin.000004 | 153942756 | No |
| mysql-bin.000005 | 232 | No |
+------------------+-----------+-----------+
5 rows in set (0.00 sec)
root@localhost : (none) 11:32:34> purge binary logs to 'mysql-bin.000005';
Query OK, 0 rows affected (0.02 sec)
root@localhost : (none) 11:32:44> show binary logs;
+------------------+-----------+-----------+
| Log_name | File_size | Encrypted |
+------------------+-----------+-----------+
| mysql-bin.000005 | 232 | No |
+------------------+-----------+-----------+
1 row in set (0.00 sec)
# Check the status of the group replication and clone plugins (omitted)
# Create the replication account (binary logging is disabled at the session level here; we deliberately make node 2's data inconsistent with node 1's. When the two nodes' data differ, a newly joining node cannot join the group replication topology via binary-log-based state transfer and can only join via clone-based state transfer, i.e. the joining node's data is completely overwritten by the cloned data copy so that the two nodes end up in a consistent state)
root@localhost : (none) 11:18:58> set global read_only=0;set global super_read_only=0;
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
# Operations on node 2 must not be written to the binary log; otherwise node 2 would later contain data unknown to the replication group and be unable to join the group bootstrapped by node 1 (this is a data-safety protection: without it, if node 2's data were valuable, overwriting it would mean data loss)
root@localhost : (none) 11:19:00> set sql_log_bin=0;
Query OK, 0 rows affected (0.00 sec)
root@localhost : (none) 11:19:00> CREATE USER group_repl@'%' IDENTIFIED WITH mysql_native_password BY 'password';
Query OK, 0 rows affected (0.00 sec)
root@localhost : (none) 11:19:08> GRANT REPLICATION SLAVE,BACKUP_ADMIN ON *.* TO group_repl@'%' ;
Query OK, 0 rows affected (0.00 sec)
root@localhost : (none) 11:19:12> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
root@localhost : (none) 11:19:00> set sql_log_bin=1;
Query OK, 0 rows affected (0.00 sec)
root@localhost : (none) 11:36:21> start group_replication;
Query OK, 0 rows affected (3.94 sec) # As shown here, starting node 2's group replication threads directly succeeded
# After group replication starts successfully on node 2, member status can be checked via the performance_schema.replication_group_members table; the output below shows that node 2 has also successfully joined the group replication topology
root@localhost : (none) 11:41:46> SELECT * FROM performance_schema.replication_group_members;
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 53
Current database: *** NONE ***
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| group_replication_applier | cf7dd27b-d07e-11ea-b86f-001c42c65c10 | 10.211.55.12 | 3306 | ONLINE | PRIMARY | 8.0.20 |
| group_replication_applier | dfb1c0c1-d07e-11ea-98e7-001c42789047 | 10.211.55.11 | 3306 | ONLINE | PRIMARY | 8.0.20 |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
2 rows in set (0.01 sec)
# The detailed steps are omitted; once the operations on node 3 are complete, member status can be checked via the performance_schema.replication_group_members table, and the output below shows that node 3 has also successfully joined the group replication topology
root@localhost : (none) 01:49:34> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 02497ad9-d094-11ea-b80b-001c421ee27e | 10.211.55.13 | 3306 | ONLINE | PRIMARY | 8.0.20 |
| group_replication_applier | aab4673f-d094-11ea-b441-001c42c65c10 | 10.211.55.12 | 3306 | ONLINE | PRIMARY | 8.0.20 |
| group_replication_applier | dfb1c0c1-d07e-11ea-98e7-001c42789047 | 10.211.55.11 | 3306 | ONLINE | PRIMARY | 8.0.20 |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)
# Stop the group replication threads
root@localhost : (none) 01:44:57> stop group_replication;
Query OK, 0 rows affected (1.01 sec)
# Specify the IP and port of the donor instance
root@localhost : performance_schema 01:07:26> SET GLOBAL clone_valid_donor_list = '10.211.55.11:3306';
Query OK, 0 rows affected (0.00 sec)
# Execute the remote clone
root@localhost : performance_schema 01:12:40> CLONE INSTANCE FROM 'group_repl'@'10.211.55.11':3306 IDENTIFIED BY 'password';
Query OK, 0 rows affected (2.48 sec)
# After the remote clone completes, query the member list directly via the performance_schema.replication_group_members table; the output below shows that node 3 has successfully joined the group replication topology
root@localhost : (none) 01:49:34> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 02497ad9-d094-11ea-b80b-001c421ee27e | 10.211.55.13 | 3306 | ONLINE | PRIMARY | 8.0.20 |
| group_replication_applier | aab4673f-d094-11ea-b441-001c42c65c10 | 10.211.55.12 | 3306 | ONLINE | PRIMARY | 8.0.20 |
| group_replication_applier | dfb1c0c1-d07e-11ea-98e7-001c42789047 | 10.211.55.11 | 3306 | ONLINE | PRIMARY | 8.0.20 |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)
root@localhost : (none) 03:35:47> create user group_repl_ssl identified with caching_sha2_password by 'password';
Query OK, 0 rows affected (0.01 sec)
root@localhost : (none) 03:36:19> grant replication slave,backup_admin on *.* to group_repl_ssl;
Query OK, 0 rows affected (0.01 sec)
root@localhost : performance_schema 01:07:26> SET GLOBAL clone_valid_donor_list = '10.211.55.11:3306';
Query OK, 0 rows affected (0.00 sec)
root@localhost : performance_schema 01:12:40> CLONE INSTANCE FROM 'group_repl_ssl'@'10.211.55.11':3306 IDENTIFIED BY 'password' REQUIRE SSL;
Query OK, 0 rows affected (2.48 sec)
root@localhost : (none) 03:51:14> SELECT * FROM performance_schema.replication_group_members;
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 7
Current database: *** NONE *** # After the remote clone completed, MySQL Server was restarted, and because group_replication_start_on_boot is disabled in the configuration file, the group replication threads do not start together with MySQL Server
+---------------------------+-----------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+-----------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | | | NULL | OFFLINE | | |
+---------------------------+-----------+-------------+-------------+--------------+-------------+----------------+
1 row in set (0.01 sec)
# Manually start the group replication threads
root@localhost : (none) 03:51:45> start group_replication;
Query OK, 0 rows affected (3.79 sec)
# Check member status via performance_schema.replication_group_members; the output below shows that node 2 has successfully joined the group replication topology
root@localhost : (none) 03:52:05> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| group_replication_applier | dfb1c0c1-d07e-11ea-98e7-001c42789047 | 10.211.55.11 | 3306 | ONLINE | PRIMARY | 8.0.20 |
| group_replication_applier | ffea7eb6-d0a6-11ea-a9c6-001c42c65c10 | 10.211.55.12 | 3306 | ONLINE | PRIMARY | 8.0.20 |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
2 rows in set (0.00 sec)
root@localhost : (none) 03:52:58> select * from performance_schema.clone_status\G
*************************** 1. row ***************************
ID: 1
PID: 0
STATE: Completed
BEGIN_TIME: 2020-07-28 15:51:11.885
END_TIME: 2020-07-28 15:51:19.099
SOURCE: 10.211.55.11:3306
DESTINATION: LOCAL INSTANCE
ERROR_NO: 0
ERROR_MESSAGE:
BINLOG_FILE: mysql-bin.000005
BINLOG_POSITION: 10833
GTID_EXECUTED: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-199,
dfb1c0c1-d07e-11ea-98e7-001c42789047:1-4
1 row in set (0.00 sec)
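BEGIN_TIME and END_TIME in performance_schema.clone_status make it straightforward to compute how long a clone took; a small sketch (`clone_duration_seconds` is an illustrative helper, assuming the timestamp format shown above):

```python
from datetime import datetime

def clone_duration_seconds(begin, end, fmt="%Y-%m-%d %H:%M:%S.%f"):
    """Duration of a clone from performance_schema.clone_status timestamps."""
    return (datetime.strptime(end, fmt)
            - datetime.strptime(begin, fmt)).total_seconds()

# The timestamps from the clone_status output above.
print(clone_duration_seconds("2020-07-28 15:51:11.885", "2020-07-28 15:51:19.099"))
```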
# Create the replication user
root@localhost : (none) 04:22:58> set sql_log_bin=0;
Query OK, 0 rows affected (0.00 sec)
root@localhost : (none) 04:23:16> set global super_read_only=0;
Query OK, 0 rows affected (0.00 sec)
# The group_repl_ssl user used for the dedicated group replication channel may be created with or without the REQUIRE SSL clause; to require SSL, simply append REQUIRE SSL at the end of the CREATE USER statement
root@localhost : (none) 03:35:47> create user group_repl_ssl identified with caching_sha2_password by 'password';
Query OK, 0 rows affected (0.01 sec)
root@localhost : (none) 03:36:19> grant replication slave,backup_admin on *.* to group_repl_ssl;
Query OK, 0 rows affected (0.01 sec)
# Use the CHANGE MASTER statement with the group_repl_ssl user to configure group replication
root@localhost : (none) 04:25:52> CHANGE MASTER TO MASTER_USER='group_repl_ssl', MASTER_PASSWORD='password' FOR CHANNEL 'group_replication_recovery';
Query OK, 0 rows affected, 2 warnings (0.01 sec)
root@localhost : (none) 04:25:55> start group_replication;
Query OK, 0 rows affected (3.78 sec)
root@localhost : (none) 04:26:27> SELECT * FROM performance_schema.replication_group_members;
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 7
Current database: *** NONE *** # After the remote data clone, node 3 automatically restarted MySQL Server, which is why the connection is shown as dropped here
+---------------------------+-----------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+-----------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | | | NULL | OFFLINE | | |
+---------------------------+-----------+-------------+-------------+--------------+-------------+----------------+
1 row in set (0.01 sec)
# Manually start the group replication threads
root@localhost : (none) 04:26:41> start group_replication;
Query OK, 0 rows affected (3.54 sec)
# Check member status via performance_schema.replication_group_members; the output below shows that node 3 has successfully joined the group replication topology
root@localhost : (none) 04:26:58> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 703f5576-d0ab-11ea-bd41-001c421ee27e | 10.211.55.13 | 3306 | ONLINE | PRIMARY | 8.0.20 |
| group_replication_applier | dfb1c0c1-d07e-11ea-98e7-001c42789047 | 10.211.55.11 | 3306 | ONLINE | PRIMARY | 8.0.20 |
| group_replication_applier | ffea7eb6-d0a6-11ea-a9c6-001c42c65c10 | 10.211.55.12 | 3306 | ONLINE | PRIMARY | 8.0.20 |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)
2020-07-28T16:26:24.076123+08:00 9 [System] [MY-010597] [Repl] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
# The warning below indicates that node 3 found no surviving (ONLINE) group member able to provide the missing data via binary-log-based state transfer for recovery, so it will use clone-based state transfer
2020-07-28T16:26:27.850181+08:00 0 [Warning] [MY-013470] [Repl] Plugin group_replication reported: 'As no ONLINE member has the missing data for recovering in its binary logs, this member will start distributed recovery using clone.'
# Execute the remote clone
2020-07-28T16:26:28.907533+08:00 34 [Warning] [MY-013460] [InnoDB] Clone removing all user data for provisioning: Started
2020-07-28T16:26:28.990207+08:00 34 [Warning] [MY-013460] [InnoDB] Clone removing all user data for provisioning: Finished
2020-07-28T16:26:32.975660+08:00 0 [Warning] [MY-010909] [Server] /usr/local/mysql/bin/mysqld: Forcing close of thread 7 user: 'root'.
# After the remote clone completes, MySQL Server is shut down automatically
2020-07-28T16:26:37.144956+08:00 0 [System] [MY-010910] [Server] /usr/local/mysql/bin/mysqld: Shutdown complete (mysqld 8.0.20) MySQL Community Server - GPL.
2020-07-28T16:26:37.613107+08:00 0 [Warning] [MY-011068] [Server] The syntax 'expire-logs-days' is deprecated and will be removed in a future release. Please use binlog_expire_logs_seconds instead.
# MySQL Server starts automatically
2020-07-28T16:26:37.613416+08:00 0 [System] [MY-010116] [Server] /usr/local/mysql/bin/mysqld (mysqld 8.0.20) starting as process 28172
2020-07-28T16:26:37.624573+08:00 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2020-07-28T16:26:38.689777+08:00 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2020-07-28T16:26:38.909297+08:00 0 [System] [MY-011323] [Server] X Plugin ready for connections. Socket: '/home/mysql/data/mysqldata1/sock/mysqlx.sock' bind-address: '::' port: 33060
2020-07-28T16:26:38.949965+08:00 0 [ERROR] [MY-011947] [InnoDB] Cannot open '/data/mysqldata1/innodb_ts/ib_buffer_pool' for reading: No such file or directory
2020-07-28T16:26:38.976043+08:00 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2020-07-28T16:26:39.014293+08:00 0 [System] [MY-010931] [Server] /usr/local/mysql/bin/mysqld: ready for connections. Version: '8.0.20' socket: '/home/mysql/data/mysqldata1/sock/mysql.sock' port: 3306 MySQL Community Server - GPL.
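Error-log excerpts like those above follow MySQL 8.0's structured line layout (timestamp, thread id, level, MY- error code, subsystem, message). A sketch of a parser for such lines (the regex is a best-effort match for the lines shown here, not an official format specification):

```python
import re

LOG_RE = re.compile(
    r"^(?P<ts>\S+)\s+(?P<thread>\d+)\s+\[(?P<level>\w+)\]\s+"
    r"\[(?P<errcode>MY-\d+)\]\s+\[(?P<subsys>\w+)\]\s+(?P<msg>.*)$")

def parse_error_log_line(line):
    """Split one MySQL 8.0 error-log line into its components, or None."""
    m = LOG_RE.match(line)
    return m.groupdict() if m else None

# The distributed-recovery warning from the log excerpt above.
rec = parse_error_log_line(
    "2020-07-28T16:26:27.850181+08:00 0 [Warning] [MY-013470] [Repl] "
    "Plugin group_replication reported: 'As no ONLINE member has the missing "
    "data for recovering in its binary logs, this member will start "
    "distributed recovery using clone.'")
assert rec["errcode"] == "MY-013470" and rec["subsys"] == "Repl"
```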
| About the author
Familiar with the MySQL architecture and skilled in holistic database tuning; fond of digging into open-source technologies and keen on promoting them; has given many public database-focused talks online and offline and published nearly 100 research articles on databases.
End of article.