In yesterday's article, "Redis Master-Slave Replication", we deployed Redis master-slave replication. A one-master-two-slave architecture protects against data loss and allows read/write separation, but it cannot keep the application available once the master goes down. Built on top of master-slave replication, Redis provides Sentinel mode to monitor the state of every node: when the master goes down unexpectedly, the sentinels run a leader election and promote one of the slaves to be the new master.
◆
How to Configure Redis Sentinel
◆
First we need a simple one-master-two-slave Redis deployment; see the previous article for how to set one up:
[root@syj ~]# ps -ef | grep redis
root 17013     1  0 22:55 ?        00:00:00 redis-server 127.0.0.1:6379
root 17256     1  0 22:57 ?        00:00:00 redis-server 127.0.0.1:6380
root 17291     1  0 22:57 ?        00:00:00 redis-server 127.0.0.1:6381
Next comes the sentinel configuration.
The Redis installation directory ships with a configuration file named sentinel.conf; to start a sentinel you only need to change a few settings.
First, modify the following lines:
sentinel monitor syj-master 127.0.0.1 6379 2
sentinel down-after-milliseconds syj-master 30000
sentinel parallel-syncs syj-master 1
sentinel failover-timeout syj-master 180000
The meanings of these parameters:
- sentinel monitor syj-master 127.0.0.1 6379 2: monitor the master at 127.0.0.1:6379 under the name syj-master; at least 2 sentinels (the quorum) must agree the master is unreachable before a failover can be triggered.
- down-after-milliseconds: a node is marked subjectively down if it has not responded for 30000 ms (30 seconds).
- parallel-syncs: during a failover, at most 1 slave at a time is reconfigured to sync from the new master, so the others can keep serving reads.
- failover-timeout: a failover attempt is considered failed if it does not complete within 180000 ms (3 minutes).
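Putting these together, a minimal sentinel.conf for the first sentinel might look like the sketch below. Only the four sentinel directives come from the steps above; the port, daemonize, and logfile lines are assumptions shown for completeness (the stock sentinel.conf already contains defaults for them):

```conf
# Minimal sentinel configuration sketch. The sentinel.conf shipped with
# Redis contains many more commented options; these are the essentials.
port 26379
daemonize no
logfile ""

sentinel monitor syj-master 127.0.0.1 6379 2
sentinel down-after-milliseconds syj-master 30000
sentinel parallel-syncs syj-master 1
sentinel failover-timeout syj-master 180000
```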
After making the changes above, create two copies of the file, named sentinel-2.conf and sentinel-3.conf, and change their ports to 26380 and 26381 respectively.
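The copy-and-edit step can be scripted. The sketch below assumes the files live in the current directory and that the port line is exactly `port 26379`; the heredoc creates a stand-in base file so the snippet is self-contained:

```shell
# Stand-in base config so this sketch runs on its own; in practice you
# would start from the sentinel.conf shipped with Redis.
cat > sentinel.conf <<'EOF'
port 26379
sentinel monitor syj-master 127.0.0.1 6379 2
sentinel down-after-milliseconds syj-master 30000
sentinel parallel-syncs syj-master 1
sentinel failover-timeout syj-master 180000
EOF

# Copy the base file, rewriting only the listening port.
sed 's/^port 26379$/port 26380/' sentinel.conf > sentinel-2.conf
sed 's/^port 26379$/port 26381/' sentinel.conf > sentinel-3.conf

# Quick check that each copy listens on its own port.
grep '^port' sentinel-2.conf sentinel-3.conf
```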
Start the three sentinels with the following commands:
[root@syj ~]# redis-sentinel sentinel.conf
[root@syj ~]# redis-sentinel sentinel-2.conf
[root@syj ~]# redis-sentinel sentinel-3.conf
The Redis architecture now consists of three data nodes and three sentinel nodes:
[root@syj ~]# ps -ef | grep redis
root 3232     1  0 21:39 ?        00:00:02 redis-server 127.0.0.1:6379
root 3455     1  0 21:41 ?        00:00:02 redis-server 127.0.0.1:6380
root 3507     1  0 21:41 ?        00:00:02 redis-server 127.0.0.1:6381
root 6568     1  0 22:06 ?        00:00:01 redis-sentinel *:26379 [sentinel]
root 6599     1  0 22:06 ?        00:00:01 redis-sentinel *:26380 [sentinel]
root 6655     1  0 22:06 ?        00:00:00 redis-sentinel *:26381 [sentinel]
Check the status of one of the sentinel nodes:
[root@syj ~]# redis-cli -h 127.0.0.1 -p 26379 info Sentinel
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name=syj-master,status=ok,address=127.0.0.1:6379,slaves=2,sentinels=3
Now kill the master on port 6379 and watch the log of one of the sentinels. Within roughly 30 seconds (the down-after-milliseconds window) the sentinel detects that the master is gone and promotes 6381 to be the new master:
22288:X 22 Apr 2019 23:37:40.350 # +monitor master syj-master 127.0.0.1 6379 quorum 2
22288:X 22 Apr 2019 23:38:43.571 # +sdown master syj-master 127.0.0.1 6379
22288:X 22 Apr 2019 23:38:43.626 # +odown master syj-master 127.0.0.1 6379 #quorum 2/2
22288:X 22 Apr 2019 23:38:43.626 # +new-epoch 1
22288:X 22 Apr 2019 23:38:43.626 # +try-failover master syj-master 127.0.0.1 6379
22288:X 22 Apr 2019 23:38:43.636 # +vote-for-leader e357bbb19db65aec60225115e5ea82ad2cfc2be5 1
22288:X 22 Apr 2019 23:38:43.652 # 838a497bd5a1883242a83eab14cd0b08bc7881e2 voted for e357bbb19db65aec60225115e5ea82ad2cfc2be5 1
22288:X 22 Apr 2019 23:38:43.652 # 3cc3b18e620302604c04b1b90fc10f7f3fe0d661 voted for e357bbb19db65aec60225115e5ea82ad2cfc2be5 1
22288:X 22 Apr 2019 23:38:43.688 # +elected-leader master syj-master 127.0.0.1 6379
22288:X 22 Apr 2019 23:38:43.688 # +failover-state-select-slave master syj-master 127.0.0.1 6379
22288:X 22 Apr 2019 23:38:43.765 # +selected-slave slave 127.0.0.1:6381 127.0.0.1 6381 @ syj-master 127.0.0.1 6379
22288:X 22 Apr 2019 23:38:43.765 * +failover-state-send-slaveof-noone slave 127.0.0.1:6381 127.0.0.1 6381 @ syj-master 127.0.0.1 6379
22288:X 22 Apr 2019 23:38:43.837 * +failover-state-wait-promotion slave 127.0.0.1:6381 127.0.0.1 6381 @ syj-master 127.0.0.1 6379
22288:X 22 Apr 2019 23:38:44.361 # +promoted-slave slave 127.0.0.1:6381 127.0.0.1 6381 @ syj-master 127.0.0.1 6379
22288:X 22 Apr 2019 23:38:44.362 # +failover-state-reconf-slaves master syj-master 127.0.0.1 6379
22288:X 22 Apr 2019 23:38:44.430 * +slave-reconf-sent slave 127.0.0.1:6380 127.0.0.1 6380 @ syj-master 127.0.0.1 6379
22288:X 22 Apr 2019 23:38:44.734 # -odown master syj-master 127.0.0.1 6379
22288:X 22 Apr 2019 23:38:45.396 * +slave-reconf-inprog slave 127.0.0.1:6380 127.0.0.1 6380 @ syj-master 127.0.0.1 6379
22288:X 22 Apr 2019 23:38:45.396 * +slave-reconf-done slave 127.0.0.1:6380 127.0.0.1 6380 @ syj-master 127.0.0.1 6379
22288:X 22 Apr 2019 23:38:45.478 # +failover-end master syj-master 127.0.0.1 6379
22288:X 22 Apr 2019 23:38:45.478 # +switch-master syj-master 127.0.0.1 6379 127.0.0.1 6381
22288:X 22 Apr 2019 23:38:45.478 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ syj-master 127.0.0.1 6381
22288:X 22 Apr 2019 23:38:45.478 * +slave slave 127.0.0.1:6379 127.0.0.1 6379 @ syj-master 127.0.0.1 6381
22288:X 22 Apr 2019 23:39:15.520 # +sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ syj-master 127.0.0.1 6381
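Of all the events above, +switch-master is the one clients (or a Sentinel notification script) care about, since it carries the old and new master addresses. A small sketch of pulling the new master address out of such a line, assuming the exact log format shown above (parse_switch_master is a hypothetical helper, not part of Redis):

```shell
# Extract "new-ip:new-port" from a Sentinel +switch-master log line.
# Format: ... +switch-master <master-name> <old-ip> <old-port> <new-ip> <new-port>
parse_switch_master() {
  echo "$1" | awk '{ for (i = 1; i <= NF; i++)
                       if ($i == "+switch-master") { print $(i+4) ":" $(i+5); exit } }'
}

line='22288:X 22 Apr 2019 23:38:45.478 # +switch-master syj-master 127.0.0.1 6379 127.0.0.1 6381'
parse_switch_master "$line"   # prints 127.0.0.1:6381
```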