LVS + Keepalived + Nginx + DRBD + Zabbix Cluster Architecture

1. Preparation

1.1 Six test servers:

Hostname    IP Address        Role
zhdy01      192.168.96.129    Master LVS + Keepalived
zhdy02      192.168.96.130    Slave LVS + Keepalived
(shared)    192.168.96.200    VIP for LVS + Keepalived
zhdy03      192.168.96.131    Nginx server1
zhdy04      192.168.96.132    Nginx server2
zhdy05      192.168.96.133    Master (MySQL + DRBD)
zhdy06      192.168.96.134    Slave (MySQL + DRBD)

Make sure firewalld and SELinux are stopped on every machine:

# systemctl stop firewalld

# systemctl disable firewalld

# iptables -F

# setenforce 0
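Note that `setenforce 0` only disables SELinux until the next reboot. To keep it off permanently, a minimal sketch (assuming the stock CentOS 7 config path):

```shell
# Keep SELinux disabled across reboots (CentOS 7 default config path assumed)
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
grep '^SELINUX=' /etc/selinux/config   # confirm the change
```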

2. Both real servers need this script:

vim /usr/local/sbin/lvs_rs.sh

#!/bin/bash
vip=192.168.96.200
# Bind the VIP to lo so the real server can return responses directly to the client
ifdown lo
ifup lo
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip lo:0
# Tune the ARP kernel parameters so the real server does not answer ARP requests
# for the VIP (only the director may), while still advertising its own MAC correctly
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce

Run the script on each of the two real servers:

# sh /usr/local/sbin/lvs_rs.sh
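After running the script, it is worth confirming the kernel actually holds the new ARP values; a quick check (not part of the original write-up):

```shell
# On each real server, both values should match what lvs_rs.sh wrote:
# arp_ignore = 1, arp_announce = 2
sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce
```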

Check the routing table on each real server:

# route -n

Confirm the VIP is now bound to the lo interface:

# ip addr

3. Install Keepalived

zhdy01:

[root@zhdy01 ~]# yum install -y keepalived

[root@zhdy01 ~]# vim /etc/keepalived/keepalived.conf

vrrp_instance VI_1 {
    # on the backup server this is BACKUP
    state MASTER
    # NIC that carries the VIP; change ens33 if your interface is named differently
    interface ens33
    virtual_router_id 51
    # on the backup server this is 90
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass zhangduanya
    }
    virtual_ipaddress {
        192.168.96.200
    }
}
virtual_server 192.168.96.200 80 {
    # poll real-server state every 10 seconds
    delay_loop 10
    # LVS scheduling algorithm
    lb_algo wlc
    # DR mode
    lb_kind DR
    # session persistence in seconds; 0 disables it
    persistence_timeout 0
    # use TCP to check real-server health
    protocol TCP
    real_server 192.168.96.131 80 {
        # weight
        weight 100
        TCP_CHECK {
            # 10-second connection timeout
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.96.132 80 {
        weight 90
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

Restart the keepalived service:

systemctl restart keepalived

zhdy02:

[root@zhdy02 ~]# yum install -y keepalived

[root@zhdy02 ~]# vim /etc/keepalived/keepalived.conf

vrrp_instance VI_1 {
    # this is the backup server
    state BACKUP
    # NIC that carries the VIP; change ens33 if your interface is named differently
    interface ens33
    virtual_router_id 51
    # lower than the master's 100
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass zhangduanya
    }
    virtual_ipaddress {
        192.168.96.200
    }
}
virtual_server 192.168.96.200 80 {
    # poll real-server state every 10 seconds
    delay_loop 10
    # LVS scheduling algorithm
    lb_algo wlc
    # DR mode
    lb_kind DR
    # session persistence in seconds; 0 disables it
    persistence_timeout 0
    # use TCP to check real-server health
    protocol TCP
    real_server 192.168.96.131 80 {
        # weight
        weight 100
        TCP_CHECK {
            # 10-second connection timeout
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.96.132 80 {
        weight 90
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

After configuring keepalived, enable IP forwarding (on both master and backup):

echo 1 >/proc/sys/net/ipv4/ip_forward
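Writing to /proc only lasts until reboot; to persist forwarding, a sketch assuming the standard CentOS 7 sysctl.conf:

```shell
# Persist IPv4 forwarding across reboots
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
sysctl -p    # reload and verify
```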

Start keepalived, beginning with the master:

systemctl start keepalived

4. Configure the two Nginx servers

zhdy03 (Nginx server1):
# yum install -y nginx    # yum keeps this cluster demo simple; in production, prefer building nginx from source

# systemctl start nginx

# ps aux | grep nginx

# netstat -lntp

# vim /usr/share/nginx/html/index.html 
this is master nginx!
Enable the loopback route for the VIP and suppress ARP replies on the Nginx server:

[root@zhdy03 ~]# echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
[root@zhdy03 ~]# echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
[root@zhdy03 ~]# echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
[root@zhdy03 ~]# echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
 
Set up the loopback IP:

[root@zhdy03 ~]# ifconfig lo:0 192.168.96.200 broadcast 192.168.96.200 netmask 255.255.255.255 up
[root@zhdy03 ~]# route add -host 192.168.96.200 dev lo:0
zhdy04 (Nginx server2), the same steps:
# yum install -y nginx    # same note as above: yum for simplicity, build from source in production

# systemctl start nginx

# ps aux | grep nginx

# netstat -lntp

# vim /usr/share/nginx/html/index.html 
this is slave nginx!
Enable the loopback route for the VIP and suppress ARP replies on the Nginx server:

[root@zhdy04 ~]# echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
[root@zhdy04 ~]# echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
[root@zhdy04 ~]# echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
[root@zhdy04 ~]# echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
 
Set up the loopback IP:

[root@zhdy04 ~]# ifconfig lo:0 192.168.96.200 broadcast 192.168.96.200 netmask 255.255.255.255 up
[root@zhdy04 ~]# route add -host 192.168.96.200 dev lo:0

5. Verify and test:

zhdy01:

[root@zhdy01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.96.200:80 wlc
  -> 192.168.96.131:80            Route   100    0          0
  -> 192.168.96.132:80            Route   90     1          0

zhdy02:

[root@zhdy02 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.96.200:80 wlc
  -> 192.168.96.131:80            Route   100    0          0
  -> 192.168.96.132:80            Route   90     1          0
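If you script health checks later, counting the real servers that ipvsadm reports is a quick smoke test. A hypothetical helper (not part of the original setup) that parses saved `ipvsadm -ln` output:

```shell
# Count real-server lines ("->" followed by an IP) in saved `ipvsadm -ln` output;
# the header's "-> RemoteAddress:Port" line is excluded by requiring a digit.
count_real_servers() {
  grep -c -e '-> [0-9]' "$1"
}

# usage sketch: ipvsadm -ln > /tmp/ipvs.txt && count_real_servers /tmp/ipvs.txt
```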

Verify by opening 192.168.96.200 in a browser (see the screenshot below).

That completes the LVS + Keepalived + Nginx configuration.
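With distinct index pages on the two real servers, a quick client-side check shows the scheduler at work (run from a machine outside the cluster; plain curl, nothing beyond the setup above):

```shell
# Request the VIP several times; responses should come from both nginx servers,
# roughly in proportion to the wlc weights (100 vs 90)
for i in $(seq 1 6); do curl -s http://192.168.96.200/; done
```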

Now test failover: stop one of the LVS + Keepalived nodes.

Test again: the behavior matches the animation above.

Push further and stop one of the nginx servers as well.

No matter how often you refresh, the same single page is served (the output below proves it):

[root@zhdy02 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.96.200:80 wlc
  -> 192.168.96.131:80            Route   100    1          1

6. MySQL master-slave

A question I had at first: if MySQL runs on its own server, how do the web servers connect to it? Everything used to live on one box; once the database is split out, you simply point at the database host's IP. See the diagram below.

7. Installing and configuring DRBD

  1. MySQL master-slave replication keeps copies on two or more machines; when one node fails, service switches to another, keeping MySQL available at roughly a 99.000% SLA.
  2. MySQL + DRBD replication can reach a 99.999% SLA.

With numbers like that, today I'm giving DRBD a try!

7.1 Add a dedicated data disk (in a VM, just attach a new virtual disk)

Then, on both machines:

# vim /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.96.133 zhdy05
192.168.96.134 zhdy06

# ntpdate -u time.nist.gov      # one-shot network time sync

7.2 Install MySQL

cd /usr/local/src

wget http://mirrors.sohu.com/mysql/MySQL-5.6/mysql-5.6.35-linux-glibc2.5-x86_64.tar.gz 

tar zxvf mysql-5.6.35-linux-glibc2.5-x86_64.tar.gz

mv mysql-5.6.35-linux-glibc2.5-x86_64 /usr/local/mysql

cd /usr/local/mysql

useradd mysql

mkdir -p /data/mysql

chown -R mysql:mysql /data/mysql

./scripts/mysql_install_db --user=mysql --datadir=/data/mysql

cp support-files/my-default.cnf  /etc/my.cnf

cp support-files/mysql.server /etc/init.d/mysqld

vi /etc/init.d/mysqld 

Edit the basedir and datadir lines:
basedir=/usr/local/mysql
datadir=/data/mysql

/etc/init.d/mysqld start

7.3 Install DRBD

On both machines:

# rpm --import http://elrepo.org/RPM-GPG-KEY-elrepo.org
# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
# yum -y install drbd84-utils kmod-drbd84

7.4 Partition the disk for DRBD (each node provides a partition of the same size):

[root@zhdy05 mysql]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   60G  0 disk 
├─sda1   8:1    0  400M  0 part /boot
├─sda2   8:2    0    2G  0 part [SWAP]
└─sda3   8:3    0 57.6G  0 part /
sdb      8:16   0   10G  0 disk 
sr0     11:0    1  4.1G  0 rom  

[root@zhdy05 mysql]# fdisk /dev/sdb

n → p → 3 → Enter → Enter → w

[root@zhdy05 mysql]# cat /proc/partitions
major minor  #blocks  name

   8        0   62914560 sda
   8        1     409600 sda1
   8        2    2097152 sda2
   8        3   60406784 sda3
   8       16   10485760 sdb
   8       19   10484736 sdb3
  11        0    4277248 sr0

7.5 Edit the DRBD global configuration

[root@zhdy05 ~]# vim /etc/drbd.d/global_common.conf

global {
    usage-count no;
}
common {
    protocol C;
    handlers {
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
    }
    startup {
        wfc-timeout 30;
        degr-wfc-timeout 30;
    }
    options {
    }
    disk {
        on-io-error detach;
        fencing resource-only;
    }
    net {
        cram-hmac-alg "sha1";
        shared-secret "mydrbd";
    }
    syncer {
        rate 100M;
    }
}

7.6 Add the resource file:

[root@zhdy05 ~]# vim /etc/drbd.d/r0.res
resource r0 {
    device /dev/drbd0;
    disk /dev/sdb3;
    meta-disk internal;
    on zhdy05 {
        address 192.168.96.133:7789;
    }
    on zhdy06 {
        address 192.168.96.134:7789;
    }
}
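Before initializing, it is worth validating the resource file; drbdadm can parse and echo the effective configuration (both nodes must carry identical copies):

```shell
# Parse /etc/drbd.d/*.res and print resource r0 as DRBD understands it;
# a syntax error aborts with a message instead
drbdadm dump r0
```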

7.7 Copy the configuration to zhdy06

[root@zhdy05 ~]# scp /etc/drbd.d/{global_common.conf,r0.res} zhdy06:/etc/drbd.d/
The authenticity of host 'zhdy06 (192.168.96.134)' can't be established.
ECDSA key fingerprint is 2f:14:f6:09:bd:e2:79:98:d1:62:15:0c:90:90:1d:bb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'zhdy06,192.168.96.134' (ECDSA) to the list of known hosts.
global_common.conf                                                                                                                                                                                         100% 2354     2.3KB/s   00:00    
r0.res

7.8 Initialize the resource and start the service

iptables -F
iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 7789 -j ACCEPT
service iptables save

################ Initialize the resource (run create-md on BOTH nodes; shown here on zhdy05)
[root@zhdy05 ~]# drbdadm create-md r0
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.

################ Start the service (on both nodes)
[root@zhdy05 ~]# systemctl start drbd

[root@zhdy05 ~]# cat /proc/drbd
version: 8.4.9-1 (api:1/proto:86-101)
GIT-hash: 9976da086367a2476503ef7f6b13d4567327a280 build by akemi@Build64R7, 2016-12-04 01:08:48
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Diskless C r-----
    ns:0 nr:0 dw:0 dr:0 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:10484380

###### Check the listening address and port
[root@zhdy05 ~]# netstat -anput | grep 7789
tcp        0      0 192.168.96.133:47387    192.168.96.134:7789     ESTABLISHED -
tcp        0      0 192.168.96.133:49493    192.168.96.134:7789     ESTABLISHED -

Promote one node to Primary by running the following on it — here, zhdy05:

########## Make zhdy05 the primary
[root@zhdy05 ~]# drbdadm -- --overwrite-data-of-peer primary r0

[root@zhdy05 ~]# cat /proc/drbd     # synchronization in progress
version: 8.4.9-1 (api:1/proto:86-101)
GIT-hash: 9976da086367a2476503ef7f6b13d4567327a280 build by akemi@Build64R7, 2016-12-04 01:08:48
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
    ns:1131520 nr:0 dw:0 dr:1132432 al:8 bm:0 lo:0 pe:2 ua:0 ap:0 ep:1 wo:f oos:9354908
	[=>..................] sync'ed: 10.9% (9132/10236)M
	finish: 0:03:56 speed: 39,472 (38,944) K/sec

[root@zhdy05 ~]# cat /proc/drbd      # sync complete; the pair now shows Primary/Secondary
version: 8.4.9-1 (api:1/proto:86-101)
GIT-hash: 9976da086367a2476503ef7f6b13d4567327a280 build by akemi@Build64R7, 2016-12-04 01:08:48
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:10484380 nr:0 dw:0 dr:10485292 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

[root@zhdy06 ~]# cat /proc/drbd      # the view from the secondary
version: 8.4.9-1 (api:1/proto:86-101)
GIT-hash: 9976da086367a2476503ef7f6b13d4567327a280 build by akemi@Build64R7, 2016-12-04 01:08:48
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:10484380 dw:10484380 dr:0 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
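Wait for cs:Connected (and ds:UpToDate/UpToDate) before creating the filesystem. A small hypothetical helper, not part of the original notes, that pulls the connection state out of /proc/drbd-style text:

```shell
# Print the first connection state (the cs: field) found in the given file
drbd_cstate() {
  sed -n 's/.*cs:\([A-Za-z]*\).*/\1/p' "$1" | head -n1
}

# usage sketch: drbd_cstate /proc/drbd
```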

7.9 Create a filesystem and mount it:

[root@zhdy05 ~]# mkfs.ext4 /dev/drbd0           // format the DRBD device
[root@zhdy05 ~]# mkdir /mydata                  // create the mount point
[root@zhdy05 ~]# mount /dev/drbd0 /mydata/      // mount on the primary only (the secondary neither mounts it automatically nor should it)

[root@zhdy05 ~]# df -h      // note the last line
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        58G  3.1G   55G    6% /
devtmpfs        479M     0  479M    0% /dev
tmpfs           489M     0  489M    0% /dev/shm
tmpfs           489M  6.7M  482M    2% /run
tmpfs           489M     0  489M    0% /sys/fs/cgroup
/dev/sda1       397M  119M  279M   30% /boot
tmpfs            98M     0   98M    0% /run/user/0
/dev/drbd0      9.8G   37M  9.2G    1% /mydata

7.10 Test

[root@zhdy05 mydata]# ls
lost+found

[root@zhdy05 mydata]# touch tst.txt

[root@zhdy05 mydata]# cp /etc/issue /mydata/

[root@zhdy05 mydata]# ls
issue  lost+found  tst.txt

[root@zhdy05 mydata]# !cat
cat /proc/drbd
version: 8.4.9-1 (api:1/proto:86-101)
GIT-hash: 9976da086367a2476503ef7f6b13d4567327a280 build by akemi@Build64R7, 2016-12-04 01:08:48
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:10783940 nr:0 dw:299560 dr:10486289 al:81 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

Fully synchronized!


[root@zhdy06 ~]# mount /dev/drbd0 /mnt # by default the DRBD device cannot be mounted on the secondary
mount: you must specify the filesystem type
[root@zhdy06 ~]# mount /dev/sdb3 /mnt # the backing disk cannot be mounted either, because DRBD is using it
mount: /dev/sdb3 already mounted or /mnt busy
[root@zhdy06 ~]# drbdadm down r0 # stop DRBD replication on this node
[root@zhdy06 ~]# mount /dev/sdb3 /mnt/ # now the backing disk mounts
[root@zhdy06 ~]# df -h # disk usage matches zhdy05 exactly, so the manual switchover is complete

[root@zhdy06 ~]# umount /mnt/ # unmount the backing disk
[root@zhdy06 ~]# drbdadm up r0 # re-enable DRBD replication
[root@zhdy06 ~]# cat /proc/drbd # check the sync state: back to the Primary/Secondary pair
version: 8.4.4 (api:1/proto:86-101)
GIT-hash: 74402fecf24da8e5438171ee8c19e28627e1c98a build by root@, 2014-07-08 20:52:23
0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

Done: the DRBD storage works. But this manual process is clumsy, so later I plan to add Heartbeat so that master-slave switchover happens fully automatically on failure.
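The down-and-mount-the-raw-disk dance above works, but DRBD's intended manual switchover changes roles instead of stopping replication. A sketch, assuming resource r0 and mount point /mydata as configured above:

```shell
# On the current primary (zhdy05): release the device, then demote it
umount /mydata
drbdadm secondary r0

# On the node taking over (zhdy06): promote, then mount the replicated device
drbdadm primary r0
mkdir -p /mydata && mount /dev/drbd0 /mydata
```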


I hit many, many problems with DRBD. The architecture above took about 4 hours (thinking, thinking, thinking); DRBD alone took at least 6, with lots of extra reading:

Recovering from DRBD UpToDate/DUnknown (this one deserves its own heading — it nearly finished me)

1. Check the node states

(1) Primary node state

[root@app1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515, 2013-11-29 12:28:00
0: cs:StandAlone ro:Primary/Unknown ds:UpToDate/DUnknown   r-----
ns:0 nr:0 dw:0 dr:672 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:604

(2) Secondary node state

[root@app2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515, 2013-11-29 12:28:00
0: cs:StandAlone ro:Secondary/Unknown ds:UpToDate/DUnknown   r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:548
2. Decide that the primary's data is authoritative and resync it to the secondary

(1) Stop the drbd service on app2

[root@app2 ~]# systemctl stop drbd
Stopping all DRBD resources: .

(2) Re-initialize the metadata

[root@app2 ~]# drbdadm create-md data    # the argument after create-md is the DRBD resource name
You want me to create a v08 style flexible-size internal meta data block.
There appears to be a v08 flexible-size internal meta data block
already in place on /dev/sdb1 at byte offset 5364318208
Do you really want to overwrite the existing v08 meta-data?
[need to type 'yes' to confirm] yes
Writing meta data...
md_offset 5364318208
al_offset 5364285440
bm_offset 5364121600
Found ext3 filesystem
5238400 kB data area apparently used
5238400 kB left usable by current configuration
Even though it looks like this would place the new meta data into unused space, you still need to confirm, as this is only a guess.
Do you want to proceed? [need to type 'yes' to confirm] yes
initializing activity log
NOT initializing bitmap
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
New drbd meta data block successfully created.
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory

(3) Start the drbd service

[root@app2 ~]# systemctl start drbd
Starting DRBD resources: [
create res: data
prepare disk: data
adjust disk: data
adjust net: data
]
..........
***************************************************************
DRBD's startup script waits for the peer node(s) to appear.
- In case this node was already a degraded cluster before the
reboot the timeout is 0 seconds. [degr-wfc-timeout]
- If the peer was available before the reboot the timeout will
expire after 0 seconds. [wfc-timeout]
(These values are for resource 'data'; 0 sec -> wait forever)
To abort waiting enter 'yes' [  15]:yes
.

[root@app2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515, 2013-11-29 12:28:00
0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
ns:0 nr:5238400 dw:5238400 dr:0 al:0 bm:320 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
3. On the primary node, app1

(1) The primary still shows the stale StandAlone state:

[root@app1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515, 2013-11-29 12:28:00
0: cs:StandAlone ro:Primary/Unknown ds:UpToDate/DUnknown   r-----
ns:0 nr:0 dw:0 dr:672 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:604

(2) After reloading drbd, the data resyncs to the secondary:

[root@app1 ~]# systemctl reload drbd
Reloading DRBD configuration: .

[root@app1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515, 2013-11-29 12:28:00
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---n-
ns:176816 nr:0 dw:0 dr:180896 al:0 bm:10 lo:4 pe:2 ua:8 ap:0 ep:1 wo:d oos:5063296
[>....................] sync'ed:  3.4% (4944/5112)M
finish: 0:00:57 speed: 87,552 (87,552) K/sec

[root@app1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515, 2013-11-29 12:28:00
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:5238400 nr:0 dw:0 dr:5239072 al:0 bm:320 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0

Check the node states again and they are in sync. That resolves the split-brain, with all data intact.
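Re-running create-md as above does work, but it rebuilds the victim's metadata from scratch. The lighter standard recovery for split brain is to reconnect while discarding the victim's changes; a sketch, assuming the resource is named data as in the logs above:

```shell
# On the node whose data will be discarded (app2):
drbdadm secondary data
drbdadm connect --discard-my-data data

# On the surviving primary (app1), if it sits in StandAlone:
drbdadm connect data
```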

One more pointer:

If you are interested in DRBD + Heartbeat + NFS high availability, follow this link.

8. Zabbix configuration

I really didn't dare start a seventh VM, so I set Zabbix up directly on one of the existing servers. Reference:

Configuring the Zabbix architecture


Finally, allow me one more heading:

Running six VMs with a browser open for research while writing a 773-line note in Youdao Notes lagged so badly that this one sentence took several attempts to type! Still, the SSD and extra RAM I bought recently have never felt more worth it!

This article is part of the Tencent Cloud self-media sharing program.
