Original article: https://www.cnblogs.com/cxbhakim/p/9068833.html
I. The keepalived service
- What is keepalived?
Keepalived can configure and manage LVS and run health checks against the real servers behind it; beyond that, it provides high availability for general network services and guards against single points of failure.
- How keepalived works
keepalived is built on the VRRP protocol. VRRP stands for Virtual Router Redundancy Protocol.
VRRP can be thought of as a protocol for making routers highly available: N routers providing the same function are grouped together, with one master and several backups. The master owns a VIP that serves traffic (the other machines on the LAN use this VIP as their default route) and periodically sends multicast VRRP advertisements to tell the backups that it is still alive. When the backups stop receiving these advertisements they assume the master is down and elect a new master among themselves according to the VRRP priority. This keeps the router, and therefore the service, continuously available; a takeover can complete in under a second.
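If you want to see these advertisements on the wire, a simple way (assuming the VRRP instance is bound to eth1, as in the configuration later in this article) is to capture the multicast traffic keepalived sends to 224.0.0.18, which is IP protocol 112:

# Capture VRRP advertisements on the interface that carries the VIP.
# 224.0.0.18 is the VRRP multicast group; IP protocol 112 is VRRP.
tcpdump -i eth1 -nn 'host 224.0.0.18 and ip proto 112'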

keepalived consists of three main modules: core, check and vrrp.
- The core module is the heart of keepalived; it starts and maintains the main process and loads and parses the global configuration file.
- The check module performs health checks and implements the various common check methods.
- The vrrp module implements the VRRP protocol.
-
- keepalived versus ZooKeeper for high availability
- Keepalived:
Pros: simplicity. High availability and active/standby failover can be achieved with essentially no work at the application level, and the downtime during a failover is short.
Cons: the same simplicity. Because VRRP and the master/backup switchover involve no sophisticated logic, certain corner cases cannot be handled; for example, a broken communication link between master and backup leads to split brain. keepalived by itself is also not well suited to load balancing.
- ZooKeeper:
Pros: supports both high availability and load balancing, and is itself a distributed service.
Cons: tightly coupled to the application. The application code must implement the ZooKeeper logic itself, such as registering names and fetching the service addresses behind a name.
II. Using the keepalived software
- Installing keepalived
[root@etcd1 ~]# yum install keepalived -y

Files installed by the package:
/etc/keepalived
/etc/keepalived/keepalived.conf    # main keepalived configuration file
/etc/rc.d/init.d/keepalived        # SysV start script (used before CentOS 7; from CentOS 7 on, the service is started by systemd)
/etc/sysconfig/keepalived
/usr/bin/genhash
/usr/libexec/keepalived
/usr/sbin/keepalived
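A quick way to confirm what was installed and which version you got (paths and version numbers will of course vary by distribution and repository):

# List the files owned by the keepalived package and print the installed version.
rpm -ql keepalived
keepalived --version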
- Configuration explained
[root@etcd1 kubernetes]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {                                   # global configuration
    notification_email {                        # alert e-mail recipients
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc   # sender address for alert mail
    smtp_server 192.168.200.1                   # mail server
    smtp_connect_timeout 30                     # SMTP connect timeout
    router_id LVS_DEVEL                         # router identifier, unique within the LAN; a string identifying this node, usually the hostname
    vrrp_skip_check_adv_addr
    vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

# virtual IP (VRRP) configuration
vrrp_instance VI_1 {                            # define an instance
    state MASTER                                # initial role, MASTER/BACKUP; the primary node is MASTER, its peer is BACKUP
    interface eth0                              # interface the virtual IP is bound to, the same interface that carries the node's own IP address (eth0 in my case)
    virtual_router_id 51                        # virtual router ID, must be identical on both nodes; the last octet of the VIP is a common choice; nodes with the same VRID form one group, and the VRID determines the multicast MAC address
    mcast_src_ip 192.168.10.50                  ## this node's own IP address
    priority 100                                # priority decides master vs backup -- higher wins; range 0-254, MASTER must be higher than BACKUP
    nopreempt                                   # set nopreempt on the higher-priority node to keep it from taking the VIP back after it recovers from a failure
    advert_int 1                                # heartbeat interval between master and backup (VRRP advertisement interval), must be identical on both nodes, default 1 s
    authentication {                            # authentication; must be identical on both nodes
        auth_type PASS
        auth_pass 1111
    }
    ## add the track_script block to the instance block
    track_script {
        chk_nginx                               ## the Nginx monitoring script to run
    }
    virtual_ipaddress {                         # VIPs shared between the nodes; more than one can be defined
        192.168.200.16
        192.168.200.17
        192.168.200.18
    }
}
....
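Before (re)starting the service it can be worth sanity-checking the file. Recent keepalived releases ship a --config-test option; on older builds a common trick is to run the daemon in the foreground and watch it parse the configuration. A sketch, not part of the original article:

# Newer keepalived (2.x) can validate the configuration without starting VRRP:
keepalived --config-test -f /etc/keepalived/keepalived.conf

# On older builds, run in the foreground with console logging and watch for parse errors:
keepalived -n -l -D -f /etc/keepalived/keepalived.conf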
- Applying the configuration
Master load balancer configuration:
[root@etcd1 vagrant]# vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id lvs-01
}

## keepalived runs the script periodically and adjusts the priority of the vrrp_instance based on the result.
# If the script exits with 0 and the configured weight is greater than 0, the priority is increased accordingly.
# If the script exits non-zero and the configured weight is less than 0, the priority is decreased accordingly.
# In all other cases the priority stays at the value configured with priority.
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"     ## path of the script that checks the nginx state
    interval 2                                  ## check interval
    weight -20                                  ## weight applied when the check fails
}

vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    ## add the track_script block to the instance block
    track_script {
        chk_nginx                               ## the Nginx monitoring script to run
    }
    virtual_ipaddress {
        192.168.10.99
    }
}
...
- Backup load balancer configuration
[root@etcd2 vagrant]# vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id lvs-02
}

vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 3
    weight -20
}

vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    ## add the track_script block to the instance block
    track_script {
        chk_nginx                               ## the Nginx monitoring script to run
    }
    virtual_ipaddress {
        192.168.10.99
    }
}
...
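To see how the vrrp_script weight and the two priorities interact in this pair, here is a worked example; it assumes the check script merely returns a non-zero exit code (rather than killing keepalived, as the script later in this article does):

# MASTER (10.50): base priority 100
# BACKUP (10.51): base priority  90
#
# chk_nginx fails on the MASTER  -> effective priority = 100 + (-20) = 80
# 80 < 90                        -> the BACKUP wins the election and takes over the VIP
# chk_nginx recovers             -> effective priority returns to 100 and, since nopreempt
#                                   is not set here, the original MASTER reclaims the VIP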
- Starting the service
Check the systemd unit file:
[root@etcd2 init.d]# cat /usr/lib/systemd/system/keepalived.service
[Unit]
Description=LVS and VRRP High Availability Monitor
After=syslog.target network-online.target

[Service]
Type=forking
PIDFile=/var/run/keepalived.pid
KillMode=process
EnvironmentFile=-/etc/sysconfig/keepalived
ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target
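Since CentOS 7 manages keepalived through this unit, the service can equally be driven with systemctl; these are the systemd counterparts of the SysV-style commands listed below:

systemctl enable keepalived    # start keepalived at boot
systemctl start keepalived     # start the service
systemctl stop keepalived      # stop the service
systemctl restart keepalived   # restart the service
systemctl status keepalived    # show current state and recent log lines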
The related operation commands are as follows:

chkconfig keepalived on        # start keepalived at boot
service keepalived start       # start the service
service keepalived stop        # stop the service
service keepalived restart     # restart the service

Once keepalived is running normally it has three processes: a parent process that supervises its children, a vrrp child process, and a checkers child process.
[root@etcd2 init.d]# ps -ef | grep keepalived
root      3626     1  0 08:59 ?        00:00:00 /usr/sbin/keepalived -D
root      3627  3626  0 08:59 ?        00:00:00 /usr/sbin/keepalived -D
root      3628  3626  0 08:59 ?        00:00:00 /usr/sbin/keepalived -D
root      3656  3322  0 09:03 pts/0    00:00:00 grep --color=auto keepalived
- Installing and configuring nginx
- Installation (omitted)
- Configuration: the default nginx configuration is used
- The default nginx index.html on each node
On machine 10.50 the page is configured as:
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx! test keepalived 1</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
On machine 10.51 the page is configured as:
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx! test keepalived 2</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
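Because the two pages differ only in the <h1> line, a quick direct check of each backend makes it easy to tell later which node is answering behind the VIP. A simple sketch, run from any machine on the 192.168.10.0/24 network:

# Query each backend directly and print its identifying headline.
curl -s http://192.168.10.50/ | grep '<h1>'   # expect: Welcome to nginx! test keepalived 1
curl -s http://192.168.10.51/ | grep '<h1>'   # expect: Welcome to nginx! test keepalived 2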
III. Testing
- Write the nginx health-check script
On every node, create the Nginx status-check script /etc/keepalived/nginx_check.sh (already referenced in keepalived.conf). The intended behaviour: if nginx has stopped, try to start it; if it cannot be started, kill the local keepalived process so that keepalived binds the virtual IP to the BACKUP machine. (The script below takes a shortcut and simply kills keepalived whenever nginx is down.)
The script contents:
[root@etcd1 keepalived]# vi nginx_check.sh
#!/bin/bash
set -x
A=`ps -C nginx --no-header | wc -l`
if [ $A -eq 0 ]; then
    echo `date`': nginx is not healthy, try to killall keepalived' >> /etc/keepalived/keepalived.log
    killall keepalived
fi
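A variant that matches the behaviour described above, trying to bring nginx back before giving up, might look like the following sketch (it assumes nginx is managed by systemd; adjust the start command to your environment):

#!/bin/bash
# Sketch: restart nginx once; only stop keepalived if nginx still is not running.
count=$(ps -C nginx --no-header | wc -l)
if [ "$count" -eq 0 ]; then
    systemctl start nginx                      # assumption: nginx runs as a systemd service
    sleep 2
    count=$(ps -C nginx --no-header | wc -l)
    if [ "$count" -eq 0 ]; then
        echo "$(date): nginx could not be restarted, stopping keepalived" >> /etc/keepalived/keepalived.log
        killall keepalived
    fi
fi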
- Start nginx and keepalived on all nodes
- IP information on machine 192.168.10.50
[root@etcd1 html]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:47:46:52 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 60191sec preferred_lft 60191sec
    inet6 fe80::5054:ff:fe47:4652/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:c4:c3:0e brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.50/24 brd 192.168.10.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet 192.168.10.99/32 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fec4:c30e/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:22:bd:9b:38 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
- IP information on machine 192.168.10.51
[root@etcd2 vagrant]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:ca:e4:8b brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 82271sec preferred_lft 82271sec
    inet6 fe80::5054:ff:feca:e48b/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:ee:7d:7d brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.51/24 brd 192.168.10.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:feee:7d7d/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:fa:84:71:b0 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
From the output above, the VIP is currently active on machine 192.168.10.50.
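A quick way to ask either node whether it currently holds the VIP (a convenience one-liner, not from the original post):

# Run on 10.50 or 10.51: report whether the VIP 192.168.10.99 is bound to eth1 on this node.
ip addr show eth1 | grep -q '192.168.10.99' && echo "this node holds the VIP" || echo "VIP not on this node"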
- Access nginx through the VIP (192.168.10.99); the result is as follows
As shown above, the nginx proxy currently serving traffic is 10.50. Now stop keepalived on machine 10.50: [root@etcd1 html]# systemctl stop keepalived.service
Access the nginx service through the VIP (192.168.10.99) again; the result is as follows:
As shown above, the nginx proxy serving traffic is now 10.51. Now start keepalived on machine 10.50 again: [root@etcd1 html]# systemctl start keepalived.service
Access the nginx service through the VIP (192.168.10.99) once more; the result is as follows:
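The whole failover round-trip can also be watched from the command line. A sketch using the hosts and VIP from this article; run the curl loop on any client and the systemctl commands on 10.50:

# On a client: keep printing which backend answers through the VIP.
while true; do curl -s http://192.168.10.99/ | grep '<h1>'; sleep 1; done

# On 10.50: trigger a failover and then fail back.
systemctl stop keepalived.service     # the headline should switch to "test keepalived 2" within a few seconds
systemctl start keepalived.service    # it should switch back to "test keepalived 1" (no nopreempt is set in this pair)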
At this point, the highly available web load balancer built with Keepalived + Nginx is complete.