Node information
RabbitMQ node   IP           Type   OS
mq1             10.5.9.168   RAM    Windows 2016
mq2             10.5.9.169   DISK   Windows 2016

Overall architecture: mq1 + mq2 two-node mirrored cluster + HAProxy (2 nodes + Keepalived)
First install otp_win64_22.2.exe; it installs to C:\Program Files\erl10.6.
Set the environment variables
This PC --> right-click "Properties" --> Advanced system settings --> Environment Variables --> "New" under system variables
Add the environment variable: ERLANG_HOME = C:\Program Files\erl10.6
Then double-click the Path system variable, click "New", and add "%ERLANG_HOME%\bin" to Path.
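The same variables can also be set from an elevated cmd prompt instead of the GUI. A minimal sketch, assuming the default install path shown above (open a new cmd window afterwards, because setx does not affect the current session):

:: set ERLANG_HOME machine-wide (administrator prompt required)
setx ERLANG_HOME "C:\Program Files\erl10.6" /M
:: append the bin directory to the system Path; setx stores plain strings,
:: so the literal path is used here instead of %ERLANG_HOME%\bin
setx PATH "%PATH%;C:\Program Files\erl10.6\bin" /M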
Then run erl in a cmd window; if the version banner appears, Erlang is installed correctly:
C:\Users\Administrator>erl
Eshell V10.6  (abort with ^G)
1>

C:\Users\Administrator>echo %path%
C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\erl10.6\bin;C:\Users\Administrator\AppData\Local\Microsoft\WindowsApps;

If the new Path does not take effect and erl cannot be found, re-set it manually in the current cmd session with set path=
Then install rabbitmq-server-3.7.13.exe (default installation path).
Configure host name resolution: edit C:\Windows\System32\drivers\etc\hosts on both machines and add:

10.5.9.168 mq1
10.5.9.169 mq2
Run ipconfig /flushdns in a cmd window.
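Before clustering, it is worth confirming that the two hosts can resolve each other by name. A quick sketch (run the first command on mq1 and the second on mq2):

:: on mq1
ping -n 2 mq2
:: on mq2
ping -n 2 mq1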
Copy C:\Windows\System32\config\systemprofile\.erlang.cookie from the first machine to the same path on the other machine.
Do the same for C:\Users\Administrator\.erlang.cookie.
Confirm that the contents of all four cookie files (two on each machine) are identical.
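One way to compare the cookies without opening them is to hash the files with the built-in certutil tool and check that all four hashes match. A sketch, run in cmd on both machines:

certutil -hashfile "C:\Windows\System32\config\systemprofile\.erlang.cookie" MD5
certutil -hashfile "C:\Users\Administrator\.erlang.cookie" MD5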
Start the RabbitMQ service on each node: rabbitmq-service start
C:\Users\Administrator>cd C:\Program Files\RabbitMQ Server\rabbitmq_server-3.7.13\sbin

C:\Program Files\RabbitMQ Server\rabbitmq_server-3.7.13\sbin>rabbitmq-service start
The RabbitMQ service is starting.
The RabbitMQ service was started successfully.

Pick one of the nodes (10.5.9.169) and stop its rabbit application: rabbitmqctl stop_app

C:\Program Files\RabbitMQ Server\rabbitmq_server-3.7.13\sbin>rabbitmqctl stop_app
Stopping rabbit application on node rabbit@mq2 ...

Join the 10.5.9.169 machine to the cluster: rabbitmqctl join_cluster --ram rabbit@hostname (--ram joins it as a RAM node; by default a node joins as a disc node). Note that on a Windows machine the node name here is uppercase.

## Check the hostname on node 1:
C:\Program Files\RabbitMQ Server\rabbitmq_server-3.7.13\sbin>hostname
mq1

C:\Program Files\RabbitMQ Server\rabbitmq_server-3.7.13\sbin>rabbitmqctl join_cluster --ram rabbit@mq1
Clustering node rabbit@mq2 with rabbit@mq1

Start the rabbit application again: rabbitmqctl start_app

C:\Program Files\RabbitMQ Server\rabbitmq_server-3.7.13\sbin>rabbitmqctl start_app
Starting node rabbit@mq2 ...
 completed with 3 plugins.

Check the cluster status: rabbitmqctl cluster_status

C:\Program Files\RabbitMQ Server\rabbitmq_server-3.7.13\sbin>rabbitmqctl cluster_status
Cluster status of node rabbit@mq2 ...
[{nodes,[{disc,[rabbit@mq1]},{ram,[rabbit@mq2]}]},
 {running_nodes,[rabbit@mq1,rabbit@mq2]},
 {cluster_name,<<"rabbit@mq2">>},
 {partitions,[]},
 {alarms,[{rabbit@mq1,[]},{rabbit@mq2,[]}]}]

Check the cluster status on node 1:

C:\Program Files\RabbitMQ Server\rabbitmq_server-3.7.13\sbin>rabbitmqctl cluster_status
Cluster status of node rabbit@mq1 ...
[{nodes,[{disc,[rabbit@mq1]},{ram,[rabbit@mq2]}]},
 {running_nodes,[rabbit@mq2,rabbit@mq1]},
 {cluster_name,<<"rabbit@mq2">>},
 {partitions,[]},
 {alarms,[{rabbit@mq2,[]},{rabbit@mq1,[]}]}]
--- The above is how a single node joins a cluster: as soon as a node joins any node that is already part of the cluster, it becomes a member of the cluster.
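If the node type chosen at join time needs to be changed later (for example, turning mq2 from a RAM node into a disc node), rabbitmqctl can do it while the rabbit application is stopped. A sketch, run on the node being changed:

rabbitmqctl stop_app
rabbitmqctl change_cluster_node_type disc
rabbitmqctl start_app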
Configure the mirrored cluster
http://10.5.9.168:15672
http://localhost:15672/#/
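If these pages are not reachable on port 15672, the management plugin may not be enabled yet. A sketch, run from the sbin directory on each node:

rabbitmq-plugins enable rabbitmq_management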
Log in to the management UI with guest/guest. Queue mirroring can be configured in the web UI: open http://localhost:15672/#/, go to "Admin" --> "Policies", and under "Add / update a policy" enter:

name: ha-all
pattern: ^
apply to: Exchanges and queues
priority: 0
definition: ha-mode=all

Click "Add policy" to create the mirroring policy.

To test, create a queue and check whether it is automatically replicated to the other node. In the web UI (creating the queue on node 1), go to "Queues" --> "Add a new queue":

name: hq-test
durability: Durable
node: rabbit@mq1
auto delete: No

Click "Add queue", then check the Queues page: the new queue exists on both nodes.
Create the queue and check it: it exists on both nodes.
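The same policy can also be created and verified from the command line instead of the web UI. A sketch using rabbitmqctl from the sbin directory (the doubled quotes are cmd escaping for the JSON definition; list_queues should show a mirror pid for hq-test on the other node):

rabbitmqctl set_policy ha-all "^" "{""ha-mode"":""all""}" --apply-to all
rabbitmqctl list_policies
rabbitmqctl list_queues name policy pid slave_pids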
=== haproxy + keepalived
# Node 1
[root@localhost ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:50:56:88:F1:B9
[root@localhost ~]# rpm -qa|grep haproxy
haproxy-1.5.18-1.el6.x86_64
[root@localhost ~]# rpm -qa|grep keepalived
keepalived-1.2.13-5.el6_6.x86_64
[root@localhost haproxy]# vim haproxy.cfg
### The HAProxy monitoring page is at: http://10.5.9.171:8100/haproxy_status
listen admin_stats
    bind *:8100
    mode http
    log 127.0.0.1 local3 err
    stats refresh 60s
    stats uri /haproxy_status
    stats realm welcome login\ Haproxy
    stats auth admin:admin123456
    stats hide-version
    stats admin if TRUE

listen rabbitmq_admin
    bind 0.0.0.0:15672
    server mq1 10.5.9.168:15672
    server mq2 10.5.9.169:15672

listen rabbitmq_cluster
    bind 0.0.0.0:5672
    mode tcp
    balance roundrobin
    server mq1 10.5.9.168:5672 check inter 2000 rise 2 fall 3 weight 1
    server mq2 10.5.9.169:5672 check inter 2000 rise 2 fall 3 weight 1
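Before starting HAProxy, the configuration file can be checked for syntax errors. A sketch:

haproxy -c -f /etc/haproxy/haproxy.cfg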
[root@localhost haproxy]# ip a
[root@localhost haproxy]# ps -ef|grep haproxy
[root@localhost haproxy]# haproxy -f /etc/haproxy/haproxy.cfg
[WARNING] 077/133130 (20245) : parsing [/etc/haproxy/haproxy.cfg:45] : 'option httplog' not usable with proxy 'rabbitmq_cluster' (needs 'mode http'). Falling back to 'option tcplog'.
[WARNING] 077/133130 (20245) : config : 'option forwardfor' ignored for proxy 'rabbitmq_cluster' as it requires HTTP mode.
[ALERT] 077/133130 (20245) : Starting proxy rabbitmq_admin: cannot bind socket [10.5.9.168:15672]

# Fix the bind error by allowing binds to non-local addresses:
[root@localhost haproxy]# vim /etc/sysctl.conf
net.ipv4.ip_nonlocal_bind = 1
[root@localhost haproxy]# sysctl -p
[root@localhost haproxy]# haproxy -f /etc/haproxy/haproxy.cfg
[WARNING] 077/133427 (20252) : parsing [/etc/haproxy/haproxy.cfg:45] : 'option httplog' not usable with proxy 'rabbitmq_cluster' (needs 'mode http'). Falling back to 'option tcplog'.
[WARNING] 077/133427 (20252) : config : 'option forwardfor' ignored for proxy 'rabbitmq_cluster' as it requires HTTP mode.
[root@localhost haproxy]# ps -ef|grep haproxy
haproxy   20253      1  0 13:34 ?        00:00:00 haproxy -f /etc/haproxy/haproxy.cfg
root      20255  20145  0 13:34 pts/0    00:00:00 grep haproxy

# Reload the configuration file
[root@localhost haproxy]# service haproxy reload

[root@localhost keepalived]# more /etc/keepalived/haproxy_check.sh
#!/bin/bash
A=`ps -C haproxy --no-header |wc -l`
if [ $A -eq 0 ];then
    /etc/init.d/keepalived stop
fi

[root@localhost keepalived]# /etc/init.d/keepalived restart

# Check the keepalived log
[root@localhost keepalived]# tail -f -n 30 /var/log/messages
Mar 18 13:42:30 localhost Keepalived_vrrp[20281]: VRRP sockpool: [ifindex(2), proto(112), unicast(1), fd(10,11)]
Mar 18 13:42:30 localhost Keepalived_healthcheckers[20280]: Using LinkWatch kernel netlink reflector...
Mar 18 13:42:31 localhost Keepalived_vrrp[20281]: VRRP_Instance(VI_1) Transition to MASTER STATE
Mar 18 13:42:31 localhost Keepalived_vrrp[20281]: VRRP_Instance(VI_1) Received lower prio advert, forcing new election
Mar 18 13:42:32 localhost Keepalived_vrrp[20281]: VRRP_Instance(VI_1) Entering MASTER STATE
Mar 18 13:42:32 localhost Keepalived_vrrp[20281]: VRRP_Instance(VI_1) setting protocol VIPs.
Mar 18 13:42:32 localhost Keepalived_vrrp[20281]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 10.5.9.188
Mar 18 13:42:32 localhost Keepalived_healthcheckers[20280]: Netlink reflector reports IP 10.5.9.188 added
Mar 18 13:42:37 localhost Keepalived_vrrp[20281]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 10.5.9.188
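The keepalived configuration itself is not shown above. Below is a minimal sketch for the master HAProxy node, assuming interface eth0, the VIP 10.5.9.188 seen in the log, and the haproxy_check.sh script above; virtual_router_id, priority and auth_pass are placeholder values, and the second node should use state BACKUP with a lower priority:

! /etc/keepalived/keepalived.conf (master node sketch)
vrrp_script chk_haproxy {
    script "/etc/keepalived/haproxy_check.sh"   # stops keepalived if haproxy is down
    interval 2
}

vrrp_instance VI_1 {
    state MASTER                 # BACKUP on the second node
    interface eth0
    virtual_router_id 51         # placeholder; must be identical on both nodes
    priority 100                 # use a lower value on the backup node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111           # placeholder
    }
    track_script {
        chk_haproxy
    }
    virtual_ipaddress {
        10.5.9.188               # VIP fronting both HAProxy nodes
    }
}

Once both keepalived instances are running, clients reach the cluster through the VIP: AMQP on 10.5.9.188:5672 and the management UI on http://10.5.9.188:15672.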