  • MySQL High Availability with MHA

    This article is reprinted from: https://www.cloudbility.com/club/7104.html

    Contents

    •  I. Current high availability solutions
      • 1. Heartbeat+DRBD
      • 2. MySQL Cluster
      • 3. Global transaction IDs
      • 4. PXC
      • 5. Advantages of MHA
        • 1) Fast failover
        • 2) A master failure does not cause data inconsistency
        • 3) No changes to the existing MySQL setup
        • 4) No need for many additional servers
        • 5) No performance penalty
        • 6) Works with any storage engine
    • II. MHA overview
      • 1. MHA architecture
        • 1) MHA Manager
          • 1. Main tools in the Manager package
        • 2) MHA Node
          • 1. Node package tools
        • 3) Note
      • 2. How MHA works
    • III. Deploying MHA
      • 1. Environment
      • 2. Install the epel repository
      • 3. Environment initialization
        • 1) Set the hostname on each host
        • 2) Hostname resolution
        • 3) Passwordless SSH login
    • IV. Setting up MySQL
      • 1) Install MySQL
      • 2) Configure master-slave replication between master, slave01 and slave02
      • 3) Create the replication account on master and slave01
      • 4) Check the master status on master
      • 5) Start replication on slave01 and slave02
    • V. Setting up MHA
      • 1) Create the replication/management account used by MHA
      • 2) Install the mha4mysql-node package on the three hosts (master, slave01 and slave02)
      • 3) Install the mha4mysql-manager and mha4mysql-node packages on manager
      • 4) Edit the MHA configuration file on the manager
      • 5) Check that SSH works
      • 6) Check MySQL replication with masterha_check_repl
    • VI. MHA failover simulation
      • 1) Run the checks before every MHA experiment
      • 2) Start the MHA service on the manager and watch the log output
      • 3) Verify automatic failover after the master goes down
      • 4) Restore the master
      • 5) Start the MHA manager service again and stop slave01
      • 6) Restore slave01
      • 7) Restart the MHA manager service
    • VII. MySQL high availability with a VIP
      • 1) Edit /usr/local/mha/mha.cnf
      • 2) Edit the script /usr/local/mha/scripts/master_ip_failover
      • 3) Simulate a failure and switch over
    • VIII. Routine MHA maintenance commands
      • 1. Check that SSH login works
      • 2. Check that replication is set up correctly
      • 3. Start MHA
      • 4. Check the running status
      • 5. Stop MHA
      • 6. Restarting after a failover
    • IX. FAQ
      • 1. Possible error 1
      • 2. Possible error 2
      • 3. Possible error 3
      • 4. Possible error 4
      • 5. Possible error 5
      • 6. Tips
      • I. Current high availability solutions

        1. Heartbeat+DRBD

        Cost: an extra master server has to be added in a passive role (it serves no application traffic). Performance: to make a DRBD replication environment highly available, innodb-flush-log-at-trx-commit and sync-binlog must be set to 1, which hurts write performance.

        Consistency: binlog events that are needed may be lost on the master, leaving the slave unable to replicate and causing data consistency problems.

        2. MySQL Cluster

        MySQL Cluster does deliver real high availability, but it uses the NDB storage engine, and the SQL nodes are a single point of failure.

        Semi-synchronous replication (5.5+) greatly reduces the risk that binlog events exist only on the failed master.

        At commit time it guarantees that at least one slave (not all of them) has received the binlog, so some slaves may still be missing events.

        3. Global transaction IDs

        Adding a global transaction ID to the binary log requires a change to the binlog format, which is not supported in 5.1/5.5.

        On the application side there are many ways to implement global transaction IDs, but none of them avoid the problems of complexity, performance, data loss or consistency.

        4. PXC

        PXC provides service high availability and replicates data in parallel during synchronization. However, it only supports the InnoDB engine, every table must have a primary key, and lock conflicts and deadlocks are relatively frequent.

        5. Advantages of MHA

        1) Fast failover

        In a replication cluster, as long as the slaves are not lagging behind, MHA can usually fail over within seconds: a master failure is detected within 9-10 seconds, the master can optionally be shut down within 7-10 seconds to avoid split-brain, and the differential relay logs are applied to the new master within a few seconds, so total downtime is typically 10-30 seconds. Once the new master is up, MHA recovers the remaining slaves in parallel, so even a large number of slaves does not lengthen the master recovery time.

        2) A master failure does not cause data inconsistency

        When the current master fails, MHA automatically identifies the differences in relay logs between the slaves and applies them to every slave, so all surviving slaves stay consistent. Used together with semi-synchronous replication, this (almost) guarantees no data loss.

        3) No changes to the existing MySQL setup

        One of MHA's key design principles is to be as simple and easy to use as possible. MHA works with conventional master-slave replication on MySQL 5.0 and later, and unlike other high availability solutions it does not require changing the MySQL deployment. It works with both asynchronous and semi-synchronous replication.

        Starting/stopping/upgrading/downgrading/installing/uninstalling MHA does not require touching (including starting or stopping) MySQL replication. To upgrade MHA there is no need to stop MySQL; simply replace MHA with the new version and restart the MHA Manager.

        MHA runs on stock MySQL from version 5.0 onwards. Some other high availability solutions require specific builds (for example MySQL Cluster, or MySQL with global transaction IDs), but few teams migrate their applications just to get master high availability. In most cases an older MySQL deployment is already in place, and nobody wants to spend time moving to a different storage engine or a bleeding-edge release only to make the master highly available. MHA works on stock MySQL 5.0/5.1/5.5, so no migration is needed.

        4) No need for many additional servers

        MHA consists of MHA Manager and MHA Node.

        MHA Node runs on the MySQL servers that need failover/recovery, so it requires no extra servers.

        MHA Manager runs on a dedicated server, so one extra machine is needed (two if the manager itself must be highly available). However, a single MHA Manager can monitor a large number (even hundreds) of independent masters, so the number of extra servers stays small. It is even possible to run MHA Manager on one of the slaves. In short, MHA adds very little extra hardware.

        5) No performance penalty

        MHA works with asynchronous or semi-synchronous MySQL replication. While monitoring the master it only sends a ping every few seconds (3 seconds by default) and never sends heavy queries, so performance stays as fast as native MySQL replication.

        6) Works with any storage engine

        MHA runs on any storage engine that MySQL replication supports, not just InnoDB. Even legacy MyISAM environments that are hard to migrate can use MHA as they are.

      • II. MHA overview

        MHA (Master High Availability) is a relatively mature MySQL high availability solution. It can fail over within 30 seconds and, during the failover, preserves data consistency as far as possible. Taobao is developing a similar product, TMHA, which currently supports one master and one slave. (MHA architecture diagram)

        1. MHA architecture

        The software consists of two parts: MHA Manager (the management node) and MHA Node (the data node).

        1) MHA Manager

        MHA Manager can be deployed on a dedicated machine to manage multiple master-slave clusters, or on one of the slave nodes. It mainly runs a set of tools: masterha_manager monitors the MySQL master and performs automatic master failover, while the other tools handle manual master failover, online master switchover, connectivity checks and so on.

        1. Main tools in the Manager package

        masterha_check_ssh              check MHA SSH configuration
        
        masterha_check_repl             check MySQL replication status
        
        masterha_manager                start MHA
        
        masterha_check_status           check the current MHA running status
        
        masterha_master_monitor         detect whether the master is down
        
        masterha_master_switch          control failover (automatic or manual)
        
        masterha_conf_host              add or remove configured server entries
        

        2) MHA Node

        MHA Node runs on every MySQL server. MHA Manager periodically probes the master node in the cluster; when the master fails, it automatically promotes the slave with the most recent data to be the new master and then repoints all the other slaves to it. The whole failover process is completely transparent to the application.

        MHA Node is deployed on every server that runs MySQL, master and slaves alike. It has three main functions.

        I. Save binary logs: if the failed master is still reachable, copy its binary logs.

        II. Apply differential relay logs: generate differential relay logs from the slave with the most recent data and apply the differences.

        III. Purge relay logs: delete relay logs without stopping the SQL thread.

        1. Node package tools

        These tools are normally invoked by MHA Manager scripts and need no manual operation. They include:

        save_binary_logs                save and copy the master's binary logs
        
        apply_diff_relay_logs           identify differential relay log events and apply them to the other slaves
        
        filter_mysqlbinlog              strip unnecessary ROLLBACK events (no longer used by MHA)
        
        purge_relay_logs                purge relay logs (without blocking the SQL thread)
        

        3) Note

        To minimize data loss when the master host fails because of hardware damage, it is recommended (but not required) to configure MySQL 5.5 semi-synchronous replication alongside MHA; see the MySQL documentation for how semi-synchronous replication works. A hedged enablement sketch follows.
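
        As a hedged sketch only (not part of the original walkthrough), semi-synchronous replication can be enabled on MySQL 5.5+ roughly as follows, using the semisync plugins shipped with MySQL; the candidate master slave01 gets both roles because it may be promoted later:

        # on master (and on slave01, the candidate master):
        mysql> INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
        mysql> SET GLOBAL rpl_semi_sync_master_enabled = 1;
        mysql> SET GLOBAL rpl_semi_sync_master_timeout = 1000;   # milliseconds; fall back to async after 1 second

        # on every slave:
        mysql> INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
        mysql> SET GLOBAL rpl_semi_sync_slave_enabled = 1;
        mysql> STOP SLAVE IO_THREAD; START SLAVE IO_THREAD;      # restart the IO thread so it registers as semi-sync

        To make the settings survive a restart, the corresponding rpl_semi_sync_* options can also be added to /etc/my.cnf.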

        2. How MHA works

        1. Save the binary log events (binlog events) from the crashed master;

        2. Identify the slave with the most recent updates;

        3. Apply the differential relay logs to the other slaves;

        4. Apply the binlog events saved from the master;

        5. Promote one slave to be the new master;

        6. Make the other slaves replicate from the new master (a manual-switchover sketch follows).
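
        These steps run automatically under masterha_manager, but the same switchover can also be driven by hand with masterha_master_switch, one of the Manager tools listed above. A hedged sketch, assuming the configuration file and hosts that are set up later in this article and the tool's documented flags:

        # manual failover when the current master is already dead:
        masterha_master_switch --conf=/usr/local/mha/mha.cnf --master_state=dead \
          --dead_master_host=172.16.1.241 --dead_master_port=3306 \
          --new_master_host=172.16.1.242 --new_master_port=3306 --ignore_last_failover

        # scheduled (online) switchover while the current master is still alive:
        masterha_master_switch --conf=/usr/local/mha/mha.cnf --master_state=alive \
          --new_master_host=172.16.1.242 --new_master_port=3306 \
          --orig_master_is_new_slave --running_updates_limit=10000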

    • III. Deploying MHA

        1. Environment

        [root@server01 ~]# cat /etc/redhat-release 
        CentOS release 6.8 (Final)
        [root@server01 ~]# uname -r
        2.6.32-642.el6.x86_64
        

        2. Install the epel repository

        On all nodes

        # back up the original repo file
        mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
        
        # download the Aliyun repo file
        wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-6.repo
        
        # rebuild the yum cache
        yum makecache
        

        3. Environment initialization

        1) Set the hostname on each host

        172.16.1.241    master
        172.16.1.242    slave01
        172.16.1.243    slave02
        172.16.1.244    manager
        

        master serves writes; the candidate master (actually a slave, hostname slave01) serves reads, and slave02 also serves reads. If master goes down, the candidate master is promoted to the new master and the remaining slave is repointed to it.

        2) Hostname resolution

        # run on every server to add the host entries

        echo '''
        172.16.1.241    master
        172.16.1.242    slave01
        172.16.1.243    slave02
        172.16.1.244    manager''' >>/etc/hosts
        

        3) Passwordless SSH login

        Key-based login is used, as is common in practice, so the servers can reach each other without password prompts. One thing to note when setting up key login: do not disable password authentication while doing so, or errors will occur.

        Note: every machine must exchange keys with every other machine, so that SSH between servers needs no password. # on host master, run:

        [root@master ~]# ssh-keygen -t rsa
        [root@master ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@manager
        [root@master ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@slave01
        [root@master ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@slave02
        

        # on host slave01, run:

        [root@slave01 ~]# ssh-keygen -t rsa
        [root@slave01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@manager
        [root@slave01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@master
        [root@slave01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@slave02
        

        # on host slave02, run:

        [root@slave02 ~]# ssh-keygen -t rsa
        [root@slave02 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@manager
        [root@slave02 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@master
        [root@slave02 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@slave01
        

        # on host manager, run:

        [root@manager ~]# ssh-keygen -t rsa
        [root@manager ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@master
        [root@manager ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@slave01
        [root@manager ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@slave02
      • IV. Setting up MySQL

        1) Install MySQL

        # core settings in the master's /etc/my.cnf:

        basedir = /application/mysql
        datadir = /application/mysql/data
        port = 3306
        server_id = 241
        socket = /tmp/mysql.sock
        log-bin=mysql-bin
        log-slave-updates
        expire_logs_days = 10
        

        # core settings in slave01's /etc/my.cnf:

        basedir = /application/mysql
        datadir = /application/mysql/data
        port = 3306
        server_id = 242
        socket = /tmp/mysql.sock
        log-bin=mysql-bin
        log-slave-updates
        expire_logs_days = 10
        

        # core settings in slave02's /etc/my.cnf:

        basedir = /application/mysql
        datadir = /application/mysql/data
        port = 3306
        server_id = 243
        socket = /tmp/mysql.sock
        log-bin=mysql-bin
        log-slave-updates
        expire_logs_days = 10
        read_only = 1
        

        2) Configure master-slave replication between master, slave01 and slave02

        Note: the binlog-do-db and replicate-ignore-db settings must be identical on all servers. MHA checks the filtering rules at startup and will not start monitoring or failover if they differ. (A quick comparison sketch follows.)
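
        As a hedged check (assuming the passwordless SSH and the password-less local mysql client used throughout this article), the filter settings can be compared across the three hosts before starting MHA:

        # run from any host; the filter columns must be identical everywhere
        for h in master slave01 slave02; do
          echo "== $h =="
          ssh $h "mysql -e 'show master status\G' | egrep 'Binlog_Do_DB|Binlog_Ignore_DB'"
          ssh $h "mysql -e 'show slave status\G'  | egrep 'Replicate_Do_DB|Replicate_Ignore_DB'"
        done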

        In the replication configuration, the master must enable two important options, server-id and log-bin; server-id must be unique across the whole topology and not reused by any other host (here the last octet of each host's IP address is used as the server-id value). The slaves should enable the relay log; a hedged slave-side my.cnf sketch follows.
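
        A minimal sketch of the slave-side additions to /etc/my.cnf, assuming the layout used above; relay_log_purge=0 matches the relay-log handling that masterha_check_repl warns about later:

        # extra settings on slave01/slave02 (sketch)
        relay-log       = relay-bin     # explicit relay log base name
        relay_log_purge = 0             # keep relay logs; MHA may need them to recover other slaves
        read_only       = 1             # slaves should not take application writes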

        # on host master:

        [root@master ~]# egrep "log-bin|server_id" /etc/my.cnf 
        server_id = 241
        log-bin=mysql-bin
        

        # on host slave01:

        [root@slave01 ~]# egrep "log-bin|server_id" /etc/my.cnf 
        server_id = 242
        log-bin=mysql-bin
        

        # on host slave02:

        [root@slave02 ~]# egrep "log-bin|server_id" /etc/my.cnf 
        server_id = 243
        log-bin=mysql-bin
        

        3) Create the replication account on master and slave01.

        slave01 is the candidate master, so it needs the replication grant as well.

        #master
        [root@master ~]#  mysql  -e "grant replication slave on *.* to 'backup'@'172.16.1.%' identified by 'backup';flush privileges;"
        
        #slave01 
        [root@slave01 ~]# mysql  -e "grant replication slave on *.* to 'backup'@'172.16.1.%' identified by 'backup';flush privileges;"
        

        4) On master, check the master status

        [root@master ~]# mysql -e 'show master status;'
        +------------------+----------+--------------+------------------+
        | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
        +------------------+----------+--------------+------------------+
        | mysql-bin.000007 |      107 |              |                  |
        +------------------+----------+--------------+------------------+
        

        5) Start replication on slave01 and slave02

        # configure replication on slave01

        [root@slave01 ~]# mysql
        
        mysql> change master to master_host='172.16.1.241',master_user='backup',master_password='backup',master_port=3306,master_log_file='mysql-bin.000007',master_log_pos=107;
        Query OK, 0 rows affected (0.12 sec)
        
        mysql> start slave;
        Query OK, 0 rows affected (0.00 sec)
        
        mysql> show slave status\G
        *************************** 1. row ***************************
                       Slave_IO_State: Waiting for master to send event
                          Master_Host: 172.16.1.241
                          Master_User: backup
                          Master_Port: 3306
                        Connect_Retry: 60
                      Master_Log_File: mysql-bin.000007
                  Read_Master_Log_Pos: 107
                       Relay_Log_File: slave01-relay-bin.000002
                        Relay_Log_Pos: 253
                Relay_Master_Log_File: mysql-bin.000007
                     Slave_IO_Running: Yes
                    Slave_SQL_Running: Yes
                      Replicate_Do_DB: 
                  Replicate_Ignore_DB: 
                   Replicate_Do_Table: 
               Replicate_Ignore_Table: 
              Replicate_Wild_Do_Table: 
          Replicate_Wild_Ignore_Table: 
                           Last_Errno: 0
                           Last_Error: 
                         Skip_Counter: 0
                  Exec_Master_Log_Pos: 107
                      Relay_Log_Space: 411
                      Until_Condition: None
                       Until_Log_File: 
                        Until_Log_Pos: 0
                   Master_SSL_Allowed: No
                   Master_SSL_CA_File: 
                   Master_SSL_CA_Path: 
                      Master_SSL_Cert: 
                    Master_SSL_Cipher: 
                       Master_SSL_Key: 
                Seconds_Behind_Master: 0
        Master_SSL_Verify_Server_Cert: No
                        Last_IO_Errno: 0
                        Last_IO_Error: 
                       Last_SQL_Errno: 0
                       Last_SQL_Error: 
          Replicate_Ignore_Server_Ids: 
                     Master_Server_Id: 241
        1 row in set (0.00 sec)
        
        
        

        # configure replication on slave02

        [root@slave02 ~]# mysql
        
        mysql> change master to master_host='172.16.1.241',master_user='backup',master_password='backup',master_port=3306,master_log_file='mysql-bin.000007',master_log_pos=107;
        Query OK, 0 rows affected (0.12 sec)
        
        mysql> start slave;
        Query OK, 0 rows affected (0.00 sec)
        
        mysql> show slave status\G
        *************************** 1. row ***************************
                       Slave_IO_State: Waiting for master to send event
                          Master_Host: 172.16.1.241
                          Master_User: backup
                          Master_Port: 3306
                        Connect_Retry: 60
                      Master_Log_File: mysql-bin.000007
                  Read_Master_Log_Pos: 107
               Relay_Log_File: slave02-relay-bin.000002
                        Relay_Log_Pos: 253
                Relay_Master_Log_File: mysql-bin.000007
                     Slave_IO_Running: Yes
                    Slave_SQL_Running: Yes
                      Replicate_Do_DB: 
                  Replicate_Ignore_DB: 
                   Replicate_Do_Table: 
               Replicate_Ignore_Table: 
              Replicate_Wild_Do_Table: 
          Replicate_Wild_Ignore_Table: 
                           Last_Errno: 0
                           Last_Error: 
                         Skip_Counter: 0
                  Exec_Master_Log_Pos: 107
                      Relay_Log_Space: 411
                      Until_Condition: None
                       Until_Log_File: 
                        Until_Log_Pos: 0
                   Master_SSL_Allowed: No
                   Master_SSL_CA_File: 
                   Master_SSL_CA_Path: 
                      Master_SSL_Cert: 
                    Master_SSL_Cipher: 
                       Master_SSL_Key: 
                Seconds_Behind_Master: 0
        Master_SSL_Verify_Server_Cert: No
                        Last_IO_Errno: 0
                        Last_IO_Error: 
                       Last_SQL_Errno: 0
                       Last_SQL_Error: 
          Replicate_Ignore_Server_Ids: 
                     Master_Server_Id: 241
        1 row in set (0.00 sec)
        
        
        

        # replication between the three hosts is now fully configured! (a quick sanity check follows)
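
        As a quick sanity check (not part of the original walkthrough), a throwaway database created on the master should show up on both slaves:

        # on master
        mysql -e "create database mha_repl_test; show databases like 'mha_repl_test';"

        # on slave01 and slave02
        mysql -e "show databases like 'mha_repl_test';"

        # clean up on master (the drop replicates as well)
        mysql -e "drop database mha_repl_test;"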

    • V. Setting up MHA

      1) Create the replication/management account used by MHA

      The account must be created on every database server (master, slave01 and slave02); master is shown here as an example.

      [root@master ~]# mysql -e "grant all privileges on *.* to 'mha_rep'@'172.16.1.%' identified by '123456';flush privileges;"
      
      [root@master ~]# mysql
      
      mysql> select host,user from mysql.user;
      
      
      

      2) Install the mha4mysql-node package on the three hosts (master, slave01 and slave02)

      After installation the following scripts are created in /usr/local/bin:

      -r-xr-xr-x 1 root root 15498 4 2 16:04 apply_diff_relay_logs # identify differential relay log events and apply them to the other slaves
      -r-xr-xr-x 1 root root  4807 4 2 16:04 filter_mysqlbinlog   # strip unnecessary ROLLBACK events (no longer used by MHA)
      -r-xr-xr-x 1 root root  7401 4 2 16:04 purge_relay_logs # purge relay logs (without blocking the SQL thread)
      -r-xr-xr-x 1 root root  7263 4 2 16:04 save_binary_logs   # save and copy the master's binary logs
      

      master is shown here as the example; the other hosts are done the same way.

      [root@master ~]# yum install perl-DBD-MySQL -y
      [root@master ~]# rpm -ivh https://downloads.mariadb.com/files/MHA/mha4mysql-node-0.54-0.el6.noarch.rpm
      

      3) Install the mha4mysql-manager and mha4mysql-node packages on manager

      MHA Manager mainly provides a set of administrator command-line tools, such as masterha_manager and masterha_master_switch. MHA Manager also depends on several Perl modules, listed below.

      After installation the following scripts are created in /usr/local/bin:

      -r-xr-xr-x 1 root root 15498 4 2 15:59 apply_diff_relay_logs # identify differential relay log events and apply them to the other slaves
      -r-xr-xr-x 1 root root  4807 4 2 15:59 filter_mysqlbinlog # strip unnecessary ROLLBACK events (no longer used by MHA)
      -r-xr-xr-x 1 root root  1995 4 2 16:21 masterha_check_repl # check MySQL replication status
      -r-xr-xr-x 1 root root  1779 4 2 16:21 masterha_check_ssh # check MHA SSH configuration
      -r-xr-xr-x 1 root root  1865 4 2 16:21 masterha_check_status # check the current MHA running status
      -r-xr-xr-x 1 root root  3201 4 2 16:21 masterha_conf_host # add or remove configured server entries
      -r-xr-xr-x 1 root root  2517 4 2 16:21 masterha_manager # start MHA
      -r-xr-xr-x 1 root root  2165 4 2 16:21 masterha_master_monitor # detect whether the master is down
      -r-xr-xr-x 1 root root  2373 4 2 16:21 masterha_master_switch # control failover (automatic or manual)
      -r-xr-xr-x 1 root root  3749 4 2 16:21 masterha_secondary_check # check the master over a second network route
      -r-xr-xr-x 1 root root  1739 4 2 16:21 masterha_stop # stop the running MHA manager
      -r-xr-xr-x 1 root root  7401 4 2 15:59 purge_relay_logs # purge relay logs (without blocking the SQL thread)
      -r-xr-xr-x 1 root root  7263 4 2 15:59 save_binary_logs # save and copy the master's binary logs
      

      Copy the sample scripts to /usr/local/bin (they come with the unpacked source tarball). This step is optional, because the scripts are incomplete templates meant to be adapted by the user; if you enable a configuration parameter that points to one of these scripts without editing it first, MHA will fail with errors (which cost me a lot of debugging).

      [root@manager ~]# cd mha4mysql-manager-0.56/samples/scripts/ # the directory where the tarball was unpacked
      [root@manager scripts]# ll
      total 32
      -rwxr-xr-x 1 root root  3443 18 2012 master_ip_failover
       # script that manages the VIP during automatic failover. Not required: with keepalived you can write your own script, e.g. monitor MySQL and stop keepalived when MySQL fails so the VIP floats away automatically
       
      -rwxr-xr-x 1 root root  9186 18 2012 master_ip_online_change
      # manages the VIP during online switchover; not required, a simple shell script can do the same
      
      -rwxr-xr-x 1 root root 11867 18 2012 power_manager
      # powers off the failed host after a failure; not required
      
      -rwxr-xr-x 1 root root  1360 18 2012 send_report
      # sends an alert after a failover; not required, a simple shell script can do the same
      
      [root@manager scripts]# cp * /usr/local/bin/
      
      
      

      # install the mha4mysql-manager and mha4mysql-node packages on manager

      [root@manager ~]# yum install perl cpan perl-DBD-MySQL perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Net-Telnet -y
      
      [root@manager ~]# rpm -ivh https://downloads.mariadb.com/files/MHA/mha4mysql-node-0.54-0.el6.noarch.rpm
      
      [root@manager ~]# wget https://downloads.mariadb.com/files/MHA/mha4mysql-manager-0.56.tar.gz
      
      [root@manager ~]# tar zvxf mha4mysql-manager-0.56.tar.gz 
      
      [root@manager ~]# cd mha4mysql-manager-0.56
      
      [root@manager mha4mysql-manager-0.56]# perl Makefile.PL 
      
      [root@manager mha4mysql-manager-0.56]# make && make install
      
      [root@manager mha4mysql-manager-0.56]# mkdir -p /usr/local/mha/scripts
      
      [root@manager mha4mysql-manager-0.56]# cp samples/conf/app1.cnf /usr/local/mha/mha.cnf
      
      [root@manager mha4mysql-manager-0.56]# cp samples/scripts/* /usr/local/mha/scripts/
      

      4) Edit the MHA configuration file on the manager

      Remember to remove the inline comments below before saving; they are only explanations.

      [root@manager mha4mysql-manager-0.56]# vim /usr/local/mha/mha.cnf
      
      
      [server default]
      user=mha_rep                                    # MySQL user that MHA uses for management
      password=123456                                 # password for that user
      manager_workdir=/usr/local/mha                  # MHA working directory
      manager_log=/usr/local/mha/manager.log          # MHA log path
      ssh_user=root                                   # user for passwordless SSH login
      repl_user=backup                                # replication account used to sync data between master and slaves
      repl_password=backup
      ping_interval=1                                 # ping interval (seconds) used to check whether the master is alive
        
      [server1]
      hostname=172.16.1.241
      master_binlog_dir=/application/mysql/data/
      candidate_master=1                              # prefer this host as the new master after the master fails
        
      [server2]
      hostname=172.16.1.242
      master_binlog_dir=/application/mysql/data/
      candidate_master=1
        
      [server3]
      hostname=172.16.1.243
      master_binlog_dir=/application/mysql/data/
      no_master=1                    
      

      5) Check that SSH works

      Note: every host must have passwordless SSH to every other host, otherwise this check reports errors. (It took two days of reading how MHA works internally to track this down.)

      [root@manager ~]# masterha_check_ssh --conf=/usr/local/mha/mha.cnf 
      Mon Apr  3 21:42:33 2017 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
      Mon Apr  3 21:42:33 2017 - [info] Reading application default configurations from /usr/local/mha/mha.cnf..
      Mon Apr  3 21:42:33 2017 - [info] Reading server configurations from /usr/local/mha/mha.cnf..
      Mon Apr  3 21:42:33 2017 - [info] Starting SSH connection tests..
      Mon Apr  3 21:42:33 2017 - [debug] 
      Mon Apr  3 21:42:33 2017 - [debug]  Connecting via SSH from root@172.16.1.241(172.16.1.241:22) to root@172.16.1.242(172.16.1.242:22)..
      Mon Apr  3 21:42:33 2017 - [debug]   ok.
      Mon Apr  3 21:42:33 2017 - [debug]  Connecting via SSH from root@172.16.1.241(172.16.1.241:22) to root@172.16.1.243(172.16.1.243:22)..
      Mon Apr  3 21:42:33 2017 - [debug]   ok.
      Mon Apr  3 21:42:34 2017 - [debug] 
      Mon Apr  3 21:42:34 2017 - [debug]  Connecting via SSH from root@172.16.1.243(172.16.1.243:22) to root@172.16.1.241(172.16.1.241:22)..
      Mon Apr  3 21:42:34 2017 - [debug]   ok.
      Mon Apr  3 21:42:34 2017 - [debug]  Connecting via SSH from root@172.16.1.243(172.16.1.243:22) to root@172.16.1.242(172.16.1.242:22)..
      Mon Apr  3 21:42:34 2017 - [debug]   ok.
      Mon Apr  3 21:42:34 2017 - [debug] 
      Mon Apr  3 21:42:33 2017 - [debug]  Connecting via SSH from root@172.16.1.242(172.16.1.242:22) to root@172.16.1.241(172.16.1.241:22)..
      Mon Apr  3 21:42:33 2017 - [debug]   ok.
      Mon Apr  3 21:42:33 2017 - [debug]  Connecting via SSH from root@172.16.1.242(172.16.1.242:22) to root@172.16.1.243(172.16.1.243:22)..
      Mon Apr  3 21:42:34 2017 - [debug]   ok.
      Mon Apr  3 21:42:34 2017 - [info] All SSH connection tests passed successfully.
      

      # output like the above means SSH trust between the hosts is working

      6) Check MySQL replication with masterha_check_repl

      Note: make sure replication from master to slave01 and slave02 is already working, otherwise this check fails. (This one took about 22 hours to debug; not knowing Perl made it painful.)

      [root@manager ~]#  masterha_check_repl --conf=/usr/local/mha/mha.cnf 
      Mon Apr  3 21:44:13 2017 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
      Mon Apr  3 21:44:13 2017 - [info] Reading application default configurations from /usr/local/mha/mha.cnf..
      Mon Apr  3 21:44:13 2017 - [info] Reading server configurations from /usr/local/mha/mha.cnf..
      Mon Apr  3 21:44:13 2017 - [info] MHA::MasterMonitor version 0.56.
      Mon Apr  3 21:44:14 2017 - [info] Dead Servers:
      Mon Apr  3 21:44:14 2017 - [info] Alive Servers:
      Mon Apr  3 21:44:14 2017 - [info]   172.16.1.241(172.16.1.241:3306)
      Mon Apr  3 21:44:14 2017 - [info]   172.16.1.242(172.16.1.242:3306)
      Mon Apr  3 21:44:14 2017 - [info]   172.16.1.243(172.16.1.243:3306)
      Mon Apr  3 21:44:14 2017 - [info] Alive Slaves:
      Mon Apr  3 21:44:14 2017 - [info]   172.16.1.242(172.16.1.242:3306)  Version=5.5.32-log (oldest major version between slaves) log-bin:enabled
      Mon Apr  3 21:44:14 2017 - [info]     Replicating from 172.16.1.241(172.16.1.241:3306)
      Mon Apr  3 21:44:14 2017 - [info]     Primary candidate for the new Master (candidate_master is set)
      Mon Apr  3 21:44:14 2017 - [info]   172.16.1.243(172.16.1.243:3306)  Version=5.5.32-log (oldest major version between slaves) log-bin:enabled
      Mon Apr  3 21:44:14 2017 - [info]     Replicating from 172.16.1.241(172.16.1.241:3306)
      Mon Apr  3 21:44:14 2017 - [info]     Not candidate for the new Master (no_master is set)
      Mon Apr  3 21:44:14 2017 - [info] Current Alive Master: 172.16.1.241(172.16.1.241:3306)
      Mon Apr  3 21:44:14 2017 - [info] Checking slave configurations..
      Mon Apr  3 21:44:14 2017 - [info]  read_only=1 is not set on slave 172.16.1.242(172.16.1.242:3306).
      Mon Apr  3 21:44:14 2017 - [warning]  relay_log_purge=0 is not set on slave 172.16.1.242(172.16.1.242:3306).
      Mon Apr  3 21:44:14 2017 - [warning]  relay_log_purge=0 is not set on slave 172.16.1.243(172.16.1.243:3306).
      Mon Apr  3 21:44:14 2017 - [info] Checking replication filtering settings..
      Mon Apr  3 21:44:14 2017 - [info]  binlog_do_db= , binlog_ignore_db= 
      Mon Apr  3 21:44:14 2017 - [info]  Replication filtering check ok.
      Mon Apr  3 21:44:14 2017 - [info] Starting SSH connection tests..
      Mon Apr  3 21:44:16 2017 - [info] All SSH connection tests passed successfully.
      Mon Apr  3 21:44:16 2017 - [info] Checking MHA Node version..
      Mon Apr  3 21:44:16 2017 - [info]  Version check ok.
      Mon Apr  3 21:44:16 2017 - [info] Checking SSH publickey authentication settings on the current master..
      Mon Apr  3 21:44:16 2017 - [info] HealthCheck: SSH to 172.16.1.241 is reachable.
      Mon Apr  3 21:44:17 2017 - [info] Master MHA Node version is 0.54.
      Mon Apr  3 21:44:17 2017 - [info] Checking recovery script configurations on the current master..
      Mon Apr  3 21:44:17 2017 - [info]   Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/application/mysql/data/ --output_file=/var/tmp/save_binary_logs_test --manager_version=0.56 --start_file=mysql-bin.000007 
      Mon Apr  3 21:44:17 2017 - [info]   Connecting to root@172.16.1.241(172.16.1.241).. 
        Creating /var/tmp if not exists..    ok.
        Checking output directory is accessible or not..
         ok.
        Binlog found at /application/mysql/data/, up to mysql-bin.000007
      Mon Apr  3 21:44:17 2017 - [info] Master setting check done.
      Mon Apr  3 21:44:17 2017 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
      Mon Apr  3 21:44:17 2017 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user='mha_rep' --slave_host=172.16.1.242 --slave_ip=172.16.1.242 --slave_port=3306 --workdir=/var/tmp --target_version=5.5.32-log --manager_version=0.56 --relay_log_info=/application/mysql/data/relay-log.info  --relay_dir=/application/mysql/data/  --slave_pass=xxx
      Mon Apr  3 21:44:17 2017 - [info]   Connecting to root@172.16.1.242(172.16.1.242:22).. 
        Checking slave recovery environment settings..
          Opening /application/mysql/data/relay-log.info ... ok.
          Relay log found at /application/mysql/data, up to slave01-relay-bin.000002
          Temporary relay log file is /application/mysql/data/slave01-relay-bin.000002
          Testing mysql connection and privileges.. done.
          Testing mysqlbinlog output.. done.
          Cleaning up test file(s).. done.
      Mon Apr  3 21:44:17 2017 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user='mha_rep' --slave_host=172.16.1.243 --slave_ip=172.16.1.243 --slave_port=3306 --workdir=/var/tmp --target_version=5.5.32-log --manager_version=0.56 --relay_log_info=/application/mysql/data/relay-log.info  --relay_dir=/application/mysql/data/  --slave_pass=xxx
      Mon Apr  3 21:44:17 2017 - [info]   Connecting to root@172.16.1.243(172.16.1.243:22).. 
        Checking slave recovery environment settings..
          Opening /application/mysql/data/relay-log.info ... ok.
          Relay log found at /application/mysql/data, up to slave02-relay-bin.000002
          Temporary relay log file is /application/mysql/data/slave02-relay-bin.000002
          Testing mysql connection and privileges.. done.
          Testing mysqlbinlog output.. done.
          Cleaning up test file(s).. done.
      Mon Apr  3 21:44:18 2017 - [info] Slaves settings check done.
      Mon Apr  3 21:44:18 2017 - [info] 
      172.16.1.241 (current master)
       +--172.16.1.242
       +--172.16.1.243
      
      Mon Apr  3 21:44:18 2017 - [info] Checking replication health on 172.16.1.242..
      Mon Apr  3 21:44:18 2017 - [info]  ok.
      Mon Apr  3 21:44:18 2017 - [info] Checking replication health on 172.16.1.243..
      Mon Apr  3 21:44:18 2017 - [info]  ok.
      Mon Apr  3 21:44:18 2017 - [warning] master_ip_failover_script is not defined.
      Mon Apr  3 21:44:18 2017 - [warning] shutdown_script is not defined.
      Mon Apr  3 21:44:18 2017 - [info] Got exit code 0 (Not master dead).
      
      MySQL Replication Health is OK.
    • VI. MHA failover simulation

      1) Before every MHA experiment, it is best to run the following checks first

      [root@manager ~]# masterha_check_ssh --conf=/usr/local/mha/mha.cnf
      [root@manager ~]# masterha_check_repl --conf=/usr/local/mha/mha.cnf
      

      # make sure both commands return without errors, then start the MHA service

      2) Start the MHA service on the manager and keep watching the log output

      [root@manager ~]# nohup masterha_manager --conf=/usr/local/mha/mha.cnf > /tmp/mha_manager.log 2>&1 &
      [root@manager ~]# ps -ef |grep masterha |grep -v 'grep'
      root      2840  2470  2 10:53 pts/0    00:00:00 perl /usr/local/bin/masterha_manager --conf=/usr/local/mha/mha.cnf
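
      Once the manager is running, its state can also be checked with masterha_check_status (covered again in the maintenance section); it prints the monitored master and the PING state, and exits non-zero when the manager is not running:

      [root@manager ~]# masterha_check_status --conf=/usr/local/mha/mha.cnf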
      
      

      3) Verify that failover happens automatically after the master goes down

      # before the test, check the replication status on slave01 and slave02

      #slave01

      [root@slave01 ~]# mysql -e 'show slave status\G' |egrep 'Slave_IO_Running:|Slave_SQL_Running:'
                   Slave_IO_Running: Yes
                  Slave_SQL_Running: Yes
      
      

      #slave02

      [root@slave02 ~]# mysql -e 'show slave status\G' |egrep 'Slave_IO_Running:|Slave_SQL_Running:'
                   Slave_IO_Running: Yes
                  Slave_SQL_Running: Yes
      

      # stop the MySQL service on master

      [root@master ~]# service mysqld stop
      Shutting down MySQL (Percona Server)..... SUCCESS! 
      

      # check the manager node log on manager

      [root@manager ~]# cat /usr/local/mha/manager.log
      
      ----- Failover Report -----
      
      mha: MySQL Master failover 172.16.1.241 to 172.16.1.242 succeeded
      
      Master 172.16.1.241 is down!
      
      Check MHA Manager logs at manager:/usr/local/mha/manager.log for details.
      
      Started automated(non-interactive) failover.
      The latest slave 172.16.1.242(172.16.1.242:3306) has all relay logs for recovery.
      Selected 172.16.1.242 as a new master.
      172.16.1.242: OK: Applying all logs succeeded.
      172.16.1.243: This host has the latest relay log events.
      Generating relay diff files from the latest slave succeeded.
      172.16.1.243: OK: Applying all logs succeeded. Slave started, replicating from 172.16.1.242.
      172.16.1.242: Resetting slave info succeeded.
      Master failover to 172.16.1.242(172.16.1.242:3306) completed successfully.
      

      The output above shows the whole MHA failover process, which consists of the following steps:

      1. Configuration check phase: the cluster configuration is validated.
      2. Handling of the dead master: the VIP is removed and the host may also be powered off (not explored further here).
      3. The relay log difference between the dead master and the most up-to-date slave is copied and saved in the MHA Manager working directory.
      4. The slave with the most recent updates is identified.
      5. The saved binlog events are applied.
      6. One slave is promoted to the new master.
      7. The other slaves are repointed to the new master and start replicating from it.

      Verify the new master (172.16.1.242)

      # check slave02's replication status

      [root@slave02 ~]#  mysql  -e 'show slave status\G' |egrep 'Master_Host|Slave_IO_Running:|Slave_SQL_Running:'
                        Master_Host: 172.16.1.242     # replication now points at the new master
                   Slave_IO_Running: Yes  # replication is healthy
                  Slave_SQL_Running: Yes
      

      4) Restore the master

      # on manager, remove the failover marker file

      [root@manager ~]# cat /usr/local/mha/mha.failover.complete 
      [root@manager ~]# rm -rf /usr/local/mha/mha.failover.complete
      

      # restart the MySQL service on master

      [root@master ~]# service mysqld start
      Starting MySQL... SUCCESS! 
      

      # find the CHANGE MASTER statement in the manager log

      [root@manager ~]# grep MASTER_HOST /usr/local/mha/manager.log
      Mon Apr  3 21:50:59 2017 - [info]  All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='172.16.1.242', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000016', MASTER_LOG_POS=107, MASTER_USER='backup', MASTER_PASSWORD='xxx';
      

      # start replication on master; the password is backup

      The master_log_file and master_log_pos values must be the same as in the CHANGE MASTER statement found in the manager log above.

      mysql> change master to master_host='172.16.1.242',master_user='backup',master_password='backup',master_port=3306,master_log_file='mysql-bin.000016',master_log_pos=107;
      Query OK, 0 rows affected (1.02 sec)
      
      mysql> start slave;
      Query OK, 0 rows affected (0.00 sec)
      
      
      

      # run on master and slave02 to check that replication is healthy; master is shown here, slave02 is the same

      [root@master ~]#  mysql  -e 'show slave status\G' |egrep 'Master_Host|Slave_IO_Running:|Slave_SQL_Running:'
                        Master_Host: 172.16.1.242
                   Slave_IO_Running: Yes
                  Slave_SQL_Running: Yes
      

      5) Start the MHA manager service again and stop slave01

      [root@manager ~]# nohup masterha_manager --conf=/usr/local/mha/mha.cnf > /tmp/mha_manager.log 2>&1 &
      

      # stop the MySQL service on slave01

      [root@slave01 ~]# service mysqld stop
      Shutting down MySQL... SUCCESS
      
      [root@manager ~]# tail -f /usr/local/mha/manager.log 
      ----- Failover Report -----
      
      mha: MySQL Master failover 172.16.1.242 to 172.16.1.241 succeeded
      
      Master 172.16.1.242 is down!
      
      Check MHA Manager logs at manager:/usr/local/mha/manager.log for details.
      
      Started automated(non-interactive) failover.
      The latest slave 172.16.1.241(172.16.1.241:3306) has all relay logs for recovery.
      Selected 172.16.1.241 as a new master.
      172.16.1.241: OK: Applying all logs succeeded.
      172.16.1.243: This host has the latest relay log events.
      Generating relay diff files from the latest slave succeeded.
      172.16.1.243: OK: Applying all logs succeeded. Slave started, replicating from 172.16.1.241.
      172.16.1.241: Resetting slave info succeeded.
      Master failover to 172.16.1.241(172.16.1.241:3306) completed successfully.
      

      Quick recovery steps when a switchover goes wrong

      [root@slave01 ~]# service mysqld stop
      Shutting down MySQL... SUCCESS
      
      [root@manager mha]# tail -f /usr/local/mha/manager.log 
      ----- Failover Report -----
      
      mha: MySQL Master failover 172.16.1.242
      
      Master 172.16.1.242 is down!
      
      Check MHA Manager logs at manager:/usr/local/mha/manager.log for details.
      
      Started automated(non-interactive) failover.
      The latest slave 172.16.1.241(172.16.1.241:3306) has all relay logs for recovery.
      Got Error so couldn't continue failover from here.
      
      # Failing back did not work; the cause turned out to be a typo in [server1] of /usr/local/mha/mha.cnf on the manager
      # (a rather basic mistake that took a long time to find; it is a good chance to show how to recover to the previous state).
      
      hostname=172.16.1.241
      master_binlog_dir=/application/mysql/data/
      candidate_master=1r   # an extra 'r' had been typed here; the corrected entry follows
      hostname=172.16.1.241
      master_binlog_dir=/application/mysql/data/
      candidate_master=1  
      
      # manually restore everything to the previous state:
      
      #manager
      [root@manager ~]# rm -rf /usr/local/mha/mha.failover.complete
      [root@manager ~]# rm -rf /usr/local/mha/mha.failover.error 
      [root@manager ~]#  nohup masterha_manager --conf=/usr/local/mha/mha.cnf > /tmp/mha_manager.log 2>&1 &
      
      
      #master
      [root@master ~]# mysql
      mysql> stop slave;
      mysql> reset slave;
      mysql> show master status\G
      *************************** 1. row ***************************
                  File: mysql-bin.000013     
              Position: 107
          Binlog_Do_DB: 
      Binlog_Ignore_DB: 
      1 row in set (0.00 sec)
      
      
      #slave01
      [root@slave01 ~]# mysql
      mysql> stop slave;
      mysql> change master to master_host='172.16.1.241',master_user='backup',master_password='backup',master_port=3306,master_log_file='mysql-bin.000013',master_log_pos=107;
      mysql> start slave;
      
      # slave01 and slave02 are back in their previous state
      [root@slave01 ~]# mysql  -e 'show slave status\G' |egrep 'Master_Host|Slave_IO_Running:|Slave_SQL_Running:'
                        Master_Host: 172.16.1.241
                   Slave_IO_Running: Yes
                  Slave_SQL_Running: Yes
      
      
      [root@slave02 ~]#  mysql  -e 'show slave status\G' |egrep 'Master_Host|Slave_IO_Running:|Slave_SQL_Running:'
                        Master_Host: 172.16.1.241
                   Slave_IO_Running: Yes
                  Slave_SQL_Running: Yes
                  
      

      # check the manager node log on manager

      [root@manager ~]# cat /usr/local/mha/manager.log
      ----- Failover Report -----
      
      mha: MySQL Master failover 172.16.1.242
      
      Master 172.16.1.242 is down!
      
      Check MHA Manager logs at manager:/usr/local/mha/manager.log for details.
      
      Started automated(non-interactive) failover.
      The latest slave 172.16.1.241(172.16.1.241:3306) has all relay logs for recovery.
      Got Error so couldn't continue failover from here.
      

      6) Restore slave01

      # remove the failover marker file

      [root@manager ~]# rm -rf /usr/local/mha/mha.failover.complete
      

      # restart the MySQL service

      [root@slave01 ~]# service mysqld start
      Starting MySQL.. SUCCESS! 
      

      # find the CHANGE MASTER statement in the manager log

      [root@manager ~]#  grep MASTER_HOST /usr/local/mha/manager.log
      Tue Apr  4 02:47:33 2017 - [info]  All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='172.16.1.241', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000015', MASTER_LOG_POS=107, MASTER_USER='backup', MASTER_PASSWORD='xxx';
      

      # start replication on slave01; the password is backup, so change MASTER_PASSWORD='xxx' to MASTER_PASSWORD='backup'

      [root@slave01 ~]# mysql
      
      mysql> stop slave;
      
      mysql> CHANGE MASTER TO MASTER_HOST='172.16.1.241', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000015', MASTER_LOG_POS=107, MASTER_USER='backup', MASTER_PASSWORD='backup';
      Query OK, 0 rows affected (0.39 sec)
      
      mysql> start slave;
      Query OK, 0 rows affected (0.00 sec)
       
      

      # run on slave01 and slave02 to check that replication is healthy

      #slave01
      [root@slave01 ~]# mysql  -e 'show slave status\G' |egrep 'Master_Host|Slave_IO_Running:|Slave_SQL_Running:'
                        Master_Host: 172.16.1.241
                   Slave_IO_Running: Yes
                  Slave_SQL_Running: Yes
              
      #slave02            
      [root@slave02 ~]#  mysql  -e 'show slave status\G' |egrep 'Master_Host|Slave_IO_Running:|Slave_SQL_Running:'
                        Master_Host: 172.16.1.241
                   Slave_IO_Running: Yes
                  Slave_SQL_Running: Yes
      

      7) Restart the MHA manager service

      [root@manager ~]# nohup masterha_manager --conf=/usr/local/mha/mha.cnf > /tmp/mha_manager.log 2>&1 &
      [1] 30389
      

      VII. MySQL high availability with a VIP

      1) Edit /usr/local/mha/mha.cnf

      [server default]
      user=mha_rep
      password=123456
      manager_workdir=/usr/local/mha
      manager_log=/usr/local/mha/manager.log
      ssh_user=root
      master_ip_failover_script=/usr/local/mha/scripts/master_ip_failover     # add the script that manages the VIP
      repl_user=backup
      repl_password=backup
      ping_interval=1
      
      [server1]
      hostname=172.16.1.241
      master_binlog_dir=/application/mysql/data/
      candidate_master=1
      port=3306
      
      [server2]
      hostname=172.16.1.242
      master_binlog_dir=/application/mysql/data/
      candidate_master=1
      port=3306
      
      
      [server3]
      hostname=172.16.1.243
      master_binlog_dir=/application/mysql/data/
      port=3306
      no_master=1
      
      
      

      2) Edit the script /usr/local/mha/scripts/master_ip_failover

      #!/usr/bin/env perl
      use strict;
      use warnings FATAL => 'all';
        
      use Getopt::Long;
      
      my (
          $command,          $ssh_user,        $orig_master_host, $orig_master_ip,
          $orig_master_port, $new_master_host, $new_master_ip,    $new_master_port
      );
        
      my $vip = '172.16.1.240';            # the VIP address
      my $key = '1';
      my $ssh_start_vip = "/sbin/ifconfig eth1:$key $vip";        # bring the VIP up on the chosen interface
      my $ssh_stop_vip = "/sbin/ifconfig eth1:$key down";     # my machines have two NICs; eth1 is on the 172.16.1.0 network, so the VIP is bound to eth1 (eth0 is on 10.0.0.0/24)
        
      GetOptions(
          'command=s'          => $command,
          'ssh_user=s'         => $ssh_user,
          'orig_master_host=s' => $orig_master_host,
          'orig_master_ip=s'   => $orig_master_ip,
          'orig_master_port=i' => $orig_master_port,
          'new_master_host=s'  => $new_master_host,
          'new_master_ip=s'    => $new_master_ip,
          'new_master_port=i'  => $new_master_port,
      );
        
      exit &main();
        
      sub main {
        
          print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
        
          if ( $command eq "stop" || $command eq "stopssh" ) {
        
              my $exit_code = 1;
              eval {
                  print "Disabling the VIP on old master: $orig_master_host \n";
                  &stop_vip();
                  $exit_code = 0;
              };
              if ($@) {
                  warn "Got Error: $@\n";
                  exit $exit_code;
              }
              exit $exit_code;
          }
          elsif ( $command eq "start" ) {
        
              my $exit_code = 10;
              eval {
                  print "Enabling the VIP - $vip on the new master - $new_master_host \n";
                  &start_vip();
                  $exit_code = 0;
              };
              if ($@) {
                  warn $@;
                  exit $exit_code;
              }
              exit $exit_code;
          }
          elsif ( $command eq "status" ) {
              print "Checking the Status of the script.. OK \n";
              exit 0;
          }
          else {
              &usage();
              exit 1;
          }
      }
      sub start_vip() {
          `ssh $ssh_user@$new_master_host " $ssh_start_vip "`;
      }
      # A simple system call that disable the VIP on the old_master
       sub stop_vip() {
           `ssh $ssh_user@$orig_master_host " $ssh_stop_vip "`;
      }
            
      sub usage {
             print "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
      }
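
      Before relying on the script, it helps to make it executable and run its status branch once by hand; this only exercises the "status" path shown above and does not touch the VIP:

      [root@manager ~]# chmod +x /usr/local/mha/scripts/master_ip_failover
      [root@manager ~]# /usr/local/mha/scripts/master_ip_failover --command=status
      Checking the Status of the script.. OK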
      

      3) Simulate a failure and switch over

      # stop the MySQL service on master

      [root@master ~]# service mysqld stop
      Shutting down MySQL... SUCCESS! 
      

      # check slave02's replication status

      [root@slave02 ~]#  mysql  -e 'show slave status\G' |egrep 'Master_Host|Slave_IO_Running:|Slave_SQL_Running:'
                        Master_Host: 172.16.1.242
                   Slave_IO_Running: Yes
                  Slave_SQL_Running: Yes
      
      

      # check slave01's IP addresses

      [root@slave01 ~]# ifconfig
      eth0      Link encap:Ethernet  HWaddr 00:1C:42:58:08:EF  
                inet addr:10.0.0.242  Bcast:10.0.0.255  Mask:255.255.255.0
                inet6 addr: fe80::21c:42ff:fe58:8ef/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:6925 errors:0 dropped:0 overruns:0 frame:0
                TX packets:2869 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000 
                RX bytes:679548 (663.6 KiB)  TX bytes:420365 (410.5 KiB)
      
      eth0:1    Link encap:Ethernet  HWaddr 00:1C:42:58:08:EF  
                inet addr:172.16.1.240  Bcast:172.16.255.255  Mask:255.255.0.0
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      
      eth1      Link encap:Ethernet  HWaddr 00:1C:42:F4:DF:3E  
                inet addr:172.16.1.242  Bcast:172.16.1.255  Mask:255.255.255.0
                inet6 addr: fe80::21c:42ff:fef4:df3e/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:10272 errors:0 dropped:0 overruns:0 frame:0
                TX packets:7875 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000 
                RX bytes:1575148 (1.5 MiB)  TX bytes:1644494 (1.5 MiB)
      
      eth1:1    Link encap:Ethernet  HWaddr 00:1C:42:F4:DF:3E  
                inet addr:172.16.1.240  Bcast:172.16.255.255  Mask:255.255.0.0
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      # the VIP we configured has been added automatically
      lo        Link encap:Local Loopback  
                inet addr:127.0.0.1  Mask:255.0.0.0
                inet6 addr: ::1/128 Scope:Host
                UP LOOPBACK RUNNING  MTU:65536  Metric:1
                RX packets:640 errors:0 dropped:0 overruns:0 frame:0
                TX packets:640 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0 
                RX bytes:51251 (50.0 KiB)  TX bytes:51251 (50.0 KiB)
      

      4) Recover the MySQL service on master in the same way as described at the beginning. A hedged recap follows.
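
      As a hedged recap of the recovery procedure shown earlier (the binlog file name and position below are placeholders that must be taken from the manager log, not literal values):

      # on manager: remove the failover marker, then look up the CHANGE MASTER statement
      rm -f /usr/local/mha/mha.failover.complete
      grep MASTER_HOST /usr/local/mha/manager.log

      # on master: start MySQL and re-attach it as a slave of the new master (172.16.1.242)
      service mysqld start
      mysql -e "CHANGE MASTER TO MASTER_HOST='172.16.1.242', MASTER_PORT=3306, \
        MASTER_LOG_FILE='<file from manager.log>', MASTER_LOG_POS=<pos from manager.log>, \
        MASTER_USER='backup', MASTER_PASSWORD='backup'; START SLAVE;"

      # on manager: start the MHA manager again
      nohup masterha_manager --conf=/usr/local/mha/mha.cnf > /tmp/mha_manager.log 2>&1 &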

    • VIII. Routine MHA maintenance commands

      1. Check that SSH login works

      masterha_check_ssh --conf=/usr/local/mha/mha.cnf
      

      2. Check that replication is set up correctly

      masterha_check_repl --conf=/usr/local/mha/mha.cnf
      

      3. Start MHA

      nohup masterha_manager --conf=/usr/local/mha/mha.cnf > /tmp/mha_manager.log 2>&1 &
      

      4. Check the running status

      masterha_check_status --conf=/usr/local/mha/mha.cnf
      

      5. Stop MHA

      masterha_stop --conf=/usr/local/mha/mha.cnf
      

      6. Restarting after a failover

      # every failover creates a marker file in the working directory (app1.failover.complete by default; here it is mha.failover.complete, named after the config). The next failover refuses to run while this file exists, so it has to be removed manually.

      rm -rf /usr/local/mha/mha.failover.complete
      

      IX. FAQ

      1. Possible error 1

      [root@server02 mha4mysql-node-0.53]# perl Makefile.PL
      Can't locate ExtUtils/MakeMaker.pm in @INC (@INC contains: inc /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at inc/Module/Install/Can.pm line 6.
      BEGIN failed--compilation aborted at inc/Module/Install/Can.pm line 6.
      Compilation failed in require at inc/Module/Install.pm line 307.
      Can't locate ExtUtils/MakeMaker.pm in @INC (@INC contains: inc /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at inc/Module/Install/Makefile.pm line 4.
      BEGIN failed--compilation aborted at inc/Module/Install/Makefile.pm line 4.
      Compilation failed in require at inc/Module/Install.pm line 307.
      Can't locate ExtUtils/MM_Unix.pm in @INC (@INC contains: inc /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at inc/Module/Install/Metadata.pm line 316.
      
      

      Solution:

      yum install cpan -y
      yum install perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker -y
      cpan ExtUtils::Install
      
      

      If you would rather not install via cpan, use this command instead:

      yum install perl-ExtUtils-Embed -y
      

      2. Possible error 2

      [root@server02 mha4mysql-node-0.53]# perl Makefile.PL
      Can't locate ExtUtils/MakeMaker.pm in @INC (@INC contains: /usr/local/lib64/perl      
      BEGIN failed--compilation aborted at Makefile.PL line 3. 
      
      

      Solution:

      yum install perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker
      

      3. Possible error 3

      [root@server01 ~]# masterha_check_ssh --conf=/etc/masterha/app1.cnf

      The error:

      
      Sun Apr  2 18:58:10 2017 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
      Sun Apr  2 18:58:10 2017 - [info] Reading application default configurations from /etc/masterha/app1.cnf..
      Sun Apr  2 18:58:10 2017 - [info] Reading server configurations from /etc/masterha/app1.cnf..
      Sun Apr  2 18:58:10 2017 - [info] Starting SSH connection tests..
      Sun Apr  2 18:58:11 2017 - [error][/usr/local/share/perl5/MHA/SSHCheck.pm, ln63] 
      Sun Apr  2 18:58:10 2017 - [debug]  Connecting via SSH from root@172.16.1.50(172.16.1.50:22) to root@172.16.1.60(172.16.1.60:22)..
      Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
      Sun Apr  2 18:58:10 2017 - [error][/usr/local/share/perl5/MHA/SSHCheck.pm, ln107] SSH connection from root@172.16.1.50(172.16.1.50:22) to root@172.16.1.60(172.16.1.60:22) failed!
      Sun Apr  2 18:58:11 2017 - [error][/usr/local/share/perl5/MHA/SSHCheck.pm, ln63] 
      Sun Apr  2 18:58:10 2017 - [debug]  Connecting via SSH from root@172.16.1.60(172.16.1.60:22) to root@172.16.1.50(172.16.1.50:22)..
      Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
      Sun Apr  2 18:58:10 2017 - [error][/usr/local/share/perl5/MHA/SSHCheck.pm, ln107] SSH connection from root@172.16.1.60(172.16.1.60:22) to root@172.16.1.50(172.16.1.50:22) failed!
      Sun Apr  2 18:58:11 2017 - [error][/usr/local/share/perl5/MHA/SSHCheck.pm, ln63] 
      Sun Apr  2 18:58:11 2017 - [debug]  Connecting via SSH from root@172.16.1.70(172.16.1.70:22) to root@172.16.1.50(172.16.1.50:22)..
      Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
      Sun Apr  2 18:58:11 2017 - [error][/usr/local/share/perl5/MHA/SSHCheck.pm, ln107] SSH connection from root@172.16.1.70(172.16.1.70:22) to root@172.16.1.50(172.16.1.50:22) failed!
      SSH Configuration Check Failed!
       at /usr/local/bin/masterha_check_ssh line 44
      

      Analysis: the program drives the SSH checks from the manager, so it connects from mysql-test3 to mysql-test and then from mysql-test to mysql-test2. The failure is on that second hop, which prompted for a key password and made the test fail. Therefore every machine must have key-based login to every other machine.

      4. Possible error 4

      mysql> change master to master_host='172.16.1.241',master_user='backup',master_password='backup',master_port=3306,master_log_file='mysql-bin.000007',master_log_pos=333;

      The error:

      ERROR 1201 (HY000): Could not initialize master info structure; more error messages can be found in the MySQL error log

      Solution:

      mysql> reset slave;
      Query OK, 0 rows affected (0.00 sec)
      
      mysql> change master to master_host='172.16.1.241',master_user='backup',master_password='backup',master_port=3306,master_log_file='mysql-bin.000007',master_log_pos=333;
      Query OK, 0 rows affected (0.15 sec)
      
      mysql> start slave;
      Query OK, 0 rows affected (0.01 sec)
      
      mysql> show slave status\G
      

      5. Possible error 5

      Can't locate Log/Dispatch.pm in @INC (this error took 22 hours to solve)

      [root@manager ~]# masterha_check_ssh --conf=/usr/local/mha/mha.cnf

      Can't locate Log/Dispatch.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /usr/local/share/perl5/MHA/MasterMonitor.pm line 28.
      BEGIN failed--compilation aborted at /usr/local/share/perl5/MHA/MasterMonitor.pm line 28.
      Compilation failed in require at /usr/local/bin/masterha_manager line 26.
      BEGIN failed--compilation aborted at /usr/local/bin/masterha_manager line 26.
      
      $ sudo cpan
      cpan[1]> install CPAN
      cpan[2]> install Module::Build
      cpan[3]> quit
      $ sudo cpan
      cpan[1]> install Log::Dispatch
      cpan[2]> install Log::Dispatch::FileRotate
      cpan[3]> quit
      

      6. Tips

      [root@server02 ~]# mysqldump -uroot -p123456 --master-data=2 --single-transaction -R --triggers -A --events > /server/backup/all_bak_$(date +%F).sql

      -u  database login user
      
      -p  database login password
      
      --events  mysqldump does not dump the mysql.event table by default during a full export, so this option is needed; without it you get the warning: Warning: Skipping the data of table mysql.event. Specify the --events option explicitly
      
      --master-data=2  record the master's binlog file name and position at backup time (as a comment)
      
      --single-transaction  take a consistent snapshot; -R backs up stored procedures and functions
      
      --triggers  back up triggers
      
      -A  back up all databases
      
      See mysqldump --help for more details, or http://note.youdao.com/noteshare?id=60607599966788d19a4d46d4ccd2ce9d
      
      
      

      Check the replication status (replication is working):

      [root@server03 ~]#  mysql -uroot -p123456 -e 'show slave status\G' | egrep 'Slave_IO|Slave_SQL|Until_Log_Pos'
                     Slave_IO_State: Waiting for master to send event
                   Slave_IO_Running: Yes      # IO thread
                  Slave_SQL_Running: Yes      # SQL thread
                      Until_Log_Pos: 0        # position for the UNTIL condition (replication delay is shown by Seconds_Behind_Master)
      

      Edit the app1.cnf configuration file; for annotated descriptions of the parameters, see the separate MHA app1.cnf configuration notes.

      (2) Configure how relay logs are purged (on every slave node):

      server03
      [root@server03 ~]# mysql -uroot -p123456 -e 'set global relay_log_purge=0'
      
      server04
      [root@server04 ~]# mysql -uroot -p123456 -e 'set global relay_log_purge=0'
      

      Note:

      During an MHA failover, slave recovery depends on information in the relay logs, so automatic relay log purging must be set to OFF and relay logs purged manually instead.

      By default, a slave's relay logs are deleted automatically once the SQL thread has executed them. In an MHA environment, however, these relay logs may be needed to recover other slaves, so automatic deletion has to be disabled.

      Periodic relay log cleanup must take replication delay into account. On an ext3 filesystem, deleting a large file takes noticeable time and can cause serious replication delay. To avoid this, a hard link to the relay log is created first, because on Linux removing a large file through a hard link is fast. (The same hard-link trick is commonly used when dropping large tables in MySQL.)

      The MHA Node package includes the purge_relay_logs tool. It creates hard links to the relay logs, executes SET GLOBAL relay_log_purge=1, waits a few seconds for the SQL thread to switch to a new relay log, and then executes SET GLOBAL relay_log_purge=0.

      The purge_relay_logs script takes the following parameters (a usage sketch follows the list):

      --user                            MySQL user name
      --password                        MySQL password
      --port                            port number
      --workdir                         directory in which the relay log hard links are created; defaults to /var/tmp. Creating hard links across different partitions fails, so point this at a directory on the same filesystem as the relay logs; after the script completes successfully, the hard-linked relay log files are removed
      --disable_relay_log_purge         by default, if relay_log_purge=1 the script does nothing and exits; with this option the script sets relay_log_purge to 0 before purging the relay logs and leaves it OFF afterwards
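
      A hedged usage sketch, assuming a local account with the required privileges (the examples in this article use root/123456 on some hosts) and a work directory on the same partition as the relay logs; the cron line only illustrates staggering the job so the slaves do not purge at the same moment:

      # run on each slave; keeps relay_log_purge=0 while purging via hard links
      purge_relay_logs --user=root --password=123456 --port=3306 \
        --workdir=/application/mysql/data --disable_relay_log_purge

      # example crontab entry (use a different minute/hour on each slave)
      0 4 * * * /usr/local/bin/purge_relay_logs --user=root --password=123456 \
        --disable_relay_log_purge --workdir=/application/mysql/data >> /var/log/purge_relay_logs.log 2>&1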