  • ProxySQL + MGR: read/write splitting and transparent primary-node failover

      

    A previous article covered basic ProxySQL usage. This one shows how the ProxySQL middleware, placed in front of MySQL Group Replication, implements read/write splitting and switches automatically to a new primary when the current primary fails, with the application completely unaware of the process. MySQL Group Replication (MGR) can elect a new primary after the old one fails, but it is not practical for the application layer to switch the IP it connects to; with MGR + ProxySQL, however, the failover to the new primary happens automatically and transparently to the application.

    The implementation idea (per the architecture diagram above) is as follows: the three nodes run in multi-primary mode, the application connects only to the ProxySQL middleware, and ProxySQL decides per statement which node to use based on the type of SQL (whether or not it is a SELECT): one writable node and two read-only nodes (all three are in fact writable; ProxySQL is what enforces the read/write split). If the default writable node goes down, a scheduler that ProxySQL runs periodically promotes one of the read-only nodes to writable, which is what makes the primary failure invisible to the application. Throughout this whole process the application needs no changes at all; from the moment the failure happens to the moment connections point at the new primary and service resumes, the gap is on the order of seconds.
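    As a quick sketch of what this looks like from the application's point of view (the actual setup follows in sections I-IV; the proxysql account and the kevin.haha test table used here are created there), the application only ever talks to ProxySQL's traffic port 6033, and ProxySQL picks the backend per statement:
    
    # hypothetical application-side commands: all traffic goes to ProxySQL (172.16.60.214:6033), never to a MySQL node directly
    [root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select * from kevin.haha"                    # routed to a read node (hostgroup 2)
    [root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "insert into kevin.haha values(99,'test')"    # routed to the write node (hostgroup 1)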

    I. Environment preparation

    172.16.60.211       MGR-node1 (master1)     Centos7.5
    172.16.60.212       MGR-node2 (master2)     Centos7.5
    172.16.60.213       MGR-node3 (master3)     Centos7.5
    172.16.60.214       ProxySQL-node           Centos7.5
    
    [root@MGR-node1 ~]# cat /etc/redhat-release
    CentOS Linux release 7.5.1804 (Core)
       
    For convenience in this lab, stop the firewall on all nodes
    [root@MGR-node1 ~]# systemctl stop firewalld
    [root@MGR-node1 ~]# firewall-cmd --state
    not running
       
    [root@MGR-node1 ~]# cat /etc/sysconfig/selinux |grep "SELINUX=disabled"
    SELINUX=disabled
    [root@MGR-node1 ~]# setenforce 0            
    setenforce: SELinux is disabled
    [root@MGR-node1 ~]# getenforce              
    Disabled
     
    One key point deserves special attention: every MySQL node must have its hostname set correctly, and every member must be reachable by hostname!
    
    That means the hostnames must be mapped in /etc/hosts on every node; otherwise adding nodes to the group will fail later and members will get stuck in RECOVERING!
    [root@MGR-node1 ~]# cat /etc/hosts
    ........
    172.16.60.211    MGR-node1
    172.16.60.212    MGR-node2
    172.16.60.213    MGR-node3

    II. Install MySQL 5.7 on the three nodes

    Install MySQL 5.7 via yum on the three MySQL nodes; for reference: https://www.cnblogs.com/kevingrace/p/8340690.html
       
    Install the MySQL yum repository
    [root@MGR-node1 ~]# yum localinstall https://dev.mysql.com/get/mysql57-community-release-el7-8.noarch.rpm
       
    Install MySQL 5.7
    [root@MGR-node1 ~]# yum install -y mysql-community-server
       
    Start the MySQL server and enable it at boot
    [root@MGR-node1 ~]# systemctl start mysqld.service
    [root@MGR-node1 ~]# systemctl enable mysqld.service
       
    Set a login password
    Since 5.7, MySQL no longer allows logging in with an empty password after a fresh install. To improve security it generates a random password for the administrator's first login;
    this password is recorded in /var/log/mysqld.log and can be retrieved with the following command:
    [root@MGR-node1 ~]# cat /var/log/mysqld.log|grep 'A temporary password'
    2019-01-11T05:53:17.824073Z 1 [Note] A temporary password is generated for root@localhost: TaN.k:*Qw2xs
       
    Log in to MySQL with the password found above (TaN.k:*Qw2xs) and reset it to 123456
    [root@MGR-node1 ~]# mysql -p                 # enter the default password: TaN.k:*Qw2xs
    .............
    mysql> set global validate_password_policy=0;
    Query OK, 0 rows affected (0.00 sec)
       
    mysql> set global validate_password_length=1;
    Query OK, 0 rows affected (0.00 sec)
       
    mysql> set password=password("123456");
    Query OK, 0 rows affected, 1 warning (0.00 sec)
       
    mysql> flush privileges;
    Query OK, 0 rows affected (0.00 sec)
       
    Check the MySQL version
    [root@MGR-node1 ~]# mysql -p123456
    ........
    mysql> select version();
    +-----------+
    | version() |
    +-----------+
    | 5.7.24    |
    +-----------+
    1 row in set (0.00 sec)
      
    =====================================================================
    Tip
    After the default MySQL 5.7 installation above, running statements may fail with:
    ERROR 1819 (HY000): Your password does not satisfy the current policy requirements
      
    This error comes from MySQL's password policy setting validate_password_policy, which takes one of three values: 0 (LOW), 1 (MEDIUM, the default) or 2 (STRONG).
    Workaround:
    set global validate_password_policy=0;
    set global validate_password_length=1;
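    If you prefer the relaxed policy to survive a restart in this lab environment, the same two settings can also be placed in /etc/my.cnf (shown as an optional sketch only; not part of the original setup):
    
    [mysqld]
    # lab only: relax the validate_password plugin so a short password such as 123456 is accepted
    validate_password_policy = 0
    validate_password_length = 1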

    III. Deploy the MGR group replication environment (multi-primary mode)

    For reference: https://www.cnblogs.com/kevingrace/p/10260685.html
    
    Because other tests were run on these machines earlier, the MySQL environment on all three nodes is wiped clean first:
    # systemctl  stop  mysqld
    # rm -rf /var/lib/mysql
    # systemctl start mysqld
    
    Then reset the password again
    # cat /var/log/mysqld.log|grep 'A temporary password'
    # mysql -p123456
    mysql> set global validate_password_policy=0;
    mysql> set global validate_password_length=1;
    mysql> set password=password("123456");
    mysql> flush privileges;
    
    =======================================================
    1) Operations on MGR-node1
    [root@MGR-node1 ~]# mysql -p123456
    .........
    mysql> select uuid();
    +--------------------------------------+
    | uuid()                               |
    +--------------------------------------+
    | ae09faae-34bb-11e9-9f91-005056ac6820 |
    +--------------------------------------+
    1 row in set (0.00 sec)
    
    [root@MGR-node1 ~]# cp /etc/my.cnf /etc/my.cnf.bak
    [root@MGR-node1 ~]# >/etc/my.cnf
    [root@MGR-node1 ~]# vim /etc/my.cnf
    [mysqld]
    datadir = /var/lib/mysql
    socket = /var/lib/mysql/mysql.sock
           
    symbolic-links = 0
           
    log-error = /var/log/mysqld.log
    pid-file = /var/run/mysqld/mysqld.pid
       
    #GTID:
    server_id = 1
    gtid_mode = on
    enforce_gtid_consistency = on
       
    master_info_repository=TABLE
    relay_log_info_repository=TABLE
    binlog_checksum=NONE
           
    #binlog
    log_bin = mysql-bin
    log-slave-updates = 1
    binlog_format = row
    sync-master-info = 1
    sync_binlog = 1
          
    #relay log
    skip_slave_start = 1
       
    transaction_write_set_extraction=XXHASH64      
    loose-group_replication_group_name="5db40c3c-180c-11e9-afbf-005056ac6820"     
    loose-group_replication_start_on_boot=off    
    loose-group_replication_local_address= "172.16.60.211:24901"
    loose-group_replication_group_seeds= "172.16.60.211:24901,172.16.60.212:24901,172.16.60.213:24901"
    loose-group_replication_bootstrap_group=off
    loose-group_replication_single_primary_mode=off      
    loose-group_replication_enforce_update_everywhere_checks=on    
    loose-group_replication_ip_whitelist="172.16.60.0/24,127.0.0.1/8"   
    
    Restart the MySQL service
    [root@MGR-node1 ~]# systemctl restart mysqld
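    Before continuing, a quick sanity check that the new my.cnf was picked up is useful. (The loose- prefix lets mysqld start even though the group_replication plugin is not installed yet, so the group_replication_* variables themselves only become visible after INSTALL PLUGIN below.) A minimal check:
    
    [root@MGR-node1 ~]# mysql -p123456 -e "SHOW VARIABLES WHERE Variable_name IN ('server_id','gtid_mode','enforce_gtid_consistency','binlog_format');"
    # expected values per the config above: server_id=1, gtid_mode=ON, enforce_gtid_consistency=ON, binlog_format=ROW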
    
    Log in to MySQL and perform the setup
    [root@MGR-node1 ~]# mysql -p123456
    ............
    mysql> SET SQL_LOG_BIN=0;  
    Query OK, 0 rows affected (0.00 sec)
    
    mysql> GRANT REPLICATION SLAVE ON *.* TO rpl_slave@'%' IDENTIFIED BY 'slave@123';
    Query OK, 0 rows affected, 1 warning (0.00 sec)
    
    mysql> FLUSH PRIVILEGES;
    Query OK, 0 rows affected (0.01 sec)
    
    mysql> reset master;
    Query OK, 0 rows affected (0.19 sec)
    
    mysql> SET SQL_LOG_BIN=1;
    Query OK, 0 rows affected (0.00 sec)
    
    mysql> CHANGE MASTER TO MASTER_USER='rpl_slave', MASTER_PASSWORD='slave@123' FOR CHANNEL 'group_replication_recovery';
    Query OK, 0 rows affected, 2 warnings (0.33 sec)
    
    mysql> INSTALL PLUGIN group_replication SONAME 'group_replication.so';
    Query OK, 0 rows affected (0.03 sec)
    
    mysql> SHOW PLUGINS;
    +----------------------------+----------+--------------------+----------------------+---------+
    | Name                       | Status   | Type               | Library              | License |
    +----------------------------+----------+--------------------+----------------------+---------+
    ...............
    ...............
    | group_replication          | ACTIVE   | GROUP REPLICATION  | group_replication.so | GPL     |
    +----------------------------+----------+--------------------+----------------------+---------+
    46 rows in set (0.00 sec)
    
    mysql> SET GLOBAL group_replication_bootstrap_group=ON; 
    Query OK, 0 rows affected (0.00 sec)
    
    mysql> START GROUP_REPLICATION;
    Query OK, 0 rows affected (2.34 sec)
    
    mysql> SET GLOBAL group_replication_bootstrap_group=OFF;
    Query OK, 0 rows affected (0.00 sec)
    
    mysql> SELECT * FROM performance_schema.replication_group_members;
    +---------------------------+--------------------------------------+-------------+-------------+--------------+
    | CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
    +---------------------------+--------------------------------------+-------------+-------------+--------------+
    | group_replication_applier | 42ca8591-34bb-11e9-8296-005056ac6820 | MGR-node1   |        3306 | ONLINE       |
    +---------------------------+--------------------------------------+-------------+-------------+--------------+
    1 row in set (0.00 sec)
    
    Make sure the group_replication_applier member above shows MEMBER_STATE "ONLINE"!
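    If a member instead gets stuck in RECOVERING (typically the hostname-resolution issue mentioned in section I), the MySQL error log configured above usually tells you why; for example:
    
    [root@MGR-node1 ~]# grep -i 'group_replication' /var/log/mysqld.log | tail -n 20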
     
    Create a test database
    mysql> CREATE DATABASE kevin CHARACTER SET utf8 COLLATE utf8_general_ci;      
    Query OK, 1 row affected (0.03 sec)
     
    mysql> use kevin;
    Database changed
    mysql> create table if not exists haha (id int(10) PRIMARY KEY AUTO_INCREMENT,name varchar(50) NOT NULL);
    Query OK, 0 rows affected (0.24 sec)
     
    mysql> insert into kevin.haha values(1,"wangshibo"),(2,"guohuihui"),(3,"yangyang"),(4,"shikui");      
    Query OK, 4 rows affected (0.07 sec)
    Records: 4  Duplicates: 0  Warnings: 0
     
    mysql> select * from kevin.haha;
    +----+-----------+
    | id | name      |
    +----+-----------+
    |  1 | wangshibo |
    |  2 | guohuihui |
    |  3 | yangyang  |
    |  4 | shikui    |
    +----+-----------+
    4 rows in set (0.00 sec)
    
    =====================================================================
    2) Operations on MGR-node2
    [root@MGR-node2 ~]# cp /etc/my.cnf /etc/my.cnf.bak
    [root@MGR-node2 ~]# >/etc/my.cnf
    [root@MGR-node2 ~]# vim /etc/my.cnf
    [mysqld]
    datadir = /var/lib/mysql
    socket = /var/lib/mysql/mysql.sock
         
    symbolic-links = 0
         
    log-error = /var/log/mysqld.log
    pid-file = /var/run/mysqld/mysqld.pid
     
    #GTID:
    server_id = 2
    gtid_mode = on
    enforce_gtid_consistency = on
     
    master_info_repository=TABLE
    relay_log_info_repository=TABLE
    binlog_checksum=NONE
         
    #binlog
    log_bin = mysql-bin
    log-slave-updates = 1
    binlog_format = row
    sync-master-info = 1
    sync_binlog = 1
        
    #relay log
    skip_slave_start = 1
     
    transaction_write_set_extraction=XXHASH64
    loose-group_replication_group_name="5db40c3c-180c-11e9-afbf-005056ac6820"
    loose-group_replication_start_on_boot=off
    loose-group_replication_local_address= "172.16.60.212:24901"
    loose-group_replication_group_seeds= "172.16.60.211:24901,172.16.60.212:24901,172.16.60.213:24901"
    loose-group_replication_bootstrap_group=off
    loose-group_replication_single_primary_mode=off
    loose-group_replication_enforce_update_everywhere_checks=on
    loose-group_replication_ip_whitelist="172.16.60.0/24,127.0.0.1/8"
    
    Restart the MySQL service
    [root@MGR-node2 ~]# systemctl restart mysqld
    Log in to MySQL and perform the setup
    [root@MGR-node2 ~]# mysql -p123456
    .........
    mysql> SET SQL_LOG_BIN=0;
    Query OK, 0 rows affected (0.00 sec)
    
    mysql> GRANT REPLICATION SLAVE ON *.* TO rpl_slave@'%' IDENTIFIED BY 'slave@123';
    Query OK, 0 rows affected, 1 warning (0.00 sec)
    
    mysql> FLUSH PRIVILEGES;
    Query OK, 0 rows affected (0.00 sec)
    
    mysql> reset master;
    Query OK, 0 rows affected (0.17 sec)
    
    mysql> SET SQL_LOG_BIN=1;
    Query OK, 0 rows affected (0.00 sec)
    
    mysql> CHANGE MASTER TO MASTER_USER='rpl_slave', MASTER_PASSWORD='slave@123' FOR CHANNEL 'group_replication_recovery';
    Query OK, 0 rows affected, 2 warnings (0.21 sec)
    
    mysql> INSTALL PLUGIN group_replication SONAME 'group_replication.so';
    Query OK, 0 rows affected (0.20 sec)
    
    mysql> SHOW PLUGINS;
    +----------------------------+----------+--------------------+----------------------+---------+
    | Name                       | Status   | Type               | Library              | License |
    +----------------------------+----------+--------------------+----------------------+---------+
    .............
    .............
    | group_replication          | ACTIVE   | GROUP REPLICATION  | group_replication.so | GPL     |
    +----------------------------+----------+--------------------+----------------------+---------+
    46 rows in set (0.00 sec)
    
    mysql> START GROUP_REPLICATION;
    Query OK, 0 rows affected (6.25 sec)
    
    mysql> SELECT * FROM performance_schema.replication_group_members;
    +---------------------------+--------------------------------------+-------------+-------------+--------------+
    | CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
    +---------------------------+--------------------------------------+-------------+-------------+--------------+
    | group_replication_applier | 4281f7b7-34bb-11e9-8949-00505688047c | MGR-node2   |        3306 | ONLINE       |
    | group_replication_applier | 42ca8591-34bb-11e9-8296-005056ac6820 | MGR-node1   |        3306 | ONLINE       |
    +---------------------------+--------------------------------------+-------------+-------------+--------------+
    2 rows in set (0.00 sec)
    
    A quick check shows that the data added on MGR-node1 has already been replicated here
    mysql> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | kevin              |
    | mysql              |
    | performance_schema |
    | sys                |
    +--------------------+
    5 rows in set (0.00 sec)
    
    mysql> select * from kevin.haha;
    +----+-----------+
    | id | name      |
    +----+-----------+
    |  1 | wangshibo |
    |  2 | guohuihui |
    |  3 | yangyang  |
    |  4 | shikui    |
    +----+-----------+
    4 rows in set (0.00 sec)
    
    =====================================================================
    3) Operations on MGR-node3
    [root@MGR-node3 ~]# cp /etc/my.cnf /etc/my.cnf.bak
    [root@MGR-node3 ~]# >/etc/my.cnf
    [root@MGR-node3 ~]# vim /etc/my.cnf
    [mysqld]
    datadir = /var/lib/mysql
    socket = /var/lib/mysql/mysql.sock
         
    symbolic-links = 0
         
    log-error = /var/log/mysqld.log
    pid-file = /var/run/mysqld/mysqld.pid
     
    #GTID:
    server_id = 3
    gtid_mode = on
    enforce_gtid_consistency = on
     
    master_info_repository=TABLE
    relay_log_info_repository=TABLE
    binlog_checksum=NONE
         
    #binlog
    log_bin = mysql-bin
    log-slave-updates = 1
    binlog_format = row
    sync-master-info = 1
    sync_binlog = 1
        
    #relay log
    skip_slave_start = 1
     
    transaction_write_set_extraction=XXHASH64
    loose-group_replication_group_name="5db40c3c-180c-11e9-afbf-005056ac6820"
    loose-group_replication_start_on_boot=off
    loose-group_replication_local_address= "172.16.60.213:24901"
    loose-group_replication_group_seeds= "172.16.60.211:24901,172.16.60.212:24901,172.16.60.213:24901"
    loose-group_replication_bootstrap_group=off
    loose-group_replication_single_primary_mode=off
    loose-group_replication_enforce_update_everywhere_checks=on
    loose-group_replication_ip_whitelist="172.16.60.0/24,127.0.0.1/8"
    
    Restart the MySQL service
    [root@MGR-node3 ~]# systemctl restart mysqld
    
    Log in to MySQL and perform the setup
    [root@MGR-node3 ~]# mysql -p123456
    ..........
    mysql> SET SQL_LOG_BIN=0;
    Query OK, 0 rows affected (0.00 sec)
    
    mysql> GRANT REPLICATION SLAVE ON *.* TO rpl_slave@'%' IDENTIFIED BY 'slave@123';
    Query OK, 0 rows affected, 1 warning (0.00 sec)
    
    mysql> FLUSH PRIVILEGES;
    Query OK, 0 rows affected (0.01 sec)
    
    mysql> reset master;
    Query OK, 0 rows affected (0.10 sec)
    
    mysql> SET SQL_LOG_BIN=1;
    Query OK, 0 rows affected (0.00 sec)
    
    mysql> CHANGE MASTER TO MASTER_USER='rpl_slave', MASTER_PASSWORD='slave@123' FOR CHANNEL 'group_replication_recovery';
    Query OK, 0 rows affected, 2 warnings (0.27 sec)
    
    mysql> INSTALL PLUGIN group_replication SONAME 'group_replication.so';
    Query OK, 0 rows affected (0.04 sec)
    
    mysql> SHOW PLUGINS;
    +----------------------------+----------+--------------------+----------------------+---------+
    | Name                       | Status   | Type               | Library              | License |
    +----------------------------+----------+--------------------+----------------------+---------+
    .............
    | group_replication          | ACTIVE   | GROUP REPLICATION  | group_replication.so | GPL     |
    +----------------------------+----------+--------------------+----------------------+---------+
    46 rows in set (0.00 sec)
    
    mysql> START GROUP_REPLICATION;
    Query OK, 0 rows affected (4.54 sec)
    
    mysql> SELECT * FROM performance_schema.replication_group_members;
    +---------------------------+--------------------------------------+-------------+-------------+--------------+
    | CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
    +---------------------------+--------------------------------------+-------------+-------------+--------------+
    | group_replication_applier | 4281f7b7-34bb-11e9-8949-00505688047c | MGR-node2   |        3306 | ONLINE       |
    | group_replication_applier | 42ca8591-34bb-11e9-8296-005056ac6820 | MGR-node1   |        3306 | ONLINE       |
    | group_replication_applier | 456216bd-34bb-11e9-bbd1-005056880888 | MGR-node3   |        3306 | ONLINE       |
    +---------------------------+--------------------------------------+-------------+-------------+--------------+
    3 rows in set (0.00 sec)
    
    A quick check shows that the data added on the other nodes has already been replicated here
    mysql> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | kevin              |
    | mysql              |
    | performance_schema |
    | sys                |
    +--------------------+
    5 rows in set (0.00 sec)
    
    mysql> select * from kevin.haha;
    +----+-----------+
    | id | name      |
    +----+-----------+
    |  1 | wangshibo |
    |  2 | guohuihui |
    |  3 | yangyang  |
    |  4 | shikui    |
    +----+-----------+
    4 rows in set (0.00 sec)
    
    =====================================================================
    4) Group replication data-sync test
    Run the following on any one of the nodes
    mysql> SELECT * FROM performance_schema.replication_group_members;
    +---------------------------+--------------------------------------+-------------+-------------+--------------+
    | CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
    +---------------------------+--------------------------------------+-------------+-------------+--------------+
    | group_replication_applier | 2658b203-1565-11e9-9f8b-005056880888 | MGR-node3   |        3306 | ONLINE       |
    | group_replication_applier | 2c1efc46-1565-11e9-ab8e-00505688047c | MGR-node2   |        3306 | ONLINE       |
    | group_replication_applier | 317e2aad-1565-11e9-9c2e-005056ac6820 | MGR-node1   |        3306 | ONLINE       |
    +---------------------------+--------------------------------------+-------------+-------------+--------------+
    3 rows in set (0.00 sec)
     
    As shown above, a GTID-based group replication environment has been successfully deployed across the three nodes MGR-node1, MGR-node2 and MGR-node3.
    From now on, updating data on any one of the three nodes will replicate the new data to the other two!
     
    1) Update data on MGR-node1
    mysql> delete from kevin.haha where id>2;
    Query OK, 2 rows affected (0.14 sec)
     
    Then check on MGR-node2 and MGR-node3: the updated data has already been replicated!
    mysql> select * from kevin.haha;
    +----+-----------+
    | id | name      |
    +----+-----------+
    |  1 | wangshibo |
    |  2 | guohuihui |
    +----+-----------+
    2 rows in set (0.00 sec)
     
    2) Update data on MGR-node2
    mysql> insert into kevin.haha values(11,"beijing"),(12,"shanghai"),(13,"anhui");
    Query OK, 3 rows affected (0.06 sec)
    Records: 3  Duplicates: 0  Warnings: 0
     
    Then check on MGR-node1 and MGR-node3: the updated data has already been replicated!
    mysql> select * from kevin.haha;
    +----+-----------+
    | id | name      |
    +----+-----------+
    |  1 | wangshibo |
    |  2 | guohuihui |
    | 11 | beijing   |
    | 12 | shanghai  |
    | 13 | anhui     |
    +----+-----------+
    5 rows in set (0.00 sec)
     
    3) Update data on MGR-node3
    mysql> update kevin.haha set id=100 where name="anhui";
    Query OK, 1 row affected (0.16 sec)
    Rows matched: 1  Changed: 1  Warnings: 0
     
    mysql> delete from kevin.haha where id=12;
    Query OK, 1 row affected (0.22 sec)
     
    Then check on MGR-node1 and MGR-node2: the updated data has already been replicated!
    mysql> select * from kevin.haha;
    +-----+-----------+
    | id  | name      |
    +-----+-----------+
    |   1 | wangshibo |
    |   2 | guohuihui |
    |  11 | beijing   |
    | 100 | anhui     |
    +-----+-----------+
    4 rows in set (0.00 sec)

    IV. ProxySQL read/write splitting and transparent primary failover

    1) Install a MySQL client, used on this machine to connect to ProxySQL's admin interface

    [root@ProxySQL-node ~]# vim /etc/yum.repos.d/mariadb.repo
    [mariadb]
    name = MariaDB
    baseurl = http://yum.mariadb.org/10.3.5/centos6-amd64
    gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
    gpgcheck=1
       
    Install the MySQL client
    [root@ProxySQL-node ~]# yum install -y MariaDB-client
      
    ============================================================================
    If you hit the following error:
    Error: MariaDB-compat conflicts with 1:mariadb-libs-5.5.60-1.el7_5.x86_64
     You could try using --skip-broken to work around the problem
     You could try running: rpm -Va --nofiles --nodigest
       
    Workaround:
    [root@ProxySQL-node ~]# rpm -qa|grep mariadb
    mariadb-libs-5.5.60-1.el7_5.x86_64
     
    [root@ProxySQL-node ~]# rpm -e mariadb-libs-5.5.60-1.el7_5.x86_64 --nodeps
    [root@ProxySQL-node ~]# yum install -y MariaDB-client

    2) Install ProxySQL

    The ProxySQL RPM can be downloaded from: https://pan.baidu.com/s/1S1_b5DKVCpZSOUNmtCXrrg
    Extraction code: 5t1c
      
    [root@ProxySQL-node ~]# yum install -y perl-DBI perl-DBD-MySQL
    [root@ProxySQL-node ~]# rpm -ivh proxysql-1.4.8-1-centos7.x86_64.rpm --force
      
    Start ProxySQL
    [root@ProxySQL-node ~]# /etc/init.d/proxysql start
    Starting ProxySQL: DONE!
    [root@ProxySQL-node ~]# ss -lntup|grep proxy    
    tcp    LISTEN     0      128       *:6080                  *:*                   users:(("proxysql",pid=29931,fd=11))
    tcp    LISTEN     0      128       *:6032                  *:*                   users:(("proxysql",pid=29931,fd=28))
    tcp    LISTEN     0      128       *:6033                  *:*                   users:(("proxysql",pid=29931,fd=27))
    tcp    LISTEN     0      128       *:6033                  *:*                   users:(("proxysql",pid=29931,fd=26))
    tcp    LISTEN     0      128       *:6033                  *:*                   users:(("proxysql",pid=29931,fd=25))
    tcp    LISTEN     0      128       *:6033                  *:*                   users:(("proxysql",pid=29931,fd=24))
      
    [root@ProxySQL-node ~]# mysql -uadmin -padmin -h127.0.0.1 -P6032
    ............
    ............
    MySQL [(none)]> show databases;
    +-----+---------------+-------------------------------------+
    | seq | name          | file                                |
    +-----+---------------+-------------------------------------+
    | 0   | main          |                                     |
    | 2   | disk          | /var/lib/proxysql/proxysql.db       |
    | 3   | stats         |                                     |
    | 4   | monitor       |                                     |
    | 5   | stats_history | /var/lib/proxysql/proxysql_stats.db |
    +-----+---------------+-------------------------------------+
    5 rows in set (0.000 sec)
    
    Next, initialize ProxySQL by deleting all of its previous configuration data
    
    MySQL [(none)]> delete from scheduler ;
    Query OK, 0 rows affected (0.000 sec)
     
    MySQL [(none)]> delete from mysql_servers;
    Query OK, 3 rows affected (0.000 sec)
     
    MySQL [(none)]> delete from mysql_users;
    Query OK, 1 row affected (0.000 sec)
     
    MySQL [(none)]> delete from mysql_query_rules;
    Query OK, 0 rows affected (0.000 sec)
     
    MySQL [(none)]> delete from mysql_group_replication_hostgroups ;
    Query OK, 1 row affected (0.000 sec)
     
    MySQL [(none)]> LOAD MYSQL VARIABLES TO RUNTIME;
    Query OK, 0 rows affected (0.000 sec)
     
    MySQL [(none)]> SAVE MYSQL VARIABLES TO DISK;
    Query OK, 94 rows affected (0.175 sec)
     
    MySQL [(none)]> LOAD MYSQL SERVERS TO RUNTIME;
    Query OK, 0 rows affected (0.003 sec)
     
    MySQL [(none)]> SAVE MYSQL SERVERS TO DISK;
    Query OK, 0 rows affected (0.140 sec)
     
    MySQL [(none)]> LOAD MYSQL USERS TO RUNTIME;
    Query OK, 0 rows affected (0.000 sec)
     
    MySQL [(none)]> SAVE MYSQL USERS TO DISK;
    Query OK, 0 rows affected (0.050 sec)
     
    MySQL [(none)]> LOAD SCHEDULER TO RUNTIME;
    Query OK, 0 rows affected (0.000 sec)
     
    MySQL [(none)]> SAVE SCHEDULER TO DISK;
    Query OK, 0 rows affected (0.096 sec)
     
    MySQL [(none)]> LOAD MYSQL QUERY RULES TO RUNTIME;
    Query OK, 0 rows affected (0.000 sec)
     
    MySQL [(none)]> SAVE MYSQL QUERY RULES TO DISK;
    Query OK, 0 rows affected (0.156 sec)
     
    MySQL [(none)]>

    3) Create the accounts ProxySQL needs on the database side (run on any one of the three MGR nodes; it replicates to the others automatically)

    [root@MGR-node1 ~]# mysql -p123456
    .........
    mysql> CREATE USER 'proxysql'@'%' IDENTIFIED BY 'proxysql';    
    Query OK, 0 rows affected (0.07 sec)
    
    mysql> GRANT ALL ON * . * TO  'proxysql'@'%';
    Query OK, 0 rows affected (0.06 sec)
    
    mysql> create user 'sbuser'@'%' IDENTIFIED BY 'sbpass';    
    Query OK, 0 rows affected (0.05 sec)
    
    mysql> GRANT ALL ON * . * TO 'sbuser'@'%';  
    Query OK, 0 rows affected (0.08 sec)
    
    mysql> FLUSH PRIVILEGES;    
    Query OK, 0 rows affected (0.07 sec)
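    Before wiring this account into ProxySQL, it can be worth confirming from the ProxySQL host that the new account can actually reach the MGR nodes over the network (a quick sketch using the client installed in step 1):
    
    [root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h172.16.60.211 -P3306 -e "select @@hostname"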

    4) Create the functions and view used to check MGR node status (run on any one of the three MGR nodes; it replicates to the others automatically)

    On MGR-node1, create the system view sys.gr_member_routing_candidate_status; this view provides ProxySQL with the group-replication health metrics it monitors.
    Download the addition_to_sys.sql script and import it into MySQL on MGR-node1 as shown below (once executed on MGR-node1, it replicates to the other two nodes).
     
    Download: https://pan.baidu.com/s/1bNYHtExy2fmqwvEyQS3sWg
    Extraction code: wst7
    
    [root@MGR-node1 ~]# vim /root/addition_to_sys.sql
    USE sys;
     
    DELIMITER $$
     
    CREATE FUNCTION IFZERO(a INT, b INT)
    RETURNS INT
    DETERMINISTIC
    RETURN IF(a = 0, b, a)$$
     
    CREATE FUNCTION LOCATE2(needle TEXT(10000), haystack TEXT(10000), offset INT)
    RETURNS INT
    DETERMINISTIC
    RETURN IFZERO(LOCATE(needle, haystack, offset), LENGTH(haystack) + 1)$$
     
    CREATE FUNCTION GTID_NORMALIZE(g TEXT(10000))
    RETURNS TEXT(10000)
    DETERMINISTIC
    RETURN GTID_SUBTRACT(g, '')$$
     
    CREATE FUNCTION GTID_COUNT(gtid_set TEXT(10000))
    RETURNS INT
    DETERMINISTIC
    BEGIN
      DECLARE result BIGINT DEFAULT 0;
      DECLARE colon_pos INT;
      DECLARE next_dash_pos INT;
      DECLARE next_colon_pos INT;
      DECLARE next_comma_pos INT;
      SET gtid_set = GTID_NORMALIZE(gtid_set);
      SET colon_pos = LOCATE2(':', gtid_set, 1);
      WHILE colon_pos != LENGTH(gtid_set) + 1 DO
         SET next_dash_pos = LOCATE2('-', gtid_set, colon_pos + 1);
         SET next_colon_pos = LOCATE2(':', gtid_set, colon_pos + 1);
         SET next_comma_pos = LOCATE2(',', gtid_set, colon_pos + 1);
         IF next_dash_pos < next_colon_pos AND next_dash_pos < next_comma_pos THEN
           SET result = result +
             SUBSTR(gtid_set, next_dash_pos + 1,
                    LEAST(next_colon_pos, next_comma_pos) - (next_dash_pos + 1)) -
             SUBSTR(gtid_set, colon_pos + 1, next_dash_pos - (colon_pos + 1)) + 1;
         ELSE
           SET result = result + 1;
         END IF;
         SET colon_pos = next_colon_pos;
      END WHILE;
      RETURN result;
    END$$
     
    CREATE FUNCTION gr_applier_queue_length()
    RETURNS INT
    DETERMINISTIC
    BEGIN
      RETURN (SELECT sys.gtid_count( GTID_SUBTRACT( (SELECT
    Received_transaction_set FROM performance_schema.replication_connection_status
    WHERE Channel_name = 'group_replication_applier' ), (SELECT
    @@global.GTID_EXECUTED) )));
    END$$
     
    CREATE FUNCTION gr_member_in_primary_partition()
    RETURNS VARCHAR(3)
    DETERMINISTIC
    BEGIN
      RETURN (SELECT IF( MEMBER_STATE='ONLINE' AND ((SELECT COUNT(*) FROM
    performance_schema.replication_group_members WHERE MEMBER_STATE != 'ONLINE') >=
    ((SELECT COUNT(*) FROM performance_schema.replication_group_members)/2) = 0),
    'YES', 'NO' ) FROM performance_schema.replication_group_members JOIN
    performance_schema.replication_group_member_stats USING(member_id));
    END$$
     
    CREATE VIEW gr_member_routing_candidate_status AS SELECT
    sys.gr_member_in_primary_partition() as viable_candidate,
    IF( (SELECT (SELECT GROUP_CONCAT(variable_value) FROM
    performance_schema.global_variables WHERE variable_name IN ('read_only',
    'super_read_only')) != 'OFF,OFF'), 'YES', 'NO') as read_only,
    sys.gr_applier_queue_length() as transactions_behind, Count_Transactions_in_queue as 'transactions_to_cert' from performance_schema.replication_group_member_stats;$$
     
    DELIMITER ;
    
    Import the addition_to_sys.sql file
    [root@MGR-node1 ~]# mysql -p123456 < /root/addition_to_sys.sql 
    mysql: [Warning] Using a password on the command line interface can be insecure.
    
    The view can now be queried on each of the three MySQL nodes:
    [root@MGR-node1 ~]# mysql -p123456
    ............
    mysql> select * from sys.gr_member_routing_candidate_status;
    +------------------+-----------+---------------------+----------------------+
    | viable_candidate | read_only | transactions_behind | transactions_to_cert |
    +------------------+-----------+---------------------+----------------------+
    | YES              | NO        |                   0 |                    0 |
    +------------------+-----------+---------------------+----------------------+
    1 row in set (0.01 sec)

    5) Add the account in ProxySQL

    [root@ProxySQL-node ~]# mysql -uadmin -padmin -h127.0.0.1 -P6032
    ...........
    MySQL [(none)]> INSERT INTO MySQL_users(username,password,default_hostgroup) VALUES ('proxysql','proxysql',1); 
    Query OK, 1 row affected (0.000 sec)
     
    MySQL [(none)]> UPDATE global_variables SET variable_value='proxysql' where variable_name='mysql-monitor_username';   
    Query OK, 1 row affected (0.001 sec)
     
    MySQL [(none)]> UPDATE global_variables SET variable_value='proxysql' where variable_name='mysql-monitor_password';
    Query OK, 1 row affected (0.002 sec)
     
    MySQL [(none)]> LOAD MYSQL SERVERS TO RUNTIME;
    Query OK, 0 rows affected (0.006 sec)
     
    MySQL [(none)]> SAVE MYSQL SERVERS TO DISK;
    Query OK, 0 rows affected (0.387 sec)
     
    Test whether you can log in to the database normally
    [root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h 127.0.0.1 -P6033 -e"select @@hostname"
    +------------+
    | @@hostname |
    +------------+
    | MGR-node1  |
    +------------+ 
    
    ===================================================================
    If the test login above fails with:
    [root@ProxySQL-node ~]#  mysql -uproxysql -pproxysql -h 127.0.0.1 -P6033 -e"select @@hostname"
    ERROR 1045 (28000): ProxySQL Error: Access denied for user 'proxysql'@'127.0.0.1' (using password: YES)
    
    yet a check shows that the username and password have clearly already been set to proxysql:proxysql:
    MySQL [(none)]> select * from global_variables;   
    ..........
    | mysql-interfaces                                    | 0.0.0.0:6033       |
    | mysql-default_schema                                | information_schema |
    | mysql-stacksize                                     | 1048576            |
    | mysql-server_version                                | 5.5.30             |
    | mysql-connect_timeout_server                        | 3000               |
    | mysql-monitor_username                              | proxysql           |
    | mysql-monitor_password                              | proxysql           |
    
    Workaround: run the following commands in order
    MySQL [(none)]> LOAD MYSQL VARIABLES TO RUNTIME;
    MySQL [(none)]> SAVE MYSQL VARIABLES TO DISK;
    
    MySQL [(none)]> LOAD MYSQL SERVERS TO RUNTIME;
    MySQL [(none)]> SAVE MYSQL SERVERS TO DISK;
    
    MySQL [(none)]> LOAD MYSQL USERS TO RUNTIME;
    MySQL [(none)]> SAVE MYSQL USERS TO DISK;
    
    MySQL [(none)]> LOAD SCHEDULER TO RUNTIME;
    MySQL [(none)]> SAVE SCHEDULER TO DISK;
    
    MySQL [(none)]> LOAD MYSQL QUERY RULES TO RUNTIME;
    MySQL [(none)]> SAVE MYSQL QUERY RULES TO DISK;
    
    =========================================================
    If the test login then fails with:
    [root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h 127.0.0.1 -P6033 -e"select @@hostname"
    ERROR 9001 (HY000) at line 1: Max connect timeout reached while reaching hostgroup 1 after 10000ms
    
    this is because the three backend MGR MySQL nodes have not yet been added to ProxySQL; it works once the "Configure ProxySQL" step below has been completed
    [root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h 127.0.0.1 -P6033 -e"select @@hostname"
    +------------+
    | @@hostname |
    +------------+
    | MGR-node1  |
    +------------+

    6) Configure ProxySQL

    [root@ProxySQL-node ~]# mysql -uadmin -padmin -h127.0.0.1 -P6032  
    .............
    MySQL [(none)]> delete from mysql_servers;
    Query OK, 3 rows affected (0.000 sec)
     
    MySQL [(none)]> insert into mysql_servers (hostgroup_id, hostname, port) values(1,'172.16.60.211',3306);
    Query OK, 1 row affected (0.001 sec)
     
    MySQL [(none)]> insert into mysql_servers (hostgroup_id, hostname, port) values(1,'172.16.60.212',3306);
    Query OK, 1 row affected (0.000 sec)
     
    MySQL [(none)]> insert into mysql_servers (hostgroup_id, hostname, port) values(1,'172.16.60.213',3306);
    Query OK, 1 row affected (0.000 sec)
     
    MySQL [(none)]> insert into mysql_servers (hostgroup_id, hostname, port) values(2,'172.16.60.211',3306);
    Query OK, 1 row affected (0.000 sec)
     
    MySQL [(none)]> insert into mysql_servers (hostgroup_id, hostname, port) values(2,'172.16.60.212',3306);
    Query OK, 1 row affected (0.000 sec)
     
    MySQL [(none)]> insert into mysql_servers (hostgroup_id, hostname, port) values(2,'172.16.60.213',3306);
    Query OK, 1 row affected (0.000 sec)
     
    MySQL [(none)]> select * from  mysql_servers ;
    +--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    | hostgroup_id | hostname      | port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
    +--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    | 1            | 172.16.60.211 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 1            | 172.16.60.212 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 1            | 172.16.60.213 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 2            | 172.16.60.211 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 2            | 172.16.60.212 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 2            | 172.16.60.213 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    +--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    6 rows in set (0.001 sec)
     
    hostgroup_id = 1 is the write group; given the constraint set out above, only one node in it will end up active for writes.
    hostgroup_id = 2 is the read group and contains all MGR nodes. For now everything simply shows ONLINE; the status will change once the scheduler is configured.
     
    With the hostgroup setup above, all write operations are by default sent to the ONLINE node of hostgroup 1, i.e. to the write node,
    and all read operations are sent to the ONLINE nodes of hostgroup 2.
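    Keep in mind that ProxySQL routes according to its runtime configuration, so after each LOAD ... TO RUNTIME you can confirm what is actually in effect via the runtime_* tables, for example:
    
    MySQL [(none)]> select hostgroup_id,hostname,port,status from runtime_mysql_servers order by hostgroup_id,hostname;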
     
    Also make sure no ProxySQL read/write-split query rules are left over (they were configured during an earlier test, so they are deleted here to avoid affecting the tests below).
    MySQL [(none)]> delete from mysql_query_rules;
    Query OK, 2 rows affected (0.000 sec)
     
    MySQL [(none)]> commit;
    Query OK, 0 rows affected (0.000 sec)
     
    Finally, load the global_variables, mysql_servers and mysql_users tables to RUNTIME, and then persist them to DISK:
    MySQL [(none)]> LOAD MYSQL VARIABLES TO RUNTIME; 
    Query OK, 0 rows affected (0.001 sec)
     
    MySQL [(none)]> SAVE MYSQL VARIABLES TO DISK;
    Query OK, 94 rows affected (0.080 sec)
     
    MySQL [(none)]> LOAD MYSQL SERVERS TO RUNTIME;
    Query OK, 0 rows affected (0.003 sec)
     
    MySQL [(none)]> SAVE MYSQL SERVERS TO DISK;
    Query OK, 0 rows affected (0.463 sec)
     
    MySQL [(none)]> LOAD MYSQL USERS TO RUNTIME;
    Query OK, 0 rows affected (0.001 sec)
     
    MySQL [(none)]> SAVE MYSQL USERS TO DISK; 
    Query OK, 0 rows affected (0.134 sec)
    
    Verify the ProxySQL login once more
    [root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h 127.0.0.1 -P6033 -e"select @@hostname"
    +------------+
    | @@hostname |
    +------------+
    | MGR-node1  |
    +------------+

    7) Configure the scheduler
    First, download the appropriate script from GitHub: https://github.com/ZzzCrazyPig/proxysql_groupreplication_checker
    Three scripts are available there:
    proxysql_groupreplication_checker.sh: for multi-primary mode; read/write splitting and failover, with several nodes writable at the same time;
    gr_mw_mode_cheker.sh: for multi-primary mode; read/write splitting and failover, but with only one node writable at any given time;
    gr_sw_mode_checker.sh: for single-primary mode; read/write splitting and failover;
    Since this lab runs in multi-primary mode, proxysql_groupreplication_checker.sh is the script used here.

    The three scripts are also bundled on Baidu cloud: https://pan.baidu.com/s/1lUzr58BSA_U7wmYwsRcvzQ
    Extraction code: 9rm7
    
    Place the downloaded proxysql_groupreplication_checker.sh under /var/lib/proxysql/ and make it executable:
    [root@ProxySQL-node ~]# chmod a+x /var/lib/proxysql/proxysql_groupreplication_checker.sh
    [root@ProxySQL-node ~]# ll /var/lib/proxysql/proxysql_groupreplication_checker.sh       
    -rwxr-xr-x 1 root root 6081 Feb 20 14:25 /var/lib/proxysql/proxysql_groupreplication_checker.sh
    
    Finally, insert the following record into ProxySQL's scheduler table, then load it to RUNTIME so it takes effect, and persist it to disk as well.
    Statement to run:
    INSERT INTO scheduler(id,interval_ms,filename,arg1,arg2,arg3,arg4, arg5)
    VALUES (1,'10000','/var/lib/proxysql/proxysql_groupreplication_checker.sh','1','2','1','0','/var/lib/proxysql/proxysql_groupreplication_checker.log');
    
    As follows:
    [root@ProxySQL-node ~]# mysql -uadmin -padmin -h127.0.0.1 -P6032 
    ..............
    MySQL [(none)]> INSERT INTO scheduler(id,interval_ms,filename,arg1,arg2,arg3,arg4, arg5) VALUES (1,'10000','/var/lib/proxysql/proxysql_groupreplication_checker.sh','1','2','1','0','/var/lib/proxysql/proxysql_groupreplication_checker.log');
    Query OK, 1 row affected (0.000 sec)
    
    MySQL [(none)]> select * from scheduler;
    +----+--------+-------------+--------------------------------------------------------+------+------+------+------+---------------------------------------------------------+---------+
    | id | active | interval_ms | filename                                               | arg1 | arg2 | arg3 | arg4 | arg5                                                    | comment |
    +----+--------+-------------+--------------------------------------------------------+------+------+------+------+---------------------------------------------------------+---------+
    | 1  | 1      | 10000       | /var/lib/proxysql/proxysql_groupreplication_checker.sh | 1    | 2    | 1    | 0    | /var/lib/proxysql/proxysql_groupreplication_checker.log |         |
    +----+--------+-------------+--------------------------------------------------------+------+------+------+------+---------------------------------------------------------+---------+
    1 row in set (0.000 sec)
    
    MySQL [(none)]> LOAD SCHEDULER TO RUNTIME;
    Query OK, 0 rows affected (0.001 sec)
    
    MySQL [(none)]> SAVE SCHEDULER TO DISK;
    Query OK, 0 rows affected (0.118 sec)
    
    ==============================================================================
    Explanation of the scheduler columns:
    active: 1 enables the scheduler to run the script we provide
    interval_ms: the invocation interval (e.g. 5000 ms = 5 s means the script is invoked every 5 seconds)
    filename: the path of the script to run
    arg1~arg5: the arguments passed to the script
    
    The arguments of proxysql_groupreplication_checker.sh mean the following (see the log-tailing sketch after this list):
    arg1 is the hostgroup_id for writes
    arg2 is the hostgroup_id for reads
    arg3 is the number of writers we want active at the same time
    arg4 indicates whether the member acting as writer is also a candidate for reads
    arg5 is the log file
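    Since arg5 points the checker's output at /var/lib/proxysql/proxysql_groupreplication_checker.log, the simplest way to watch its decisions (one run every interval_ms, i.e. every 10 seconds here) is to tail that log:
    
    [root@ProxySQL-node ~]# tail -f /var/lib/proxysql/proxysql_groupreplication_checker.log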
    
    Once the scheduler entry is loaded, the checker analyses the current environment; mysql_servers then shows that only 172.16.60.211 is writable,
    while 172.16.60.212 and 172.16.60.213 are used for reads.
    
    MySQL [(none)]> select * from  mysql_servers ;              // run this a little while after the steps above, otherwise the result below will not show yet
    +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    | hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
    +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    | 1            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 1            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 2            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 2            | 172.16.60.212 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 2            | 172.16.60.213 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    6 rows in set (0.000 sec)
    
    Because arg4 of the scheduler is set to 0 here, the writable node is not used for reads. Let's set arg4 to 1 and see what happens:
    MySQL [(none)]> update scheduler set arg4=1;
    Query OK, 1 row affected (0.000 sec)
    
    MySQL [(none)]> select * from scheduler;
    +----+--------+-------------+--------------------------------------------------------+------+------+------+------+---------------------------------------------------------+---------+
    | id | active | interval_ms | filename                                               | arg1 | arg2 | arg3 | arg4 | arg5                                                    | comment |
    +----+--------+-------------+--------------------------------------------------------+------+------+------+------+---------------------------------------------------------+---------+
    | 1  | 1      | 10000       | /var/lib/proxysql/proxysql_groupreplication_checker.sh | 1    | 2    | 1    | 1    | /var/lib/proxysql/proxysql_groupreplication_checker.log |         |
    +----+--------+-------------+--------------------------------------------------------+------+------+------+------+---------------------------------------------------------+---------+
    1 row in set (0.000 sec)
    
    MySQL [(none)]> SAVE SCHEDULER TO DISK;
    Query OK, 0 rows affected (0.286 sec)
    
    MySQL [(none)]> LOAD SCHEDULER TO RUNTIME;
    Query OK, 0 rows affected (0.000 sec)
    
    MySQL [(none)]> select * from  mysql_servers;          // again, wait a moment after the steps above before running this
    +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    | hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
    +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    | 1            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 1            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 2            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 2            | 172.16.60.212 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 2            | 172.16.60.213 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    6 rows in set (0.000 sec)
    
    With arg4 set to 1, node 172.16.60.211 can serve reads as well as writes.
    
    For the tests below, set arg4 back to 0:
    MySQL [(none)]> update scheduler set arg4=0;
    Query OK, 1 row affected (0.000 sec)
    
    MySQL [(none)]> SAVE SCHEDULER TO DISK;
    Query OK, 0 rows affected (0.197 sec)
    
    MySQL [(none)]> LOAD SCHEDULER TO RUNTIME;
    Query OK, 0 rows affected (0.000 sec)
    
    MySQL [(none)]> select * from  mysql_servers;             // wait a moment before running this to get the result below
    +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    | hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
    +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    | 1            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 1            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 2            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 2            | 172.16.60.212 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 2            | 172.16.60.213 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    6 rows in set (0.000 sec)
    
    Each node's gr_member_routing_candidate_status view also shows whether that node is currently healthy;
    ProxySQL reads exactly this view to decide whether the node is usable.
    
    
    [root@MGR-node1 ~]# mysql -p123456              
    ...........
    mysql> select * from sys.gr_member_routing_candidate_status\G
    *************************** 1. row ***************************
        viable_candidate: YES
               read_only: NO
     transactions_behind: 0
    transactions_to_cert: 0
    1 row in set (0.00 sec)
    

    8) Set up read/write splitting

    MySQL [(none)]> insert into mysql_query_rules (active, match_pattern, destination_hostgroup, apply) values (1,"^SELECT",2,1);
    Query OK, 1 row affected (0.001 sec)
    
    MySQL [(none)]> LOAD MYSQL QUERY RULES TO RUNTIME;
    Query OK, 0 rows affected (0.001 sec)
    
    MySQL [(none)]> SAVE MYSQL QUERY RULES TO DISK;
    Query OK, 0 rows affected (0.264 sec)
    
    Explanation (a quick check of the rule table follows below):
    match_pattern rules are regular expressions;
    active controls whether this routing rule is enabled;
    match_pattern is the regular expression to match against;
    destination_hostgroup is the hostgroup such statements are forwarded to; here SELECTs are forwarded to group 2;
    apply = 1 means that once this pattern matches, no further rules are evaluated and the statement is forwarded directly.
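    Before loading the rules to runtime you can double-check what was inserted, for example:
    
    MySQL [(none)]> select rule_id,active,match_pattern,destination_hostgroup,apply from mysql_query_rules;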
    
    Since SELECT ... FOR UPDATE must run on group 1, an extra rule can be added for it:
    MySQL [(none)]> insert into mysql_query_rules(active,match_pattern,destination_hostgroup,apply) values(1,'^SELECT.*FOR UPDATE$',1,1); 
    Query OK, 1 row affected (0.001 sec)
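    As with the first rule, this additional rule only takes effect after the query rules are loaded to runtime (and saved to disk if it should survive a ProxySQL restart):
    
    MySQL [(none)]> LOAD MYSQL QUERY RULES TO RUNTIME;
    MySQL [(none)]> SAVE MYSQL QUERY RULES TO DISK;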
    
    Check from the ProxySQL host or any other client machine: SELECT statements keep going to 172.16.60.212 and 172.16.60.213
    [root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"
    mysql: [Warning] Using a password on the command line interface can be insecure.
    +------------+
    | @@hostname |
    +------------+
    | MGR-node3  |
    +------------+
    [root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"
    mysql: [Warning] Using a password on the command line interface can be insecure.
    +------------+
    | @@hostname |
    +------------+
    | MGR-node2  |
    +------------+
    [root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"
    mysql: [Warning] Using a password on the command line interface can be insecure.
    +------------+
    | @@hostname |
    +------------+
    | MGR-node2  |
    +------------+
    [root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"
    mysql: [Warning] Using a password on the command line interface can be insecure.
    +------------+
    | @@hostname |
    +------------+
    | MGR-node3  |
    +------------+
    

    9) Verify the read/write splitting on real data

    [root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"   
    +------------+
    | @@hostname |
    +------------+
    | MGR-node2  |
    +------------+
    [root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select * from kevin.haha"
    +-----+-----------+
    | id  | name      |
    +-----+-----------+
    |   1 | wangshibo |
    |   2 | guohuihui |
    |  11 | beijing   |
    | 100 | anhui     |
    +-----+-----------+
    
    [root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "delete from kevin.haha where id=1;"                      
    [root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "delete from kevin.haha where id=2;"
    [root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select * from kevin.haha"                
    +-----+---------+
    | id  | name    |
    +-----+---------+
    |  11 | beijing |
    | 100 | anhui   |
    +-----+---------+
    
    [root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e 'insert into kevin.haha values(21,"zhongguo"),(22,"xianggang"),(23,"taiwan");'
    
    [root@ProxySQL-node ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select * from kevin.haha"
    +-----+-----------+
    | id  | name      |
    +-----+-----------+
    |  11 | beijing   |
    |  21 | zhongguo  |
    |  22 | xianggang |
    |  23 | taiwan    |
    | 100 | anhui     |
    +-----+-----------+
    
    Finally, check the read/write split statistics on the ProxySQL admin side
    [root@ProxySQL-node ~]# mysql -uadmin -padmin -h 127.0.0.1 -P6032
    ..........
    MySQL [(none)]> select hostgroup,username,digest_text,count_star from stats_mysql_query_digest;
    +-----------+----------+------------------------------------------------------+------------+
    | hostgroup | username | digest_text                                          | count_star |
    +-----------+----------+------------------------------------------------------+------------+
    | 1         | proxysql | insert into kevin.haha values(?,?),(?,?),(?,?)       | 1          |
    | 1         | proxysql | insert into kevin.haha values(?,yangyang)            | 1          |
    | 1         | proxysql | delete from kevin.haha where id=?                    | 2          |
    | 1         | proxysql | select @@version_comment limit ?                     | 120        |
    | 1         | proxysql | KILL ?                                               | 8          |
    | 1         | proxysql | select @@hostname                                    | 11         |
    | 1         | proxysql | KILL QUERY ?                                         | 10         |
    | 2         | proxysql | select @@hostname, sleep(?)                          | 53         |
    | 1         | proxysql | insert into kevin.haha values(?,yangyang),(?,shikui) | 2          |
    | 1         | proxysql | show databases                                       | 1          |
    | 2         | proxysql | select @@hostname                                    | 31         |
    | 2         | proxysql | select * from kevin.haha                             | 4          |
    | 1         | proxysql | insert into kevin.haha values(?,wawa)                | 3          |
    +-----------+----------+------------------------------------------------------+------------+
    13 rows in set (0.002 sec)
    
    MySQL [(none)]> select * from  mysql_servers;
    +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    | hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
    +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    | 1            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 1            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 2            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 2            | 172.16.60.212 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 2            | 172.16.60.213 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    6 rows in set (0.000 sec)
    
    From the above we can see that:
    all write operations went to group 1, i.e. to node 172.16.60.211;
    all read operations went to group 2, i.e. to nodes 172.16.60.212 and 172.16.60.213.
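    If you want to repeat this verification from a clean slate, ProxySQL also exposes stats_mysql_query_digest_reset: selecting from it returns the current digests and clears the counters (standard ProxySQL behaviour, mentioned here only as a convenience):
    
    MySQL [(none)]> select hostgroup,username,digest_text,count_star from stats_mysql_query_digest_reset;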

    10) Making a failure invisible to the application

    In the read/write split configured above, 172.16.60.211 is the writable node and 172.16.60.212 / 172.16.60.213 are read-only.
    If 172.16.60.211 now switches to read-only mode, can the application keep writing, through another node, without noticing?
     
    Manually put 172.16.60.211 into read-only mode:
    [root@MGR-node1 ~]# mysql -p123456
    ........
    mysql> set global read_only=1;
    Query OK, 0 rows affected (0.00 sec)
     
    Then look at the state of mysql_servers: 172.16.60.212 in group 1 has automatically been set to ONLINE, while in group 2 172.16.60.211 and
    172.16.60.213 are now ONLINE; in other words, 172.16.60.212 has become the writable node and the other two are now read-only.
     
    [root@ProxySQL-node ~]# mysql -uadmin -padmin -h 127.0.0.1 -P6032
    ........
    MySQL [(none)]> select * from  mysql_servers;
    +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    | hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
    +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    | 1            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 1            | 172.16.60.212 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 2            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 2            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 2            | 172.16.60.213 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    6 rows in set (0.001 sec)
     
    Simulated client connections confirm that SELECT statements now go to 172.16.60.211 and 172.16.60.213. (Leave a short interval between tests; firing them in rapid succession may hit the same read node repeatedly.) A write-routing check follows after these SELECT tests.
    [root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"
    mysql: [Warning] Using a password on the command line interface can be insecure.
    +------------+
    | @@hostname |
    +------------+
    | MGR-node3  |
    +------------+
    [root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"
    mysql: [Warning] Using a password on the command line interface can be insecure.
    +------------+
    | @@hostname |
    +------------+
    | MGR-node1  |
    +------------+
    [root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"
    mysql: [Warning] Using a password on the command line interface can be insecure.
    +------------+
    | @@hostname |
    +------------+
    | MGR-node3  |
    +------------+
    [root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"
    mysql: [Warning] Using a password on the command line interface can be insecure.
    +------------+
    | @@hostname |
    +------------+
    | MGR-node1  |
    +------------+
    [root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select @@hostname"
    mysql: [Warning] Using a password on the command line interface can be insecure.
    +------------+
    | @@hostname |
    +------------+
    | MGR-node1  |
    +------------+
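    To verify that writes also land on the new writable node, one option is a small test table (test.t1 below is a hypothetical table made up for this check; any table with a primary key works, since group replication requires one, and the proxysql application user is assumed to have privileges on the test schema). The INSERT is a non-SELECT statement, so ProxySQL routes it to hostgroup 1, and the stored @@hostname shows which node actually executed it:
    # create the test table once, directly on any MGR node (it replicates to the group)
    [root@MGR-node1 ~]# mysql -p123456 -e "create database if not exists test; create table if not exists test.t1 (id int auto_increment primary key, host varchar(64));"
     
    # write through ProxySQL (port 6033); @@hostname is evaluated on the node that runs the INSERT
    [root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "insert into test.t1(host) values (@@hostname);"
     
    # read it back; the stored value should be the current writer, e.g. MGR-node2
    [root@MGR-node3 ~]# mysql -uproxysql -pproxysql -h172.16.60.214 -P6033 -e "select host from test.t1;"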
     
    Then switch 172.16.60.211 back to writable mode, and mysql_servers recovers to its original state.
    [root@MGR-node1 ~]# mysql -p123456
    ........
    mysql> set global read_only=0;
    Query OK, 0 rows affected (0.00 sec)
     
    Check the mysql_servers state again:
    [root@ProxySQL-node ~]# mysql -uadmin -padmin -h 127.0.0.1 -P6032
    .........
    MySQL [(none)]> select * from  mysql_servers;
    +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    | hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
    +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    | 1            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 1            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 2            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 2            | 172.16.60.212 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 2            | 172.16.60.213 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    6 rows in set (0.000 sec)
     
    Testing also shows that if group replication is stopped on the 172.16.60.211 node (stop group_replication), or the node goes down entirely (the mysql service dies), the mysql_servers table switches over to a new writer node correctly.
    Once 172.16.60.211 recovers and rejoins the group, mysql_servers sets it back to ONLINE as expected.
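    A minimal sketch of that failover test, run against whichever node is currently the writer (the commands below assume 172.16.60.211 / MGR-node1 holds the writer role at that moment):
    # on the writer: leave the group to simulate a failure
    [root@MGR-node1 ~]# mysql -p123456 -e "stop group_replication;"
     
    # on the ProxySQL node: after the scheduler interval, a new ONLINE writer appears in hostgroup 1
    [root@ProxySQL-node ~]# mysql -uadmin -padmin -h127.0.0.1 -P6032 -e "select hostgroup_id,hostname,status from mysql_servers;"
     
    # rejoin the group; the node is set back to ONLINE shortly afterwards
    [root@MGR-node1 ~]# mysql -p123456 -e "start group_replication;"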
    
    ======================================================================================================
    A problem you may run into:
    
    mysql>  select * from  mysql_servers ;
    +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    | hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
    +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    | 1            | 172.16.60.211 | 3306 | OFFLINE_HARD | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 2            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 2            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 2            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    4 rows in set (0.00 sec)
    
    That is, you may find that all nodes have gone OFFLINE as shown above. The ProxySQL scheduler log shows the following errors:
    [root@ProxySQL-node ~]# tail -f /var/lib/proxysql/proxysql.log
    ........
    [2019-02-18 16:23:52] read node [hostgroup_id: 2, hostname: 172.16.60.213, port: 3306, isOK: 0] is not OK, we will set it's status to be 'OFFLINE_SOFT'
    ERROR 1142 (42000) at line 1: SELECT command denied to user 'proxysql'@'172.16.60.214' for table 'gr_member_routing_candidate_status'
    [2019-02-18 16:23:55] current write node [hostgroup_id: 2, hostname: 172.17.61.131, port: 3306, isOK: 0] is not OK, we need to do switch over
    ERROR 1142 (42000) at line 1: SELECT command denied to user 'proxysql'@'172.16.60.214' for table 'gr_member_routing_candidate_status'
    [2019-02-18 16:23:55] read node [hostgroup_id: 2, hostname: 172.17.61.132, port: 3306, isOK: 0] is not OK, we will set it's status to be 'OFFLINE_SOFT'
    ERROR 1142 (42000) at line 1: SELECT command denied to user 'proxysql'@'172.16.60.214' for table 'gr_member_routing_candidate_status
    
    The log shows a privilege problem: the proxysql user does not have enough privileges to read the gr_member_routing_candidate_status view.
    
    Solution:
    [root@MGR-node1 ~]# mysql -p123456
    .........
    mysql> GRANT ALL ON *.* TO 'proxysql'@'%';
    mysql> flush privileges;
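    Granting ALL on *.* works but is broader than strictly needed. If you prefer least privilege, a SELECT grant on the monitoring view alone should be enough; this sketch assumes the gr_member_routing_candidate_status view lives in the sys schema (where the MGR check script created it earlier) with the default DEFINER security:
    mysql> GRANT SELECT ON sys.gr_member_routing_candidate_status TO 'proxysql'@'%';
    mysql> flush privileges;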
    
    Check again; with the privilege in place, the node states recover:
    [root@ProxySQL-node ~]# mysql -uadmin -padmin -h 127.0.0.1 -P6032
    .........
    MySQL [(none)]> select * from  mysql_servers;
    +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    | hostgroup_id | hostname      | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
    +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    | 1            | 172.16.60.211 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 1            | 172.16.60.212 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 1            | 172.16.60.213 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 2            | 172.16.60.211 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 2            | 172.16.60.212 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    | 2            | 172.16.60.213 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
    +--------------+---------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    6 rows in set (0.000 sec)
    

    At this point, ProxySQL provides a basic read/write-split setup for MGR, with primary-node failover that is transparent to the application.
