
    ProxySQL Tutorial : setup in a MySQL replication topology

    With the imminent release of ProxySQL as a stable version, and its presentation at Percona Live Amsterdam, I decided to write a few blog posts describing how to install, configure and use it without digging too much through the documentation.

    Moreover, practical examples are often easier to understand.

    This tutorial is a long one, although none of the topics is discussed in detail, to avoid making it even longer (or boring).

    For this tutorial, we will use a small setup with 4 nodes:

    - node1 (192.168.124.101) , mysql master

    - node2 (192.168.124.102) , mysql slave

    - node3 (192.168.124.103) , mysql slave

    - app (192.168.124.104) , application server where proxysql will also run
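
    The examples below address the nodes by name. If name resolution is not already in place, a matching /etc/hosts on each machine might look like this (a sketch, assuming no conflicting entries):

    192.168.124.101 node1
    192.168.124.102 node2
    192.168.124.103 node3
    192.168.124.104 app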

    Replication setup

    Without going into details on how to configure mysqld on the nodes (this is outside the scope of this tutorial), the nodes already have mysqld up and running, configured to support GTID, but replication is not running yet. We will configure replication during this tutorial from "app".
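
    For reference, a minimal my.cnf fragment compatible with such a setup might look like the following. Treat it as a sketch (server_id must be unique on each node), not the exact configuration used on these VMs:

    [mysqld]
    server_id                = 1      # must differ on every node
    log_bin                  = mysql-bin
    log_slave_updates        = ON     # required by gtid_mode=ON in MySQL 5.6
    gtid_mode                = ON
    enforce_gtid_consistency = ON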

    Create root user, replication user, setup replication, and create application user

    First, let's create the root user to connect from app:

    vagrant@app:~$ for i in 1 2 3 ; do
    ssh node$i -- 'mysql -uroot -proot -e "GRANT ALL PRIVILEGES ON *.* TO root@app IDENTIFIED BY \"root\" WITH GRANT OPTION" -NB'
    done
    Warning: Using a password on the command line interface can be insecure.
    Warning: Using a password on the command line interface can be insecure.
    Warning: Using a password on the command line interface can be insecure.

    These warnings are annoying; let's get rid of them by storing the password in a .my.cnf file, and then try our new user:

    vagrant@app:~$ echo -e "[client]\npassword=root" > .my.cnf
    vagrant@app:~$ for i in 1 2 3 ; do mysql -u root -h node$i -e "SELECT CURRENT_USER()" -NB ; done
    root@app
    root@app
    root@app

    Create replication user:

    vagrant@app:~$ mysql -u root -h node1 -e "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.124.%' IDENTIFIED BY 'replpass'"

    Setup and start replication:

    vagrant@app:~$ for i in 2 3 ; do mysql -u root -h node$i -e "CHANGE MASTER TO MASTER_HOST='192.168.124.101' , MASTER_USER='repl', MASTER_PASSWORD='replpass', MASTER_AUTO_POSITION=1; START SLAVE;" ; done 

    At this stage, we assume replication is up and running (please double check this in your setup).
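
    A quick way to double check it from "app" is to look at the replication threads on each slave; both should report Yes:

    vagrant@app:~$ for i in 2 3 ; do
    mysql -u root -h node$i -e "SHOW SLAVE STATUS\G" | grep -E "Slave_(IO|SQL)_Running:"
    done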

    The application needs a user to connect to the database. We will create a simple user with credentials "utest"/"ptest":

    vagrant@app:~$ mysql -u root -h node1 -e "GRANT ALL PRIVILEGES ON test.* TO utest@app IDENTIFIED BY "ptest""
    vagrant@app:~$ mysql -u root -h node1 -e "CREATE DATABASE IF NOT EXISTS test"

    Now that the user is created, we can run some tests to verify that we can connect everywhere and that replication is working, using sysbench to generate data on the master and run a read-only workload on a slave:

    vagrant@app:~$ sysbench --test=/home/vagrant/sysbench-0.5/sysbench/tests/db/oltp.lua --mysql-user=utest --mysql-password=ptest --mysql-host=node1 --mysql-db=test prepare
    sysbench 0.5: multi-threaded system evaluation benchmark

    Creating table 'sbtest1'...
    Inserting 10000 records into 'sbtest1'

    vagrant@app:~$ sysbench --max-requests=0 --max-time=10 --test=/home/vagrant/sysbench-0.5/sysbench/tests/db/oltp.lua --mysql-user=utest --mysql-password=ptest --mysql-host=node2 --mysql-db=test --oltp-read-only=on run
    sysbench 0.5: multi-threaded system evaluation benchmark

    Running the test with following options:
    Number of threads: 1
    Random number generator seed is 0 and will be ignored


    Threads started!

    OLTP test statistics:
    queries performed:
    read: 22638
    write: 0
    other: 3234
    total: 25872
    transactions: 1617 (161.63 per sec.)
    read/write requests: 22638 (2262.88 per sec.)
    other operations: 3234 (323.27 per sec.)
    ignored errors: 0 (0.00 per sec.)
    reconnects: 0 (0.00 per sec.)

    General statistics:
    total time: 10.0041s
    total number of events: 1617
    total time taken by event execution: 9.9958s
    response time:
    min: 4.96ms
    avg: 6.18ms
    max: 212.22ms
    approx. 95 percentile: 6.66ms

    Threads fairness:
    events (avg/stddev): 1617.0000/0.00
    execution time (avg/stddev): 9.9958/0.00

    Download and install ProxySQL

    Now that we know that replication is up and running correctly and users have been created, it is time to download and install ProxySQL on app server.

    Precompiled binaries and packages of ProxySQL are available at https://github.com/sysown/proxysql-binaries , for CentOS 7, Ubuntu 12 and Ubuntu 14.

    This is an Ubuntu 14 system, so once we download the package we can install it with dpkg :

    vagrant@app:~$ wget -q https://github.com/sysown/proxysql-binaries/raw/master/binaries/Ubuntu14/proxysql_0.2.0902-ubuntu14_amd64.deb

    vagrant@app:~$ sudo dpkg -i proxysql_0.2.0902-ubuntu14_amd64.deb
    Selecting previously unselected package proxysql.
    (Reading database ... 61067 files and directories currently installed.)
    Preparing to unpack proxysql_0.2.0902-ubuntu14_amd64.deb ...
    Unpacking proxysql (0.2.0902) ...
    Setting up proxysql (0.2.0902) ...
    update-rc.d: warning: /etc/init.d/proxysql missing LSB information
    update-rc.d: see
    Adding system startup for /etc/init.d/proxysql ...
    /etc/rc0.d/K20proxysql -> ../init.d/proxysql
    /etc/rc1.d/K20proxysql -> ../init.d/proxysql
    /etc/rc6.d/K20proxysql -> ../init.d/proxysql
    /etc/rc2.d/S20proxysql -> ../init.d/proxysql
    /etc/rc3.d/S20proxysql -> ../init.d/proxysql
    /etc/rc4.d/S20proxysql -> ../init.d/proxysql
    /etc/rc5.d/S20proxysql -> ../init.d/proxysql
    Processing triggers for ureadahead (0.100.0-16) ...

    vagrant@app:~$

    ProxySQL installs a basic config file in /etc/proxysql.cnf , and uses /var/run/proxysql as its datadir (working directory).

    However, /etc/proxysql.cnf is not configured with any MySQL servers to be used as backends, nor with any MySQL users to connect to such backends. Therefore we must either edit the config file before starting proxysql, or configure users and servers directly from within proxysql.

    That said, the config file /etc/proxysql.cnf does have one user configured: the user required to connect to the admin interface (details later).

    Note that proxysql doesn't parse its config file after the first time it is executed (more details in https://github.com/sysown/proxysql-0.2/blob/master/doc/configuration_system.md ), therefore the preferred way to configure proxysql is not through its config file but through its admin interface (more details in https://github.com/sysown/proxysql-0.2/blob/master/doc/configuration_system.md and https://github.com/sysown/proxysql-0.2/blob/master/doc/admin_tables.md ).

    Start ProxySQL

    We can start proxysql using service:

    vagrant@app:~$ sudo service proxysql start
    Starting ProxySQL: DONE!

    Checking the content of proxysql's datadir, we can identify a database file, a log file and a pid file:

    vagrant@app:~$ ls -l /var/run/proxysql/
    total 36
    -rw------- 1 root root 25600 Sep 12 22:01 proxysql.db
    -rw------- 1 root root 892 Sep 12 22:01 proxysql.log
    -rw-r--r-- 1 root root 5 Sep 12 22:01 proxysql.pid

    Note that the database file stores all of proxysql's configuration. If this file is present, the config file is not parsed, because the database file is the preferred source of configuration.
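
    It follows that if you ever want proxysql to parse /etc/proxysql.cnf again, you can stop it and remove the database file first. Be aware that this throws away any configuration made through the admin interface:

    vagrant@app:~$ sudo service proxysql stop
    vagrant@app:~$ sudo rm /var/run/proxysql/proxysql.db
    vagrant@app:~$ sudo service proxysql start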

    From the list of running processes we can see two proxysql processes:

    vagrant@app:~$ ps aux | grep proxysql | grep -v grep
    root 3589 0.0 0.6 35700 5036 ? S 22:01 0:00 proxysql -c /etc/proxysql.cnf -D /var/run/proxysql
    root 3590 0.0 1.3 85148 10112 ? Sl 22:01 0:00 proxysql -c /etc/proxysql.cnf -D /var/run/proxysql

    The reason is that by default proxysql forks at startup: the parent process is nothing more than an angel process that restarts proxysql if it crashes.
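
    You can verify this behavior by killing the child process (in the listing above the child is PID 3590; yours will differ) and checking the process list again:

    vagrant@app:~$ sudo kill -9 3590
    vagrant@app:~$ ps aux | grep proxysql | grep -v grep
    # the parent (3589) is still there, and a new child with a new PID has replaced 3590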

    Connect to ProxySQL Admin Interface

    As said previously, proxysql.cnf is read only the very first time proxysql is started.

    When proxysql.cnf was read for that first time, the ProxySQL admin interface was configured as:

    admin_variables=
    {
    admin_credentials="admin:admin"
    mysql_ifaces="127.0.0.1:6032;/tmp/proxysql_admin.sock"
    }

    Therefore, to connect to the ProxySQL admin interface we should use this information.
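
    Either endpoint works. For example, we could also connect through the Unix domain socket defined above:

    vagrant@app:~$ mysql -u admin -padmin -S /tmp/proxysql_admin.sock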

    Let's connect to proxysql and run a few commands:

    vagrant@app:~$ mysql -u admin -padmin -h 127.0.0.1 -P6032
    Warning: Using a password on the command line interface can be insecure.
    Welcome to the MySQL monitor. Commands end with ; or \g.
    Your MySQL connection id is 0
    Server version: 5.1.30 (ProxySQL Admin Module)

    Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.

    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql>
    mysql> SHOW DATABASES;
    +-----+---------+-------------------------------+
    | seq | name | file |
    +-----+---------+-------------------------------+
    | 0 | main | |
    | 2 | disk | /var/run/proxysql/proxysql.db |
    | 3 | stats | |
    | 4 | monitor | |
    +-----+---------+-------------------------------+
    4 rows in set (0.00 sec)

    mysql> SHOW TABLES;
    +-------------------+
    | tables |
    +-------------------+
    | mysql_servers |
    | mysql_users |
    | mysql_query_rules |
    | global_variables |
    | mysql_collations |
    +-------------------+
    5 rows in set (0.01 sec)

    The "main" database represents the in-memory configuration, while "disk" represents the on-disk configuration. More details here: https://github.com/sysown/proxysql-0.2/blob/master/doc/configuration_system.md

    The "mysql_*" tables are described at https://github.com/sysown/proxysql-0.2/blob/master/doc/admin_tables.md , and at this stage we can verify that they are empty:

    mysql> SELECT * FROM mysql_users;
    Empty set (0.00 sec)

    mysql> SELECT * FROM mysql_servers;
    Empty set (0.01 sec)

    mysql> SHOW CREATE TABLE mysql_users\G
    *************************** 1. row ***************************
    table: mysql_users
    Create Table: CREATE TABLE mysql_users (
    username VARCHAR NOT NULL,
    password VARCHAR,
    active INT CHECK (active IN (0,1)) NOT NULL DEFAULT 1,
    use_ssl INT CHECK (use_ssl IN (0,1)) NOT NULL DEFAULT 0,
    default_hostgroup INT NOT NULL DEFAULT 0,
    default_schema VARCHAR,
    schema_locked INT CHECK (schema_locked IN (0,1)) NOT NULL DEFAULT 0,
    transaction_persistent INT CHECK (transaction_persistent IN (0,1)) NOT NULL DEFAULT 0,
    fast_forward INT CHECK (fast_forward IN (0,1)) NOT NULL DEFAULT 0,
    backend INT CHECK (backend IN (0,1)) NOT NULL DEFAULT 1,
    frontend INT CHECK (frontend IN (0,1)) NOT NULL DEFAULT 1,
    max_connections INT CHECK (max_connections >=0) NOT NULL DEFAULT 10000,
    PRIMARY KEY (username, backend),
    UNIQUE (username, frontend))
    1 row in set (0.01 sec)

    mysql> SHOW CREATE TABLE mysql_servers\G
    *************************** 1. row ***************************
    table: mysql_servers
    Create Table: CREATE TABLE mysql_servers (
    hostgroup_id INT NOT NULL DEFAULT 0,
    hostname VARCHAR NOT NULL,
    port INT NOT NULL DEFAULT 3306,
    status VARCHAR CHECK (UPPER(status) IN ('ONLINE','SHUNNED','OFFLINE_SOFT', 'OFFLINE_HARD')) NOT NULL DEFAULT 'ONLINE',
    weight INT CHECK (weight >= 0) NOT NULL DEFAULT 1,
    compression INT CHECK (compression >=0 AND compression <= 102400) NOT NULL DEFAULT 0,
    max_connections INT CHECK (max_connections >=0) NOT NULL DEFAULT 1000,
    max_replication_lag INT CHECK (max_replication_lag >= 0 AND max_replication_lag <= 126144000) NOT NULL DEFAULT 0,
    PRIMARY KEY (hostgroup_id, hostname, port) )
    1 row in set (0.00 sec)

    Configure backends

    We can now start configuring the backend.

    As described in previous articles, ProxySQL uses the concept of hostgroup: a hostgroup is a group of hosts with the same logical functionality.

    For example, you can have the production master in hostgroup0, all the production slaves in hostgroup1, reporting slaves in hostgroup2, DR slaves in hostgroup3, and so on.

    For this tutorial, we will use just 2 hostgroups:

    • hostgroup0 for the master
    • hostgroup1 for the slaves

    Configuring backends in ProxySQL is as easy as inserting records representing them into mysql_servers, specifying the correct hostgroup_id based on their roles:

    mysql> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0,'192.168.124.101',3306);
    Query OK, 1 row affected (0.00 sec)

    mysql> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (1,'192.168.124.102',3306);
    Query OK, 1 row affected (0.00 sec)

    mysql> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (1,'192.168.124.103',3306);
    Query OK, 1 row affected (0.01 sec)

    mysql> SELECT * FROM mysql_servers;
    +--------------+-----------------+------+--------+--------+-------------+-----------------+---------------------+
    | hostgroup_id | hostname | port | status | weight | compression | max_connections | max_replication_lag |
    +--------------+-----------------+------+--------+--------+-------------+-----------------+---------------------+
    | 0 | 192.168.124.101 | 3306 | ONLINE | 1 | 0 | 1000 | 0 |
    | 1 | 192.168.124.102 | 3306 | ONLINE | 1 | 0 | 1000 | 0 |
    | 1 | 192.168.124.103 | 3306 | ONLINE | 1 | 0 | 1000 | 0 |
    +--------------+-----------------+------+--------+--------+-------------+-----------------+---------------------+
    3 rows in set (0.00 sec)

    Note that at this stage the backends are not yet active, they are only configured. To load this configuration at runtime we need to issue a LOAD command:

    mysql> LOAD MYSQL SERVERS TO RUNTIME;
    Query OK, 0 rows affected (0.01 sec)

    These configuration changes won't survive a proxysql shutdown because they live only in memory. To persist them we need to issue the correct SAVE command, which saves the changes into the on-disk database:

    mysql> SELECT * FROM disk.mysql_servers;
    Empty set (0.00 sec)

    mysql> SAVE MYSQL SERVERS TO DISK;
    Query OK, 0 rows affected (0.00 sec)

    mysql> SELECT * FROM disk.mysql_servers;
    +--------------+-----------------+------+--------+--------+-------------+-----------------+---------------------+
    | hostgroup_id | hostname | port | status | weight | compression | max_connections | max_replication_lag |
    +--------------+-----------------+------+--------+--------+-------------+-----------------+---------------------+
    | 0 | 192.168.124.101 | 3306 | ONLINE | 1 | 0 | 1000 | 0 |
    | 1 | 192.168.124.102 | 3306 | ONLINE | 1 | 0 | 1000 | 0 |
    | 1 | 192.168.124.103 | 3306 | ONLINE | 1 | 0 | 1000 | 0 |
    +--------------+-----------------+------+--------+--------+-------------+-----------------+---------------------+
    3 rows in set (0.00 sec)

    Configure users

    This is very similar to how we configured backends:

    mysql> INSERT INTO mysql_users (username,password) VALUES ('utest','ptest');
    Query OK, 1 row affected (0.00 sec)

    mysql> SELECT * FROM mysql_users\G
    *************************** 1. row ***************************
    username: utest
    password: ptest
    active: 1
    use_ssl: 0
    default_hostgroup: 0
    default_schema: NULL
    schema_locked: 0
    transaction_persistent: 0
    fast_forward: 0
    backend: 1
    frontend: 1
    max_connections: 10000
    1 row in set (0.00 sec)
    mysql> LOAD MYSQL USERS TO RUNTIME;
    Query OK, 0 rows affected (0.00 sec)
    mysql> SELECT * FROM disk.mysql_users;
    Empty set (0.00 sec)

    mysql> SAVE MYSQL USERS TO DISK;
    Query OK, 0 rows affected (0.00 sec)

    mysql> SELECT * FROM disk.mysql_users\G
    *************************** 1. row ***************************
    username: utest
    password: ptest
    active: 1
    use_ssl: 0
    default_hostgroup: 0
    default_schema: NULL
    schema_locked: 0
    transaction_persistent: 0
    fast_forward: 0
    backend: 1
    frontend: 1
    max_connections: 10000
    1 row in set (0.00 sec)
    mysql> exit
    Bye

    Testing connectivity via proxysql

    To test whether the application is able to send queries to the backends through proxysql, we can run any query against proxysql. For example:

    vagrant@app:~$ mysql -u utest -pptest -h 127.0.0.1 -P6033 -e "SELECT @@hostname"
    Warning: Using a password on the command line interface can be insecure.
    +------------+
    | @@hostname |
    +------------+
    | node1 |
    +------------+

    In this example, the SELECT statement is sent to node1, and it will always be sent to node1: there is no query rule that defines a route for this query (the mysql_query_rules table is still empty), therefore the default_hostgroup of user "utest" applies (that is 0, according to mysql_users), and the query is sent to a host with hostgroup_id=0 (that is node1, according to mysql_servers).
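
    We can confirm that this routing is deterministic by repeating the query; every run should return node1:

    vagrant@app:~$ for i in 1 2 3 ; do mysql -u utest -pptest -h 127.0.0.1 -P6033 -NB -e "SELECT @@hostname" ; done
    node1
    node1
    node1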

    We can run a slightly more complex test using sysbench pointing to proxysql (host=127.0.0.1, port=6033):

    vagrant@app:~$ sysbench --num-threads=4 --max-requests=0 --max-time=10 \
    > --test=/home/vagrant/sysbench-0.5/sysbench/tests/db/oltp.lua \
    > --mysql-user=utest --mysql-password=ptest \
    > --mysql-host=127.0.0.1 --mysql-port=6033 \
    > --mysql-db=test \
    > --oltp-read-only=off \
    > run
    sysbench 0.5: multi-threaded system evaluation benchmark

    Running the test with following options:
    Number of threads: 4
    Random number generator seed is 0 and will be ignored


    Threads started!

    OLTP test statistics:
    queries performed:
    read: 11592
    write: 3312
    other: 1656
    total: 16560
    transactions: 828 (82.15 per sec.)
    read/write requests: 14904 (1478.62 per sec.)
    other operations: 1656 (164.29 per sec.)
    ignored errors: 0 (0.00 per sec.)
    reconnects: 0 (0.00 per sec.)

    General statistics:
    total time: 10.0797s
    total number of events: 828
    total time taken by event execution: 40.2186s
    response time:
    min: 17.03ms
    avg: 48.57ms
    max: 307.84ms
    approx. 95 percentile: 96.42ms

    Threads fairness:
    events (avg/stddev): 207.0000/3.08
    execution time (avg/stddev): 10.0546/0.02

    vagrant@app:~$

    Read write split

    As already said in other posts, ProxySQL doesn't implement any read/write split algorithm, but treats read/write split as a subset of query routing. Therefore, to implement read/write split, we need to configure query routing correctly.

    A very basic read/write split can be implemented by sending all SELECTs without FOR UPDATE to the slaves, and everything else to the master. We can configure this through the admin interface:

    vagrant@app:~$ mysql -u admin -padmin -h 127.0.0.1 -P6032
    Warning: Using a password on the command line interface can be insecure.
    Welcome to the MySQL monitor. Commands end with ; or \g.
    Your MySQL connection id is 19
    Server version: 5.1.30 (ProxySQL Admin Module)

    Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.

    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql> SELECT * FROM mysql_query_rules\G
    Empty set (0.00 sec)

    mysql> SHOW CREATE TABLE mysql_query_rules\G
    *************************** 1. row ***************************
    table: mysql_query_rules
    Create Table: CREATE TABLE mysql_query_rules (
    rule_id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
    active INT CHECK (active IN (0,1)) NOT NULL DEFAULT 0,
    username VARCHAR,
    schemaname VARCHAR,
    flagIN INT NOT NULL DEFAULT 0,
    match_pattern VARCHAR,
    negate_match_pattern INT CHECK (negate_match_pattern IN (0,1)) NOT NULL DEFAULT 0,
    flagOUT INT,
    replace_pattern VARCHAR,
    destination_hostgroup INT DEFAULT NULL,
    cache_ttl INT CHECK(cache_ttl > 0),
    reconnect INT CHECK (reconnect IN (0,1)) DEFAULT NULL,
    timeout INT UNSIGNED,
    delay INT UNSIGNED,
    apply INT CHECK(apply IN (0,1)) NOT NULL DEFAULT 0)
    1 row in set (0.00 sec)

    mysql> INSERT INTO mysql_query_rules(active,match_pattern,destination_hostgroup,apply) VALUES(1,'^SELECT.*FOR UPDATE$',0,1);
    Query OK, 1 row affected (0.00 sec)

    mysql> INSERT INTO mysql_query_rules(active,match_pattern,destination_hostgroup,apply) VALUES(1,'^SELECT',1,1);
    Query OK, 1 row affected (0.00 sec)

    mysql> SELECT * FROM mysql_query_rules\G
    *************************** 1. row ***************************
    rule_id: 1
    active: 1
    username: NULL
    schemaname: NULL
    flagIN: 0
    match_pattern: ^SELECT.*FOR UPDATE$
    negate_match_pattern: 0
    flagOUT: NULL
    replace_pattern: NULL
    destination_hostgroup: 0
    cache_ttl: NULL
    reconnect: NULL
    timeout: NULL
    delay: NULL
    apply: 1
    *************************** 2. row ***************************
    rule_id: 2
    active: 1
    username: NULL
    schemaname: NULL
    flagIN: 0
    match_pattern: ^SELECT
    negate_match_pattern: 0
    flagOUT: NULL
    replace_pattern: NULL
    destination_hostgroup: 1
    cache_ttl: NULL
    reconnect: NULL
    timeout: NULL
    delay: NULL
    apply: 1
    2 rows in set (0.00 sec)


    mysql> SELECT rule_id, match_pattern,destination_hostgroup hg_id, apply FROM mysql_query_rules WHERE active=1;
    +---------+----------------------+-------+-------+
    | rule_id | match_pattern | hg_id | apply |
    +---------+----------------------+-------+-------+
    | 1 | ^SELECT.*FOR UPDATE$ | 0 | 1 |
    | 2 | ^SELECT | 1 | 1 |
    +---------+----------------------+-------+-------+
    2 rows in set (0.00 sec)

    mysql> LOAD MYSQL QUERY RULES TO RUNTIME;
    Query OK, 0 rows affected (0.00 sec)

    mysql> SAVE MYSQL QUERY RULES TO DISK;
    Query OK, 0 rows affected (0.00 sec)

    mysql> quit
    Bye

    We created rules for SELECTs and SELECT ... FOR UPDATE : what about all the other queries? When the Query Processor scans the query rules without finding a match and reaches the end of the list, it applies the default_hostgroup of the specific user, according to its mysql_users entry.

    In our case, user "utest" has default_hostgroup=0 , therefore any query not matching the above rules will be sent to hostgroup 0.
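
    A quick way to see this fallback in action is to run a statement that matches neither pattern. For example, SHOW VARIABLES doesn't begin with SELECT, so it should land on the master:

    vagrant@app:~$ mysql -u utest -pptest -h 127.0.0.1 -P6033 -e "SHOW VARIABLES LIKE 'hostname'"
    +---------------+-------+
    | Variable_name | Value |
    +---------------+-------+
    | hostname      | node1 |
    +---------------+-------+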

    Also note that by default new servers are configured as ONLINE and new users as active. However, for query rules the default is active=0 . This is intentional.

    Now that we have configured ProxySQL with query routing, let's try it by running new queries against proxysql. SELECT ... FOR UPDATE will be sent to the master, while the other SELECTs will be sent to one of the slaves, chosen randomly.

    vagrant@app:~$ mysql -u utest -pptest -h 127.0.0.1 -P6033 -e "SELECT @@hostname"
    Warning: Using a password on the command line interface can be insecure.
    +------------+
    | @@hostname |
    +------------+
    | node2 |
    +------------+
    vagrant@app:~$ mysql -u utest -pptest -h 127.0.0.1 -P6033 -e "SELECT @@hostname"
    Warning: Using a password on the command line interface can be insecure.
    +------------+
    | @@hostname |
    +------------+
    | node3 |
    +------------+
    vagrant@app:~$ mysql -u utest -pptest -h 127.0.0.1 -P6033 -e "SELECT @@hostname FOR UPDATE"
    Warning: Using a password on the command line interface can be insecure.
    +------------+
    | @@hostname |
    +------------+
    | node1 |
    +------------+

    And now sysbench:

    vagrant@app:~$ sysbench --num-threads=4 --max-requests=0 --max-time=10 \
    > --test=/home/vagrant/sysbench-0.5/sysbench/tests/db/oltp.lua \
    > --mysql-user=utest --mysql-password=ptest \
    > --mysql-host=127.0.0.1 --mysql-port=6033 \
    > --mysql-db=test \
    > --oltp-read-only=off \
    > run
    sysbench 0.5: multi-threaded system evaluation benchmark

    Running the test with following options:
    Number of threads: 4
    Random number generator seed is 0 and will be ignored


    Threads started!

    OLTP test statistics:
    queries performed:
    read: 21112
    write: 6032
    other: 3016
    total: 30160
    transactions: 1508 (150.51 per sec.)
    read/write requests: 27144 (2709.16 per sec.)
    other operations: 3016 (301.02 per sec.)
    ignored errors: 0 (0.00 per sec.)
    reconnects: 0 (0.00 per sec.)

    General statistics:
    total time: 10.0193s
    total number of events: 1508
    total time taken by event execution: 40.0503s
    response time:
    min: 15.87ms
    avg: 26.56ms
    max: 215.16ms
    approx. 95 percentile: 38.74ms

    Threads fairness:
    events (avg/stddev): 377.0000/1.87
    execution time (avg/stddev): 10.0126/0.00

    vagrant@app:~$

    sysbench ran successfully, but from this report we cannot tell whether the read/write split worked.

    No problem, proxysql can tell us!

    vagrant@app:~$ mysql -u admin -padmin -h 127.0.0.1 -P6032
    Warning: Using a password on the command line interface can be insecure.
    Welcome to the MySQL monitor. Commands end with ; or \g.
    Your MySQL connection id is 32
    Server version: 5.1.30 (ProxySQL Admin Module)

    Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.

    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql> SHOW TABLES FROM stats;
    +--------------------------------+
    | tables |
    +--------------------------------+
    | stats_mysql_query_rules |
    | stats_mysql_commands_counters |
    | stats_mysql_processlist |
    | stats_mysql_connection_pool |
    | stats_mysql_query_digest |
    | stats_mysql_query_digest_reset |
    | stats_mysql_global |
    +--------------------------------+
    7 rows in set (0.00 sec)


    mysql> SELECT * FROM stats_mysql_connection_pool;
    +-----------+-----------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+
    | hostgroup | srv_host | srv_port | status | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv |
    +-----------+-----------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+
    | 0 | 192.168.124.101 | 3306 | ONLINE | 0 | 4 | 4 | 0 | 32884 | 1991796 | 47602338 |
    | 1 | 192.168.124.102 | 3306 | ONLINE | 0 | 4 | 4 | 0 | 9499 | 410383 | 25702366 |
    | 1 | 192.168.124.103 | 3306 | ONLINE | 0 | 4 | 4 | 0 | 9457 | 410141 | 26360288 |
    +-----------+-----------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+
    3 rows in set (0.00 sec)

    mysql> SELECT hostgroup, SUM(Queries), SUM(Bytes_data_sent), SUM(Bytes_data_recv) FROM stats_mysql_connection_pool GROUP BY hostgroup\G
    *************************** 1. row ***************************
    hostgroup: 0
    SUM(Queries): 32884
    SUM(Bytes_data_sent): 1991796
    SUM(Bytes_data_recv): 47602338
    *************************** 2. row ***************************
    hostgroup: 1
    SUM(Queries): 18956
    SUM(Bytes_data_sent): 820524
    SUM(Bytes_data_recv): 52062654
    2 rows in set (0.01 sec)

    From stats_mysql_connection_pool it seems queries are distributed across hosts and hostgroups. Yet, this is not enough to say that read/write split is working.

    Let's try a read-only workload and see what changes.

    vagrant@app:~$ sysbench --num-threads=4 --max-requests=0 --max-time=10 --test=/home/vagrant/sysbench-0.5/sysbench/tests/db/oltp.lua --mysql-user=utest --mysql-password=ptest --mysql-host=127.0.0.1 --mysql-port=6033 --mysql-db=test --oltp-read-only=on --oltp-skip-trx=on run
    sysbench 0.5: multi-threaded system evaluation benchmark

    Running the test with following options:
    Number of threads: 4
    Random number generator seed is 0 and will be ignored


    Threads started!

    OLTP test statistics:
    queries performed:
    read: 32984
    write: 0
    other: 0
    total: 32984
    transactions: 0 (0.00 per sec.)
    read/write requests: 32984 (3293.70 per sec.)
    other operations: 0 (0.00 per sec.)
    ignored errors: 0 (0.00 per sec.)
    reconnects: 0 (0.00 per sec.)

    General statistics:
    total time: 10.0143s
    total number of events: 2356
    total time taken by event execution: 40.0156s
    response time:
    min: 7.62ms
    avg: 16.98ms
    max: 73.30ms
    approx. 95 percentile: 33.39ms

    Threads fairness:
    events (avg/stddev): 589.0000/146.87
    execution time (avg/stddev): 10.0039/0.01


    vagrant@app:~$ mysql -u admin -padmin -h 127.0.0.1 -P6032 -e "SELECT hostgroup, SUM(Queries), SUM(Bytes_data_sent), SUM(Bytes_data_recv) FROM stats_mysql_connection_pool GROUP BY hostgroup\G"
    Warning: Using a password on the command line interface can be insecure.
    *************************** 1. row ***************************
    hostgroup: 0
    SUM(Queries): 32884
    SUM(Bytes_data_sent): 1991796
    SUM(Bytes_data_recv): 47602338
    *************************** 2. row ***************************
    hostgroup: 1
    SUM(Queries): 51940
    SUM(Bytes_data_sent): 2248260
    SUM(Bytes_data_recv): 142653210

    It all seems correct: the number of queries sent to hostgroup 0 didn't change, while all the new queries were sent to hostgroup 1.

    More granular read/write split

    We previously said that read/write split is a subset of query routing.

    In fact, we can decide the destination of each group of queries. Maybe you don't want to send all SELECTs to the slaves, but only some specific ones. ProxySQL allows this granularity.

    If you don't know which queries are generated by your application, you can check which queries pass through proxysql using the table stats_mysql_query_digest . For example:

    mysql> SELECT * FROM stats_mysql_query_digest;
    +------------+----------+--------------------+----------------------------------------------------------------------+------------+------------+------------+----------+----------+----------+
    | schemaname | username | digest | digest_text | count_star | first_seen | last_seen | sum_time | min_time | max_time |
    +------------+----------+--------------------+----------------------------------------------------------------------+------------+------------+------------+----------+----------+----------+
    | test | utest | 0x4760CBDEFAD1519E | BEGIN | 2592 | 1442148405 | 1442148446 | 1997120 | 0 | 49046 |
    | test | utest | 0x10634DACE52A0A02 | INSERT INTO sbtest1 (id, k, c, pad) VALUES (?, ?, ?, ?) | 2592 | 1442148405 | 1442148446 | 2517386 | 377 | 35926 |
    | test | utest | 0xE75DB8313E268CF3 | SELECT SUM(K) FROM sbtest1 WHERE id BETWEEN ? AND ?+? | 4948 | 1442148405 | 1442149117 | 5694972 | 365 | 35860 |
    | test | utest | 0xCCB481C7C198E52B | UPDATE sbtest1 SET k=k+? WHERE id=? | 2592 | 1442148405 | 1442148446 | 4131398 | 531 | 19211 |
    | test | utest | 0x55319B9EE365BEB5 | DELETE FROM sbtest1 WHERE id=? | 2592 | 1442148405 | 1442148446 | 2694171 | 409 | 14392 |
    | test | utest | 0xADF3DDF2877EEAAF | SELECT c FROM sbtest1 WHERE id BETWEEN ? AND ?+? ORDER BY c | 4948 | 1442148405 | 1442149117 | 7315147 | 456 | 32245 |
    | test | utest | 0x704822A0F7D3CD60 | SELECT DISTINCT c FROM sbtest1 WHERE id BETWEEN ? AND ?+? ORDER BY c | 4948 | 1442148405 | 1442149117 | 10310160 | 677 | 40372 |
    | test | utest | 0x13781C1DBF001A0C | SELECT c FROM sbtest1 WHERE id=? | 49480 | 1442148405 | 1442149117 | 40873691 | 321 | 33556 |
    | test | utest | 0x7DD56217AF7A5197 | SELECT c FROM sbtest1 WHERE id BETWEEN ? AND ?+? | 4948 | 1442148405 | 1442149117 | 5989877 | 365 | 54172 |
    | test | utest | 0x5A23CA36FB239BC9 | UPDATE sbtest1 SET c=? WHERE id=? | 2592 | 1442148405 | 1442148446 | 3006402 | 519 | 21786 |
    | test | utest | 0x5DBEB0DD695FBF25 | COMMIT | 2592 | 1442148405 | 1442148446 | 8335739 | 1209 | 55347 |
    +------------+----------+--------------------+----------------------------------------------------------------------+------------+------------+------------+----------+----------+----------+
    11 rows in set (0.00 sec)

    We can run any sort of clause against this table for more interesting results. For example, let's assume we want to identify the SELECT statements that take the most time:

    mysql> SELECT count_star, sum_time, digest_text FROM stats_mysql_query_digest WHERE digest_text LIKE 'SELECT%' ORDER BY sum_time DESC;
    +------------+----------+----------------------------------------------------------------------+
    | count_star | sum_time | digest_text |
    +------------+----------+----------------------------------------------------------------------+
    | 49480 | 40873691 | SELECT c FROM sbtest1 WHERE id=? |
    | 4948 | 10310160 | SELECT DISTINCT c FROM sbtest1 WHERE id BETWEEN ? AND ?+? ORDER BY c |
    | 4948 | 7315147 | SELECT c FROM sbtest1 WHERE id BETWEEN ? AND ?+? ORDER BY c |
    | 4948 | 5989877 | SELECT c FROM sbtest1 WHERE id BETWEEN ? AND ?+? |
    | 4948 | 5694972 | SELECT SUM(K) FROM sbtest1 WHERE id BETWEEN ? AND ?+? |
    +------------+----------+----------------------------------------------------------------------+
    5 rows in set (0.00 sec)

    The first type of SELECT is clearly the one taking the most time, ~40.8s in total (times in stats_mysql_query_digest are in microseconds).

    Once we have identified these, we can decide to disable our generic read/write split and send to the slaves only the queries that take the most time. This is just an example of how proxysql allows fine-tuning of query routing, and of how read/write split is just a special case of query routing.

    vagrant@app:~$ mysql -u admin -padmin -h 127.0.0.1 -P6032
    Warning: Using a password on the command line interface can be insecure.
    Welcome to the MySQL monitor. Commands end with ; or \g.
    Your MySQL connection id is 74
    Server version: 5.1.30 (ProxySQL Admin Module)

    Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.

    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql> DELETE FROM mysql_query_rules;
    Query OK, 2 rows affected (0.01 sec)

    mysql> INSERT INTO mysql_query_rules(active,match_pattern,destination_hostgroup,apply) VALUES(1,'^SELECT c FROM sbtest1 WHERE id=\d+$',1,1);
    Query OK, 1 row affected (0.01 sec)

    mysql> INSERT INTO mysql_query_rules(active,match_pattern,destination_hostgroup,apply) VALUES(1,'^SELECT DISTINCT c FROM sbtest1 WHERE id BETWEEN \d+ AND \d+\+\d+ ORDER BY c$',1,1);
    Query OK, 1 row affected (0.00 sec)

    mysql> LOAD MYSQL QUERY RULES TO RUNTIME;
    Query OK, 0 rows affected (0.00 sec)

    mysql> quit
    Bye

    Once again, note that while you change the content of the mysql_query_rules table, the rules don't become effective until you run LOAD MYSQL QUERY RULES TO RUNTIME . This gives you time not only to review them before loading them at runtime, but also to build complex chains of rules: while you are editing mysql_query_rules, the rules at runtime aren't touched.

    As soon as LOAD MYSQL QUERY RULES TO RUNTIME is executed, the current rules are wiped and replaced with the new ones, which take effect immediately.

    How can we ensure that our new rules are matching queries, and routing queries as we expect?

    Did we write the regular expressions correctly? (Regexes can be hard to write and debug.)

    The table stats_mysql_query_rules provides a counter of the number of times each rule has matched a query.

    For example:

    mysql> SELECT rule_id, hits, destination_hostgroup hg, match_pattern pattern FROM mysql_query_rules NATURAL JOIN stats_mysql_query_rules\G
    *************************** 1. row ***************************
    rule_id: 6
    hits: 19340
    hg: 1
    pattern: ^SELECT c FROM sbtest1 WHERE id=\d+$
    *************************** 2. row ***************************
    rule_id: 7
    hits: 1934
    hg: 1
    pattern: ^SELECT DISTINCT c FROM sbtest1 WHERE id BETWEEN \d+ AND \d+\+\d+ ORDER BY c$
    2 rows in set (0.00 sec)

    We have hits, which means queries are being matched against the rules and routed accordingly.

    Backends monitoring

    To route MySQL traffic between clients and backends, ProxySQL doesn't need to proactively monitor the health of the backends. When a connection to a backend fails, either during the connect phase or during the execution of a query, ProxySQL reacts to the event by trying to establish a new connection (possibly on a different node) and eventually retrying the query. This is a quite complex topic that needs a blog post of its own, but the main point here is that ProxySQL doesn't need proactive monitoring of the backends to determine their availability and health.

    Nonetheless, there is a Monitor module that checks the health of the backends and stores this information in a set of logging tables.

    Without going into all the details of the Monitor module, for the scope of this tutorial we will look only at a few important variables.

    A small aside before proceeding: ProxySQL stores its configuration variables in a table called global_variables, reminiscent of mysqld's global variables. The big difference is that this table is writable, and writing to it is the way to reconfigure ProxySQL.

    Back to the Monitor module...

    mysql> SELECT * FROM global_variables WHERE variable_name IN ('mysql-monitor_username','mysql-monitor_password','mysql-monitor_connect_interval','mysql-monitor_ping_interval','mysql-monitor_history');
    +--------------------------------+----------------+
    | variable_name | variable_value |
    +--------------------------------+----------------+
    | mysql-monitor_connect_interval | 120000 |
    | mysql-monitor_history | 600000 |
    | mysql-monitor_password | monitor |
    | mysql-monitor_ping_interval | 60000 |
    | mysql-monitor_username | monitor |
    +--------------------------------+----------------+
    5 rows in set (0.00 sec)

    This means that the Monitor module will try to connect to every backend once every 2 minutes (120000ms) and ping each of them once every minute (60000ms), storing statistics and logs for 10 minutes (600000ms), using user/password "monitor"/"monitor".

    We never created such a user, so currently the Monitor module is failing. To solve this we can either create the "monitor" user with the right privileges, or reconfigure the Monitor module to use "root".
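
    If you prefer the first option, a minimal grant might look like this (a sketch; REPLICATION CLIENT also covers the replication lag checks discussed later, and since replication is running the grant propagates from the master to the slaves):

    vagrant@app:~$ mysql -u root -h node1 -e "GRANT REPLICATION CLIENT ON *.* TO 'monitor'@'192.168.124.%' IDENTIFIED BY 'monitor'"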

    As an exercise, instead of creating such a user we will reconfigure the Monitor module.

    This is as easy as updating the relevant records in the global_variables table and issuing LOAD MYSQL VARIABLES TO RUNTIME .

    Again, don't forget to persist the changes to disk with SAVE MYSQL VARIABLES TO DISK .

    mysql> UPDATE global_variables SET variable_value="root" WHERE variable_name="mysql-monitor_username";
    Query OK, 1 row affected (0.00 sec)

    mysql> UPDATE global_variables SET variable_value="root" WHERE variable_name="mysql-monitor_password";
    Query OK, 1 row affected (0.00 sec)

    mysql> LOAD MYSQL VARIABLES TO RUNTIME;
    Query OK, 0 rows affected (0.00 sec)

    mysql> SAVE MYSQL VARIABLES TO DISK;
    Query OK, 46 rows affected (0.00 sec)

    You can now check the results of the Monitor module in the tables mysql_server_connect_log and mysql_server_ping_log .

    mysql> SHOW TABLES FROM monitor;
    +----------------------------------+
    | tables |
    +----------------------------------+
    | mysql_server_connect |
    | mysql_server_connect_log |
    | mysql_server_ping |
    | mysql_server_ping_log |
    | mysql_server_replication_lag_log |
    +----------------------------------+
    5 rows in set (0.00 sec)

    mysql> SELECT * FROM monitor.mysql_server_ping_log ORDER BY time_start DESC;
    +-----------------+------+------------------+-------------------+------------+
    | hostname | port | time_start | ping_success_time | ping_error |
    +-----------------+------+------------------+-------------------+------------+
    | 192.168.124.101 | 3306 | 1442153413013369 | 513 | NULL |
    | 192.168.124.102 | 3306 | 1442153413013369 | 400 | NULL |
    | 192.168.124.103 | 3306 | 1442153413013369 | 353 | NULL |
    | 192.168.124.101 | 3306 | 1442153353012568 | 695 | NULL |
    | 192.168.124.102 | 3306 | 1442153353012568 | 580 | NULL |
    | 192.168.124.103 | 3306 | 1442153353012568 | 520 | NULL |
    | 192.168.124.101 | 3306 | 1442153293011283 | 856 | NULL |
    | 192.168.124.102 | 3306 | 1442153293011283 | 683 | NULL |
    | 192.168.124.103 | 3306 | 1442153293011283 | 550 | NULL |
    | 192.168.124.101 | 3306 | 1442153233009628 | 225 | NULL |
    | 192.168.124.102 | 3306 | 1442153233009628 | 347 | NULL |
    | 192.168.124.103 | 3306 | 1442153233009628 | 347 | NULL |
    +-----------------+------+------------------+-------------------+------------+
    12 rows in set (0.01 sec)

    mysql> SELECT * FROM monitor.mysql_server_connect_log ORDER BY time_start DESC;
    +-----------------+------+------------------+----------------------+---------------+
    | hostname | port | time_start | connect_success_time | connect_error |
    +-----------------+------+------------------+----------------------+---------------+
    | 192.168.124.101 | 3306 | 1442153412970870 | 1334 | NULL |
    | 192.168.124.102 | 3306 | 1442153412970870 | 1606 | NULL |
    | 192.168.124.103 | 3306 | 1442153412970870 | 1555 | NULL |
    | 192.168.124.101 | 3306 | 1442153292970677 | 1523 | NULL |
    | 192.168.124.102 | 3306 | 1442153292970677 | 1386 | NULL |
    | 192.168.124.103 | 3306 | 1442153292970677 | 1283 | NULL |
    +-----------------+------+------------------+----------------------+---------------+
    6 rows in set (0.00 sec)

    Note that time_start is not a Unix timestamp but a monotonic time in microseconds.

    Similarly, connect_success_time and ping_success_time are in microseconds.

    You can run more complex queries against these tables. For example, the average connection time per host is given by this query:

    mysql> SELECT hostname,port, AVG(connect_success_time)/1000 avg_ms
    -> FROM monitor.mysql_server_connect_log
    -> WHERE connect_error IS NULL
    -> GROUP BY hostname,port;
    +-----------------+------+---------+
    | hostname | port | avg_ms |
    +-----------------+------+---------+
    | 192.168.124.101 | 3306 | 1.43025 |
    | 192.168.124.102 | 3306 | 1.5365 |
    | 192.168.124.103 | 3306 | 1.61875 |
    +-----------------+------+---------+
    3 rows in set (0.00 sec)

    Replication lag monitoring

    In the previous paragraph we said that ProxySQL doesn't perform any proactive monitoring. This is true with only one exception so far: replication lag.

    In fact, ProxySQL can be configured to monitor replication lag and temporarily shun slaves when they lag beyond a certain threshold.

    As reported in other posts, ProxySQL is not replication aware, and therefore it monitors replication lag only for hosts where a maximum replication lag was specified through mysql_servers.max_replication_lag .

    To configure replication lag monitoring we need:

    a) a monitor user (the one specified in mysql-monitor_username) with SUPER or REPLICATION CLIENT privileges (we are already using "root");

    b) mysql_servers.max_replication_lag greater than zero;

    c) mysql-monitor_replication_lag_interval tuned accordingly: as the name suggests, this is how often replication lag is checked.

    As mysql_servers.max_replication_lag applies only to slaves, we can safely configure it for all hosts, whether slaves or masters (where it won't have any effect).

    mysql> SELECT * FROM mysql_servers WHERE max_replication_lag>0;
    Empty set (0.00 sec)

    mysql> UPDATE mysql_servers SET max_replication_lag=5;
    Query OK, 3 rows affected (0.00 sec)

    mysql> LOAD MYSQL SERVERS TO RUNTIME;
    Query OK, 0 rows affected (0.01 sec)

    mysql> SAVE MYSQL SERVERS TO DISK;
    Query OK, 0 rows affected (0.00 sec)

    mysql> UPDATE global_variables SET variable_value="1500"
    -> WHERE variable_name="mysql-monitor_replication_lag_interval";
    Query OK, 1 row affected (0.00 sec)

    mysql> SAVE MYSQL VARIABLES TO DISK;
    Query OK, 47 rows affected (0.00 sec)

    mysql> LOAD MYSQL VARIABLES TO RUNTIME;
    Query OK, 0 rows affected (0.00 sec)

    After this configuration, ProxySQL will monitor the replication lag of every host every 1500ms, and if a host's replication lag exceeds 5 seconds the host will be temporarily shunned.

    Please be very cautious with how you configure this, because if misconfigured it can lead to unexpected behavior.

    For example, if all the slaves are lagging and reads go to the slaves, proxysql will wait until some slave catches up before executing the query, resulting in increased latency. Or, if the slaves do not catch up within mysql-connect_timeout_server_max milliseconds, proxysql will return an error saying it can't find any host.

    A workaround is to configure the master in the same hostgroup as the slaves, but with a much lower weight: this way, while the slaves are up and running, queries will be sent to the slaves most of the time and only occasionally to the master, while if the slaves are not available they will all be sent to the master.
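
    A sketch of this workaround for our setup (the weights are illustrative; max_replication_lag is left at 0 for the master, where lag checks make no sense anyway):

    mysql> INSERT INTO mysql_servers(hostgroup_id,hostname,port,weight,max_replication_lag) VALUES (1,'192.168.124.101',3306,1,0);
    mysql> UPDATE mysql_servers SET weight=1000 WHERE hostgroup_id=1 AND hostname <> '192.168.124.101';
    mysql> LOAD MYSQL SERVERS TO RUNTIME;
    mysql> SAVE MYSQL SERVERS TO DISK;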

    We can check status using monitor table mysql_server_replication_lag_log :

    mysql> SELECT * FROM monitor.mysql_server_replication_lag_log
    -> ORDER BY time_start DESC LIMIT 10;
    +-----------------+------+------------------+--------------+----------+-------+
    | hostname | port | time_start | success_time | repl_lag | error |
    +-----------------+------+------------------+--------------+----------+-------+
    | 192.168.124.102 | 3306 | 1442156260020471 | 830 | 0 | NULL |
    | 192.168.124.103 | 3306 | 1442156260020471 | 784 | 0 | NULL |
    | 192.168.124.102 | 3306 | 1442156258518415 | 537 | 0 | NULL |
    | 192.168.124.103 | 3306 | 1442156258518415 | 448 | 0 | NULL |
    | 192.168.124.102 | 3306 | 1442156257015337 | 844 | 0 | NULL |
    | 192.168.124.103 | 3306 | 1442156257015337 | 722 | 0 | NULL |
    | 192.168.124.102 | 3306 | 1442156255514011 | 837 | 0 | NULL |
    | 192.168.124.103 | 3306 | 1442156255514011 | 691 | 0 | NULL |
    | 192.168.124.102 | 3306 | 1442156254012744 | 726 | 0 | NULL |
    | 192.168.124.103 | 3306 | 1442156254012744 | 605 | 0 | NULL |
    +-----------------+------+------------------+--------------+----------+-------+
    10 rows in set (0.00 sec)

    mysql> SELECT hostname,repl_lag,COUNT(*)
    -> FROM monitor.mysql_server_replication_lag_log
    -> WHERE error IS NULL GROUP BY hostname\G
    *************************** 1. row ***************************
    hostname: 192.168.124.102
    repl_lag: 0
    COUNT(*): 400
    *************************** 2. row ***************************
    hostname: 192.168.124.103
    repl_lag: 0
    COUNT(*): 400
    2 rows in set (0.00 sec)

    Note that there are 400 entries per host: the check is performed every 1500ms, and the log table holds entries for 600s .

    As soon as an entry becomes older than 600s it is deleted.
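
    You can verify the retention window directly; as the log fills up, the result of this query should approach 600 (remember that time_start is in microseconds):

    mysql> SELECT (MAX(time_start)-MIN(time_start))/1000000.0 window_s
        -> FROM monitor.mysql_server_replication_lag_log;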

    This tutorial has reached its end.

    In this tutorial we described:

    • how to install ProxySQL
    • how to perform some basic configuration adding users and backends
    • the concept of hostgroups
    • how to configure query routing and read/write split
    • how to retrieve some of the many information and statistics available from within ProxySQL
    • how to configure global variables
    • how to configure the Monitor module
    • how to monitor replication lag

    Please stay tuned for the next tutorial, titled "seamless replication switchover in less than one second".
