  • Notes on "涂抹MySQL": building a MySQL high-availability architecture

    The MySQL high-availability stack
    <>Toward a more stable service architecture
    Scalability: horizontal scaling (adding nodes) and vertical scaling (upgrading each node's hardware).
    High availability
    <>High availability with Slave + LVS + Keepalived: deploy a load balancer in front of the slave nodes.
    <>Installing and configuring LVS, which acts as the load balancer. We install LVS on 192.168.1.9, hostname linux04.
    1. modprobe -l | grep ipvs checks whether the ip_vs module is available in the current kernel.
    2. lsmod | grep ip_vs checks whether ip_vs is loaded; if not, modprobe ip_vs loads it into the kernel:
    [root@linux04 ipvsadm-1.26]# lsmod |grep ip_vs
    ip_vs 115643 0
    libcrc32c 1246 1 ip_vs
    ipv6 321422 36 ip_vs,ip6t_REJECT,nf_conntrack_ipv6,nf_defrag_ipv6
    3. Create a symlink:
    ln -s /usr/src/kernels/2.6.32-573.3.1.el6.x86_64/ /usr/src/Linux
    4. Download the ipvsadm management tool for day-to-day administration: http://www.linux-vs.org/software/index.html
    wget http://www.linux-vs.org/software/kernel-2.6/ipvsadm-1.26.tar.gz
    tar zxvf ipvsadm-1.26.tar.gz
    [root@linux04 /]# chmod -R 775 ipvsadm-1.26/
    [root@linux04 /]# cd ipvsadm-1.26/
    5. Compile and install:
    [root@linux04 ipvsadm-1.26]# make
    make -C libipvs
    make[1]: Entering directory `/soft/ipvsadm-1.26/libipvs'
    gcc -Wall -Wunused -Wstrict-prototypes -g -fPIC -DLIBIPVS_USE_NL -DHAVE_NET_IP_VS_H -c -o libipvs.o libipvs.c
    gcc -Wall -Wunused -Wstrict-prototypes -g -fPIC -DLIBIPVS_USE_NL -DHAVE_NET_IP_VS_H -c -o ip_vs_nl_policy.o ip_vs_nl_policy.c
    ar rv libipvs.a libipvs.o ip_vs_nl_policy.o
    ar: creating libipvs.a
    a - libipvs.o
    a - ip_vs_nl_policy.o
    gcc -shared -Wl,-soname,libipvs.so -o libipvs.so libipvs.o ip_vs_nl_policy.o
    make[1]: Leaving directory `/soft/ipvsadm-1.26/libipvs'
    gcc -Wall -Wunused -Wstrict-prototypes -g -DVERSION="1.26" -DSCHEDULERS=""rr|wrr|lc|wlc|lblc|lblcr|dh|sh|sed|nq"" -DPE_LIST=""sip"" -DHAVE_NET_IP_VS_H -c -o ipvsadm.o ipvsadm.c
    ipvsadm.c: In function 'print_largenum':
    ipvsadm.c:1383: warning: field width should have type 'int', but argument 2 has type 'size_t'
    gcc -Wall -Wunused -Wstrict-prototypes -g -DVERSION="1.26" -DSCHEDULERS=""rr|wrr|lc|wlc|lblc|lblcr|dh|sh|sed|nq"" -DPE_LIST=""sip"" -DHAVE_NET_IP_VS_H -c -o config_stream.o config_stream.c
    gcc -Wall -Wunused -Wstrict-prototypes -g -DVERSION="1.26" -DSCHEDULERS=""rr|wrr|lc|wlc|lblc|lblcr|dh|sh|sed|nq"" -DPE_LIST=""sip"" -DHAVE_NET_IP_VS_H -c -o dynamic_array.o dynamic_array.c
    gcc -Wall -Wunused -Wstrict-prototypes -g -o ipvsadm ipvsadm.o config_stream.o dynamic_array.o libipvs/libipvs.a -lnl
    ipvsadm.o: In function `parse_options':
    /soft/ipvsadm-1.26/ipvsadm.c:432: undefined reference to `poptGetContext'
    /soft/ipvsadm-1.26/ipvsadm.c:435: undefined reference to `poptGetNextOpt'
    /soft/ipvsadm-1.26/ipvsadm.c:660: undefined reference to `poptBadOption'
    /soft/ipvsadm-1.26/ipvsadm.c:502: undefined reference to `poptGetNextOpt'
    /soft/ipvsadm-1.26/ipvsadm.c:667: undefined reference to `poptStrerror'
    /soft/ipvsadm-1.26/ipvsadm.c:667: undefined reference to `poptBadOption'
    /soft/ipvsadm-1.26/ipvsadm.c:670: undefined reference to `poptFreeContext'
    /soft/ipvsadm-1.26/ipvsadm.c:677: undefined reference to `poptGetArg'
    /soft/ipvsadm-1.26/ipvsadm.c:678: undefined reference to `poptGetArg'
    /soft/ipvsadm-1.26/ipvsadm.c:679: undefined reference to `poptGetArg'
    /soft/ipvsadm-1.26/ipvsadm.c:690: undefined reference to `poptGetArg'
    /soft/ipvsadm-1.26/ipvsadm.c:693: undefined reference to `poptFreeContext'
    collect2: ld returned 1 exit status
    make: *** [ipvsadm] Error 1
    The undefined popt* references mean the static popt library is missing: download popt-static-1.13-7.el6.x86_64.rpm and install it with rpm. Then re-extract ipvsadm-1.26.tar.gz and rebuild; this time the compile and install succeed.
    [root@linux04 ipvsadm-1.26]# make install
    make -C libipvs
    make[1]: Entering directory `/ipvsadm-1.26/libipvs'
    make[1]: Nothing to be done for `all'.
    make[1]: Leaving directory `/ipvsadm-1.26/libipvs'
    if [ ! -d /sbin ]; then mkdir -p /sbin; fi
    install -m 0755 ipvsadm /sbin
    install -m 0755 ipvsadm-save /sbin
    install -m 0755 ipvsadm-restore /sbin
    [ -d /usr/man/man8 ] || mkdir -p /usr/man/man8
    install -m 0644 ipvsadm.8 /usr/man/man8
    install -m 0644 ipvsadm-save.8 /usr/man/man8
    install -m 0644 ipvsadm-restore.8 /usr/man/man8
    [ -d /etc/rc.d/init.d ] || mkdir -p /etc/rc.d/init.d
    install -m 0755 ipvsadm.sh /etc/rc.d/init.d/ipvsadm
    6. Configure LVS
    Create the virtual service on the VIP and add the real servers:
    ipvsadm -A -t 192.168.1.10:3306 -s rr    # 192.168.1.10 is the VIP
    ipvsadm -a -t 192.168.1.10:3306 -r 192.168.1.7:3306 -g
    ipvsadm -a -t 192.168.1.10:3306 -r 192.168.1.8:3306 -g
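    As an aside, the same rules can be kept in a rules file and reloaded in one shot with ipvsadm-restore. A sketch of such a file, in the format emitted by ipvsadm-save -n (the file path is hypothetical):

```shell
# /etc/ipvs.rules -- reload with: ipvsadm-restore < /etc/ipvs.rules
-A -t 192.168.1.10:3306 -s rr
-a -t 192.168.1.10:3306 -r 192.168.1.7:3306 -g -w 1
-a -t 192.168.1.10:3306 -r 192.168.1.8:3306 -g -w 1
```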
    Inspect the LVS virtual-service configuration:
    [root@linux04 ipvsadm-1.26]# ipvsadm -L -n
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
    -> RemoteAddress:Port Forward Weight ActiveConn InActConn
    TCP 192.168.1.10:3306 rr
    -> 192.168.1.7:3306 Route 1 0 0
    -> 192.168.1.8:3306 Route 1 0 0
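    The table above is easy to consume from scripts; a minimal sketch that extracts the real-server list from `ipvsadm -L -n` output (here the sample output from the text is embedded as a string; in practice you would pipe the live command instead):

```shell
# Extract the real servers (the "->" lines) from ipvsadm -L -n output.
sample='TCP  192.168.1.10:3306 rr
  -> 192.168.1.7:3306             Route   1      0          0
  -> 192.168.1.8:3306             Route   1      0          0'

real_servers=$(printf '%s\n' "$sample" | awk '$1 == "->" {print $2}')
printf '%s\n' "$real_servers"
```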
    Bind the newly created VIP to the network interface of the LVS director:
    ifconfig eth0:0 192.168.1.10
    Switch to each RealServer node and run:
    [root@linux02 ~]# /sbin/ifconfig lo:10 192.168.1.10 broadcast 192.168.1.10 netmask 255.255.255.255
    Verify with ifconfig that the address is bound:
    [root@linux02 ~]# ifconfig lo:10
    lo:10 Link encap:Local Loopback
    inet addr:192.168.1.10 Mask:255.255.255.255
    UP LOOPBACK RUNNING MTU:16436 Metric:1
    Suppress ARP on the loopback alias so the real servers do not answer ARP requests for the VIP:
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
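    These echo settings do not survive a reboot. To persist them, the equivalent keys can go into /etc/sysctl.conf on each real server and be applied with sysctl -p (a sketch of the additions):

```ini
# /etc/sysctl.conf additions on each RealServer (LVS-DR ARP suppression)
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
```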
    Run the same RealServer steps on 192.168.1.8, hostname linux03.
    LVS is now configured; the application layer can reach the slave nodes through 192.168.1.10.
    7. Test LVS
    Run the following (twice, to watch round robin alternate between the two slaves):
    [root@recover ~]# mysql -usystem -p'oralinux' -h 192.168.1.10 -P 3306 -e "show variables like 'server_id'"
    Warning: Using a password on the command line interface can be insecure.
    +---------------+-------+
    | Variable_name | Value |
    +---------------+-------+
    | server_id | 613 |
    +---------------+-------+
    [root@recover ~]# mysql -usystem -p'oralinux' -h 192.168.1.10 -P 3306 -e "show variables like 'server_id'"
    Warning: Using a password on the command line interface can be insecure.
    +---------------+-------+
    | Variable_name | Value |
    +---------------+-------+
    | server_id | 612 |
    +---------------+-------+
    The alternating server_id values (613, 612) show that MySQL reads are now load balanced.
    <>Installing and configuring Keepalived
    To see why Keepalived is needed, stop MySQL on 192.168.1.7 (hostname linux02) and connect through the VIP again:
    [root@recover ~]# mysql -usystem -p'oralinux' -h 192.168.1.10 -P 3306 -e "show variables like 'server_id'"
    Warning: Using a password on the command line interface can be insecure.
    +---------------+-------+
    | Variable_name | Value |
    +---------------+-------+
    | server_id | 613 |
    +---------------+-------+
    [root@recover ~]# mysql -usystem -p'oralinux' -h 192.168.1.10 -P 3306 -e "show variables like 'server_id'"
    Warning: Using a password on the command line interface can be insecure.
    ERROR 2003 (HY000): Can't connect to MySQL server on '192.168.1.10' (111)
    [root@recover ~]# mysql -usystem -p'oralinux' -h 192.168.1.10 -P 3306 -e "show variables like 'server_id'"
    Warning: Using a password on the command line interface can be insecure.
    +---------------+-------+
    | Variable_name | Value |
    +---------------+-------+
    | server_id | 613 |
    +---------------+-------+
    [root@recover ~]# mysql -usystem -p'oralinux' -h 192.168.1.10 -P 3306 -e "show variables like 'server_id'"
    Warning: Using a password on the command line interface can be insecure.
    ERROR 2003 (HY000): Can't connect to MySQL server on '192.168.1.10' (111)
    [root@recover ~]#
    Half the connections now fail: plain LVS performs no health checks and no failover. This is where Keepalived comes in.
    Keepalived provides three functions: floating the VIP between machines, generating IPVS rules, and health-checking the real servers.
    1. Download Keepalived from www.keepalived.org and install it on the LVS director, 192.168.1.9 (hostname linux04).
    As root:
    tar -zxvf keepalived-1.2.7.tar.gz
    chmod -R 775 keepalived-1.2.7/
    cd keepalived-1.2.7
    ./configure --prefix=/keepalived --with-kernel-dir=/usr/src/kernels/2.6.32-358.el6.x86_64/
    make
    make install
    2. As root, copy the binaries and scripts into standard paths so they can be invoked directly:
    cp /keepalived/sbin/keepalived /usr/sbin/
    cp /keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
    cp /keepalived/etc/sysconfig/keepalived /etc/sysconfig/
    3. Configure Keepalived:
    mkdir /etc/keepalived
    vi /etc/keepalived/keepalived.conf
    global_defs {
        notification_email {
            jasoname@qq.com
        }
        notification_email_from jasoname@qq.com
        smtp_server 127.0.0.1
        smtp_connect_timeout 30
        router_id LVS_1_1
    }

    vrrp_instance V1_MYSQL_READ {
        state MASTER
        interface eth0
        virtual_router_id 1
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 3306
        }
        virtual_ipaddress {
            192.168.1.10
        }
    }

    virtual_server 192.168.1.10 3306 {
        delay_loop 6
        lb_algo rr
        lb_kind DR
        net_mask 255.255.255.0
        #persistence_timeout 20
        protocol TCP

        real_server 192.168.1.7 3306 {
            weight 1
            TCP_CHECK {
                connect_timeout 5
                nb_get_retry 3
                delay_before_retry 3
                connect_port 3306
            }
        }
        real_server 192.168.1.8 3306 {
            weight 1
            TCP_CHECK {
                connect_timeout 5
                nb_get_retry 3
                delay_before_retry 3
                connect_port 3306
            }
        }
    }

    4. Start the keepalived service. First flush the rules created earlier by hand (Keepalived will regenerate them):
    [root@linux04 ~]# ipvsadm -C
    [root@linux04 ~]# service keepalived start
    Starting keepalived: [ OK ]
    5. Inspect the IPVS rules generated by Keepalived:
    [root@linux04 keepalived]# ipvsadm -L -n
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
    -> RemoteAddress:Port Forward Weight ActiveConn InActConn
    TCP 192.168.1.10:3306 rr
    -> 192.168.1.7:3306 Route 1 0 0
    -> 192.168.1.8:3306 Route 1 0 0

    <>A Dual-Master high-availability environment
    LVS + Keepalived + MySQL slaves improves read availability, but the master is still the only node accepting writes. How do we fix that? From a pure data-safety standpoint the master's data is not a single copy, but once reads and writes are split, the write path depends on a single node.
    Next we set up bidirectional (master-master) replication.
    With no one modifying objects, check the current binary log file and position on the original slave (linux02):
    system@(none)>show master status\G
    *************************** 1. row ***************************
    File: mysql-bin.000032
    Position: 120
    Switch to the original master (linux01) and make it replicate from the original slave, starting at that position:
    system@5ienet>change master to master_host='192.168.1.7',master_port=3306,master_user='repl',master_password='oralinux',master_log_file='mysql-bin.000032',master_log_pos=120;
    Query OK, 0 rows affected, 2 warnings (0.01 sec)

    system@5ienet>start slave;
    Query OK, 0 rows affected (0.01 sec)

    On linux02, run:
    create table 5ienet.t4(id int not null auto_increment,v1 varchar(20),primary key(id));
    On linux01, check that the table replicated:
    system@5ienet> desc t4;
    +-------+-------------+------+-----+---------+----------------+
    | Field | Type | Null | Key | Default | Extra |
    +-------+-------------+------+-----+---------+----------------+
    | id | int(11) | NO | PRI | NULL | auto_increment |
    | v1 | varchar(20) | YES | | NULL | |
    +-------+-------------+------+-----+---------+----------------+
    2 rows in set (0.00 sec)
    Bidirectional replication has a hazard: both ends accept writes, possibly into the same object. For example, table primary keys are usually auto-increment, and rows are inserted without specifying a key value; if both nodes insert into the same table concurrently, the generated primary keys can easily collide even when the explicitly specified column values differ. Let's simulate this:
    Stop the slave threads on linux01:
    system@5ienet>stop slave;
    Query OK, 0 rows affected (0.01 sec)
    Insert a row on linux02:
    system@(none)>insert into 5ienet.t4 (v1) values('192.168.1.7');
    Query OK, 1 row affected (0.00 sec)
    Query t4 on linux02:
    +----+-------------+
    | id | v1 |
    +----+-------------+
    | 1 | 192.168.1.7 |
    +----+-------------+
    1 row in set (0.00 sec)
    Query t4 on linux01:
    system@5ienet>select * from 5ienet.t4;
    Empty set (0.00 sec)
    The table is empty because the local slave threads are stopped, so linux02's insert has not been applied. Now insert a row on linux01:
    system@5ienet>insert into 5ienet.t4 (v1) values('192.168.1.6');
    Query OK, 1 row affected (0.00 sec)
    Start the slave threads on linux01:
    system@5ienet>start slave;
    Query OK, 0 rows affected (0.00 sec)
    system@5ienet>show slave status\G
    *************************** 1. row ***************************
    Slave_IO_State: Waiting for master to send event
    Master_Host: 192.168.1.7
    Master_User: repl
    Master_Port: 3306
    Connect_Retry: 60
    Master_Log_File: mysql-bin.000032
    Read_Master_Log_Pos: 537
    Relay_Log_File: mysql-relay-bin.000003
    Relay_Log_Pos: 283
    Relay_Master_Log_File: mysql-bin.000032
    Slave_IO_Running: Yes
    Slave_SQL_Running: No
    Replicate_Do_DB:
    Replicate_Ignore_DB:
    Replicate_Do_Table:
    Replicate_Ignore_Table:
    Replicate_Wild_Do_Table:
    Replicate_Wild_Ignore_Table:
    Last_Errno: 1062
    Last_Error: Error 'Duplicate entry '1' for key 'PRIMARY'' on query. Default database: ''. Query: 'insert into 5ienet.t4 (v1) values('192.168.1.7')'
    Skip_Counter: 0
    Exec_Master_Log_Pos: 277
    Relay_Log_Space: 1036
    Until_Condition: None
    Until_Log_File:
    Until_Log_Pos: 0
    Master_SSL_Allowed: No
    Master_SSL_CA_File:
    Master_SSL_CA_Path:
    Master_SSL_Cert:
    Master_SSL_Cipher:
    Master_SSL_Key:
    Seconds_Behind_Master: NULL
    Master_SSL_Verify_Server_Cert: No
    Last_IO_Errno: 0
    Last_IO_Error:
    Last_SQL_Errno: 1062
    Last_SQL_Error: Error 'Duplicate entry '1' for key 'PRIMARY'' on query. Default database: ''. Query: 'insert into 5ienet.t4 (v1) values('192.168.1.7')'
    Replicate_Ignore_Server_Ids:
    Master_Server_Id: 612
    Master_UUID: 2d88ad71-23e0-11e7-8222-080027f93f02
    Master_Info_File: /mysql/conf/master.info
    SQL_Delay: 0
    SQL_Remaining_Delay: NULL
    Slave_SQL_Running_State:
    Master_Retry_Count: 86400
    Master_Bind:
    Last_IO_Error_Timestamp:
    Last_SQL_Error_Timestamp: 170424 15:07:30
    Master_SSL_Crl:
    Master_SSL_Crlpath:
    Retrieved_Gtid_Set:
    Executed_Gtid_Set:
    Auto_Position: 0
    1 row in set (0.00 sec)
    The slave SQL thread has stopped, and the replication status reports the error: a duplicate primary key. The other node hits the same kind of error:
    system@(none)> show slave status\G
    *************************** 1. row ***************************
    Slave_IO_State: Waiting for master to send event
    Master_Host: 192.168.1.6
    Master_User: repl
    Master_Port: 3306
    Connect_Retry: 60
    Master_Log_File: mysql-bin.000024
    Read_Master_Log_Pos: 473194323
    Relay_Log_File: mysql-relay-bin.000019
    Relay_Log_Pos: 283
    Relay_Master_Log_File: mysql-bin.000024
    Slave_IO_Running: Yes
    Slave_SQL_Running: No
    Replicate_Do_DB:
    Replicate_Ignore_DB:
    Replicate_Do_Table:
    Replicate_Ignore_Table:
    Replicate_Wild_Do_Table:
    Replicate_Wild_Ignore_Table:
    Last_Errno: 1062
    Last_Error: Error 'Duplicate entry '1' for key 'PRIMARY'' on query. Default database: '5ienet'. Query: 'insert into 5ienet.t4 (v1) values('192.168.1.6')'
    Skip_Counter: 0
    Exec_Master_Log_Pos: 473194051
    Relay_Log_Space: 728
    Until_Condition: None
    Until_Log_File:
    Until_Log_Pos: 0
    Master_SSL_Allowed: No
    Master_SSL_CA_File:
    Master_SSL_CA_Path:
    Master_SSL_Cert:
    Master_SSL_Cipher:
    Master_SSL_Key:
    Seconds_Behind_Master: NULL
    Master_SSL_Verify_Server_Cert: No
    Last_IO_Errno: 0
    Last_IO_Error:
    Last_SQL_Errno: 1062
    Last_SQL_Error: Error 'Duplicate entry '1' for key 'PRIMARY'' on query. Default database: '5ienet'. Query: 'insert into 5ienet.t4 (v1) values('192.168.1.6')'
    Replicate_Ignore_Server_Ids:
    Master_Server_Id: 611
    Master_UUID: 2584299a-2100-11e7-af61-080027196296
    Master_Info_File: /mysql/conf/master.info
    SQL_Delay: 0
    SQL_Remaining_Delay: NULL
    Slave_SQL_Running_State:
    Master_Retry_Count: 86400
    Master_Bind:
    Last_IO_Error_Timestamp:
    Last_SQL_Error_Timestamp: 170424 15:05:25
    Master_SSL_Crl:
    Master_SSL_Crlpath:
    Retrieved_Gtid_Set:
    Executed_Gtid_Set:
    Auto_Position: 0
    1 row in set (0.00 sec)
    Handling SQL-thread apply errors in bidirectional replication:
    1. Delete the conflicting record on the source side, then re-run the statement; or
    2. Skip the error: sql_slave_skip_counter skips applying the next n events (default 0).
    set global sql_slave_skip_counter=1; skips the most recent event. Run the same on each node, then:
    start slave;

    linux01:
    system@5ienet>set global sql_slave_skip_counter=1;
    Query OK, 0 rows affected (0.00 sec)
    system@5ienet>start slave;
    Query OK, 0 rows affected (0.01 sec)

    linux02:
    system@(none)>set global sql_slave_skip_counter=1;
    Query OK, 0 rows affected (0.00 sec)
    system@(none)>start slave;
    Query OK, 0 rows affected (0.06 sec)
    On either node, repair the data:
    system@5ienet>delete from 5ienet.t4 where v1 in('192.168.1.6','192.168.1.7');
    Query OK, 1 row affected (0.00 sec)

    system@5ienet>insert into 5ienet.t4 (v1) values('192.168.1.6');
    Query OK, 1 row affected (0.00 sec)

    system@5ienet>insert into 5ienet.t4 (v1) values('192.168.1.7');
    Query OK, 1 row affected (0.01 sec)
    Query on the other node:
    system@(none)> select * from 5ienet.t4;
    +----+-------------+
    | id | v1 |
    +----+-------------+
    | 2 | 192.168.1.6 |
    | 3 | 192.168.1.7 |
    +----+-------------+
    2 rows in set (0.00 sec)
    To avoid auto-increment collisions, either have applications connect to only one node of the dual-master pair, or allow writes on only one node. Alternatively, make the generated values disjoint:
    Two system variables control how auto_increment values grow in MySQL:
    auto_increment_increment: the step between successive auto-increment values, from 1 to 65535, default 1 (a value of 0 behaves the same as 1).
    auto_increment_offset: the offset of the sequence; "offset" may not be the most intuitive name, so think of it as the starting value. Its range and rules are identical to auto_increment_increment.
    The two are used together. For example, to start at 6 and step by 10:
    set auto_increment_increment=10;
    set auto_increment_offset=6;
    Create a table, insert rows, and observe the generated values:
    system@(none)>set auto_increment_increment=10;
    Query OK, 0 rows affected (0.00 sec)

    system@(none)>set auto_increment_offset=6;
    Query OK, 0 rows affected (0.00 sec)

    system@(none)>create table 5ienet.autoinc(col int not null auto_increment primary key);
    Query OK, 0 rows affected (0.02 sec)

    system@(none)>insert into 5ienet.autoinc values(null),(null),(null);
    Query OK, 3 rows affected (0.01 sec)
    Records: 3 Duplicates: 0 Warnings: 0

    system@(none)>select * from 5ienet.autoinc;
    +-----+
    | col |
    +-----+
    | 6 |
    | 16 |
    | 26 |
    +-----+
    3 rows in set (0.00 sec)
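    The generated values follow the rule value = auto_increment_offset + k × auto_increment_increment. A small sketch reproducing the 6, 16, 26 sequence above (helper name hypothetical):

```shell
# Generate the first COUNT auto-increment values for a given offset and
# increment, following value = offset + k*increment (k = 0, 1, 2, ...).
gen_sequence() {  # usage: gen_sequence OFFSET INCREMENT COUNT
  offset=$1; increment=$2; count=$3
  out=""
  k=0
  while [ "$k" -lt "$count" ]; do
    out="$out$(( offset + k * increment )) "
    k=$(( k + 1 ))
  done
  printf '%s\n' "${out% }"
}

gen_sequence 6 10 3    # prints: 6 16 26
```

    Note that after an offset change on an existing table, the next value also depends on the table's internal auto-increment counter, not just the last stored value, which is why the output below continues at 38.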
    Change the offset to 8:
    system@(none)>set auto_increment_offset=8;
    Query OK, 0 rows affected (0.00 sec)

    system@(none)>insert into 5ienet.autoinc values(null),(null),(null);
    Query OK, 3 rows affected (0.01 sec)
    Records: 3 Duplicates: 0 Warnings: 0

    system@(none)>select * from 5ienet.autoinc;
    +-----+
    | col |
    +-----+
    | 6 |
    | 16 |
    | 26 |
    | 38 |
    | 48 |
    | 58 |
    +-----+
    6 rows in set (0.00 sec)
    With increment and offset we can give each MySQL instance its own generation rule. For our dual-master environment, set the increment to 2 on both nodes, with an offset of 1 on one node and 2 on the other: one node always generates odd values, the other always even values. Since the two rules produce disjoint sequences, the generated keys can never collide.
    Make the change permanent in each node's my.cnf.
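    A minimal sketch of the corresponding my.cnf entries (file path as used elsewhere in this document; values per the odd/even scheme above):

```ini
# /mysql/conf/my.cnf on the first master: generates 1, 3, 5, ...
[mysqld]
auto_increment_increment = 2
auto_increment_offset    = 1

# /mysql/conf/my.cnf on the second master: generates 2, 4, 6, ...
[mysqld]
auto_increment_increment = 2
auto_increment_offset    = 2
```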

    <>Automatic VIP failover for the dual-master environment (active/standby)
    Because concurrent writes to a dual-master pair cause the problems above, we give up load balancing writes and instead implement automatic VIP failover, which still improves database availability.
    Install and configure keepalived on both nodes (installation omitted).
    1. On the primary node:
    Edit keepalived.conf:

    vrrp_script check_run {
        script "/keepalived/bin/ka_check_mysql.sh"
        interval 10
    }

    vrrp_instance VPS {
        state BACKUP    # start both servers as BACKUP to avoid flapping (master-role contention) on service restart
        interface eth0
        virtual_router_id 34
        priority 100    # priority; set this slightly lower on the other node
        advert_int 1
        nopreempt       # no preemption; set only on the higher-priority machine, not the lower one
        authentication {
            auth_type PASS
            auth_pass 3141
        }
        virtual_ipaddress {
            192.168.1.11
        }
        track_script {
            check_run
        }
    }

    ka_check_mysql.sh

    #!/bin/bash
    source /mysql/scripts/mysql_env.ini
    MYSQL_CMD=/mysql/bin/mysql
    CHECK_TIME=3    # check three times
    MYSQL_OK=1      # MYSQL_OK is 1 while the MySQL service is working fine, else 0

    function check_mysql_health() {
        $MYSQL_CMD -u${MYSQL_USER} -p${MYSQL_PASS} -S /mysql/conf/mysql.sock -e "show status;" > /dev/null 2>&1
        if [ $? = 0 ]; then
            MYSQL_OK=1
        else
            MYSQL_OK=0
        fi
        return $MYSQL_OK
    }

    while [ $CHECK_TIME -ne 0 ]
    do
        let "CHECK_TIME -= 1"
        check_mysql_health
        if [ $MYSQL_OK = 1 ]; then
            CHECK_TIME=0
            exit 0
        fi

        if [ $MYSQL_OK -eq 0 ] && [ $CHECK_TIME -eq 0 ]; then
            /etc/init.d/keepalived stop
            exit 1
        fi
        sleep 1
    done

    The script checks whether the local MySQL instance accepts connections; if three consecutive attempts fail, it stops the local keepalived service, deliberately triggering VIP failover.
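    The control flow can be exercised in isolation by stubbing out the MySQL probe (a sketch; the stub below always reports failure, standing in for three failed connection attempts):

```shell
# Standalone model of the three-attempt loop in ka_check_mysql.sh,
# with check_mysql_health replaced by a stub that always fails.
CHECK_TIME=3
MYSQL_OK=0
action="none"

check_mysql_health() {
  MYSQL_OK=0   # stub: pretend the connection attempt failed
}

while [ "$CHECK_TIME" -ne 0 ]; do
  CHECK_TIME=$(( CHECK_TIME - 1 ))
  check_mysql_health
  if [ "$MYSQL_OK" -eq 1 ]; then
    action="keep-running"
    break
  fi
  if [ "$MYSQL_OK" -eq 0 ] && [ "$CHECK_TIME" -eq 0 ]; then
    action="stop-keepalived"   # the real script runs /etc/init.d/keepalived stop here
  fi
done
echo "$action"    # prints: stop-keepalived
```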
    Make the script executable:
    chmod +x ka_check_mysql.sh
    Start keepalived:
    service keepalived start
    The VIP held by keepalived does not show up in plain ifconfig output; use ip addr to see it:
    [root@linux01 keepalived]# ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:19:62:96 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.6/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.11/32 scope global eth0
    inet6 fe80::a00:27ff:fe19:6296/64 scope link
    valid_lft forever preferred_lft forever
    3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:b3:a6:de brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a00:27ff:feb3:a6de/64 scope link
    valid_lft forever preferred_lft forever
    The application layer can now reach the master instance of the replication environment through the VIP 192.168.1.11.

    2. On the standby node:
    Configure the other master node.
    Install keepalived (omitted).

    Edit keepalived.conf:

    vrrp_script check_run {
        script "/keepalived/bin/ka_check_mysql.sh"
        interval 10
    }

    vrrp_instance VPS {
        state BACKUP
        interface eth0
        virtual_router_id 34
        priority 90     # lower priority on this node
        advert_int 1
        nopreempt       # no preemption; only needed on the higher-priority machine
        authentication {
            auth_type PASS
            auth_pass 3141
        }
        virtual_ipaddress {
            192.168.1.11
        }
        track_script {
            check_run
        }
    }

    Copy ka_check_mysql.sh over from the other node.
    Start keepalived with service keepalived start.

    3. Test the high availability:
    From a client, run:
    [root@recover ~]# mysql -usystem -p'oralinux' -h 192.168.1.11 -N -e "select @@hostname"
    Warning: Using a password on the command line interface can be insecure.
    +---------+
    | linux01 |
    +---------+
    It returns linux01, which is correct, since linux01 is the primary node. Now stop the services on linux01 to test failover.
    Then ip addr on node 2 shows the VIP has floated to this node:
    [root@linux02 bin]# ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet 192.168.1.10/32 brd 192.168.1.10 scope global lo:10
    inet6 ::1/128 scope host
    valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:f9:3f:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.7/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.11/32 scope global eth0
    inet6 fe80::a00:27ff:fef9:3f02/64 scope link
    valid_lft forever preferred_lft forever
    3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:fd:29:66 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a00:27ff:fefd:2966/64 scope link
    valid_lft forever preferred_lft forever
    Connect again from the client:
    [root@recover ~]# mysql -usystem -p'oralinux' -h 192.168.1.11 -N -e "select @@hostname"
    Warning: Using a password on the command line interface can be insecure.
    +---------+
    | linux02 |
    +---------+

    <>DRBD gives the master's data stronger protection: DRBD (Distributed Replicated Block Device) replicates at the block-device level.
    A typical stack is DRBD + Pacemaker + Corosync.

    <>The official MySQL Cluster
    Management node: the management service mentioned earlier, which manages the other nodes in the MySQL Cluster; it distributes configuration, starts and stops nodes, runs backup tasks, and so on. Because it manages the other nodes, it must be started first, via the command-line tool ndb_mgmd.
    Data node: stores the cluster's data. The number of data nodes should normally equal the number of replicas multiplied by the number of data fragments. Replicas provide redundancy; for any environment with high-availability requirements, each piece of data should have at least 2 replicas to be safe. Data nodes are started with the command-line tool ndbd.
    SQL node: serves clients reading the cluster's data. Think of it as a MySQL server using the NDBCLUSTER engine (started with the --ndbcluster and --ndb-connectstring options); it is a special kind of API node. Although the SQL node also starts a program named mysqld, note that it differs from the mysqld in the standard MySQL distribution: it is a dedicated mysqld process, and the two are not interchangeable. Moreover, even the cluster-specific mysqld cannot read or write cluster data through the NDB engine unless it is connected to the MySQL Cluster management server.

    In MySQL Cluster, a "node" is a process of one of these types; a machine running several nodes is called a "host" in cluster terminology.
    MySQL Cluster Community Edition download: http://dev.mysql.com/downloads/cluster

    <>Installing and configuring the Cluster
    Management node: 192.168.1.20
    Data node 1: 192.168.1.21
    Data node 2: 192.168.1.22
    SQL node 1: 192.168.1.21
    SQL node 2: 192.168.1.22

    Build the Cluster from source, as root:
    mkdir /mysql/conf
    tar -zxvf
    cd
    cmake . -DCMAKE_INSTALL_PREFIX=/mysql \
        -DDEFAULT_CHARSET=utf8 \
        -DDEFAULT_COLLATION=utf8_general_ci \
        -DWITH_NDB_JAVA=OFF \
        -DWITH_FEDERATED_STORAGE_ENGINE=1 \
        -DWITH_NDBCLUSTER_STORAGE_ENGINE=1 \
        -DCOMPILATION_COMMENT='JASON for MySQLCluster' \
        -DWITH_READLINE=ON \
        -DSYSCONFDIR=/mysql/conf \
        -DMYSQL_UNIX_ADDR=/mysql/conf/mysql.sock

    make && make install
    The steps are basically the same as installing MySQL Server, except for two additional options:
    WITH_NDB_JAVA: enables Java support; introduced in Cluster 7.2.9 and on by default. If you need Java support, you must also pass WITH_CLASSPATH to point at the JDK; since none of the servers in this test environment has a JDK installed, we disable it.
    WITH_NDBCLUSTER_STORAGE_ENGINE: enables the NDBCLUSTER engine.
    chown -R mysql:mysql /mysql
    Add export PATH=/mysql/bin:$PATH to /home/mysql/.bash_profile so the mysql user can invoke the Cluster command-line tools from any directory.
    Perform the above on all three servers. If their hardware and software environments are identical, you can build on one server, then tar up the installed directory and unpack it on the others.

    Configuration, as the mysql user:
    1. Configure the management node:
    mkdir /mysql/mysql-cluster
    vi /mysql/mysql-cluster/config.ini
    Add the following:
    [ndbd default]
    NoOfReplicas=2    # number of replicas; use at least 2, otherwise the data has no redundancy
    DataMemory=200M   # memory allocated for data (small value, test environment only)
    IndexMemory=30M   # memory allocated for indexes (small value, test environment only)

    [ndb_mgmd]
    # management node options
    hostname=192.168.1.20
    datadir=/mysql/mysql-cluster

    [ndbd]
    # data node options
    hostname=192.168.1.21
    datadir=/mysql/mysql-cluster/data

    [ndbd]
    # data node options
    hostname=192.168.1.22
    datadir=/mysql/mysql-cluster/data

    [mysqld]
    # SQL node options
    hostname=192.168.1.21

    [mysqld]
    # SQL node options
    hostname=192.168.1.22

    2. Configure the Data and SQL nodes, on both 192.168.1.21 and 192.168.1.22:
    Add to /mysql/conf/my.cnf:
    [mysqld]
    ndbcluster

    [mysql_cluster]
    ndb-connectstring=192.168.1.20
    Initialize the databases on 192.168.1.21/22:
    /mysql/scripts/mysql_install_db --datadir=/mysql/data --basedir=/mysql
    Only one parameter is defined here, ndb-connectstring, which points at the management node. Note that once the ndbcluster and ndb-connectstring options are set and the MySQL server is started, create table and alter table statements cannot be executed while the cluster itself is down.

    mkdir /mysql/mysql-cluster/data
    Configuration is now complete. Start the MySQL Cluster:
    First start the management daemon, on 192.168.1.20: ndb_mgmd -f /mysql/mysql-cluster/config.ini
    [mysql@linux05 mysql-cluster]$ ndb_mgmd -f /mysql/mysql-cluster/config.ini
    MySQL Cluster Management Server mysql-5.6.14 ndb-7.3.3

    ndb_mgm opens the dedicated management command-line client:
    [mysql@linux05 mysql-cluster]$ ndb_mgm
    -- NDB Cluster -- Management Client --
    ndb_mgm>
    show reports the current cluster state:
    ndb_mgm> show
    Connected to Management Server at: localhost:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)] 2 node(s)
    id=2 (not connected, accepting connect from 192.168.1.21)
    id=3 (not connected, accepting connect from 192.168.1.22)

    [ndb_mgmd(MGM)] 1 node(s)
    id=1 @192.168.1.20 (mysql-5.6.14 ndb-7.3.3)

    [mysqld(API)] 2 node(s)
    id=4 (not connected, accepting connect from 192.168.1.21)
    id=5 (not connected, accepting connect from 192.168.1.22)

    Switch to 192.168.1.21/22 and start the data nodes with ndbd --initial. Note: --initial is required only the very first time a data node starts; omit it on subsequent starts, otherwise the node's local data is wiped.
    [mysql@linux06 bin]$ ndbd --initial
    2017-04-27 13:44:11 [ndbd] INFO -- Angel connected to '192.168.1.20:1186'
    2017-04-27 13:44:11 [ndbd] INFO -- Angel allocated nodeid: 2

    [mysql@linux07 conf]$ ndbd --initial
    2017-04-27 13:44:43 [ndbd] INFO -- Angel connected to '192.168.1.20:1186'
    2017-04-27 13:44:43 [ndbd] INFO -- Angel allocated nodeid: 3

    Switch to 192.168.1.21/22 and start the SQL nodes: mysqld_safe --defaults-file=/mysql/conf/my.cnf &
    Back on the management node, run show again:
    ndb_mgm> show
    Cluster Configuration
    ---------------------
    [ndbd(NDB)] 2 node(s)
    id=2 @192.168.1.21 (mysql-5.6.14 ndb-7.3.3, Nodegroup: 0, *)
    id=3 @192.168.1.22 (mysql-5.6.14 ndb-7.3.3, Nodegroup: 0)

    [ndb_mgmd(MGM)] 1 node(s)
    id=1 @192.168.1.20 (mysql-5.6.14 ndb-7.3.3)

    [mysqld(API)] 2 node(s)
    id=4 @192.168.1.21 (mysql-5.6.14 ndb-7.3.3)
    id=5 @192.168.1.22 (mysql-5.6.14 ndb-7.3.3)
    Shutting down the Cluster: SQL nodes stop via the traditional mysqladmin shutdown; data nodes stop via the shutdown subcommand inside ndb_mgm.

    Trying out the Cluster:
    nodeid4>use test;
    Database changed
    nodeid4>create table n1(id int not null auto_increment primary key,v1 varchar(20)) engine=ndb;
    Query OK, 0 rows affected (0.26 sec)
    When creating a table, specify engine=ndb to make it an NDB table.
    nodeid4>insert into n1 values(null,'a');
    Query OK, 1 row affected (0.02 sec)
    nodeid5>select * from test.n1;
    +----+------+
    | id | v1 |
    +----+------+
    | 1 | a |
    +----+------+
    1 row in set (0.01 sec)
    nodeid5>insert into test.n1 values(null,'b');
    Query OK, 1 row affected (0.00 sec)
    nodeid4>select * from test.n1;
    +----+------+
    | id | v1 |
    +----+------+
    | 1 | a |
    | 2 | b |
    +----+------+
    2 rows in set (0.00 sec)

    Shut down the SQL node on nodeid5: mysqladmin shutdown
    Then continue inserting on nodeid4:

    nodeid4>insert into n1 values(null,'c');
    Query OK, 1 row affected (0.00 sec)
    Restart nodeid5: mysqld_safe --defaults-file=/mysql/conf/my.cnf &
    nodeid5>select * from test.n1;
    +----+------+
    | id | v1 |
    +----+------+
    | 1 | a |
    | 2 | b |
    | 3 | c |
    +----+------+
    3 rows in set (0.01 sec)


    The SQL nodes in a Cluster can likewise sit behind LVS, with a VIP routed to them, to give the application layer connection high availability and load balancing.
    Why Cluster is not widely used: all table data that is operated on must be in memory. (Data is persisted to disk, but any data to be read or written must first be loaded into memory; unlike a traditional database, where only the hottest data is cached, here all of it must fit.) In other words, the combined memory of the NDB data nodes essentially determines the database size NDBCLUSTER can hold. In recent NDBCLUSTER versions, non-indexed columns can stay on disk, but index data must still be loaded into memory. This is why it is called an in-memory database.
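    As a back-of-the-envelope illustration of that constraint, with the test config.ini above (DataMemory=200M per data node, NoOfReplicas=2, two data nodes forming one node group), every row is held in RAM with two copies, so usable data capacity is roughly total DataMemory divided by the replica count. This is a simplification that ignores per-row and index overhead:

```shell
# Rough NDB capacity estimate: total DataMemory across data nodes,
# divided by the number of replicas of each row.
data_nodes=2
datamemory_per_node_mb=200   # DataMemory from config.ini
no_of_replicas=2             # NoOfReplicas from config.ini
usable_mb=$(( data_nodes * datamemory_per_node_mb / no_of_replicas ))
echo "${usable_mb} MB usable"   # prints: 200 MB usable
```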

    <>Scaling the database service further
    Shard when it is time to shard,
    and think the handling strategy through first.

  • Original article: https://www.cnblogs.com/datalife/p/6812342.html