  • MHA: A High-Availability Cluster Combining MySQL Semi-Synchronous Replication (CentOS 7)

    I. Theory Overview

    During a MySQL failover, MHA can complete the switchover automatically within 0-30 seconds, and while doing so it preserves data consistency as far as possible, achieving high availability in a real sense.

    Advantages                                           Disadvantages
    Open-source tool written in Perl                     VIP handling requires a custom script or a third-party tool
    Supports GTID-based replication                      Once started, MHA monitors only the master
    One monitoring node can manage multiple clusters     Relies on passwordless SSH, which is a potential security risk
    Failover with MHA is less likely to lose data        Provides no read load balancing across the slaves

    GTIDs greatly simplify replication. A GTID is strictly transaction-based: once a transaction commits on the master, it is guaranteed to be executed on the slaves.
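    For reference, GTID-based replication is switched on in my.cnf roughly as follows. This is a sketch only; this lab uses classic binlog-position replication instead:

    ```ini
    [mysqld]
    # each server still needs a unique server-id, as in the config examples below
    gtid_mode = ON
    enforce_gtid_consistency = ON
    log-bin = mysql-binlog
    log-slave-updates = ON
    ```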

    MHA consists of two parts: the MHA manager and the MHA node component. The manager can run on a dedicated machine and manage multiple clusters, or on one of the slaves and manage only the cluster it belongs to. The MHA node component runs on every MySQL server and on the manager server. The manager polls the master periodically; when the master fails, it automatically promotes the slave holding the most recent data to be the new master.

    During a switchover MHA tries to save the binary logs from the crashed master, so data loss is minimized, but some probability of loss remains: if the master has a hardware failure or is unreachable over SSH, MHA cannot save the binary logs and fails over without the most recent events. Semi-synchronous replication (available since MySQL 5.5) lowers this risk, and MHA can be combined with it: as long as at least one slave has received the latest binary log events, MHA can apply them to all the other slaves, keeping every node consistent.

    Deployment plan for this lab

    The lab environment is a single cluster, so the manager node is deployed on one of the slaves and manages only that cluster.

    To save resources, the lab uses one master, one standby master (which also serves reads), plus one additional slave; the master handles reads as well when idle.

    Because pinning a specific version with yum is awkward, I install MySQL from the binary tarball, use LVS to schedule the read traffic, and keepalived to make LVS highly available.

    II. Environment

    The firewall and SELinux are disabled for the duration of the test.

    Hostname        IP address       MHA role                    MySQL role
    master          192.168.111.3    primary                     master
    node1           192.168.111.4    MHA manager node, slave     slave
    node2           192.168.111.5    standby primary             slave
    lvs1            192.168.111.6    lvs, keepalived
    lvs2            192.168.111.7    lvs, keepalived
    MHA VIP         192.168.111.100
    keepalived VIP  192.168.111.200

    III. Deployment

    Deploying MHA

    • Base environment
    [root@localhost ~]# vim /etc/hosts
    
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.111.3   master
    192.168.111.4   node1
    192.168.111.5   node2
    192.168.111.6   lvs1
    192.168.111.7   lvs2
    
    [root@localhost ~]# scp /etc/hosts root@node1:/etc/
    
    [root@localhost ~]# scp /etc/hosts root@node2:/etc/
    
    [root@localhost ~]# scp /etc/hosts root@lvs1:/etc/
    
    [root@localhost ~]# scp /etc/hosts root@lvs2:/etc/
    
    [root@localhost ~]# hostname master
    [root@localhost ~]# bash
    [root@master ~]# uname -n
    master
    
    [root@localhost ~]# hostname node1
    [root@localhost ~]# bash
    [root@node1 ~]# uname -n
    node1
    
    [root@localhost ~]# hostname node2
    [root@localhost ~]# bash
    [root@node2 ~]# uname -n
    node2
    
    [root@localhost ~]# hostname lvs1
    [root@localhost ~]# bash
    [root@lvs1 ~]# uname -n
    lvs1
    
    [root@localhost ~]# hostname lvs2
    [root@localhost ~]# bash
    [root@lvs2 ~]# uname -n
    lvs2
    
    
    
    • Download and install MHA (same steps on every machine)
    http://downloads.mariadb.com/MHA/
    #download MHA-manager and MHA-node from the URL above
    
    The versions used here are mha4mysql-manager-0.56.tar.gz and mha4mysql-node-0.56.tar.gz.
    
    Configure an epel repository yourself.
    
    [root@master ~]# yum install -y perl-DBD-MySQL.x86_64 perl-DBI.x86_64 perl-CPAN perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker
    #dependency packages
    
    [root@master ~]# rpm -q perl-DBD-MySQL.x86_64 perl-DBI.x86_64 perl-CPAN perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker
    #verify; all of them must be installed
    
    #install the node package on every node
    [root@master ~]# tar xf mha4mysql-node-0.56.tar.gz 
    [root@master ~]# cd mha4mysql-node-0.56/
    [root@master mha4mysql-node-0.56]# perl Makefile.PL
    [root@master mha4mysql-node-0.56]# make && make install
    
    [root@master ~]# ls -l /usr/local/bin/
    total 40
    -r-xr-xr-x 1 root root 16346 May 19 16:21 apply_diff_relay_logs
    -r-xr-xr-x 1 root root  4807 May 19 16:21 filter_mysqlbinlog
    -r-xr-xr-x 1 root root  7401 May 19 16:21 purge_relay_logs
    -r-xr-xr-x 1 root root  7395 May 19 16:21 save_binary_logs
    #the generated binaries
    
    Installing the manager
    Some of the dependencies are already installed from the step above; add whatever is still missing:
    
    [root@node1 mha4mysql-node-0.56]# yum -y install perl perl-Log-Dispatch perl-Parallel-ForkManager perl-DBD-MySQL perl-DBI perl-Time-HiRes perl-Config-Tiny
    
    [root@node1 mha4mysql-node-0.56]# rpm -q perl perl-Log-Dispatch perl-Parallel-ForkManager perl-DBD-MySQL perl-DBI perl-Time-HiRes perl-Config-Tiny
    perl-5.16.3-294.el7_6.x86_64
    perl-Log-Dispatch-2.41-1.el7.1.noarch
    perl-Parallel-ForkManager-1.18-2.el7.noarch
    perl-DBD-MySQL-4.023-6.el7.x86_64
    perl-DBI-1.627-4.el7.x86_64
    perl-Time-HiRes-1.9725-3.el7.x86_64
    perl-Config-Tiny-2.14-7.el7.noarch
    
    [root@node1 ~]# tar xf mha4mysql-manager-0.56.tar.gz 
    [root@node1 ~]# cd mha4mysql-manager-0.56/
    [root@node1 mha4mysql-manager-0.56]# perl Makefile.PL && make && make install
    
    [root@node1 mha4mysql-manager-0.56]# ls /usr/local/bin/
    apply_diff_relay_logs  masterha_check_ssh     masterha_manager         masterha_secondary_check  save_binary_logs
    filter_mysqlbinlog     masterha_check_status  masterha_master_monitor  masterha_stop
    masterha_check_repl    masterha_conf_host     masterha_master_switch   purge_relay_logs
    
    • Configure passwordless SSH
    On the manager:
    
    [root@node1 ~]# ssh-keygen -t rsa
    Generating public/private rsa key pair.
    Enter file in which to save the key (/root/.ssh/id_rsa): 
    Created directory '/root/.ssh'.
    Enter passphrase (empty for no passphrase): 
    Enter same passphrase again: 
    Your identification has been saved in /root/.ssh/id_rsa.
    Your public key has been saved in /root/.ssh/id_rsa.pub.
    The key fingerprint is:
    SHA256:qvOacPMWP+iO4BcxPtHJkVDdJXE4HNklCxlPkWShdMM root@node1
    The key's randomart image is:
    +---[RSA 2048]----+
    |    .o.o oB%%=.  |
    |      o ..BXE+   |
    |     o o  ..o    |
    |    + +          |
    |   . +  S        |
    |    + ..         |
    |  o oo.+         |
    | . +o*o o        |
    |  ..=B=  .       |
    +----[SHA256]-----+
    #I accepted all the defaults by pressing Enter; you can also specify your own options
    
    [root@node1 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@master
    
    [root@node1 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node2
    
    [root@node1 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node1
    #node1 is the local machine, so logically this changes nothing, but for symmetry node1 should receive the public key too; if you are curious, look into how SSH authentication actually works
    
    [root@node1 ~]# ssh node1
    Last login: Sat Apr 27 13:33:49 2019 from 192.168.111.1
    [root@node1 ~]# exit
    logout
    Connection to node1 closed.
    [root@node1 ~]# ssh node2
    Last login: Thu Apr 18 22:55:10 2019 from 192.168.111.1
    [root@node2 ~]# exit
    logout
    Connection to node2 closed.
    [root@node1 ~]# ssh master
    Last login: Sun May 19 16:00:20 2019 from 192.168.111.1
    #to be safe, I tried each one
    
    Every node also needs to distribute its public key. Normally the keys would only need to reach the MySQL nodes, but in this lab the manager is not on a dedicated server, it lives on one of the nodes, so it must receive the keys as well.
    
    [root@master ~]# ssh-keygen -t rsa
    
    [root@master ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node1
    
    [root@master ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node2
    
    [root@node2 ~]# ssh-keygen -t rsa
    
    [root@node2 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@master
    
    [root@node2 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node1
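    The repeated key-distribution calls can be compressed into a loop. Here is a dry-run sketch that only prints the command each peer would receive (hostnames as defined in /etc/hosts above); drop the echo to actually distribute the key:

    ```shell
    # Print (rather than execute) the ssh-copy-id command for every peer host.
    for host in master node1 node2; do
      echo ssh-copy-id -i /root/.ssh/id_rsa.pub "root@$host"
    done
    ```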
    

    Installing MySQL from the binary package and setting up master-slave replication

    Binary installation of MySQL on all three nodes
    yum -y install libaio
    wget http://mirrors.sohu.com/mysql/MySQL-5.7/mysql-5.7.24-linux-glibc2.12-x86_64.tar.gz
    useradd -M -s /sbin/nologin mysql
    tar zxf mysql-5.7.24-linux-glibc2.12-x86_64.tar.gz 
    mv mysql-5.7.24-linux-glibc2.12-x86_64 /usr/local/mysql
    chown -R mysql:mysql /usr/local/mysql
    ln -s /usr/local/mysql/bin/* /usr/local/bin/
    /usr/local/mysql/bin/mysqld --user=mysql --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data --initialize
    cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysqld
    
    Record the randomly generated initial password
    2019-05-18T08:43:11.094845Z 1 [Note] A temporary password is generated for root@localhost: 2Gk75Zvp&!-y
    #after starting the service, change the password with: mysqladmin -u root -p'old password' password 'new password'
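    The temporary password is the last whitespace-separated field of that [Note] line, so it can be pulled out of the error log with awk. A sketch using the sample line above (the log path is the one set in my.cnf below):

    ```shell
    # Extract the last field of the "temporary password" log line.
    line='2019-05-18T08:43:11.094845Z 1 [Note] A temporary password is generated for root@localhost: 2Gk75Zvp&!-y'
    echo "$line" | awk '{print $NF}'   # prints 2Gk75Zvp&!-y
    # against the real log file:
    #   grep 'temporary password' /usr/local/mysql/data/mysql.log | awk '{print $NF}'
    ```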
    
    master:
    vim /etc/my.cnf
    [mysqld]
    datadir=/usr/local/mysql/data
    socket=/usr/local/mysql/data/mysql.sock
    server-id=1
    log-bin=mysql-binlog
    log-slave-updates=true
    symbolic-links=0
    [mysqld_safe]
    log-error=/usr/local/mysql/data/mysql.log
    pid-file=/usr/local/mysql/data/mysql.pid
    
    node1:
    [root@node1 ~]# vim /etc/my.cnf
    
    [mysqld]
    datadir=/usr/local/mysql/data
    socket=/tmp/mysql.sock
    server-id=2
    relay-log=relay-log-bin
    relay-log-index=slave-relay-bin.index
    symbolic-links=0
    [mysqld_safe]
    log-error=/usr/local/mysql/data/mysql.log
    pid-file=/usr/local/mysql/data/mysql.pid
    
    
    node2:
    [root@node2 mysql]# vim /etc/my.cnf
    
    [mysqld]
    datadir=/usr/local/mysql/data
    socket=/tmp/mysql.sock
    server-id=3
    relay-log=relay-log-bin
    relay-log-index=slave-relay-bin.index
    symbolic-links=0
    [mysqld_safe]
    log-error=/usr/local/mysql/data/mysql.log
    pid-file=/usr/local/mysql/data/mysql.pid
    
    
    #on the master
    mysql> grant replication slave on *.* to 'myslave'@'192.168.111.%' identified by '123456';
    Query OK, 0 rows affected, 1 warning (0.00 sec)
    
    mysql> flush privileges;
    Query OK, 0 rows affected (0.10 sec)
    
    mysql> show master status;
    +---------------------+----------+--------------+------------------+-------------------+
    | File                | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
    +---------------------+----------+--------------+------------------+-------------------+
    | mysql-binlog.000002 |      864 |              |                  |                   |
    +---------------------+----------+--------------+------------------+-------------------+
    
    Since this is a test environment with no pre-existing data, I skip backing up the existing data.
    
    
    node1,node2:
    mysql> change master to
        -> master_host='192.168.111.3',
        -> master_user='myslave',
        -> master_password='123456',
        -> master_log_file='mysql-binlog.000002',
        -> master_log_pos=864;
    Query OK, 0 rows affected, 2 warnings (0.01 sec)
    mysql> start slave;
    Query OK, 0 rows affected (0.00 sec)
    mysql> show slave status\G
                 Slave_IO_Running: Yes
                Slave_SQL_Running: Yes
    
    Then create a database and a table on the master and verify on the slaves that they replicate. Not shown here.
    

    Deploying semi-synchronous replication

    [root@master plugin]# ll -lh semisync_*
    -rwxr-xr-x 1 mysql mysql 692K Oct  4 2018 semisync_master.so
    -rwxr-xr-x 1 mysql mysql 149K Oct  4 2018 semisync_slave.so
    #these are the semi-synchronous replication plugins
    
    [root@master plugin]# mysql -u root -p123456
    
    #install on all nodes as follows
    mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so';
    
    mysql> INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
    #install the plugins
    mysql> SELECT PLUGIN_NAME, PLUGIN_STATUS FROM INFORMATION_SCHEMA.PLUGINS  WHERE PLUGIN_NAME LIKE '%semi%';
    +----------------------+---------------+
    | PLUGIN_NAME          | PLUGIN_STATUS |
    +----------------------+---------------+
    | rpl_semi_sync_slave  | ACTIVE        |
    | rpl_semi_sync_master | ACTIVE        |
    +----------------------+---------------+
    
    set global rpl_semi_sync_master_enabled=on ;
    #enable the master plugin on the master
    
    set global rpl_semi_sync_slave_enabled=on ;
    #enable the slave plugin on node1
    
    mysql> set global rpl_semi_sync_slave_enabled=on ;
    mysql> set global rpl_semi_sync_master_enabled=on ;
    #enable both on node2, since it is also the standby master
    

    These settings are lost when mysqld restarts; add them to the configuration file to make them persistent.

    master:
    vim /etc/my.cnf
    plugin-load="rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so"
    #a second plugin-load line would override the first, so both plugins must be loaded in one option
    rpl_semi_sync_master_enabled=on
    
    node1:
    vim /etc/my.cnf
    plugin-load="rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so"
    rpl_semi_sync_slave_enabled=on
    
    node2:
    vim /etc/my.cnf
    plugin-load="rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so"
    rpl_semi_sync_slave_enabled=on
    rpl_semi_sync_master_enabled=on
    
    • Testing semi-sync
    mysql> create database qiao;
    Query OK, 1 row affected (0.50 sec)
    
    mysql> SHOW GLOBAL STATUS LIKE '%semi%';
    +--------------------------------------------+--------+
    | Variable_name                              | Value  |
    +--------------------------------------------+--------+
    | Rpl_semi_sync_master_yes_tx                | 1      |
    
    #run a few test operations and this counter will grow; then check on the slaves that the data replicated
    

    Configuring MHA

    • Set both slaves read-only. This is done manually rather than in the configuration file because node2 is the standby master and may be promoted at any time; for node1 either way works.
    node2, same as node1:
    mysql> set global read_only=1;
    #this only restricts ordinary users, not root or other accounts with the SUPER privilege; to block all users use "flush tables with read lock", after which nobody can write at all
    
    • Create the new monitoring user
    master:
    mysql> grant all privileges on *.* to 'root'@'192.168.111.%' identified by '123456';
    mysql> flush privileges;
    
    On the other nodes, verify:
    mysql> show grants for root@'192.168.111.%';
    +-------------------------------------------------------+
    | Grants for root@192.168.111.%                         |
    +-------------------------------------------------------+
    | GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.111.%' |
    +-------------------------------------------------------+
    #the grant changed data in the database, so seeing it here also confirms replication is working
    
    • Configuration on the manager
    [root@node1 ~]# vim /etc/masterha/app1.cnf 
    [server default]
    manager_workdir=/var/log/masterha/app1
    #absolute path for the state files the MHA manager generates; defaults to /var/tmp if unset
    
    manager_log=/var/log/masterha/app1/manager.log
    #absolute path of the manager log; if unset, the manager prints to stdout/stderr, e.g. the messages produced while it performs a failover
    
    master_binlog_dir=/usr/local/mysql/data
    #absolute path of the binlogs on the master. It is used when the master is dead but still reachable over SSH, to read and copy the needed binlog events from this directory. The parameter is required because once mysqld on the master is down, the binlog directory cannot be discovered automatically. By default master_binlog_dir is /var/lib/mysql,/var/log/mysql: /var/lib/mysql is the default binlog directory for most MySQL builds, /var/log/mysql for Ubuntu packages. Multiple values may be given, separated by commas.
    
    master_ip_failover_script=/usr/local/bin/master_ip_failover
    #a user-supplied script that transparently reconnects the application to the new master, used here to move the VIP during failover
    
    password=123456
    user=root
    #the administrative account (and its password) on the managed MySQL instances; preferably root, because the management statements (stop slave, change master, reset slave) need it; default root
    
    ping_interval=1
    #how often (in seconds) the manager pings the master (by executing a select ping SQL statement). After three consecutive lost pings the manager declares the master dead, so a failure is detected within at most four ping intervals; default 3
    
    remote_workdir=/usr/local/mysql/data
    #working directory on each MHA node (the MySQL servers) for generated log files; absolute path; created automatically if missing, and MHA aborts if it cannot access it. Also make sure the filesystem under this path has enough free disk space; default /var/tmp
    
    repl_password=123456
    repl_user=myslave
    #the replication user and password used when running change master on the slaves; this user should hold the REPLICATION SLAVE privilege on the master
    
    [server1]
    hostname=master
    #hostname or IP address
    port=3306
    #MySQL port
    
    [server2]
    hostname=node2
    candidate_master=1
    #promote a more reliable machine among the slaves to be the new master (for example, a RAID 10 slave is more reliable than a RAID 0 one). Adding candidate_master=1 under a slave's section raises that slave's priority for promotion. The slave must have binlog enabled and no significant replication lag; if either condition fails it will not become the new master even with this flag. In other words this is only a priority hint, not a guarantee.
    port=3306
    check_repl_delay=0
    #by default MHA will not elect a slave that is more than 100M of relay logs behind the master, because applying them lengthens recovery. check_repl_delay=0 makes MHA ignore replication delay when electing the new master; use it together with candidate_master=1 when you explicitly want this slave promoted.
    
    [server3]
    hostname=node1
    port=3306
    
    
    On node2:
    [root@node2 ~]# vim /etc/my.cnf
    log-bin=mysql-binlog
    #enable binary logging, then restart the service
    
    • The failover script on the manager
    [root@node1 ~]# vim /usr/local/bin/masterha_ip_failover
    #!/usr/bin/env perl

    use strict;
    use warnings FATAL => 'all';
    use Getopt::Long;

    my (
        $command, $ssh_user, $orig_master_host, $orig_master_ip,
        $orig_master_port, $new_master_host, $new_master_ip, $new_master_port,
    );

    my $vip = '192.168.111.100';
    my $key = "1";
    my $ssh_start_vip = "/sbin/ifconfig ens32:$key $vip";
    my $ssh_stop_vip  = "/sbin/ifconfig ens32:$key down";
    $ssh_user = "root";

    GetOptions(
        'command=s'          => \$command,
        'ssh_user=s'         => \$ssh_user,
        'orig_master_host=s' => \$orig_master_host,
        'orig_master_ip=s'   => \$orig_master_ip,
        'orig_master_port=i' => \$orig_master_port,
        'new_master_host=s'  => \$new_master_host,
        'new_master_ip=s'    => \$new_master_ip,
        'new_master_port=i'  => \$new_master_port,
    );

    exit &main();

    sub main {
        print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
        if ( $command eq "stop" || $command eq "stopssh" ) {
            # $orig_master_host, $orig_master_ip, $orig_master_port are passed.
            # If you manage the master IP address in a global catalog database,
            # invalidate orig_master_ip here.
            my $exit_code = 1;
            #eval {
            #    print "Disabling the VIP on old master: $orig_master_host\n";
            #    &stop_vip();
            #    $exit_code = 0;
            #};
            eval {
                print "Disabling the VIP on old master: $orig_master_host\n";
                #my $ping=`ping -c 1 10.0.0.13 | grep "packet loss" | awk -F',' '{print \$3}' | awk '{print \$1}'`;
                #if ( $ping le "90.0%" && $ping gt "0.0%" ) {
                #    $exit_code = 0;
                #}
                #else {
                &stop_vip();
                # updating global catalog, etc
                $exit_code = 0;
                #}
            };
            if ($@) {
                warn "Got Error: $@\n";
                exit $exit_code;
            }
            exit $exit_code;
        }
        elsif ( $command eq "start" ) {
            # All arguments are passed.
            # If you manage the master IP address in a global catalog database,
            # activate new_master_ip here.
            # You can also grant write access (create user, set read_only=0, etc.) here.
            my $exit_code = 10;
            eval {
                print "Enabling the VIP - $vip on the new master - $new_master_host\n";
                &start_vip();
                $exit_code = 0;
            };
            if ($@) {
                warn $@;
                exit $exit_code;
            }
            exit $exit_code;
        }
        elsif ( $command eq "status" ) {
            print "Checking the Status of the script.. OK\n";
            `ssh $ssh_user\@$orig_master_ip " $ssh_start_vip "`;
            exit 0;
        }
        else {
            &usage();
            exit 1;
        }
    }

    # A simple system call that enables the VIP on the new master
    sub start_vip() {
        `ssh $ssh_user\@$new_master_host " $ssh_start_vip "`;
    }

    # A simple system call that disables the VIP on the old master
    sub stop_vip() {
        `ssh $ssh_user\@$orig_master_host " $ssh_stop_vip "`;
    }

    sub usage {
        print "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
    }
    #in this script only the VIP and the network interface name need to be adjusted
    [root@node1 ~]# chmod +x /usr/local/bin/masterha_ip_failover
    
    • Configuring relay-log retention

    By default in MySQL master-slave replication, a slave automatically deletes a relay log once the SQL thread has finished executing it. In an MHA setup, however, recovering a lagging slave may depend on another slave's relay logs, so we disable automatic purging and instead clean up periodically from a scheduled job.

    The scheduled job below removes relay logs via hard links. In a filesystem, a file without extra hard links corresponds to one inode, and deleting it frees all of its data blocks; a hard link simply adds another name pointing at the same inode, so a delete operation only drops that one reference. The same trick is commonly used when deleting very large files, such as big database tables.
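    The inode behaviour described above can be verified with a quick shell sketch (temporary paths, for illustration only):

    ```shell
    # Two names, one inode: removing one name does not free the data blocks.
    tmpdir=$(mktemp -d)
    echo "relay data" > "$tmpdir/relay-log-bin.000001"
    ln "$tmpdir/relay-log-bin.000001" "$tmpdir/relay-log-bin.000001.hl"
    stat -c %h "$tmpdir/relay-log-bin.000001"    # prints 2: the hard-link count
    rm "$tmpdir/relay-log-bin.000001"            # deletes only this directory entry
    cat "$tmpdir/relay-log-bin.000001.hl"        # prints: relay data
    rm -r "$tmpdir"
    ```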

    Blocking relay-log purging has drawbacks of its own, though; see the article on that topic.

    For the usage details of the purge_relay_logs tool used in the script below, see its documentation.

    mysql> set global relay_log_purge=0;
    #same on both slave nodes
    
    Create the script and add a cron job (same on both slaves)
    [root@node1 ~]# vim /opt/purge_relay_log.sh
    #!/bin/bash
    user=root
    passwd=123456
    #the monitoring account and password created earlier
    port=3306
    #port
    log_dir='/usr/local/mysql/data'
    #directory containing the relay logs
    work_dir='/tmp'
    #where to create the relay-log hard links; the default is /var/tmp. Hard links cannot cross filesystems, so it is best to set the location explicitly. Once the script succeeds, the hard-linked relay log files are removed
    purge='/usr/local/bin/purge_relay_logs'
    if [ ! -d $log_dir ]
    then
    mkdir $log_dir -p
    fi
    $purge --user=$user --password=$passwd --disable_relay_log_purge --port=$port --workdir=$work_dir >> $log_dir/purge_relay_logs.log 2>&1
    
    [root@node1 ~]# chmod +x /opt/purge_relay_log.sh 
    
    [root@node1 ~]# crontab -e
    0 4 * * * /bin/bash /opt/purge_relay_log.sh
    
    Run it once by hand
    [root@node1 data]# purge_relay_logs --user=root --password=123456 --disable_relay_log_purge --port=3306 --workdir=/tmp
    2019-04-27 23:50:55: purge_relay_logs script started.
     Found relay_log.info: /usr/local/mysql/data/relay-log.info
     Removing hard linked relay log files relay-log-bin* under /tmp.. done.
     Current relay log file: /usr/local/mysql/data/relay-log-bin.000007
     Archiving unused relay log files (up to /usr/local/mysql/data/relay-log-bin.000006) ...
     Creating hard link for /usr/local/mysql/data/relay-log-bin.000006 under /tmp/relay-log-bin.000006 .. ok.
     Creating hard links for unused relay log files completed.
     Executing SET GLOBAL relay_log_purge=1; FLUSH LOGS; sleeping a few seconds so that SQL thread can delete older relay log files (if it keeps up); SET GLOBAL relay_log_purge=0; .. ok.
     Removing hard linked relay log files relay-log-bin* under /tmp.. done.
    2019-04-27 23:50:58: All relay log purging operations succeeded.
    
    
    • Test MHA SSH connectivity on the manager; the output below is what a healthy run looks like
    [root@node1 data]# masterha_check_ssh --conf=/etc/masterha/app1.cnf
    Sat Apr 27 23:55:13 2019 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
    Sat Apr 27 23:55:13 2019 - [info] Reading application default configurations from /etc/masterha/app1.cnf..
    Sat Apr 27 23:55:13 2019 - [info] Reading server configurations from /etc/masterha/app1.cnf..
    Sat Apr 27 23:55:13 2019 - [info] Starting SSH connection tests..
    Sat Apr 27 23:55:15 2019 - [debug] 
    Sat Apr 27 23:55:14 2019 - [debug]  Connecting via SSH from root@node1(192.168.111.4:22) to root@master(192.168.111.3:22)..
    Sat Apr 27 23:55:15 2019 - [debug]   ok.
    Sat Apr 27 23:55:15 2019 - [debug]  Connecting via SSH from root@node1(192.168.111.4:22) to root@node2(192.168.111.5:22)..
    Sat Apr 27 23:55:15 2019 - [debug]   ok.
    Sat Apr 27 23:55:15 2019 - [debug] 
    Sat Apr 27 23:55:13 2019 - [debug]  Connecting via SSH from root@node2(192.168.111.5:22) to root@master(192.168.111.3:22)..
    Sat Apr 27 23:55:14 2019 - [debug]   ok.
    Sat Apr 27 23:55:14 2019 - [debug]  Connecting via SSH from root@node2(192.168.111.5:22) to root@node1(192.168.111.4:22)..
    Sat Apr 27 23:55:14 2019 - [debug]   ok.
    Sat Apr 27 23:55:15 2019 - [debug] 
    Sat Apr 27 23:55:13 2019 - [debug]  Connecting via SSH from root@master(192.168.111.3:22) to root@node2(192.168.111.5:22)..
    Sat Apr 27 23:55:13 2019 - [debug]   ok.
    Sat Apr 27 23:55:13 2019 - [debug]  Connecting via SSH from root@master(192.168.111.3:22) to root@node1(192.168.111.4:22)..
    Sat Apr 27 23:55:14 2019 - [debug]   ok.
    Sat Apr 27 23:55:15 2019 - [info] All SSH connection tests passed successfully.
    
    • Check the state of the whole cluster
    [root@node1 data]# masterha_check_repl --conf=/etc/masterha/app1.cnf
    Sat Apr 27 23:57:21 2019 - [error][/usr/local/share/perl5/MHA/Server.pm, ln383] node2(192.168.111.5:3306): User myslave does not exist or does not have REPLICATION SLAVE privilege! Other slaves can not start replication from this host.
    Sat Apr 27 23:57:21 2019 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln401] Error happend on checking configurations.  at /usr/local/share/perl5/MHA/ServerManager.pm line 1354.Sat Apr 27 23:57:21 2019 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln500] Error happened on monitoring servers.
    #skipping the normal output; look at the errors
    #the first message points at a replication-privilege problem; check it on 192.168.111.5
    mysql> show grants for myslave@'192.168.111.%';
    +-------------------------------------------------------------+
    | Grants for myslave@192.168.111.%                            |
    +-------------------------------------------------------------+
    | GRANT REPLICATION SLAVE ON *.* TO 'myslave'@'192.168.111.%' |
    +-------------------------------------------------------------+
    #the master has the grant but the slaves do not, because I created the replication user before enabling binary logging; replication only started from the "show master status" position recorded after that
    mysql> grant replication slave on *.* to myslave@'192.168.111.%' identified by '123456';
    Query OK, 0 rows affected, 1 warning (0.00 sec)
    
    mysql> flush privileges;
    Query OK, 0 rows affected (0.01 sec)
    #grant again on the master; since replication is now running, the two slaves receive it as well
    mysql> show grants for myslave@'192.168.111.%';
    +-------------------------------------------------------------+
    | Grants for myslave@192.168.111.%                            |
    +-------------------------------------------------------------+
    | GRANT REPLICATION SLAVE ON *.* TO 'myslave'@'192.168.111.%' |
    +-------------------------------------------------------------+
    
    Run it again
    [root@node1 data]# masterha_check_repl --conf=/etc/masterha/app1.cnf
    Still failing; keep going
    [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln401] Error happend on checking configurations. Can't exec "/usr/local/bin/master_ip_failover": No such file or directory at /usr/local/share/perl5/MHA/ManagerUtil.pm line 68.
    #the failover script seems to be in the wrong place
    [root@node1 data]# ll /usr/local/bin/masterha_ip_failover 
    #the file name differs from the one in the configuration; rename it
    [root@node1 data]# mv /usr/local/bin/masterha_ip_failover /usr/local/bin/master_ip_failover
    
    Retry
    [root@node1 data]# masterha_check_repl --conf=/etc/masterha/app1.cnf
    MySQL Replication Health is OK
    This message at the bottom only means things look fine for now; carry on
    
    • VIP configuration and management

    There are two ways to manage VIP migration: via keepalived or heartbeat, or via a command script.

    This lab uses the script method.

    [root@node1 data]# masterha_check_status --conf=/etc/masterha/app1.cnf
    app1 is stopped(2:NOT_RUNNING).
    #if the manager is running normally this shows "PING_OK"; "NOT_RUNNING" means MHA monitoring is not started. Everything so far has been manual pre-testing, without turning the monitor on
    
    
    [root@node1 data]# nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover > /var/log/masterha/app1/manager.log 2>&1 &
    [1] 67827
    #--remove_dead_master_conf: when set, after a successful failover the MHA manager automatically removes the dead master's section from the configuration file.
    #--ignore_last_failover: by default MHA refuses to fail over again if less than 8 hours have passed since the previous failover, to avoid ping-pong switching. After a failover the manager writes app1.failover.complete in the working directory, and a later switchover is blocked while that file exists unless it is deleted first; this flag tells MHA to ignore the file. Set here for convenience.
    
    [root@node1 data]# masterha_check_status --conf=/etc/masterha/app1.cnf
    app1 (pid:67827) is running(0:PING_OK), master:master
    #check the status again
    

    Testing MHA

    The VIP is currently visible on master. Now test the switchover: stop MySQL on master with /etc/init.d/mysqld stop

    node2 is the standby master; check whether the VIP moved there

    [root@node2 ~]# ip a | grep ens32
    2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        inet 192.168.111.5/24 brd 192.168.111.255 scope global noprefixroute ens32
        inet 192.168.111.100/24 brd 192.168.111.255 scope global secondary ens32:1
    #migration succeeded
    

    node1 is a slave; check its replication status

    [root@node1 ~]# mysql -uroot -p123456
    
    mysql> show slave status\G
    *************************** 1. row ***************************
                   Slave_IO_State: Waiting for master to send event
                      Master_Host: 192.168.111.5
                      Master_User: myslave
                      Master_Port: 3306
                    Connect_Retry: 60
                  Master_Log_File: mysql-binlog.000004
              Read_Master_Log_Pos: 154
                   Relay_Log_File: relay-log-bin.000002
                    Relay_Log_Pos: 323
            Relay_Master_Log_File: mysql-binlog.000004
                 Slave_IO_Running: Yes
                Slave_SQL_Running: Yes
    #the configured master has changed to 192.168.111.5
    
    [root@node1 ~]# jobs -l
    [2]+ 73464 Stopped                  vim /usr/local/bin/master_ip_failover
    #the manager has completed the switchover, so its job is done and it has exited; it can be started again
    
    [root@node1 ~]# vim /etc/masterha/app1.cnf 
    
    [server default]
    manager_log=/var/log/masterha/app1/manager.log
    manager_workdir=/var/log/masterha/app1
    master_binlog_dir=/usr/local/mysql/data
    master_ip_failover_script=/usr/local/bin/master_ip_failover
    password=123456
    ping_interval=1
    remote_workdir=/usr/local/mysql/data
    repl_password=123456
    repl_user=myslave
    user=root
    
    [server2]
    candidate_master=1
    check_repl_delay=0
    hostname=node2
    port=3306
    
    [server3]
    hostname=node1
    port=3306
    #server1 has been removed from the configuration file because of the failure
    
    • Moving the VIP back
    [root@master ~]# /etc/init.d/mysqld start
    #repair master and start MySQL again
    
    [root@master ~]# mysql -uroot -p123456
    
    mysql> stop slave;
    
    mysql> change master to
        -> master_host='192.168.111.5',
        -> master_user='myslave',
        -> master_password='123456';
    #point at the new master's IP; I do not specify the binlog file name and position here.
    
    mysql> start slave;
    
    mysql> show slave status\G
    *************************** 1. row ***************************
                 Slave_IO_Running: Yes
                Slave_SQL_Running: Yes
                
                
    On the manager:
    [root@node1 ~]# vim /etc/masterha/app1.cnf 
    #add the following
    [server1]
    hostname=master
    port=3306
    
    [root@node1 ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
    #check the cluster again
    MySQL Replication Health is OK.
    
    [root@node1 ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover > /var/log/masterha/app1/manager.log 2>&1 &
    [3] 75013
    [root@node1 ~]# jobs -l
    [3]- 75013 Running               nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover > /var/log/masterha/app1/manager.log 2>&1 &
    
    On node2: stop the service and test that the VIP fails back automatically
    
    [root@node2 ~]# /etc/init.d/mysqld stop
    Shutting down MySQL............ SUCCESS! 
    [root@node2 ~]# ip a | grep ens32
    2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        inet 192.168.111.5/24 brd 192.168.111.255 scope global noprefixroute ens32
    
    On master, check:
    [root@master ~]# ip a| grep ens32
    2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        inet 192.168.111.3/24 brd 192.168.111.255 scope global noprefixroute ens32
        inet 192.168.111.100/24 brd 192.168.111.255 scope global secondary ens32:1
    
    On node1 (slave), check replication: the configured master IP has switched back automatically
    mysql> show slave status\G
    *************************** 1. row ***************************
                   Slave_IO_State: Waiting for master to send event
                      Master_Host: 192.168.111.3
                      Master_User: myslave
                      Master_Port: 3306
                    Connect_Retry: 60
                  Master_Log_File: mysql-binlog.000004
              Read_Master_Log_Pos: 398
                   Relay_Log_File: relay-log-bin.000002
                    Relay_Log_Pos: 323
            Relay_Master_Log_File: mysql-binlog.000004
                 Slave_IO_Running: Yes
                Slave_SQL_Running: Yes
    
    

    Then, following the same steps and configuration as before, add node2 back into MHA.

    If an error like this appears:
    mysql> stop slave;
    Query OK, 0 rows affected (0.01 sec)
    
    mysql> change master to master_host='192.168.111.3', master_user='myslave', master_password='123456';
    Query OK, 0 rows affected, 2 warnings (0.11 sec)
    
    mysql> start slave;
    Query OK, 0 rows affected (0.00 sec)
    
    mysql> show slave status\G
    *************************** 1. row ***************************
                   Slave_IO_State: Waiting for master to send event
                      Master_Host: 192.168.111.3
                      Master_User: myslave
                      Master_Port: 3306
                    Connect_Retry: 60
                  Master_Log_File: mysql-binlog.000004
              Read_Master_Log_Pos: 842
                   Relay_Log_File: relay-log-bin.000003
                    Relay_Log_Pos: 1392
            Relay_Master_Log_File: mysql-binlog.000002
                 Slave_IO_Running: Yes
                Slave_SQL_Running: No
                  Replicate_Do_DB: 
              Replicate_Ignore_DB: 
               Replicate_Do_Table: 
           Replicate_Ignore_Table: 
          Replicate_Wild_Do_Table: 
      Replicate_Wild_Ignore_Table: 
                       Last_Errno: 1007
                       Last_Error: Error 'Can't create database 'qiao'; database exists' on query. Default database: 'qiao'. Query: 'create database qiao'
                     Skip_Counter: 0
              Exec_Master_Log_Pos: 1173
                  Relay_Log_Space: 4470
                  Until_Condition: None
                   Until_Log_File: 
                    Until_Log_Pos: 0
               Master_SSL_Allowed: No
               Master_SSL_CA_File: 
               Master_SSL_CA_Path: 
                  Master_SSL_Cert: 
                Master_SSL_Cipher: 
                   Master_SSL_Key: 
            Seconds_Behind_Master: NULL
    Master_SSL_Verify_Server_Cert: No
                    Last_IO_Errno: 0
                    Last_IO_Error: 
                   Last_SQL_Errno: 1007
                    Last_SQL_Error: Error 'Can't create database 'qiao'; database exists' on query. Default database: 'qiao'. Query: 'create database qiao'
    
    Solution: skip the single duplicated event and restart the slave. Note that sql_slave_skip_counter only works with file/position-based replication; with GTID auto-positioning it is ignored, and the fix there is to commit an empty transaction under the offending GTID instead.
    mysql> stop slave;
    Query OK, 0 rows affected (0.00 sec)
    
    mysql> set global sql_slave_skip_counter=1;
    Query OK, 0 rows affected (0.00 sec)
    
    mysql> start slave;
    Query OK, 0 rows affected (0.00 sec)
    
    mysql> show slave status\G
    *************************** 1. row ***************************
                   Slave_IO_State: Waiting for master to send event
                      Master_Host: 192.168.111.3
                      Master_User: myslave
                      Master_Port: 3306
                    Connect_Retry: 60
                  Master_Log_File: mysql-binlog.000004
              Read_Master_Log_Pos: 842
                   Relay_Log_File: relay-log-bin.000006
                    Relay_Log_Pos: 323
            Relay_Master_Log_File: mysql-binlog.000004
                 Slave_IO_Running: Yes
                Slave_SQL_Running: Yes
                  Replicate_Do_DB: 
              Replicate_Ignore_DB: 
               Replicate_Do_Table: 
           Replicate_Ignore_Table: 
          Replicate_Wild_Do_Table: 
      Replicate_Wild_Ignore_Table: 
                       Last_Errno: 0
                       Last_Error: 
                     Skip_Counter: 0
              Exec_Master_Log_Pos: 842
                  Relay_Log_Space: 1191
                  Until_Condition: None
                   Until_Log_File: 
                    Until_Log_Pos: 0
               Master_SSL_Allowed: No
               Master_SSL_CA_File: 
               Master_SSL_CA_Path: 
                  Master_SSL_Cert: 
                Master_SSL_Cipher: 
                   Master_SSL_Key: 
            Seconds_Behind_Master: 0
    Master_SSL_Verify_Server_Cert: No
                    Last_IO_Errno: 0
                    Last_IO_Error: 
                   Last_SQL_Errno: 0
                   Last_SQL_Error: 
      Replicate_Ignore_Server_Ids: 
                 Master_Server_Id: 1
                      Master_UUID: 9da60612-7a17-11e9-b288-000c2935c4a6
                 Master_Info_File: /usr/local/mysql/data/master.info
                        SQL_Delay: 0
              SQL_Remaining_Delay: NULL
          Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
               Master_Retry_Count: 86400
                      Master_Bind: 
          Last_IO_Error_Timestamp: 
         Last_SQL_Error_Timestamp: 
                   Master_SSL_Crl: 
               Master_SSL_Crlpath: 
               Retrieved_Gtid_Set: 
                Executed_Gtid_Set: 
                    Auto_Position: 0
             Replicate_Rewrite_DB: 
                     Channel_Name: 
               Master_TLS_Version: 
    1 row in set (0.00 sec)
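    Once both threads report Yes/Yes again, replication health can be spot-checked from the shell. The following is a minimal sketch; the `check_slave` helper and its parsing approach are my own, not part of MHA or MySQL:

```shell
#!/bin/sh
# check_slave: parse the output of `show slave status\G` (passed in as a
# string) and report whether both replication threads are running.
check_slave() {
    status="$1"
    io=$(printf '%s\n' "$status" | awk -F': ' '/Slave_IO_Running:/  {print $2}')
    sql=$(printf '%s\n' "$status" | awk -F': ' '/Slave_SQL_Running:/ {print $2}')
    if [ "$io" = "Yes" ] && [ "$sql" = "Yes" ]; then
        echo "replication OK"
    else
        echo "replication BROKEN (IO=$io SQL=$sql)"
    fi
}

# Typical invocation on a slave (credentials as used elsewhere in this article):
#   check_slave "$(mysql -uroot -p123456 -e 'show slave status\G')"
```

    The regex `/Slave_SQL_Running:/` deliberately includes the colon so it does not also match the `Slave_SQL_Running_State` line.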
    
    

    Deploy lvs+keepalived (lvs1, lvs2)

    Install the required packages on both machines first:
    [root@lvs2 ~]# yum -y install ipvsadm kernel-devel openssl-devel keepalived
    
    [root@lvs2 ~]# vim /etc/keepalived/keepalived.conf 
    #The keepalived/LVS directives are not explained line by line here; see the keepalived documentation for details
    ! Configuration File for keepalived
    
    global_defs {
       notification_email {
         acassen@firewall.loc
         failover@firewall.loc
         sysadmin@firewall.loc
       }
       notification_email_from Alexandre.Cassen@firewall.loc
       smtp_server 192.168.200.1
       smtp_connect_timeout 30
       router_id LVS_DEVEL
       vrrp_skip_check_adv_addr
       vrrp_garp_interval 0
       vrrp_gna_interval 0
    }
    
    vrrp_instance VI_1 {
        state MASTER
        interface ens32
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
    	192.168.111.200
        }
    }
    
    virtual_server 192.168.111.200 3306 {
        delay_loop 6
        lb_algo rr
        lb_kind DR
        protocol TCP
    
        real_server 192.168.111.4 3306 {
            weight 1
    	TCP_CHECK {
    	  connect_timeout 10
    	  nb_get_retry 3
    	  delay_before_retry 3
    	  connect_port 3306
    
    	}
        }
        real_server 192.168.111.5 3306 {
            weight 1
    	TCP_CHECK {
    	  connect_timeout 10
    	  nb_get_retry 3
    	  delay_before_retry 3
    	  connect_port 3306
    
    	}
            }
        }
    
    [root@lvs2 ~]# scp /etc/keepalived/keepalived.conf root@lvs1:/etc/keepalived/
    #Copy the config to the other machine
    On lvs1 change the following (the numbers refer to lines in keepalived.conf):
    [root@lvs1 ~]# vim /etc/keepalived/keepalived.conf 
    12    router_id LVS_DEVEL1
    20     state BACKUP
            priority 90
    
    [root@lvs2 ~]# systemctl start keepalived.service 
    #Start the service on both machines
    
    [root@lvs2 ~]# ip a | grep ens32
    2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        inet 192.168.111.7/24 brd 192.168.111.255 scope global noprefixroute ens32
        inet 192.168.111.200/32 scope global ens32
    
    [root@lvs2 ~]# ipvsadm -ln
    #Check the LVS forwarding table
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  192.168.111.200:3306 rr
      -> 192.168.111.4:3306           Route   1      0          0         
      -> 192.168.111.5:3306           Route   1      0          0  
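    With the VIP up and both real servers listed, the rr scheduler can be smoke-tested by opening several connections through the VIP and seeing which backend answers each one. A sketch, assuming the mysql client and the root/123456 account used elsewhere in this article; with round-robin scheduling, `@@hostname` should alternate between node1 and node2:

```shell
#!/bin/sh
# Open four separate connections through the keepalived VIP; with rr
# scheduling, consecutive connections should land on alternating backends.
VIP=192.168.111.200
if command -v mysql >/dev/null 2>&1; then
    for i in 1 2 3 4; do
        # `|| true` keeps the loop going if one connection attempt fails
        mysql -uroot -p123456 -h "$VIP" -N -e 'select @@hostname, @@server_id;' || true
    done
else
    echo "mysql client not found; run this from a machine with the client installed"
fi
```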
    
    
    • In DR mode every real server node must also hold the VIP, bound to its loopback interface
    node1:
    [root@node1 ~]# vim /opt/realserver.sh
    
    #!/bin/bash
    SNS_VIP=192.168.111.200
    ifconfig lo:0 $SNS_VIP netmask 255.255.255.255 broadcast $SNS_VIP
    /sbin/route add -host $SNS_VIP dev lo:0
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore 
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce 
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore 
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce 
    sysctl -p >/dev/null 2>&1
    
    node2:
    [root@node2 ~]# vim /opt/realserver.sh
    
    #!/bin/bash
    SNS_VIP=192.168.111.200
    ifconfig lo:0 $SNS_VIP netmask 255.255.255.255 broadcast $SNS_VIP
    /sbin/route add -host $SNS_VIP dev lo:0
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore 
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce 
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore 
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce 
    sysctl -p >/dev/null 2>&1
    
    
    [root@node1 ~]# sh /opt/realserver.sh
    [root@node2 ~]# sh /opt/realserver.sh
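    realserver.sh binds the VIP to lo and sets the four ARP sysctls that stop the real servers from answering ARP for the VIP themselves. A small sketch to verify the kernel actually took the values; the `show_arp_conf` helper is my own, and the expected output after running realserver.sh is arp_ignore=1 and arp_announce=2 on both `lo` and `all`:

```shell
#!/bin/sh
# Print the DR-mode ARP settings that realserver.sh is supposed to apply.
show_arp_conf() {
    for key in lo/arp_ignore lo/arp_announce all/arp_ignore all/arp_announce; do
        f="/proc/sys/net/ipv4/conf/$key"
        if [ -r "$f" ]; then
            echo "$key=$(cat "$f")"
        else
            echo "$key=unavailable"   # e.g. not running on Linux
        fi
    done
}
show_arp_conf
```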
    
    #Test by connecting to the VIP from the manager/node1 machine
    
    [root@node1 ~]# mysql -uroot -p123456 -h192.168.111.200
    mysql: [Warning] Using a password on the command line interface can be insecure.
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 749
    Server version: 5.7.24 MySQL Community Server (GPL)
    
    Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
    
    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    mysql> 
    
    

    IV. Summary

    1. MHA still leaves a lot of the configuration to be done by hand, which makes mistakes easy
    2. Weigh it carefully before adopting it in a large architecture
    3. There are many possible extensions not shown in this case, such as read/write splitting with amoeba; apply them flexibly
  • Original article: https://www.cnblogs.com/jojoword/p/10896288.html