read_buffer_size can break your replication

    Original post: http://www.mysqlperformanceblog.com/2012/06/06/read_buffer_size-can-break-your-replication/

    There are some variables that can affect replication behavior and sometimes cause big trouble. In this post I'm going to talk about read_buffer_size and how this variable, together with max_allowed_packet, can break your replication.
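
    Before reproducing this, it is worth checking what your own servers are currently running with. A quick way to read both values (a minimal check, not part of the original post):

     mysql> SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';
     mysql> SHOW GLOBAL VARIABLES LIKE 'read_buffer_size';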

    The setup is master-master replication with the following values:

    max_allowed_packet = 32M
    read_buffer_size = 100M
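
    Both variables are dynamic, so on a test box you could also set them at runtime instead of editing my.cnf (a sketch, assuming SUPER privileges; note that only new sessions pick up the global read_buffer_size):

     mysql> SET GLOBAL max_allowed_packet = 32 * 1024 * 1024;
     mysql> SET GLOBAL read_buffer_size = 100 * 1024 * 1024;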

    To break the replication I'm going to load 4 million rows with LOAD DATA INFILE:

    MasterA (test) > LOAD DATA INFILE '/tmp/data' INTO TABLE t;
    Query OK, 4510080 rows affected (26.89 sec)
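
    The original post does not show the table definition or the data file. A hypothetical way to produce a comparable load (a single-column table and a ~4.5-million-line file; the schema is an assumption, only the table name t and the path /tmp/data come from the post):

     MasterA (test) > CREATE TABLE t (id INT);
     shell> seq 1 4510080 > /tmp/data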

    After some time, SHOW SLAVE STATUS on MasterA gives us this output:

    Last_IO_Error: Got fatal error 1236 from master when reading data from binary log:
    'log event entry exceeded max_allowed_packet; Increase max_allowed_packet on master;
    the first event 'mysql-bin.000002' at 74416925, the last event read from
    './mysql-bin.000004' at 171, the last byte read from './mysql-bin.000004' at 190.'

    Very strange! We loaded the data on MasterA, and now its replication is broken with an error about max_allowed_packet. The next step is to check the binary logs of both masters.

    MasterA:

     masterA> mysqlbinlog data/mysql-bin.000004 | grep block_len
     #Begin_load_query: file_id: 1 block_len: 33554432
     #Append_block: file_id: 1 block_len: 33554432
     #Append_block: file_id: 1 block_len: 33554432
     #Append_block: file_id: 1 block_len: 4194304
     #Append_block: file_id: 1 block_len: 33554432
     #Append_block: file_id: 1 block_len: 10420608

    No block is larger than max_allowed_packet (33554432). Note the pattern: the first four blocks add up to exactly 100M (3 × 33554432 + 4194304 = 104857600), so each full read of the 100M buffer is split into max_allowed_packet-sized chunks plus a remainder.

    MasterB:

     masterB> mysqlbinlog data/mysql-bin.000004 | grep block_len
     #Begin_load_query: file_id: 1 block_len: 33555308
     #Append_block: file_id: 1 block_len: 33555308
     #Append_block: file_id: 1 block_len: 33555308
     #Append_block: file_id: 1 block_len: 4191676
     #Append_block: file_id: 1 block_len: 33555308
     #Append_block: file_id: 1 block_len: 10419732

    Do you see the difference? 33555308 is larger than the max_allowed_packet size 33554432, so MasterB has written some blocks 876 bytes larger than the safe value. Then MasterA tries to read the binary log back from MasterB and the replication breaks because the packets are too large. And no, replicate_same_server_id is not enabled :) Events carrying MasterA's own server_id would be skipped, but they still have to be read from MasterB's binary log first, and that read is what fails.
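
    A quick way to scan a binary log for oversized blocks (a small sketch using awk on the same mysqlbinlog output; 33554432 is this server's max_allowed_packet, adjust to yours):

     masterB> mysqlbinlog data/mysql-bin.000004 | awk '/block_len/ && $NF > 33554432 { print "oversized:", $0 }'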

    What is the relation between read_buffer_size and this bug?

    Again, an example is better than words. These are the new values:

    max_allowed_packet = 32M
    read_buffer_size = 16M

    We run the LOAD DATA INFILE again, and now this is the output on both servers:

     #Begin_load_query: file_id: 1 block_len: 16777216
     #Append_block: file_id: 1 block_len: 16777216
     #Append_block: file_id: 1 block_len: 16777216
     #Append_block: file_id: 1 block_len: 16777216
     #Append_block: file_id: 1 block_len: 16777216
     #Append_block: file_id: 1 block_len: 16777216
     #Append_block: file_id: 1 block_len: 16777216
     #Append_block: file_id: 1 block_len: 16777216
     #Append_block: file_id: 1 block_len: 14614912
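
    Both runs transfer the same total amount of data, just chunked differently: 8 × 16777216 + 14614912 = 148832640 bytes, the same total as in the first run. You can verify that with a one-line sum over the same output (a sketch):

     shell> mysqlbinlog data/mysql-bin.000004 | awk '/block_len/ { s += $NF } END { print s }'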

    The maximum size of the data blocks is based on the read_buffer_size value, so they are never larger than max_allowed_packet :)

    Therefore, a read_buffer_size value larger than max_allowed_packet can break your replication while importing data into MySQL. This bug affects versions from 5.0.x up to the latest 5.5.25 release, and the easiest workaround is to not have a read_buffer_size larger than max_allowed_packet. Bug 30435 does not seem to be really solved.
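
    In my.cnf terms, the workaround looks like this (the specific sizes are just an example; the rule is the comment):

    [mysqld]
    max_allowed_packet = 32M
    read_buffer_size   = 16M   # keep this <= max_allowed_packet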

    And remember, big values of read_buffer_size will not increase your performance.
