  • How to calculate a good InnoDB log file size

    Original post: http://www.mysqlperformanceblog.com/2008/11/21/how-to-calculate-a-good-innodb-log-file-size/

    Peter wrote a post a while ago about choosing a good InnoDB log file size.  Not to pick on Peter, but the post actually kind of talks about a lot of things and then doesn’t tell you how to choose a good log file size!  So I thought I’d clarify it a little.


    The basic point is that your log file needs to be big enough to let InnoDB optimize its I/O, but not so big that recovery takes a long time.  That much Peter covered really well.  But how do you choose that size?  I’ll show you a rule of thumb that works pretty well.


    In most cases, when people give you a formula for choosing a configuration setting, you should look at it with skepticism.  But in this case you can calculate a reasonable value, believe it or not.  Run these queries at your server’s peak usage time:


    mysql> pager grep sequence
    PAGER set to 'grep sequence'
    mysql> show engine innodb status\G select sleep(60); show engine innodb status\G
    Log sequence number 84 3836410803
    1 row in set (0.06 sec)
    
    1 row in set (1 min 0.00 sec)
    
    Log sequence number 84 3838334638
    1 row in set (0.05 sec)

    Notice the log sequence number.  That’s the total number of bytes written to the transaction log.  So, now you can see how many MB have been written to the log in one minute.  (The technique I showed here works on all versions of MySQL.  In 5.0 and newer, you can just watch Innodb_os_log_written from SHOW GLOBAL STATUS, too.)
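The same per-minute figure can be computed from any two byte counters taken 60 seconds apart, whether log sequence numbers or `Innodb_os_log_written` readings. A minimal sketch (the two sample values are the ones from the status output above, used purely for illustration):

```python
# Sketch: convert two byte-counter samples taken some seconds apart
# into MB written to the InnoDB transaction log per minute.
# Sample values are taken from the SHOW ENGINE INNODB STATUS output above.

def mb_per_min(sample_start: int, sample_end: int, seconds: int = 60) -> float:
    """Bytes-written delta converted to MB per minute."""
    return (sample_end - sample_start) / 1024 / 1024 * (60 / seconds)

start = 3836410803  # first "Log sequence number" reading
end = 3838334638    # reading one minute later
print(round(mb_per_min(start, end), 2))  # ~1.83 MB/min
```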


    mysql> select (3838334638 - 3836410803) / 1024 / 1024 as MB_per_min;
    +------------+
    | MB_per_min |
    +------------+
    | 1.83471203 | 
    +------------+

    As a rough rule of thumb, you can make the log big enough that it can hold at most an hour or so of logs.  That’s generally plenty of data for InnoDB to work with; an hour’s worth is more than enough so that it can reorder the writes to use sequential I/O during the flushing and checkpointing process.  At this rate, this server could use about 110 MB of logs, total.  Round it up to 128 for good measure.  Since there are two log files by default, divide that in half, and now you can set

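The rule of thumb above can be sketched as a small calculation. Note that the power-of-two rounding and the two-file default are assumptions chosen to match the worked example (110 MB rounded up to 128, split across two files):

```python
# Sketch of the rule of thumb: size the redo log to hold about an hour
# of writes, round up to a power of two "for good measure", and split
# the total across the files in the log group.

def log_file_size_mb(mb_per_min: float, hours: float = 1.0, n_files: int = 2) -> int:
    total_mb = mb_per_min * 60 * hours   # e.g. ~110 MB at 1.83 MB/min
    rounded = 1
    while rounded < total_mb:            # round up to a power of two
        rounded *= 2
    return rounded // n_files            # per-file size for the log group

print(log_file_size_mb(1.83))  # 64  ->  innodb_log_file_size=64M
```

At the measured 1.83 MB/min this reproduces the 64M figure from the text; a heavier workload simply feeds in a larger rate.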

    innodb_log_file_size=64M
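In my.cnf that might look like the following sketch; `innodb_log_files_in_group` is included only to make the two-file default explicit:

```ini
[mysqld]
# Two redo log files of 64 MB each: ~128 MB total, roughly an hour of
# log writes at the measured ~1.8 MB/min rate.
innodb_log_file_size      = 64M
innodb_log_files_in_group = 2    # the default
```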

    Does that look surprisingly small?  It might.  I commonly see log file sizes in the gigabyte ranges.  But that’s generally a mistake.  The server I used for the measurements above is a big one doing a lot of work, not a toy.  Log file sizes can’t be left at the default 5MB for any real workload, but they often don’t need to be as big as you might think, either.


    If this rule-of-thumb calculation ends up showing you that your log file size ought to be many gigabytes, well, you have a more active write workload.  Perhaps you’re inserting a lot of big rows or something.  In this case you might want to make the log smaller so you don’t end up with GB of logs.  But also realize this: the recovery time depends not only on the total log file size, but the number of entries in it.  If you’re writing huge entries to the log, fewer log entries will fit into a given log file size, which will generally make recovery faster than you might expect with a big log.


    However, most of the time when I run this calculation, I end up finding that the log file size needs to be a lot smaller than it’s configured to be.  In part that’s because InnoDB’s log entries are very compact.  The other reason is that the common advice to size the logs as a fraction of the buffer pool size is just wrong.


    (From http://dev.mysql.com/doc/refman/5.0/en/innodb-parameters.html#sysvar_innodb_log_file_size: The size in bytes of each log file in a log group. The combined size of log files must be less than 4GB. The default value is 5MB. Sensible values range from 1MB to 1/N-th of the size of the buffer pool, where N is the number of log files in the group. The larger the value, the less checkpoint flush activity is needed in the buffer pool, saving disk I/O. But larger log files also mean that recovery is slower in case of a crash.)
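The constraints quoted above (combined size under 4GB, per-file size between 1MB and 1/N-th of the buffer pool) can be expressed as a quick sanity check. This is a sketch of the documented 5.0 limits, not an official tool:

```python
# Sketch: sanity-check a proposed redo log configuration against the
# limits quoted from the MySQL 5.0 manual above.

def log_config_ok(file_size_mb: int, n_files: int, buffer_pool_mb: int) -> bool:
    combined_ok = file_size_mb * n_files < 4 * 1024           # combined < 4GB
    sensible = 1 <= file_size_mb <= buffer_pool_mb / n_files  # 1MB .. BP/N
    return combined_ok and sensible

print(log_config_ok(64, 2, 4096))    # True: 128 MB total, well within limits
print(log_config_ok(3072, 2, 8192))  # False: combined 6 GB exceeds the 4 GB cap
```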

    One final note: huge buffer pools or really unusual workloads may require bigger (or smaller!) log sizes.  This is where formulas break down and judgment and experience are needed.  But this “rule of thumb” is generally a good sane place to start.


  • Reposted from: https://www.cnblogs.com/zuoxingyu/p/2738864.html