  • Impact of region count and region size in an HBase cluster

    1. Impact of the number of regions

    Fewer regions generally keep a cluster running more smoothly; the official documentation suggests that around 100 regions per RegionServer works best, for the following reasons:
    1) MSLAB, an HBase feature that is enabled by default, helps prevent heap fragmentation and reduces Full GC problems, but it costs about 2 MB per MemStore (one write-cache MemStore per column family). So with 2 column families per region and 1,000 regions in total, roughly 3.9 GB of heap is consumed even before any data is stored (see the sketch after this list).
    2) With many regions there are also many MemStores; when their aggregate size triggers the RegionServer-level limit and forces flushes, user requests are noticeably affected and updates on that RegionServer may be blocked.
    3) The HMaster has to spend a lot of time assigning and moving regions, and too many regions also increase the load on ZooKeeper.
    4) MapReduce jobs that read from HBase create too many map tasks when there are too many regions, since by default one region maps to one map task.
    Consequently, if an HRegion contains many MemStores, most of which are written to frequently, every flush is expensive; it is therefore also advisable to keep the number of ColumnFamilies small when designing tables.
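    To make the arithmetic in point 1) concrete, here is a minimal back-of-the-envelope sketch in Java. The 2 MB figure is the default MSLAB chunk size reserved per MemStore; the region and column-family counts are the hypothetical values from the text.

    public class MemstoreOverhead {
        public static void main(String[] args) {
            // Default MSLAB chunk size pre-allocated per MemStore (2 MB).
            long mslabChunkBytes = 2L * 1024 * 1024;
            int regions = 1000;        // hypothetical region count from the text
            int familiesPerRegion = 2; // one MemStore per column family

            long overheadBytes = (long) regions * familiesPerRegion * mslabChunkBytes;
            System.out.printf("Idle MemStore overhead: %.2f GB%n",
                    overheadBytes / (1024.0 * 1024 * 1024));
            // Prints ~3.91 GB of heap consumed before a single row is stored.
        }
    }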
     
    Formula for estimating the region count of a cluster:
    ((RS Xmx) * hbase.regionserver.global.memstore.size) / (hbase.hregion.memstore.flush.size * (# column families))
    Assuming a RegionServer with 16 GB of memory, 16384 * 0.4 / 128 MB ≈ 51 active regions.
    Under a write-heavy workload, hbase.regionserver.global.memstore.size can be raised moderately to accommodate more regions.
    In summary, allocate a reasonable number of regions based on write volume, generally between 20 and 200 per RegionServer; this improves cluster stability, removes many sources of uncertainty, and improves read and write performance. Monitor whether the total size of all MemStores on a RegionServer reaches its upper bound (hbase.regionserver.global.memstore.upperLimit * hbase_heapsize, 40% of the JVM heap by default; the sketch below computes it), since exceeding it can have undesirable consequences such as an unresponsive server or a compaction storm.
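    As a rough aid for the monitoring advice above, here is a minimal sketch, assuming a 16 GB heap and the default 40% fraction, that computes the RegionServer-level MemStore ceiling beyond which forced flushes kick in.

    public class GlobalMemstoreLimit {
        public static void main(String[] args) {
            // hbase.regionserver.global.memstore.size (or .upperLimit on
            // older versions), default 40% of the JVM heap.
            double globalMemstoreFraction = 0.4;
            long heapBytes = 16L * 1024 * 1024 * 1024; // assumed -Xmx16g

            long ceilingBytes = (long) (heapBytes * globalMemstoreFraction);
            System.out.printf("RS-level MemStore ceiling: %.1f GB%n",
                    ceilingBytes / (1024.0 * 1024 * 1024));
            // If the sum of all MemStores approaches this value, the RS
            // force-flushes and may block updates, so alert well below it.
        }
    }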
     

    2. Impact of region size

    Data written to HBase first goes into a MemStore; once the MemStore reaches 128 MB (configurable), it is flushed to disk as a StoreFile. When the number of StoreFiles exceeds a configurable threshold, a compaction is started that merges them into a single StoreFile, which has some impact on cluster performance. When the merged StoreFile grows beyond hbase.hregion.max.filesize, a split is triggered that divides the region in two.
    1) When hbase.hregion.max.filesize is relatively small, splits are triggered more often and the cluster's overall service becomes unstable.
    2) When hbase.hregion.max.filesize is relatively large, a region goes a long time without splitting, so the same region undergoes more compactions; this degrades performance and stability, and average throughput drops as a result.
    In summary, hbase.hregion.max.filesize should be neither too large nor too small; in practice, under high-concurrency production load, 5-10 GB is the sweet spot. Disable major_compact on HBase tables serving critical workloads and invoke major_compact manually during off-peak hours instead (see the sketch below); this reduces splits while noticeably improving cluster performance and throughput.
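    Below is a hedged sketch of the off-peak major compaction described above, using the standard HBase 2.x client Admin API. The table name is a placeholder, and it assumes periodic major compactions were already disabled (e.g. hbase.hregion.majorcompaction set to 0 in hbase-site.xml).

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class OffPeakMajorCompact {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                // Queue a major compaction; schedule this from a cron job
                // that runs only in the low-traffic window.
                admin.majorCompact(TableName.valueOf("mytable")); // placeholder table
            }
        }
    }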
     

    169.2.2. Number of regions per RS - upper bound

    In production scenarios, where you have a lot of data, you are normally concerned with the maximum number of regions you can have per server; the discussion of region count above covers the technical background. Basically, the maximum number of regions is mostly determined by memstore memory usage. Each region has its own memstores; these grow up to a configurable size, usually in the 128-256 MB range (see hbase.hregion.memstore.flush.size). One memstore exists per column family (so there’s only one per region if there’s one CF in the table). The RS dedicates some fraction of total memory to its memstores (see hbase.regionserver.global.memstore.size). If this memory is exceeded (too much memstore usage), it can cause undesirable consequences such as an unresponsive server or compaction storms. A good starting point for the number of regions per RS (assuming one table) is:

    ((RS memory) * (total memstore fraction)) / ((memstore size)*(# column families))

    This formula is pseudo-code. Here are two formulas using the actual tunable parameters, first for HBase 0.98+ and second for HBase 0.94.x.

    HBase 0.98.x
    ((RS Xmx) * hbase.regionserver.global.memstore.size) / (hbase.hregion.memstore.flush.size * (# column families))
    HBase 0.94.x
    ((RS Xmx) * hbase.regionserver.global.memstore.upperLimit) / (hbase.hregion.memstore.flush.size * (# column families))

    If a given RegionServer has 16 GB of RAM, with default settings the formula works out to 16384*0.4/128 ≈ 51 regions per RS as a starting point. The formula can be extended to multiple tables; if they all have the same configuration, just use the total number of families.
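    Here is a minimal sketch of the 0.98+ formula, reading the actual tunables from the configuration; the heap size and column-family count are assumptions matching the worked example.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RegionCountStartingPoint {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            float memstoreFraction =
                    conf.getFloat("hbase.regionserver.global.memstore.size", 0.4f);
            long flushSizeBytes =
                    conf.getLong("hbase.hregion.memstore.flush.size", 128L * 1024 * 1024);

            long rsXmxBytes = 16L * 1024 * 1024 * 1024; // assumed 16 GB RS heap
            int numFamilies = 1;                        // assumed one column family

            long regions = (long) (rsXmxBytes * memstoreFraction)
                    / (flushSizeBytes * numFamilies);
            System.out.println("Starting point: ~" + regions + " regions per RS");
            // ~51 with defaults; per the next paragraph, divide by the fraction
            // of actively written regions, and treat 2-3x as the upper range.
        }
    }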

    This number can be adjusted; the formula above assumes all your regions are filled at approximately the same rate. If only a fraction of your regions are going to be actively written to, you can divide the result by that fraction to get a larger region count. Even when all regions are written to, their memstores are not filled evenly; and even if they were, jitter would eventually appear due to the limited number of concurrent flushes. Thus, one can have as many as 2-3 times more regions than the starting point; however, increased numbers carry increased risk.

    For write-heavy workload, memstore fraction can be increased in configuration at the expense of block cache; this will also allow one to have more regions.

    169.2.3. Number of regions per RS - lower bound

    HBase scales by having regions across many servers. Thus if you have 2 regions for 16 GB of data on a 20-node cluster, your data will be concentrated on just a few machines and nearly the entire cluster will be idle. This really can’t be stressed enough, since a common problem is loading 200 MB of data into HBase and then wondering why your awesome 10-node cluster isn’t doing anything.

    On the other hand, if you have a very large amount of data, you may also want to go for a larger number of regions to avoid having regions that are too large.

    169.2.4. Maximum region size

    For large tables in production scenarios, maximum region size is mostly limited by compactions: very large compactions, especially major ones, can degrade cluster performance. Currently, the recommended maximum region size is 10-20 GB, and 5-10 GB is optimal. For the older 0.90.x codebase, the upper bound on region size is about 4 GB, with a default of 256 MB.

    The size at which the region is split into two is generally configured via hbase.hregion.max.filesize; for details, see arch.region.splits.

    If you cannot estimate the size of your tables well, when starting off, it’s probably best to stick to the default region size, perhaps going smaller for hot tables (or manually split hot regions to spread the load over the cluster), or go with larger region sizes if your cell sizes tend to be largish (100k and up).
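    As an illustration of the per-table tuning mentioned above, here is a hedged sketch using the HBase 2.x client API: it caps region size for a single table (overriding the global hbase.hregion.max.filesize for that table) and asks the master to split the table's regions so a hot table is spread across the cluster. The table name and size cap are placeholders.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class RegionSizeTuning {
        public static void main(String[] args) throws Exception {
            try (Connection conn =
                         ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                TableName table = TableName.valueOf("hot_table"); // placeholder

                // Cap this table's regions at 10 GB (the optimum cited above).
                TableDescriptor desc = TableDescriptorBuilder
                        .newBuilder(admin.getDescriptor(table))
                        .setMaxFileSize(10L * 1024 * 1024 * 1024)
                        .build();
                admin.modifyTable(desc);

                // Manually split the table's regions at their midpoints.
                admin.split(table);
            }
        }
    }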

    In HBase 0.98, an experimental stripe compactions feature was added that allows for larger regions, especially for log data. See ops.stripe.

    169.2.5. Total data size per region server

    According to the numbers above for region size and number of regions per region server, in an optimistic estimate 10 GB x 100 regions per RS gives up to 1 TB served per region server, which is in line with some of the reported multi-PB use cases. However, it is important to think about the ratio of data to cache size at the RS level. With 1 TB of data per server and a 10 GB block cache, only 1% of the data will be cached, which may barely cover the block indices.
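    The estimate above in a few lines of Java, with all figures taken as the optimistic assumptions stated in the text:

    public class DataPerRegionServer {
        public static void main(String[] args) {
            long regionSizeGb = 10;  // optimal region size from the previous section
            long regionsPerRs = 100; // region-count upper bound per RS
            long blockCacheGb = 10;  // assumed block cache size

            long dataPerRsGb = regionSizeGb * regionsPerRs; // 1000 GB, ~1 TB
            double cachedPct = 100.0 * blockCacheGb / dataPerRsGb;
            System.out.printf("Data per RS: %d GB, cached: %.0f%%%n",
                    dataPerRsGb, cachedPct);
            // ~1 TB served per RS with only ~1% cached, barely enough to
            // cover the block indices.
        }
    }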

     
  • Original article: https://www.cnblogs.com/zz-ksw/p/11374058.html