  • HBase multi-user data loading: regionservers going offline

    Recently, when multiple users were inserting data into HBase at the same time, regionservers would shut down for no obvious reason, and the regionserver logs were full of exceptions.

    For example:

    org.apache.hadoop.hbase.DroppedSnapshotException: region: t,12130111020202,1369296305769.f14b9a1d05ae485981f6a8579f1324fb.
            at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1000)
            at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:905)
            at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:857)
            at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:394)
            at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushOneForGlobalPressure(MemStoreFlusher.java:202)
            at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.run(MemStoreFlusher.java:222)

    2013-05-23 00:48:27,671 WARN org.apache.hadoop.hbase.regionserver.Store: Failed open of hdfs://cloudgis4:9000/hbase/t/c85d7d3bc3a55a93a147f5c4f07f87b8/imageFamily/2223460197050463756.74f68489b6ea43b520c2adca643cbbdb; presumption is that file was corrupted at flush and lost edits picked up by commit log replay. Verify!
    java.io.IOException: Filesystem closed
            at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:241)
            at org.apache.hadoop.hdfs.DFSClient.access$800(DFSClient.java:74)
            at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2037)
            at java.io.DataInputStream.readFully(DataInputStream.java:178)
            at java.io.DataInputStream.readLong(DataInputStream.java:399)
            at org.apache.hadoop.hbase.io.hfile.HFile$FixedFileTrailer.deserialize(HFile.java:1526)
            at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readTrailer(HFile.java:885)
            at org.apache.hadoop.hbase.io.hfile.HFile$Reader.loadFileInfo(HFile.java:819)
            at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.loadFileInfo(StoreFile.java:1003)
            at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:382)
            at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:438)

    ABORTING region server serverName=cloudgis1,60020,1369232412016, load=(requests=1662, regions=111, usedHeap=3758, maxHeap=4991): Replay of HLog required. Forcing server shutdown


    2013-05-23 00:48:20,081 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Too many open files
    After searching online for a long time without finding a solution, I read the log again from the beginning and noticed this line:


    2013-05-23 00:48:16,939 WARN org.apache.hadoop.hdfs.DFSClient: Failed to connect to /192.168.3.6:50010, add to deadNodes and continuejava.net.SocketException: Too many open files

    It turns out that Linux limits the number of files a process may hold open. The DataNode could not open any more files, so it kept throwing exceptions and the regionserver eventually shut down. The fix is as follows:


    HBase is a database and keeps a large number of file handles open at the same time. The default limit of 1024 used by most Linux systems is not enough and leads to the FAQ: Why do I see "java.io.IOException...(Too many open files)" in my logs? error. You may also see exceptions like these:

          2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.io.EOFException
          2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-6935524980745310745_1391901
          

    So you need to raise the maximum file handle limit; setting it to 10k or more is reasonable. You also need to raise the nproc limit for the user running HBase; if it is too low, you may hit OutOfMemoryError exceptions. [2] [3]
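
    To see how close a running regionserver actually is to the limit, you can inspect the limits of the live process and count its open file descriptors. A minimal sketch, assuming the regionserver shows up in jps as HRegionServer and that you run the commands as root or as the HBase user:

          # PID of the regionserver (jps prints "<pid> HRegionServer")
          RS_PID=$(jps | awk '/HRegionServer/ {print $1}')

          # Limits the running process really has (look at "Max open files")
          grep -i 'open files' /proc/$RS_PID/limits

          # Number of file descriptors the process currently holds open
          ls /proc/$RS_PID/fd | wc -l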

    To be clear, these two settings apply to the operating system, not to HBase itself. A common mistake is that HBase runs as one user while the limits were raised for another. When HBase starts, the first line of its log shows the ulimit it sees, so it is worth checking there. [4]
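
    A quick way to catch the wrong-user mistake is to compare the account that owns the regionserver process with the ulimit line HBase printed at startup. A small sketch; the log location below is only a placeholder, adjust it to your installation:

          # Which user the regionserver process actually runs as
          ps -o user= -p $(jps | awk '/HRegionServer/ {print $1}')

          # The ulimit information HBase logged when it started
          # (log file name and directory are assumptions for a typical install)
          grep -i ulimit /var/log/hbase/hbase-*-regionserver-*.log | head -n 5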


    If you are on Ubuntu, you can configure it like this:

    Add a line to /etc/security/limits.conf, for example:

    hadoop  -       nofile  32768

    Replace hadoop with whichever user runs HBase and Hadoop; if they run as two different users, you need an entry for each. Also configure hard and soft nproc limits, for example:

    hadoop  soft    nproc   32000
    hadoop  hard    nproc   32000

    In /etc/pam.d/common-session add this line:

    session required  pam_limits.so

    Otherwise the settings in /etc/security/limits.conf will not take effect.

    You also have to log out and log back in before these settings take effect!
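
    After logging back in (pam_limits applies the limits when a new session is created), you can verify from a fresh session that both values took effect. A minimal check, assuming the user is hadoop:

          # Open-file limit; should now report 32768 with the entry above
          su - hadoop -c 'ulimit -n'

          # Max user processes; should now report 32000
          su - hadoop -c 'ulimit -u'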

    A Hadoop HDFS DataNode has an upper bound on the number of files it will serve at the same time. The parameter is called xcievers (the Hadoop authors misspelled the word). Before loading data, make sure you have set the xcievers parameter in conf/hdfs-site.xml to at least 4096:

          <property>
            <name>dfs.datanode.max.xcievers</name>
            <value>4096</value>
          </property>
          

    Remember to restart HDFS after changing this configuration.
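
    To confirm that the DataNode picked up the new value, and to see how many transfer threads are actually busy, a rough check like the following can help. The config path is a placeholder, and the DataXceiver thread name matches the Hadoop versions of that era, so treat both as assumptions:

          # PID of the DataNode process
          DN_PID=$(jps | awk '/DataNode/ {print $1}')

          # The (misspelled) xcievers value in the active config
          # (replace /path/to/hadoop/conf with your real Hadoop conf directory)
          grep -A 1 'dfs.datanode.max.xcievers' /path/to/hadoop/conf/hdfs-site.xml

          # Number of DataXceiver threads currently running; if this keeps
          # hitting the configured maximum, clients start failing to read blocks
          jstack $DN_PID | grep -c DataXceiver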

    Without this setting you may run into strange failures. The DataNode logs will show that the xcievers limit was exceeded, while clients report missing blocks, for example: 10/12/08 20:10:31 INFO hdfs.DFSClient: Could not obtain block blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry... [5]

  • Original article: https://www.cnblogs.com/zhwl/p/3651863.html