  • Debugging errors encountered while running Hadoop

    The problem encountered most often when running Hadoop is the DataNode failing to start properly, and it can have many causes. Here are the two cases I have run into:

    (1) In the first case, the firewall on the Master was not turned off. When Hadoop starts, the Master node itself comes up normally, but with the Master's firewall enabled the Slaves cannot reach port 9000 on the Master. In this case, the DataNode on each Slave starts and then immediately shuts down.
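    A quick way to confirm this case (a sketch; the host name "master" is an assumption, and the firewall commands below are for an iptables-based distribution such as CentOS — adjust for yours):

        # from a Slave: check whether the NameNode port on the Master is reachable
        telnet master 9000            # "Connection refused" or a timeout points to the firewall

        # on the Master (iptables-based systems):
        service iptables stop         # stop the firewall immediately
        chkconfig iptables off        # keep it disabled across reboots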

    (2) The second case is caused by reformatting: every NameNode -format creates a new namespaceID, while the VERSION file under tmp/dfs/data/current on each DataNode still holds the ID from the previous format. NameNode -format clears the NameNode's data but does not clear the DataNodes' data, so the DataNodes fail on startup. What you have to do is clear all directories under tmp, on every node, before each format.
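    Concretely, the routine before every reformat looks roughly like this (a sketch, assuming a Hadoop 1.x-style layout and that hadoop.tmp.dir is left at the Hadoop default of /tmp/hadoop-${user.name}; substitute the path from your own core-site.xml):

        bin/stop-all.sh                      # stop all daemons first
        # on EVERY node, clear everything under hadoop.tmp.dir
        rm -rf /tmp/hadoop-${USER}/*
        bin/hadoop namenode -format          # NameNode and DataNodes now agree on the namespaceID
        bin/start-all.sh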

    Beyond those two problems, one thing cannot be stressed enough: learn to read the runtime logs. Both of the problems above show up in the log output.

    Whichever node fails to start, check the log files under that node's logs folder. Taking the second problem as an example, when the DataNode does not start, that node's log shows the following:

    2010-07-21 10:12:11,987 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /home/admin/joe.wangh/hadoop/data/dfs.data.dir: namenode namespaceID = 898136669; datanode namespaceID = 2127444065 
            at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:233)
            at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:148)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:288)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:206)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1239)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1194)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1202)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1324)

    The error says the namespaceIDs are inconsistent.
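    To locate such a message yourself, inspect the DataNode log on the failing node (a sketch; the exact file name contains your user name and host name, and the layout assumes Hadoop 1.x):

        cd $HADOOP_HOME/logs
        ls hadoop-*-datanode-*.log                     # one log file per daemon on this node
        grep "namespaceID" hadoop-*-datanode-*.log     # surfaces the Incompatible namespaceIDs error
        tail -n 100 hadoop-*-datanode-*.log            # or read the most recent entries directly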

    Only after finding the cause of an error can you prescribe the right remedy:

    Two workarounds are given below; I used the second one.

    Workaround 1: Start from scratch

    I can testify that the following steps solve this error, but the side effects won't make you happy (they didn't make me happy either). The crude workaround I have found is to:

    1. Stop the cluster.

    2. Delete the data directory on the problematic DataNode: the directory is specified by dfs.data.dir in conf/hdfs-site.xml; if you followed this tutorial, the relevant directory is /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data.

    3. Reformat the NameNode (NOTE: all HDFS data is lost during this process!).

    4. Restart the cluster.

    If deleting all the HDFS data and starting from scratch does not sound like a good idea (it might be OK during the initial setup/testing), you might give the second approach a try.
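    As a sketch, the whole sequence is roughly the following, assuming a Hadoop 1.x-style installation with the control scripts under bin/ and the data directory from the tutorial above:

        bin/stop-all.sh                    # 1. stop the cluster
        # 2. on the problematic DataNode, delete its data directory
        rm -rf /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data
        bin/hadoop namenode -format        # 3. reformat the NameNode (ALL HDFS DATA IS LOST!)
        bin/start-all.sh                   # 4. restart the cluster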

    Workaround 2: Updating namespaceID of problematic datanodes

    Big thanks to Jared Stehler for the following suggestion. I have not tested it myself yet, but feel free to try it out and send me your feedback. This workaround is "minimally invasive" as you only have to edit one file on the problematic datanodes:

    1. Stop the DataNode.

    2. Edit the value of namespaceID in <dfs.data.dir>/current/VERSION to match the value on the current NameNode.

    3. Restart the DataNode.

    If you followed the instructions in my tutorials, the full path of the relevant file is /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data/current/VERSION (background: dfs.data.dir is by default set to ${hadoop.tmp.dir}/dfs/data, and we set hadoop.tmp.dir to /usr/local/hadoop-datastore/hadoop-hadoop).
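    A minimal sketch of the edit (the namespaceID value 898136669 is taken from the log excerpt above; read the authoritative value from <dfs.name.dir>/current/VERSION on the NameNode):

        bin/hadoop-daemon.sh stop datanode     # 1. stop the DataNode
        # 2. make the DataNode's namespaceID match the NameNode's
        sed -i 's/^namespaceID=.*/namespaceID=898136669/' \
            /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data/current/VERSION
        bin/hadoop-daemon.sh start datanode    # 3. restart the DataNode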

    If you wonder what the contents of VERSION look like, here is one of mine:

    #contents of <dfs.data.dir>/current/VERSION
    namespaceID=393514426
    storageID=DS-1706792599-10.10.10.1-50010-1204306713481
    cTime=1215607609074
    storageType=DATA_NODE
    layoutVersion=-13

     
