  • Error: the DataNode fails to start

    The DataNode log shows the following:
    2016-04-14 04:07:58,821 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool (Datanode Uuid unassigned) service to itcast01/192.168.1.201:9000. Exiting.
    java.io.IOException: Incompatible clusterIDs in /itcast/hadoop-2.6.0/tmp/dfs/data: namenode clusterID = CID-fee4dcb4-9615-42c0-bd46-d3b4acf02e61; datanode clusterID = CID-9d6dfbcc-0f6f-47e3-9a79-d01cf5fc636b
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:646)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:320)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:403)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:422)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1311)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1276)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:314)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:828)
    at java.lang.Thread.run(Thread.java:745)
    2016-04-14 04:07:58,832 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool (Datanode Uuid unassigned) service to itcast01/192.168.1.201:9000
    2016-04-14 04:07:58,838 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool (Datanode Uuid unassigned) service to itcast02/192.168.1.202:9000. Exiting.
    java.io.IOException: Incompatible clusterIDs in /itcast/hadoop-2.6.0/tmp/dfs/data: namenode clusterID = CID-fee4dcb4-9615-42c0-bd46-d3b4acf02e61; datanode clusterID = CID-9d6dfbcc-0f6f-47e3-9a79-d01cf5fc636b
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:646)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:320)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:403)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:422)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1311)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1276)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:314)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:828)
    at java.lang.Thread.run(Thread.java:745)
    2016-04-14 04:07:58,839 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool (Datanode Uuid unassigned) service to itcast02/192.168.1.202:9000
    2016-04-14 04:07:58,892 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool (Datanode Uuid unassigned)
    2016-04-14 04:08:00,892 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
    2016-04-14 04:08:00,923 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
    2016-04-14 04:08:00,928 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
    /*************************************************

    SHUTDOWN_MSG: Shutting down DataNode at itcast05/192.168.1.205
    **************************************************/

    From the log, the "Incompatible clusterIDs" lines pinpoint the problem:

    the datanode's clusterID and the namenode's clusterID do not match.

    Solution:

    My cluster layout is as follows (the original screenshot is not reproduced here): per the log above, the namenodes are itcast01 (192.168.1.201) and itcast02 (192.168.1.202), and the failing datanode is itcast05 (192.168.1.205).
    Following the path from the log (the directory below is the one in my environment; use whatever your dfs.namenode.name.dir points to), on the namenode:
    cd /home/hadoop/tmp/dfs
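    The original screenshots here presumably showed the name/current directory and its VERSION file. A quick way to read the namenode's clusterID from the shell (a minimal sketch, assuming the directory used in the step above):

    # On the namenode: the clusterID is recorded in name/current/VERSION
    grep clusterID /home/hadoop/tmp/dfs/name/current/VERSION
    # Per the log above, this should print: clusterID=CID-fee4dcb4-9615-42c0-bd46-d3b4acf02e61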

    On the datanode:

    cd /home/hadoop/tmp/dfs
    There you can see the data/current folder.
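    The original screenshot here showed that VERSION file. As a rough sketch, a datanode's data/current/VERSION in Hadoop 2.x contains fields like the following; only the clusterID is taken from the log above, the other values are placeholders:

    # Sketch of data/current/VERSION (clusterID from the log; other values are placeholders)
    storageID=DS-xxxxxxxx
    clusterID=CID-9d6dfbcc-0f6f-47e3-9a79-d01cf5fc636b
    cTime=0
    datanodeUuid=xxxxxxxx
    storageType=DATA_NODE
    layoutVersion=-56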

    Copy the clusterID from the namenode's name/current/VERSION into the datanode's data/current/VERSION, overwriting the old clusterID there,
    so that the two clusterIDs match.
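    A minimal sketch of that edit from the shell, assuming the paths used above; substitute your own namenode's clusterID before running it:

    # On the datanode: overwrite the stale clusterID with the namenode's value
    sed -i 's/^clusterID=.*/clusterID=CID-fee4dcb4-9615-42c0-bd46-d3b4acf02e61/' \
        /home/hadoop/tmp/dfs/data/current/VERSION
    grep clusterID /home/hadoop/tmp/dfs/data/current/VERSION    # verify the change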

    Then restart the DataNode. Once it is up, run jps on the datanode node itcast05 to check its processes:
    [root@itcast05 ~]# jps
    3645 QuorumPeerMain
    3715 JournalNode
    4292 DataNode
    5190 Jps
    3976 NodeManager

    Cause of the problem:

      After DFS was formatted the first time, Hadoop was started and used; later the format command (hdfs namenode -format) was run again. Re-formatting regenerates the namenode's clusterID, while the datanode's clusterID stays unchanged, so the two no longer match.
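    Two related approaches, added here as suggestions rather than part of the original fix: if the blocks on the datanode are disposable, you can simply delete its data directory and let it re-register under the new clusterID; and when a re-format is actually intended, hdfs namenode -format accepts a -clusterid argument so the existing ID can be reused. The paths and the clusterID below are assumptions taken from the text above.

    # Option A (destroys the datanode's blocks): remove the old storage, then restart the DataNode
    rm -rf /home/hadoop/tmp/dfs/data
    # Option B: re-format the namenode while keeping the current clusterID
    hdfs namenode -format -clusterid CID-fee4dcb4-9615-42c0-bd46-d3b4acf02e61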

  • Original post: https://www.cnblogs.com/shiguangmanbu2016/p/5932877.html