  • Error: the datanode fails to start

    The datanode log shows the following:
    2016-04-14 04:07:58,821 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool (Datanode Uuid unassigned) service to itcast01/192.168.1.201:9000. Exiting.
    java.io.IOException: Incompatible clusterIDs in /itcast/hadoop-2.6.0/tmp/dfs/data: namenode clusterID = CID-fee4dcb4-9615-42c0-bd46-d3b4acf02e61; datanode clusterID = CID-9d6dfbcc-0f6f-47e3-9a79-d01cf5fc636b
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:646)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:320)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:403)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:422)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1311)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1276)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:314)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:828)
    at java.lang.Thread.run(Thread.java:745)
    2016-04-14 04:07:58,832 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool (Datanode Uuid unassigned) service to itcast01/192.168.1.201:9000
    2016-04-14 04:07:58,838 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool (Datanode Uuid unassigned) service to itcast02/192.168.1.202:9000. Exiting.
    java.io.IOException: Incompatible clusterIDs in /itcast/hadoop-2.6.0/tmp/dfs/data: namenode clusterID = CID-fee4dcb4-9615-42c0-bd46-d3b4acf02e61; datanode clusterID = CID-9d6dfbcc-0f6f-47e3-9a79-d01cf5fc636b
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:646)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:320)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:403)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:422)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1311)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1276)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:314)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:828)
    at java.lang.Thread.run(Thread.java:745)
    2016-04-14 04:07:58,839 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool (Datanode Uuid unassigned) service to itcast02/192.168.1.202:9000
    2016-04-14 04:07:58,892 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool (Datanode Uuid unassigned)
    2016-04-14 04:08:00,892 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
    2016-04-14 04:08:00,923 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
    2016-04-14 04:08:00,928 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
    /*************************************************
    SHUTDOWN_MSG: Shutting down DataNode at itcast05/192.168.1.205
    **************************************************/

    From the log, the "Incompatible clusterIDs" lines pinpoint the problem:

    the datanode's clusterID does not match the namenode's clusterID.
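    To confirm the mismatch directly, compare the clusterID fields in the two VERSION files. This is just a quick check: the name-directory path below is an assumption based on the default layout under hadoop.tmp.dir, while the data-directory path is taken from the log above.

    # On the namenode (assumed default dfs.namenode.name.dir layout)
    grep clusterID /home/hadoop/tmp/dfs/name/current/VERSION

    # On the datanode (path from the log above)
    grep clusterID /itcast/hadoop-2.6.0/tmp/dfs/data/current/VERSION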

    Solution:

    My cluster layout is as follows: (screenshot of the cluster layout omitted)

    Following the path in the log, on the namenode node:
    cd /home/hadoop/tmp/dfs
    (the path in this post's own log is /itcast/hadoop-2.6.0/tmp/dfs; use whatever your log reports)
    Here you can see the name/current folder, which contains a VERSION file. (directory-listing screenshots omitted)

    On the datanode node:

    cd /home/hadoop/tmp/dfs
    Here you can see the data/current folder, which also contains a VERSION file. (screenshot omitted)

    Copy the clusterID from the namenode's name/current/VERSION into the datanode's data/current/VERSION, overwriting the datanode's original clusterID,
    so that the two clusterIDs match.
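    A minimal shell sketch of this step on the datanode, using the namenode clusterID reported in the log above (the data-directory path also follows the log; back up VERSION first if unsure):

    # Overwrite the clusterID line in the datanode's VERSION with the namenode's clusterID
    sed -i 's/^clusterID=.*/clusterID=CID-fee4dcb4-9615-42c0-bd46-d3b4acf02e61/' /itcast/hadoop-2.6.0/tmp/dfs/data/current/VERSION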

    Then restart the cluster. Once it is up, run jps on the datanode itcast05 and check its processes:
    [root@itcast05 ~]# jps
    3645 QuorumPeerMain
    3715 JournalNode
    4292 DataNode
    5190 Jps
    3976 NodeManager
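    The DataNode process is back. As an optional further check, you can confirm from the namenode side that the datanode has re-registered; itcast05 should appear under "Live datanodes":

    hdfs dfsadmin -report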

    Cause of the problem:

      After the first format of DFS, Hadoop was started and used; later the format command (hdfs namenode -format) was run again. Re-formatting generates a new clusterID on the namenode, while the datanodes keep their old clusterID, hence the mismatch.
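      If the re-format was intentional and the HDFS data can be discarded, an alternative fix (assumption: data loss is acceptable) is to wipe the datanode storage directory before restarting, so it is re-created with the new clusterID on first start:

      # WARNING: deletes all block data on this datanode; only do this if the data is disposable
      rm -rf /itcast/hadoop-2.6.0/tmp/dfs/data/*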

  • Original post: https://www.cnblogs.com/shiguangmanbu2016/p/5932877.html