  • java.io.IOException: Incompatible clusterIDs

    When starting the Hadoop cluster, none of the datanodes would come up; they fail with the following error:

    java.io.IOException: Incompatible clusterIDs in /home/xiaoqiu/hadoop_tmp/dfs/data:
    namenode clusterID = CID-7ecadf3f-9aa7-429a-8013-4e3ad1f28870; 
    datanode clusterID = CID-77fab491-d173-4dd3-8bc4-f36c0cb28b29
            at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:777)
            at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:300)
            at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:416)
            at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:395)
            at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:573)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1393)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1358)
            at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:313)
            at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:216)
            at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:637)
            at java.lang.Thread.run(Thread.java:745)
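The mismatch the exception reports can be confirmed directly from the two VERSION files before editing anything. A minimal check, simulated here on throwaway copies (on a real node the files sit under hadoop.tmp.dir, e.g. /home/xiaoqiu/hadoop_tmp/dfs/name/current/VERSION and /home/xiaoqiu/hadoop_tmp/dfs/data/current/VERSION):

```shell
# Simulated VERSION files in a temp dir; the clusterID values are the
# ones from the error message above.
tmp=$(mktemp -d)
printf 'clusterID=CID-7ecadf3f-9aa7-429a-8013-4e3ad1f28870\n' > "$tmp/name_VERSION"
printf 'clusterID=CID-77fab491-d173-4dd3-8bc4-f36c0cb28b29\n' > "$tmp/data_VERSION"

# Pull out the two clusterID lines and compare them.
name_cid=$(grep '^clusterID=' "$tmp/name_VERSION")
data_cid=$(grep '^clusterID=' "$tmp/data_VERSION")
if [ "$name_cid" != "$data_cid" ]; then
    echo "clusterID mismatch"
fi
```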

    Solution:

    On each node, go into the Hadoop temporary directory, find the VERSION files under the data and name directories, and change the clusterID in the data VERSION to match the one in the name VERSION.
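The edit itself can be scripted. A sketch of the idea, simulated on temp copies of the two files (the real paths in this post's setup are /home/xiaoqiu/hadoop_tmp/dfs/name/current/VERSION and /home/xiaoqiu/hadoop_tmp/dfs/data/current/VERSION; adjust to your own hadoop.tmp.dir):

```shell
tmp=$(mktemp -d)
# Stand-ins for name/current/VERSION and data/current/VERSION.
printf 'clusterID=CID-cd569893-3a8e-4837-8c10-bdb93fd50d65\n' > "$tmp/name_VERSION"
printf 'clusterID=CID-576629e1-43c9-4669-a6ee-74c5344be3df\n' > "$tmp/data_VERSION"

# Read the namenode's clusterID and rewrite the datanode's line to match.
name_cid=$(grep '^clusterID=' "$tmp/name_VERSION" | cut -d= -f2)
sed -i "s/^clusterID=.*/clusterID=${name_cid}/" "$tmp/data_VERSION"

grep '^clusterID=' "$tmp/data_VERSION"
```

Note that `sed -i` with no backup suffix is GNU sed syntax; BSD/macOS sed needs `sed -i ''`. Stop the datanode before editing its VERSION file.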

    Master node s150:

    name:

    [xiaoqiu@s150 /home/xiaoqiu/hadoop_tmp/dfs/name/current]$ cat VERSION
    #Sun Dec 31 00:29:38 EST 2017
    namespaceID=685530356
    clusterID=CID-cd569893-3a8e-4837-8c10-bdb93fd50d65
    cTime=0
    storageType=NAME_NODE
    blockpoolID=BP-907694094-192.168.109.150-1514698178308
    layoutVersion=-63

    data:

    [xiaoqiu@s150 /home/xiaoqiu/hadoop_tmp/dfs/data/current]$ cat VERSION
    #Sun Dec 31 00:27:44 EST 2017
    storageID=DS-a5caee40-5e97-4751-bcec-dc4f7a7e3fda
    clusterID=CID-576629e1-43c9-4669-a6ee-74c5344be3df    // does not match the name VERSION
    cTime=0
    datanodeUuid=b8cfe998-2d55-4fcc-9fc5-e849017cbceb
    storageType=DATA_NODE
    layoutVersion=-56

    Modify the clusterID in the data VERSION:

    [xiaoqiu@s150 /home/xiaoqiu/hadoop_tmp/dfs/data/current]$ cat VERSION
    #Sun Dec 31 00:27:44 EST 2017
    storageID=DS-a5caee40-5e97-4751-bcec-dc4f7a7e3fda
    clusterID=CID-cd569893-3a8e-4837-8c10-bdb93fd50d65    // changed to match the name VERSION
    cTime=0
    datanodeUuid=b8cfe998-2d55-4fcc-9fc5-e849017cbceb
    storageType=DATA_NODE
    layoutVersion=-56

    Start the datanode:

    [xiaoqiu@s150 /home/xiaoqiu/hadoop_tmp/dfs/data/current]$ start-dfs.sh

    Node s151:

    name:

    [xiaoqiu@s151 /home/xiaoqiu/hadoop_tmp/dfs/name/current]$ cat VERSION
    #Mon Dec 25 15:20:38 EST 2017
    namespaceID=875672388
    clusterID=CID-45abf7d9-2dec-4f77-b800-f20ddab41a1b
    cTime=0
    storageType=NAME_NODE
    blockpoolID=BP-1336727972-192.168.109.151-1514233238518
    layoutVersion=-63

    data:

    [xiaoqiu@s151 /home/xiaoqiu/hadoop_tmp/dfs/data/current]$ cat VERSION
    #Sun Dec 24 10:58:58 EST 2017
    storageID=DS-421723b4-ab06-486c-aece-a5a0b3f2d25e
    #clusterID=CID-77fab491-d173-4dd3-8bc4-f36c0cb28b29
    clusterID=CID-afd6244d-a77a-4ffe-a5ef-ce1a810145a7    // change to CID-45abf7d9-2dec-4f77-b800-f20ddab41a1b
    cTime=0
    datanodeUuid=e7800fda-3197-4ab9-ad34-24a1293f8097
    storageType=DATA_NODE
    layoutVersion=-56

    Start the datanode:

    [xiaoqiu@s151 /home/xiaoqiu/hadoop_tmp/dfs/data/current]$ hadoop-daemon.sh start datanode

    Do the same on the remaining nodes: on each node, change the clusterID in the data VERSION file to the clusterID recorded in that node's own name VERSION file.
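Since every node needs the same edit, the per-node fix can be wrapped in a loop. A hypothetical sketch, simulated with local directories standing in for the nodes (the hostname s152 is illustrative; on a real cluster you would run the inner commands over ssh on each host):

```shell
tmp=$(mktemp -d)
# Local stand-ins for each node's dfs/name and dfs/data VERSION files.
for node in s150 s151 s152; do
    mkdir -p "$tmp/$node"
    printf 'clusterID=CID-name-%s\n' "$node" > "$tmp/$node/name_VERSION"
    printf 'clusterID=CID-stale-%s\n' "$node" > "$tmp/$node/data_VERSION"
done

# On each node, copy the name clusterID into the data VERSION.
for node in s150 s151 s152; do
    cid=$(grep '^clusterID=' "$tmp/$node/name_VERSION" | cut -d= -f2)
    sed -i "s/^clusterID=.*/clusterID=${cid}/" "$tmp/$node/data_VERSION"
done

grep -h '^clusterID=' "$tmp"/*/data_VERSION
```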

    Welcome to follow my WeChat official account: 小秋的博客. CSDN blog: https://blog.csdn.net/xiaoqiu_cr GitHub: https://github.com/crr121 Contact: rongchen633@gmail.com Feel free to leave me a message with any questions.
  • Original post: https://www.cnblogs.com/flyingcr/p/10326966.html