  • Hadoop exception "could only be replicated to 0 nodes, instead of 1" — solution

    Exception analysis

     1. The "could only be replicated to 0 nodes, instead of 1" exception

    (1) Description of the exception

    The configuration described above is correct, and the following steps have already been carried out:

    [root@localhost hadoop-0.20.0]# bin/hadoop namenode -format

    [root@localhost hadoop-0.20.0]# bin/start-all.sh

    At this point the startup script prints success messages for all five daemons — jobtracker, tasktracker, namenode, datanode and secondarynamenode — but running the jps command to check the processes shows that this is not actually the case:

    4281 Jps
    4007 SecondaryNameNode
    3771 NameNode

    As you can see, only two of the processes actually started; the others did not. If you continue anyway and, in preparation for the wordcount example, run the upload command:

    [root@localhost hadoop-0.20.0]# bin/hadoop fs -put input in

    a pile of exceptions is thrown, as shown below:

    10/08/02 15:36:04 INFO hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/in/LICENSE.txt could only be replicated to 0 nodes, instead of 1
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1256)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
            at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
            at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:396)
            at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

            at org.apache.hadoop.ipc.Client.call(Client.java:739)
            at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
            at $Proxy0.addBlock(Unknown Source)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
            at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
            at $Proxy0.addBlock(Unknown Source)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2873)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2755)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2046)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2232)

    10/08/02 15:36:04 WARN hdfs.DFSClient: NotReplicatedYetException sleeping /user/root/in/LICENSE.txt retries left 4
    10/08/02 15:36:04 INFO hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/in/LICENSE.txt could only be replicated to 0 nodes, instead of 1
            ... (same stack trace as above) ...

    10/08/02 15:36:04 WARN hdfs.DFSClient: NotReplicatedYetException sleeping /user/root/in/LICENSE.txt retries left 3
    10/08/02 15:36:05 INFO hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/in/LICENSE.txt could only be replicated to 0 nodes, instead of 1
            ... (same stack trace as above) ...

    10/08/02 15:36:05 WARN hdfs.DFSClient: NotReplicatedYetException sleeping /user/root/in/LICENSE.txt retries left 2
    10/08/02 15:36:07 INFO hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/in/LICENSE.txt could only be replicated to 0 nodes, instead of 1
            ... (same stack trace as above) ...

    10/08/02 15:36:07 WARN hdfs.DFSClient: NotReplicatedYetException sleeping /user/root/in/LICENSE.txt retries left 1
    10/08/02 15:36:10 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/in/LICENSE.txt could only be replicated to 0 nodes, instead of 1
            ... (same stack trace as above) ...

    10/08/02 15:36:10 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
    10/08/02 15:36:10 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/root/in/LICENSE.txt" - Aborting...
    put: java.io.IOException: File /user/root/in/LICENSE.txt could only be replicated to 0 nodes, instead of 1

    When you see the message "could only be replicated to 0 nodes, instead of 1", your first thought may be that the dfs.replication property in hdfs-site.xml is misconfigured, but that is in fact not the problem here.
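    In a pseudo-distributed setup this message almost always means that no DataNode is alive and registered with the NameNode, whatever dfs.replication is set to. A quick way to confirm this (a sketch, assuming the same hadoop-0.20.0 installation directory used above) is to ask the NameNode for a cluster report and double-check the configured replication factor:

    # a report showing 0 live datanodes confirms that the DataNode never registered
    [root@localhost hadoop-0.20.0]# bin/hadoop dfsadmin -report
    # show the dfs.replication property actually configured (if any)
    [root@localhost hadoop-0.20.0]# grep -A 1 dfs.replication conf/hdfs-site.xml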

    Instead, you need to look at the startup logs. Mine are under /root/hadoop-0.20.0/logs, listed below:

    hadoop-root-datanode-localhost.log    hadoop-root-namenode-localhost.log           hadoop-root-tasktracker-localhost.log

    hadoop-root-datanode-localhost.out    hadoop-root-namenode-localhost.out           hadoop-root-tasktracker-localhost.out
    hadoop-root-jobtracker-localhost.log  hadoop-root-secondarynamenode-localhost.log  history
    hadoop-root-jobtracker-localhost.out  hadoop-root-secondarynamenode-localhost.out
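
    Rather than reading each file in full, you can pull the ERROR entries straight out of the DataNode log (a sketch, using the log directory shown above):

    # print ERROR lines from the DataNode log together with the stack trace that follows them
    [root@localhost hadoop-0.20.0]# grep -A 10 ERROR logs/hadoop-root-datanode-localhost.log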

    Looking at the hadoop-root-datanode-localhost.log log file, we find the exception:

    2010-08-02 15:38:34,642 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG: 
    /************************************************************
    STARTUP_MSG: Starting DataNode
    STARTUP_MSG:   host = localhost/127.0.0.1
    STARTUP_MSG:   args = []
    STARTUP_MSG:   version = 0.20.0
    STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.20 -r 763504; compiled by 'ndaley' on Thu Apr  9 05:18:40 UTC 2009
    ************************************************************/
    2010-08-02 15:38:35,381 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /tmp/hadoop-root/dfs/data: namenode namespaceID = 409052671; datanode namespaceID = 769845957
            at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:233)
            at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:148)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:298)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1246)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368)

    2010-08-02 15:38:35,382 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
    /************************************************************
    SHUTDOWN_MSG: Shutting down DataNode at localhost/127.0.0.1
    ************************************************************/

    From this output, and in particular the line "Incompatible namespaceIDs in /tmp/hadoop-root/dfs/data", we can tell that the namespaceIDs under /tmp/hadoop-root/dfs/data are incompatible. In other words, there is most likely stale, incompatible data left under /tmp/hadoop-root/dfs/data from a previous run of a different Hadoop version. That is exactly what happened in my case: I had just tried out Hadoop 0.19.0 and had not cleaned up its data afterwards.
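    If you want to see the mismatch for yourself, each side records its namespaceID in a VERSION file. A sketch, assuming the default layout where the NameNode keeps its metadata under /tmp/hadoop-root/dfs/name and the DataNode keeps its blocks under /tmp/hadoop-root/dfs/data:

    # the two namespaceID values should match; here they do not
    [root@localhost hadoop-0.20.0]# grep namespaceID /tmp/hadoop-root/dfs/name/current/VERSION
    [root@localhost hadoop-0.20.0]# grep namespaceID /tmp/hadoop-root/dfs/data/current/VERSION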

    (2) Solution

    The fix is to clear out the data under /tmp/hadoop-root/dfs/data; after that everything runs normally, as shown by the command sequence sketched below.
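    A minimal sketch, assuming the pseudo-distributed layout used in this article, with HDFS storage under the default /tmp/hadoop-root directory. Note that removing dfs/data discards any blocks previously stored on this DataNode:

    # stop all daemons before touching the storage directories
    [root@localhost hadoop-0.20.0]# bin/stop-all.sh
    # remove the stale DataNode storage (and with it the old namespaceID)
    [root@localhost hadoop-0.20.0]# rm -rf /tmp/hadoop-root/dfs/data
    # restart; the DataNode re-creates dfs/data using the NameNode's current namespaceID
    [root@localhost hadoop-0.20.0]# bin/start-all.sh

    After the daemons come back up, jps now reports: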

    5386 JobTracker
    5253 DataNode
    5529 Jps
    4874 SecondaryNameNode
    5489 TaskTracker
    4649 NameNode

    All five daemon processes are now up, so you can upload files to HDFS and run the wordcount example, as sketched below.
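
    For completeness, a sketch of the upload and the wordcount run; the examples jar name is taken from the 0.20.0 distribution and may differ for other versions:

    # upload the local input directory and run the bundled wordcount example
    [root@localhost hadoop-0.20.0]# bin/hadoop fs -put input in
    [root@localhost hadoop-0.20.0]# bin/hadoop jar hadoop-0.20.0-examples.jar wordcount in out
    # list and print the output
    [root@localhost hadoop-0.20.0]# bin/hadoop fs -ls out
    [root@localhost hadoop-0.20.0]# bin/hadoop fs -cat out/part-*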

    This article originally appeared on the Linuxidc website (www.linuxidc.com). Original link: http://www.linuxidc.com/Linux/2010-08/27484p3.htm
