  • [Unexpected problem encountered in use] file is not sufficiently replicated yet

    The exception stack trace is as follows:

    2017-10-20 00:00:12,340 ERROR [com.ultrapower.secsight.util.HdfsUtil] - Failed to append to file!
    org.apache.hadoop.ipc.RemoteException(java.io.IOException): append: lastBlock=blk_1075691975_50802816 of src=/hebei/stdlog/std_log0001/20171016/p0001 is not sufficiently replicated yet.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2915)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:3186)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:3149)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:611)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.append(AuthorizationProviderProxyClientProtocol.java:124)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:416)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
    
        at org.apache.hadoop.ipc.Client.call(Client.java:1469)
        at org.apache.hadoop.ipc.Client.call(Client.java:1400)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
        at com.sun.proxy.$Proxy14.append(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:313)
        at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy15.append(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1756)
        at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1792)
        at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1785)
        at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:323)
        at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:319)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:319)
        at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1163)
        at com.ultrapower.secsight.util.HdfsUtil.appendWriteFile(HdfsUtil.java:309)
        at com.ultrapower.secsight.job.sync.writer.HDFSWriter.write(HDFSWriter.java:60)
        at com.ultrapower.secsight.job.maker.RunnerJob.lambda$sync$2(RunnerJob.java:130)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

    Scenario in which it occurred:

      Hadoop version 2.6.1

      The application appended to the HDFS file too frequently. The NameNode rejects an append while the file's last block has not yet reached its required replication count, so rapid-fire appends fail while block replication is still in progress.

    Solution:

      Buffer multiple records in memory and write them in a single batch, instead of appending to the HDFS file frequently; a sketch of this batching approach follows.
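
      A minimal sketch of the batching idea, assuming the target file already exists (HDFS append cannot create a file); the class and field names here (BatchedHdfsAppender, batchSize) are hypothetical, not taken from the original project:

      import java.io.IOException;
      import java.nio.charset.StandardCharsets;
      import java.util.ArrayList;
      import java.util.List;

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FSDataOutputStream;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;

      // Buffers records in memory and writes them with a single append()
      // call per batch, instead of calling append() once per record.
      public class BatchedHdfsAppender {
          private final FileSystem fs;
          private final Path file;
          private final int batchSize;  // flush threshold, in records
          private final List<String> buffer = new ArrayList<>();

          public BatchedHdfsAppender(Configuration conf, String path, int batchSize)
                  throws IOException {
              this.fs = FileSystem.get(conf);
              this.file = new Path(path);
              this.batchSize = batchSize;
          }

          // Queue one record; HDFS is only touched when the batch is full.
          public synchronized void write(String record) throws IOException {
              buffer.add(record);
              if (buffer.size() >= batchSize) {
                  flush();
              }
          }

          // One append() call writes out the whole batch.
          public synchronized void flush() throws IOException {
              if (buffer.isEmpty()) {
                  return;
              }
              try (FSDataOutputStream out = fs.append(file)) {
                  for (String record : buffer) {
                      out.write((record + "\n").getBytes(StandardCharsets.UTF_8));
                  }
                  out.hsync();  // push data to the DataNode pipeline before close
              }
              buffer.clear();
          }
      }

      Call flush() on shutdown as well, so a partially filled batch is not lost.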

    A related report from the Stack Overflow question linked below:

    If I run fsck it says the file is corrupt - missing one block. Sometimes the filesystem gets healthy with a random combination of running balancer/namenode recovery/hdfs dfs -setrep. After running the append for a while, the original problem reappears. I even once removed one node already which had corrupt data and the system got 100 % healthy without any problems - for a while.

    https://stackoverflow.com/questions/26361470/not-sufficiently-replicated-yet-when-appending-to-a-file-in-hdfs
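
    If the append frequency cannot be reduced enough, another hedge is to retry the append after a short backoff, since the NameNode accepts it once the last block reaches its replication target. A hypothetical helper along those lines (the retry count, backoff, and matched message substring are assumptions, not a documented API):

      import java.io.IOException;

      import org.apache.hadoop.fs.FSDataOutputStream;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.ipc.RemoteException;

      // Opens an append stream, retrying with a fixed backoff while the
      // NameNode still reports the last block as not sufficiently replicated.
      public final class AppendRetry {
          public static FSDataOutputStream appendWithRetry(
                  FileSystem fs, Path file, int maxRetries, long backoffMillis)
                  throws IOException, InterruptedException {
              IOException last = null;
              for (int attempt = 0; attempt <= maxRetries; attempt++) {
                  try {
                      return fs.append(file);
                  } catch (RemoteException e) {
                      String msg = e.getMessage();
                      if (msg != null && msg.contains("is not sufficiently replicated")) {
                          last = e;                     // replication still catching up
                          Thread.sleep(backoffMillis);  // wait before retrying
                      } else {
                          throw e;                      // unrelated failure: fail fast
                      }
                  }
              }
              throw last;  // retries exhausted
          }
      }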

  • Original post: https://www.cnblogs.com/Dhouse/p/7767432.html