  • Hadoop won't restart after HDFS runs out of space


    During a server check, we found that files stored on HDFS could no longer be synced, and then discovered that Hadoop had stopped.

    Attempts to restart it failed.


    Checking the Hadoop logs:

    2014-07-30 14:15:42,025 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.206.133
    2014-07-30 14:15:42,026 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Cannot roll edit log, edits.new files already exists in all healthy directories:
      /home/hadoop/hdfs/name/current/edits.new
    
    2014-07-30 14:17:17,853 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Unable to sync edit log.
    java.io.IOException: No space left on device
    	at sun.nio.ch.FileChannelImpl.force0(Native Method)
    	at sun.nio.ch.FileChannelImpl.force(FileChannelImpl.java:348)
    	at org.apache.hadoop.hdfs.server.namenode.FSEditLog$EditLogFileOutputStream.flushAndSync(FSEditLog.java:215)
    	at org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream.flush(EditLogOutputStream.java:89)
    	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:1017)
    	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1190)
    	at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:628)
    	at sun.reflect.GeneratedMethodAccessor300.invoke(Unknown Source)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    	at java.lang.reflect.Method.invoke(Method.java:597)
    	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    	at java.security.AccessController.doPrivileged(Native Method)
    	at javax.security.auth.Subject.doAs(Subject.java:396)
    	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2014-07-30 14:17:17,853 FATAL org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No edit streams are accessible
    java.lang.Exception: No edit streams are accessible
    	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.fatalExit(FSEditLog.java:388)
    	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.exitIfNoStreams(FSEditLog.java:407)
    	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.removeEditsAndStorageDir(FSEditLog.java:432)
    	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.removeEditsStreamsAndStorageDirs(FSEditLog.java:470)
    	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:1030)
    	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1190)
    	at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:628)
    	at sun.reflect.GeneratedMethodAccessor300.invoke(Unknown Source)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    	at java.lang.reflect.Method.invoke(Method.java:597)
    	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    	at java.security.AccessController.doPrivileged(Native Method)
    	at javax.security.auth.Subject.doAs(Subject.java:396)
    	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

    The disk was full.

    Checking one of the datanodes with df -h confirmed it: the partition was indeed full.


    But du -s reported only a few hundred GB in use, nowhere near full.


    This was genuinely puzzling. The only option was to reboot the server hosting that datanode and remount the volume; after that, the reported capacity returned to normal.
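A df/du mismatch like this usually means some process still holds open handles to files that have already been deleted: df counts every allocated block on the filesystem, while du only walks the directory tree, so blocks belonging to deleted-but-open files are invisible to du. Rebooting clears those handles, which is why remounting "fixed" it. On Linux the culprits can often be found without a reboot; a rough sketch using standard tooling (the lsof flag and /proc fallback are general Linux techniques, not from the original post):

```shell
# Files that are deleted but still held open (link count 0) keep their blocks
# allocated, so df sees the usage but du cannot.
lsof +L1 2>/dev/null | head -n 20          # open files with link count below 1

# Fallback without lsof: deleted fd targets show up as "(deleted)" in /proc.
ls -l /proc/[0-9]*/fd 2>/dev/null | grep '(deleted)' || true
```

Restarting (or signalling) the offending process releases the space, which avoids rebooting the whole server.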


    But the Hadoop cluster still refused to start, failing every time with:

    2014-07-30 16:27:46,610 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
    java.io.IOException: Incorrect data format. logVersion is -32 but writables.length is 0. 
    	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.loadFSEdits(FSEditLog.java:560)
    	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:1026)
    	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:839)
    	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:377)
    	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
    	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
    	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
    	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
    	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
    	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
    	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
    2014-07-30 16:27:46,612 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: Incorrect data format. logVersion is -32 but writables.length is 0. 
    	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.loadFSEdits(FSEditLog.java:560)
    	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:1026)
    	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:839)
    	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:377)
    	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
    	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
    	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
    	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
    	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
    	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
    	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)

    With no explanation in sight, the only option left was to try recovering from the SecondaryNameNode:

    1. First, back up the NameNode and SecondaryNameNode metadata.

    2. Import the checkpoint data: hadoop namenode -importCheckpoint

        This failed.
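For reference, the backup step can be scripted. The helper below is a hypothetical sketch; the paths in the commented usage echo the one from the log above (dfs.name.dir) plus an assumed fs.checkpoint.dir location, so adjust them to your own configuration:

```shell
# backup_meta_dir: copy a NameNode/SecondaryNameNode metadata directory aside
# before attempting any recovery. Prints the backup location on success.
backup_meta_dir() {
  src=$1
  dst="${src}.bak.$(date +%Y%m%d%H%M%S)"
  cp -a "$src" "$dst" && echo "$dst"
}

# On the real cluster this would look like (paths are assumptions):
#   backup_meta_dir /home/hadoop/hdfs/name            # dfs.name.dir
#   backup_meta_dir /home/hadoop/hdfs/namesecondary   # fs.checkpoint.dir
#   hadoop namenode -importCheckpoint
```

Only after both copies exist is it safe to let -importCheckpoint (or any manual edit) touch the name directory.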


    In the end, searching online turned up a few remedies (one involved patching the source and recompiling, which we passed on):
    printf "\xff\xff\xff\xee\xff" > edits

    The restart failed again, so we continued with:
    printf "\xff\xff\xff\xee\xff" > edits.new

    This time the restart succeeded.
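The printf trick works because an old-format HDFS edits file is just a 4-byte big-endian layout version followed by a stream of opcodes, and opcode 0xff (OP_INVALID, i.e. -1) marks end-of-log, so those five bytes form a structurally valid but empty edit log. One caveat: 0xffffffee decodes to -18, while the error above reports logVersion -32 (which would presumably be \xff\xff\xff\xe0), so it is worth comparing against the first four bytes of an intact edits file from your own cluster before overwriting anything. The bytes can be checked with od:

```shell
# An empty old-format edit log: 4-byte big-endian layout version, then the
# OP_INVALID opcode (0xff) that terminates the opcode stream.
printf '\xff\xff\xff\xee\xff' > /tmp/edits.empty
od -An -tx1 /tmp/edits.empty   # prints: ff ff ff ee ff
```

Writing this over edits/edits.new is exactly why the recovery below loses data: every transaction recorded since the last checkpoint is discarded.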


    But now there was a problem: data was lost, because truncating the edits files wiped out the most recent log entries.

    Normally, at this point there would be nothing more to do.

    Fortunately, we had cross-site synchronization and backup. Syncing the data back over from the HDFS cluster in our other data center completed the recovery.

    Without that off-site backup, I would have been far more cautious about the operations above.



  • Original post: https://www.cnblogs.com/brucemengbm/p/6927329.html