After a shutdown and reboot, the NameNode in Hadoop 0.20.203.0 fails to start with the following error:
- 2011-10-21 05:22:20,504 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /tmp/hadoop-fzuir/dfs/name does not exist.
- 2011-10-21 05:22:20,506 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
- org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-fzuir/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
- at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:291)
- at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353)
- at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254)
- at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434)
- at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153)
- at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162)
My first fix was simply to format the NameNode again, but on second thought that's no good: having to reformat after every reboot would wipe out everything in HDFS~~
A bit of searching turned up the real cause: by default hadoop.tmp.dir is /tmp/hadoop-${user.name}, and the NameNode's dfs.name.dir defaults to ${hadoop.tmp.dir}/dfs/name, so the metadata lives under /tmp and disappears whenever the system cleans /tmp on reboot. The fix is to edit core-site.xml and point hadoop.tmp.dir at a persistent location:
- <property>
-   <name>hadoop.tmp.dir</name>
-   <value>/home/fzuir/Hadoop0.20.203.0/tmp/hadoop-${user.name}</value>
- </property>
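Changing the property alone isn't enough, because the new directory doesn't contain any NameNode metadata yet (the old copy under /tmp is already gone, so a one-time format costs nothing extra). A minimal sketch of the follow-up steps, assuming the install lives at /home/fzuir/Hadoop0.20.203.0 as in the value above:
- $ cd /home/fzuir/Hadoop0.20.203.0
- $ mkdir -p tmp                    # parent of the new hadoop.tmp.dir
- $ bin/stop-all.sh                 # stop any daemons that are still running
- $ bin/hadoop namenode -format     # one-time format into the new location (wipes HDFS metadata)
- $ bin/start-all.sh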
Problem solved: after rebooting the machine, Hadoop starts up without the /dfs/name is in an inconsistent state error~~
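To double-check that the fix sticks across reboots, you can verify the daemons are up and that the NameNode storage now sits under the new directory. Paths below are assumptions based on the config above and the user fzuir seen in the log:
- $ jps                             # should list NameNode, DataNode, SecondaryNameNode, ...
- $ ls /home/fzuir/Hadoop0.20.203.0/tmp/hadoop-fzuir/dfs/name/current    # fsimage and edits now live here, not in /tmp
- $ bin/hadoop dfsadmin -report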