  • Expanding HDFS storage capacity

    Problem: each node in the cluster reported a capacity of only 50 GB, even though the disks are 160 GB. It was not clear at first why only 50 GB showed up.


    It turned out to be the dfs.data.dir setting: the size of the partition that each configured data directory is mounted on is what counts toward the node's capacity.

    So the idea was to configure two data directories, one under / and one under /home, since on my cluster those two paths live on different partitions.
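    In hdfs-site.xml this means listing both paths, comma-separated, in dfs.data.dir. A sketch; the exact paths below are assumptions matching the / and /home split described above:

    ```xml
    <!-- hdfs-site.xml: each comma-separated path should sit on its own partition,
         so the datanode's capacity is the sum of both partitions.
         Paths here are illustrative. -->
    <property>
      <name>dfs.data.dir</name>
      <value>/hadoop/data,/home/hadoop/data</value>
    </property>
    ```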

    But that change immediately raised a new problem:

    /************************************************************
    STARTUP_MSG: Starting DataNode
    STARTUP_MSG:   host = cdfsrv6.mit.edu/18.77.0.180
    STARTUP_MSG:   args = []
    STARTUP_MSG:   version = 0.19.2-dev
    STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/core/tags/release-0.19.1 -r 748415; compiled by 'wart' on Mon Mar 23 15:21:37 PDT 2009
    ************************************************************/
    2010-03-30 16:46:18,456 ERROR datanode.DataNode (DataNode.java:main(1331)) - org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /export/06a/hadoop/data is in an inconsistent state: has incompatible storage Id.
            at org.apache.hadoop.hdfs.server.datanode.DataStorage.getFields(DataStorage.java:183)
            at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.read(Storage.java:227)
            at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.read(Storage.java:216)
            at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:228)
            at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:148)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:291)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:209)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1242)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1197)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1205)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1327)

    This error occurs when one of the data directories gets reformatted, which regenerates its VERSION file (i.e., /path/to/hadoop/data/current/VERSION). If there are multiple data directories and at least one of them has a different VERSION file, the datanode fails with this message.
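    For reference, a datanode VERSION file is a small properties file along these lines (the values here are made up; the key point is that storageID must agree across all of a datanode's data directories):

    ```
    #Tue Mar 30 16:40:00 CST 2010
    namespaceID=123456789
    storageID=DS-1234567890-18.77.0.180-50010-1269938400000
    cTime=0
    storageType=DATA_NODE
    layoutVersion=-18
    ```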


     Solution

    Take the following actions:

    1. Verify that no datanode java process is currently running on the node.
    2. Create a backup of all the VERSION files.
    3. Copy one of the VERSION files into all the data directories in the correct place ($PREFIX/current/VERSION).
    4. Start the data node. If the error does not go away, contact osg-hadoop support.
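    Steps 2 and 3 above can be sketched as follows. The data directory paths are hypothetical (substitute your own dfs.data.dir entries); to keep this runnable safely, the sketch sets up two throwaway directories with mismatched VERSION files and then reconciles them:

    ```shell
    set -e
    # 1. First make sure no datanode JVM is running, e.g.: jps | grep -i datanode

    # Simulated data directories standing in for the real dfs.data.dir paths.
    base=$(mktemp -d)
    mkdir -p "$base/data1/current" "$base/data2/current"
    echo "storageID=DS-111" > "$base/data1/current/VERSION"
    echo "storageID=DS-222" > "$base/data2/current/VERSION"   # the inconsistent copy

    # 2. Back up every VERSION file before touching anything.
    for v in "$base"/data*/current/VERSION; do cp "$v" "$v.bak"; done

    # 3. Copy one VERSION file into all the other data directories
    #    ($PREFIX/current/VERSION in each).
    cp "$base/data1/current/VERSION" "$base/data2/current/VERSION"

    # Both copies should now be identical.
    diff "$base/data1/current/VERSION" "$base/data2/current/VERSION" && echo "VERSION files now match"
    ```

    After this, restarting the datanode (step 4) should succeed, since every data directory now carries the same storage ID.
    
    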

    Reference: https://twiki.grid.iu.edu/bin/view/Storage/HadoopDebug#Incompatible_Storage_IDs_on_the


  • Original article: https://www.cnblogs.com/Stomach-ache/p/3703179.html