  • Expanding HDFS storage space

    Problem: each node in the cluster reported a capacity of only 50 GB, even though its disk is 160 GB. It was unclear why the capacity was just 50 GB.


    It turned out to be the dfs.data.dir setting: a node's reported capacity is the size of the partition that each configured data directory is mounted on.

    So the idea was to configure two data directories, one under / and one under /home, since on my cluster those two locations sit on different partitions; a config sketch follows.
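
    A minimal sketch of what that configuration might look like; the paths are hypothetical examples, not taken from the original cluster. On this 0.19-era release the property lives in hadoop-site.xml (later releases use hdfs-site.xml and the key dfs.datanode.data.dir):

    <property>
      <name>dfs.data.dir</name>
      <!-- Comma-separated list; each path should sit on a different
           partition so their capacities add up. Paths are hypothetical. -->
      <value>/hadoop/data,/home/hadoop/data</value>
    </property>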

    But that change brought a new problem:

    /************************************************************
    STARTUP_MSG: Starting DataNode
    STARTUP_MSG:   host = cdfsrv6.mit.edu/18.77.0.180
    STARTUP_MSG:   args = []
    STARTUP_MSG:   version = 0.19.2-dev
    STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/core/tags/release-0.19.1 -r 748415; compiled by 'wart' on Mon Mar 23 15:21:37 PDT 2009
    ************************************************************/
    2010-03-30 16:46:18,456 ERROR datanode.DataNode (DataNode.java:main(1331)) - org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /export/06a/hadoop/data is in an inconsistent state: has incompatible storage Id.
            at org.apache.hadoop.hdfs.server.datanode.DataStorage.getFields(DataStorage.java:183)
            at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.read(Storage.java:227)
            at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.read(Storage.java:216)
            at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:228)
            at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:148)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:291)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:209)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1242)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1197)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1205)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1327)

    This happens when one of the data directories is reformatted, which regenerates its VERSION file (i.e., /path/to/hadoop/data/current/VERSION). If the datanode has multiple data directories and at least one of them carries a different VERSION file, startup fails with this message.
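
    To confirm the mismatch, compare the VERSION files across the data directories; a datanode's VERSION file holds fields such as storageID and namespaceID, and diff will show which line disagrees. The paths below are the same hypothetical examples as above:

    # Compare the two VERSION files; any differing line is the culprit
    diff /hadoop/data/current/VERSION /home/hadoop/data/current/VERSION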


    Solution

    Take the following actions (a shell sketch follows the list):

    1. Verify there is no datanode java process on the node currently running.
    2. Create a backup of all the VERSION files.
    3. Copy one of the VERSION files into all the data directories in the correct place ($PREFIX/current/VERSION).
    4. Start the data node. If the error does not go away, contact osg-hadoop support.
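
    A minimal shell sketch of steps 1-4, assuming the two hypothetical data directories from above; substitute your actual dfs.data.dir paths:

    # 1. Verify no datanode JVM is running on this node
    jps | grep -i datanode                  # should print nothing

    # 2. Back up every VERSION file before touching anything
    cp /hadoop/data/current/VERSION /hadoop/data/current/VERSION.bak
    cp /home/hadoop/data/current/VERSION /home/hadoop/data/current/VERSION.bak

    # 3. Copy one VERSION file over the other so every
    #    $PREFIX/current/VERSION is identical
    cp /hadoop/data/current/VERSION /home/hadoop/data/current/VERSION

    # 4. Restart the datanode (0.19-era command)
    bin/hadoop-daemon.sh start datanode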

    Reference: https://twiki.grid.iu.edu/bin/view/Storage/HadoopDebug#Incompatible_Storage_IDs_on_the

