Decommissioning dn2
echo "dn2" >>excludes
echo "dn2" >>yarn-excludes
sh refresh-namenodes.sh
(Note: hdfs dfsadmin -refreshNodes is essentially equivalent to the script above, but decommissioning failed when this command was used directly; the cause remains to be investigated.)
yarn rmadmin -refreshNodes
Caveat: after these commands run, Hadoop ensures that every block on dn2 has the configured number of replicas on other nodes. While this is happening the node is in the Decommissioning state; only once it finishes and the node shows Decommissioned is the decommission considered successful. In production the Decommissioning phase usually takes a long time.
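A quick way to watch that transition is to poll the NameNode report until dn2 flips from "Decommission in progress" to "Decommissioned"; a sketch, assuming the standard Hadoop CLI is on the PATH:

# Show each DataNode's address and decommission status; rerun (or wrap
# in watch) until dn2 reads "Decommissioned".
hdfs dfsadmin -report | grep -E 'Name:|Decommission Status'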
Problem encountered:
************************************************************
The following is quoted from http://www.freeoa.net/osuport/db/my-hbase-usage-problem-sets_2979.html
11. Hadoop decommission stalls for a long time because a block has too few replicas
While decommissioning a DataNode, tens of thousands of blocks were replicated off it, but two blocks stayed stuck, leaving the node unable to go offline even after several hours. The Hadoop UI kept showing 2 blocks under Under Replicated Blocks that never cleared:
Under Replicated Blocks: 2
Under Replicated Blocks In Files Under Construction: 2
The NameNode log kept scrolling with entries like:
2015-01-20 15:04:47,978 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Block: blk_8859027644264991843_26141120, Expected Replicas: 3, live replicas: 2, corrupt replicas: 0, decommissioned replicas: 1, excess replicas: 0, Is Open File: true, Datanodes having this block: 10.11.12.13:50010 10.11.12.14:50010 10.11.12.15:50010, Current Datanode: 10.11.12.13:50010, Is current datanode decommissioning: true
After a lot of googling, this appears to be a Hadoop bug: https://issues.apache.org/jira/browse/HDFS-5579
The NameNode saw that the block had fewer replicas than expected (3 expected, only 2 live), presumably decided the data was incomplete, and stubbornly refused to let the DataNode go offline.
The eventual workaround was to set the replication factor to 2:
hadoop fs -setrep -R 2 /
Soon after this ran, the node went offline. The magic of replication.
************************************************************
In our case, however, the configured replication factor was already 2, and dropping it to 1 felt too unsafe. Since only 8 blocks remained, we simply stopped the DataNode process instead.
After the node's network was repaired and the DataNode restarted, it went straight into the Decommissioned state; we suspect this is a bug.
Next time this happens, run:
hdfs fsck / -files -blocks -locations >a.log
Then raise and afterwards lower the replication factor of the affected files to see whether that clears the stuck blocks (a sketch follows below). Still to be verified.
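A minimal sketch of that idea, assuming a hypothetical stuck file /user/data/stuck.file identified from the a.log output above and a cluster default replication factor of 2:

# Find the under-replicated files in the fsck dump.
grep -i 'Under replicated' a.log

# Raise the problem file's replication factor and wait (-w) for the
# extra replica to be created, then drop it back to the cluster default.
hadoop fs -setrep -w 3 /user/data/stuck.file
hadoop fs -setrep 2 /user/data/stuck.file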