  • Hadoop cluster: running a datanode on the namenode host

    Background (test environment): there are only two machines, one namenode and one datanode. With only a single worker node the cluster doesn't show much effect, so a datanode is also started on the namenode machine to get two datanodes. The drawbacks of doing this are discussed at the end.

    The procedure is very simple (for adding a standalone node, see: http://www.cnblogs.com/pu20065226/p/8493316.html).

    1. Edit the slaves file on the namenode and add the new node

    [hadoop@hadoop-master hadoop]$ pwd
    /usr/hadoop/hadoop-2.7.5/etc/hadoop
    [hadoop@hadoop-master hadoop]$ cat slaves
    slave1
    hadoop-master
    [hadoop@hadoop-master hadoop]$ 
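
    Note that the slaves file is only read by the cluster start/stop helper scripts (sbin/start-dfs.sh, sbin/start-yarn.sh), so editing it does not by itself change the running cluster; the new daemons are started by hand in step 2. If the other node keeps its own copy of the configuration, one quick way to push the updated file there is sketched below (this assumes the same install path on slave1 and passwordless SSH; adapt as needed):

    [hadoop@hadoop-master hadoop]$ scp slaves hadoop@slave1:/usr/hadoop/hadoop-2.7.5/etc/hadoop/slaves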

    2. Start the datanode and nodemanager processes on the new node

    Before doing the following, confirm that the etc/hadoop/excludes file on the namenode and on the existing datanodes does not contain the host being added (a sketch of how this file is wired up follows the listing below).

    [hadoop@slave2 hadoop-2.7.5]$ sbin/hadoop-daemon.sh start datanode
    starting datanode, logging to /usr/hadoop/hadoop-2.7.5/logs/hadoop-hadoop-datanode-slave2.out
    [hadoop@slave2 hadoop-2.7.5]$ sbin/yarn-daemon.sh start nodemanager
    starting nodemanager, logging to /usr/hadoop/hadoop-2.7.5/logs/yarn-hadoop-nodemanager-slave2.out
    [hadoop@slave2 hadoop-2.7.5]$ jps
    91284 SecondaryNameNode
    90979 NameNode
    91519 ResourceManager
    41768 DataNode
    41899 NodeManager
    41999 Jps
    [hadoop@slave2 ~]$
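
    For context, the excludes file checked above only has an effect if it is registered with the NameNode through the dfs.hosts.exclude property. A minimal sketch of that wiring in hdfs-site.xml, assuming the file lives in the same etc/hadoop directory as the rest of the configuration:

    [hadoop@hadoop-master hadoop]$ cat hdfs-site.xml
    ...
    <property>
      <!-- hosts listed in this file are not allowed to register as datanodes -->
      <name>dfs.hosts.exclude</name>
      <value>/usr/hadoop/hadoop-2.7.5/etc/hadoop/excludes</value>
    </property>
    ...

    After changing this property or the file's contents, the refresh in step 3 makes the NameNode re-read it.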


    3. Refresh the nodes on the NameNode

    [hadoop@hadoop-master ~]$ hdfs dfsadmin -refreshNodes
    Refresh nodes successful
    [hadoop@hadoop-master ~]$ sbin/start-balancer.sh
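
    start-balancer.sh redistributes existing blocks toward the newly added datanode in the background. By default a datanode counts as balanced when its utilization is within 10% of the cluster average; the threshold can be adjusted, for example (the value 5 here is only an illustration):

    [hadoop@hadoop-master hadoop-2.7.5]$ sbin/start-balancer.sh -threshold 5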

    4. Check the current cluster status on the namenode and confirm the new node has joined

    [hadoop@hadoop-master hadoop-2.7.5]$ hdfs dfsadmin -report
    Configured Capacity: 58663657472 (54.63 GB)
    Present Capacity: 35990061540 (33.52 GB)
    DFS Remaining: 35989540864 (33.52 GB)
    DFS Used: 520676 (508.47 KB)
    DFS Used%: 0.00%
    Under replicated blocks: 12
    Blocks with corrupt replicas: 0
    Missing blocks: 0
    Missing blocks (with replication factor 1): 0
    
    -------------------------------------------------
    Live datanodes (2):
    
    Name: 192.168.48.129:50010 (hadoop-master)
    Hostname: hadoop-master
    Decommission Status : Normal
    Configured Capacity: 38588669952 (35.94 GB)
    DFS Used: 213476 (208.47 KB)
    Non DFS Used: 16331292188 (15.21 GB)
    DFS Remaining: 22257164288 (20.73 GB)
    DFS Used%: 0.00%
    DFS Remaining%: 57.68%
    Configured Cache Capacity: 0 (0 B)
    Cache Used: 0 (0 B)
    Cache Remaining: 0 (0 B)
    Cache Used%: 100.00%
    Cache Remaining%: 0.00%
    Xceivers: 1
    Last contact: Mon Mar 19 19:54:45 PDT 2018
    
    
    Name: 192.168.48.132:50010 (slave1)
    Hostname: slave1
    Decommission Status : Normal
    Configured Capacity: 20074987520 (18.70 GB)
    DFS Used: 307200 (300 KB)
    Non DFS Used: 6342303744 (5.91 GB)
    DFS Remaining: 13732376576 (12.79 GB)
    DFS Used%: 0.00%
    DFS Remaining%: 68.41%
    Configured Cache Capacity: 0 (0 B)
    Cache Used: 0 (0 B)
    Cache Remaining: 0 (0 B)
    Cache Used%: 100.00%
    Cache Remaining%: 0.00%
    Xceivers: 1
    Last contact: Mon Mar 19 19:54:46 PDT 2018

    The same information can be checked in the NameNode web UI.
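
    If you prefer the command line to the web UI, the live-datanode count is also exposed through the NameNode's built-in JMX servlet. A quick check, assuming the default HTTP port 50070 and the hostname used above (after the new node joins it should report NumLiveDataNodes as 2):

    [hadoop@hadoop-master ~]$ curl -s 'http://hadoop-master:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState' | grep NumLiveDataNodes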

    Drawbacks (collected from the web): First, the NameNode keeps the state of the file namespace in memory, for example which block lives on which datanode. A larger Hadoop cluster has a great many blocks, so this already consumes a lot of the NameNode's memory, and dedicating a machine to the NameNode does not waste its compute resources. Second, on a long-running cluster the NameNode keeps writing namespace changes to the edits log file, which also grows large over time, so as long as the NameNode's storage is planned sensibly, its storage is not wasted either.

    What matters most in a Hadoop cluster is keeping the namenode running stably over the long term. Putting a datanode on the namenode adds to the namenode's load: the datanode generates heavy disk I/O and network traffic, which can make HDFS slow to respond and raise the error rate, and the resulting error recovery hurts the stability of the cluster.

    As for whether a dedicated namenode wastes resources: the namenode maintains two levels of relationships for the whole cluster: first, the directory tree and file metadata; second, the mapping from blocks to datanodes. For a cluster of any real size this consumes a lot of memory and CPU. The namenode also persists the first level to the fsimage file and uses the edit log to make changes durable, which takes a fair amount of storage; at the same time, the many datanodes, and possibly many clients, all communicate with the namenode over the network. In short, the namenode's resources are not wasted!
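
    To get a feel for how much namespace state the namenode is actually carrying, the persisted fsimage and edit log files can be examined with the offline viewers shipped with Hadoop. A sketch, assuming the name directory lives under this install's tmp/dfs/name (the actual location comes from dfs.namenode.name.dir, and the transaction IDs in the file names will differ):

    [hadoop@hadoop-master ~]$ ls /usr/hadoop/hadoop-2.7.5/tmp/dfs/name/current
    # dump the namespace image (directory tree and file metadata) to XML
    [hadoop@hadoop-master ~]$ hdfs oiv -p XML -i fsimage_0000000000000000123 -o fsimage.xml
    # dump an edit log segment (the journal of namespace changes) to XML
    [hadoop@hadoop-master ~]$ hdfs oev -i edits_0000000000000000001-0000000000000000123 -o edits.xml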

  • Original post: https://www.cnblogs.com/pu20065226/p/8608128.html