  • Setting up a 3-node Hadoop cluster: how to fix it when Live Nodes shows 0

    When setting up a 3-node Hadoop cluster, I followed the usual installation steps and configured the following files (a minimal sketch of the key entries is shown after the list):

    1. core-site.xml
    2. hadoop-env.sh
    3. hdfs-site.xml
    4. yarn-env.sh
    5. yarn-site.xml
    6. slaves
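    For reference, here is a minimal sketch of the entries that matter for this problem. The NameNode host spark1, the port 9000, and the replication factor are assumed values for a typical small cluster, not values taken from the original configuration.

    core-site.xml (every node must be able to resolve the host name used here):

    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://spark1:9000</value>
      </property>
    </configuration>

    hdfs-site.xml (replication factor assumed to be 3 for a 3-node cluster):

    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>3</value>
      </property>
    </configuration>

    slaves (one DataNode host name per line; spark1 also runs a DataNode in this cluster, as the jps output below shows):

    spark1
    spark2
    spark3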

    Then format the NameNode:

    [root@spark1 hadoop]# hdfs namenode -format

    Start the HDFS cluster:

    [root@spark1 hadoop]# start-dfs.sh

    Check whether the daemons on each node started successfully.
    spark1:

    [root@spark1 hadoop]# jps
    5575 SecondaryNameNode
    5722 Jps
    5443 DataNode
    5336 NameNode

    spark2:

    [root@spark2 hadoop]# jps
    1859 Jps
    1795 DataNode

    spark3:

    [root@spark3 ~]# jps
    1748 DataNode
    1812 Jps

    The core configuration files were correct, yet when I checked the cluster through the NameNode web UI on port 50070, it showed only 1 live node, and that node was spark1!
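    Besides the web UI, the live-node count can also be read from the NameNode's JMX endpoint. This is a sketch assuming the default Hadoop 2.x web port 50070; it should report the same number the UI does:

    [root@spark1 hadoop]# curl -s 'http://spark1:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState' | grep NumLiveDataNodes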

    After tracking the problem down, the root cause turned out to be how /etc/hosts had been configured earlier on each node:

    spark1:

    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.30.111  spark1

    spark2:

    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.30.112  spark2

    spark3:

    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.30.113  spark3
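    With hosts files like these, each machine can only resolve its own host name. An illustrative check (not part of the original troubleshooting) is to try resolving the NameNode host from one of the worker nodes:

    [root@spark2 hadoop]# getent hosts spark1

    The command prints nothing, because spark1 is not listed in spark2's /etc/hosts, so the DataNode on spark2 cannot register with the NameNode by host name.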

    Now make /etc/hosts identical on all three nodes:

    
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.30.113  spark3
    192.168.30.111  spark1
    192.168.30.112  spark2
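    After /etc/hosts has been corrected on all three machines, the DataNodes on spark2 and spark3 still have to re-register with the NameNode. Restarting HDFS from spark1 is the simplest way to do that (this step is implied rather than shown in the original write-up):

    [root@spark1 hadoop]# stop-dfs.sh
    [root@spark1 hadoop]# start-dfs.sh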

    OK, the problem is solved. Verify with the following report:

    [root@spark1 hadoop]# hadoop dfsadmin -report
    DEPRECATED: Use of this script to execute hdfs command is deprecated.
    Instead use the hdfs command for it.
    
    Configured Capacity: 55609774080 (51.79 GB)
    Present Capacity: 47725793280 (44.45 GB)
    DFS Remaining: 47725719552 (44.45 GB)
    DFS Used: 73728 (72 KB)
    DFS Used%: 0.00%
    Under replicated blocks: 0
    Blocks with corrupt replicas: 0
    Missing blocks: 0
    
    -------------------------------------------------
    Datanodes available: 3 (3 total, 0 dead)
    
    Live datanodes:
    Name: 192.168.30.111:50010 (spark1)
    Hostname: spark1
    Decommission Status : Normal
    Configured Capacity: 18536591360 (17.26 GB)
    DFS Used: 24576 (24 KB)
    Non DFS Used: 2628579328 (2.45 GB)
    DFS Remaining: 15907987456 (14.82 GB)
    DFS Used%: 0.00%
    DFS Remaining%: 85.82%
    Configured Cache Capacity: 0 (0 B)
    Cache Used: 0 (0 B)
    Cache Remaining: 0 (0 B)
    Cache Used%: 100.00%
    Cache Remaining%: 0.00%
    Last contact: Wed Aug 09 05:03:06 CST 2017
    
    
    Name: 192.168.30.113:50010 (spark3)
    Hostname: spark3
    Decommission Status : Normal
    Configured Capacity: 18536591360 (17.26 GB)
    DFS Used: 24576 (24 KB)
    Non DFS Used: 2627059712 (2.45 GB)
    DFS Remaining: 15909507072 (14.82 GB)
    DFS Used%: 0.00%
    DFS Remaining%: 85.83%
    Configured Cache Capacity: 0 (0 B)
    Cache Used: 0 (0 B)
    Cache Remaining: 0 (0 B)
    Cache Used%: 100.00%
    Cache Remaining%: 0.00%
    Last contact: Wed Aug 09 05:03:05 CST 2017
    
    
    Name: 192.168.30.112:50010 (spark2)
    Hostname: spark2
    Decommission Status : Normal
    Configured Capacity: 18536591360 (17.26 GB)
    DFS Used: 24576 (24 KB)
    Non DFS Used: 2628341760 (2.45 GB)
    DFS Remaining: 15908225024 (14.82 GB)
    DFS Used%: 0.00%
    DFS Remaining%: 85.82%
    Configured Cache Capacity: 0 (0 B)
    Cache Used: 0 (0 B)
    Cache Remaining: 0 (0 B)
    Cache Used%: 100.00%
    Cache Remaining%: 0.00%
    Last contact: Wed Aug 09 05:03:05 CST 2017
    

    The report shows that 3 DataNodes are now connected:
    192.168.30.111
    192.168.30.112
    192.168.30.113
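    As a quicker check that skips the full report, the summary line can be filtered out directly; hdfs dfsadmin is the non-deprecated form of the command used above:

    [root@spark1 hadoop]# hdfs dfsadmin -report | grep "Datanodes available"
    Datanodes available: 3 (3 total, 0 dead)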
