  • Hadoop cluster installation

    I. Installing the cluster

    A. Upload the Hadoop installation package

    B. Plan the installation directory: /export/servers/hadoop-2.8.4

    C. Unpack the installation package

    D. Edit the configuration files under $HADOOP_HOME/etc/hadoop/

    1. hadoop-env.sh

      export JAVA_HOME=/export/servers/jdk1.8.0_11
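Setting JAVA_HOME explicitly here matters because the sbin scripts start daemons over non-interactive ssh sessions, which do not reliably inherit the login environment. A hedged one-liner for this edit (paths taken from the steps above; `sed -i` assumes GNU sed):

```shell
# Pin JAVA_HOME in hadoop-env.sh so shells spawned by start-dfs.sh /
# start-yarn.sh can find the JDK even without a login environment.
HADOOP_CONF=/export/servers/hadoop-2.8.4/etc/hadoop   # layout from step B
if [ -f "$HADOOP_CONF/hadoop-env.sh" ]; then
  sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/export/servers/jdk1.8.0_11|' \
    "$HADOOP_CONF/hadoop-env.sh"
fi
```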

    2. core-site.xml

      <configuration>
        <property>
          <name>fs.defaultFS</name>
          <value>hdfs://hadoop1:9000</value>
        </property>
        <property>
          <name>hadoop.tmp.dir</name>
          <value>/export/servers/hadoop/tmp</value>
        </property>
      </configuration>

    3. hdfs-site.xml

    <configuration>
      <property>
        <name>dfs.namenode.name.dir</name>
        <value>/export/servers/hadoop/dfs/name</value>
        <description>Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.</description>
      </property>
      <property>
        <name>dfs.datanode.data.dir</name>
        <value>/export/servers/hadoop/dfs/data</value>
        <description>Comma-separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
      </property>

      <!-- Number of HDFS block replicas -->
      <property>
        <name>dfs.replication</name>
        <value>2</value>
      </property>

      <!-- SecondaryNameNode HTTP address (the 1.x name dfs.secondary.http.address is deprecated in 2.x) -->
      <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop1:50090</value>
      </property>
    </configuration>

    4. yarn-site.xml

    <configuration>
      <!-- Hostname of the YARN master (ResourceManager) -->
      <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop1</value>
      </property>

      <!-- Shuffle service through which reducers fetch map output -->
      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
      </property>
    </configuration>

    5. mapred-site.xml (copy mapred-site.xml.template to mapred-site.xml if it does not exist yet)

    <configuration>
      <!-- Run MapReduce jobs on YARN -->
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
    </configuration>

    6. slaves (one worker hostname per line; these hosts will run DataNode and NodeManager)

    hadoop2
    hadoop3
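Before distributing anything, it can save a round trip to verify that the four edited XML files are still well-formed. A small hedged check (paths from the steps above; uses python3's stdlib parser so no extra tools are needed):

```shell
# Parse each edited config file; report any that are missing or malformed.
check_xml() {
  python3 -c 'import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1]); print(sys.argv[1], "OK")' "$1" 2>/dev/null
}
HADOOP_CONF=/export/servers/hadoop-2.8.4/etc/hadoop
for f in core-site.xml hdfs-site.xml yarn-site.xml mapred-site.xml; do
  check_xml "$HADOOP_CONF/$f" || echo "$f: missing or malformed"
done
```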

    E. Distribute the installation directory to the other nodes
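This step can be sketched as an scp loop (hostnames match the slaves file above; rsync would work equally well; DRY_RUN=1, the default here, only prints the commands so they can be reviewed first):

```shell
# Copy the configured install to each worker listed in slaves.
HADOOP_DIR=/export/servers/hadoop-2.8.4
WORKERS="hadoop2 hadoop3"
DRY_RUN=${DRY_RUN:-1}
for host in $WORKERS; do
  if [ "$DRY_RUN" = 1 ]; then
    echo scp -r "$HADOOP_DIR" "root@$host:/export/servers/"
  else
    scp -r "$HADOOP_DIR" "root@$host:/export/servers/"
  fi
done
```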

    F. Initialize HDFS on the NameNode (configured on hadoop1 in this example)

      Run: # ./bin/hadoop namenode -format  (in 2.x, ./bin/hdfs namenode -format is the preferred, non-deprecated form)

    G. Start HDFS

      Run: # ./sbin/start-dfs.sh

    [root@hadoop1 hadoop-2.8.4]# ./sbin/start-dfs.sh
    Starting namenodes on [hadoop1]
    hadoop1: namenode running as process 2343. Stop it first.
    hadoop2: starting datanode, logging to /export/servers/hadoop-2.8.4/logs/hadoop-root-datanode-hadoop2.out
    hadoop3: starting datanode, logging to /export/servers/hadoop-2.8.4/logs/hadoop-root-datanode-hadoop3.out
    hadoop4: ssh: connect to host hadoop4 port 22: No route to host [hadoop4 is listed in my slaves file, but I never distributed the install to it or brought it up, so it cannot be reached]
    Starting secondary namenodes [hadoop1]
    hadoop1: secondarynamenode running as process 2510. Stop it first. [the SecondaryNameNode acts as HDFS's cold standby: it checkpoints the namespace but is not an automatic failover node]

    H. Start YARN

      Run: # ./sbin/start-yarn.sh

    starting yarn daemons
    resourcemanager running as process 2697. Stop it first. [the ResourceManager starts on whichever machine this command is run; the NodeManagers listed in slaves are started next]
    hadoop2: starting nodemanager, logging to /export/servers/hadoop-2.8.4/logs/yarn-root-nodemanager-hadoop2.out
    hadoop3: starting nodemanager, logging to /export/servers/hadoop-2.8.4/logs/yarn-root-nodemanager-hadoop3.out
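After both start scripts succeed, each node should be running a fixed set of JVM daemons, which `jps` will list. A hedged helper encoding the expected layout for this three-node plan (run `jps` on each host and compare against its line):

```shell
# Expected daemons per host for the hadoop1/hadoop2/hadoop3 layout above.
expected_daemons() {
  case "$1" in
    hadoop1)          echo "NameNode SecondaryNameNode ResourceManager" ;;
    hadoop2|hadoop3)  echo "DataNode NodeManager" ;;
    *)                echo "" ;;
  esac
}
for h in hadoop1 hadoop2 hadoop3; do
  echo "$h: $(expected_daemons "$h")"
done
```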

     

    II. Testing
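A minimal, hedged smoke test for the cluster brought up above: put a small file into HDFS and run the bundled wordcount example (paths and the examples jar name assume the stock 2.8.4 layout; the `if` guard makes the snippet a no-op on hosts without the install):

```shell
# Smoke-test HDFS + YARN with the stock wordcount example job.
HADOOP_HOME=/export/servers/hadoop-2.8.4
echo "hello hadoop hello hdfs" > /tmp/wc-input.txt
if [ -x "$HADOOP_HOME/bin/hdfs" ]; then   # only on a node with the install
  "$HADOOP_HOME/bin/hdfs" dfs -mkdir -p /input
  "$HADOOP_HOME/bin/hdfs" dfs -put -f /tmp/wc-input.txt /input/
  "$HADOOP_HOME/bin/hadoop" jar \
    "$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.4.jar" \
    wordcount /input /output
  "$HADOOP_HOME/bin/hdfs" dfs -cat /output/part-r-00000
fi
```

With the sample line above, wordcount should report hello 2, hadoop 1, hdfs 1. The NameNode web UI (port 50070) and ResourceManager UI (port 8088), the 2.x defaults, also give a quick health check.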

     

     

  • Original post: https://www.cnblogs.com/seamanone/p/9523145.html