  • Hadoop pseudo-distributed installation (single node): HDFS + YARN

    1. Installation environment and versions

    Linux virtual machine, IP 192.168.21.150

    hadoop-3.2.0

    jdk1.8

    2. Pseudo-distributed HDFS installation

    2.1 hosts and profile configuration

    cd /etc
    vim hosts
    192.168.21.150 node01

    cd /etc
    vim profile
    export JAVA_HOME=/usr/local/jdk
    export HADOOP_HOME=/usr/local/hadoop-3.2.0
    export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
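
    After editing /etc/profile, reload it in the current shell and confirm that both the JDK and Hadoop are on the PATH (a quick check, assuming the paths configured above):

    source /etc/profile
    java -version
    hadoop version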

    2.2 Extract the archive

    cd /usr/local/software
    tar zxvf hadoop-3.2.0.tar.gz
    mv hadoop-3.2.0  /usr/local/hadoop-3.2.0

    2.3 Configuration changes

    hadoop-env.sh

    cd /usr/local/hadoop-3.2.0/etc/hadoop

    vim hadoop-env.sh

    export JAVA_HOME=/usr/local/jdk

    export HDFS_NAMENODE_USER=root

    export HDFS_DATANODE_USER=root

    export HDFS_SECONDARYNAMENODE_USER=root
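
    The HDFS_*_USER exports are required in Hadoop 3.x when the daemons are started as root; without them start-dfs.sh refuses to launch the NameNode, DataNode, and SecondaryNameNode. JAVA_HOME must also point at a real JDK install, which can be confirmed with:

    ls /usr/local/jdk/bin/java
    /usr/local/jdk/bin/java -version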

    core-site.xml

    cd /usr/local/hadoop-3.2.0/etc/hadoop

    vim core-site.xml

    Add the following inside the <configuration> element:

        <property>

            <name>fs.defaultFS</name>

            <value>hdfs://node01:9000</value>

        </property>

        <property>

            <name>hadoop.tmp.dir</name>

            <value>/var/sxt/hadoop/local</value>  

        </property>
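
    fs.defaultFS tells clients where the NameNode listens (hdfs://node01:9000), and hadoop.tmp.dir is the base directory under which the NameNode and DataNode keep their data. Creating it up front avoids permission surprises (a minimal sketch, assuming the path above and the root user):

    mkdir -p /var/sxt/hadoop/local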

    hdfs-site.xml

    cd /usr/local/hadoop-3.2.0/etc/hadoop
    vim hdfs-site.xml
    Add the following inside the <configuration> element:

        <property>

            <name>dfs.replication</name>

            <value>1</value>

        </property>

        <property>

            <name>dfs.namenode.secondary.http-address</name>

            <value>node01:50090</value>

        </property>
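
    dfs.replication is set to 1 because this pseudo-distributed setup has only a single DataNode, so there is nowhere to place extra replicas. Once the file is saved, the effective value can be read back (an optional check):

    hdfs getconf -confKey dfs.replication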

    workers

    cd /usr/local/hadoop-3.2.0/etc/hadoop

    vim workers

    The file should contain only the single line "node01".
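
    The start scripts connect to every host listed in workers over SSH, so passwordless SSH from node01 to itself should be set up before starting HDFS. A minimal sketch, assuming the root user and no existing key pair:

    ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
    ssh-copy-id root@node01
    ssh node01 date    # should run without asking for a password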

    2.4 Format and start

    cd /usr/local/hadoop-3.2.0/bin
    ./hdfs namenode -format

    cd /usr/local/hadoop-3.2.0/sbin
    ./start-dfs.sh
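
    jps (shipped with the JDK) gives a quick view of whether the daemons came up; on this single node it should show roughly the following processes:

    jps
    # expected (PIDs will differ):
    # NameNode
    # DataNode
    # SecondaryNameNode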

    2.5 Verification

    Open http://192.168.21.150:9870/ in a browser.

     If the overview page shows the NameNode as "active", HDFS is running successfully.
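
    A simple command-line read/write smoke test (illustrative paths only):

    hdfs dfs -mkdir -p /test
    hdfs dfs -put /etc/hosts /test/
    hdfs dfs -ls /test
    hdfs dfs -cat /test/hosts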

    3. Pseudo-distributed YARN installation

    3.1 Configuration changes

    hadoop-env.sh

    cd /usr/local/hadoop-3.2.0/etc/hadoop
    vim hadoop-env.sh

    export JAVA_HOME=/usr/local/jdk

    export HDFS_NAMENODE_USER=root

    export HDFS_DATANODE_USER=root

    export HDFS_SECONDARYNAMENODE_USER=root

    export HDFS_JOURNALNODE_USER=root

    export YARN_RESOURCEMANAGER_USER=root

    export YARN_NODEMANAGER_USER=root

    mapred-site.xml

    cd /usr/local/hadoop-3.2.0/etc/hadoop/

    vim mapred-site.xml

    Add the following inside the <configuration> element:

      <property>

            <name>mapreduce.framework.name</name>

            <value>yarn</value>

        </property>

    <property>

      <name>yarn.app.mapreduce.am.env</name>

      <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>

    </property>

    <property>

      <name>mapreduce.map.env</name>

      <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>

    </property>

    <property>

      <name>mapreduce.reduce.env</name>

      <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>

    </property>
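
    Without the three HADOOP_MAPRED_HOME entries, MapReduce jobs on Hadoop 3.x typically fail to start their ApplicationMaster because the MapReduce classes are not on the container classpath. They only take effect if HADOOP_HOME resolves to the install directory, which can be checked with:

    echo $HADOOP_HOME    # should print /usr/local/hadoop-3.2.0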

    yarn-site.xml

    cd /usr/local/hadoop-3.2.0/etc/hadoop/

    vim yarn-site.xml

    Add the following inside the <configuration> element:

      <property>

            <name>yarn.nodemanager.aux-services</name>

            <value>mapreduce_shuffle</value>

        </property>

        <property>

            <name>yarn.resourcemanager.hostname</name>

            <value>node01</value>

        </property>

        <property>

            <name>yarn.log-aggregation-enable</name>

            <value>true</value>

        </property>

        <property>

            <name>yarn.log-aggregation.retain-seconds</name>

            <value>604800</value>

        </property>
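
    With log aggregation enabled, container logs of finished applications are collected into HDFS and can be fetched from the command line (the application id below is a hypothetical placeholder; the real one is printed when a job is submitted and also shown on the 8088 UI):

    yarn logs -applicationId application_XXXXXXXXXXXXX_0001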

    3.2 Start YARN

    cd /usr/local/hadoop-3.2.0/sbin

    ./start-yarn.sh
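
    As with HDFS, jps is a quick health check; after start-yarn.sh the process list should additionally contain the YARN daemons:

    jps
    # in addition to the HDFS processes, expect (PIDs will differ):
    # ResourceManager
    # NodeManager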

    3.3 Verification

    Open http://192.168.21.150:8088/ in a browser; the ResourceManager web UI should appear.
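
    To confirm that YARN can actually schedule work, one of the bundled example jobs can be submitted (a sketch, assuming the examples jar sits at its default location in the 3.2.0 distribution):

    cd /usr/local/hadoop-3.2.0
    bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar pi 2 10
    # the job should show up on the 8088 UI and finish with an estimate of pi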

     

