  • Hadoop 2.0 Installation, non-HA version

    The overall steps are the same as for Hadoop 1.0 (see the Hadoop 1.0 installation post); the changes are mainly in the configuration files.

    Installation

    • Edit ./etc/hadoop/hadoop-env.sh
    # Set JAVA_HOME
    export JAVA_HOME="/usr/local/src/jdk1.8.0_181/"
    
    • Edit ./etc/hadoop/yarn-env.sh
    # Set JAVA_HOME
    JAVA_HOME="/usr/local/src/jdk1.8.0_181/"
    
    • Edit ./etc/hadoop/slaves
    slave1
    slave2
    
    • Edit ./etc/hadoop/core-site.xml
    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://master:9000</value>
        </property>
        <property>
            <name>hadoop.tmp.dir</name>
            <value>file:/usr/local/src/hadoop-2.6.5/tmp</value>
        </property>
    </configuration>
    
    • Edit ./etc/hadoop/hdfs-site.xml
    <configuration>
        <property>
            <name>dfs.namenode.secondary.http-address</name>
            <value>master:9001</value>
        </property>
        <property>
            <name>dfs.namenode.name.dir</name>
            <value>file:/usr/local/src/hadoop-2.6.5/dfs/name</value>
        </property>
        <property>
            <name>dfs.datanode.data.dir</name>
            <value>file:/usr/local/src/hadoop-2.6.5/dfs/data</value>
        </property>
        <property>
            <name>dfs.replication</name>
            <value>2</value>
        </property>
    </configuration>
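    dfs.replication is set to 2 rather than the HDFS default of 3 because this cluster has only two datanodes (slave1 and slave2); a higher factor would just leave every block permanently under-replicated. A quick back-of-the-envelope sketch of what that means for capacity (the file and block sizes below are illustrative assumptions, not values from this setup):

    ```python
    import math

    # Illustrative numbers only -- not taken from the configuration above.
    replication = 2        # dfs.replication from hdfs-site.xml
    block_size_mb = 128    # Hadoop 2.x default dfs.blocksize
    file_size_mb = 1000    # a hypothetical 1000 MB input file

    blocks = math.ceil(file_size_mb / block_size_mb)  # blocks the file is split into
    raw_mb = file_size_mb * replication               # raw disk consumed cluster-wide

    print(blocks, raw_mb)  # prints: 8 2000
    ```

    With two datanodes and replication 2, every block lives on both slaves, so losing either node loses no data.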
    
    • Edit ./etc/hadoop/mapred-site.xml
    <configuration>
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>
        <property>
            <name>mapreduce.jobhistory.address</name>
            <value>slave1:10020</value>
        </property>
        <property>
            <name>mapreduce.jobhistory.webapp.address</name>
            <value>slave1:19888</value>
        </property>
    </configuration>
    
    • Edit ./etc/hadoop/yarn-site.xml
    <configuration>
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
        <property>
            <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
            <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
        <property>
            <name>yarn.resourcemanager.address</name>
            <value>master:8032</value>
        </property>
        <property>
            <name>yarn.resourcemanager.scheduler.address</name>
            <value>master:8030</value>
        </property>
        <property>
            <name>yarn.resourcemanager.resource-tracker.address</name>
            <value>master:8035</value>
        </property>
        <property>
            <name>yarn.resourcemanager.admin.address</name>
            <value>master:8033</value>
        </property>
        <property>
            <name>yarn.resourcemanager.webapp.address</name>
            <value>master:8088</value>
        </property>
        <property>
            <name>yarn.log-aggregation-enable</name>
            <value>true</value>
        </property>
        <property>
            <name>yarn.log-aggregation.retain-seconds</name>
            <value>259200</value>
        </property>
        <property>
            <name>yarn.log.server.url</name>
            <value>http://slave1:19888/jobhistory/logs</value>
        </property>
        <property>
            <name>yarn.nodemanager.vmem-pmem-ratio</name>
            <value>4.0</value>
        </property>
    </configuration>
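    Two of the values above are worth unpacking: yarn.log-aggregation.retain-seconds = 259200 keeps aggregated logs for three days, and yarn.nodemanager.vmem-pmem-ratio = 4.0 raises the per-container virtual-memory allowance above the default of 2.1, which helps keep the NodeManager from killing JVM-heavy tasks for virtual-memory overuse. The arithmetic, as a quick sketch (the 1 GB container is an assumed example, not a value from this configuration):

    ```python
    # Log retention: 259200 seconds -> days
    retain_seconds = 259200
    retain_days = retain_seconds / 86400          # 86400 seconds per day

    # Virtual-memory ceiling for a hypothetical 1 GB container
    vmem_pmem_ratio = 4.0                         # from yarn-site.xml above
    container_pmem_mb = 1024                      # assumed container size
    vmem_limit_mb = container_pmem_mb * vmem_pmem_ratio

    print(retain_days, vmem_limit_mb)             # prints: 3.0 4096.0
    ```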
    
    • As with Hadoop 1.0, format HDFS before the first start: ./bin/hadoop namenode -format (this form is deprecated in 2.x in favor of ./bin/hdfs namenode -format, though both still work)

    • Start: ./sbin/start-all.sh (deprecated in 2.x; ./sbin/start-dfs.sh followed by ./sbin/start-yarn.sh is the recommended equivalent). Afterwards, jps on each node should show NameNode/SecondaryNameNode/ResourceManager on master and DataNode/NodeManager on the slaves.

    • Usage: same as Hadoop 1.0, via the ./bin/hadoop command

    • Stop: ./sbin/stop-all.sh

    Submitting a MapReduce job

    Essentially nothing changes, except that the hadoop-streaming jar has moved (it now lives under share/hadoop/tools/lib/).

    [wadeyu@master mr_count]$ cat run.sh
    HADOOP_CMD=/usr/local/src/hadoop-2.6.5/bin/hadoop
    HADOOP_STREAMING_JAR=/usr/local/src/hadoop-2.6.5/share/hadoop/tools/lib/hadoop-streaming-2.6.5.jar
    
    INPUT_FILE=/data/The_Man_of_Property.txt
    OUTPUT_DIR=/output/wc
    
    # Clear any previous output; streaming fails if the output dir already exists
    # (fs -rmr is deprecated in 2.x; fs -rm -r is the current form)
    $HADOOP_CMD fs -rmr -skipTrash $OUTPUT_DIR
    
    $HADOOP_CMD jar $HADOOP_STREAMING_JAR \
        -input $INPUT_FILE \
        -output $OUTPUT_DIR \
        -mapper "python map.py" \
        -reducer "python red.py" \
        -file ./map.py \
        -file ./red.py
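    The map.py and red.py scripts themselves are not shown in the original. A minimal sketch of what a streaming word-count pair typically looks like — the names and logic here are assumptions, not the author's actual code:

    ```python
    import sys
    from itertools import groupby

    # map.py side: emit "word<TAB>1" for every word on stdin
    def mapper(lines):
        for line in lines:
            for word in line.strip().split():
                yield "%s\t1" % word

    # red.py side: sum counts per word; the streaming shuffle sorts by key,
    # so all lines for one word arrive consecutively
    def reducer(lines):
        pairs = (line.rstrip("\n").split("\t", 1) for line in lines)
        for word, group in groupby(pairs, key=lambda kv: kv[0]):
            yield "%s\t%d" % (word, sum(int(c) for _, c in group))

    if __name__ == "__main__":
        # In the real scripts, each file would run only its own half over stdin,
        # e.g. sys.stdout.writelines(l + "\n" for l in mapper(sys.stdin)).
        # Local demo of the full pipeline:
        mapped = sorted(mapper(["the man of the property"]))
        for line in reducer(mapped):
            print(line)
    ```

    Scripts like these can be sanity-checked without the cluster by imitating the shuffle with a pipe: cat The_Man_of_Property.txt | python map.py | sort -k1,1 | python red.py.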
    


  • Original post: https://www.cnblogs.com/wadeyu/p/9696044.html