Hadoop Distributed Configuration

    Before starting, install Java as described in the earlier post "Linux安装Java" (installing Java on Linux), and set up SSH as described in "Hadoop伪分布模式配置" (Hadoop pseudo-distributed configuration).

    Then install Hadoop following the process below.

     [All]    Linux OS + Java + hostname & hosts + install SSH
     [master] generate SSH key & scp to slaves + configure Hadoop & scp to slaves
     [slaves] append master's key to authorized_keys + create a link for hadoop-2.2.0

    Preparation

    hostname    IP address
    master      192.168.1.100
    slave1      192.168.1.102

    Edit the hostname (set it to master or slave1 on the corresponding machine):

    sudo vi /etc/hostname

    Edit the hosts file:

    sudo vi /etc/hosts
    127.0.0.1    localhost
    192.168.1.100    master
    192.168.1.102    slave1

    Disable the firewall (takes effect after a reboot):

    sudo ufw disable

    Reboot first so the new hostnames take effect, then log in as the hadoop user.

    SSH

    On master, copy the SSH key from ~/.ssh to each slave:

    ssh-copy-id hadoop@slave1

    At this point you can run ssh hadoop@slave1 from master and log in without a password.

    Repeat the same step for each remaining slave.
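
    If master does not have a key pair yet, generate one before copying it out. A minimal sketch, assuming the default key location and an empty passphrase:

    # On master, as the hadoop user: create an RSA key pair with no passphrase
    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    # Push the public key to each slave, then verify passwordless login
    ssh-copy-id hadoop@slave1
    ssh hadoop@slave1 hostname    # should print "slave1" without a password prompt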

    Hadoop Installation and Configuration

    For the installation itself, see https://www.cnblogs.com/manhua/p/3529928.html

    Use the backed-up configuration files:

    cd ~/setupEnv/hadoop_distribute_setting
    sudo cp core-site.xml ~/hadoop/etc/hadoop
    sudo cp hadoop-env.sh ~/hadoop/etc/hadoop
    sudo cp hdfs-site.xml ~/hadoop/etc/hadoop
    sudo cp mapred-site.xml ~/hadoop/etc/hadoop
    sudo cp yarn-site.xml ~/hadoop/etc/hadoop
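
    The same copies can be done in one loop. Because cp runs under sudo, the copied files end up owned by root, so it may also help to hand ownership back to the hadoop user (an assumption about the intended owner):

    cd ~/setupEnv/hadoop_distribute_setting
    for f in core-site.xml hadoop-env.sh hdfs-site.xml mapred-site.xml yarn-site.xml; do
        sudo cp "$f" ~/hadoop/etc/hadoop/
    done
    # Hand the configuration directory back to the hadoop user (assumed owner)
    sudo chown -R hadoop:hadoop ~/hadoop/etc/hadoop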

    (Manual configuration) Go to the Hadoop configuration file directory:

    cd ~/hadoop/etc/hadoop

    sudo gedit core-site.xml

    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/hadoop/tmp</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.groups</name>
        <value>*</value>
    </property>
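
    hadoop.tmp.dir above is the parent directory for NameNode and DataNode state, so it should exist and be writable by the hadoop user on every node before the NameNode is formatted. A minimal preparation step, assuming the path configured above:

    # Run on every node; the path matches hadoop.tmp.dir in core-site.xml
    mkdir -p /home/hadoop/hadoop/tmp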

    sudo gedit hdfs-site.xml

    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:9001</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
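
    Note that dfs.replication cannot exceed the number of live DataNodes: with only slave1 attached, a value of 2 leaves blocks under-replicated until a second slave joins. Once the cluster is running, this can be checked with the standard fsck tool (not part of the original walkthrough):

    hdfs fsck / -files -blocks    # reports under-replicated blocks, if any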

    This file does not exist by default; create it from the template:
    sudo cp mapred-site.xml.template mapred-site.xml
    sudo gedit mapred-site.xml

    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
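
    The JobHistory server configured above is not started by start-yarn.sh; in Hadoop 2.x it has its own daemon script in sbin/:

    mr-jobhistory-daemon.sh start historyserver    # web UI at master:19888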

    sudo gedit yarn-site.xml

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>

    Create a masters file containing the single line master.

    Edit the slaves file so that it lists slave1 (one worker per line):

    sudo gedit slaves
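
    Equivalently, both files can be written from the shell (a sketch; tee is used in case the directory is root-owned after the sudo cp steps above):

    cd ~/hadoop/etc/hadoop
    echo master | sudo tee masters    # host that runs the secondary NameNode
    echo slave1 | sudo tee slaves     # one worker (DataNode/NodeManager) per line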

    Copy the configuration to slave1:

    cp2slave.sh
    #!/bin/bash

    scp -r /home/hadoop/hadoop-2.2.0/ hadoop@slave1:~/
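
    With several slaves, a loop over the host list avoids repeating the scp line (a sketch; slave2 is a hypothetical second slave):

    #!/bin/bash
    # cp2slaves.sh -- multi-slave variant of the script above
    for host in slave1 slave2; do
        scp -r /home/hadoop/hadoop-2.2.0/ hadoop@"$host":~/
    done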


    Test

    Start Hadoop on master:

    hdfs namenode -format
    start-dfs.sh
    start-yarn.sh

    Running jps on master should show NameNode, SecondaryNameNode, and ResourceManager.

    Running jps on slave1 should show DataNode and NodeManager.
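
    To confirm that the slave actually registered, query the daemons directly (standard HDFS/YARN commands; the NameNode web port is the Hadoop 2.2 default, the ResourceManager port comes from yarn-site.xml above):

    hdfs dfsadmin -report    # slave1 should appear as a live datanode
    yarn node -list          # slave1 should be listed as a RUNNING NodeManager
    # Web UIs: http://master:50070 (HDFS NameNode), http://master:8088 (YARN ResourceManager)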

    =============================================

    If a problem comes up during use that you cannot resolve, or after changing the configuration files, try the following steps:

    〇 Kill any running job:

    hadoop job -kill [jobID, e.g. job_1394263427873_0002]

    ① stop-all.sh

    ② [Edit the configuration files]

    ③ scp the settings to every slave (the user casper and host hdp002 below come from a different cluster; substitute your own):

    cd /home/casper/hadoop/hadoop-2.2.0/etc/hadoop
    scp core-site.xml casper@hdp002:~/hadoop/hadoop-2.2.0/etc/hadoop
    scp hdfs-site.xml casper@hdp002:~/hadoop/hadoop-2.2.0/etc/hadoop
    scp mapred-site.xml casper@hdp002:~/hadoop/hadoop-2.2.0/etc/hadoop
    scp yarn-site.xml casper@hdp002:~/hadoop/hadoop-2.2.0/etc/hadoop
    scp slaves casper@hdp002:~/hadoop/hadoop-2.2.0/etc/hadoop

    ④ Delete the temporary folder and DFS data on every machine (adjust the paths to match your own configuration):

    cd ~/hadoop
    rm -rf dfs
    rm -rf tmp
    ls

    ⑤ Format the NameNode:

    hadoop namenode -format

    ⑥ Start the daemons: start-dfs.sh, then start-yarn.sh

    ⑦ Upload a file: hadoop fs -put ss-out.txt /

    ⑧ Run the jar: hadoop jar part-45-90-3-goodrule.jar RICO /ss-out.txt /rico-out 5 0.9


    Why do map tasks always run on a single node?

    If that doesn't work, check to make sure that your cluster is configured correctly. Specifically, check that your name node has paths to your other nodes set in its slaves file, and that each slave node has your name node set in its masters file.

     -----TODO

    create masters file in etc/hadoop/

    lower the block size in hdfs-site.xml so the input splits into more blocks, and hence more map tasks -- check the block size in the DFS browser afterwards
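
    A sketch of that hdfs-site.xml change; 16 MB (16777216 bytes) is an illustrative value matching the per-upload command below, and it only affects files written after the change:

    <property>
        <name>dfs.blocksize</name>
        <value>16777216</value>
    </property>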

    The simplest approach:

    When uploading, set the block size on the command line: hadoop fs -D dfs.blocksize=16777216 -put ss-part-out.txt /targetDir
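
    To confirm the block size a file was actually written with, hadoop fs -stat supports the %o format specifier:

    hadoop fs -stat %o /targetDir/ss-part-out.txt    # prints the file's block size in bytes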
