3.1-3.5 Preparation and Configuration for a Distributed Hadoop 2.x Deployment

    I. Environment

    192.168.1.130     master

    192.168.1.131     slave1

    192.168.1.132     slave2

    On all hosts (a command sketch for steps 1, 2, and 4 follows this list):

    1. Disable the firewall and SELinux

    2. Configure the hosts file

    3. yum -y install vim wget tree ntpdate lrzsz openssh-clients

    4. Configure file-descriptor limits and other system settings, and synchronize the time

    5. mkdir -p /opt/app             # Hadoop installation directory

    6. mkdir -p /opt/{datas,modules,softwares,tools}       # directories for other programs and files
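
    A minimal command sketch for steps 1, 2, and 4 (this assumes CentOS 6 with iptables; adjust for your distribution, and note the NTP server below is just an example):

    # 1. Disable the firewall and SELinux
    service iptables stop
    chkconfig iptables off
    setenforce 0                                                    # immediate, until reboot
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config    # permanent, after reboot

    # 2. Same hosts file on every node
    cat >> /etc/hosts <<EOF
    192.168.1.130 master
    192.168.1.131 slave1
    192.168.1.132 slave2
    EOF

    # 4. Synchronize the time (ntpdate was installed in step 3)
    ntpdate ntp1.aliyun.com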

    II. Node Configuration Plan

    Services:

    Service      master             slave1             slave2
    HDFS         namenode                              secondarynamenode
    HDFS         datanode           datanode           datanode
    YARN                            resourcemanager
    YARN         nodemanager        nodemanager        nodemanager
    MapReduce    jobhistoryserver


    Configuration files (which file configures which role):

    HDFS:
        hadoop-env.sh    --> JDK
        core-site.xml    --> namenode
        hdfs-site.xml    --> secondarynamenode
        slaves           --> datanode

    YARN:
        yarn-env.sh      --> JDK
        yarn-site.xml    --> resourcemanager
        slaves           --> nodemanager

    MapReduce:
        mapred-env.sh    --> JDK
        mapred-site.xml  --> jobhistoryserver

    III. Installation

    1. Install the JDK (all nodes)

    # First uninstall the JDK bundled with the system
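    # a sketch, assuming an RPM-based system; the exact package names vary
    rpm -qa | grep -i -E 'java|jdk'          # list the bundled JDK packages
    # then remove each one found:  rpm -e --nodeps <package-name>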
    
    [root@master softwares]# pwd    # the softwares directory holds the installation packages
    /opt/softwares
    
    [root@master softwares]# ls
    hadoop-2.5.0.tar.gz  jdk-7u80-linux-x64.tar.gz
    
    [root@master softwares]# tar zxf jdk-7u80-linux-x64.tar.gz -C /opt/modules/
    
    # Configure environment variables
    vim /etc/profile
    #JDK
    export JAVA_HOME=/opt/modules/jdk1.7.0_80
    export PATH=$PATH:$JAVA_HOME/bin
    
    #source
    source /etc/profile
    
    java -version

    2. Install Hadoop

    (1) HDFS

    # Extract the archive
    [root@master softwares]# tar zxf hadoop-2.5.0.tar.gz -C /opt/app/  # on master only for now; it will be distributed later
    
    #hadoop-env.sh 
    export JAVA_HOME=/opt/modules/jdk1.7.0_80
    
    #core-site.xml
    <configuration>
    
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://master:8020</value>
        </property>
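        <!-- the NameNode RPC endpoint; HDFS clients and datanodes connect to master:8020 -->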
    
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/opt/app/hadoop-2.5.0/data/tmp</value>
        </property>
        
        <property>
            <name>fs.trash.interval</name>
            <value>10080</value>
        </property>
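        <!-- fs.trash.interval is in minutes: 10080 minutes = 7 days in the HDFS trash -->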
        
    </configuration>
    
    # Create the hadoop.tmp.dir directory configured above
    [root@master ~]# mkdir -p /opt/app/hadoop-2.5.0/data/tmp
    
    #hdfs-site.xml
    <configuration>
    
        <property>
            <name>dfs.namenode.secondary.http-address</name>
            <value>slave2:50090</value>
        </property>
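        <!-- dfs.replication is left at its default of 3, which matches the three datanodes -->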
        
    </configuration>
    
    # slaves  (both the datanodes and the nodemanagers are configured in this file)
    master
    slave1
    slave2

    (2) YARN

    #yarn-env.sh
    export JAVA_HOME=/opt/modules/jdk1.7.0_80
    
    #yarn-site.xml
    <configuration>
    
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
    
        <property>
            <name>yarn.resourcemanager.hostname</name>
            <value>slave1</value>
        </property>
    
        <property>
            <name>yarn.nodemanager.resource.memory-mb</name>
            <value>4096</value>
        </property>
        
        <property>
            <name>yarn.nodemanager.resource.cpu-vcores</name>
            <value>4</value>
        </property>
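        <!-- 4096 MB and 4 vcores per NodeManager are examples; size them to the actual hardware -->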
        
        <property>
            <name>yarn.log-aggregation-enable</name>
            <value>true</value>
        </property>
        
        <property>
            <name>yarn.log-aggregation.retain-seconds</name>
            <value>604800</value>
        </property>
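        <!-- retain-seconds is in seconds: 604800 s = 7 days of aggregated logs -->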
        
    </configuration>
    
    # slaves  (the same file as in the HDFS section)
    master
    slave1
    slave2

    (3) MapReduce

    #mapred-env.sh
    export JAVA_HOME=/opt/modules/jdk1.7.0_80
    
    #mapred-site.xml
    <configuration>
    
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>
    
        <property>
            <name>mapreduce.jobhistory.address</name>
            <value>master:10020</value>
        </property>
    
        <property>
            <name>mapreduce.jobhistory.webapp.address</name>
            <value>master:19888</value>
        </property>
        
    </configuration>

    IV. Configure Passwordless SSH Login

    This step is simple; on master, generate a key pair and authorize it:

    cd /root/.ssh/
    ssh-keygen -t rsa
    cat id_rsa.pub >>authorized_keys
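
    The commands above only authorize the key locally; the public key also has to reach the slaves so that the scp in the next section works without a password. A minimal sketch, run on master (ssh-copy-id is part of openssh-clients, installed earlier; it prompts for each node's root password once):

    ssh-copy-id root@slave1
    ssh-copy-id root@slave2

    # verify: these should log in without prompting for a password
    ssh slave1 hostname
    ssh slave2 hostname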

    V. Distribution

    Distribute the Hadoop installation directory to each node:

    # on master
    [root@master ~]# scp -r /opt/app/hadoop-2.5.0 root@slave1:/opt/app/
    
    [root@master ~]# scp -r /opt/app/hadoop-2.5.0 root@slave2:/opt/app/
    
    #slave1
    [root@slave1 ~]# ls /opt/app/
    hadoop-2.5.0
    
    #slave2
    [root@slave2 ~]# ls /opt/app/
    hadoop-2.5.0