Hadoop 2.5.1 Setup (Part 1)

    1.1 Configuration

    1.1.1 Edit hosts

    vi /etc/hosts 

    192.168.220.64 cluster4
    192.168.220.63 cluster3
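
    To confirm the new entries resolve, a quick check from each node (host names as configured above):

    ping -c 1 cluster3
    ping -c 1 cluster4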

    1.2 Install the JDK

    Install via rpm

    rpm -ivh jdk-7u17-linux-x64.rpm 

    Set environment variables

    vi /etc/profile
    #set java environment
    JAVA_HOME=/usr/java/jdk1.7.0_17
    CLASSPATH=.:$JAVA_HOME/lib/tools.jar
    PATH=$JAVA_HOME/bin:$PATH
    export JAVA_HOME CLASSPATH PATH

    Apply the changes

    source /etc/profile 
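
    To verify the variables took effect in the current shell:

    echo $JAVA_HOME
    which java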

    Create symlinks

    Note that with no destination argument, ln places the links in the current working directory (these are typically run from /usr/bin):

    ln -s -f /usr/java/jdk1.7.0_17/jre/bin/java
    ln -s -f /usr/java/jdk1.7.0_17/bin/javac

    Test

    java -version
    java version "1.7.0_17"
    Java(TM) SE Runtime Environment (build 1.7.0_17-b02)
    Java HotSpot(TM) 64-Bit Server VM (build 23.7-b01, mixed mode)

    The rpm install printed the errors below, but they do not appear to affect the later steps, so we skip past them for now.

    [root@cluster3 java]# rpm -ivh jdk-7u17-linux-x64.rpm 
    Preparing...                ########################################### [100%]
       1:jdk                    ########################################### [100%]
    Unpacking JAR files...
            rt.jar...
    Error: Could not open input file: /usr/java/jdk1.7.0_17/jre/lib/rt.pack
            jsse.jar...
    Error: Could not open input file: /usr/java/jdk1.7.0_17/jre/lib/jsse.pack
            charsets.jar...
    Error: Could not open input file: /usr/java/jdk1.7.0_17/jre/lib/charsets.pack
            tools.jar...
    Error: Could not open input file: /usr/java/jdk1.7.0_17/lib/tools.pack
            localedata.jar...
    Error: Could not open input file: /usr/java/jdk1.7.0_17/jre/lib/ext/localedata.pack

    1.3 Configure passwordless SSH login with public keys

    [root@cluster3 ~]# cd .ssh
    [root@cluster3 .ssh]# ssh-keygen -t rsa
    Generating public/private rsa key pair.
    Enter file in which to save the key (/root/.ssh/id_rsa): 
    Enter passphrase (empty for no passphrase): 
    Enter same passphrase again: 
    Your identification has been saved in /root/.ssh/id_rsa.
    Your public key has been saved in /root/.ssh/id_rsa.pub.
    The key fingerprint is:
    51:66:2a:0f:ce:30:a6:9d:d7:ed:0b:4b:69:6b:3d:50 root@cluster3
    [root@cluster3 .ssh]# cat id_rsa.pub >> authorized_keys
    [root@cluster3 .ssh]# scp authorized_keys root@192.168.220.64:/root/.ssh/
    root@192.168.220.64's password: 
    authorized_keys                               100%  394     0.4KB/s   00:00    
    [root@cluster3 .ssh]# ssh root@cluster3
    Last login: Thu Oct 30 10:59:36 2014 from localhost.localdomain
    [root@cluster3 ~]# ssh root@cluster4
    Last login: Tue Oct 28 12:38:13 2014 from 192.168.220.1
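
    As an alternative to copying authorized_keys by hand, ssh-copy-id appends the key on the remote host in one step (a sketch using the host names from this cluster):

    ssh-copy-id root@cluster4
    ssh root@cluster4 hostname   # should log in without prompting for a password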

    2 Install Hadoop

    Unpack Hadoop

    tar -zxf hadoop-2.5.1.tar.gz  

    2.1 Edit core-site.xml

    vi core-site.xml
    <configuration> 
        <property>  
            <name>fs.defaultFS</name>  
            <value>hdfs://cluster3:9000</value>  
        </property>  
        <property>  
            <name>hadoop.tmp.dir</name>  
            <value>/usr/local/hadoop/hadoop-2.5.1/tmp</value>  
            <description>Abase for other temporary directories.</description>  
        </property>
        <property>  
            <name>io.file.buffer.size</name>  
            <value>4096</value>  
        </property>  
    </configuration>  

    2.2 Edit hdfs-site.xml

    vi hdfs-site.xml
    <configuration>  
        <property>  
            <name>dfs.nameservices</name>  
            <value>hadoop-cluster3</value>  
        </property>  
        <property>  
            <name>dfs.namenode.secondary.http-address</name>  
            <value>cluster3:50090</value>  
        </property>  
        <property>  
            <name>dfs.namenode.name.dir</name>  
            <value>file:///usr/local/hadoop/hadoop-2.5.1/dfs/name</value>  
        </property>  
        <property>  
            <name>dfs.datanode.data.dir</name>
            <value>file:///usr/local/hadoop/hadoop-2.5.1/dfs/data</value>  
        </property>  
        <property>  
            <name>dfs.replication</name>  
            <value>1</value>
        </property>  
        <property>  
            <name>dfs.webhdfs.enabled</name>  
            <value>true</value>  
        </property>  
    </configuration> 
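
    The local directories referenced in core-site.xml and hdfs-site.xml are worth creating up front (a minimal sketch; the paths match the values configured above):

    mkdir -p /usr/local/hadoop/hadoop-2.5.1/tmp
    mkdir -p /usr/local/hadoop/hadoop-2.5.1/dfs/name
    mkdir -p /usr/local/hadoop/hadoop-2.5.1/dfs/data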

    2.3 Edit mapred-site.xml

    cp mapred-site.xml.template mapred-site.xml
    vi mapred-site.xml
    <configuration>  
        <property>  
            <name>mapreduce.framework.name</name>  
            <value>yarn</value>  
        </property>  
        <property>  
            <name>mapreduce.jobtracker.http.address</name>  
            <value>cluster3:50030</value>  
        </property>  
        <property>  
            <name>mapreduce.jobhistory.address</name>  
            <value>cluster3:10020</value>  
        </property>  
        <property>  
            <name>mapreduce.jobhistory.webapp.address</name>  
            <value>cluster3:19888</value>  
        </property>  
    </configuration>  
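
    Note that start-yarn.sh does not start the JobHistory server configured above; in Hadoop 2.x it is launched separately, on the host that mapreduce.jobhistory.address points to:

    mr-jobhistory-daemon.sh start historyserver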

    2.4 Edit yarn-site.xml

    vi yarn-site.xml
    <configuration>
    <!-- Site specific YARN configuration properties -->  
        <property>  
            <name>yarn.nodemanager.aux-services</name>  
            <value>mapreduce_shuffle</value>  
        </property>  
        <property>  
            <name>yarn.resourcemanager.address</name>  
            <value>cluster3:8032</value>  
        </property>  
        <property>  
            <name>yarn.resourcemanager.scheduler.address</name>  
            <value>cluster3:8030</value>  
        </property>  
        <property>  
            <name>yarn.resourcemanager.resource-tracker.address</name>  
            <value>cluster3:8031</value>  
        </property>  
        <property>  
            <name>yarn.resourcemanager.admin.address</name>  
            <value>cluster3:8033</value>  
        </property>  
        <property>  
            <name>yarn.resourcemanager.webapp.address</name>  
            <value>cluster3:8088</value>  
        </property>  
    </configuration> 

    2.5 Edit slaves

    vi slaves
    cluster4
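
    The configured Hadoop tree must also be present on the slave node. One way, assuming the same /usr/local/hadoop layout on cluster4, is to copy it over:

    scp -r /usr/local/hadoop/hadoop-2.5.1 root@cluster4:/usr/local/hadoop/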

    3.1 Set JAVA_HOME

    Add the JAVA_HOME setting to both hadoop-env.sh and yarn-env.sh:

    vi hadoop-env.sh
    
    export JAVA_HOME=/usr/java/jdk1.7.0_17
    
    vi yarn-env.sh
    
    export JAVA_HOME=/usr/java/jdk1.7.0_17

    3.2 Hadoop environment variables

    Log in to the master and configure the Hadoop environment variables.
    
    vi /etc/profile
    
    export HADOOP_HOME=/usr/local/hadoop/hadoop-2.5.1
    export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
    
    source /etc/profile

    3.3 Format the filesystem

    hdfs namenode -format  
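
    On success the format output should include a line like the following (the path matches dfs.namenode.name.dir above):

    ... Storage directory /usr/local/hadoop/hadoop-2.5.1/dfs/name has been successfully formatted.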

    3.4 Start and stop services

    [root@cluster3 sbin]# start-dfs.sh
    [root@cluster3 sbin]# start-yarn.sh

    [root@cluster3 sbin]# stop-dfs.sh
    [root@cluster3 sbin]# stop-yarn.sh

    4 Verification

    [root@cluster3 hadoop]# jps
    28287 Jps
    28032 ResourceManager
    27810 NameNode
    
    [root@cluster4 hadoop]# jps
    27828 NodeManager
    27724 DataNode
    27930 Jps
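
    With all daemons up, a quick end-to-end smoke test is the bundled MapReduce pi example (the examples jar ships in the 2.5.1 tarball):

    hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.1.jar pi 2 10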

    4.1 View the logs

    [root@cluster3 logs]# tail -n200 /usr/local/hadoop/hadoop-2.5.1/logs/yarn-root-resourcemanager-cluster3.log 

    4.2 Browser access:

    http://192.168.220.63:50070
    
    http://192.168.220.63:8088
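
    Since dfs.webhdfs.enabled is true, the NameNode can also be probed over the WebHDFS REST API (a sketch; this lists the HDFS root directory):

    curl -i "http://192.168.220.63:50070/webhdfs/v1/?op=LISTSTATUS"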
Original post: https://www.cnblogs.com/huanhuanang/p/4080712.html