Hadoop 2.5.0 Pseudo-Distributed Installation

This guide uses Hadoop 2.5.0; the current user is hadoop.

The JDK is already installed: $JAVA_HOME=/usr/usoft/jdk1.8.0_151

1. Upload the Hadoop tarball and extract it

    tar -zxvf hadoop-2.5.0.tar.gz -C /usr/usoft

2. Configure the Hadoop environment variables

    sudo vi /etc/profile

Add the following:

export HADOOP_HOME=/usr/usoft/hadoop-2.5.0

    export PATH=$PATH:$HADOOP_HOME/bin

    export PATH=$PATH:$HADOOP_HOME/sbin

    export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop

    export HADOOP_HDFS_HOME=${HADOOP_HOME}

    export HADOOP_YARN_HOME=${HADOOP_HOME}

Reload the profile so the changes take effect. Note that `source` is a shell builtin, so `sudo source /etc/profile` fails; run it without sudo in the current shell:

source /etc/profile
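A quick sanity check after reloading the profile can confirm the exports took effect. The sketch below mirrors the export lines above and only inspects PATH, so it is safe to run anywhere; on the real host, `hadoop version` is the definitive check once the binaries are in place.

```shell
# Verify the Hadoop bin directory ended up on PATH; prints a diagnostic either way.
HADOOP_HOME=/usr/usoft/hadoop-2.5.0              # same value as in /etc/profile above
PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "hadoop bin on PATH" ;;
  *)                      echo "hadoop bin missing from PATH" ;;
esac
```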

3. Create the directories Hadoop needs

    cd /usr/usoft/hadoop-2.5.0 

    mkdir tmp 

    mkdir dfs 

    cd dfs

    mkdir name 

    mkdir data

Change the ownership of the Hadoop files (run from /usr/usoft, since the commands above left us inside dfs/):

cd /usr/usoft

sudo chown -R hadoop:hadoop ./hadoop-2.5.0
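The same layout can also be created in one command with `mkdir -p`, which creates parent directories as needed. The sketch below uses a stand-in directory under /tmp so it is safe to run anywhere; substitute /usr/usoft/hadoop-2.5.0 on the real host.

```shell
# Create tmp, dfs/name and dfs/data in one shot; -p makes parents as needed.
base=/tmp/hadoop-demo          # stand-in for /usr/usoft/hadoop-2.5.0
mkdir -p "$base/tmp" "$base/dfs/name" "$base/dfs/data"
ls "$base/dfs"                 # lists: data, name
```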

4. Edit hadoop-env.sh

    vi /usr/usoft/hadoop-2.5.0/etc/hadoop/hadoop-env.sh

Add the following line:

    export JAVA_HOME=/usr/usoft/jdk1.8.0_151

5. Edit core-site.xml

    vi /usr/usoft/hadoop-2.5.0/etc/hadoop/core-site.xml

Add the following inside the <configuration> element:

    <property>

      <name>hadoop.tmp.dir</name>

  <value>/usr/usoft/hadoop-2.5.0/tmp</value>

    </property>

    <property>

      <name>fs.defaultFS</name>

      <value>hdfs://localhost:9000</value>

    </property>

6. Edit hdfs-site.xml

    vim /usr/usoft/hadoop-2.5.0/etc/hadoop/hdfs-site.xml

Add the following inside the <configuration> element:

    <property>

  <name>dfs.namenode.name.dir</name>

  <value>/usr/usoft/hadoop-2.5.0/dfs/name</value>

    </property>

    <property>

  <name>dfs.datanode.data.dir</name>

  <value>/usr/usoft/hadoop-2.5.0/dfs/data</value>

    </property>

    <property>

      <name>dfs.replication</name>

      <value>1</value>

    </property>

    <property>

  <name>dfs.permissions.enabled</name>

      <value>false</value>

    </property>
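To confirm the properties actually landed in the file, a grep is enough. The sketch below writes a minimal stand-in config under /tmp rather than touching the real file, which lives at /usr/usoft/hadoop-2.5.0/etc/hadoop/hdfs-site.xml:

```shell
# Write a minimal hdfs-site.xml stand-in, then grep for the replication setting.
conf=/tmp/demo-hdfs-site.xml   # stand-in; real file: $HADOOP_CONF_DIR/hdfs-site.xml
cat > "$conf" <<'EOF'
<configuration>
  <property><name>dfs.replication</name><value>1</value></property>
</configuration>
EOF
grep -q '<name>dfs.replication</name>' "$conf" && echo "replication is configured"
```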

7. Create mapred-site.xml from the template and edit it

cp /usr/usoft/hadoop-2.5.0/etc/hadoop/mapred-site.xml.template /usr/usoft/hadoop-2.5.0/etc/hadoop/mapred-site.xml

vim /usr/usoft/hadoop-2.5.0/etc/hadoop/mapred-site.xml

Add the following inside the <configuration> element:

    <property>

      <name>mapreduce.framework.name</name>

      <value>yarn</value>

    </property>

8. Edit yarn-site.xml

    vim /usr/usoft/hadoop-2.5.0/etc/hadoop/yarn-site.xml

Add the following inside the <configuration> element:

    <property>

      <name>yarn.resourcemanager.hostname</name>

      <value>localhost</value>

    </property>

    <property>

      <name>yarn.nodemanager.aux-services</name>

      <value>mapreduce_shuffle</value>

    </property>

<property>

  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>

      <value>org.apache.hadoop.mapred.ShuffleHandler</value>

    </property>

    <property>

      <name>yarn.resourcemanager.scheduler.class</name>

      <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>

    </property>

9. Edit slaves

    vim /usr/usoft/hadoop-2.5.0/etc/hadoop/slaves

Delete the existing contents and add localhost.

(The default is usually localhost already.)

10. Format HDFS

    cd /usr/usoft/hadoop-2.5.0

    bin/hdfs namenode -format

11. Start the cluster

sbin/start-all.sh

(start-all.sh is deprecated in Hadoop 2.x; running sbin/start-dfs.sh followed by sbin/start-yarn.sh does the same thing.)

12. Check that the cluster started

Run jps; the setup succeeded if the following daemons are all listed: NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager.
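This check can be scripted. In the sketch below the `jps` output is mocked in a variable so it runs anywhere; on the real host, replace the assignment with `jps_output=$(jps)`.

```shell
# Compare the daemons reported by jps against the expected pseudo-distributed set.
# jps output is mocked below; on the real host use: jps_output=$(jps)
jps_output="2101 NameNode
2203 DataNode
2310 SecondaryNameNode
2450 ResourceManager
2552 NodeManager"
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  printf '%s\n' "$jps_output" | grep -q " $d$" && echo "$d: up" || echo "$d: down"
done
```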

     

13. Web UIs

http://<server-IP>:50070 → HDFS (NameNode web UI)
http://<server-IP>:8088 → YARN ResourceManager (MapReduce applications)

Original post: https://www.cnblogs.com/charles-jiang/p/8385607.html