  • Installing Hadoop 1.2.1 on Linux

    My server hosts a lot of software, so I created a doc folder under the root directory to hold downloads.

    1. Create the doc folder for storing software packages

    # create it under the root directory (the later steps move things out of /doc)
    cd /
    mkdir doc

    2. Go into the doc folder and download the hadoop-1.2.1 package, or get it from my Baidu Cloud share at http://pan.baidu.com/s/1gdSws07

    cd doc
    wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz
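
    If the download was interrupted, extraction will fail half-way, so a quick sanity check before step 3 does no harm (just a sketch, not required):

    # exits non-zero if the archive is corrupt or truncated
    tar -tzf hadoop-1.2.1.tar.gz > /dev/null && echo "archive looks OK"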

    3. Once the download finishes, extract hadoop-1.2.1.tar.gz

    tar -zxf hadoop-1.2.1.tar.gz

    4. There should now be a hadoop-1.2.1 folder inside doc. That folder is only where we keep downloaded packages; as a rule, each service we install gets its own dedicated directory. My Hadoop service is installed in /usr/local/hadoop/hadoop-1.2.1, so we move hadoop-1.2.1 from doc into /usr/local/hadoop:
    # go into the local folder under usr
    cd /usr/local
    # create the hadoop folder
    mkdir hadoop
    # move the hadoop-1.2.1 folder into the hadoop folder
    mv /doc/hadoop-1.2.1 /usr/local/hadoop
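
    A quick optional check that the move landed where expected:

    # should show a single hadoop-1.2.1 directory
    ls /usr/local/hadoop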

    5. OK, now let's configure Hadoop's configuration files.

    Configure the hadoop-env.sh file

    Use the echo command to check JAVA_HOME, the JDK installation directory:

    [root@iZ94j7ahvuvZ conf]# echo $JAVA_HOME 
    /usr/local/java/jdk1.7.0

    Update the JAVA_HOME setting in hadoop-env.sh.

    Go into Hadoop's conf folder:

    cd /usr/local/hadoop/hadoop-1.2.1/conf
    vi hadoop-env.sh

    Fill in the JAVA_HOME property:

    export JAVA_HOME=/usr/local/java/jdk1.7.0   (use your own JDK directory)
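
    If you would rather not edit the file interactively, appending the same export line from the conf folder works too (a sketch; substitute your own JDK path, the one echo printed above):

    echo 'export JAVA_HOME=/usr/local/java/jdk1.7.0' >> hadoop-env.sh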

    6. Configure the core-site.xml file

    <configuration>
      <property>
           <name>hadoop.tmp.dir</name>
           <value>/hadoop</value>
      </property>
     
      <property>
         <name>dfs.name.dir</name>
         <value>/hadoop/name</value>
      </property>
     
      <property>
         <name>fs.default.name</name>
         <value>hdfs://localhost:9000</value>
      </property>
    </configuration>
     
    7. Configure the hdfs-site.xml file
    <configuration>
        <property>
            <name>dfs.data.dir</name>
            <value>/hadoop/data</value>
        </property>
    </configuration>
     
    8. Configure the mapred-site.xml file
    <configuration>
        <property>
            <name>mapred.job.tracker</name>
            <value>localhost:9001</value>
        </property>
    </configuration>
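
    The three files above point Hadoop at /hadoop, /hadoop/name and /hadoop/data. Formatting and starting HDFS will normally create these directories itself, but since we are working as root it does no harm to create them up front (a sketch):

    mkdir -p /hadoop/name /hadoop/data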
     
    9. Configure the Hadoop environment variables in /etc/profile
     
    cd /etc
    vi profile
    Add the following at the end of the file:
    export HADOOP_HOME=/usr/local/hadoop/hadoop-1.2.1
    export PATH=$PATH:/usr/local/java/jdk1.7.0/bin:$HADOOP_HOME/bin   (substitute your own JDK path here)
    Keep the HADOOP_HOME line above the PATH line, since PATH references $HADOOP_HOME; otherwise the change appears to take effect but does not work.
    Save and exit, then reload the profile (for example with source /etc/profile, or by logging in again) so the new variables take effect in the current shell.
    Now type hadoop.
    If you get a usage listing, much like what java prints when run with no arguments, the environment is set up correctly:
    [root@iZ94j7ahvuvZ conf]# hadoop
    Usage: hadoop [--config confdir] COMMAND
    where COMMAND is one of:
      namenode -format     format the DFS filesystem
      secondarynamenode    run the DFS secondary namenode
      namenode             run the DFS namenode
      datanode             run a DFS datanode
      dfsadmin             run a DFS admin client
      mradmin              run a Map-Reduce admin client
      fsck                 run a DFS filesystem checking utility
      fs                   run a generic filesystem user client
      balancer             run a cluster balancing utility
      oiv                  apply the offline fsimage viewer to an fsimage
      fetchdt              fetch a delegation token from the NameNode
      jobtracker           run the MapReduce job Tracker node
      pipes                run a Pipes job
      tasktracker          run a MapReduce task Tracker node
      historyserver        run job history servers as a standalone daemon
      job                  manipulate MapReduce jobs
      queue                get information regarding JobQueues
      version              print the version
      jar <jar>            run a jar file
      distcp <srcurl> <desturl> copy file or directories recursively
      distcp2 <srcurl> <desturl> DistCp version 2
      archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
      classpath            prints the class path needed to get the
                           Hadoop jar and the required libraries
      daemonlog            get/set the log level for each daemon
     or
      CLASSNAME            run the class named CLASSNAME
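
    Before the daemons are started for the first time, HDFS normally has to be formatted; the namenode -format subcommand in the usage listing above does exactly that. Run it only once, on a fresh install, because it wipes the name directory:

    hadoop namenode -format
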
    10. Start Hadoop
     
    cd /usr/local/hadoop/hadoop-1.2.1/bin
    ./start-all.sh
    # You will be prompted for the password three times; output like the following means the startup succeeded
    
    [root@iZ94j7ahvuvZ bin]# ./start-all.sh 
    namenode running as process 1341. Stop it first.
    root@localhost's password: 
    localhost: starting datanode, logging to /usr/local/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-root-datanode-iZ94j7ahvuvZ.out
    root@localhost's password: 
    localhost: starting secondarynamenode, logging to /usr/local/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-root-secondarynamenode-iZ94j7ahvuvZ.out
    starting jobtracker, logging to /usr/local/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-root-jobtracker-iZ94j7ahvuvZ.out
    root@localhost's password: 
    localhost: starting tasktracker, logging to /usr/local/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-root-tasktracker-iZ94j7ahvuvZ.out
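
    A quick way to confirm that everything came up is jps, which lists the running Java processes (a sketch; the exact PIDs will differ):

    jps
    # expect NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker

    By default the NameNode web UI is also reachable on port 50070 and the JobTracker UI on port 50030.
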
  • Original article: https://www.cnblogs.com/sz-jack/p/5196154.html