  • Hadoop 2.2.0 single-node pseudo-distributed setup (including a 64-bit Hadoop build) and an Eclipse Hadoop development environment

    Hadoop mirror in China: http://mirrors.hust.edu.cn/apache/hadoop/core/hadoop-2.2.0/

    Step 1: download the tarball:
    wget http://archive.apache.org/dist/hadoop/core/hadoop-2.2.0/hadoop-2.2.0.tar.gz

    Step 2: build hadoop-2.2.0 (note: this step takes a long time)
    The official download only ships 32-bit native libraries, so build a 64-bit version yourself.
    http://blog.csdn.net/canlets/article/details/18709969 — Running hadoop 2.2.0 on 64-bit Ubuntu [rebuilding hadoop]
    I hit exactly the same error as that author:
    [INFO] BUILD FAILURE
    Following the method he gives:
    The code in the current 2.2.0 source tarball has a bug and needs to be patched before it will build; otherwise compiling hadoop-auth fails with the error above.
    The fix is as follows:
    Edit the pom file below, found under the hadoop source tree:
    hadoop-common-project/hadoop-auth/pom.xml
    Open that pom file and, at line 54, add the following dependencies:
         <dependency>
           <groupId>org.mortbay.jetty</groupId>
           <artifactId>jetty-util</artifactId>
           <scope>test</scope>
         </dependency>
         <dependency>
           <groupId>org.mortbay.jetty</groupId>
           <artifactId>jetty</artifactId>
           <scope>test</scope>
         </dependency>
    Then rerun the build command (e.g. mvn package -Pdist,native -DskipTests -Dtar from the source root, as described in the source tree's BUILDING.txt). The build is slow, so be patient.

    At this point the build should be complete.

    Step 3: pseudo-distributed installation
    http://my.oschina.net/u/179537/blog/189239 — this was my main reference.
    Problem 1:
    14/02/14 09:57:59 INFO mapreduce.Job: Task Id : attempt_1392341518773_0004_m_000000_0, Status : FAILED
    Container launch failed for container_1392341518773_0004_01_000002 : org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:152)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:155)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:369)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)

    The fix:
    vim etc/hadoop/yarn-site.xml
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>  <!-- note: mapreduce_shuffle, NOT mapreduce.shuffle -->
    </property>
    Then restart hadoop.
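    For reference, a minimal complete yarn-site.xml with this property might look like the sketch below. The explicit class mapping in the second property is an addition of mine: to my knowledge org.apache.hadoop.mapred.ShuffleHandler is its default, so it is optional.

    ```xml
    <?xml version="1.0"?>
    <!-- etc/hadoop/yarn-site.xml -->
    <configuration>
      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
      </property>
      <property>
        <!-- optional: explicit handler class for the shuffle aux-service -->
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
      </property>
    </configuration>
    ```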

    Problem 2:

    root@water:/home/hadoop# sbin/start-dfs.sh 
    Starting namenodes on [localhost]
    localhost: Error: JAVA_HOME is not set and could not be found.
    localhost: Error: JAVA_HOME is not set and could not be found.

    vim libexec/hadoop-config.sh
    Find the code that prints the "JAVA_HOME is not set and could not be found." error, and define JAVA_HOME just before it:

    export JAVA_HOME=/usr/java/jdk   # line added by me; point it at your JDK install
    # Attempt to set JAVA_HOME if it is not set
    if [[ -z $JAVA_HOME ]]; then
      # On OSX use java_home (or /Library for older versions)
      if [ "Darwin" == "$(uname -s)" ]; then
        if [ -x /usr/libexec/java_home ]; then
          export JAVA_HOME=($(/usr/libexec/java_home))
        else
          export JAVA_HOME=(/Library/Java/Home)
        fi
      fi
    
      # Bail if we did not detect it
      if [[ -z $JAVA_HOME ]]; then
        echo "Error: JAVA_HOME is not set and could not be found." 1>&2
        exit 1
      fi
    fi

    The error is gone.
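    As a less invasive alternative (my own suggestion, not from the referenced post), JAVA_HOME can be set in etc/hadoop/hadoop-env.sh instead of patching libexec/hadoop-config.sh; hadoop-env.sh is the file intended for per-installation environment settings. The JDK path below is the same example path as above; adjust it for your system:

    ```shell
    # etc/hadoop/hadoop-env.sh
    # Point JAVA_HOME at the JDK install (example path; adjust for your system)
    export JAVA_HOME=/usr/java/jdk
    ```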

    Step 4: run wordcount (this assumes the input directory /in already exists in HDFS and holds the text files to count):

    bin/hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.2.0-sources.jar org.apache.hadoop.examples.WordCount /in /out

    14/02/14 10:15:23 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
    14/02/14 10:15:24 INFO input.FileInputFormat: Total input paths to process : 2
    14/02/14 10:15:24 INFO mapreduce.JobSubmitter: number of splits:2
    14/02/14 10:15:24 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
    14/02/14 10:15:24 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
    14/02/14 10:15:24 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
    14/02/14 10:15:24 INFO Configuration.deprecation: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
    14/02/14 10:15:24 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
    14/02/14 10:15:24 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
    14/02/14 10:15:24 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
    14/02/14 10:15:24 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
    14/02/14 10:15:24 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
    14/02/14 10:15:24 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
    14/02/14 10:15:24 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
    14/02/14 10:15:24 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
    14/02/14 10:15:24 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1392344053646_0001
    14/02/14 10:15:24 INFO impl.YarnClientImpl: Submitted application application_1392344053646_0001 to ResourceManager at /0.0.0.0:8032
    14/02/14 10:15:24 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1392344053646_0001/
    14/02/14 10:15:24 INFO mapreduce.Job: Running job: job_1392344053646_0001
    14/02/14 10:15:31 INFO mapreduce.Job: Job job_1392344053646_0001 running in uber mode : false
    14/02/14 10:15:31 INFO mapreduce.Job:  map 0% reduce 0%
    14/02/14 10:15:35 INFO mapreduce.Job:  map 100% reduce 0%
    14/02/14 10:15:40 INFO mapreduce.Job:  map 100% reduce 100%
    14/02/14 10:15:41 INFO mapreduce.Job: Job job_1392344053646_0001 completed successfully
    14/02/14 10:15:41 INFO mapreduce.Job: Counters: 43

    Job output:

    root@water:/home/hadoop# bin/hdfs dfs -cat /out/*
    hadoop    1
    hello    2
    world    1
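    What the job computed can be sketched locally with standard shell tools. Assuming the two input files contained "hello world" and "hello hadoop" (my guess, reconstructed from the counts above), this pipeline reproduces the same tallies:

    ```shell
    # Recreate the (assumed) input files locally
    printf 'hello world\n' > file1.txt
    printf 'hello hadoop\n' > file2.txt
    # Split into one word per line, then count occurrences of each word --
    # the essence of what the WordCount map and reduce phases do
    cat file1.txt file2.txt | tr -s ' ' '\n' | sort | uniq -c \
      | awk '{print $2"\t"$1}' | sort
    # prints: hadoop 1 / hello 2 / world 1
    ```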

    Extra: some commands for starting and stopping hadoop daemons

    Start the namenode and datanode:
    sbin/hadoop-daemon.sh start namenode
    sbin/hadoop-daemon.sh start datanode
    Stop them:
    sbin/hadoop-daemon.sh stop datanode
    sbin/hadoop-daemon.sh stop namenode

    root@water:/home/hadoop# sbin/start-dfs.sh
    Starting namenodes on [localhost]
    localhost: starting namenode, logging to /home/hadoop-2.2.0/logs/hadoop-root-namenode-water.out
    localhost: starting datanode, logging to /home/hadoop-2.2.0/logs/hadoop-root-datanode-water.out
    Starting secondary namenodes [0.0.0.0]
    0.0.0.0: starting secondarynamenode, logging to /home/hadoop-2.2.0/logs/hadoop-root-secondarynamenode-water.out
    root@water:/home/hadoop# jps
    6569 SecondaryNameNode
    6283 NameNode
    6400 DataNode
    6703 Jps

    root@water:/home/hadoop# sbin/start-yarn.sh
    starting yarn daemons
    starting resourcemanager, logging to /home/hadoop-2.2.0/logs/yarn-root-resourcemanager-water.out
    localhost: starting nodemanager, logging to /home/hadoop-2.2.0/logs/yarn-root-nodemanager-water.out
    root@water:/home/hadoop# jps
    6569 SecondaryNameNode
    6283 NameNode
    6400 DataNode
    6961 Jps
    6757 ResourceManager
    6886 NodeManager

    http://127.0.0.1:8088/ — the hadoop job management page (YARN ResourceManager web UI)
    http://127.0.0.1:50070 — the namenode web UI; each node's files can be browsed from here ("Browse the filesystem")


    2014-02-19 09:46:49,710 WARN  [main-SendThread(localhost:2181)] zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
    java.net.ConnectException: Connection refused
            at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
            at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:735)
            at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
            at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)

    This exception went away by starting HBase, leaving it running, and then starting HBase a second time... strange. ("Connection refused" on port 2181 simply means nothing was listening on the ZooKeeper client port yet.) Reference link:
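    One plausible reading (my interpretation, not from the original post): the first HBase start launches its managed ZooKeeper on port 2181, so the second start finds a server to connect to. When hbase.cluster.distributed is false, HBase starts and manages ZooKeeper itself. A minimal hbase-site.xml sketch along those lines (the rootdir path is an example):

    ```xml
    <?xml version="1.0"?>
    <!-- conf/hbase-site.xml: minimal standalone-style sketch (example values) -->
    <configuration>
      <property>
        <name>hbase.rootdir</name>
        <value>file:///home/hadoop/hbase</value>
      </property>
      <property>
        <!-- false => HBase launches and manages its own ZooKeeper -->
        <name>hbase.cluster.distributed</name>
        <value>false</value>
      </property>
    </configuration>
    ```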



  • Original post: https://www.cnblogs.com/i80386/p/3548132.html