Installing and Configuring Hadoop 2.2 on Ubuntu


    I've been playing with Hadoop for the past two days. I spent a long time trying to set up a Hadoop environment on my Mac without success, so today I want to configure it on 64-bit Ubuntu 12.04 running in a VM under Windows 7,

    and then build a small cluster to see how it goes.

    References:

    1. Installing single node Hadoop 2.2.0 on Ubuntu: http://bigdatahandler.com/hadoop-hdfs/installing-single-node-hadoop-2-2-0-on-ubuntu/

    The setup process is as follows:

    1. openssh setup

    li@li-pc:~$ sudo apt-get install openssh-client
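
    (Looking ahead: as step 4 below shows, the SSH server is needed as well in order to ssh into localhost, so it saves a round trip to install both up front:)

    sudo apt-get install openssh-client openssh-server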
    

    2. Java: I had already set up the Java environment earlier, so I won't repeat that here.
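
    (A quick sanity check of the Java setup, assuming a JDK installed under /usr/local/Java/jdk1.7.0_45 as used later in this post:)

    java -version
    echo $JAVA_HOME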

    3. Add a group

    li@li-pc:~$ sudo addgroup Hadoop
    addgroup: Please enter a username matching the regular expression configured
    via the NAME_REGEX[_SYSTEM] configuration variable.  Use the `--force-badname'
    option to relax this check or reconfigure NAME_REGEX.
    

     This fails because of the capital letter in the group name. As the message suggests, the following command works:

    sudo addgroup --force-badname Hadoop
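
    (Aside: the default NAME_REGEX only permits lowercase names, so an all-lowercase group name would avoid --force-badname entirely; this post sticks with "Hadoop", though:)

    sudo addgroup hadoop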
    

     Then add a user in that group:

    li@li-pc:/usr/local$ sudo adduser -ingroup Hadoop hduser
    Adding user `hduser' ...
    Adding new user `hduser' (1001) with group `Hadoop' ...
    Creating home directory `/home/hduser' ...
    Copying files from `/etc/skel' ...
    Enter new UNIX password: 
    Retype new UNIX password: 
    passwd: password updated successfully
    Changing the user information for hduser
    Enter the new value, or press ENTER for the default
    	Full Name []: 
    	Room Number []: 
    	Work Phone []: 
    	Home Phone []: 
    	Other []: 
    Is the information correct? [Y/n] Y
    

     4. Verify the SSH setup

    The reference linked above explains the purpose of this SSH setup clearly: the master node controls the slave machines over SSH, and even the local machine needs the user's authorization before Hadoop can run on it.

    The need for SSH Key based authentication is required so that the master node can then login to slave nodes (and the secondary node) to 
    start/stop them and also local machine if you want to use Hadoop with it. For our single-node setup of Hadoop, we therefore need to configure
    SSH access to localhost for the hduser user we created in the previous section.

     The steps are as follows:

    li@li-pc:/usr/local$ su - hduser
    Password: 
    hduser@li-pc:~$ ls
    examples.desktop
    hduser@li-pc:~$ ssh-keygen -t rsa -P ""
    Generating public/private rsa key pair.
    Enter file in which to save the key (/home/hduser/.ssh/id_rsa): 
    Created directory '/home/hduser/.ssh'.
    Your identification has been saved in /home/hduser/.ssh/id_rsa.
    Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
    The key fingerprint is:
    23:2c:1a:6f:1c:dc:a6:61:6e:51:97:a3:a4:1f:b8:43 hduser@li-pc
    The key's randomart image is:
    +--[ RSA 2048]----+
    |                 |
    |         .       |
    |      o +        |
    |   . B o .       |
    |  . E B S        |
    |   O X o .       |
    |  . X .          |
    |   o .           |
    |                 |
    +-----------------+
    hduser@li-pc:~$ 
    

     I then tested logging in over SSH as hduser, but it still failed:

    hduser@li-pc:~$ cd .ssh/
    hduser@li-pc:~/.ssh$ ls
    id_rsa  id_rsa.pub
    hduser@li-pc:~/.ssh$ cat id_rsa.pub >> ~/.ssh/authorized_keys
    hduser@li-pc:~/.ssh$ ls
    authorized_keys  id_rsa  id_rsa.pub
    hduser@li-pc:~/.ssh$ ssh hduser@localhost
    ssh: connect to host localhost port 22: Connection refused
    

     It turned out that openssh-server was not installed. After installing it, the test succeeded:

    li@li-pc:~$ sudo apt-get install openssh-server
    
    hduser@li-pc:~/.ssh$ ssh hduser@localhost
    The authenticity of host 'localhost (127.0.0.1)' can't be established.
    ECDSA key fingerprint is 9c:84:a4:41:d6:d2:88:3d:59:c3:b8:3f:95:44:15:f5.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
    Welcome to Ubuntu 12.04.3 LTS (GNU/Linux 3.2.0-56-generic-pae i686)
    
     * Documentation:  https://help.ubuntu.com/
    
    
    The programs included with the Ubuntu system are free software;
    the exact distribution terms for each program are described in the
    individual files in /usr/share/doc/*/copyright.
    
    Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
    applicable law.
    

     5. Configure Hadoop 2.2

    Download: http://mirrors.cnnic.cn/apache/hadoop/common/hadoop-2.2.0/

    Note the difference between the source (src) and binary (tar) packages; you want the binary tarball.
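
    (A sketch of fetching and unpacking the binary package — hadoop-2.2.0.tar.gz is the binary tarball, hadoop-2.2.0-src.tar.gz the source:)

    wget http://mirrors.cnnic.cn/apache/hadoop/common/hadoop-2.2.0/hadoop-2.2.0.tar.gz
    tar -xzf hadoop-2.2.0.tar.gz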


    I could cry: I had written much more, but after going out for dinner I found Firefox had refreshed the page. Luckily part of it was recovered, but a lot was lost...

    Let me reconstruct the rest from the reference post:

    mv hadoop-2.2.0 hadoop
    sudo mv hadoop /usr/local/
    sudo chown -R hduser:Hadoop /usr/local/hadoop
    

     Next, edit the following configuration files:

    a. yarn-site.xml
    b. core-site.xml
    c. mapred-site.xml
    d. hdfs-site.xml
    e. Update $HOME/.bashrc
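
    (In Hadoop 2.x these all live under $HADOOP_HOME/etc/hadoop, not conf/ as in 1.x:)

    ls /usr/local/hadoop/etc/hadoop
    # core-site.xml  hadoop-env.sh  hdfs-site.xml  mapred-site.xml.template  yarn-site.xml  ...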

    yarn-site.xml:

    <configuration>
      <!-- Site specific YARN configuration properties -->
      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
      </property>
      <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
      </property>
    </configuration>
    

    core-site.xml:

    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
      </property>
    </configuration>
    

     mapred-site.xml:

    <configuration>
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
    </configuration>
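
    (Note: the 2.2.0 tarball ships only mapred-site.xml.template, so this file has to be created first, e.g.:)

    cp $HADOOP_HOME/etc/hadoop/mapred-site.xml.template $HADOOP_HOME/etc/hadoop/mapred-site.xml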
    

     Create the namenode and datanode directories:

    mkdir -p $HADOOP_HOME/yarn_data/hdfs/namenode
    mkdir -p $HADOOP_HOME/yarn_data/hdfs/datanode
    

     hdfs-site.xml:

    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
      <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop/yarn_data/hdfs/namenode</value>
      </property>
      <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop/yarn_data/hdfs/datanode</value>
      </property>
    </configuration>
    

    Configure ~/.bashrc. This only applies to the current user; to apply it to all users, edit /etc/profile instead. The settings are below; pay attention to the locations of JAVA_HOME and HADOOP_HOME:

    # Set Hadoop-related environment variables
    export HADOOP_PREFIX=/usr/local/hadoop
    export HADOOP_HOME=/usr/local/hadoop
    export HADOOP_MAPRED_HOME=${HADOOP_HOME}
    export HADOOP_COMMON_HOME=${HADOOP_HOME}
    export HADOOP_HDFS_HOME=${HADOOP_HOME}
    export YARN_HOME=${HADOOP_HOME}
    export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
    # Native Path
    export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_PREFIX}/lib/native
    export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib"
    #Java path
    export JAVA_HOME='/usr/local/Java/jdk1.7.0_45'
    # Add Hadoop bin/ directory to PATH
    export PATH=$PATH:$HADOOP_HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/sbin
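
    (After editing, reload the file and sanity-check that the hadoop binary resolves:)

    source ~/.bashrc
    hadoop version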
    

     6. Run Hadoop

    Format the HDFS filesystem:

    hadoop namenode -format
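
    (This still works in 2.x but is deprecated; the preferred equivalent is:)

    hdfs namenode -format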
    

     Start the daemons:

    $ hadoop-daemon.sh start namenode
    $ hadoop-daemon.sh start datanode
    $ yarn-daemon.sh start resourcemanager
    $ yarn-daemon.sh start nodemanager
    $ mr-jobhistory-daemon.sh start historyserver
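
    (A quick way to confirm everything is up is jps; with all daemons running the output looks roughly like this — the PIDs will differ:)

    hduser@li-pc:~$ jps
    4825 NameNode
    4956 DataNode
    5201 ResourceManager
    5330 NodeManager
    5512 JobHistoryServer
    5623 Jps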

     To stop everything, run:

    stop-dfs.sh
    stop-yarn.sh
    

     Note that I ran into quite a few problems during this process:

    First, it complained that JAVA_HOME was not set. Following a Stack Overflow answer, I fixed it by setting the JAVA_HOME variable explicitly in hadoop-env.sh.
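
    (For reference, the change goes near the top of $HADOOP_HOME/etc/hadoop/hadoop-env.sh, using the same JDK path as in .bashrc above:)

    # The java implementation to use.
    export JAVA_HOME=/usr/local/Java/jdk1.7.0_45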

    Second, there was a "retrying connect to server" problem, which turned out to be because the YARN daemons had not been started.

    7. Run hadoop-mapreduce-examples-2.2.0.jar

    Reference: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/

    It took me a long time to find the Hadoop 2.2 examples jar; it lives in the directory below, and I copied it into hadoop/bin to use it:

    /usr/local/hadoop/share/hadoop/mapreduce
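
    (Incidentally, copying it isn't strictly necessary; the jar can be run in place by giving its full path:)

    hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount <input dir> <output dir>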
    

     The hadoop fs commands also took a lot of fiddling:

    ./hadoop fs -mkdir -p /user/hduser
    ./hadoop fs -mkdir -p /user/hduser/output
     ./hadoop jar hadoop-mapreduce-examples-2.2.0.jar wordcount /usr/hduser/tmp /user/hduser/output
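
    (For the record: the input files first have to be uploaded into HDFS, and the output directory must not already exist when the job starts — which is why the runs below write to output2. Assuming some local text files, e.g. the Gutenberg texts from the Michael Noll tutorial linked above:)

    ./hadoop fs -put /tmp/gutenberg/* /user/hduser/tmp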
    

     The run results. Note that my first attempt used the wrong input path (/usr/hduser/tmp instead of /user/hduser/tmp), which triggers the InvalidInputException; the second attempt with the correct path gets further:

    hduser@li-pc:/usr/local/hadoop/bin$ ./hadoop jar hadoop-mapreduce-examples-2.2.0.jar wordcount /usr/hduser/tmp /user/hduser/output2
    14/10/28 18:13:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    14/10/28 18:13:04 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
    14/10/28 18:13:06 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/hduser/.staging/job_1414490582870_0001
    14/10/28 18:13:06 ERROR security.UserGroupInformation: PriviledgedActionException as:hduser (auth:SIMPLE) cause:org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://localhost:9000/usr/hduser/tmp
    org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://localhost:9000/usr/hduser/tmp
    	at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:285)
    	at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:340)
    	at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:491)
    	at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:508)
    	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:392)
    	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
    	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
    	at java.security.AccessController.doPrivileged(Native Method)
    	at javax.security.auth.Subject.doAs(Subject.java:416)
    	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
    	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
    	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
    	at org.apache.hadoop.examples.WordCount.main(WordCount.java:84)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:622)
    	at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
    	at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
    	at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:622)
    	at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
    hduser@li-pc:/usr/local/hadoop/bin$ ./hadoop jar hadoop-mapreduce-examples-2.2.0.jar wordcount /user/hduser/tmp /user/hduser/output2
    14/10/28 18:13:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    14/10/28 18:13:30 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
    14/10/28 18:13:31 INFO input.FileInputFormat: Total input paths to process : 3
    14/10/28 18:13:32 INFO mapreduce.JobSubmitter: number of splits:3
    14/10/28 18:13:32 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
    14/10/28 18:13:32 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
    14/10/28 18:13:32 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
    14/10/28 18:13:32 INFO Configuration.deprecation: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
    14/10/28 18:13:32 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
    14/10/28 18:13:32 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
    14/10/28 18:13:32 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
    14/10/28 18:13:32 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
    14/10/28 18:13:32 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
    14/10/28 18:13:32 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
    14/10/28 18:13:32 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
    14/10/28 18:13:32 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
    14/10/28 18:13:34 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1414490582870_0002
    14/10/28 18:13:35 INFO impl.YarnClientImpl: Submitted application application_1414490582870_0002 to ResourceManager at /0.0.0.0:8032
    14/10/28 18:13:35 INFO mapreduce.Job: The url to track the job: http://li-pc:8088/proxy/application_1414490582870_0002/
    14/10/28 18:13:35 INFO mapreduce.Job: Running job: job_1414490582870_0002
    14/10/28 18:14:12 INFO mapreduce.Job: Job job_1414490582870_0002 running in uber mode : false
    14/10/28 18:14:12 INFO mapreduce.Job:  map 0% reduce 0%
    14/10/28 18:19:31 INFO mapreduce.Job:  map 100% reduce 0%
    14/10/28 18:19:57 INFO mapreduce.Job: Task Id : attempt_1414490582870_0002_r_000000_0, Status : FAILED
    Error: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hduser/output2/_temporary/1/_temporary/attempt_1414490582870_0002_r_000000_0/part-r-00000 could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and no node(s) are excluded in this operation.
    	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
    	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
    	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
    	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
    	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
    	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
    	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
    	at java.security.AccessController.doPrivileged(Native Method)
    	at javax.security.auth.Subject.doAs(Subject.java:416)
    	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
    	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
    
    	at org.apache.hadoop.ipc.Client.call(Client.java:1347)
    	at org.apache.hadoop.ipc.Client.call(Client.java:1300)
    	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    	at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:622)
    	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
    	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    	at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
    	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1226)
    	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1078)
    	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
    
    14/10/28 18:20:03 INFO mapreduce.Job: Task Id : attempt_1414490582870_0002_r_000000_1, Status : FAILED
    Error: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hduser/output2/_temporary/1/_temporary/attempt_1414490582870_0002_r_000000_1/part-r-00000 could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and no node(s) are excluded in this operation.
    	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
    	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
    	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
    	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
    	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
    	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
    	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
    	at java.security.AccessController.doPrivileged(Native Method)
    	at javax.security.auth.Subject.doAs(Subject.java:416)
    	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
    	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
    
    	at org.apache.hadoop.ipc.Client.call(Client.java:1347)
    	at org.apache.hadoop.ipc.Client.call(Client.java:1300)
    	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    	at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:622)
    	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
    	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    	at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
    	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1226)
    	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1078)
    	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
    
    14/10/28 18:20:08 INFO mapreduce.Job: Task Id : attempt_1414490582870_0002_r_000000_2, Status : FAILED
    Error: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hduser/output2/_temporary/1/_temporary/attempt_1414490582870_0002_r_000000_2/part-r-00000 could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and no node(s) are excluded in this operation.
    	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
    	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
    	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
    	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
    	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
    	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
    	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
    	at java.security.AccessController.doPrivileged(Native Method)
    	at javax.security.auth.Subject.doAs(Subject.java:416)
    	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
    	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
    
    	at org.apache.hadoop.ipc.Client.call(Client.java:1347)
    	at org.apache.hadoop.ipc.Client.call(Client.java:1300)
    	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    	at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:622)
    	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
    	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    	at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
    	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1226)
    	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1078)
    	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
    
    14/10/28 18:20:14 INFO mapreduce.Job:  map 100% reduce 100%
    14/10/28 18:20:16 INFO mapreduce.Job: Job job_1414490582870_0002 failed with state FAILED due to: Task failed task_1414490582870_0002_r_000000
    Job failed as tasks failed. failedMaps:0 failedReduces:1
    
    14/10/28 18:20:18 INFO mapreduce.Job: Counters: 32
    	File System Counters
    		FILE: Number of bytes read=0
    		FILE: Number of bytes written=1696663
    		FILE: Number of read operations=0
    		FILE: Number of large read operations=0
    		FILE: Number of write operations=0
    		HDFS: Number of bytes read=3671863
    		HDFS: Number of bytes written=0
    		HDFS: Number of read operations=9
    		HDFS: Number of large read operations=0
    		HDFS: Number of write operations=0
    	Job Counters 
    		Failed reduce tasks=4
    		Launched map tasks=3
    		Launched reduce tasks=4
    		Data-local map tasks=3
    		Total time spent by all maps in occupied slots (ms)=951614
    		Total time spent by all reduces in occupied slots (ms)=26289
    	Map-Reduce Framework
    		Map input records=77931
    		Map output records=629172
    		Map output bytes=6076101
    		Map output materialized bytes=1459156
    		Input split bytes=340
    		Combine input records=629172
    		Combine output records=101113
    		Spilled Records=101113
    		Failed Shuffles=0
    		Merged Map outputs=0
    		GC time elapsed (ms)=20683
    		CPU time spent (ms)=15220
    		Physical memory (bytes) snapshot=292376576
    		Virtual memory (bytes) snapshot=1189588992
    		Total committed heap usage (bytes)=350171136
    	File Input Format Counters 
    		Bytes Read=3671523
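
    The job ultimately failed on the reduce side with "could only be replicated to 0 nodes instead of minReplication (=1)", and I didn't get to the bottom of it in this session. Some hedged pointers for next time (assumptions, not verified here): check the DataNode's reported capacity and health with the commands below; running out of free disk space in the VM, or a stale datanode directory left over from a re-format, are common causes.

    hdfs dfsadmin -report
    hadoop fs -df -h /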
    