  • Hadoop Source Code Analysis 33: The Main Flow of Child

    Add the debugging parameters:

     

    <property>
      <name>mapred.child.java.opts</name>
      <value>-Xmx200m -Xdebug -Xrunjdwp:transport=dt_socket,address=9999,server=y,suspend=y</value>
    </property>

     

    Submit the job:

    hadoop  jar /opt/hadoop-1.0.0/hadoop-examples-1.0.0.jar wordcount /user/admin/in/yellow2.txt /user/admin/out/128

     

    This generates 2 Map tasks and 2 Reduce tasks.

     

    Execute the Setup task:

     

    args = [127.0.0.1, 40996, attempt_201404282305_0001_m_000003_0, /opt/hadoop-1.0.0/logs/userlogs/job_201404282305_0001/attempt_201404282305_0001_m_000003_0, -1093852866]

     

    Variables:

    jvmId = JVMId{id=-1093852866, isMap=true, jobId=job_201404282305_0001}

     

    cwd=/tmp/hadoop-admin/mapred/local/taskTracker/admin/jobcache/job_201404282305_0001/attempt_201404282305_0001_m_000003_0/work

     

    jobTokenFile=/tmp/hadoop-admin/mapred/local/taskTracker/admin/jobcache/job_201404282305_0001/jobToken

     

    taskOwner=job_201404282305_0001

     

    umbilical = (TaskUmbilicalProtocol) RPC.getProxy(TaskUmbilicalProtocol.class,
                 TaskUmbilicalProtocol.versionID, address, defaultConf);

     

    context = JvmContext{jvmId=jvm_201404282305_0001_m_-1093852866, pid="28737"}

     

    myTask = JvmTask{shouldDie=false,
       t=MapTask{taskId=attempt_201404282305_0001_m_000003_0, jobCleanup=false, jobSetup=true, taskCleanup=false, jobFile="/tmp/hadoop-admin/mapred/local/taskTracker/admin/jobcache/job_201404282305_0001/job.xml"}, taskStatus=MapTaskStatus{runState=UNASSIGNED}}
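The myTask value above is what the Child JVM gets back when it asks the TaskTracker for work over the umbilical connection. A minimal, self-contained sketch of that polling loop follows; the `Umbilical` and `JvmTask` types here are simplified stand-ins for `TaskUmbilicalProtocol.getTask(JvmContext)` and the real `JvmTask`, not the actual Hadoop API:

```java
// Simplified stand-in for TaskUmbilicalProtocol: the Child asks the
// TaskTracker for a task, identified by its JVM id.
interface Umbilical {
    JvmTask getTask(String jvmId);   // hypothetical simplified signature
}

// Simplified stand-in for org.apache.hadoop.mapred.JvmTask.
class JvmTask {
    final boolean shouldDie;
    final String taskId;             // null when shouldDie is true
    JvmTask(boolean shouldDie, String taskId) {
        this.shouldDie = shouldDie;
        this.taskId = taskId;
    }
}

public class ChildLoop {
    // The Child keeps polling until the TaskTracker either hands it a
    // task or tells the JVM to exit (shouldDie=true).
    static String pollForTask(Umbilical umbilical, String jvmId) {
        while (true) {
            JvmTask myTask = umbilical.getTask(jvmId);
            if (myTask == null) {    // no task ready yet: keep polling
                continue;
            }
            if (myTask.shouldDie) {  // TaskTracker asks this JVM to exit
                return null;
            }
            return myTask.taskId;    // a task was assigned: run it
        }
    }

    public static void main(String[] args) {
        // Mock TaskTracker: nothing ready on the first two polls,
        // then a map task attempt is assigned.
        final int[] calls = {0};
        Umbilical mock = jvmId ->
            ++calls[0] < 3
                ? null
                : new JvmTask(false, "attempt_201404282305_0001_m_000003_0");
        // prints attempt_201404282305_0001_m_000003_0
        System.out.println(pollForTask(mock, "jvm_201404282305_0001_m_-1093852866"));
    }
}
```

In the real code the loop also counts idle polls and exits after too many empty responses; that bookkeeping is omitted here.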

     

    job = JobConf{Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, /tmp/hadoop-admin/mapred/local/taskTracker/admin/jobcache/job_201404282305_0001/job.xml}

     

    currentJobSegmented = false

    isCleanup = false

     

    DistributedFileSystem workingDir = hdfs://server1:9000/user/admin

     

    Start a TaskReporter thread. It checks the Task.progressFlag variable (an AtomicBoolean): when it is true, the reporter sends progress over RPC via statusUpdate(taskId, taskStatus, jvmContext); when it is false, it keeps the connection alive over RPC via ping(taskId, jvmContext).
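One iteration of that reporter loop can be sketched in plain Java. The `Reporter` interface below is a hypothetical stand-in for the two RPC calls on TaskUmbilicalProtocol, and `getAndSet(false)` models consuming the progress flag atomically:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical stand-in for the two umbilical RPC calls the
// TaskReporter chooses between.
interface Reporter {
    void statusUpdate(String taskId);  // report fresh progress
    void ping(String taskId);          // just prove the child is alive
}

public class TaskReporterSketch {
    // One iteration of the reporter loop: atomically consume the
    // progress flag, then either report status or ping.
    static String reportOnce(AtomicBoolean progressFlag, String taskId, Reporter rpc) {
        if (progressFlag.getAndSet(false)) {  // progress was made since last check
            rpc.statusUpdate(taskId);
            return "statusUpdate";
        } else {                              // no progress: heartbeat only
            rpc.ping(taskId);
            return "ping";
        }
    }

    public static void main(String[] args) {
        AtomicBoolean flag = new AtomicBoolean(true);
        Reporter noop = new Reporter() {
            public void statusUpdate(String taskId) {}
            public void ping(String taskId) {}
        };
        String tid = "attempt_201404282305_0001_m_000003_0";
        System.out.println(reportOnce(flag, tid, noop)); // prints statusUpdate
        System.out.println(reportOnce(flag, tid, noop)); // prints ping (flag was cleared)
    }
}
```

Using getAndSet rather than a separate get-then-set means a progress update raced in between check and clear is never lost: the flag is read and reset in one atomic step.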

     

    Task.jobContext = {conf={Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, hdfs-default.xml, hdfs-site.xml, /tmp/hadoop-admin/mapred/local/taskTracker/admin/jobcache/job_201404282305_0001/job.xml},

    job=JobConf{Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, hdfs-default.xml, hdfs-site.xml, /tmp/hadoop-admin/mapred/local/taskTracker/admin/jobcache/job_201404282305_0001/job.xml},

    jobId=job_201404282305_0001}

     

    Task.taskContext = {conf={Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, hdfs-default.xml, hdfs-site.xml, /tmp/hadoop-admin/mapred/local/taskTracker/admin/jobcache/job_201404282305_0001/job.xml}, taskId=attempt_201404282305_0001_m_000003_0, jobId=job_201404282305_0001, status=""}

     

    outputFormat = org.apache.hadoop.mapreduce.lib.output.TextOutputFormat@7099c91f

     

    committer = {outputFileSystem=DFS[DFSClient[clientName=DFSClient_attempt_201404282305_0001_m_000003_0, ugi=admin]], outputPath=/user/admin/out/128, workPath=hdfs://server1:9000/user/admin/out/128/_temporary/_attempt_201404282305_0001_m_000003_0}

     

    Task.resourceCalculator = org.apache.hadoop.util.LinuxResourceCalculatorPlugin@2288e718

     

    Task.initCpuCumulativeTime = 13620

     

    After the directory /user/admin/out/128/_temporary has been created, the Setup task is complete.
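What the committer does here is essentially a mkdirs of the `_temporary` directory under the job's output path. The sketch below reproduces that step on the local filesystem with java.nio.file as an analogy for `FileSystem.mkdirs` on HDFS; the paths are illustrative temp paths, not the HDFS paths above:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Local-filesystem analogy for the committer's job-setup step:
// create <outputPath>/_temporary, where task attempts will later
// write their intermediate output before commit.
public class SetupJobSketch {
    static Path setupJob(Path outputPath) throws IOException {
        Path tmp = outputPath.resolve("_temporary");
        Files.createDirectories(tmp);   // like FileSystem.mkdirs on HDFS
        return tmp;
    }

    public static void main(String[] args) throws IOException {
        Path out = Files.createTempDirectory("out128");  // stand-in for /user/admin/out/128
        Path tmp = setupJob(out);
        System.out.println(Files.isDirectory(tmp));      // prints true
    }
}
```

Task attempts write under `_temporary/_attempt_...` (the committer's workPath above) and are promoted into the output directory only on commit, so a failed or speculative attempt never leaves partial files in the final output.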

  • Original post: https://www.cnblogs.com/leeeee/p/7276482.html