  • Installing and starting Hadoop 2.7.2 on Windows

      On 64-bit Windows there is no need to bother with Cygwin to install Hadoop: just unpack the official Hadoop release locally -> write a minimal configuration into 4 basic files -> run 1 start command -> done. The one prerequisite is that the JDK is already installed and the Java environment variables are set. The steps are detailed below, using Hadoop 2.7.2 as the example.

      1. Downloading the Hadoop package needs little explanation: go to http://hadoop.apache.org/ -> click Releases on the left -> click a mirror site -> open http://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common -> download hadoop-2.7.2.tar.gz;

      2. Unpacking is equally simple: copy the archive to the root of the D: drive and extract it there, producing the directory D:\hadoop-2.7.2. Set that as the HADOOP_HOME environment variable and append %HADOOP_HOME%\bin to PATH. Then download the Windows helper binaries from http://download.csdn.net/detail/wuxun1997/9841472, unpack them, drop the files into D:\hadoop-2.7.2\bin, and also copy hadoop.dll into C:\Windows\System32;
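To catch a misplaced helper binary early, the expected layout can be sanity-checked with a few lines of script. This is a minimal sketch, assuming the helper download provides `winutils.exe` and `hadoop.dll` as described above; the function name is illustrative, not part of Hadoop:

```python
import os

def verify_hadoop_layout(hadoop_home):
    """Check that a Hadoop-on-Windows install has the helper binaries in place.

    Returns a list of problems; an empty list means the layout looks OK.
    The file names checked (winutils.exe, hadoop.dll) are the ones the
    helper-binary download is expected to provide.
    """
    problems = []
    bin_dir = os.path.join(hadoop_home, "bin")
    if not os.path.isdir(bin_dir):
        problems.append("missing bin directory: " + bin_dir)
        return problems
    for name in ("winutils.exe", "hadoop.dll"):
        if not os.path.isfile(os.path.join(bin_dir, name)):
            problems.append("missing " + name + " in " + bin_dir)
    return problems
```

Running `verify_hadoop_layout(r"D:\hadoop-2.7.2")` before step 4 saves a round of cryptic startup errors.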

      3. In D:\hadoop-2.7.2\etc\hadoop, locate the following 4 files and paste in the minimal configuration shown below:

    core-site.xml

    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://localhost:9000</value>
        </property>    
    </configuration>

    hdfs-site.xml

    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>1</value>
        </property>
        <property>    
            <name>dfs.namenode.name.dir</name>    
            <value>file:/hadoop/data/dfs/namenode</value>    
        </property>    
        <property>    
            <name>dfs.datanode.data.dir</name>    
            <value>file:/hadoop/data/dfs/datanode</value>  
        </property>
    </configuration>

    mapred-site.xml

    <configuration>
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>
    </configuration>

    yarn-site.xml

    <configuration>
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
        <property>
            <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
            <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
    </configuration>
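A typo in any of these XML files will quietly keep a daemon from starting, so a quick parse check pays off. Below is a minimal sketch using Python's standard-library XML parser; the helper name `config_properties` is illustrative, not part of Hadoop:

```python
import xml.etree.ElementTree as ET

def config_properties(xml_text):
    """Parse a Hadoop *-site.xml string into a {name: value} dict."""
    root = ET.fromstring(xml_text)  # raises ParseError on malformed XML
    return {p.findtext("name"): p.findtext("value")
            for p in root.iter("property")}

# The core-site.xml content from step 3 above.
core_site = """
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
"""

props = config_properties(core_site)
assert props["fs.defaultFS"] == "hdfs://localhost:9000"
```

The same check applies verbatim to hdfs-site.xml, mapred-site.xml, and yarn-site.xml.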

      4. Open a Windows command prompt, go into the D:\hadoop-2.7.2\bin directory, and run the 2 commands below: first format the namenode, then start Hadoop:

    D:\hadoop-2.7.2\bin>hadoop namenode -format
    DEPRECATED: Use of this script to execute hdfs command is deprecated.
    Instead use the hdfs command for it.
    17/05/13 07:16:40 INFO namenode.NameNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG:   host = wulinfeng/192.168.8.5
    STARTUP_MSG:   args = [-format]
    STARTUP_MSG:   version = 2.7.2
    STARTUP_MSG:   classpath = D:\hadoop-2.7.2\etc\hadoop;D:\hadoop-2.7.2\share\hadoop\common\lib\activation-1.1.jar; ... ;D:\hadoop-2.7.2\share\hadoop\mapreduce\hadoop-mapreduce-examples-2.7.2.jar
    STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r b165c4fe8a74265c792ce23f546c64604acf0e41; compiled by 'jenkins' on 2016-01-26T00:08Z
    STARTUP_MSG:   java = 1.8.0_101
    ************************************************************/
    17/05/13 07:16:40 INFO namenode.NameNode: createNameNode [-format]
    Formatting using clusterid: CID-1284c5d0-592a-4a41-b185-e53fb57dcfbf
    17/05/13 07:16:42 INFO namenode.FSNamesystem: No KeyProvider found.
    17/05/13 07:16:42 INFO namenode.FSNamesystem: fsLock is fair:true
    17/05/13 07:16:42 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.lim
    it=1000
    17/05/13 07:16:42 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.re
    gistration.ip-hostname-check=true
    17/05/13 07:16:42 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.
    block.deletion.sec is set to 000:00:00:00.000
    17/05/13 07:16:42 INFO blockmanagement.BlockManager: The block deletion will sta
    rt around 2017 May 13 07:16:42
    17/05/13 07:16:42 INFO util.GSet: Computing capacity for map BlocksMap
    17/05/13 07:16:42 INFO util.GSet: VM type       = 64-bit
    17/05/13 07:16:42 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
    17/05/13 07:16:42 INFO util.GSet: capacity      = 2^21 = 2097152 entries
    17/05/13 07:16:42 INFO blockmanagement.BlockManager: dfs.block.access.token.enab
    le=false
    17/05/13 07:16:42 INFO blockmanagement.BlockManager: defaultReplication
    = 1
    17/05/13 07:16:42 INFO blockmanagement.BlockManager: maxReplication
    = 512
    17/05/13 07:16:42 INFO blockmanagement.BlockManager: minReplication
    = 1
    17/05/13 07:16:42 INFO blockmanagement.BlockManager: maxReplicationStreams
    = 2
    17/05/13 07:16:42 INFO blockmanagement.BlockManager: replicationRecheckInterval
    = 3000
    17/05/13 07:16:42 INFO blockmanagement.BlockManager: encryptDataTransfer
    = false
    17/05/13 07:16:42 INFO blockmanagement.BlockManager: maxNumBlocksToLog
    = 1000
    17/05/13 07:16:42 INFO namenode.FSNamesystem: fsOwner             = Administrato
    r (auth:SIMPLE)
    17/05/13 07:16:42 INFO namenode.FSNamesystem: supergroup          = supergroup
    17/05/13 07:16:42 INFO namenode.FSNamesystem: isPermissionEnabled = true
    17/05/13 07:16:42 INFO namenode.FSNamesystem: HA Enabled: false
    17/05/13 07:16:42 INFO namenode.FSNamesystem: Append Enabled: true
    17/05/13 07:16:43 INFO util.GSet: Computing capacity for map INodeMap
    17/05/13 07:16:43 INFO util.GSet: VM type       = 64-bit
    17/05/13 07:16:43 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
    17/05/13 07:16:43 INFO util.GSet: capacity      = 2^20 = 1048576 entries
    17/05/13 07:16:43 INFO namenode.FSDirectory: ACLs enabled? false
    17/05/13 07:16:43 INFO namenode.FSDirectory: XAttrs enabled? true
    17/05/13 07:16:43 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
    17/05/13 07:16:43 INFO namenode.NameNode: Caching file names occuring more than
    10 times
    17/05/13 07:16:43 INFO util.GSet: Computing capacity for map cachedBlocks
    17/05/13 07:16:43 INFO util.GSet: VM type       = 64-bit
    17/05/13 07:16:43 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
    17/05/13 07:16:43 INFO util.GSet: capacity      = 2^18 = 262144 entries
    17/05/13 07:16:43 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pc
    t = 0.9990000128746033
    17/05/13 07:16:43 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanode
    s = 0
    17/05/13 07:16:43 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension
      = 30000
    17/05/13 07:16:43 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.n
    um.buckets = 10
    17/05/13 07:16:43 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.user
    s = 10
    17/05/13 07:16:43 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.
    minutes = 1,5,25
    17/05/13 07:16:43 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
    
    17/05/13 07:16:43 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total
     heap and retry cache entry expiry time is 600000 millis
    17/05/13 07:16:43 INFO util.GSet: Computing capacity for map NameNodeRetryCache
    17/05/13 07:16:43 INFO util.GSet: VM type       = 64-bit
    17/05/13 07:16:43 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.
    1 KB
    17/05/13 07:16:43 INFO util.GSet: capacity      = 2^15 = 32768 entries
    17/05/13 07:16:43 INFO namenode.FSImage: Allocated new BlockPoolId: BP-664414510
    -192.168.8.5-1494631003212
    17/05/13 07:16:43 INFO common.Storage: Storage directory \hadoop\data\dfs\namenode has been successfully formatted.
    17/05/13 07:16:43 INFO namenode.NNStorageRetentionManager: Going to retain 1 ima
    ges with txid >= 0
    17/05/13 07:16:43 INFO util.ExitUtil: Exiting with status 0
    17/05/13 07:16:43 INFO namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at wulinfeng/192.168.8.5
    ************************************************************/
    
    D:\hadoop-2.7.2\bin>cd ..\sbin

    D:\hadoop-2.7.2\sbin>start-all.cmd
    This script is Deprecated. Instead use start-dfs.cmd and start-yarn.cmd
    starting yarn daemons
    
    D:\hadoop-2.7.2\sbin>jps
    4944 DataNode
    5860 NodeManager
    3532 Jps
    7852 NameNode
    7932 ResourceManager
    
    D:\hadoop-2.7.2\sbin>

      The jps command shows that all 4 processes are up, which completes the Hadoop installation and startup. You can now point a browser at localhost:8088 to watch MapReduce jobs, or at localhost:50070 -> Utilities -> Browse the file system to inspect HDFS files. When restarting Hadoop there is no need to format the namenode again; just run stop-all.cmd followed by start-all.cmd.
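When restarts are scripted, the jps check can be automated too. Here is a minimal sketch that parses `jps` output for the four daemons shown above; the helper name is illustrative:

```python
# The four Hadoop daemons that step 4 is expected to start.
EXPECTED = {"NameNode", "DataNode", "ResourceManager", "NodeManager"}

def missing_daemons(jps_output):
    """Return the set of expected Hadoop daemons absent from `jps` output."""
    running = {parts[1] for line in jps_output.splitlines()
               if len(parts := line.split()) == 2}
    return EXPECTED - running

sample = ("4944 DataNode\n5860 NodeManager\n3532 Jps\n"
          "7852 NameNode\n7932 ResourceManager")
assert missing_daemons(sample) == set()
```

Feed it the captured output of `jps` and treat a non-empty result as a failed startup.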

      Starting those 4 processes pops up 4 console windows; let's look at what each process did during startup:

    DataNode

    ************************************************************/
    17/05/13 07:18:24 INFO impl.MetricsConfig: loaded properties from hadoop-metrics
    2.properties
    17/05/13 07:18:25 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 s
    econd(s).
    17/05/13 07:18:25 INFO impl.MetricsSystemImpl: DataNode metrics system started
    17/05/13 07:18:25 INFO datanode.BlockScanner: Initialized block scanner with tar
    getBytesPerSec 1048576
    17/05/13 07:18:25 INFO datanode.DataNode: Configured hostname is wulinfeng
    17/05/13 07:18:25 INFO datanode.DataNode: Starting DataNode with maxLockedMemory
     = 0
    17/05/13 07:18:25 INFO datanode.DataNode: Opened streaming server at /0.0.0.0:50
    010
    17/05/13 07:18:25 INFO datanode.DataNode: Balancing bandwith is 1048576 bytes/s
    17/05/13 07:18:25 INFO datanode.DataNode: Number threads for balancing is 5
    17/05/13 07:18:25 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter
    (org.mortbay.log) via org.mortbay.log.Slf4jLog
    17/05/13 07:18:26 INFO server.AuthenticationFilter: Unable to initialize FileSig
    nerSecretProvider, falling back to use random secrets.
    17/05/13 07:18:26 INFO http.HttpRequestLog: Http request log for http.requests.d
    atanode is not defined
    17/05/13 07:18:26 INFO http.HttpServer2: Added global filter 'safety' (class=org
    .apache.hadoop.http.HttpServer2$QuotingInputFilter)
    17/05/13 07:18:26 INFO http.HttpServer2: Added filter static_user_filter (class=
    org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context data
    node
    17/05/13 07:18:26 INFO http.HttpServer2: Added filter static_user_filter (class=
    org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context stat
    ic
    17/05/13 07:18:26 INFO http.HttpServer2: Added filter static_user_filter (class=
    org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
    
    17/05/13 07:18:26 INFO http.HttpServer2: Jetty bound to port 53058
    17/05/13 07:18:26 INFO mortbay.log: jetty-6.1.26
    17/05/13 07:18:29 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWi
    thSafeStartup@localhost:53058
    17/05/13 07:18:41 INFO web.DatanodeHttpServer: Listening HTTP traffic on /0.0.0.
    0:50075
    17/05/13 07:18:42 INFO datanode.DataNode: dnUserName = Administrator
    17/05/13 07:18:42 INFO datanode.DataNode: supergroup = supergroup
    17/05/13 07:18:42 INFO ipc.CallQueueManager: Using callQueue class java.util.con
    current.LinkedBlockingQueue
    17/05/13 07:18:42 INFO ipc.Server: Starting Socket Reader #1 for port 50020
    17/05/13 07:18:42 INFO datanode.DataNode: Opened IPC server at /0.0.0.0:50020
    17/05/13 07:18:42 INFO datanode.DataNode: Refresh request received for nameservi
    ces: null
    17/05/13 07:18:42 INFO datanode.DataNode: Starting BPOfferServices for nameservi
    ces: <default>
    17/05/13 07:18:42 INFO ipc.Server: IPC Server listener on 50020: starting
    17/05/13 07:18:42 INFO ipc.Server: IPC Server Responder: starting
    17/05/13 07:18:42 INFO datanode.DataNode: Block pool <registering> (Datanode Uui
    d unassigned) service to localhost/127.0.0.1:9000 starting to offer service
    17/05/13 07:18:43 INFO common.Storage: Lock on \hadoop\data\dfs\datanode\in_use.lock acquired by nodename 4944@wulinfeng
    17/05/13 07:18:43 INFO common.Storage: Storage directory \hadoop\data\dfs\datanode is not formatted for BP-664414510-192.168.8.5-1494631003212
    17/05/13 07:18:43 INFO common.Storage: Formatting ...
    17/05/13 07:18:43 INFO common.Storage: Analyzing storage directories for bpid BP
    -664414510-192.168.8.5-1494631003212
    17/05/13 07:18:43 INFO common.Storage: Locking is disabled for \hadoop\data\dfs\datanode\current\BP-664414510-192.168.8.5-1494631003212
    17/05/13 07:18:43 INFO common.Storage: Block pool storage directory \hadoop\data\dfs\datanode\current\BP-664414510-192.168.8.5-1494631003212 is not formatted for BP-664414510-192.168.8.5-1494631003212
    17/05/13 07:18:43 INFO common.Storage: Formatting ...
    17/05/13 07:18:43 INFO common.Storage: Formatting block pool BP-664414510-192.168.8.5-1494631003212 directory \hadoop\data\dfs\datanode\current\BP-664414510-192.168.8.5-1494631003212\current
    17/05/13 07:18:43 INFO datanode.DataNode: Setting up storage: nsid=61861794;bpid
    =BP-664414510-192.168.8.5-1494631003212;lv=-56;nsInfo=lv=-63;cid=CID-1284c5d0-59
    2a-4a41-b185-e53fb57dcfbf;nsid=61861794;c=0;bpid=BP-664414510-192.168.8.5-149463
    1003212;dnuuid=null
    17/05/13 07:18:43 INFO datanode.DataNode: Generated and persisted new Datanode U
    UID e6e53ca9-b788-4c1c-9308-29b31be28705
    17/05/13 07:18:43 INFO impl.FsDatasetImpl: Added new volume: DS-f2b82635-0df9-48
    4f-9d12-4364a9279b20
    17/05/13 07:18:43 INFO impl.FsDatasetImpl: Added volume - \hadoop\data\dfs\datanode\current, StorageType: DISK
    17/05/13 07:18:43 INFO impl.FsDatasetImpl: Registered FSDatasetState MBean
    17/05/13 07:18:43 INFO impl.FsDatasetImpl: Adding block pool BP-664414510-192.16
    8.8.5-1494631003212
    17/05/13 07:18:43 INFO impl.FsDatasetImpl: Scanning block pool BP-664414510-192.168.8.5-1494631003212 on volume D:\hadoop\data\dfs\datanode\current...
    17/05/13 07:18:43 INFO impl.FsDatasetImpl: Time taken to scan block pool BP-664414510-192.168.8.5-1494631003212 on D:\hadoop\data\dfs\datanode\current: 15ms
    17/05/13 07:18:43 INFO impl.FsDatasetImpl: Total time to scan all replicas for b
    lock pool BP-664414510-192.168.8.5-1494631003212: 20ms
    17/05/13 07:18:43 INFO impl.FsDatasetImpl: Adding replicas to map for block pool BP-664414510-192.168.8.5-1494631003212 on volume D:\hadoop\data\dfs\datanode\current...
    17/05/13 07:18:43 INFO impl.FsDatasetImpl: Time to add replicas to map for block pool BP-664414510-192.168.8.5-1494631003212 on volume D:\hadoop\data\dfs\datanode\current: 0ms
    17/05/13 07:18:43 INFO impl.FsDatasetImpl: Total time to add all replicas to map
    : 17ms
    17/05/13 07:18:44 INFO datanode.DirectoryScanner: Periodic Directory Tree Verifi
    cation scan starting at 1494650306107 with interval 21600000
    17/05/13 07:18:44 INFO datanode.VolumeScanner: Now scanning bpid BP-664414510-192.168.8.5-1494631003212 on volume \hadoop\data\dfs\datanode
    17/05/13 07:18:44 INFO datanode.VolumeScanner: VolumeScanner(\hadoop\data\dfs\datanode, DS-f2b82635-0df9-484f-9d12-4364a9279b20): finished scanning block pool BP-664414510-192.168.8.5-1494631003212
    17/05/13 07:18:44 INFO datanode.DataNode: Block pool BP-664414510-192.168.8.5-14
    94631003212 (Datanode Uuid null) service to localhost/127.0.0.1:9000 beginning h
    andshake with NN
    17/05/13 07:18:44 INFO datanode.VolumeScanner: VolumeScanner(\hadoop\data\dfs\datanode, DS-f2b82635-0df9-484f-9d12-4364a9279b20): no suitable block pools found to scan.  Waiting 1814399766 ms.
    17/05/13 07:18:44 INFO datanode.DataNode: Block pool Block pool BP-664414510-192
    .168.8.5-1494631003212 (Datanode Uuid null) service to localhost/127.0.0.1:9000
    successfully registered with NN
    17/05/13 07:18:44 INFO datanode.DataNode: For namenode localhost/127.0.0.1:9000
    using DELETEREPORT_INTERVAL of 300000 msec  BLOCKREPORT_INTERVAL of 21600000msec
     CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
    17/05/13 07:18:44 INFO datanode.DataNode: Namenode Block pool BP-664414510-192.1
    68.8.5-1494631003212 (Datanode Uuid e6e53ca9-b788-4c1c-9308-29b31be28705) servic
    e to localhost/127.0.0.1:9000 trying to claim ACTIVE state with txid=1
    17/05/13 07:18:44 INFO datanode.DataNode: Acknowledging ACTIVE Namenode Block po
    ol BP-664414510-192.168.8.5-1494631003212 (Datanode Uuid e6e53ca9-b788-4c1c-9308
    -29b31be28705) service to localhost/127.0.0.1:9000
    17/05/13 07:18:44 INFO datanode.DataNode: Successfully sent block report 0x20e81
    034dafa,  containing 1 storage report(s), of which we sent 1. The reports had 0
    total blocks and used 1 RPC(s). This took 5 msec to generate and 91 msecs for RP
    C and NN processing. Got back one command: FinalizeCommand/5.
    17/05/13 07:18:44 INFO datanode.DataNode: Got finalize command for block pool BP
    -664414510-192.168.8.5-1494631003212

    NameNode

    ************************************************************/
    17/05/13 07:18:24 INFO namenode.NameNode: createNameNode []
    17/05/13 07:18:26 INFO impl.MetricsConfig: loaded properties from hadoop-metrics
    2.properties
    17/05/13 07:18:26 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 s
    econd(s).
    17/05/13 07:18:26 INFO impl.MetricsSystemImpl: NameNode metrics system started
    17/05/13 07:18:26 INFO namenode.NameNode: fs.defaultFS is hdfs://localhost:9000
    17/05/13 07:18:26 INFO namenode.NameNode: Clients are to use localhost:9000 to a
    ccess this namenode/service.
    17/05/13 07:18:28 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0
    .0.0:50070
    17/05/13 07:18:28 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter
    (org.mortbay.log) via org.mortbay.log.Slf4jLog
    17/05/13 07:18:28 INFO server.AuthenticationFilter: Unable to initialize FileSig
    nerSecretProvider, falling back to use random secrets.
    17/05/13 07:18:28 INFO http.HttpRequestLog: Http request log for http.requests.n
    amenode is not defined
    17/05/13 07:18:28 INFO http.HttpServer2: Added global filter 'safety' (class=org
    .apache.hadoop.http.HttpServer2$QuotingInputFilter)
    17/05/13 07:18:28 INFO http.HttpServer2: Added filter static_user_filter (class=
    org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
    
    17/05/13 07:18:28 INFO http.HttpServer2: Added filter static_user_filter (class=
    org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
    
    17/05/13 07:18:28 INFO http.HttpServer2: Added filter static_user_filter (class=
    org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context stat
    ic
    17/05/13 07:18:28 INFO http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.we
    b.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
    17/05/13 07:18:28 INFO http.HttpServer2: addJerseyResourcePackage: packageName=o
    rg.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.r
    esources, pathSpec=/webhdfs/v1/*
    17/05/13 07:18:28 INFO http.HttpServer2: Jetty bound to port 50070
    17/05/13 07:18:28 INFO mortbay.log: jetty-6.1.26
    17/05/13 07:18:31 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWi
    thSafeStartup@0.0.0.0:50070
    17/05/13 07:18:31 WARN namenode.FSNamesystem: Only one image storage directory (
    dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant
    storage directories!
    17/05/13 07:18:31 WARN namenode.FSNamesystem: Only one namespace edits storage d
    irectory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of
     redundant storage directories!
    17/05/13 07:18:31 INFO namenode.FSNamesystem: No KeyProvider found.
    17/05/13 07:18:31 INFO namenode.FSNamesystem: fsLock is fair:true
    17/05/13 07:18:31 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.lim
    it=1000
    17/05/13 07:18:31 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.re
    gistration.ip-hostname-check=true
    17/05/13 07:18:31 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.
    block.deletion.sec is set to 000:00:00:00.000
    17/05/13 07:18:31 INFO blockmanagement.BlockManager: The block deletion will sta
    rt around 2017 May 13 07:18:31
    17/05/13 07:18:31 INFO util.GSet: Computing capacity for map BlocksMap
    17/05/13 07:18:31 INFO util.GSet: VM type       = 64-bit
    17/05/13 07:18:31 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
    17/05/13 07:18:31 INFO util.GSet: capacity      = 2^21 = 2097152 entries
    17/05/13 07:18:31 INFO blockmanagement.BlockManager: dfs.block.access.token.enab
    le=false
    17/05/13 07:18:31 INFO blockmanagement.BlockManager: defaultReplication
    = 1
    17/05/13 07:18:31 INFO blockmanagement.BlockManager: maxReplication
    = 512
    17/05/13 07:18:31 INFO blockmanagement.BlockManager: minReplication
    = 1
    17/05/13 07:18:31 INFO blockmanagement.BlockManager: maxReplicationStreams
    = 2
    17/05/13 07:18:31 INFO blockmanagement.BlockManager: replicationRecheckInterval
    = 3000
    17/05/13 07:18:31 INFO blockmanagement.BlockManager: encryptDataTransfer
    = false
    17/05/13 07:18:31 INFO blockmanagement.BlockManager: maxNumBlocksToLog
    = 1000
    17/05/13 07:18:31 INFO namenode.FSNamesystem: fsOwner             = Administrato
    r (auth:SIMPLE)
    17/05/13 07:18:31 INFO namenode.FSNamesystem: supergroup          = supergroup
    17/05/13 07:18:31 INFO namenode.FSNamesystem: isPermissionEnabled = true
    17/05/13 07:18:31 INFO namenode.FSNamesystem: HA Enabled: false
    17/05/13 07:18:31 INFO namenode.FSNamesystem: Append Enabled: true
    17/05/13 07:18:32 INFO util.GSet: Computing capacity for map INodeMap
    17/05/13 07:18:32 INFO util.GSet: VM type       = 64-bit
    17/05/13 07:18:32 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
    17/05/13 07:18:32 INFO util.GSet: capacity      = 2^20 = 1048576 entries
    17/05/13 07:18:32 INFO namenode.FSDirectory: ACLs enabled? false
    17/05/13 07:18:32 INFO namenode.FSDirectory: XAttrs enabled? true
    17/05/13 07:18:32 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
    17/05/13 07:18:32 INFO namenode.NameNode: Caching file names occuring more than
    10 times
    17/05/13 07:18:32 INFO util.GSet: Computing capacity for map cachedBlocks
    17/05/13 07:18:32 INFO util.GSet: VM type       = 64-bit
    17/05/13 07:18:32 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
    17/05/13 07:18:32 INFO util.GSet: capacity      = 2^18 = 262144 entries
    17/05/13 07:18:32 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pc
    t = 0.9990000128746033
    17/05/13 07:18:32 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanode
    s = 0
    17/05/13 07:18:32 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension
      = 30000
    17/05/13 07:18:32 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.n
    um.buckets = 10
    17/05/13 07:18:32 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.user
    s = 10
    17/05/13 07:18:32 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.
    minutes = 1,5,25
    17/05/13 07:18:32 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
    
    17/05/13 07:18:32 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total
     heap and retry cache entry expiry time is 600000 millis
    17/05/13 07:18:33 INFO util.GSet: Computing capacity for map NameNodeRetryCache
    17/05/13 07:18:33 INFO util.GSet: VM type       = 64-bit
    17/05/13 07:18:33 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.
    1 KB
    17/05/13 07:18:33 INFO util.GSet: capacity      = 2^15 = 32768 entries
    17/05/13 07:18:33 INFO common.Storage: Lock on \hadoop\data\dfs\namenode\in_use.lock acquired by nodename 7852@wulinfeng
    17/05/13 07:18:34 INFO namenode.FileJournalManager: Recovering unfinalized segments in \hadoop\data\dfs\namenode\current
    17/05/13 07:18:34 INFO namenode.FSImage: No edit log streams selected.
    17/05/13 07:18:34 INFO namenode.FSImageFormatPBINode: Loading 1 INodes.
    17/05/13 07:18:34 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 0 secon
    ds.
    17/05/13 07:18:34 INFO namenode.FSImage: Loaded image for txid 0 from \hadoop\data\dfs\namenode\current\fsimage_0000000000000000000
    17/05/13 07:18:34 INFO namenode.FSNamesystem: Need to save fs image? false (stal
    eImage=false, haEnabled=false, isRollingUpgrade=false)
    17/05/13 07:18:34 INFO namenode.FSEditLog: Starting log segment at 1
    17/05/13 07:18:34 INFO namenode.NameCache: initialized with 0 entries 0 lookups
    17/05/13 07:18:35 INFO namenode.FSNamesystem: Finished loading FSImage in 1331 m
    secs
    17/05/13 07:18:36 INFO namenode.NameNode: RPC server is binding to localhost:900
    0
    17/05/13 07:18:36 INFO ipc.CallQueueManager: Using callQueue class java.util.con
    current.LinkedBlockingQueue
    17/05/13 07:18:36 INFO namenode.FSNamesystem: Registered FSNamesystemState MBean
    
    17/05/13 07:18:36 INFO ipc.Server: Starting Socket Reader #1 for port 9000
    17/05/13 07:18:36 INFO namenode.LeaseManager: Number of blocks under constructio
    n: 0
    17/05/13 07:18:36 INFO namenode.LeaseManager: Number of blocks under constructio
    n: 0
    17/05/13 07:18:36 INFO namenode.FSNamesystem: initializing replication queues
    17/05/13 07:18:36 INFO hdfs.StateChange: STATE* Leaving safe mode after 5 secs
    17/05/13 07:18:36 INFO hdfs.StateChange: STATE* Network topology has 0 racks and
     0 datanodes
    17/05/13 07:18:36 INFO hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 bloc
    ks
    17/05/13 07:18:36 INFO blockmanagement.DatanodeDescriptor: Number of failed stor
    age changes from 0 to 0
    17/05/13 07:18:37 INFO blockmanagement.BlockManager: Total number of blocks
           = 0
    17/05/13 07:18:37 INFO blockmanagement.BlockManager: Number of invalid blocks
           = 0
    17/05/13 07:18:37 INFO blockmanagement.BlockManager: Number of under-replicated
    blocks = 0
    17/05/13 07:18:37 INFO blockmanagement.BlockManager: Number of  over-replicated
    blocks = 0
    17/05/13 07:18:37 INFO blockmanagement.BlockManager: Number of blocks being writ
    ten    = 0
    17/05/13 07:18:37 INFO hdfs.StateChange: STATE* Replication Queue initialization
     scan for invalid, over- and under-replicated blocks completed in 98 msec
    17/05/13 07:18:37 INFO namenode.NameNode: NameNode RPC up at: localhost/127.0.0.
    1:9000
    17/05/13 07:18:37 INFO namenode.FSNamesystem: Starting services required for act
    ive state
    17/05/13 07:18:37 INFO ipc.Server: IPC Server Responder: starting
    17/05/13 07:18:37 INFO ipc.Server: IPC Server listener on 9000: starting
    17/05/13 07:18:37 INFO blockmanagement.CacheReplicationMonitor: Starting CacheRe
    plicationMonitor with interval 30000 milliseconds
    17/05/13 07:18:44 INFO hdfs.StateChange: BLOCK* registerDatanode: from DatanodeR
    egistration(127.0.0.1:50010, datanodeUuid=e6e53ca9-b788-4c1c-9308-29b31be28705,
    infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-1284
    c5d0-592a-4a41-b185-e53fb57dcfbf;nsid=61861794;c=0) storage e6e53ca9-b788-4c1c-9
    308-29b31be28705
    17/05/13 07:18:44 INFO blockmanagement.DatanodeDescriptor: Number of failed stor
    age changes from 0 to 0
    17/05/13 07:18:44 INFO net.NetworkTopology: Adding a new node: /default-rack/127
    .0.0.1:50010
    17/05/13 07:18:44 INFO blockmanagement.DatanodeDescriptor: Number of failed stor
    age changes from 0 to 0
    17/05/13 07:18:44 INFO blockmanagement.DatanodeDescriptor: Adding new storage ID
     DS-f2b82635-0df9-484f-9d12-4364a9279b20 for DN 127.0.0.1:50010
    17/05/13 07:18:44 INFO BlockStateChange: BLOCK* processReport: from storage DS-f
    2b82635-0df9-484f-9d12-4364a9279b20 node DatanodeRegistration(127.0.0.1:50010, d
    atanodeUuid=e6e53ca9-b788-4c1c-9308-29b31be28705, infoPort=50075, infoSecurePort
    =0, ipcPort=50020, storageInfo=lv=-56;cid=CID-1284c5d0-592a-4a41-b185-e53fb57dcf
    bf;nsid=61861794;c=0), blocks: 0, hasStaleStorage: false, processing time: 2 mse
    cs

    NodeManager

    17/05/13 07:18:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoo
    p.yarn.server.nodemanager.containermanager.container.ContainerEventType for clas
    s org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImp
    l$ContainerEventDispatcher
    17/05/13 07:18:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoo
    p.yarn.server.nodemanager.containermanager.application.ApplicationEventType for
    class org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManage
    rImpl$ApplicationEventDispatcher
    17/05/13 07:18:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoo
    p.yarn.server.nodemanager.containermanager.localizer.event.LocalizationEventType
     for class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.
    ResourceLocalizationService
    17/05/13 07:18:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoo
    p.yarn.server.nodemanager.containermanager.AuxServicesEventType for class org.ap
    ache.hadoop.yarn.server.nodemanager.containermanager.AuxServices
    17/05/13 07:18:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoo
    p.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorEventType fo
    r class org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.Conta
    inersMonitorImpl
    17/05/13 07:18:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoo
    p.yarn.server.nodemanager.containermanager.launcher.ContainersLauncherEventType
    for class org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.Co
    ntainersLauncher
    17/05/13 07:18:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoo
    p.yarn.server.nodemanager.ContainerManagerEventType for class org.apache.hadoop.
    yarn.server.nodemanager.containermanager.ContainerManagerImpl
    17/05/13 07:18:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoo
    p.yarn.server.nodemanager.NodeManagerEventType for class org.apache.hadoop.yarn.
    server.nodemanager.NodeManager
    17/05/13 07:18:34 INFO impl.MetricsConfig: loaded properties from hadoop-metrics
    2.properties
    17/05/13 07:18:34 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 s
    econd(s).
    17/05/13 07:18:34 INFO impl.MetricsSystemImpl: NodeManager metrics system starte
    d
    17/05/13 07:18:34 INFO event.AsyncDispatcher: Registering class org.apache.hadoo
    p.yarn.server.nodemanager.containermanager.loghandler.event.LogHandlerEventType
    for class org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.
    NonAggregatingLogHandler
    17/05/13 07:18:34 INFO event.AsyncDispatcher: Registering class org.apache.hadoo
    p.yarn.server.nodemanager.containermanager.localizer.sharedcache.SharedCacheUplo
    adEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager
    .localizer.sharedcache.SharedCacheUploadService
    17/05/13 07:18:34 INFO localizer.ResourceLocalizationService: per directory file
     limit = 8192
    17/05/13 07:18:43 INFO event.AsyncDispatcher: Registering class org.apache.hadoo
    p.yarn.server.nodemanager.containermanager.localizer.event.LocalizerEventType fo
    r class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.Res
    ourceLocalizationService$LocalizerTracker
    17/05/13 07:18:44 WARN containermanager.AuxServices: The Auxilurary Service name
    d 'mapreduce_shuffle' in the configuration is for class org.apache.hadoop.mapred
    .ShuffleHandler which has a name of 'httpshuffle'. Because these are not the sam
    e tools trying to send ServiceData and read Service Meta Data may have issues un
    less the refer to the name in the config.
    17/05/13 07:18:44 INFO containermanager.AuxServices: Adding auxiliary service ht
    tpshuffle, "mapreduce_shuffle"
    17/05/13 07:18:44 INFO monitor.ContainersMonitorImpl:  Using ResourceCalculatorP
    lugin : org.apache.hadoop.yarn.util.WindowsResourceCalculatorPlugin@4ee203eb
    17/05/13 07:18:44 INFO monitor.ContainersMonitorImpl:  Using ResourceCalculatorP
    rocessTree : null
    17/05/13 07:18:44 INFO monitor.ContainersMonitorImpl: Physical memory check enab
    led: true
    17/05/13 07:18:44 INFO monitor.ContainersMonitorImpl: Virtual memory check enabl
    ed: true
    17/05/13 07:18:44 WARN monitor.ContainersMonitorImpl: NodeManager configured wit
    h 8 G physical memory allocated to containers, which is more than 80% of the tot
    al physical memory available (5.6 G). Thrashing might happen.
    17/05/13 07:18:44 INFO nodemanager.NodeStatusUpdaterImpl: Initialized nodemanage
    r for null: physical-memory=8192 virtual-memory=17204 virtual-cores=8
    17/05/13 07:18:44 INFO ipc.CallQueueManager: Using callQueue class java.util.con
    current.LinkedBlockingQueue
    17/05/13 07:18:44 INFO ipc.Server: Starting Socket Reader #1 for port 53137
    17/05/13 07:18:44 INFO pb.RpcServerFactoryPBImpl: Adding protocol org.apache.had
    oop.yarn.api.ContainerManagementProtocolPB to the server
    17/05/13 07:18:44 INFO containermanager.ContainerManagerImpl: Blocking new conta
    iner-requests as container manager rpc server is still starting.
    17/05/13 07:18:44 INFO ipc.Server: IPC Server Responder: starting
    17/05/13 07:18:44 INFO ipc.Server: IPC Server listener on 53137: starting
    17/05/13 07:18:44 INFO security.NMContainerTokenSecretManager: Updating node add
    ress : wulinfeng:53137
    17/05/13 07:18:44 INFO ipc.CallQueueManager: Using callQueue class java.util.con
    current.LinkedBlockingQueue
    17/05/13 07:18:44 INFO pb.RpcServerFactoryPBImpl: Adding protocol org.apache.had
    oop.yarn.server.nodemanager.api.LocalizationProtocolPB to the server
    17/05/13 07:18:44 INFO ipc.Server: IPC Server listener on 8040: starting
    17/05/13 07:18:44 INFO ipc.Server: Starting Socket Reader #1 for port 8040
    17/05/13 07:18:44 INFO localizer.ResourceLocalizationService: Localizer started
    on port 8040
    17/05/13 07:18:44 INFO ipc.Server: IPC Server Responder: starting
    17/05/13 07:18:44 INFO mapred.IndexCache: IndexCache created with max memory = 1
    0485760
    17/05/13 07:18:44 INFO mapred.ShuffleHandler: httpshuffle listening on port 1356
    2
    17/05/13 07:18:44 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree curre
    ntly is supported only on Linux.
    17/05/13 07:18:45 INFO containermanager.ContainerManagerImpl: ContainerManager s
    tarted at wulinfeng/192.168.8.5:53137
    17/05/13 07:18:45 INFO containermanager.ContainerManagerImpl: ContainerManager b
    ound to 0.0.0.0/0.0.0.0:0
    17/05/13 07:18:45 INFO webapp.WebServer: Instantiating NMWebApp at 0.0.0.0:8042
    17/05/13 07:18:45 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter
    (org.mortbay.log) via org.mortbay.log.Slf4jLog
    17/05/13 07:18:45 INFO server.AuthenticationFilter: Unable to initialize FileSig
    nerSecretProvider, falling back to use random secrets.
    17/05/13 07:18:45 INFO http.HttpRequestLog: Http request log for http.requests.n
    odemanager is not defined
    17/05/13 07:18:45 INFO http.HttpServer2: Added global filter 'safety' (class=org
    .apache.hadoop.http.HttpServer2$QuotingInputFilter)
    17/05/13 07:18:45 INFO http.HttpServer2: Added filter static_user_filter (class=
    org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context node
    
    17/05/13 07:18:45 INFO http.HttpServer2: Added filter static_user_filter (class=
    org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
    
    17/05/13 07:18:45 INFO http.HttpServer2: Added filter static_user_filter (class=
    org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context stat
    ic
    17/05/13 07:18:45 INFO http.HttpServer2: adding path spec: /node/*
    17/05/13 07:18:45 INFO http.HttpServer2: adding path spec: /ws/*
    17/05/13 07:18:46 INFO webapp.WebApps: Registered webapp guice modules
    17/05/13 07:18:46 INFO http.HttpServer2: Jetty bound to port 8042
    17/05/13 07:18:46 INFO mortbay.log: jetty-6.1.26
    17/05/13 07:18:46 INFO mortbay.log: Extract jar:file:/D:/hadoop-2.7.2/share/hadoop/yarn/hadoop-yarn-common-2.7.2.jar!/webapps/node to C:\Users\ADMINI~1\AppData\Local\Temp\Jetty_0_0_0_0_8042_node____19tj0x\webapp
    May 13, 2017 7:18:47 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
    INFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices as a root resource class
    May 13, 2017 7:18:47 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
    INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
    May 13, 2017 7:18:47 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
    INFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver as a provider class
    May 13, 2017 7:18:47 AM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
    INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
    May 13, 2017 7:18:47 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
    INFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
    May 13, 2017 7:18:47 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
    INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
    May 13, 2017 7:18:48 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
    INFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices to GuiceManagedComponentProvider with the scope "Singleton"
    17/05/13 07:18:48 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWi
    thSafeStartup@0.0.0.0:8042
    17/05/13 07:18:48 INFO webapp.WebApps: Web app node started at 8042
    17/05/13 07:18:49 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0
    :8031
    17/05/13 07:18:49 INFO nodemanager.NodeStatusUpdaterImpl: Sending out 0 NM conta
    iner statuses: []
    17/05/13 07:18:49 INFO nodemanager.NodeStatusUpdaterImpl: Registering with RM us
    ing containers :[]
    17/05/13 07:18:49 INFO security.NMContainerTokenSecretManager: Rolling master-ke
    y for container-tokens, got key with id -610858047
    17/05/13 07:18:49 INFO security.NMTokenSecretManagerInNM: Rolling master-key for
     container-tokens, got key with id 2017302061
    17/05/13 07:18:49 INFO nodemanager.NodeStatusUpdaterImpl: Registered with Resour
    ceManager as wulinfeng:53137 with total resource of <memory:8192, vCores:8>
    17/05/13 07:18:49 INFO nodemanager.NodeStatusUpdaterImpl: Notifying ContainerMan
    ager to unblock new container-requests

    ResourceManager

    17/05/13 07:18:19 INFO conf.Configuration: found resource core-site.xml at file:
    /D:/hadoop-2.7.2/etc/hadoop/core-site.xml
    17/05/13 07:18:20 INFO security.Groups: clearing userToGroupsMap cache
    17/05/13 07:18:21 INFO conf.Configuration: found resource yarn-site.xml at file:
    /D:/hadoop-2.7.2/etc/hadoop/yarn-site.xml
    17/05/13 07:18:21 INFO event.AsyncDispatcher: Registering class org.apache.hadoo
    p.yarn.server.resourcemanager.RMFatalEventType for class org.apache.hadoop.yarn.
    server.resourcemanager.ResourceManager$RMFatalEventDispatcher
    17/05/13 07:18:29 INFO security.NMTokenSecretManagerInRM: NMTokenKeyRollingInter
    val: 86400000ms and NMTokenKeyActivationDelay: 900000ms
    17/05/13 07:18:29 INFO security.RMContainerTokenSecretManager: ContainerTokenKey
    RollingInterval: 86400000ms and ContainerTokenKeyActivationDelay: 900000ms
    17/05/13 07:18:29 INFO security.AMRMTokenSecretManager: AMRMTokenKeyRollingInter
    val: 86400000ms and AMRMTokenKeyActivationDelay: 900000 ms
    17/05/13 07:18:29 INFO event.AsyncDispatcher: Registering class org.apache.hadoo
    p.yarn.server.resourcemanager.recovery.RMStateStoreEventType for class org.apach
    e.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandle
    r
    17/05/13 07:18:29 INFO event.AsyncDispatcher: Registering class org.apache.hadoo
    p.yarn.server.resourcemanager.NodesListManagerEventType for class org.apache.had
    oop.yarn.server.resourcemanager.NodesListManager
    17/05/13 07:18:29 INFO resourcemanager.ResourceManager: Using Scheduler: org.apa
    che.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
    17/05/13 07:18:29 INFO event.AsyncDispatcher: Registering class org.apache.hadoo
    p.yarn.server.resourcemanager.scheduler.event.SchedulerEventType for class org.a
    pache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatche
    r
    17/05/13 07:18:29 INFO event.AsyncDispatcher: Registering class org.apache.hadoo
    p.yarn.server.resourcemanager.rmapp.RMAppEventType for class org.apache.hadoop.y
    arn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher
    17/05/13 07:18:29 INFO event.AsyncDispatcher: Registering class org.apache.hadoo
    p.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptEventType for class org.
    apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEven
    tDispatcher
    17/05/13 07:18:29 INFO event.AsyncDispatcher: Registering class org.apache.hadoo
    p.yarn.server.resourcemanager.rmnode.RMNodeEventType for class org.apache.hadoop
    .yarn.server.resourcemanager.ResourceManager$NodeEventDispatcher
    17/05/13 07:18:29 INFO impl.MetricsConfig: loaded properties from hadoop-metrics
    2.properties
    17/05/13 07:18:30 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 s
    econd(s).
    17/05/13 07:18:30 INFO impl.MetricsSystemImpl: ResourceManager metrics system st
    arted
    17/05/13 07:18:30 INFO event.AsyncDispatcher: Registering class org.apache.hadoo
    p.yarn.server.resourcemanager.RMAppManagerEventType for class org.apache.hadoop.
    yarn.server.resourcemanager.RMAppManager
    17/05/13 07:18:30 INFO event.AsyncDispatcher: Registering class org.apache.hadoo
    p.yarn.server.resourcemanager.amlauncher.AMLauncherEventType for class org.apach
    e.hadoop.yarn.server.resourcemanager.amlauncher.ApplicationMasterLauncher
    17/05/13 07:18:30 INFO resourcemanager.RMNMInfo: Registered RMNMInfo MBean
    17/05/13 07:18:30 INFO security.YarnAuthorizationProvider: org.apache.hadoop.yar
    n.security.ConfiguredYarnAuthorizer is instiantiated.
    17/05/13 07:18:30 INFO util.HostsFileReader: Refreshing hosts (include/exclude)
    list
    17/05/13 07:18:30 INFO conf.Configuration: found resource capacity-scheduler.xml
     at file:/D:/hadoop-2.7.2/etc/hadoop/capacity-scheduler.xml
    17/05/13 07:18:30 INFO capacity.CapacitySchedulerConfiguration: max alloc mb per
     queue for root is undefined
    17/05/13 07:18:30 INFO capacity.CapacitySchedulerConfiguration: max alloc vcore
    per queue for root is undefined
    17/05/13 07:18:30 INFO capacity.ParentQueue: root, capacity=1.0, asboluteCapacit
    y=1.0, maxCapacity=1.0, asboluteMaxCapacity=1.0, state=RUNNING, acls=SUBMIT_APP:
    *ADMINISTER_QUEUE:*, labels=*,
    , reservationsContinueLooking=true
    17/05/13 07:18:30 INFO capacity.ParentQueue: Initialized parent-queue root name=
    root, fullname=root
    17/05/13 07:18:30 INFO capacity.CapacitySchedulerConfiguration: max alloc mb per
     queue for root.default is undefined
    17/05/13 07:18:30 INFO capacity.CapacitySchedulerConfiguration: max alloc vcore
    per queue for root.default is undefined
    17/05/13 07:18:30 INFO capacity.LeafQueue: Initializing default
    capacity = 1.0 [= (float) configuredCapacity / 100 ]
    asboluteCapacity = 1.0 [= parentAbsoluteCapacity * capacity ]
    maxCapacity = 1.0 [= configuredMaxCapacity ]
    absoluteMaxCapacity = 1.0 [= 1.0 maximumCapacity undefined, (parentAbsoluteMaxCa
    pacity * maximumCapacity) / 100 otherwise ]
    userLimit = 100 [= configuredUserLimit ]
    userLimitFactor = 1.0 [= configuredUserLimitFactor ]
    maxApplications = 10000 [= configuredMaximumSystemApplicationsPerQueue or (int)(
    configuredMaximumSystemApplications * absoluteCapacity)]
    maxApplicationsPerUser = 10000 [= (int)(maxApplications * (userLimit / 100.0f) *
     userLimitFactor) ]
    usedCapacity = 0.0 [= usedResourcesMemory / (clusterResourceMemory * absoluteCap
    acity)]
    absoluteUsedCapacity = 0.0 [= usedResourcesMemory / clusterResourceMemory]
    maxAMResourcePerQueuePercent = 0.1 [= configuredMaximumAMResourcePercent ]
    minimumAllocationFactor = 0.875 [= (float)(maximumAllocationMemory - minimumAllo
    cationMemory) / maximumAllocationMemory ]
    maximumAllocation = <memory:8192, vCores:32> [= configuredMaxAllocation ]
    numContainers = 0 [= currentNumContainers ]
    state = RUNNING [= configuredState ]
    acls = SUBMIT_APP:*ADMINISTER_QUEUE:* [= configuredAcls ]
    nodeLocalityDelay = 40
    labels=*,
    nodeLocalityDelay = 40
    reservationsContinueLooking = true
    preemptionDisabled = true
    
    17/05/13 07:18:30 INFO capacity.CapacityScheduler: Initialized queue: default: c
    apacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapac
    ity=0.0, absoluteUsedCapacity=0.0, numApps=0, numContainers=0
    17/05/13 07:18:30 INFO capacity.CapacityScheduler: Initialized queue: root: numC
    hildQueue= 1, capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCore
    s:0>usedCapacity=0.0, numApps=0, numContainers=0
    17/05/13 07:18:30 INFO capacity.CapacityScheduler: Initialized root queue root:
    numChildQueue= 1, capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, v
    Cores:0>usedCapacity=0.0, numApps=0, numContainers=0
    17/05/13 07:18:30 INFO capacity.CapacityScheduler: Initialized queue mappings, o
    verride: false
    17/05/13 07:18:30 INFO capacity.CapacityScheduler: Initialized CapacityScheduler
     with calculator=class org.apache.hadoop.yarn.util.resource.DefaultResourceCalcu
    lator, minimumAllocation=<<memory:1024, vCores:1>>, maximumAllocation=<<memory:8
    192, vCores:32>>, asynchronousScheduling=false, asyncScheduleInterval=5ms
    17/05/13 07:18:30 INFO metrics.SystemMetricsPublisher: YARN system metrics publi
    shing service is not enabled
    17/05/13 07:18:30 INFO resourcemanager.ResourceManager: Transitioning to active
    state
    17/05/13 07:18:31 INFO recovery.RMStateStore: Updating AMRMToken
    17/05/13 07:18:31 INFO security.RMContainerTokenSecretManager: Rolling master-ke
    y for container-tokens
    17/05/13 07:18:31 INFO security.NMTokenSecretManagerInRM: Rolling master-key for
     nm-tokens
    17/05/13 07:18:31 INFO delegation.AbstractDelegationTokenSecretManager: Updating
     the current master key for generating delegation tokens
    17/05/13 07:18:31 INFO security.RMDelegationTokenSecretManager: storing master k
    ey with keyID 1
    17/05/13 07:18:31 INFO recovery.RMStateStore: Storing RMDTMasterKey.
    17/05/13 07:18:31 INFO event.AsyncDispatcher: Registering class org.apache.hadoo
    p.yarn.nodelabels.event.NodeLabelsStoreEventType for class org.apache.hadoop.yar
    n.nodelabels.CommonNodeLabelsManager$ForwardingEventHandler
    17/05/13 07:18:31 INFO delegation.AbstractDelegationTokenSecretManager: Starting
     expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
    17/05/13 07:18:31 INFO delegation.AbstractDelegationTokenSecretManager: Updating
     the current master key for generating delegation tokens
    17/05/13 07:18:31 INFO security.RMDelegationTokenSecretManager: storing master k
    ey with keyID 2
    17/05/13 07:18:31 INFO recovery.RMStateStore: Storing RMDTMasterKey.
    17/05/13 07:18:31 INFO ipc.CallQueueManager: Using callQueue class java.util.con
    current.LinkedBlockingQueue
    17/05/13 07:18:31 INFO pb.RpcServerFactoryPBImpl: Adding protocol org.apache.had
    oop.yarn.server.api.ResourceTrackerPB to the server
    17/05/13 07:18:31 INFO ipc.Server: Starting Socket Reader #1 for port 8031
    17/05/13 07:18:32 INFO ipc.Server: IPC Server listener on 8031: starting
    17/05/13 07:18:32 INFO ipc.Server: IPC Server Responder: starting
    17/05/13 07:18:32 INFO ipc.CallQueueManager: Using callQueue class java.util.con
    current.LinkedBlockingQueue
    17/05/13 07:18:33 INFO pb.RpcServerFactoryPBImpl: Adding protocol org.apache.had
    oop.yarn.api.ApplicationMasterProtocolPB to the server
    17/05/13 07:18:33 INFO ipc.Server: IPC Server listener on 8030: starting
    17/05/13 07:18:33 INFO ipc.CallQueueManager: Using callQueue class java.util.con
    current.LinkedBlockingQueue
    17/05/13 07:18:33 INFO pb.RpcServerFactoryPBImpl: Adding protocol org.apache.had
    oop.yarn.api.ApplicationClientProtocolPB to the server
    17/05/13 07:18:33 INFO resourcemanager.ResourceManager: Transitioned to active s
    tate
    17/05/13 07:18:33 INFO ipc.Server: IPC Server listener on 8032: starting
    17/05/13 07:18:33 INFO ipc.Server: Starting Socket Reader #1 for port 8030
    17/05/13 07:18:33 INFO ipc.Server: IPC Server Responder: starting
    17/05/13 07:18:34 INFO ipc.Server: Starting Socket Reader #1 for port 8032
    17/05/13 07:18:34 INFO ipc.Server: IPC Server Responder: starting
    17/05/13 07:18:34 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter
    (org.mortbay.log) via org.mortbay.log.Slf4jLog
    17/05/13 07:18:34 INFO server.AuthenticationFilter: Unable to initialize FileSig
    nerSecretProvider, falling back to use random secrets.
    17/05/13 07:18:34 INFO http.HttpRequestLog: Http request log for http.requests.r
    esourcemanager is not defined
    17/05/13 07:18:34 INFO http.HttpServer2: Added global filter 'safety' (class=org
    .apache.hadoop.http.HttpServer2$QuotingInputFilter)
    17/05/13 07:18:34 INFO http.HttpServer2: Added filter RMAuthenticationFilter (cl
    ass=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to conte
    xt cluster
    17/05/13 07:18:34 INFO http.HttpServer2: Added filter RMAuthenticationFilter (cl
    ass=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to conte
    xt static
    17/05/13 07:18:34 INFO http.HttpServer2: Added filter RMAuthenticationFilter (cl
    ass=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to conte
    xt logs
    17/05/13 07:18:34 INFO http.HttpServer2: Added filter static_user_filter (class=
    org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context clus
    ter
    17/05/13 07:18:34 INFO http.HttpServer2: Added filter static_user_filter (class=
    org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context stat
    ic
    17/05/13 07:18:34 INFO http.HttpServer2: Added filter static_user_filter (class=
    org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
    
    17/05/13 07:18:34 INFO http.HttpServer2: adding path spec: /cluster/*
    17/05/13 07:18:34 INFO http.HttpServer2: adding path spec: /ws/*
    17/05/13 07:18:35 INFO webapp.WebApps: Registered webapp guice modules
    17/05/13 07:18:35 INFO http.HttpServer2: Jetty bound to port 8088
    17/05/13 07:18:35 INFO mortbay.log: jetty-6.1.26
    17/05/13 07:18:35 INFO mortbay.log: Extract jar:file:/D:/hadoop-2.7.2/share/hadoop/yarn/hadoop-yarn-common-2.7.2.jar!/webapps/cluster to C:\Users\ADMINI~1\AppData\Local\Temp\Jetty_0_0_0_0_8088_cluster____u0rgz3\webapp
    17/05/13 07:18:36 INFO delegation.AbstractDelegationTokenSecretManager: Updating
     the current master key for generating delegation tokens
    17/05/13 07:18:36 INFO delegation.AbstractDelegationTokenSecretManager: Starting
     expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
    17/05/13 07:18:36 INFO delegation.AbstractDelegationTokenSecretManager: Updating
     the current master key for generating delegation tokens
    May 13, 2017 7:18:36 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
    INFO: Registering org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver as a provider class
    May 13, 2017 7:18:36 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
    INFO: Registering org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices as a root resource class
    May 13, 2017 7:18:36 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
    INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
    May 13, 2017 7:18:36 AM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
    INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
    May 13, 2017 7:18:37 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
    INFO: Binding org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
    May 13, 2017 7:18:38 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
    INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
    May 13, 2017 7:18:40 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
    INFO: Binding org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices to GuiceManagedComponentProvider with the scope "Singleton"
    17/05/13 07:18:41 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:8088
    17/05/13 07:18:41 INFO webapp.WebApps: Web app cluster started at 8088
    17/05/13 07:18:41 INFO ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
    17/05/13 07:18:41 INFO pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.server.api.ResourceManagerAdministrationProtocolPB to the server
    17/05/13 07:18:41 INFO ipc.Server: IPC Server listener on 8033: starting
    17/05/13 07:18:41 INFO ipc.Server: IPC Server Responder: starting
    17/05/13 07:18:41 INFO ipc.Server: Starting Socket Reader #1 for port 8033
    17/05/13 07:18:49 INFO util.RackResolver: Resolved wulinfeng to /default-rack
    17/05/13 07:18:49 INFO resourcemanager.ResourceTrackerService: NodeManager from node wulinfeng(cmPort: 53137 httpPort: 8042) registered with capability: <memory:8192, vCores:8>, assigned nodeId wulinfeng:53137
    17/05/13 07:18:49 INFO rmnode.RMNodeImpl: wulinfeng:53137 Node Transitioned from NEW to RUNNING
    17/05/13 07:18:49 INFO capacity.CapacityScheduler: Added node wulinfeng:53137 clusterResource: <memory:8192, vCores:8>
    17/05/13 07:28:30 INFO scheduler.AbstractYarnScheduler: Release request cache is cleaned up
  • Original address: https://www.cnblogs.com/wuxun1997/p/6847950.html