  • [Hadoop] Setting up a Hadoop 2.7.5 Environment on Windows

    Original post: https://www.cnblogs.com/memento/p/9148721.html

    Prerequisites:

    jdk: jdk-8u161-windows-x64.exe

    hadoop: hadoop-2.7.5.tar.gz

    OS: Windows 10

    I. JDK Installation and Configuration

    See: JDK Environment Configuration (illustrated guide)

    II. Hadoop Installation and Configuration

    1. Download hadoop-2.7.5.tar.gz from http://hadoop.apache.org/releases.html;

    2. Extract hadoop-2.7.5.tar.gz (this guide puts it in the root of the D: drive);


    3. Set the HADOOP_HOME environment variable to that directory;


    Then append the bin and sbin folders under it to the PATH variable;
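
    As a quick sanity check, the two steps above can be verified mechanically. This is a hypothetical helper, not part of the setup itself; it takes an environment dict and reports what is missing:

```python
import os

def check_hadoop_env(env):
    """Return a list of problems with the HADOOP_HOME/PATH setup (empty = OK)."""
    problems = []
    home = env.get("HADOOP_HOME")
    if not home:
        problems.append("HADOOP_HOME is not set")
        return problems
    path_entries = env.get("PATH", "").split(os.pathsep)
    for sub in ("bin", "sbin"):
        expected = os.path.join(home, sub)
        if expected not in path_entries:
            problems.append(f"{expected} is missing from PATH")
    return problems
```

    Call it as check_hadoop_env(os.environ) after setting the variables; an empty list means both bin and sbin are reachable.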


    4. Run the hadoop command in a command prompt to verify the installation;


    If it complains that the JAVA_HOME path is wrong, edit the setting in %HADOOP_HOME%\etc\hadoop\hadoop-env.cmd:


    set JAVA_HOME=%JAVA_HOME%
    @rem change this to
    set JAVA_HOME=C:\Progra~1\Java\jdk1.8.0_161
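
    C:\Progra~1 is the 8.3 short name for C:\Program Files; the short form sidesteps the space, which Hadoop's .cmd scripts do not handle. A tiny illustrative check (hypothetical helper) for whether a JAVA_HOME value needs that treatment:

```python
def java_home_needs_short_path(java_home: str) -> bool:
    """True if the path contains a space and should use its 8.3 short form."""
    return " " in java_home
```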

    III. Hadoop Configuration Files

    core-site.xml

    <configuration>
        <!-- Directory for files Hadoop generates at runtime -->
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/D:/hadoop/workplace/tmp</value>
            <description>Local Hadoop temp directory on the namenode</description>
        </property>
        <property>
            <name>hadoop.name.dir</name>
            <value>/D:/hadoop/workplace/name</value>
        </property>
        <!-- Namenode address -->
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://localhost:9000</value>
            <description>HDFS URI: filesystem://namenode-host:port</description>
        </property>
        <property>
            <name>io.file.buffer.size</name>
            <value>131072</value>
        </property>
    </configuration>

    hdfs-site.xml

    <configuration>
        <!-- Number of replicas HDFS keeps for each block -->
        <property>
            <name>dfs.replication</name>
            <value>1</value>
            <description>Replica count; the default is 3, and it should not exceed the number of datanodes</description>
        </property>
        <property>
            <name>dfs.namenode.name.dir</name>
            <value>/D:/hadoop/workplace/name</value>
            <description>Where the namenode stores the HDFS namespace metadata</description>
        </property>
        <property>
            <name>dfs.datanode.data.dir</name>
            <value>/D:/hadoop/workplace/data</value>
            <description>Physical location of data blocks on the datanode</description>
        </property>
        <property>
            <name>dfs.webhdfs.enabled</name>
            <value>true</value>
        </property>
        <property>
            <name>dfs.permissions</name>
            <value>true</value>
            <description>
                If "true", enable permission checking in HDFS.
                If "false", permission checking is turned off,
                but all other behavior is unchanged.
                Switching from one parameter value to the other does not change the mode,
                owner or group of files or directories.
        </description>
        </property>
    </configuration>
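
    With dfs.webhdfs.enabled set to true above, the namenode exposes the standard WebHDFS REST API on its web port. A minimal sketch of building a request URL for it, assuming the localhost addresses configured in this guide:

```python
def webhdfs_url(path, op, host="localhost", port=50070):
    """Build a WebHDFS REST URL, e.g. for op=LISTSTATUS or op=OPEN."""
    if not path.startswith("/"):
        path = "/" + path
    return f"http://{host}:{port}/webhdfs/v1{path}?op={op}"

# e.g. urllib.request.urlopen(webhdfs_url("/", "LISTSTATUS")) lists the HDFS root
```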

    mapred-site.xml

    <configuration>
        <!-- Run MapReduce on YARN -->
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>
        <property>
            <name>mapreduce.jobhistory.address</name>
            <value>localhost:10020</value>
        </property>
        <property>
            <name>mapreduce.jobhistory.webapp.address</name>
            <value>localhost:19888</value>
        </property>
    </configuration>

    yarn-site.xml

    <configuration>
        <!-- NodeManagers serve map output to reducers via the shuffle service -->
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
        <property>
            <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
            <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
    </configuration>
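
    The four files above all share the same property-list layout, so a typo in a key name is easy to catch mechanically. A sketch (hypothetical helper) that parses a *-site.xml string into a dict, letting you confirm keys such as fs.defaultFS or dfs.replication are present:

```python
import xml.etree.ElementTree as ET

def read_hadoop_config(xml_text):
    """Parse a Hadoop *-site.xml string into a {name: value} dict."""
    root = ET.fromstring(xml_text)
    props = {}
    for prop in root.findall("property"):
        name = prop.findtext("name")
        if name is not None:
            props[name] = prop.findtext("value")
    return props
```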

    IV. Format the NameNode

    Running hadoop namenode -format throws an exception:

    DEPRECATED: Use of this script to execute hdfs command is deprecated.
    Instead use the hdfs command for it.
    18/02/09 12:18:11 ERROR util.Shell: Failed to locate the winutils binary in the hadoop binary path
    java.io.IOException: Could not locate executable D:\hadoop\bin\winutils.exe in the Hadoop binaries.
            at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:382)
            at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:397)
            at org.apache.hadoop.util.Shell.<clinit>(Shell.java:390)
            at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
            at org.apache.hadoop.hdfs.server.common.HdfsServerConstants$RollingUpgradeStartupOption.getAllOptionString(HdfsServerConstants.java:80)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.<clinit>(NameNode.java:265)

    Download the window-hadoop-bin.zip archive, extract it, replace the files in the hadoop\bin directory with its contents, and then format again:

    C:\Users\Memento>hadoop namenode -format
    DEPRECATED: Use of this script to execute hdfs command is deprecated.
    Instead use the hdfs command for it.
    18/06/07 06:25:02 INFO namenode.NameNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG:   host = PC-Name/IP
    STARTUP_MSG:   args = [-format]
    STARTUP_MSG:   version = 2.7.5
    STARTUP_MSG:   classpath = D:\hadoop\etc\hadoop;D:\hadoop\share\hadoop\common\lib\activation-1.1.jar;... (long classpath of bundled jars omitted)
    STARTUP_MSG:   build = https://shv@git-wip-us.apache.org/repos/asf/hadoop.git -r 18065c2b6806ed4aa6a3187d77cbe21bb3dba075; compiled by 'kshvachk' on 2017-12-16T01:06Z
    STARTUP_MSG:   java = 1.8.0_151
    ************************************************************/
    18/06/07 06:25:02 INFO namenode.NameNode: createNameNode [-format]
    18/06/07 06:25:03 WARN common.Util: Path /usr/hadoop/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
    18/06/07 06:25:03 WARN common.Util: Path /usr/hadoop/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
    Formatting using clusterid: CID-923c0653-5a78-46ca-a788-6502dc43047d
    18/06/07 06:25:04 INFO namenode.FSNamesystem: No KeyProvider found.
    18/06/07 06:25:04 INFO namenode.FSNamesystem: fsLock is fair: true
    18/06/07 06:25:04 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
    18/06/07 06:25:04 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
    18/06/07 06:25:04 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
    18/06/07 06:25:04 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
    18/06/07 06:25:04 INFO blockmanagement.BlockManager: The block deletion will start around 2018 Jun 07 06:25:04
    18/06/07 06:25:04 INFO util.GSet: Computing capacity for map BlocksMap
    18/06/07 06:25:04 INFO util.GSet: VM type       = 64-bit
    18/06/07 06:25:04 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
    18/06/07 06:25:04 INFO util.GSet: capacity      = 2^21 = 2097152 entries
    18/06/07 06:25:04 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
    18/06/07 06:25:04 INFO blockmanagement.BlockManager: defaultReplication         = 3
    18/06/07 06:25:04 INFO blockmanagement.BlockManager: maxReplication             = 512
    18/06/07 06:25:04 INFO blockmanagement.BlockManager: minReplication             = 1
    18/06/07 06:25:04 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
    18/06/07 06:25:04 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
    18/06/07 06:25:04 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
    18/06/07 06:25:04 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
    18/06/07 06:25:04 INFO namenode.FSNamesystem: fsOwner             = Memento (auth:SIMPLE)
    18/06/07 06:25:04 INFO namenode.FSNamesystem: supergroup          = supergroup
    18/06/07 06:25:04 INFO namenode.FSNamesystem: isPermissionEnabled = true
    18/06/07 06:25:04 INFO namenode.FSNamesystem: HA Enabled: false
    18/06/07 06:25:04 INFO namenode.FSNamesystem: Append Enabled: true
    18/06/07 06:25:04 INFO util.GSet: Computing capacity for map INodeMap
    18/06/07 06:25:04 INFO util.GSet: VM type       = 64-bit
    18/06/07 06:25:04 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
    18/06/07 06:25:04 INFO util.GSet: capacity      = 2^20 = 1048576 entries
    18/06/07 06:25:04 INFO namenode.FSDirectory: ACLs enabled? false
    18/06/07 06:25:04 INFO namenode.FSDirectory: XAttrs enabled? true
    18/06/07 06:25:04 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
    18/06/07 06:25:04 INFO namenode.NameNode: Caching file names occuring more than 10 times
    18/06/07 06:25:04 INFO util.GSet: Computing capacity for map cachedBlocks
    18/06/07 06:25:04 INFO util.GSet: VM type       = 64-bit
    18/06/07 06:25:04 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
    18/06/07 06:25:04 INFO util.GSet: capacity      = 2^18 = 262144 entries
    18/06/07 06:25:04 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
    18/06/07 06:25:04 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
    18/06/07 06:25:04 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
    18/06/07 06:25:04 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
    18/06/07 06:25:04 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
    18/06/07 06:25:04 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
    18/06/07 06:25:04 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
    18/06/07 06:25:04 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
    18/06/07 06:25:04 INFO util.GSet: Computing capacity for map NameNodeRetryCache
    18/06/07 06:25:04 INFO util.GSet: VM type       = 64-bit
    18/06/07 06:25:04 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
    18/06/07 06:25:04 INFO util.GSet: capacity      = 2^15 = 32768 entries
    18/06/07 06:25:04 INFO namenode.FSImage: Allocated new BlockPoolId: BP-869377568-192.168.1.104-1528323904862
    18/06/07 06:25:04 INFO common.Storage: Storage directory C:\usr\hadoop\hdfs\name has been successfully formatted.
    18/06/07 06:25:04 INFO namenode.FSImageFormatProtobuf: Saving image file C:\usr\hadoop\hdfs\name\current\fsimage.ckpt_0000000000000000000 using no compression
    18/06/07 06:25:05 INFO namenode.FSImageFormatProtobuf: Image file C:\usr\hadoop\hdfs\name\current\fsimage.ckpt_0000000000000000000 of size 324 bytes saved in 0 seconds.
    18/06/07 06:25:05 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
    18/06/07 06:25:05 INFO util.ExitUtil: Exiting with status 0
    18/06/07 06:25:05 INFO namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at Memento-PC/192.168.1.104
    ************************************************************/

    V. Start Hadoop

    C:\Users\Memento>start-all.cmd
    This script is Deprecated. Instead use start-dfs.cmd and start-yarn.cmd
    starting yarn daemons

    If the following exception appears, saying the master address cannot be resolved:

    org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.IOException: Failed on local exception: java.net.SocketException: Unresolved address; Host Details : local host is: "master"; destination host is: (unknown):0

    Append a mapping for master to the C:\Windows\System32\drivers\etc\hosts file: 192.168.1.104    master
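
    Whether the hosts file now maps master can be checked by parsing its text. A sketch (hypothetical helper; pass it the file's contents) of that lookup:

```python
def resolve_from_hosts(hosts_text, hostname):
    """Return the IP mapped to hostname in hosts-file text, or None."""
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line:
            continue
        parts = line.split()
        if hostname in parts[1:]:
            return parts[0]
    return None
```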

    Then run the start command start-all.cmd again.

    Four command windows will then open, in order:

    1. Apache Hadoop Distribution - hadoop namenode

    2. Apache Hadoop Distribution - yarn resourcemanager

    3. Apache Hadoop Distribution - yarn nodemanager

    4. Apache Hadoop Distribution - hadoop datanode

    VI. Check the Running Processes with jps

    C:\Users\XXXXX>jps
    13460 Jps
    14676 NodeManager
    12444 NameNode
    14204 DataNode
    14348 ResourceManager
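
    A quick way to confirm all four daemons came up is to compare the jps listing against the expected set. A sketch (hypothetical helper) that parses output like the above:

```python
def missing_daemons(jps_output,
                    expected=("NameNode", "DataNode", "ResourceManager", "NodeManager")):
    """Return the expected Hadoop daemons absent from `jps` output."""
    running = {line.split()[1]
               for line in jps_output.strip().splitlines()
               if len(line.split()) > 1}
    return [d for d in expected if d not in running]
```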

    VII. MapReduce Jobs and HDFS Files

    In a browser, visit http://localhost:8088 (YARN ResourceManager UI) and http://localhost:50070 (HDFS NameNode UI):



    With that, the Hadoop environment setup on Windows is complete!

    Shutting down Hadoop

    C:\Users\XXXXX>stop-all.cmd
    This script is Deprecated. Instead use stop-dfs.cmd and stop-yarn.cmd
    SUCCESS: Sent termination signal to the process with PID 27204.
    SUCCESS: Sent termination signal to the process with PID 7884.
    stopping yarn daemons
    SUCCESS: Sent termination signal to the process with PID 20464.
    SUCCESS: Sent termination signal to the process with PID 12516.

    INFO: No tasks are running which match the specified criteria.

    References:

    winutils:https://github.com/steveloughran/winutils

    不想下火车的人 (blog): https://www.cnblogs.com/wuxun1997/p/6847950.html

    bin attachment download: https://pan.baidu.com/s/1XCTTQVKcsMoaLOLh4X4bhw

    By. Memento
