  Learning Hadoop from Scratch (12): Hadoop Commands, Part 2


    The copyright of this article is jointly held by mephisto and 博客园 (cnblogs). You are welcome to repost it, but you must keep this statement and include a link to the original article. Thank you for your cooperation.

    This article was written by me (mephisto). SourceLink

      In the previous article we gave a brief listing of the Hadoop commands, but there are so many of them that part of the list was left unfinished. The official documentation is almost entirely in English, so I can only offer a rough translation; please bear with me.

      Below, we continue with this second part of the Hadoop commands.

    HDFS Commands

    1: Introduction

      All HDFS commands are invoked by the bin/hdfs script. Running the hdfs script without any arguments prints the description for all commands.

      Usage: hdfs [SHELL_OPTIONS] COMMAND [GENERIC_OPTIONS] [COMMAND_OPTIONS]

      Hadoop has an option parsing framework that handles generic options as well as the class to run.

    COMMAND_OPTIONS            Description
    --config, --loglevel       The common set of shell options. These are documented on the Commands Manual page.
    GENERIC_OPTIONS            The common set of options supported by multiple commands. See the Hadoop Commands Manual for more information.
    COMMAND COMMAND_OPTIONS    Various commands with their options are described in the following sections. The commands have been grouped into User Commands and Administration Commands.
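
      As a quick illustration of the layout above (the configuration directory in the second line is a hypothetical example), a shell option goes before the command name and the command's own options come after it:

      hdfs                                       # with no arguments, prints the description of all commands
      hdfs --config /etc/hadoop/conf dfs -ls /   # shell option first, then the dfs command and its options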

    User Commands

    1: Introduction

      Commands useful for users of a Hadoop cluster.

    2: classpath

      Prints the class path needed to get the Hadoop jar and the required libraries.

      Usage: hdfs classpath

    3: dfs

      Runs a filesystem command on the file system supported in Hadoop. The various COMMAND_OPTIONS can be found in the File System Shell Guide.

      Usage: hdfs dfs [COMMAND [COMMAND_OPTIONS]]
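
      For example, a few common file system operations might look like this (the /user/alice path and the local file name are hypothetical):

      hdfs dfs -mkdir -p /user/alice          # create a directory in HDFS
      hdfs dfs -put data.txt /user/alice/     # copy a local file into HDFS
      hdfs dfs -ls /user/alice                # list the directory contents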

    4: fetchdt

      Gets a delegation token from the NameNode. See fetchdt for more info.

      Usage: hdfs fetchdt [--webservice <namenode_http_addr>] <path>
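
      A hypothetical example (the NameNode HTTP address and the local token path are made up); the token is written to the given local file:

      hdfs fetchdt --webservice http://nn.example.com:50070 /tmp/my.delegation.token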

    5: fsck

      Runs the HDFS filesystem checking utility. See fsck for more info.

      Usage: hdfs fsck <path>
                  [-list-corruptfileblocks |
                  [-move | -delete | -openforwrite]
                  [-files [-blocks [-locations | -racks]]]]
                  [-includeSnapshots]
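
      For example, to check a directory and print per-file, block and location details (the path is hypothetical):

      hdfs fsck /user/alice -files -blocks -locations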

    6: getconf

      Gets configuration information from the configuration directory, post-processing it.

      Usage: hdfs getconf -namenodes
           hdfs getconf -secondaryNameNodes
           hdfs getconf -backupNodes
           hdfs getconf -includeFile
           hdfs getconf -excludeFile
           hdfs getconf -nnRpcAddresses
           hdfs getconf -confKey [key]
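
      For example (dfs.replication is just one possible key to query):

      hdfs getconf -namenodes                  # list the NameNodes in the cluster
      hdfs getconf -confKey dfs.replication    # print the value of a single configuration key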

    7: groups

      Returns the group information given one or more usernames.

      Usage: hdfs groups [username ...]
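
      For example, with hypothetical usernames:

      hdfs groups alice bob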

    8: lsSnapshottableDir

      Gets the list of snapshottable directories. When run as a super user, it returns all snapshottable directories. Otherwise it returns those directories that are owned by the current user.

      Usage: hdfs lsSnapshottableDir [-help]

    9: jmxget

      Dumps JMX information from a service.

      Usage: hdfs jmxget [-localVM ConnectorURL | -port port | -server mbeanserver | -service service]

    10: oev

      Hadoop offline edits viewer.

      Usage: hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE
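
      A hypothetical example (the edits file name is made up); the default processor writes XML:

      hdfs oev -i edits_0000000000000000001-0000000000000000100 -o edits.xml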

    11: oiv

      Hadoop offline image viewer for newer image files.

      Usage: hdfs oiv [OPTIONS] -i INPUT_FILE
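
      A hypothetical example (the fsimage file name is made up) that dumps the image as XML:

      hdfs oiv -p XML -i fsimage_0000000000000000123 -o fsimage.xml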

    12: oiv_legacy

      Hadoop offline image viewer for older versions of Hadoop.

      Usage: hdfs oiv_legacy [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE

    13: snapshotDiff

      Determines the difference between HDFS snapshots. See the HDFS Snapshot Documentation for more information.

      Usage: hdfs snapshotDiff <path> <fromSnapshot> <toSnapshot>
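
      For example, assuming /data is a snapshottable directory that already has two snapshots named s1 and s2 (all hypothetical names):

      hdfs snapshotDiff /data s1 s2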

    14: version

      Prints the version.

      Usage: hdfs version

    Administration Commands

    1: Introduction

      Commands useful for administrators of a Hadoop cluster.

    2: balancer

      Runs a cluster balancing utility. An administrator can simply press Ctrl-C to stop the rebalancing process. See Balancer for more details.

      Usage: hdfs balancer
                  [-threshold <threshold>]
                  [-policy <policy>]
                  [-exclude [-f <hosts-file> | <comma-separated list of hosts>]]
                  [-include [-f <hosts-file> | <comma-separated list of hosts>]]
                  [-idleiterations <idleiterations>]
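
      For example, to rebalance until each DataNode's utilization is within 5% of the cluster average (the threshold value is only an illustration):

      hdfs balancer -threshold 5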

    3: crypto

      See the HDFS Transparent Encryption Documentation for more information.

      Usage: hdfs crypto -createZone -keyName <keyName> -path <path>
           hdfs crypto -help <command-name>
           hdfs crypto -listZones
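
      A hypothetical example (the key name and path are made up, and the key must already exist in the configured KMS):

      hdfs crypto -createZone -keyName mykey -path /secure   # turn /secure into an encryption zone
      hdfs crypto -listZones                                 # list all encryption zones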

    4: datanode

      Runs an HDFS datanode.

      Usage: hdfs datanode [-regular | -rollback | -rollingupgrade rollback]

    5: dfsadmin

      Runs an HDFS dfsadmin client.

    Usage: hdfs dfsadmin [GENERIC_OPTIONS]
              [-report [-live] [-dead] [-decommissioning]]
              [-safemode enter | leave | get | wait]
              [-saveNamespace]
              [-rollEdits]
              [-restoreFailedStorage true |false |check]
              [-refreshNodes]
              [-setQuota <quota> <dirname>...<dirname>]
              [-clrQuota <dirname>...<dirname>]
              [-setSpaceQuota <quota> <dirname>...<dirname>]
              [-clrSpaceQuota <dirname>...<dirname>]
              [-setStoragePolicy <path> <policyName>]
              [-getStoragePolicy <path>]
              [-finalizeUpgrade]
              [-rollingUpgrade [<query> |<prepare> |<finalize>]]
              [-metasave filename]
              [-refreshServiceAcl]
              [-refreshUserToGroupsMappings]
              [-refreshSuperUserGroupsConfiguration]
              [-refreshCallQueue]
              [-refresh <host:ipc_port> <key> [arg1..argn]]
              [-reconfig <datanode |...> <host:ipc_port> <start |status>]
              [-printTopology]
              [-refreshNamenodes datanodehost:port]
              [-deleteBlockPool datanode-host:port blockpoolId [force]]
              [-setBalancerBandwidth <bandwidth in bytes per second>]
              [-allowSnapshot <snapshotDir>]
              [-disallowSnapshot <snapshotDir>]
              [-fetchImage <local directory>]
              [-shutdownDatanode <datanode_host:ipc_port> [upgrade]]
              [-getDatanodeInfo <datanode_host:ipc_port>]
              [-triggerBlockReport [-incremental] <datanode_host:ipc_port>]
              [-help [cmd]]
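
      For example, two read-only subcommands that are safe to try on a running cluster:

      hdfs dfsadmin -report          # basic file system statistics and the state of each DataNode
      hdfs dfsadmin -safemode get    # report whether the NameNode is currently in safe mode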

    6: haadmin

      See HDFS HA with NFS or HDFS HA with QJM for more information on this command.

    Usage: hdfs haadmin -checkHealth <serviceId>
        hdfs haadmin -failover [--forcefence] [--forceactive] <serviceId> <serviceId>
        hdfs haadmin -getServiceState <serviceId>
        hdfs haadmin -help <command>
        hdfs haadmin -transitionToActive <serviceId> [--forceactive]
        hdfs haadmin -transitionToStandby <serviceId>
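
      A hypothetical example, assuming the NameNode service IDs are nn1 and nn2 (the real IDs come from your dfs.ha.namenodes configuration):

      hdfs haadmin -getServiceState nn1      # prints "active" or "standby"
      hdfs haadmin -failover nn1 nn2         # fail over from nn1 to nn2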

    7: journalnode

      This command starts a journalnode for use with HDFS HA with QJM.

      Usage: hdfs journalnode

    8: mover

      Runs the data migration utility. See Mover for more details.

      Usage: hdfs mover [-p <files/dirs> | -f <local file name>]

    9: namenode

      Runs the namenode. More info about the upgrade, rollback and finalize is at Upgrade Rollback.

      Usage: hdfs namenode [-backup] |
              [-checkpoint] |
              [-format [-clusterid cid ] [-force] [-nonInteractive] ] |
              [-upgrade [-clusterid cid] [-renameReserved<k-v pairs>] ] |
              [-upgradeOnly [-clusterid cid] [-renameReserved<k-v pairs>] ] |
              [-rollback] |
              [-rollingUpgrade <downgrade |rollback> ] |
              [-finalize] |
              [-importCheckpoint] |
              [-initializeSharedEdits] |
              [-bootstrapStandby] |
              [-recover [-force] ] |
              [-metadataVersion ]
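
      For example (be careful: -format initializes a brand-new, empty namespace and should only be run when first setting up a cluster):

      hdfs namenode -format    # format a new file system (destructive)
      hdfs namenode            # start the namenode in the foreground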

    10: nfs3

      This command starts the NFS3 gateway for use with the HDFS NFS3 Service.

      Usage: hdfs nfs3

    11: portmap

      This command starts the RPC portmap for use with the HDFS NFS3 Service.

      Usage: hdfs portmap

    12: secondarynamenode

      Runs the HDFS secondary namenode. See Secondary Namenode for more info.

      Usage: hdfs secondarynamenode [-checkpoint [force]] | [-format] | [-geteditsize]

    13: storagepolicies

      Lists out all storage policies. See the HDFS Storage Policy Documentation for more information.

    14: zkfc

      This command starts a Zookeeper Failover Controller process for use with HDFS HA with QJM.

      Usage: hdfs zkfc [-formatZK [-force] [-nonInteractive]]
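
      For example, initializing the HA state in ZooKeeper and then starting the controller:

      hdfs zkfc -formatZK      # create the znode used by automatic failover (run once per nameservice)
      hdfs zkfc                # start the failover controller in the foreground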

    Debug Commands

    1: Introduction

      Useful commands to help administrators debug HDFS issues, such as validating block files and calling recoverLease.

    2: verify

      Verifies HDFS metadata and block files. If a block file is specified, we will verify that the checksums in the metadata file match the block file.

      Usage: hdfs debug verify [-meta <metadata-file>] [-block <block-file>]

    3: recoverLease

      Recovers the lease on the specified path. The path must reside on an HDFS file system. The default number of retries is 1.

      Usage: hdfs debug recoverLease [-path <path>] [-retries <num-retries>]
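
      A hypothetical example (the file path is made up):

      hdfs debug recoverLease -path /user/alice/data.log -retries 3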

    --------------------------------------------------------------------

      This concludes this installment.

    References

    Apache Hadoop HDFS commands: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html

    Apache Hadoop 1.0.4 commands manual (Chinese): http://hadoop.apache.org/docs/r1.0.4/cn/commands_manual.html

    Series Index

      [Source] Learning Hadoop from Scratch: Series Index

     

