    [Big Data Series] Hadoop Commands Guide (translated from the official documentation)

    Hadoop Commands Guide

    Overview

    All of the Hadoop commands and subprojects follow the same basic structure:

    Usage: shellcommand [SHELL_OPTIONS] [COMMAND] [GENERIC_OPTIONS] [COMMAND_OPTIONS]

    FIELD              Description
    shellcommand       The command of the project being invoked. For example, Hadoop Common uses hadoop, HDFS uses hdfs, and YARN uses yarn.
    SHELL_OPTIONS      Options that the shell processes prior to executing Java.
    COMMAND            Action to perform.
    GENERIC_OPTIONS    The common set of options supported by multiple commands.
    COMMAND_OPTIONS    Various commands with their options are described in this document for the Hadoop Common sub-project. HDFS and YARN are covered in other documents.
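
    For example, a single invocation that uses all four fields might look like this (the configuration directory and NameNode address are illustrative):

    $ hadoop --config /etc/hadoop/conf fs -D fs.defaultFS=hdfs://nn1.example.com:9000 -ls /

    Here hadoop is the shellcommand, --config /etc/hadoop/conf is a SHELL_OPTION, fs is the COMMAND, -D fs.defaultFS=... is a GENERIC_OPTION, and -ls / is the COMMAND_OPTIONS.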


    Shell Options

    All of the shell commands will accept a common set of options. For some commands, these options are ignored. For example, passing --hostnames to a command that only executes on a single host has no effect.

    SHELL_OPTION          Description
    --buildpaths          Enables developer versions of jars.
    --config confdir      Overwrites the default Configuration directory. Default is $HADOOP_HOME/etc/hadoop.
    --daemon mode         If the command supports daemonization (e.g., hdfs namenode), execute in the appropriate mode. Supported modes are start (start the process in daemon mode), stop (stop the process), and status (determine the active status of the process; status returns an LSB-compliant result code). If no option is provided, commands that support daemonization will run in the foreground. For commands that do not support daemonization, this option is ignored.
    --debug               Enables shell-level configuration debugging information.
    --help                Shell script usage information.
    --hostnames           When --workers is used, override the workers file with a space-delimited list of hostnames on which to execute a multi-host subcommand. If --workers is not used, this option is ignored.
    --hosts               When --workers is used, override the workers file with another file that contains a list of hostnames on which to execute a multi-host subcommand. If --workers is not used, this option is ignored.
    --loglevel loglevel   Overrides the log level. Valid log levels are FATAL, ERROR, WARN, INFO, DEBUG, and TRACE. Default is INFO.
    --workers             If possible, execute this command on all hosts in the workers file.
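
    A sketch of how these options combine (the hostnames are illustrative):

    $ hadoop --loglevel DEBUG --config /etc/hadoop/conf version
    $ hdfs --daemon start namenode
    $ hdfs --workers --hostnames "nn1.example.com dn1.example.com" --daemon stop datanode

    The first command raises the log level for a single invocation, the second starts the NameNode as a daemon, and the third stops the DataNode on only the two listed hosts rather than on every host in the workers file.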


    Generic Options

    Many subcommands honor a common set of configuration options to alter their behavior:

    GENERIC_OPTION                                Description
    -archives <comma separated list of archives>  Specify comma separated archives to be unarchived on the compute machines. Applies only to job.
    -conf <configuration file>                    Specify an application configuration file.
    -D <property>=<value>                         Use value for given property.
    -files <comma separated list of files>        Specify comma separated files to be copied to the map reduce cluster. Applies only to job.
    -fs <file:///> or <hdfs://namenode:port>      Specify default filesystem URL to use. Overrides 'fs.defaultFS' property from configurations.
    -jt <local> or <resourcemanager:port>         Specify a ResourceManager. Applies only to job.
    -libjars <comma separated list of jars>       Specify comma separated jar files to include in the classpath. Applies only to job.
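
    For example, a job submission might combine several generic options (the jar, file, and path names are illustrative; wordcount ships in the examples jar). Generic options are parsed by GenericOptionsParser, so they are only honored by applications that use it (e.g., via ToolRunner), and they must appear before the application's own arguments:

    $ hadoop jar hadoop-mapreduce-examples.jar wordcount \
        -D mapreduce.job.reduces=4 \
        -files dictionary.txt \
        -libjars parser.jar \
        /input /output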


    Hadoop Common Commands

    All of these commands are executed from the hadoop shell command. They have been broken up into User Commands and Administration Commands.

    User Commands

    Commands useful for users of a hadoop cluster.

    archive

    Creates a hadoop archive. More information can be found at Hadoop Archives Guide.

    checknative

    Usage: hadoop checknative [-a] [-h]

    COMMAND_OPTION    Description
    -a                Check all libraries are available.
    -h                Print help.

    This command checks the availability of the Hadoop native code. See Native Libraries for more information. By default, this command only checks the availability of libhadoop.

    classpath

    Usage: hadoop classpath [--glob |--jar <path> |-h |--help]

    COMMAND_OPTION    Description
    --glob            Expand wildcards.
    --jar path        Write classpath as manifest in jar named path.
    -h, --help        Print help.

    Prints the class path needed to get the Hadoop jar and the required libraries. If called without arguments, then prints the classpath set up by the command scripts, which is likely to contain wildcards in the classpath entries. Additional options print the classpath after wildcard expansion or write the classpath into the manifest of a jar file. The latter is useful in environments where wildcards cannot be used and the expanded classpath exceeds the maximum supported command line length.
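
    For example (the jar path, myapp.jar, and com.example.MyTool are hypothetical):

    $ hadoop classpath
    $ hadoop classpath --glob
    $ hadoop classpath --jar /tmp/hadoop-classpath.jar
    $ java -cp /tmp/hadoop-classpath.jar:myapp.jar com.example.MyTool

    The last two lines show the manifest workaround described above: the generated jar carries the expanded classpath in its manifest, so a client JVM can reference it without hitting command-line length limits.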

    credential

    Usage: hadoop credential <subcommand> [options]

    COMMAND_OPTION    Description
    create alias [-provider provider-path] [-strict] [-value credential-value]    Prompts the user for a credential to be stored as the given alias. The hadoop.security.credential.provider.path within the core-site.xml file will be used unless a -provider is indicated. The -strict flag will cause the command to fail if the provider uses a default password. Use the -value flag to supply the credential value (a.k.a. the alias password) instead of being prompted.
    delete alias [-provider provider-path] [-strict] [-f]    Deletes the credential with the provided alias. The hadoop.security.credential.provider.path within the core-site.xml file will be used unless a -provider is indicated. The -strict flag will cause the command to fail if the provider uses a default password. The command asks for confirmation unless -f is specified.
    list [-provider provider-path] [-strict]    Lists all of the credential aliases. The hadoop.security.credential.provider.path within the core-site.xml file will be used unless a -provider is indicated. The -strict flag will cause the command to fail if the provider uses a default password.

    Command to manage credentials, passwords and secrets within credential providers.

    The CredentialProvider API in Hadoop allows for the separation of applications and how they store their required passwords/secrets. In order to indicate a particular provider type and location, the user must provide the hadoop.security.credential.provider.path configuration element in core-site.xml or use the command line option -provider on each of the following commands. This provider path is a comma-separated list of URLs that indicates the type and location of a list of providers that should be consulted. For example, the following path: user:///,jceks://file/tmp/test.jceks,jceks://hdfs@nn1.example.com/my/path/test.jceks

    indicates that the current user’s credentials file should be consulted through the User Provider, that the local file located at /tmp/test.jceks is a Java Keystore Provider and that the file located within HDFS at nn1.example.com/my/path/test.jceks is also a store for a Java Keystore Provider.

    When utilizing the credential command it will often be for provisioning a password or secret to a particular credential store provider. In order to explicitly indicate which provider store to use the -provider option should be used. Otherwise, given a path of multiple providers, the first non-transient provider will be used. This may or may not be the one that you intended.

    Providers frequently require that a password or other secret is supplied. If the provider requires a password and is unable to find one, it will use a default password and emit a warning message that the default password is being used. If the -strict flag is supplied, the warning message becomes an error message and the command returns immediately with an error status.

    Example: hadoop credential list -provider jceks://file/tmp/test.jceks
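
    A fuller provisioning sketch, reusing the provider above (the alias mydb.password.alias is hypothetical):

    $ hadoop credential create mydb.password.alias -provider jceks://file/tmp/test.jceks
    $ hadoop credential list -provider jceks://file/tmp/test.jceks
    $ hadoop credential delete mydb.password.alias -provider jceks://file/tmp/test.jceks -f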

    distch

    Usage: hadoop distch [-f urilist_url] [-i] [-log logdir] path:owner:group:permissions

    COMMAND_OPTION    Description
    -f                List of objects to change.
    -i                Ignore failures.
    -log              Directory to log output.

    Change the ownership and permissions on many files at once.
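
    For example, the following hypothetical invocation logs to /tmp/distch-logs and sets owner etl, group hadoop, and permissions 750 on /data/raw:

    $ hadoop distch -log /tmp/distch-logs /data/raw:etl:hadoop:750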

    distcp

    Copy file or directories recursively. More information can be found at Hadoop DistCp Guide.

    dtutil

    Usage: hadoop dtutil [-keytab keytab_file -principal principal_name] subcommand [-format (java|protobuf)] [-alias alias] [-renewer renewer] filename…

    Utility to fetch and manage hadoop delegation tokens inside credentials files. It is intended to replace the simpler command fetchdt. There are multiple subcommands, each with their own flags and options.

    For every subcommand that writes out a file, the -format option will specify the internal format to use. java is the legacy format that matches fetchdt. The default is protobuf.

    For every subcommand that connects to a service, convenience flags are provided to specify the kerberos principal name and keytab file to use for auth.

    SUBCOMMAND    Description
    print 
       [-alias alias ] 
       filename [ filename2 ...]
    Print out the fields in the tokens contained in filename (and filename2 …). 
    If alias is specified, print only tokens matching alias. Otherwise, print all tokens.
    get URL 
       [-service scheme ] 
       [-format (java|protobuf)] 
       [-alias alias ] 
       [-renewer renewer ] 
       filename
    Fetch a token from service at URL and place it in filename
    URL is required and must immediately follow get.
    URL is the service URL, e.g. hdfs://localhost:9000
    alias will overwrite the service field in the token. 
    It is intended for hosts that have external and internal names, e.g. firewall.com:14000
    filename should come last and is the name of the token file. 
    It will be created if it does not exist. Otherwise, token(s) are added to existing file. 
    The -service flag should only be used with a URL which starts with http or https
    The following are equivalent: hdfs://localhost:9000/ vs. http://localhost:9000 -service hdfs
    append 
       [-format (java|protobuf)] 
       filename filename2 [ filename3...]
    Append the contents of the first N filenames onto the last filename. 
    When tokens with common service fields are present in multiple files, earlier files’ tokens are overwritten.
     That is, tokens present in the last file are always preserved.
    remove -alias alias 
       [-format (java|protobuf)] 
       filename [ filename2 ...]
    From each file specified, remove the tokens matching alias and write out each file using specified format. 
    alias must be specified.
    cancel -alias alias 
       [-format (java|protobuf)] 
       filename [ filename2 ...]
    Just like remove, except the tokens are also cancelled using the service specified in the token object. 
    alias must be specified.
    renew -alias alias 
       [-format (java|protobuf)] 
       filename [ filename2 ...]
    For each file specified, renew the tokens matching alias and write out each file using specified format. 
    alias must be specified.
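
    A minimal fetch-and-inspect sequence might look like this (the keytab, principal, NameNode address, and file name are illustrative):

    $ hadoop dtutil -keytab user.keytab -principal user@EXAMPLE.COM get hdfs://nn1.example.com:9000 -renewer yarn tokens.bin
    $ hadoop dtutil print tokens.bin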

    fs

    This command is documented in the File System Shell Guide. It is a synonym for hdfs dfs when HDFS is in use.

    gridmix

    Gridmix is a benchmark tool for Hadoop cluster. More information can be found in the Gridmix Guide.

    jar

    Usage: hadoop jar <jar> [mainClass] args...

    Runs a jar file.

    Use yarn jar to launch YARN applications instead.
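
    For example (the jar, class, and arguments are hypothetical):

    $ hadoop jar myapp.jar com.example.MyTool /input /output

    If the jar's manifest declares a Main-Class, the mainClass argument may be omitted.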

    jnipath

    Usage: hadoop jnipath

    Print the computed java.library.path.

    kerbname

    Usage: hadoop kerbname principal

    Convert the named principal via the auth_to_local rules to the Hadoop user name.

    Example: hadoop kerbname user@EXAMPLE.COM

    key

    Usage: hadoop key <subcommand> [options]

    COMMAND_OPTION    Description
    create keyname [-cipher cipher] [-size size] [-description description] [-attr attribute=value] [-provider provider] [-strict] [-help] Creates a new key for the name specified by the keyname argument within the provider specified by the -provider argument. The -strict flag will cause the command to fail if the provider uses a default password. You may specify a cipher with the -cipher argument. The default cipher is currently “AES/CTR/NoPadding”. The default keysize is 128. You may specify the requested key length using the -size argument. Arbitrary attribute=value style attributes may be specified using the -attr argument. -attr may be specified multiple times, once per attribute.
    roll keyname [-provider provider] [-strict] [-help] Creates a new version for the specified key within the provider indicated using the -provider argument. The -strict flag will cause the command to fail if the provider uses a default password.
    delete keyname [-provider provider] [-strict] [-f] [-help] Deletes all versions of the key specified by the keyname argument from within the provider specified by -provider. The -strict flag will cause the command to fail if the provider uses a default password. The command asks for user confirmation unless -f is specified.
    list [-provider provider] [-strict] [-metadata] [-help] Displays the keynames contained within a particular provider as configured in core-site.xml or specified with the -provider argument. The -strict flag will cause the command to fail if the provider uses a default password. -metadata displays the metadata.
    -help Prints usage of this command.

    Manage keys via the KeyProvider. For details on KeyProviders, see the Transparent Encryption Guide.

    Providers frequently require that a password or other secret is supplied. If the provider requires a password and is unable to find one, it will use a default password and emit a warning message that the default password is being used. If the -strict flag is supplied, the warning message becomes an error message and the command returns immediately with an error status.

    NOTE: Some KeyProviders (e.g. org.apache.hadoop.crypto.key.JavaKeyStoreProvider) do not support uppercase key names.

    NOTE: Some KeyProviders do not directly execute a key deletion (e.g., they perform a soft delete instead, or delay the actual deletion, to prevent mistakes). In these cases, one may encounter errors when creating or deleting a key with the same name after deleting it. Please check the underlying KeyProvider for details.
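
    For example, against a KMS-backed provider (the KMS address is illustrative; note the lowercase key name, per the first NOTE above):

    $ hadoop key create mykey -size 256 -provider kms://http@kms.example.com:9600/kms
    $ hadoop key list -metadata -provider kms://http@kms.example.com:9600/kms
    $ hadoop key roll mykey -provider kms://http@kms.example.com:9600/kms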

    kms

    Usage: hadoop kms

    Run KMS, the Key Management Server.

    trace

    View and modify Hadoop tracing settings. See the Tracing Guide.

    version

    Usage: hadoop version

    Prints the version.

    CLASSNAME

    Usage: hadoop CLASSNAME

    Runs the class named CLASSNAME. The class must be part of a package.
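
    For example, org.apache.hadoop.fs.FsShell ships with Hadoop and has a main method, so the following is equivalent to hadoop fs -ls /:

    $ hadoop org.apache.hadoop.fs.FsShell -ls /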

    envvars

    Usage: hadoop envvars

    Display computed Hadoop environment variables.


    Administration Commands

    Commands useful for administrators of a hadoop cluster.

    daemonlog

    Usage:

    hadoop daemonlog -getlevel <host:port> <classname> [-protocol (http|https)]
    hadoop daemonlog -setlevel <host:port> <classname> <level> [-protocol (http|https)]
    
    COMMAND_OPTION    Description
    -getlevel host:port classname [-protocol (http|https)] Prints the log level of the log identified by a qualified classname, in the daemon running at host:port. The -protocol flag specifies the protocol for connection.
    -setlevel host:port classname level [-protocol (http|https)] Sets the log level of the log identified by a qualified classname, in the daemon running at host:port. The -protocol flag specifies the protocol for connection.

    Get/Set the log level for a Log identified by a qualified class name in the daemon dynamically. By default, the command sends an HTTP request, but this can be overridden by using the argument -protocol https to send an HTTPS request.

    Example:

    $ bin/hadoop daemonlog -setlevel 127.0.0.1:9870 org.apache.hadoop.hdfs.server.namenode.NameNode DEBUG
    $ bin/hadoop daemonlog -getlevel 127.0.0.1:9871 org.apache.hadoop.hdfs.server.namenode.NameNode DEBUG -protocol https
    

    Note that the setting is not permanent and will be reset when the daemon is restarted. This command works by sending an HTTP/HTTPS request to the daemon's internal Jetty servlet, so it supports the following daemons:

    • Common
      • key management server
    • HDFS
      • name node
      • secondary name node
      • data node
      • journal node
      • HttpFS server
    • YARN
      • resource manager
      • node manager
      • Timeline server

    Files

    etc/hadoop/hadoop-env.sh

    This file stores the global settings used by all Hadoop shell commands.

    etc/hadoop/hadoop-user-functions.sh

    This file allows for advanced users to override some shell functionality.

    ~/.hadooprc

    This stores the personal environment for an individual user. It is processed after the hadoop-env.sh and hadoop-user-functions.sh files and can contain the same settings.
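
    A minimal ~/.hadooprc sketch (the values are illustrative; HADOOP_CLIENT_OPTS and HADOOP_LOG_DIR are standard hadoop-env.sh variables):

    # Give client-side JVMs more heap and keep personal logs under $HOME.
    export HADOOP_CLIENT_OPTS="-Xmx2g ${HADOOP_CLIENT_OPTS}"
    export HADOOP_LOG_DIR="${HOME}/hadoop-logs"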

