HBase 0.92.1 Replication

    Source cluster

    Hostname             Services
    sht-sgmhadoopnn-01 Master,NameNode,JobTracker
    sht-sgmhadoopdn-01 RegionServer,DataNode,TaskTracker,ZK
    sht-sgmhadoopdn-02 RegionServer,DataNode,TaskTracker,ZK
    sht-sgmhadoopdn-03 RegionServer,DataNode,TaskTracker,ZK
    sht-sgmhadoopdn-04 RegionServer,DataNode,TaskTracker,ZK

    Target cluster (new)

    Hostname             Services
    ec2d-newcntprocnn-01 Master,NameNode,JobTracker
    ec2d-newcntprocdn-01 RegionServer,DataNode,TaskTracker,ZK
    ec2d-newcntprocdn-02 RegionServer,DataNode,TaskTracker,ZK
    ec2d-newcntprocdn-03 RegionServer,DataNode,TaskTracker,ZK
    ec2d-newcntprocdn-04 RegionServer,DataNode,TaskTracker,ZK

    Goal: replicate the table dept from the source cluster to the target cluster.

    1. Edit hbase-site.xml on every node of both clusters, add the following property, and restart both clusters:

    <property>
    <name>hbase.replication</name>
    <value>true</value>
    </property>
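
    One way to push this change out and bounce a cluster, as a rough sketch assuming passwordless SSH and an identical $HBASE_HOME path on every node (hostnames from the tables above; run the equivalent on the target cluster with its own hostnames):

    # Copy the edited config to each region server, then restart the cluster.
    for h in sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03 sht-sgmhadoopdn-04; do
        scp $HBASE_HOME/conf/hbase-site.xml $h:$HBASE_HOME/conf/
    done
    $HBASE_HOME/bin/stop-hbase.sh && $HBASE_HOME/bin/start-hbase.sh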

    2. Add every hostname-to-IP mapping to /etc/hosts on all nodes of both clusters:

    172.16.101.55    sht-sgmhadoopnn-01
    172.16.101.58    sht-sgmhadoopdn-01
    172.16.101.59    sht-sgmhadoopdn-02
    172.16.101.60    sht-sgmhadoopdn-03
    172.16.101.66    sht-sgmhadoopdn-04
    10.189.100.146 ec2d-newcntprocnn-01
    10.189.102.101 ec2d-newcntprocdn-01
    10.189.102.94  ec2d-newcntprocdn-02
    10.189.102.236 ec2d-newcntprocdn-03
    10.189.102.176 ec2d-newcntprocdn-04
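
    A quick sanity check that every peer hostname resolves and responds from the current node (a sketch; repeat on each host):

    for h in sht-sgmhadoopnn-01 sht-sgmhadoopdn-0{1..4} \
             ec2d-newcntprocnn-01 ec2d-newcntprocdn-0{1..4}; do
        ping -c 1 "$h" > /dev/null && echo "$h ok" || echo "$h FAILED"
    done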

    3. Create the table dept on the source cluster, and create the same table structure on the target cluster:

    create 'dept', { NAME => 'cf1', REPLICATION_SCOPE => 1}

    For an existing table, alter the column family and set REPLICATION_SCOPE => '1' to enable replication for that family. Note that replication operates per column family, not per table:

    disable 'dept'
    alter 'dept', NAME => 'cf1', REPLICATION_SCOPE => '1'
    enable 'dept'
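
    To confirm the change took effect, describe the table; the cf1 family should now report the scope (shell output abbreviated):

    describe 'dept'
    ... {NAME => 'cf1', REPLICATION_SCOPE => '1', ...} ...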

    4. On the source cluster, register the target cluster as a peer and start replication. The second argument to add_peer is the target cluster key: its ZooKeeper quorum, client port, and parent znode, joined as quorum:port:/znode:

    add_peer '1',"ec2d-newcntprocnn-01,ec2d-newcntprocdn-01,ec2d-newcntprocdn-02:2181:/hbase"
    start_replication
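
    The registration can be double-checked in ZooKeeper, where the peer is stored under the replication znode (a sketch using the zkcli wrapper bundled with HBase; the same /hbase/replication/peers path shows up in the log below):

    $ $HBASE_HOME/bin/hbase zkcli ls /hbase/replication/peers
    # expected to list the peer id added above, e.g. [1]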

    5. Insert a test row on the source cluster:

    put 'dept', 'row1', 'cf1:name', 'adams'
    put 'dept', 'row1', 'cf1:depart', 'research'
    put 'dept', 'row1', 'cf1:job', 'clerk'
    put 'dept', 'row1', 'cf1:id', '7876'
    put 'dept', 'row1', 'cf1:locate', 'dallas'
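
    Replication is asynchronous, so after a short delay the row should be visible from the target cluster's shell; scanning there should return row1 with the five cf1 cells inserted above:

    scan 'dept'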

    Note: replication only ships data written after the feature is enabled; data that existed before replication was enabled is not copied to the new cluster.
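
    Pre-existing rows therefore have to be moved with a separate bulk job. One option in this release is the CopyTable MapReduce job pointed at the same target cluster key (a sketch, run on the source cluster):

    $ hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
        --peer.adr=ec2d-newcntprocnn-01,ec2d-newcntprocdn-01,ec2d-newcntprocdn-02:2181:/hbase dept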

    6. Verify that the replicated data matches between the two clusters. Running the verifyrep driver without arguments prints its usage:

    export HADOOP_CLASSPATH=$HBASE_HOME/lib/guava-r09.jar
    
    $ hadoop jar $HBASE_HOME/hbase-0.92.1.jar verifyrep
    Usage: verifyrep [--starttime=X] [--stoptime=Y] [--families=A] <peerid> <tablename>
    
    Options:
     starttime    beginning of the time range
                  without endtime means from starttime to forever
     stoptime     end of the time range
     families     comma-separated list of families to copy
    
    Args:
     peerid       Id of the peer used for verification, must match the one given for replication
     tablename    Name of the table to verify
    
    Examples:
     To verify the data replicated from TestTable for a 1 hour window with peer #5 
     $ bin/hbase org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication --starttime=1265875194289 --stoptime=1265878794289 5 TestTable 
    Run the verification for the dept table against peer 1:

    $ hbase org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication 1 dept

    Output

    19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
    19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:host.name=sht-sgmhadoopnn-01
    19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_45
    19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
    19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/local/contentplatform/jdk1.6.0_45/jre
    19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/home/tnuser/hbase/bin/../conf:/home/tnuser/jdk/lib/tools.jar:/home/tnuser/hbase/bin/../hbase-0.92.1.jar:...(long list of HBase and Hadoop lib jars omitted)...:/home/tnuser/hbase/lib/guava-r09.jar
    19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/native/Linux-amd64-64:/home/tnuser/hbase/bin/../lib/native/Linux-amd64-64
    19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
    19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
    19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
    19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
    19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.0-514.el7.x86_64
    19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:user.name=tnuser
    19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/tnuser
    19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:user.dir=/usr/local/contentplatform/hbase-0.92.1
    19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-03:2181 sessionTimeout=60000 watcher=hconnection
    19/06/13 21:08:08 INFO zookeeper.ClientCnxn: Opening socket connection to server /172.16.101.59:2181
    19/06/13 21:08:08 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 4905@sht-sgmhadoopnn-01
    19/06/13 21:08:08 WARN client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
    19/06/13 21:08:08 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
    19/06/13 21:08:08 INFO zookeeper.ClientCnxn: Socket connection established to sht-sgmhadoopdn-02/172.16.101.59:2181, initiating session
    19/06/13 21:08:08 INFO zookeeper.ClientCnxn: Session establishment complete on server sht-sgmhadoopdn-02/172.16.101.59:2181, sessionid = 0x16b5083320f0007, negotiated timeout = 60000
    19/06/13 21:08:08 ERROR zookeeper.RecoverableZooKeeper: Node /hbase/replication/peers already exists and this is not a retry
    19/06/13 21:08:08 ERROR zookeeper.RecoverableZooKeeper: Node /hbase/replication/rs already exists and this is not a retry
    19/06/13 21:08:08 INFO replication.ReplicationZookeeper: Replication is now started
    19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=ec2d-newcntprocdn-01:2181,ec2d-newcntprocnn-01:2181,ec2d-newcntprocdn-02:2181 sessionTimeout=60000 watcher=connection to cluster: ec2d-newcntprocnn-01,ec2d-newcntprocdn-01,ec2d-newcntprocdn-02:2181:/hbase
    19/06/13 21:08:08 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 4905@sht-sgmhadoopnn-01
    19/06/13 21:08:08 INFO zookeeper.ClientCnxn: Opening socket connection to server /10.189.102.101:2181
    19/06/13 21:08:08 INFO client.HConnectionManager$HConnectionImplementation: Closed zookeeper sessionid=0x16b5083320f0007
    19/06/13 21:08:08 WARN client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
    19/06/13 21:08:08 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
    19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Session: 0x16b5083320f0007 closed
    19/06/13 21:08:08 INFO zookeeper.ClientCnxn: EventThread shut down
    19/06/13 21:08:09 INFO zookeeper.ClientCnxn: Socket connection established to ec2d-newcntprocdn-01/10.189.102.101:2181, initiating session
    19/06/13 21:08:09 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
    19/06/13 21:08:09 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
    19/06/13 21:08:09 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
    19/06/13 21:08:09 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
    19/06/13 21:08:09 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
    19/06/13 21:08:09 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
    19/06/13 21:08:09 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
    19/06/13 21:08:09 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
    19/06/13 21:08:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ec2d-newcntprocdn-01/10.189.102.101:2181, sessionid = 0x16b4fc6131e000f, negotiated timeout = 60000
    19/06/13 21:08:09 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
    19/06/13 21:08:09 DEBUG client.HConnectionManager$HConnectionImplementation: The connection to null was closed by the finalize method.
    19/06/13 21:08:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-03:2181 sessionTimeout=60000 watcher=hconnection
    19/06/13 21:08:10 INFO zookeeper.ClientCnxn: Opening socket connection to server /172.16.101.60:2181
    19/06/13 21:08:10 WARN client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
    19/06/13 21:08:10 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
    19/06/13 21:08:10 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 4905@sht-sgmhadoopnn-01
    19/06/13 21:08:10 INFO zookeeper.ClientCnxn: Socket connection established to sht-sgmhadoopdn-03/172.16.101.60:2181, initiating session
    19/06/13 21:08:10 INFO zookeeper.ClientCnxn: Session establishment complete on server sht-sgmhadoopdn-03/172.16.101.60:2181, sessionid = 0x26b5083323d0005, negotiated timeout = 60000
    19/06/13 21:08:10 DEBUG client.HConnectionManager$HConnectionImplementation: Lookedup root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@61578aab; serverName=sht-sgmhadoopdn-02,60020,1560423906407
    19/06/13 21:08:10 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for .META.,,1.1028785192 is sht-sgmhadoopdn-02:60020
    19/06/13 21:08:10 DEBUG client.MetaScanner: Scanning .META. starting at row=dept,,00000000000000 for max=10 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@61578aab
    19/06/13 21:08:10 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for dept,,1560430116142.2ba8059eaf45d5048f418b8b2ef00600. is sht-sgmhadoopdn-01:60020
    19/06/13 21:08:10 DEBUG client.MetaScanner: Scanning .META. starting at row=dept,,00000000000000 for max=2147483647 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@61578aab
    19/06/13 21:08:10 DEBUG mapreduce.TableInputFormatBase: getSplits: split -> 0 -> sht-sgmhadoopdn-01:,
    19/06/13 21:08:11 INFO mapred.JobClient: Running job: job_201906081831_0002
    19/06/13 21:08:12 INFO mapred.JobClient:  map 0% reduce 0%
    19/06/13 21:08:31 INFO mapred.JobClient:  map 100% reduce 0%
    19/06/13 21:08:36 INFO mapred.JobClient: Job complete: job_201906081831_0002
    19/06/13 21:08:36 INFO mapred.JobClient: Counters: 19
    19/06/13 21:08:36 INFO mapred.JobClient:   Job Counters 
    19/06/13 21:08:36 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=17243
    19/06/13 21:08:36 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
    19/06/13 21:08:36 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
    19/06/13 21:08:36 INFO mapred.JobClient:     Launched map tasks=1
    19/06/13 21:08:36 INFO mapred.JobClient:     Data-local map tasks=1
    19/06/13 21:08:36 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
    19/06/13 21:08:36 INFO mapred.JobClient:   File Output Format Counters 
    19/06/13 21:08:36 INFO mapred.JobClient:     Bytes Written=0
    19/06/13 21:08:36 INFO mapred.JobClient:   org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication$Verifier$Counters
    19/06/13 21:08:36 INFO mapred.JobClient:     GOODROWS=1
    19/06/13 21:08:36 INFO mapred.JobClient:   FileSystemCounters
    19/06/13 21:08:36 INFO mapred.JobClient:     HDFS_BYTES_READ=71
    19/06/13 21:08:36 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=31428
    19/06/13 21:08:36 INFO mapred.JobClient:   File Input Format Counters 
    19/06/13 21:08:36 INFO mapred.JobClient:     Bytes Read=0
    19/06/13 21:08:36 INFO mapred.JobClient:   Map-Reduce Framework
    19/06/13 21:08:36 INFO mapred.JobClient:     Map input records=1
    19/06/13 21:08:36 INFO mapred.JobClient:     Physical memory (bytes) snapshot=87109632
    19/06/13 21:08:36 INFO mapred.JobClient:     Spilled Records=0
    19/06/13 21:08:36 INFO mapred.JobClient:     CPU time spent (ms)=1700
    19/06/13 21:08:36 INFO mapred.JobClient:     Total committed heap usage (bytes)=91226112
    19/06/13 21:08:36 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=1540784128
    19/06/13 21:08:36 INFO mapred.JobClient:     Map output records=0
    19/06/13 21:08:36 INFO mapred.JobClient:     SPLIT_RAW_BYTES=71

    Key line

    19/06/13 21:21:40 INFO mapred.JobClient:     GOODROWS=1

    GOODROWS counts rows whose contents match on both clusters; any mismatched or missing rows would be reported under a BADROWS counter instead.