  • A brief look at exporting and importing data in an HBase table (i.e., backing it up)

    Reposted from: http://blog.chinaunix.net/xmlrpc.php?r=blog/article&uid=23916356&id=3321832

    Recently I needed to export the data of an HBase table from the production environment to the test environment (not a lot of data, roughly 2 million rows). Importing it through the application's API would be too slow, so I decided to use HBase's own export/import facility. Since this was just a trial run, I created a small table in the production environment with only two rows; the goal is to import it into a new table (empty, but with the same structure).
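    For reference, a small test table like this can be created and populated from the HBase shell roughly as follows (the row keys and values here simply match the scan output below):
    hbase(main):001:0> create 'xyz', 'cf1'
    hbase(main):002:0> put 'xyz', '10000', 'cf1:val', 'china'
    hbase(main):003:0> put 'xyz', '20000', 'cf1:val', 'zengzhunzhun'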
    hbase(main):004:0> scan 'xyz'
    ROW                   COLUMN+CELL                                              
     10000                column=cf1:val, timestamp=1345598242644, value=china     
     20000                column=cf1:val, timestamp=1345598283332, value=zengzhunzhun
    2 row(s) in 0.0350 seconds
    Start the export:
    [hadoop@master ~]$ hbase/bin/hbase org.apache.hadoop.hbase.mapreduce.Driver export xyz file:///home/hadoop/xyz
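    The general form of the Export tool (which the Driver's export subcommand dispatches to) is the table name followed by an output directory, with optional trailing arguments for the number of versions and a start/end timestamp; an HDFS path works just as well as a local file:// path. A sketch of the documented usage:
    hbase org.apache.hadoop.hbase.mapreduce.Export <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]
    # e.g. writing the export into HDFS instead of the local filesystem (illustrative path):
    hbase org.apache.hadoop.hbase.mapreduce.Export xyz /backup/xyz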
    12/08/22 10:12:07 INFO mapreduce.Export: verisons=1, starttime=0, endtime=9223372036854775807
    12/08/22 10:12:08 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
    12/08/22 10:12:08 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
    12/08/22 10:12:08 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
    12/08/22 10:12:08 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
    12/08/22 10:12:08 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
    12/08/22 10:12:08 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
    12/08/22 10:12:08 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
    12/08/22 10:12:08 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/home/hadoop/hbase/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/hadoop/hadoop/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
    12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:host.name=master
    12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_14
    12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
    12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/java/jdk1.6.0_14/jre
    12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/home/hadoop/hbase/bin/../conf:/usr/java/jdk1.6.0_14/lib/tools.jar:/home/hadoop/hbase:/home/hadoop/hbase/hbase-0.92.1.jar:/home/hadoop/hbase/hbase-0.92.1-tests.jar:/home/hadoop/hbase/lib/activation-1.1.jar:/home/hadoop/hbase/lib/asm-3.1.jar:/home/hadoop/hbase/lib/avro-1.5.3.jar:/home/hadoop/hbase/lib/avro-ipc-1.5.3.jar:/home/hadoop/hbase/lib/commons-beanutils-1.7.0.jar:/home/hadoop/hbase/lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/hbase/lib/commons-cli-1.2.jar:/home/hadoop/hbase/lib/commons-codec-1.4.jar:/home/hadoop/hbase/lib/commons-collections-3.2.1.jar:/home/hadoop/hbase/lib/commons-configuration-1.6.jar:/home/hadoop/hbase/lib/commons-digester-1.8.jar:/home/hadoop/hbase/lib/commons-el-1.0.jar:/home/hadoop/hbase/lib/commons-httpclient-3.1.jar:/home/hadoop/hbase/lib/commons-lang-2.5.jar:/home/hadoop/hbase/lib/commons-logging-1.1.1.jar:/home/hadoop/hbase/lib/commons-math-2.1.jar:/home/hadoop/hbase/lib/commons-net-1.4.1.jar:/home/hadoop/hbase/lib/core-3.1.1.jar:/home/hadoop/hbase/lib/guava-r09.jar:/home/hadoop/hbase/lib/hadoop-core-1.0.0.jar:/home/hadoop/hbase/lib/high-scale-lib-1.1.1.jar:/home/hadoop/hbase/lib/httpclient-4.0.1.jar:/home/hadoop/hbase/lib/httpcore-4.0.1.jar:/home/hadoop/hbase/lib/jackson-core-asl-1.5.5.jar:/home/hadoop/hbase/lib/jackson-jaxrs-1.5.5.jar:/home/hadoop/hbase/lib/jackson-mapper-asl-1.5.5.jar:/home/hadoop/hbase/lib/jackson-xc-1.5.5.jar:/home/hadoop/hbase/lib/jamon-runtime-2.3.1.jar:/home/hadoop/hbase/lib/jasper-compiler-5.5.23.jar:/home/hadoop/hbase/lib/jasper-runtime-5.5.23.jar:/home/hadoop/hbase/lib/jaxb-api-2.1.jar:/home/hadoop/hbase/lib/jaxb-impl-2.1.12.jar:/home/hadoop/hbase/lib/jersey-core-1.4.jar:/home/hadoop/hbase/lib/jersey-json-1.4.jar:/home/hadoop/hbase/lib/jersey-server-1.4.jar:/home/hadoop/hbase/lib/jettison-1.1.jar:/home/hadoop/hbase/lib/jetty-6.1.26.jar:/home/hadoop/hbase/lib/jetty-util-6.1.26.jar:/home/hadoop/hbase/lib/jruby-complete-1.6.5.jar:/home/hadoop/hbase/lib/jsp-2.1-6.1.14.jar:/home/hadoop/hbase/lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/hbase/lib/libthrift-0.7.0.jar:/home/hadoop/hbase/lib/log4j-1.2.16.jar:/home/hadoop/hbase/lib/netty-3.2.4.Final.jar:/home/hadoop/hbase/lib/protobuf-java-2.4.0a.jar:/home/hadoop/hbase/lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hbase/lib/servlet-api-2.5.jar:/home/hadoop/hbase/lib/slf4j-api-1.5.8.jar:/home/hadoop/hbase/lib/slf4j-log4j12-1.5.8.jar:/home/hadoop/hbase/lib/snappy-java-1.0.3.2.jar:/home/hadoop/hbase/lib/stax-api-1.0.1.jar:/home/hadoop/hbase/lib/velocity-1.7.jar:/home/hadoop/hbase/lib/xmlenc-0.52.jar:/home/hadoop/hbase/lib/zookeeper-3.4.3.jar:/home/hadoop/hadoop/libexec/../conf:/usr/java/jdk1.6.0_14/lib/tools.jar:/home/hadoop/hadoop/libexec/..:/home/hadoop/hadoop/libexec/../hadoop-core-1.0.3.jar:/home/hadoop/hadoop/libexec/../lib/asm-3.2.jar:/home/hadoop/hadoop/libexec/../lib/aspectjrt-1.6.5.jar:/home/hadoop/hadoop/libexec/../lib/aspectjtools-1.6.5.jar:/home/hadoop/hadoop/libexec/../lib/commons-beanutils-1.7.0.jar:/home/hadoop/hadoop/libexec/../lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/hadoop/libexec/../lib/commons-cli-1.2.jar:/home/hadoop/hadoop/libexec/../lib/commons-codec-1.4.jar:/home/hadoop/hadoop/libexec/../lib/commons-collections-3.2.1.jar:/home/hadoop/hadoop/libexec/../lib/commons-configuration-1.6.jar:/home/hadoop/hadoop/libexec/../lib/commons-daemon-1.0.1.jar:/home/hadoop/hadoop/libexec/../lib/commons-digester-1.8.jar:/home/hadoop/hadoop/libexec/../lib/commons-el-1.0.jar:/home/hadoop/hadoop/libexe
c/../lib/commons-httpclient-3.0.1.jar:/home/hadoop/hadoop/libexec/../lib/commons-io-2.1.jar:/home/hadoop/hadoop/libexec/../lib/commons-lang-2.4.jar:/home/hadoop/hadoop/libexec/../lib/commons-logging-1.1.1.jar:/home/hadoop/hadoop/libexec/../lib/commons-logging-api-1.0.4.jar:/home/hadoop/hadoop/libexec/../lib/commons-math-2.1.jar:/home/hadoop/hadoop/libexec/../lib/commons-net-1.4.1.jar:/home/hadoop/hadoop/libexec/../lib/core-3.1.1.jar:/home/hadoop/hadoop/libexec/../lib/hadoop-capacity-scheduler-1.0.3.jar:/home/hadoop/hadoop/libexec/../lib/hadoop-fairscheduler-1.0.3.jar:/home/hadoop/hadoop/libexec/../lib/hadoop-thriftfs-1.0.3.jar:/home/hadoop/hadoop/libexec/../lib/hbase-0.92.1.jar:/home/hadoop/hadoop/libexec/../lib/hsqldb-1.8.0.10.jar:/home/hadoop/hadoop/libexec/../lib/jackson-core-asl-1.8.8.jar:/home/hadoop/hadoop/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/home/hadoop/hadoop/libexec/../lib/jasper-compiler-5.5.12.jar:/home/hadoop/hadoop/libexec/../lib/jasper-runtime-5.5.12.jar:/home/hadoop/hadoop/libexec/../lib/jdeb-0.8.jar:/home/hadoop/hadoop/libexec/../lib/jersey-core-1.8.jar:/home/hadoop/hadoop/libexec/../lib/jersey-json-1.8.jar:/home/hadoop/hadoop/libexec/../lib/jersey-server-1.8.jar:/home/hadoop/hadoop/libexec/../lib/jets3t-0.6.1.jar:/home/hadoop/hadoop/libexec/../lib/jetty-6.1.26.jar:/home/hadoop/hadoop/libexec/../lib/jetty-util-6.1.26.jar:/home/hadoop/hadoop/libexec/../lib/jsch-0.1.42.jar:/home/hadoop/hadoop/libexec/../lib/junit-4.5.jar:/home/hadoop/hadoop/libexec/../lib/kfs-0.2.2.jar:/home/hadoop/hadoop/libexec/../lib/log4j-1.2.15.jar:/home/hadoop/hadoop/libexec/../lib/mockito-all-1.8.5.jar:/home/hadoop/hadoop/libexec/../lib/oro-2.0.8.jar:/home/hadoop/hadoop/libexec/../lib/servlet-api-2.5-20081211.jar:/home/hadoop/hadoop/libexec/../lib/slf4j-api-1.4.3.jar:/home/hadoop/hadoop/libexec/../lib/slf4j-log4j12-1.4.3.jar:/home/hadoop/hadoop/libexec/../lib/xmlenc-0.52.jar:/home/hadoop/hadoop/libexec/../lib/jsp-2.1/jsp-2.1.jar:/home/hadoop/hadoop/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
    12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/home/hadoop/hadoop/libexec/../lib/native/Linux-amd64-64:/home/hadoop/hbase/lib/native/Linux-amd64-64
    12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
    12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:java.compiler=
    12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
    12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
    12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.9-89.ELsmp
    12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:user.name=hadoop
    12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hadoop
    12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/hadoop
    12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=slave2:2222,slave1:2222,slave3:2222 sessionTimeout=180000 watcher=hconnection
    12/08/22 10:12:09 INFO zookeeper.ClientCnxn: Opening socket connection to server /192.168.15.132:2222
    12/08/22 10:12:09 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 23606@master
    12/08/22 10:12:09 WARN client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: 无法定位登录配置 occurred when trying to find JAAS configuration.
    12/08/22 10:12:09 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
    12/08/22 10:12:09 INFO zookeeper.ClientCnxn: Socket connection established to slave3/192.168.15.132:2222, initiating session
    12/08/22 10:12:09 WARN zookeeper.ClientCnxnSocket: Connected to an old server; r-o mode will be unavailable
    12/08/22 10:12:09 INFO zookeeper.ClientCnxn: Session establishment complete on server slave3/192.168.15.132:2222, sessionid = 0x33943bafeb90005, negotiated timeout = 40000
    12/08/22 10:12:09 DEBUG client.HConnectionManager$HConnectionImplementation: Lookedup root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@28bb494b; serverName=slave2,60020,1345461138645
    12/08/22 10:12:09 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for .META.,,1.1028785192 is slave3:60020
    12/08/22 10:12:09 DEBUG client.MetaScanner: Scanning .META. starting at row=xyz,,00000000000000 for max=10 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@28bb494b
    12/08/22 10:12:09 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for xyz,,1340764906812.6aa4cb2fb4c9eb34f360953acdb1e21c. is slave2:60020
    12/08/22 10:12:09 DEBUG client.MetaScanner: Scanning .META. starting at row=xyz,,00000000000000 for max=2147483647 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@28bb494b
    12/08/22 10:12:09 DEBUG mapreduce.TableInputFormatBase: getSplits: split -> 0 -> slave2:,
    12/08/22 10:12:10 INFO mapred.JobClient: Running job: job_201208201908_0002
    12/08/22 10:12:11 INFO mapred.JobClient:  map 0% reduce 0%
    12/08/22 10:12:30 INFO mapred.JobClient:  map 100% reduce 0%
    12/08/22 10:12:35 INFO mapred.JobClient: Job complete: job_201208201908_0002
    12/08/22 10:12:35 INFO mapred.JobClient: Counters: 18
    12/08/22 10:12:35 INFO mapred.JobClient:   Job Counters
    12/08/22 10:12:35 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=16203
    12/08/22 10:12:35 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
    12/08/22 10:12:35 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
    12/08/22 10:12:35 INFO mapred.JobClient:     Launched map tasks=1
    12/08/22 10:12:35 INFO mapred.JobClient:     Data-local map tasks=1
    12/08/22 10:12:35 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
    12/08/22 10:12:35 INFO mapred.JobClient:   File Output Format Counters
    12/08/22 10:12:35 INFO mapred.JobClient:     Bytes Written=255
    12/08/22 10:12:35 INFO mapred.JobClient:   FileSystemCounters
    12/08/22 10:12:35 INFO mapred.JobClient:     HDFS_BYTES_READ=58
    12/08/22 10:12:35 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=31370
    12/08/22 10:12:35 INFO mapred.JobClient:   File Input Format Counters
    12/08/22 10:12:35 INFO mapred.JobClient:     Bytes Read=0
    12/08/22 10:12:35 INFO mapred.JobClient:   Map-Reduce Framework
    12/08/22 10:12:35 INFO mapred.JobClient:     Map input records=2
    12/08/22 10:12:35 INFO mapred.JobClient:     Physical memory (bytes) snapshot=77606912
    12/08/22 10:12:35 INFO mapred.JobClient:     Spilled Records=0
    12/08/22 10:12:35 INFO mapred.JobClient:     CPU time spent (ms)=1830
    12/08/22 10:12:35 INFO mapred.JobClient:     Total committed heap usage (bytes)=31850496
    12/08/22 10:12:35 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=488656896
    12/08/22 10:12:35 INFO mapred.JobClient:     Map output records=2
    12/08/22 10:12:35 INFO mapred.JobClient:     SPLIT_RAW_BYTES=58
    The counters above (Map output records=2) show that two records were exported. Because I have 3 datanodes and the data set is tiny, the export file will only land on one of them. With a lot of data, every datanode may end up with export files, so you would have to check each node's /home/hadoop directory for an xyz directory.
    In my case the files I found were:
    [hadoop@slave2 ~]$ cd xyz/
    [hadoop@slave2 xyz]$ ls
    part-m-00000  _SUCCESS
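    Because the export was written to a local file:// path, the result only exists on the node where the map task ran. If you want the export to be visible from every node (for example so the import job cannot miss it), one option is to copy it into HDFS first; the /backup/xyz path here is just an illustrative choice:
    [hadoop@slave2 ~]$ hadoop fs -mkdir /backup
    [hadoop@slave2 ~]$ hadoop fs -copyFromLocal /home/hadoop/xyz /backup/xyz
    [hadoop@slave2 ~]$ hadoop fs -ls /backup/xyz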
    Now create a new table with the same structure as xyz:
    hbase(main):001:0> create 'zzz','cf1'
    0 row(s) in 2.0490 seconds
    Then start the import. Here I simply run it on the node that holds the export file; of course you could also copy the part-m-00000 file to any other datanode and import from there. A friendly reminder: if the export produced a lot of data, do not dump all of the part-m-0000* files into one directory and import them in a single run; it will definitely fail. Import the part-m-0000* files one at a time.
    [hadoop@slave2 ~]$ hbase/bin/hbase org.apache.hadoop.hbase.mapreduce.Driver import zzz file:///home/hadoop/xyz/
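    The Import tool's general form is just the target table followed by the input directory; an HDFS input path also works, and avoids the local-file visibility problem that shows up in the log below. Again, a sketch of the documented usage:
    hbase org.apache.hadoop.hbase.mapreduce.Import <tablename> <inputdir>
    # e.g. importing from an HDFS copy of the export (illustrative path):
    hbase org.apache.hadoop.hbase.mapreduce.Import zzz /backup/xyz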
    12/08/22 10:30:42 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
    12/08/22 10:30:42 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
    12/08/22 10:30:42 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
    12/08/22 10:30:42 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
    12/08/22 10:30:42 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
    12/08/22 10:30:42 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
    12/08/22 10:30:42 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
    12/08/22 10:30:42 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/home/hadoop/hbase/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/hadoop/hadoop/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
    12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:host.name=slave2
    12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_14
    12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
    12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/java/jdk1.6.0_14/jre
    12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/home/hadoop/hbase/bin/../conf:/usr/java/jdk1.6.0_14/lib/tools.jar:/home/hadoop/hbase:/home/hadoop/hbase/hbase-0.92.1.jar:/home/hadoop/hbase/hbase-0.92.1-tests.jar:/home/hadoop/hbase/lib/activation-1.1.jar:/home/hadoop/hbase/lib/asm-3.1.jar:/home/hadoop/hbase/lib/avro-1.5.3.jar:/home/hadoop/hbase/lib/avro-ipc-1.5.3.jar:/home/hadoop/hbase/lib/commons-beanutils-1.7.0.jar:/home/hadoop/hbase/lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/hbase/lib/commons-cli-1.2.jar:/home/hadoop/hbase/lib/commons-codec-1.4.jar:/home/hadoop/hbase/lib/commons-collections-3.2.1.jar:/home/hadoop/hbase/lib/commons-configuration-1.6.jar:/home/hadoop/hbase/lib/commons-digester-1.8.jar:/home/hadoop/hbase/lib/commons-el-1.0.jar:/home/hadoop/hbase/lib/commons-httpclient-3.1.jar:/home/hadoop/hbase/lib/commons-lang-2.5.jar:/home/hadoop/hbase/lib/commons-logging-1.1.1.jar:/home/hadoop/hbase/lib/commons-math-2.1.jar:/home/hadoop/hbase/lib/commons-net-1.4.1.jar:/home/hadoop/hbase/lib/core-3.1.1.jar:/home/hadoop/hbase/lib/guava-r09.jar:/home/hadoop/hbase/lib/hadoop-core-1.0.0.jar:/home/hadoop/hbase/lib/high-scale-lib-1.1.1.jar:/home/hadoop/hbase/lib/httpclient-4.0.1.jar:/home/hadoop/hbase/lib/httpcore-4.0.1.jar:/home/hadoop/hbase/lib/jackson-core-asl-1.5.5.jar:/home/hadoop/hbase/lib/jackson-jaxrs-1.5.5.jar:/home/hadoop/hbase/lib/jackson-mapper-asl-1.5.5.jar:/home/hadoop/hbase/lib/jackson-xc-1.5.5.jar:/home/hadoop/hbase/lib/jamon-runtime-2.3.1.jar:/home/hadoop/hbase/lib/jasper-compiler-5.5.23.jar:/home/hadoop/hbase/lib/jasper-runtime-5.5.23.jar:/home/hadoop/hbase/lib/jaxb-api-2.1.jar:/home/hadoop/hbase/lib/jaxb-impl-2.1.12.jar:/home/hadoop/hbase/lib/jersey-core-1.4.jar:/home/hadoop/hbase/lib/jersey-json-1.4.jar:/home/hadoop/hbase/lib/jersey-server-1.4.jar:/home/hadoop/hbase/lib/jettison-1.1.jar:/home/hadoop/hbase/lib/jetty-6.1.26.jar:/home/hadoop/hbase/lib/jetty-util-6.1.26.jar:/home/hadoop/hbase/lib/jruby-complete-1.6.5.jar:/home/hadoop/hbase/lib/jsp-2.1-6.1.14.jar:/home/hadoop/hbase/lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/hbase/lib/libthrift-0.7.0.jar:/home/hadoop/hbase/lib/log4j-1.2.16.jar:/home/hadoop/hbase/lib/netty-3.2.4.Final.jar:/home/hadoop/hbase/lib/protobuf-java-2.4.0a.jar:/home/hadoop/hbase/lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hbase/lib/servlet-api-2.5.jar:/home/hadoop/hbase/lib/slf4j-api-1.5.8.jar:/home/hadoop/hbase/lib/slf4j-log4j12-1.5.8.jar:/home/hadoop/hbase/lib/snappy-java-1.0.3.2.jar:/home/hadoop/hbase/lib/stax-api-1.0.1.jar:/home/hadoop/hbase/lib/velocity-1.7.jar:/home/hadoop/hbase/lib/xmlenc-0.52.jar:/home/hadoop/hbase/lib/zookeeper-3.4.3.jar:/home/hadoop/hadoop/libexec/../conf:/usr/java/jdk1.6.0_14/lib/tools.jar:/home/hadoop/hadoop/libexec/..:/home/hadoop/hadoop/libexec/../hadoop-core-1.0.3.jar:/home/hadoop/hadoop/libexec/../lib/asm-3.2.jar:/home/hadoop/hadoop/libexec/../lib/aspectjrt-1.6.5.jar:/home/hadoop/hadoop/libexec/../lib/aspectjtools-1.6.5.jar:/home/hadoop/hadoop/libexec/../lib/commons-beanutils-1.7.0.jar:/home/hadoop/hadoop/libexec/../lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/hadoop/libexec/../lib/commons-cli-1.2.jar:/home/hadoop/hadoop/libexec/../lib/commons-codec-1.4.jar:/home/hadoop/hadoop/libexec/../lib/commons-collections-3.2.1.jar:/home/hadoop/hadoop/libexec/../lib/commons-configuration-1.6.jar:/home/hadoop/hadoop/libexec/../lib/commons-daemon-1.0.1.jar:/home/hadoop/hadoop/libexec/../lib/commons-digester-1.8.jar:/home/hadoop/hadoop/libexec/../lib/commons-el-1.0.jar:/home/hadoop/hadoop/libexe
c/../lib/commons-httpclient-3.0.1.jar:/home/hadoop/hadoop/libexec/../lib/commons-io-2.1.jar:/home/hadoop/hadoop/libexec/../lib/commons-lang-2.4.jar:/home/hadoop/hadoop/libexec/../lib/commons-logging-1.1.1.jar:/home/hadoop/hadoop/libexec/../lib/commons-logging-api-1.0.4.jar:/home/hadoop/hadoop/libexec/../lib/commons-math-2.1.jar:/home/hadoop/hadoop/libexec/../lib/commons-net-1.4.1.jar:/home/hadoop/hadoop/libexec/../lib/core-3.1.1.jar:/home/hadoop/hadoop/libexec/../lib/hadoop-capacity-scheduler-1.0.3.jar:/home/hadoop/hadoop/libexec/../lib/hadoop-fairscheduler-1.0.3.jar:/home/hadoop/hadoop/libexec/../lib/hadoop-thriftfs-1.0.3.jar:/home/hadoop/hadoop/libexec/../lib/hbase-0.92.1.jar:/home/hadoop/hadoop/libexec/../lib/hsqldb-1.8.0.10.jar:/home/hadoop/hadoop/libexec/../lib/jackson-core-asl-1.8.8.jar:/home/hadoop/hadoop/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/home/hadoop/hadoop/libexec/../lib/jasper-compiler-5.5.12.jar:/home/hadoop/hadoop/libexec/../lib/jasper-runtime-5.5.12.jar:/home/hadoop/hadoop/libexec/../lib/jdeb-0.8.jar:/home/hadoop/hadoop/libexec/../lib/jersey-core-1.8.jar:/home/hadoop/hadoop/libexec/../lib/jersey-json-1.8.jar:/home/hadoop/hadoop/libexec/../lib/jersey-server-1.8.jar:/home/hadoop/hadoop/libexec/../lib/jets3t-0.6.1.jar:/home/hadoop/hadoop/libexec/../lib/jetty-6.1.26.jar:/home/hadoop/hadoop/libexec/../lib/jetty-util-6.1.26.jar:/home/hadoop/hadoop/libexec/../lib/jsch-0.1.42.jar:/home/hadoop/hadoop/libexec/../lib/junit-4.5.jar:/home/hadoop/hadoop/libexec/../lib/kfs-0.2.2.jar:/home/hadoop/hadoop/libexec/../lib/log4j-1.2.15.jar:/home/hadoop/hadoop/libexec/../lib/mockito-all-1.8.5.jar:/home/hadoop/hadoop/libexec/../lib/oro-2.0.8.jar:/home/hadoop/hadoop/libexec/../lib/servlet-api-2.5-20081211.jar:/home/hadoop/hadoop/libexec/../lib/slf4j-api-1.4.3.jar:/home/hadoop/hadoop/libexec/../lib/slf4j-log4j12-1.4.3.jar:/home/hadoop/hadoop/libexec/../lib/xmlenc-0.52.jar:/home/hadoop/hadoop/libexec/../lib/jsp-2.1/jsp-2.1.jar:/home/hadoop/hadoop/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
    12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/home/hadoop/hadoop/libexec/../lib/native/Linux-amd64-64:/home/hadoop/hbase/lib/native/Linux-amd64-64
    12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
    12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:java.compiler=
    12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
    12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
    12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.9-89.ELsmp
    12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:user.name=hadoop
    12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hadoop
    12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/hadoop
    12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=slave2:2222,slave1:2222,slave3:2222 sessionTimeout=180000 watcher=hconnection
    12/08/22 10:30:44 INFO zookeeper.ClientCnxn: Opening socket connection to server /192.168.15.72:2222
    12/08/22 10:30:44 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 30654@slave2
    12/08/22 10:30:44 WARN client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: 无法定位登录配置 occurred when trying to find JAAS configuration.
    12/08/22 10:30:44 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
    12/08/22 10:30:44 INFO zookeeper.ClientCnxn: Socket connection established to slave1/192.168.15.72:2222, initiating session
    12/08/22 10:30:44 WARN zookeeper.ClientCnxnSocket: Connected to an old server; r-o mode will be unavailable
    12/08/22 10:30:44 INFO zookeeper.ClientCnxn: Session establishment complete on server slave1/192.168.15.72:2222, sessionid = 0x13943ba912f0007, negotiated timeout = 40000
    12/08/22 10:30:44 DEBUG client.HConnectionManager$HConnectionImplementation: Lookedup root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@1c23f1bb; serverName=slave2,60020,1345461138645
    12/08/22 10:30:44 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for .META.,,1.1028785192 is slave3:60020
    12/08/22 10:30:44 DEBUG client.MetaScanner: Scanning .META. starting at row=zzz,,00000000000000 for max=10 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@1c23f1bb
    12/08/22 10:30:44 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for zzz,,1345602149536.dbeb5fc388bcc537d40b5602b60798ff. is slave3:60020
    12/08/22 10:30:44 INFO mapreduce.TableOutputFormat: Created table instance for zzz
    12/08/22 10:30:44 INFO input.FileInputFormat: Total input paths to process : 1
    12/08/22 10:30:45 INFO mapred.JobClient: Running job: job_201208201908_0004
    12/08/22 10:30:46 INFO mapred.JobClient:  map 0% reduce 0%
    12/08/22 10:31:23 INFO mapred.JobClient: Task Id : attempt_201208201908_0004_m_000000_0, Status : FAILED
    java.io.FileNotFoundException: File file:/home/hadoop/xyz/part-m-00000 does not exist.
            at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:397)
            at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
            at org.apache.hadoop.fs.FileSystem.getLength(FileSystem.java:796)
            at org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1475)
            at org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1470)
            at org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.initialize(SequenceFileRecordReader.java:50)
            at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:522)
            at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
            at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
            at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:396)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
            at org.apache.hadoop.mapred.Child.main(Child.java:249)
    12/08/22 10:31:35 INFO mapred.JobClient:  map 100% reduce 0%
    12/08/22 10:31:40 INFO mapred.JobClient: Job complete: job_201208201908_0004
    12/08/22 10:31:40 INFO mapred.JobClient: Counters: 19
    12/08/22 10:31:40 INFO mapred.JobClient:   Job Counters
    12/08/22 10:31:40 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=22927
    12/08/22 10:31:40 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
    12/08/22 10:31:40 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
    12/08/22 10:31:40 INFO mapred.JobClient:     Rack-local map tasks=2
    12/08/22 10:31:40 INFO mapred.JobClient:     Launched map tasks=2
    12/08/22 10:31:40 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
    12/08/22 10:31:40 INFO mapred.JobClient:   File Output Format Counters
    12/08/22 10:31:40 INFO mapred.JobClient:     Bytes Written=0
    12/08/22 10:31:40 INFO mapred.JobClient:   FileSystemCounters
    12/08/22 10:31:40 INFO mapred.JobClient:     FILE_BYTES_READ=255
    12/08/22 10:31:40 INFO mapred.JobClient:     HDFS_BYTES_READ=99
    12/08/22 10:31:40 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=31054
    12/08/22 10:31:40 INFO mapred.JobClient:   File Input Format Counters
    12/08/22 10:31:40 INFO mapred.JobClient:     Bytes Read=255
    12/08/22 10:31:40 INFO mapred.JobClient:   Map-Reduce Framework
    12/08/22 10:31:40 INFO mapred.JobClient:     Map input records=2
    12/08/22 10:31:40 INFO mapred.JobClient:     Physical memory (bytes) snapshot=72753152
    12/08/22 10:31:40 INFO mapred.JobClient:     Spilled Records=0
    12/08/22 10:31:40 INFO mapred.JobClient:     CPU time spent (ms)=260
    12/08/22 10:31:40 INFO mapred.JobClient:     Total committed heap usage (bytes)=18350080
    12/08/22 10:31:40 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=491810816
    12/08/22 10:31:40 INFO mapred.JobClient:     Map output records=2
    12/08/22 10:31:40 INFO mapred.JobClient:     SPLIT_RAW_BYTES=99
    The output above shows that 2 records were imported, yet the job still reports a FileNotFoundException saying the file does not exist. I'm not sure exactly why; most likely a second (rack-local) map attempt was launched on a node where the local file:///home/hadoop/xyz path does not exist. Either way, the data did get imported.
    Check the data in the zzz table:

    hbase(main):003:0> scan 'zzz'
    ROW                   COLUMN+CELL                                              
     10000                column=cf1:val, timestamp=1345598242644, value=china     
     20000                column=cf1:val, timestamp=1345598283332, value=zengzhunzhun
    2 row(s) in 0.0410 seconds
    That basically completes it: data in an HBase table can be exported and imported as MapReduce jobs, and of course the same approach can be used as a backup method.
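    For a table that is too big to eyeball with scan (such as the roughly 2-million-row production table), a quick sanity check is to compare row counts on the source and the copy, either from the shell or with the bundled RowCounter MapReduce job (shown here as a sketch):
    hbase(main):004:0> count 'zzz'
    [hadoop@slave2 ~]$ hbase/bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter zzz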
  • Original post: https://www.cnblogs.com/1130136248wlxk/p/5141556.html