  • Restore HBase Data

    Method 1: Restoring HBase data by importing dump files from HDFS

         The HBase Import utility loads data that was previously exported by the Export utility into an existing HBase table. It is the restore step of the Export-based backup solution.
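         In general the command takes the following form, where <inputdir> is the HDFS directory previously written by the Export utility; the concrete run against the backupIPInfo table is shown below:

    bin/hbase org.apache.hadoop.hbase.mapreduce.Import <tablename> <inputdir>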
    Note:

    1. The imported metadata and data must correspond one-to-one with the table that was exported (the same table and the same column families).

    2. The target table must be created before the import and must contain every column family that exists in the dump files; otherwise, the import job fails with a NoSuchColumnFamilyException error (see the shell sketch after these notes).
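    For example, since the dump under /backup/HBaseExport is loaded into backupIPInfo, that table has to be (re)created with the same column families as the exported table before running the job. A minimal sketch from the HBase shell, assuming (hypothetically) that the exported table had a single column family named IPAddress; use whatever families the source table actually had:

    landen@Master:~/UntarFile/hbase-0.94.12$ bin/hbase shell
    hbase(main):001:0> # recreate the target table with the original column family (name assumed here)
    hbase(main):002:0> create 'backupIPInfo', 'IPAddress'
    hbase(main):003:0> exit

    If any family present in the dump is missing from this table definition, the import mappers fail with the NoSuchColumnFamilyException mentioned above.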

    landen@Master:~/UntarFile/hbase-0.94.12$ bin/hbase org.apache.hadoop.hbase.mapreduce.Import backupIPInfo /backup/HBaseExport
    13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.zookeeper.ZooKeeper, using jar /home/landen/UntarFile/hbase-0.94.12/lib/zookeeper-3.4.5.jar
    13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class com.google.protobuf.Message, using jar /home/landen/UntarFile/hbase-0.94.12/lib/protobuf-java-2.4.0a.jar
    13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class com.google.common.collect.ImmutableSet, using jar /home/landen/UntarFile/hbase-0.94.12/lib/guava-11.0.2.jar
    13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.util.Bytes, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
    13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
    13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.io.Writable, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
    13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
    13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
    13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.io.Writable, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
    13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.mapreduce.TableOutputFormat, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
    13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.mapreduce.lib.partition.HashPartitioner, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
    13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
    13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:host.name=Master
    13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_17
    13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
    13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.home=/home/landen/UntarFile/jdk1.7.0_17/jre
    ...............................................................
    13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/home/landen/UntarFile/hadoop-1.0.4/libexec/../lib/native/Linux-i386-32:/home/landen/UntarFile/hbase-0.94.12/lib/native/Linux-i386-32
    13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
    13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
    13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
    13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:os.arch=i386
    13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:os.version=3.2.0-24-generic-pae
    13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:user.name=landen
    13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/landen
    13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/landen/UntarFile/hbase-0.94.12
    13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=Slave1:2222,Master:2222,Slave2:2222 sessionTimeout=180000 watcher=hconnection
    13/12/11 16:08:07 INFO zookeeper.ClientCnxn: Opening socket connection to server Master/10.21.244.79:2222. Will not attempt to authenticate using SASL (unknown error)
    13/12/11 16:08:07 INFO zookeeper.ClientCnxn: Socket connection established to Master/10.21.244.79:2222, initiating session
    13/12/11 16:08:07 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 13249@Master
    13/12/11 16:08:07 INFO zookeeper.ClientCnxn: Session establishment complete on server Master/10.21.244.79:2222, sessionid = 0x42e05be8c8000c, negotiated timeout = 180000
    13/12/11 16:08:07 DEBUG client.HConnectionManager$HConnectionImplementation: Looked up root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@1ec068c; serverName=Slave1,60020,1386743579352
    13/12/11 16:08:07 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for .META.,,1.1028785192 is Slave1:60020
    13/12/11 16:08:08 DEBUG client.MetaScanner: Scanning .META. starting at row=backupIPInfo,,00000000000000 for max=10 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@1ec068c
    13/12/11 16:08:08 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for backupIPInfo,,1386747966286.a32b9be7a7aaa44b2c35e1d42116c6ee. is Slave2:60020
    13/12/11 16:08:08 INFO mapreduce.TableOutputFormat: Created table instance for backupIPInfo
    13/12/11 16:08:08 INFO input.FileInputFormat: Total input paths to process : 1
    13/12/11 16:08:08 INFO mapred.JobClient: Running job: job_201312111429_0006
    13/12/11 16:08:09 INFO mapred.JobClient:  map 0% reduce 0%
    13/12/11 16:08:25 INFO mapred.JobClient:  map 100% reduce 0%
    13/12/11 16:08:30 INFO mapred.JobClient: Job complete: job_201312111429_0006
    13/12/11 16:08:30 INFO mapred.JobClient: Counters: 18
    13/12/11 16:08:30 INFO mapred.JobClient:   Job Counters
    13/12/11 16:08:30 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=12039
    13/12/11 16:08:30 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
    13/12/11 16:08:30 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
    13/12/11 16:08:30 INFO mapred.JobClient:     Launched map tasks=1
    13/12/11 16:08:30 INFO mapred.JobClient:     Data-local map tasks=1
    13/12/11 16:08:30 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
    13/12/11 16:08:30 INFO mapred.JobClient:   File Output Format Counters
    13/12/11 16:08:30 INFO mapred.JobClient:     Bytes Written=0
    13/12/11 16:08:30 INFO mapred.JobClient:   FileSystemCounters
    13/12/11 16:08:30 INFO mapred.JobClient:     HDFS_BYTES_READ=886
    13/12/11 16:08:30 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=34871
    13/12/11 16:08:30 INFO mapred.JobClient:   File Input Format Counters
    13/12/11 16:08:30 INFO mapred.JobClient:     Bytes Read=771
    13/12/11 16:08:30 INFO mapred.JobClient:   Map-Reduce Framework
    13/12/11 16:08:30 INFO mapred.JobClient:     Map input records=3
    13/12/11 16:08:30 INFO mapred.JobClient:     Physical memory (bytes) snapshot=82030592
    13/12/11 16:08:30 INFO mapred.JobClient:     Spilled Records=0
    13/12/11 16:08:30 INFO mapred.JobClient:     CPU time spent (ms)=160
    13/12/11 16:08:30 INFO mapred.JobClient:     Total committed heap usage (bytes)=55443456
    13/12/11 16:08:30 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=395833344
    13/12/11 16:08:30 INFO mapred.JobClient:     Map output records=3
    13/12/11 16:08:30 INFO mapred.JobClient:     SPLIT_RAW_BYTES=115
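    Once the JobClient reports Job complete, the restore can be spot-checked from the HBase shell. In this run the counters show Map output records=3, so the table should now hold the three restored rows. A minimal verification sketch:

    landen@Master:~/UntarFile/hbase-0.94.12$ bin/hbase shell
    hbase(main):001:0> count 'backupIPInfo'    # expect 3 rows, matching Map output records above
    hbase(main):002:0> scan 'backupIPInfo'     # inspect the restored cells
    hbase(main):003:0> exit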

  • Original post: https://www.cnblogs.com/likai198981/p/3470144.html