  • [Repost] Writing to multiple HBase tables in a single job

    In a Map/Reduce job, some workloads need to write data to more than one HBase table within the same job. The implementation is shown below.

    Original article: http://lookfirst.com/2011/07/hbase-multitableoutputformat-writing-to.html

    HBase MultiTableOutputFormat writing to multiple tables in one Map Reduce Job

     
    Recently, I've been having a lot of fun learning about HBase and Hadoop. One esoteric thing I just learned about is the way that HBase tables are populated.

    By default, HBase Map/Reduce jobs can only write to a single table, because you set the output handler at the job level with job.setOutputFormatClass(). However, if you are creating an HBase table, chances are you will also want to build an index related to that table so that you can run fast queries against the master table. The most efficient way to do this is to write the data to both tables at the same time, while you are importing it. The alternative is to run another M/R job after the fact, but that means reading all of the data twice, which puts a lot of extra load on the system for no real benefit.

    To write to both tables at the same time, in the same M/R job, you need to use the MultiTableOutputFormat class. The key here is that when you write to the context, you specify the name of the table you are writing to. Here is some basic example code (with a lot of the meat removed) which demonstrates this.

    static class TsvImporter extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
        @Override
        public void map(LongWritable offset, Text value, Context context)
                throws IOException, InterruptedException {
            // contains the line of tab-separated data we are working on (needs to be parsed out).
            byte[] lineBytes = value.getBytes();
    
            // rowKey is the HBase row key generated from lineBytes
            Put put = new Put(rowKey);
            // Create your KeyValue object
            put.add(kv);
            // the output key names the destination table
            context.write(new ImmutableBytesWritable(Bytes.toBytes("actions")), put);
    
            // rowKey2 is the row key for the index table
            Put indexPut = new Put(rowKey2);
            // Create your KeyValue object
            indexPut.add(kv);
            // write to the actions_index table
            context.write(new ImmutableBytesWritable(Bytes.toBytes("actions_index")), indexPut);
        }
    }
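    The row-key derivation is elided above ("rowKey is the HBase row key generated from lineBytes"). One plain-Java sketch of what that parsing step might look like, assuming a hypothetical column layout where the first tab-separated field is a user id and the second a timestamp (the layout and the class name RowKeys are not from the article):

    ```java
    import java.nio.charset.StandardCharsets;

    public class RowKeys {
        // Hypothetical: compose a "userId:timestamp" row key from one tab-separated line.
        // The column order (userId \t timestamp \t ...) is an assumption for illustration.
        static byte[] rowKeyFor(String line) {
            String[] fields = line.split("\t");
            String key = fields[0] + ":" + fields[1];
            return key.getBytes(StandardCharsets.UTF_8);
        }

        public static void main(String[] args) {
            byte[] key = rowKeyFor("user42\t1311638400\tclick\t/home");
            System.out.println(new String(key, StandardCharsets.UTF_8)); // user42:1311638400
        }
    }
    ```

    In a real importer this logic would run inside map() and the resulting byte[] would be passed to new Put(rowKey).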
    
    public static Job createSubmittableJob(Configuration conf, String[] args) throws IOException {
        String pathStr = args[0];
        Path inputDir = new Path(pathStr);
        Job job = new Job(conf, "my_custom_job");
        job.setJarByClass(TsvImporter.class);
        FileInputFormat.setInputPaths(job, inputDir);
        job.setInputFormatClass(TextInputFormat.class);
        
        // this is the key to writing to multiple tables in hbase
        job.setOutputFormatClass(MultiTableOutputFormat.class);
        job.setMapperClass(TsvImporter.class);
        job.setNumReduceTasks(0);
    
        TableMapReduceUtil.addDependencyJars(job);
        TableMapReduceUtil.addDependencyJars(job.getConfiguration());
        return job;
    }
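    The article omits the driver entry point. Under the usual Hadoop pattern it would look roughly like this; the class name ImportDriver is an assumption, and createSubmittableJob() is the method defined above:

    ```java
    // Hypothetical driver for the job above; requires the Hadoop and HBase
    // client jars on the classpath, so this is a sketch rather than a
    // standalone runnable program.
    public class ImportDriver {
        public static void main(String[] args) throws Exception {
            // HBaseConfiguration.create() picks up hbase-site.xml from the classpath
            Configuration conf = HBaseConfiguration.create();
            Job job = createSubmittableJob(conf, args);
            // block until the job finishes; exit nonzero on failure
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }
    ```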
     
  • Source: https://www.cnblogs.com/sixiweb/p/3390820.html