  • Using HBase from MapReduce on CDH 5.1.3

    Environment: CentOS 6.5, CDH 5.1.3

    1. The hadoop command cannot find the HBase classes

    (1) Look at the output of hadoop classpath:

    1. The classpath includes /etc/hadoop/conf, the directory of the configuration files Hadoop is currently using.

    2. Several entries end in "*": a classpath entry ending in "*" expands to every JAR file in that directory.

    (2) Find where the HBase JARs are

    (3) Edit hadoop-env.sh

    Open /etc/hadoop/conf/hadoop-env.sh and add the JAR paths found in the previous step to HADOOP_CLASSPATH.

    1. Pointing at the directory alone does nothing; you must append "/*" to the path.

    2. You can run hadoop classpath at any time to see the classpath that the current /etc/hadoop/conf/hadoop-env.sh produces.
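The point about the trailing "/*" can be sketched in the shell. The parcel path below is an example layout, not taken from this cluster:

```shell
# A bare directory on the Java classpath does not expose the JARs inside it;
# appending /* turns it into a wildcard entry that matches every JAR.
# Example CDH parcel path:
HBASE_HOME=/opt/cloudera-manager/cloudera/parcels/CDH/lib/hbase

# Wrong: "$HBASE_HOME" alone would only match loose .class files.
# Right: one wildcard entry for the top-level JARs, one for lib/.
export HADOOP_CLASSPATH="$HBASE_HOME/*:$HBASE_HOME/lib/*"
echo "$HADOOP_CLASSPATH"
```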

    After the change, the modified hadoop-env.sh (relevant part) is attached:

      export HADOOP_MAPRED_HOME=$( ([[ ! '/opt/cloudera-manager/cloudera/parcels/CDH/lib/hadoop-mapreduce' =~ CDH_MR2_HOME ]] && echo /opt/cloudera-manager/cloudera/parcels/CDH/lib/hadoop-mapreduce ) || echo ${CDH_MR2_HOME:-/usr/lib/hadoop-mapreduce/} )
      HADOOP_CLASSPATH=/usr/share/cmf/lib/cdh5/*:/opt/cloudera-manager/cm-5.1.3/share/cmf/lib/cdh5/*:/opt/cloudera-manager/cm-5.1.3/share/cmf/cloudera-navigator-server/libs/cdh5/*:/opt/cloudera-manager/cloudera/parcels/CDH-5.1.3-1.cdh5.1.3.p0.12/lib/hbase/*:/opt/cloudera-manager/cloudera/parcels/CDH-5.1.3-1.cdh5.1.3.p0.12/lib/hbase/lib/*:/opt/cloudera-manager/cloudera/parcels/CDH-5.1.3-1.cdh5.1.3.p0.12/lib/hbase/*:
      # JAVA_LIBRARY_PATH={{JAVA_LIBRARY_PATH}}
      export YARN_OPTS="-Xms825955249 -Xmx825955249 -Djava.net.preferIPv4Stack=true $YARN_OPTS"
      export HADOOP_CLIENT_OPTS="-Djava.net.preferIPv4Stack=true $HADOOP_CLIENT_OPTS"
     
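One way to verify the change is to split the output of hadoop classpath on ":" and look for the HBase entries. A minimal sketch, using a literal string as a stand-in for the real command's output:

```shell
# Stand-in for `hadoop classpath` output after editing hadoop-env.sh;
# on a real node you would use: CP=$(hadoop classpath)
CP='/etc/hadoop/conf:/opt/cloudera-manager/cloudera/parcels/CDH/lib/hbase/*:/opt/cloudera-manager/cloudera/parcels/CDH/lib/hbase/lib/*'

# One classpath entry per line; the HBase entries should appear here
echo "$CP" | tr ':' '\n' | grep hbase
```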

    2. Specifying the dependencies in code

    To run MapReduce jobs that use HBase, you need to add the HBase and Zookeeper JAR files to the Hadoop Java classpath. You can do this by adding the following statement to each job:

    TableMapReduceUtil.addDependencyJars(job); 

     


    Even with this statement, the /etc/hadoop/conf/hadoop-env.sh change from the previous step is still required: addDependencyJars(job) only ships the JARs to the map and reduce tasks, while the JVM that submits the job must still be able to load the HBase classes itself.
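For completeness: the generic -libjars option is another way to ship extra JARs with a job, though it is only honored when the driver goes through ToolRunner/GenericOptionsParser (which the code below does not). The jar names and parcel path here are examples:

```shell
# Build the comma-separated list that -libjars expects (example CDH parcel paths).
HBASE_LIB=/opt/cloudera-manager/cloudera/parcels/CDH/lib/hbase/lib
LIBJARS="$HBASE_LIB/hbase-client.jar,$HBASE_LIB/hbase-common.jar,$HBASE_LIB/zookeeper.jar"
echo "$LIBJARS"

# On a cluster node, a ToolRunner-based driver would then be run as:
#   hadoop jar batchimport.jar BatchImport -libjars "$LIBJARS"
```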

     

    Reference:

    http://www.cloudera.com/content/cloudera/en/documentation/cdh5/v5-0-0/CDH5-Installation-Guide/cdh5ig_mapreduce_hbase.html

     

    Attached, part of the code:

      import java.text.SimpleDateFormat;
      import java.util.Date;
      
      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.client.HTableUtil;
      import org.apache.hadoop.hbase.client.Put;
      import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
      import org.apache.hadoop.hbase.mapreduce.TableOutputFormat;
      import org.apache.hadoop.hbase.mapreduce.TableReducer;
      import org.apache.hadoop.hbase.util.Bytes;
      import org.apache.hadoop.io.LongWritable;
      import org.apache.hadoop.io.NullWritable;
      import org.apache.hadoop.io.Text;
      import org.apache.hadoop.mapreduce.Job;
      import org.apache.hadoop.mapreduce.Mapper;
      import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
      import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
      
      public class BatchImport {
      
          // BatchImportMapper and BatchImportReducer are omitted from this excerpt.
      
          public static void main(String[] args) throws Exception {
              final Configuration configuration = new Configuration();
              // Point the job at ZooKeeper
              configuration.set("hbase.zookeeper.quorum", "192.168.1.170:2181");
              // Name the target HBase table
              configuration.set(TableOutputFormat.OUTPUT_TABLE, "ww_log");
      
              final Job job = new Job(configuration, BatchImport.class.getSimpleName());
              TableMapReduceUtil.addDependencyJars(job);
              job.setJarByClass(BatchImport.class);
              job.setMapperClass(BatchImportMapper.class);
              job.setReducerClass(BatchImportReducer.class);
              // Set the map output types; the reducer writes to HBase,
              // so no reduce output key/value classes are set here
              job.setMapOutputKeyClass(LongWritable.class);
              job.setMapOutputValueClass(Text.class);
      
              job.setInputFormatClass(TextInputFormat.class);
              // No output path: the output format writes to the HBase table instead
              job.setOutputFormatClass(TableOutputFormat.class);
      
              FileInputFormat.setInputPaths(job, "hdfs://192.168.1.170:8020/data/ww_log");
      
              job.waitForCompletion(true);
          }
      }
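Not covered in the original post, but for context, a driver like this would typically be submitted along these lines. The jar name is made up, and running hbase classpath is a convenient alternative to hand-editing hadoop-env.sh for a one-off run:

```shell
# Prepend HBase's runtime classpath for the submitting JVM, when the
# hbase command is available on this node.
if command -v hbase >/dev/null 2>&1; then
    export HADOOP_CLASSPATH="$(hbase classpath):${HADOOP_CLASSPATH:-}"
fi

# The actual submission; kept as a string so this sketch also runs off-cluster.
SUBMIT="hadoop jar batchimport.jar BatchImport"
echo "$SUBMIT"
```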

     

     





  • Original article: https://www.cnblogs.com/xfly/p/4137867.html