  • Spark writes to HBase fail with org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: causes and fixes

    1. Exception message

      19/03/21 15:01:52 WARN scheduler.TaskSetManager: Lost task 4.0 in stage 21.0 (TID 14640, hntest07, executor 64)  org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 3 actions: AAA.bbb: 3 times,
      at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:258)
      at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$2000(AsyncProcess.java:238)
      at org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1810)
      at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:240)
      at org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:190)
      at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1498)
      at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1094)
      at org.com.tl.spark.main.LabelSummaryTaskEntrance$$anonfun$main$1$$anonfun$apply$mcVI$sp$1.apply(LabelSummaryTaskEntrance.scala:163)
      at org.com.tl.spark.main.LabelSummaryTaskEntrance$$anonfun$main$1$$anonfun$apply$mcVI$sp$1.apply(LabelSummaryTaskEntrance.scala:127)
      at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920)
      at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920)
      at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1888)
      at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1888)
      at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
      at org.apache.spark.scheduler.Task.run(Task.scala:89)
      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:242)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      at java.lang.Thread.run(Thread.java:745)

    2. Code

     import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
     import org.apache.hadoop.hbase.client.ConnectionFactory

     // Point the client at the ZooKeeper quorum used by the HBase cluster.
     val config = HBaseConfiguration.create()
     config.set("hbase.zookeeper.quorum", "hbase01,hbase02,hbase03")
     config.set("hbase.zookeeper.property.clientPort", "2181")

     // Tables that live in a namespace are addressed as NAMESPACE:TABLENAME.
     val connection = ConnectionFactory.createConnection(config)
     val admin = connection.getAdmin
     val table = connection.getTable(TableName.valueOf("ZHEN:TABLENAME"))
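
      The stack trace shows HTable.put failing inside an RDD foreachPartition (the LabelSummaryTaskEntrance frames). Below is a minimal sketch of that write pattern, assuming the connection settings above; the RDD, the (rowKey, value) layout, and the "cf"/"col" column family and qualifier are hypothetical placeholders, not taken from the original job.

     import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
     import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}
     import org.apache.hadoop.hbase.util.Bytes

     // `rdd` stands in for a hypothetical RDD[(String, String)] of (rowKey, value) pairs.
     rdd.foreachPartition { rows =>
       // HBase connections are not serializable, so each partition opens its own.
       val conf = HBaseConfiguration.create()
       conf.set("hbase.zookeeper.quorum", "hbase01,hbase02,hbase03")
       conf.set("hbase.zookeeper.property.clientPort", "2181")
       val conn = ConnectionFactory.createConnection(conf)
       val table = conn.getTable(TableName.valueOf("ZHEN:TABLENAME"))
       try {
         rows.foreach { case (rowKey, value) =>
           val put = new Put(Bytes.toBytes(rowKey))
           put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes(value))
           table.put(put) // the batch flush seen in the stack trace is triggered from inside put
         }
       } finally {
         table.close()
         conn.close()
       }
     }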

    3. Solutions

      1. In the getTable(...) line of the code above (highlighted in the original post), write the namespace and table name entirely in upper case, separated by ":", i.e. NAMESPACE:TABLENAME; a quick existence check is sketched after this list.

      2. HBase version mismatch: the HBase version running on the cluster differs from the hbase client library bundled with the Spark job (a dependency pin is sketched after this list).

      3. An HDFS DataNode or the NameNode is down.

      4. The HBase HMaster or an HRegionServer has crashed.
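
      For cause 1, a quick way to rule out a naming problem is to ask the admin handle created in the code above whether the fully qualified table name exists. This is only a hedged diagnostic sketch, not part of the fix itself:

     import org.apache.hadoop.hbase.TableName

     // Confirm the upper-case NAMESPACE:TABLENAME really exists before writing.
     val target = TableName.valueOf("ZHEN:TABLENAME")
     if (!admin.tableExists(target)) {
       // Print what the cluster actually has, to spot a casing or namespace mismatch.
       admin.listTableNames().foreach(t => println(t.getNameAsString))
       throw new IllegalStateException(s"HBase table ${target.getNameAsString} not found")
     }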
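
      For cause 2, the client jars on the Spark side should match the version the cluster actually runs (check with `hbase version`). A hypothetical sbt fragment, with the version string as a placeholder to be replaced:

     // build.sbt (sketch) -- replace 1.2.0 with the version reported by the cluster.
     libraryDependencies ++= Seq(
       "org.apache.hbase" % "hbase-client" % "1.2.0" % "provided",
       "org.apache.hbase" % "hbase-common" % "1.2.0" % "provided"
     )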

  • Original post: https://www.cnblogs.com/yszd/p/10574191.html