  • Starting HBase

    start-dfs.sh
    start-yarn.sh
    start-hbase.sh

    1. Starting HBase first: HBase ships with a built-in ZooKeeper, so if no separate ZooKeeper is installed, starting HBase spawns an HQuorumPeer process.
    2. Starting ZooKeeper first: if an external ZooKeeper manages HBase, start ZooKeeper first and then HBase; you will see a QuorumPeerMain process.

    The two process names differ:
    HQuorumPeer is the ZooKeeper instance managed by HBase.
    QuorumPeerMain is a standalone ZooKeeper process.
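
    Either way, a quick jps check shows which ZooKeeper is running (other daemons elided; process IDs will differ):

    jps
    # With HBase's built-in ZooKeeper, expect something like:
    #   HMaster
    #   HRegionServer
    #   HQuorumPeer      <-- ZooKeeper managed by HBase
    # With an external ZooKeeper, instead:
    #   QuorumPeerMain   <-- standalone ZooKeeper process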

    If HBase is stuck at "initializing" and cannot be used,
    resetting HDFS fixes it.
    Of course, all the data is lost as well.
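
    A minimal reset sketch, assuming the pseudo-distributed setup started above; hdfs namenode -format erases everything stored in HDFS, HBase tables included:

    stop-hbase.sh
    stop-dfs.sh
    # Re-format the NameNode; this wipes all HDFS metadata (answer Y when prompted)
    hdfs namenode -format
    # If DataNodes refuse to start afterwards, their data directories likely hold
    # a stale clusterID and need clearing too
    start-dfs.sh
    start-hbase.sh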

    If packaging fails like this:

    bigdata@/usr/local/spark/mycode/hbase| sbt package
    [info] Updated file /usr/local/spark/mycode/hbase/project/build.properties: set sbt.version to 1.3.8
    [info] Loading settings for project global-plugins from build.sbt ...
    [info] Loading global plugins from /home/bigdata/.sbt/1.0/plugins
    [info] Loading project definition from /usr/local/spark/mycode/hbase/project
    [info] Loading settings for project hbase from simple.sbt ...
    [info] Set current project to Simple Project (in build file:/usr/local/spark/mycode/hbase/)
    [warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings.
    [info] Compiling 1 Scala source to /usr/local/spark/mycode/hbase/target/scala-2.11/classes ...
    [error] /usr/local/spark/mycode/hbase/src/main/scala/SparkOperateHbase.scala:4:8: object TableInputFormat is not a member of package org.apache.hadoop.hbase.mapreduce
    [error] import org.apache.hadoop.hbase.mapreduce.TableInputFormat
    [error]        ^
    [error] /usr/local/spark/mycode/hbase/src/main/scala/SparkOperateHbase.scala:16:14: not found: value TableInputFormat
    [error]     conf.set(TableInputFormat.INPUT_TABLE, "student")
    [error]              ^
    [error] /usr/local/spark/mycode/hbase/src/main/scala/SparkOperateHbase.scala:17:51: not found: type TableInputFormat
    [error]     val stuRDD = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat],
    [error]                                                   ^
    [error] three errors found
    [error] (Compile / compileIncremental) Compilation failed
    [error] Total time: 22 s, completed May 16, 2020 10:36:21 AM
    bigdata@/usr/local/spark/mycode/hbase| 
    

    Adding the missing dependency fixes it; reference: 解决找不到TableInputFormat.
    Add one line (hbase-mapreduce) to the original simple.sbt:

    name := "Simple Project"
    version := "1.0"
    scalaVersion := "2.11.12"
    libraryDependencies += "org.apache.spark" %% "spark-core" % "2.4.5"
    libraryDependencies += "org.apache.hbase" % "hbase-client" % "2.1.10"
    libraryDependencies += "org.apache.hbase" % "hbase-common" % "2.1.10"
    libraryDependencies += "org.apache.hbase" % "hbase-server" % "2.1.10"
    libraryDependencies += "org.apache.hbase" % "hbase-mapreduce" % "2.1.10"
    
    

    Packaging now succeeds:

    bigdata@/usr/local/spark/mycode/hbase| sbt package
    [info] Loading settings for project global-plugins from build.sbt ...
    [info] Loading global plugins from /home/bigdata/.sbt/1.0/plugins
    [info] Loading project definition from /usr/local/spark/mycode/hbase/project
    [info] Loading settings for project hbase from simple.sbt ...
    [info] Set current project to Simple Project (in build file:/usr/local/spark/mycode/hbase/)
    [warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings.
    [info] Compiling 1 Scala source to /usr/local/spark/mycode/hbase/target/scala-2.11/classes ...
    [success] Total time: 17 s, completed May 16, 2020 4:50:28 PM
    bigdata@/usr/local/spark/mycode/hbase| 
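
    The packaged jar can then be run with spark-submit. A sketch of the command: the jar name follows sbt's default naming for this build, and it assumes the main object is named SparkOperateHbase like its source file; the HBase install path on the driver classpath is also an assumption for this setup:

    spark-submit \
      --class SparkOperateHbase \
      --driver-class-path "/usr/local/hbase/lib/*:/usr/local/hbase/conf" \
      /usr/local/spark/mycode/hbase/target/scala-2.11/simple-project_2.11-1.0.jar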
    

    NotServingRegionException

    2020-05-17 21:43:49,193 INFO  [ubuntuForBigdata:16000.activeMasterManager] zookeeper.MetaTableLocator: Failed verification of hbase:meta,,1 at address=ubuntuforbigdata,16201,1589722735655, exception=org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 is not online on ubuntuforbigdata,16201,1589723019231
        at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2899)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:960)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegionInfo(RSRpcServices.java:1245)
        at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22233)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
        at java.lang.Thread.run(Thread.java:748)
    
    
    2020-05-20 17:41:24,254 INFO  [ubuntuForBigdata:16000.activeMasterManager] master.MasterFileSystem: Log folder hdfs://localhost:9000/hbase/WALs/ubuntuforbigdata,16201,1589967677343 belongs to an existing region server
    2020-05-20 17:41:24,354 INFO  [ubuntuForBigdata:16000.activeMasterManager] zookeeper.MetaTableLocator: Failed verification of hbase:meta,,1 at address=ubuntuforbigdata,16201,1589945558750, exception=org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 is not online on ubuntuforbigdata,16201,1589967677343
        at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2899)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:960)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegionInfo(RSRpcServices.java:1245)
        at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22233)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
        at java.lang.Thread.run(Thread.java:748)
    
    2020-05-20 17:41:24,358 INFO  [ubuntuForBigdata:16000.activeMasterManager] master.MasterFileSystem: Log dir for server ubuntuforbigdata,16201,1589945558750 does not exist
    
    

    This is caused by ZooKeeper's copy of the hbase:meta data being lost or corrupted when the HBase service was stopped.

    Stop the HBase service, then stop the ZooKeeper service.

    On every ZooKeeper node, delete the files under the dataDir set in zoo.cfg (dataDir=/hadoop/zookeeper-data).

    If you are using HBase's built-in ZooKeeper, clear the directory configured in hbase-site.xml instead:

        <property>
                <name>hbase.zookeeper.property.dataDir</name>
                <value>/download/hbase-1.2.5/tmp</value>
        </property>
    

    Then restart ZooKeeper, and then restart HBase.

    Source: http://www.lizhe.name/node/78
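
    A sketch of the whole recovery for an external ZooKeeper started via zkServer.sh, assuming the dataDir above; ZooKeeper keeps its snapshots and transaction logs under version-2 inside dataDir, so clearing that subdirectory leaves the myid file intact:

    stop-hbase.sh
    zkServer.sh stop
    # Run on every ZooKeeper node; dataDir is the one set in zoo.cfg
    rm -rf /hadoop/zookeeper-data/version-2
    zkServer.sh start
    start-hbase.sh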

    HDFS safe mode

    Security is off.
    
    Safe mode is ON. Resources are low on NN. Please add or free up more resources then turn off safe mode manually. NOTE: If you turn off safe mode before adding resources, the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode leave" to turn safe mode off.
    
    26 files and directories, 5 blocks = 31 total filesystem object(s).
    
    Heap Memory used 46.42 MB of 166 MB Heap Memory. Max Heap Memory is 889 MB.
    
    Non Heap Memory used 46.26 MB of 47 MB Commited Non Heap Memory. Max Non Heap Memory is -1 B. 
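
    As the message says, free up disk space on the NameNode host first, then turn safe mode off manually:

    hdfs dfsadmin -safemode get    # confirm: "Safe mode is ON"
    hdfs dfsadmin -safemode leave  # exit safe mode once space has been freed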
    

    spark-submit fails when submitting the program:

    Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
    	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
    	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
    	at org.apache.spark.deploy.SparkHadoopUtil.appendS3AndSparkHadoopConfigurations(SparkHadoopUtil.scala:106)
    	at org.apache.spark.deploy.SparkHadoopUtil.newConfiguration(SparkHadoopUtil.scala:116)
    	at org.apache.spark.deploy.SparkHadoopUtil.<init>(SparkHadoopUtil.scala:50)
    	at org.apache.spark.deploy.SparkHadoopUtil$.hadoop$lzycompute(SparkHadoopUtil.scala:384)
    	at org.apache.spark.deploy.SparkHadoopUtil$.hadoop(SparkHadoopUtil.scala:384)
    
    

    Running create 'student','info' in the hbase shell fails with:

    ERROR: java.util.concurrent.ExecutionException: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/data/default/student/644db1e708712afe0920984db5a31c55/.regioninfo could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and no node(s) are excluded in this operation.
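
    The message means the single running DataNode could not accept the block, which usually points to that DataNode being out of disk space or otherwise unhealthy. A quick way to check:

    hdfs dfsadmin -report
    # Look at "DFS Remaining" and verify the DataNode is listed;
    # if the disk is full, free space and retry the create.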
    
    