  • HBase installation and configuration

    Environment preparation

    1. Java
    2. HDFS
    3. ZooKeeper
    4. SSH and NTP time synchronization
    5. System tuning (can wait until after installation): raise the open-file and process limits (ulimit and nproc); see the sketch after this list
    6. Raise the limit on files an HDFS DataNode serves concurrently: dfs.datanode.max.xcievers (also in the sketch below)
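
    A minimal sketch for items 5 and 6 above. The hadoop user name and the limit values are assumptions; adjust them for your cluster.

    # /etc/security/limits.conf: raise open-file (nofile) and process (nproc) limits for the HBase/Hadoop user
    hadoop  -  nofile  32768
    hadoop  -  nproc   32768

    <!-- hdfs-site.xml: raise the DataNode's limit on files served concurrently -->
    <property>
    	<name>dfs.datanode.max.xcievers</name>
    	<value>4096</value>
    </property>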


    Download HBase

    http://mirror.bit.edu.cn/apache/hbase/
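
    A minimal sketch for fetching a release from the mirror above; the version directory and file name (1.2.5 here) are assumptions, so use the release that matches your cluster.

    wget http://mirror.bit.edu.cn/apache/hbase/1.2.5/hbase-1.2.5-bin.tar.gz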
    


    Extract and set permissions

    1. tar -zxf hbase.tar.gz -C /usr/local/hbase
    2. sudo chown -R hadoop:hadoop /usr/local/hbase



    Configure conf/hbase-env.sh

    # Set the Java path and CLASSPATH
    export JAVA_HOME=/usr/local/java/jdk1.8.0_121
    export JAVA_CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
    
    # Do not let HBase manage its own ZooKeeper
    export HBASE_MANAGES_ZK=false
    


    Configure conf/hbase-site.xml

    <configuration>
    
    	<property>
    		<name>hbase.master</name>
    		<value>master:60000</value>
    	</property>
    
    	<property>
    		<name>hbase.master.maxclockskew</name> 
    		<value>180000</value>
    	</property>
    
    	<property>
    		<name>hbase.rootdir</name>
    		<value>hdfs://master:19000/hbase</value>
    	</property>
    
    	<property>
    		<name>hbase.cluster.distributed</name> 
    		<value>true</value>
    	</property>
    
    	<property>
    		<name>hbase.zookeeper.quorum</name>
    		<value>master,slave1,slave2</value>
    	</property>
    
    	<property>
    		<name>hbase.zookeeper.property.dataDir</name>
    		<value>/usr/local/zookeeper/zookeeper-3.4.10/</value>
    	</property>
    
    </configuration>
    

    Parameter notes:

    1. hbase.rootdir: the HDFS entry point. The host and port must match your Hadoop configuration (fs.default.name in core-site.xml); all nodes share this address.
    2. hbase.cluster.distributed: true means fully distributed mode.
    3. hbase.zookeeper.property.clientPort: the ZooKeeper client port (not set in the file above; see the snippet after this list).
    4. hbase.zookeeper.quorum: the ZooKeeper nodes.
    5. hbase.zookeeper.property.dataDir: the directory where ZooKeeper keeps its data; the default is under /tmp and is lost on restart.
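
    The clientPort property from item 3 is not present in the hbase-site.xml above; a minimal sketch, assuming ZooKeeper's default port 2181:

    	<property>
    		<name>hbase.zookeeper.property.clientPort</name>
    		<value>2181</value>
    	</property>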


    Configure conf/regionservers

    master
    slave1
    slave2
    


    Distribute the configured files

    scp -r hbase/ hadoop@slavex:/usr/local
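
    A minimal sketch for copying to both slaves listed in regionservers above (slavex in the command is a placeholder for each slave host):

    for host in slave1 slave2; do
    	scp -r /usr/local/hbase hadoop@${host}:/usr/local/
    done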
    


    Start & stop HBase

    ./bin/start-hbase.sh
    ./bin/stop-hbase.sh
    

    After a successful start, the jps command shows the related processes.
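
    For reference, the HBase daemons appear in jps as HMaster on the master and HRegionServer on each region server; since ZooKeeper is managed externally here, its QuorumPeerMain process also shows on quorum nodes. The PIDs below are illustrative only.

    $ jps
    4125 HMaster
    4371 HRegionServer
    3980 QuorumPeerMain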



    View HBase logs

    1. HBase logs are under the logs directory. If startup fails, check the relevant log files for errors.
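
    For example, to follow the master log during startup (assuming HBase lives in /usr/local/hbase as above; the file name follows the pattern hbase-<user>-master-<hostname>.log, matching the log referenced in the error section below):

    tail -f /usr/local/hbase/logs/hbase-hadoop-master-master.log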



    Error [zookeeper.MetaTableLocator: Failed]

    This usually means the HBase metadata held in ZooKeeper was lost or corrupted when the HBase service was stopped. The fix: stop the HBase service, stop the ZooKeeper service, clear the files under the dataDir configured in zoo.cfg on every ZooKeeper node (dataDir=/hadoop/zookeeper-data here), then restart ZooKeeper and then HBase (see the sketch below). The master log hbase-hadoop-master-master.log then shows a normal startup and the problem is resolved.
    http://blog.csdn.net/davylee2008/article/details/70157957
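
    A minimal sketch of those recovery steps, using the paths from this post. It clears only ZooKeeper's version-2 data directory so each node's myid file is preserved; run the ZooKeeper commands on every quorum node.

    /usr/local/hbase/bin/stop-hbase.sh
    /usr/local/zookeeper/zookeeper-3.4.10/bin/zkServer.sh stop
    rm -rf /hadoop/zookeeper-data/version-2        # wipes stored znodes, including HBase's meta location
    /usr/local/zookeeper/zookeeper-3.4.10/bin/zkServer.sh start
    /usr/local/hbase/bin/start-hbase.sh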

    7-05-09 11:11:56,975 INFO  [master:16000.activeMasterManager] master.MasterFileSystem: Log folder hdfs://master:19000/hbase/WALs/master,16020,1494299508333 belongs to an existing region server
    2017-05-09 11:11:57,063 INFO  [master:16000.activeMasterManager] zookeeper.MetaTableLocator: Failed verification of hbase:meta,,1 at address=master,16020,1494298875879, exception=org.apache.hadoo hbase:meta,,1 is not online on master,16020,1494299508333
    	at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2915)
    	at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:979)
    	at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegionInfo(RSRpcServices.java:1258)
    	at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22233)
    	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2137)
    	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
    	at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
    	at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
    	at java.lang.Thread.run(Thread.java:745)
    


    After fixing the first error, the master starts successfully, but the slaves (slavex) report an error

    The error message shows that the hbase.rootdir value in hbase-site.xml was mistyped as hdfs://master;19000/hbase, with a semicolon instead of a colon before the port.

     -Dhbase.root.logger=INFO,RFA, -Dhbase.security.logger=INFO,RFAS]
    2017-05-09 11:44:30,649 INFO  [main] regionserver.RSRpcServices: regionserver/slave1/172.26.203.134:16020 server-side HConnection retries=350
    2017-05-09 11:44:30,841 INFO  [main] ipc.SimpleRpcScheduler: Using deadline as user call queue, count=3
    2017-05-09 11:44:30,859 INFO  [main] ipc.RpcServer: regionserver/slave1/172.26.203.134:16020: started 10 reader(s) listening on port=16020
    2017-05-09 11:44:31,299 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    2017-05-09 11:44:31,442 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
    java.lang.RuntimeException: Failed construction of Regionserver: class org.apache.hadoop.hbase.regionserver.HRegionServer
    	at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2652)
    	at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:64)
    	at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
    	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    	at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
    	at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2667)
    Caused by: java.lang.reflect.InvocationTargetException
    	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    	at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2650)
    	... 5 more
    Caused by: java.io.IOException: Incomplete HDFS URI, no host: hdfs://master;19000/hbase
    	at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:136)
    	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
    	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
    	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2625)
    	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2607)
    	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
    	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
    	at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:1002)
    	at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:563)
    	... 10 more
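
    The fix is the colon form already shown in hbase-site.xml above; correct it on every node and restart HBase.

    	<property>
    		<name>hbase.rootdir</name>
    		<value>hdfs://master:19000/hbase</value>
    	</property>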
    
    