  • Installing HBase
    
    1.Java, Hadoop and ZooKeeper are assumed to be installed already
    2.Download the HBase release matching your Hadoop version
    3.Unpack the archive
     tar zxvf hbase-1.1.2-bin.tar.gz
    4.Configure environment variables (/etc/profile)
        #hbase
        export HBASE_HOME=/opt/hbase-1.1.2
        export PATH=$PATH:$HBASE_HOME/bin
        export CLASSPATH=$CLASSPATH:$HBASE_HOME/lib
        
        Save and exit, then reload:
        source /etc/profile
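        A quick sanity check that the variables took effect (assumes the paths
        above; hbase version only prints the build info):
        
        echo $HBASE_HOME       # should print /opt/hbase-1.1.2
        hbase version          # works only if PATH and JAVA_HOME are set correctly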
    5.Edit the configuration files
        ####backup-masters#### hostname of the HA standby master
        hadoop.slaver1
        
        ####hbase-env.sh####
        export JAVA_HOME=/usr/java/jdk1.8.0_65
        export HBASE_CLASSPATH=/opt/hadoop-2.5.2/etc/hadoop
        export HBASE_HEAPSIZE=8000
        export HBASE_BACKUP_MASTERS=/opt/hbase-1.1.2/conf/backup-masters
        export HBASE_LOG_DIR=/opt/hbase-1.1.2/logs
        export HBASE_MANAGES_ZK=false   # use the external ZooKeeper ensemble, not the bundled one
        
        
        ####hbase-site.xml####
        <!-- Must match the fs.defaultFS setting in the cluster's core-site.xml -->
        <property>
            <name>hbase.rootdir</name>
            <!-- Caveat: use the active NameNode's IP here; using the hostname caused
                 errors. See the check sketched after this config block. -->
            <value>hdfs://active-namenode-ip:8020/hbase</value>
            <description>The directory shared by RegionServers.</description>
        </property>
    
        <!-- Note: only the port is needed here, not the hostname! -->
        <property>
            <name>hbase.master</name>
            <value>60000</value> 
        </property>
    
        <!-- ZooKeeper quorum hosts and client port -->
        <property>
          <name>hbase.zookeeper.quorum</name>
          <value>hadoop.master,hadoop.slaver1,hadoop.slaver2</value>
        </property>
        <property>
          <name>hbase.zookeeper.property.clientPort</name>
          <value>2181</value>
        </property>
    
        <!-- hbase.zookeeper.property.dataDir must match the dataDir in the ZooKeeper configuration -->
        <property>
             <name>hbase.zookeeper.property.dataDir</name>
             <value>/opt/hbase-1.1.2/data/zookeeper</value>
        </property>
    
        <!-- Run in fully distributed mode -->
        <property>
            <name>hbase.cluster.distributed</name>
            <value>true</value>
        </property>
    
        <property>
            <name>hbase.tmp.dir</name>
            <value>/opt/hbase-1.1.2/tmp</value>
        </property>
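        To find out which NameNode is currently active before filling in
        hbase.rootdir (a minimal sketch, assuming an HA nameservice whose NameNode
        IDs are nn1 and nn2; substitute the IDs from your own hdfs-site.xml):
        
        hdfs getconf -confKey dfs.nameservices     # print the nameservice name
        hdfs haadmin -getServiceState nn1          # prints "active" or "standby"
        hdfs haadmin -getServiceState nn2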
        
        
        
        ####log4j.properties####
        hbase.root.logger=INFO,console
        hbase.security.logger=INFO,console
        hbase.log.dir=/opt/hbase-1.1.2/logs
        hbase.log.file=hbase.log
        
        
        ####regionservers####  hostnames of the RegionServer (worker) nodes
        hadoop.slaver1
        hadoop.slaver2
        hadoop.slaver3
        
        
        ####hdfs-site.xml#### add this property on the HDFS side
        <property>
            <name>dfs.datanode.max.transfer.threads</name>
            <value>4096</value>
        </property>
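        A quick check that the new value is picked up from hdfs-site.xml (note the
        DataNodes must be restarted before it actually takes effect):
        
        hdfs getconf -confKey dfs.datanode.max.transfer.threads   # should print 4096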
        
    6.Replace the Hadoop jars under lib (copy them in from the Hadoop installation)
        1.Remove the Hadoop jars that ship with HBase
            rm -rf /opt/hbase-1.1.2/lib/hadoop*.jar
        2.Copy the matching jars over from the Hadoop installation
            find /opt/hadoop-2.5.2/share/hadoop -name "hadoop*jar" | xargs -i cp {} /opt/hbase-1.1.2/lib
        3.The ZooKeeper jar bundled with HBase differs from the deployed ZooKeeper version, so replace it as well
            mv /opt/hbase-1.1.2/lib/zookeeper-3.4.6.jar  /opt/hbase-1.1.2/lib/zookeeper-3.4.6.jar.bak
            cp /opt/zookeeper-3.4.7/dist-maven/zookeeper-3.4.7.jar /opt/hbase-1.1.2/lib
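        To confirm the swap, list the jars now in lib (a quick check; the versions
        should match the deployed Hadoop 2.5.2 and ZooKeeper 3.4.7):
        
        ls /opt/hbase-1.1.2/lib | grep -E '^(hadoop|zookeeper)'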
        
    7.Create the required directories
        1.Create the directory configured as hbase.tmp.dir
            mkdir -p /opt/hbase-1.1.2/tmp
        2.Create the log directory
            mkdir -p /opt/hbase-1.1.2/logs
    8.Copy the installation to every node (e.g. with scp, as sketched below)
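        A minimal distribution sketch, assuming passwordless ssh and the node
        hostnames from the regionservers file above:
        
        for host in hadoop.slaver1 hadoop.slaver2 hadoop.slaver3; do
            scp -r /opt/hbase-1.1.2 ${host}:/opt/
            scp /etc/profile ${host}:/etc/profile   # then run source /etc/profile on each node
        done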
    9.Start HBase
        On the master node
        start-hbase.sh
        On the standby master
        hbase-daemon.sh start master
        Note: the Hadoop cluster must be up before HBase is started (and, since
        HBASE_MANAGES_ZK=false, the external ZooKeeper ensemble as well)
    10.Verify
        1.jps (expected output sketched below)
        2.Web UIs
            master hostname:16010
            worker hostname:16030 (RegionServer UI)
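        A sketch of jps output on a master node (PIDs are illustrative and the
        process mix differs per node: HMaster on the masters, HRegionServer on the
        workers, QuorumPeerMain wherever ZooKeeper runs):
        
            2351 NameNode
            2788 QuorumPeerMain
            3124 HMaster
            3561 Jps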
    
    11.Common commands (run inside hbase shell; a full session is sketched after this list)
        1.List tables
            list                 // list 'regex' filters by table name
        2.Create a table
            create 'test','colfam1'        // table name, column family
        3.Insert data
            put 'test','rowkey','colfam1:key','value'  // table, row key, family:qualifier, value
        4.Fetch data
            get 'test','rowkey'  // table, row key
        5.Start
            start-hbase.sh
            hbase-daemon.sh start master
            hbase-daemon.sh start regionserver
        6.Stop
            stop-hbase.sh
            hbase-daemon.sh stop master
            hbase-daemon.sh stop regionserver
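        A short end-to-end session tying the commands above together (a sketch;
        the table name, row key and values are illustrative):
        
        hbase shell
        create 'test','colfam1'
        put 'test','row1','colfam1:greeting','hello'
        get 'test','row1'
        scan 'test'          # dump every row in the table
        disable 'test'       # a table must be disabled before it can be dropped
        drop 'test'
        exit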