  • Installing Hive in Pseudo-Distributed Mode

    This walkthrough is only a personal note, not a standard tutorial; copy with discretion. :-D

    Downloading Hive

    Before downloading, check which Hadoop versions are compatible and install Hadoop first; see http://www.cnblogs.com/yongjian/p/6552647.html

    Since my cluster runs Hadoop 2.7.3, I downloaded the Hive 2.0.1 release.

    Download link: apache-hive-2.0.1-bin.tar.gz

    Installing Hive

    Note: since Hive runs on top of Hadoop, each Hive release works with a range of Hadoop versions; in general, Hive supports both newer and older Hadoop releases.

    1. After extraction, the Hive package is located under /opt/apache-hive-2.0.1-bin.

    [root@hadoop001 opt]# tar -zxvf apache-hive-2.0.1-bin.tar.gz

    2. Grant ownership of the package to the hadoop user

    [root@hadoop001 opt]# chown hadoop:hadoop -R apache-hive-2.0.1-bin/
    

    3. Switch back to the hadoop user and add the Hive environment variables

    [hadoop@hadoop001 ~]$ vim ~/.bash_profile 
    

    Add the Hive paths:

    # User specific environment and startup programs

    #java
    export JAVA_HOME=/usr/java/jdk1.8.0_40/

    # hadoop
    HADOOP_HOME=/opt/hadoop-2.7.3
    HIVE_HOME=/opt/apache-hive-2.0.1-bin

    PATH=$PATH:$HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin:$HIVE_HOME/bin

    export PATH

    Apply the environment file:

    [hadoop@hadoop001 ~]$ source ~/.bash_profile 
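    As a quick sanity check, the same PATH expansion can be rebuilt in a throwaway shell and each tool directory confirmed. This is a sketch only, using the paths this guide assumes; adjust them to your own layout:

    ```shell
    # Sanity-check sketch: re-create the PATH expansion from the profile above
    # and confirm each tool directory is present.
    HADOOP_HOME=/opt/hadoop-2.7.3
    HIVE_HOME=/opt/apache-hive-2.0.1-bin
    JAVA_HOME=/usr/java/jdk1.8.0_40
    PATH="$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin:$HIVE_HOME/bin"
    for dir in "$HADOOP_HOME/bin" "$HADOOP_HOME/sbin" "$JAVA_HOME/bin" "$HIVE_HOME/bin"; do
      case ":$PATH:" in
        *":$dir:"*) echo "on PATH: $dir" ;;
        *)          echo "MISSING: $dir" ;;
      esac
    done
    ```

    If any line prints MISSING, re-check the PATH line in ~/.bash_profile; a missing `$` in front of a variable name (e.g. `JAVA_HOME/bin` instead of `$JAVA_HOME/bin`) is a common culprit.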
    

    4. Hive metadata

    Hive's metadata can be stored in three ways:

    • Derby: the default store; its drawback is that only one Hive session can use it at a time
    • Local MySQL: metadata lives on a single node, so the risk of data loss is higher
    • Remote MySQL: requires network round-trips

    Here we take the second option and set up a local MySQL metastore.

    First, install MySQL:

    [root@hadoop001 ~]# yum -y install mysql-server
    

    Once installed, configure it to start at boot:

    [root@hadoop001 hadoop]# chkconfig mysqld on
    

    Start MySQL:

    [root@hadoop001 hadoop]# service mysqld start

    Since this is a fresh install, initialize the root password first:

    [root@hadoop001 hadoop]# mysqladmin -u root password 'hive'

    Then log in as root, entering the password hive:

    [root@hadoop001 hadoop]# mysql -uroot -p

    Create a hive user with password hive, and create the hive metastore database:

    mysql> insert into mysql.user(Host,User,Password) values("localhost","hive",password("hive"));
    Query OK, 1 row affected, 3 warnings (0.00 sec)
    
    mysql> create database hive;
    Query OK, 1 row affected (0.00 sec)
    
    mysql> grant all on hive.* to hive@'%' identified by 'hive';
    Query OK, 0 rows affected (0.00 sec)
    
    mysql> grant all on hive.* to hive@'localhost' identified by 'hive';
    Query OK, 0 rows affected (0.00 sec)
    
    mysql> grant all on hive.* to hive@'hadoop001' identified by 'hive';
    Query OK, 0 rows affected (0.00 sec)
    
    mysql> flush privileges;
    Query OK, 0 rows affected (0.00 sec)
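    The statements above can also be collected into a script and applied non-interactively. A minimal sketch; the hive/hive credentials and the hadoop001 host are this guide's choices. Note that on the MySQL 5.x that yum installs here, `GRANT ... IDENTIFIED BY` creates the account if it does not already exist, so the direct `INSERT` into `mysql.user` is optional:

    ```shell
    # Write the metastore bootstrap SQL to a file; apply it later with:
    #   mysql -uroot -p < /tmp/hive-metastore-setup.sql
    cat > /tmp/hive-metastore-setup.sql <<'EOF'
    CREATE DATABASE IF NOT EXISTS hive;
    GRANT ALL ON hive.* TO 'hive'@'%'         IDENTIFIED BY 'hive';
    GRANT ALL ON hive.* TO 'hive'@'localhost' IDENTIFIED BY 'hive';
    GRANT ALL ON hive.* TO 'hive'@'hadoop001' IDENTIFIED BY 'hive';
    FLUSH PRIVILEGES;
    EOF
    grep -c 'GRANT' /tmp/hive-metastore-setup.sql   # prints 3, one per grant
    ```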
    
    

    5. Edit the Hive configuration file

    Create Hive's temporary-file directory and grant it to the hadoop user:

    [root@hadoop001 hive]# mkdir -p /tmp/hive/iotmp
    [root@hadoop001 hive]# chown  hadoop:hadoop -R /tmp/hive/
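    Hive 2.x refuses to start if its local scratch directory is missing or unwritable, so it is worth verifying the directory before moving on. A minimal sketch; run it as the user that will launch Hive:

    ```shell
    # Verify the scratch directory exists and is writable by the current user.
    SCRATCH=/tmp/hive/iotmp
    mkdir -p "$SCRATCH"
    if [ -d "$SCRATCH" ] && [ -w "$SCRATCH" ]; then
      echo "scratch dir OK: $SCRATCH"
    else
      echo "scratch dir NOT writable: $SCRATCH" >&2
      exit 1
    fi
    ```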
    

    Then generate hive-site.xml from the template:

    [root@hadoop001 hive]# cp /opt/apache-hive-2.0.1-bin/conf/hive-default.xml.template /opt/apache-hive-2.0.1-bin/conf/hive-site.xml
    

    The following entries need to be modified:

    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://hadoop001:3306/hive</value>
        <description>JDBC connect string for a JDBC metastore</description>
      </property>
     <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
        <description>Driver class name for a JDBC metastore</description>
      </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>hive</value>
        <description>password to use against metastore database</description>
    </property>
      <property>
        <name>hive.hwi.listen.port</name>
        <value>9999</value>
        <description>This is the port the Hive Web Interface will listen on</description>
      </property>
      <property>
        <name>datanucleus.schema.autoCreateAll</name>
        <value>true</value>
        <description>creates necessary schema on a startup if one doesn't exist. set this to false, after creating it once</description>
      </property>
      <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hive</value>
        <description>Username to use against metastore database</description>
      </property>
      <property>
        <name>hive.exec.local.scratchdir</name>
        <value>/tmp/hive/iotmp</value>
        <description>Local scratch space for Hive jobs</description>
      </property>
      <property>
        <name>hive.downloaded.resources.dir</name>
        <value>/tmp/hive/iotmp</value>
        <description>Temporary local directory for added resources in the remote file system.</description>
      </property>
    <property>
        <name>hive.querylog.location</name>
        <value>/tmp/hive/iotmp</value>
        <description>Location of Hive run time structured log file</description>
    </property>
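    To reduce copy-paste mistakes, the overridden metastore-connection properties can be kept together in a small scratch file and checked before merging them into hive-site.xml. A minimal sketch; the hostname hadoop001 and the hive/hive credentials are this guide's assumptions:

    ```shell
    # Write just the metastore-connection overrides to a scratch file, then
    # diff or paste these into hive-site.xml.
    cat > /tmp/hive-site-overrides.xml <<'EOF'
    <property>
      <name>javax.jdo.option.ConnectionURL</name>
      <value>jdbc:mysql://hadoop001:3306/hive</value>
    </property>
    <property>
      <name>javax.jdo.option.ConnectionDriverName</name>
      <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
      <name>javax.jdo.option.ConnectionUserName</name>
      <value>hive</value>
    </property>
    <property>
      <name>javax.jdo.option.ConnectionPassword</name>
      <value>hive</value>
    </property>
    EOF
    grep -c '<name>' /tmp/hive-site-overrides.xml   # prints 4, one per property
    ```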
    

    6. Install the MySQL JDBC driver

    Download the MySQL JDBC driver package and copy it into $HIVE_HOME/lib:

    [root@hadoop001 lib]# mv /opt/soft/mysql-connector-java-5.1.17.jar /opt/apache-hive-2.0.1-bin/lib/

    7. Start Hadoop

    start-dfs.sh

    8. Start Hive and create a test table

    [hadoop@hadoop001 conf]$ hive
    which: no hbase in (/usr/java/jdk1.8.0_40//bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hadoop/bin:/opt/hadoop-2.7.3/bin:/opt/hadoop-2.7.3/sbin:JAVA_HOME/bin:/opt/apache-hive-2.0.1-bin/bin)
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/opt/apache-hive-2.0.1-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
    
    Logging initialized using configuration in jar:file:/opt/apache-hive-2.0.1-bin/lib/hive-common-2.0.1.jar!/hive-log4j2.properties
    Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
    hive> show databases;
    OK
    default
    Time taken: 1.079 seconds, Fetched: 1 row(s)
    hive> create table test(x int);
    OK
    Time taken: 0.56 seconds
    hive> show tables;
    OK
    test
    Time taken: 0.075 seconds, Fetched: 1 row(s)

    9. View the metadata of the new test table in MySQL

    [root@hadoop001 apache-hive-2.0.1-bin]# mysql -u root -p
    
    mysql> use hive;
    mysql> show tables;
    +---------------------------+
    | Tables_in_hive            |
    +---------------------------+
    | BUCKETING_COLS            |
    | CDS                       |
    | COLUMNS_V2                |
    | DATABASE_PARAMS           |
    | DBS                       |
    | FUNCS                     |
    | FUNC_RU                   |
    | GLOBAL_PRIVS              |
    | PARTITIONS                |
    | PARTITION_KEYS            |
    | PARTITION_KEY_VALS        |
    | PARTITION_PARAMS          |
    | PART_COL_STATS            |
    | ROLES                     |
    | SDS                       |
    | SD_PARAMS                 |
    | SEQUENCE_TABLE            |
    | SERDES                    |
    | SERDE_PARAMS              |
    | SKEWED_COL_NAMES          |
    | SKEWED_COL_VALUE_LOC_MAP  |
    | SKEWED_STRING_LIST        |
    | SKEWED_STRING_LIST_VALUES |
    | SKEWED_VALUES             |
    | SORT_COLS                 |
    | TABLE_PARAMS              |
    | TAB_COL_STATS             |
    | TBLS                      |
    | TBL_PRIVS                 |
    | VERSION                   |
    +---------------------------+
    30 rows in set (0.00 sec)
    


    Querying the TBLS table shows the attribute information for the newly created test table.

    At this point, the Hive installation is complete.

  • Original post: https://www.cnblogs.com/yongjian/p/6607984.html