  • Hive Installation and Deployment

    ------------------------------In this walkthrough, Hive and MySQL are installed on the same server-----------------------------------------

    1. Upload Hive, MySQL, and the MySQL JDBC driver to /mnt on the server:

    [root@chavin mnt]# ll mysql-5.6.24-linux-glibc2.5-x86_64.tar.gz apache-hive-0.13.1-bin.tar.gz mysql-connector-java-5.1.22-bin.jar
    -rw-r--r--. 1 root root  54246778 Mar 13 10:46 apache-hive-0.13.1-bin.tar.gz
    -rw-r--r--. 1 root root 312043744 Mar 13 10:46 mysql-5.6.24-linux-glibc2.5-x86_64.tar.gz
    -rw-r--r--. 1 root root    832960 Mar 13 10:46 mysql-connector-java-5.1.22-bin.jar

    2. Install the MySQL database:

    #tar -zxvf mysql-5.6.24-linux-glibc2.5-x86_64.tar.gz
    #mv mysql-5.6.24-linux-glibc2.5-x86_64 /usr/local/mysql-5.6.24
    #cd /usr/local/
    #groupadd mysql
    #useradd -r -g mysql mysql
    #chown -R mysql:mysql mysql-5.6.24/
    #cd mysql-5.6.24/
    #vim support-files/my-default.cnf
    #cp support-files/my-default.cnf /etc/my.cnf
    #scripts/mysql_install_db --user=mysql
    #cp support-files/mysql.server /etc/rc.d/init.d/mysql
    #chkconfig --add /etc/rc.d/init.d/mysql
    #chkconfig --list mysql
    #service mysql status
    #service mysql start
    #ps -ef|grep mysql
    #bin/mysql -uroot
    mysql>use mysql;
    mysql>set password=password('mysql');
    mysql>delete from user where password is null;
    mysql>flush privileges;
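    Because the hive-site.xml below connects over TCP as root@chavin.king rather than through the local socket, MySQL also needs to allow that login; a fresh mysql_install_db only creates root@localhost. A hedged sketch (the hostname and password match the values used later in this walkthrough; adjust both for your environment):

    ```shell
    # Allow the Hive metastore to connect as root from the server's own
    # hostname (the JDBC URL below uses chavin.king, not localhost).
    # The password 'mysql' matches javax.jdo.option.ConnectionPassword.
    bin/mysql -uroot -pmysql -e "
      GRANT ALL PRIVILEGES ON metastore.* TO 'root'@'chavin.king' IDENTIFIED BY 'mysql';
      FLUSH PRIVILEGES;"
    ```

    GRANT ALL on metastore.* includes the CREATE privilege, which the createDatabaseIfNotExist=true option in the JDBC URL relies on.
    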

    3. Install Hive

    #tar -zxvf apache-hive-0.13.1-bin.tar.gz
    #mv apache-hive-0.13.1-bin /usr/local/hive-0.13.1
    #cp mysql-connector-java-5.1.22-bin.jar /usr/local/hive-0.13.1/lib/
    #cd /usr/local/hive-0.13.1/conf
    #cp hive-env.sh.template hive-env.sh
    #vim hive-env.sh

    HADOOP_HOME=/usr/local/hadoop
    export HIVE_CONF_DIR=/usr/local/hive-0.13.1/conf

    #vim hive-site.xml

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <configuration>

        <property>
            <name>javax.jdo.option.ConnectionURL</name>
            <value>jdbc:mysql://chavin.king:3306/metastore?createDatabaseIfNotExist=true</value>
            <description>JDBC connect string for a JDBC metastore</description>
        </property>

        <property>
            <name>javax.jdo.option.ConnectionDriverName</name>
            <value>com.mysql.jdbc.Driver</value>
            <description>Driver class name for a JDBC metastore</description>
        </property>

        <property>
            <name>javax.jdo.option.ConnectionUserName</name>
            <value>root</value>
            <description>username to use against metastore database</description>
        </property>

        <property>
            <name>javax.jdo.option.ConnectionPassword</name>
            <value>mysql</value>
            <description>password to use against metastore database</description>
        </property>

    </configuration>
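    A malformed hive-site.xml makes Hive fail at startup with an XML parse error, so it is worth checking well-formedness before launching. A sketch using Python's standard library (demonstrated on a minimal copy in /tmp; on the real machine, point it at /usr/local/hive-0.13.1/conf/hive-site.xml):

    ```shell
    # Write a minimal sample config, then verify it is well-formed XML.
    cat > /tmp/hive-site.xml <<'EOF'
    <?xml version="1.0"?>
    <configuration>
      <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://chavin.king:3306/metastore?createDatabaseIfNotExist=true</value>
      </property>
    </configuration>
    EOF
    python3 -c "import xml.dom.minidom, sys; xml.dom.minidom.parse(sys.argv[1])" \
        /tmp/hive-site.xml && echo "hive-site.xml OK"
    ```
    
    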

    #cd ../
    #bin/hive
    hive>
    --------------------------------------------------Hive installation complete-----------------------------------------------------

    Session log:

    hive> create database chavin;
    OK
    Time taken: 1.297 seconds
    hive> show databases;
    OK
    chavin
    default
    Time taken: 0.042 seconds, Fetched: 2 row(s)
    hive> use chavin;
    OK
    Time taken: 0.023 seconds

    hive> create table student(id int,
        > name string) row format delimited fields terminated by ' ';
    OK
    Time taken: 0.584 seconds
    hive> show tables;
    OK
    student
    Time taken: 0.024 seconds, Fetched: 1 row(s)
    hive> desc formatted student;
    OK
    # col_name                data_type               comment            
             
    id                      int                                        
    name                    string                                     
             
    # Detailed Table Information         
    Database:               chavin                  
    Owner:                  hadoop                  
    CreateTime:             Wed Mar 15 17:51:57 CST 2017    
    LastAccessTime:         UNKNOWN                 
    Protect Mode:           None                    
    Retention:              0                       
    Location:               hdfs://ns1/user/hive/warehouse/chavin.db/student    
    Table Type:             MANAGED_TABLE           
    Table Parameters:         
        transient_lastDdlTime    1489571517         
             
    # Storage Information         
    SerDe Library:          org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe    
    InputFormat:            org.apache.hadoop.mapred.TextInputFormat    
    OutputFormat:           org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat    
    Compressed:             No                      
    Num Buckets:            -1                      
    Bucket Columns:         []                      
    Sort Columns:           []                      
    Storage Desc Params:         
        field.delim                              
        serialization.format                     
    Time taken: 0.712 seconds, Fetched: 28 row(s)
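    The load command below reads /usr/local/hive-0.13.1/data/student.txt, which is never shown being created. A sketch that reconstructs it from the select output later in this log (the directory is parameterized here; on the server it would be the path used in the load command):

    ```shell
    # Reconstruct the sample data file. Contents are inferred from the
    # "select * from chavin.student" output; fields are separated by a
    # single space to match "fields terminated by ' '" in the DDL.
    DATA_DIR=${DATA_DIR:-/tmp/hive-data}   # /usr/local/hive-0.13.1/data on the server
    mkdir -p "$DATA_DIR"
    printf '1001 张三\n1002 lisi\n1003 wangwu\n' > "$DATA_DIR/student.txt"
    wc -c "$DATA_DIR/student.txt"   # 34 bytes, matching totalSize=34 in the load stats
    ```
    
    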


    hive> load data local inpath '/usr/local/hive-0.13.1/data/student.txt' into table chavin.student;
    Copying data from file:/usr/local/hive-0.13.1/data/student.txt
    Copying file: file:/usr/local/hive-0.13.1/data/student.txt
    Loading data to table chavin.student
    Table chavin.student stats: [numFiles=1, numRows=0, totalSize=34, rawDataSize=0]
    OK
    Time taken: 2.519 seconds
    hive> select * from chavin.student;
    OK
    1001    张三
    1002    lisi
    1003    wangwu
    Time taken: 0.881 seconds, Fetched: 3 row(s)

    hive> show functions;   -- list Hive's built-in functions

    Viewing a function's documentation:

    hive> desc function upper;        
    OK
    upper(str) - Returns str with all characters changed to uppercase
    Time taken: 0.015 seconds, Fetched: 1 row(s)
    hive> desc function extended upper;
    OK
    upper(str) - Returns str with all characters changed to uppercase
    Synonyms: ucase
    Example:
      > SELECT upper('Facebook') FROM src LIMIT 1;
      'FACEBOOK'
    Time taken: 0.015 seconds, Fetched: 5 row(s)

    hive> select id,upper(name) from student;
    Total jobs = 1
    Launching Job 1 out of 1
    Number of reduce tasks is set to 0 since there's no reduce operator
    Starting Job = job_1489547833587_0004, Tracking URL = http://db02:8088/proxy/application_1489547833587_0004/
    Kill Command = /usr/local/hadoop-2.5.0/bin/hadoop job  -kill job_1489547833587_0004
    Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
    2017-03-15 18:06:10,911 Stage-1 map = 0%,  reduce = 0%
    2017-03-15 18:06:22,974 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.78 sec
    MapReduce Total cumulative CPU time: 1 seconds 780 msec
    Ended Job = job_1489547833587_0004
    MapReduce Jobs Launched:
    Job 0: Map: 1   Cumulative CPU: 1.78 sec   HDFS Read: 248 HDFS Write: 34 SUCCESS
    Total MapReduce CPU Time Spent: 1 seconds 780 msec
    OK
    1001    张三
    1002    LISI
    1003    WANGWU
    Time taken: 41.349 seconds, Fetched: 3 row(s)

    -- To make the Hive CLI print column headers and show the current database in its prompt, add the following to hive-site.xml and restart the CLI:

    <property>
      <name>hive.cli.print.header</name>
      <value>true</value>
      <description>Whether to print the names of the columns in query output.</description>
    </property>

    <property>
      <name>hive.cli.print.current.db</name>
      <value>true</value>
      <description>Whether to include the current database in the Hive prompt.</description>
    </property>

    Test:
    hive (chavin)> select * from student;
    OK
    student.id    student.name
    1001    张三
    1002    lisi
    1003    wangwu
    Time taken: 1.925 seconds, Fetched: 3 row(s)

    --------------------------------------Hive logging configuration-------------------------------------

    [hadoop@db01 conf]$ cd /usr/local/hive-0.13.1/conf/
    [hadoop@db01 conf]$ cp hive-log4j.properties.template hive-log4j.properties
    [hadoop@db01 conf]$ vim hive-log4j.properties
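    Inside hive-log4j.properties the usual edit is the log directory, which defaults to a per-user path under the system temp directory. A hedged excerpt (the target directory here is an assumption; pick any path the hadoop user can write to):

    ```properties
    # hive-log4j.properties (excerpt)
    # Default is hive.log.dir=${java.io.tmpdir}/${user.name}; point it at a
    # persistent location so logs survive reboots and temp cleanup.
    hive.log.dir=/usr/local/hive-0.13.1/logs
    hive.log.file=hive.log
    ```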


    Setting the log level at startup:

    [hadoop@db01 hive-0.13.1]$ bin/hive --hiveconf hive.root.logger=info,console

  • Original post: https://www.cnblogs.com/wcwen1990/p/6652013.html