  • Commonly used Hive commands

    Create a table:
    hive> CREATE TABLE pokes (foo INT, bar STRING); 
            Creates a table called pokes with two columns, the first being an integer and the other a string

    Create a new table with the same structure as another:
    hive> create table new_table like records;

    Create a partitioned table:
    hive> create table logs(ts bigint,line string) partitioned by (dt String,country String);

    Load data into a table partition:
    hive> load data local inpath '/home/Hadoop/input/hive/partitions/file1' into table logs partition (dt='2001-01-01',country='GB');

    Show the partitions of a table:
    hive> show partitions logs;
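
    Once a partition has been loaded, a query that filters on the partition columns only scans the matching partitions. A minimal sketch against the logs table above:
    hive> SELECT ts, line FROM logs WHERE dt='2001-01-01' AND country='GB';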

    Show all tables:
    hive> SHOW TABLES;
            lists all the tables
    hive> SHOW TABLES '.*s';

    lists all the tables that end with 's'. The pattern matching follows Java regular
    expressions. Check out this link for documentation http://java.sun.com/javase/6/docs/api/java/util/regex/Pattern.html

    Show the structure of a table:
    hive> DESCRIBE invites;
            shows the list of columns

    Rename a table:
    hive> ALTER TABLE source RENAME TO target;

    Add a new column:
    hive> ALTER TABLE invites ADD COLUMNS (new_col2 INT COMMENT 'a comment');
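
    To rename or change the type of an existing column, Hive also supports ALTER TABLE ... CHANGE; a sketch reusing the new_col2 column added above (the new name renamed_col is made up for illustration):
    hive> ALTER TABLE invites CHANGE new_col2 renamed_col INT;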
     
    Drop a table:
    hive> DROP TABLE records;
    Delete a table's data while keeping the table definition:
    hive> dfs -rmr /user/hive/warehouse/records;
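
    Newer Hive releases also offer TRUNCATE TABLE for the same purpose, which avoids manipulating the warehouse directory in HDFS directly; a sketch:
    hive> TRUNCATE TABLE records;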

    Load data from a local file:
    hive> LOAD DATA LOCAL INPATH '/home/hadoop/input/ncdc/micro-tab/sample.txt' OVERWRITE INTO TABLE records;
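
    Without the OVERWRITE keyword the loaded rows are appended to the table rather than replacing its contents; a sketch:
    hive> LOAD DATA LOCAL INPATH '/home/hadoop/input/ncdc/micro-tab/sample.txt' INTO TABLE records;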

    Show all functions:
    hive> show functions;

    Show how to use a function:
    hive> describe function substr;
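
    For fuller documentation of a function, including an example, the EXTENDED form can be used:
    hive> describe function extended substr;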

    Query array, map, and struct columns:
    hive> select col1[0],col2['b'],col3.c from complex;
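
    For reference, a table such as complex above could be declared with Hive's complex types; a sketch whose column types are assumed, not taken from the original:
    hive> CREATE TABLE complex (
        >   col1 ARRAY<INT>,
        >   col2 MAP<STRING, INT>,
        >   col3 STRUCT<a:INT, b:STRING, c:DOUBLE>
        > );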


    Inner join:
    hive> SELECT sales.*, things.* FROM sales JOIN things ON (sales.id = things.id);

    See how many MapReduce jobs Hive will use for a query:
    hive> EXPLAIN SELECT sales.*, things.* FROM sales JOIN things ON (sales.id = things.id);
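
    EXPLAIN EXTENDED prints additional plan detail:
    hive> EXPLAIN EXTENDED SELECT sales.*, things.* FROM sales JOIN things ON (sales.id = things.id);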

    Outer joins:
    hive> SELECT sales.*, things.* FROM sales LEFT OUTER JOIN things ON (sales.id = things.id);
    hive> SELECT sales.*, things.* FROM sales RIGHT OUTER JOIN things ON (sales.id = things.id);
    hive> SELECT sales.*, things.* FROM sales FULL OUTER JOIN things ON (sales.id = things.id);

    IN subqueries: Hive does not support them, but LEFT SEMI JOIN can be used instead:
    hive> SELECT * FROM things LEFT SEMI JOIN sales ON (sales.id = things.id);
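
    The LEFT SEMI JOIN above stands in for a standard-SQL IN subquery such as the following, shown only for comparison:
    SELECT * FROM things WHERE things.id IN (SELECT id FROM sales);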


    Map join: Hive can place the smaller table in each mapper's memory to perform the join:
    hive> SELECT /*+ MAPJOIN(things) */ sales.*, things.* FROM sales JOIN things ON (sales.id = things.id);
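
    Depending on the Hive version, the hint can often be replaced by letting Hive convert small-table joins automatically; a sketch, assuming the hive.auto.convert.join setting is available:
    hive> set hive.auto.convert.join=true;
    hive> SELECT sales.*, things.* FROM sales JOIN things ON (sales.id = things.id);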

    INSERT OVERWRITE TABLE ... SELECT: the target tables must already exist
    hive> FROM records2
        > INSERT OVERWRITE TABLE stations_by_year SELECT year, COUNT(DISTINCT station) GROUP BY year 
        > INSERT OVERWRITE TABLE records_by_year SELECT year, COUNT(1) GROUP BY year
        > INSERT OVERWRITE TABLE good_records_by_year SELECT year, COUNT(1) WHERE temperature != 9999 AND (quality = 0 OR quality = 1 OR quality = 4 OR quality = 5 OR quality = 9) GROUP BY year;  

    CREATE TABLE ... AS SELECT: the target table must not already exist
    hive> CREATE TABLE target AS SELECT col1, col2 FROM source;

    Create a view:
    hive> CREATE VIEW valid_records AS SELECT * FROM records2 WHERE temperature !=9999;

    Show detailed information about a view:
    hive> DESCRIBE EXTENDED valid_records;
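
    DESCRIBE FORMATTED presents the same information in a more readable, tabular layout:
    hive> DESCRIBE FORMATTED valid_records;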

    Original source: http://www.linuxidc.com/Linux/2012-05/61215.htm
