  • Exporting Hive table data

    Method 1: export with the hadoop command

    hadoop fs -get hdfs://hadoop000:8020/data/page_views2   pv2 
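
    If the table's data consists of several files under its HDFS directory, hadoop fs -getmerge (a standard HDFS shell command, not part of the original post) can concatenate them into a single local file; the paths below are illustrative:

    hadoop fs -getmerge hdfs://hadoop000:8020/data/page_views2 /home/spark/hivetmp/pv2_merged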

    Method 2: export with INSERT ... DIRECTORY [not supported by Spark for now]

    Export to the local filesystem:

    INSERT OVERWRITE LOCAL directory '/home/spark/hivetmp/'
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'
    select * from page_views;

    Export to HDFS:

    INSERT OVERWRITE directory '/hivetmp/'
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'
    select * from page_views;

    This fails with: cannot recognize input near 'ROW' 'FORMAT' 'DELIMITED' in select clause

    Dropping the ROW FORMAT clause makes the HDFS export work:

    INSERT OVERWRITE directory '/hivetmp/'
    select * from page_views;

    Note: when exporting to the local filesystem the delimiters can be set with ROW FORMAT, but when exporting to HDFS the delimiters cannot be set.
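
    To sanity-check the exports, the generated files can be read back directly. This assumes 000000_0, the name Hive typically gives the first output file, and Hive's default \001 delimiter for the HDFS export; adjust to what actually lands in the directories:

    cat /home/spark/hivetmp/000000_0            # local export, fields separated by \t
    hadoop fs -cat /hivetmp/000000_0            # HDFS export, fields separated by Hive's default \001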

    Method 3: a shell command plus a pipe (hive -f/-e | sed/grep/awk > file)

    hive -e "select * from page_views limit 5"
    hive -S -e "select * from page_views limit 5" | grep B58W48U4WKZCJ5D1T3Z9ZY88RU7QA7B1
    hive -S -e "select * from page_views limit 5" | grep B58W48U4WKZCJ5D1T3Z9ZY88RU7QA7B1 > file
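
    The -f form behaves the same way with the query stored in a script file, and sed/awk can reshape the columns; the script and output paths here are only placeholders (GNU sed assumed for the \t escape):

    echo "select * from page_views limit 5;" > /home/spark/hivetmp/export.sql
    hive -S -f /home/spark/hivetmp/export.sql | sed 's/\t/,/g' > /home/spark/hivetmp/page_views.csv    # tab-separated output -> CSV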

    Method 4: Sqoop

    See the Sqoop chapter for details: http://www.cnblogs.com/luogankun/category/601761.html
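
    As a rough sketch only (the JDBC URL, credentials, target table, and export directory are placeholders, not values from the original post), a Sqoop export can push the files written to HDFS above into a relational table:

    # Placeholder connection and table names; the MySQL table page_views_export
    # must already exist with columns matching the exported files.
    sqoop export \
      --connect jdbc:mysql://hadoop000:3306/test \
      --username root --password root \
      --table page_views_export \
      --export-dir /hivetmp/ \
      --input-fields-terminated-by '\001'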
