  • Will Spark SQL replace Hive?

    sparksql  hive

    https://databricks.com/blog/2014/07/01/shark-spark-sql-hive-on-spark-and-the-future-of-sql-on-spark.html

    https://cwiki.apache.org/confluence/display/Hive/Home

    【Serves data warehousing; strong support for standard SQL】

    Apache Hive

    The Apache Hive™ data warehouse software facilitates reading, writing, and managing large datasets residing in distributed storage and queried using SQL syntax. 


    【Spark is one of the available execution engines】


    Built on top of Apache Hadoop™, Hive provides the following features:

    • Tools to enable easy access to data via SQL, thus enabling data warehousing tasks such as extract/transform/load (ETL), reporting, and data analysis.
    • A mechanism to impose structure on a variety of data formats
    • Access to files stored either directly in Apache HDFS or in other data storage systems such as Apache HBase 

    • Query execution via Apache Tez, Apache Spark, or MapReduce (a minimal engine-switch sketch follows this list)
    • Procedural language with HPL-SQL
    • Sub-second query retrieval via Hive LLAP, Apache YARN, and Apache Slider.
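
    To make the engine choice concrete, here is a minimal sketch (not from the Hive docs) of switching a HiveServer2 session onto the Spark engine over JDBC. The host, port, credentials, and the table name my_table are assumptions.

        import java.sql.DriverManager

        // Hedged sketch: selecting Hive's execution engine per session over JDBC.
        // Assumes HiveServer2 at localhost:10000 and a table named my_table.
        object EngineSwitchDemo {
          def main(args: Array[String]): Unit = {
            Class.forName("org.apache.hive.jdbc.HiveDriver")
            val conn = DriverManager.getConnection(
              "jdbc:hive2://localhost:10000/default", "hive", "")
            val stmt = conn.createStatement()
            // hive.execution.engine accepts "mr", "tez", or "spark".
            stmt.execute("SET hive.execution.engine=spark")
            val rs = stmt.executeQuery("SELECT count(*) FROM my_table")
            while (rs.next()) println(rs.getLong(1))
            conn.close()
          }
        }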

    Hive provides standard SQL functionality, including many of the later SQL:2003 and SQL:2011 features for analytics. 
    Hive's SQL can also be extended with user code via user defined functions (UDFs), user defined aggregates (UDAFs), and user defined table functions (UDTFs).
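
    To illustrate that extension point, a Hive UDF is just a class with an evaluate method that Hive finds by reflection. The sketch below is illustrative, not from the Hive docs; the class name ToUpper is made up.

        import org.apache.hadoop.hive.ql.exec.UDF
        import org.apache.hadoop.io.Text

        // Minimal Hive UDF sketch: upper-cases a string column.
        // Hive locates the evaluate() method by reflection at query time.
        class ToUpper extends UDF {
          def evaluate(input: Text): Text =
            if (input == null) null else new Text(input.toString.toUpperCase)
        }

    Packaged into a jar, it would be registered with ADD JAR and CREATE TEMPORARY FUNCTION to_upper AS 'ToUpper', after which to_upper(col) can appear in any query.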

    There is no single "Hive format" in which data must be stored. Hive comes with built-in connectors for comma- and tab-separated values (CSV/TSV) text files, Apache Parquet, Apache ORC, and other formats. 
    Users can extend Hive with connectors for other formats. Please see File Formats and Hive SerDe in the Developer Guide for details.
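
    As a Spark-side sketch of that flexibility (table names are made up), the same rows can be persisted as Parquet and as ORC, and both tables stay queryable through the metastore:

        import org.apache.spark.sql.SparkSession

        // Sketch: no single "Hive format" -- the same data written twice,
        // once as Parquet and once as ORC. Table names are illustrative.
        val spark = SparkSession.builder()
          .appName("formats-demo")
          .enableHiveSupport()
          .getOrCreate()

        val df = spark.range(100).selectExpr("id", "id * 2 AS doubled")
        df.write.format("parquet").saveAsTable("demo_parquet")
        df.write.format("orc").saveAsTable("demo_orc")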

    Hive is not designed for online transaction processing (OLTP) workloads. It is best used for traditional data warehousing tasks. 
    Hive is designed to maximize scalability (scale out with more machines added dynamically to the Hadoop cluster), performance, extensibility, fault-tolerance, and loose-coupling with its input formats.

    Components of Hive include HCatalog and WebHCat.

    • HCatalog is a component of Hive. It is a table and storage management layer for Hadoop that enables users with different data processing tools — including Pig and MapReduce — to more easily read and write data on the grid.
    • WebHCat provides a service that you can use to run Hadoop MapReduce (or YARN), Pig, or Hive jobs, or to perform Hive metadata operations, using an HTTP (REST-style) interface (a minimal call is sketched below).
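
    A minimal probe of that REST interface might look like the following; the host is an assumption, and 50111 is WebHCat's default port.

        import scala.io.Source

        // Hedged sketch: query WebHCat's status endpoint over REST.
        val url = "http://localhost:50111/templeton/v1/status?user.name=hive"
        println(Source.fromURL(url).mkString) // e.g. {"status":"ok","version":"v1"}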

     

    https://issues.apache.org/jira/browse/HIVE-7292

    Spark, as an open-source data analytics cluster computing framework, has gained significant momentum recently. Many Hive users already have Spark installed as their computing backbone. To take advantage of Hive, they still need either MapReduce or Tez on their cluster. This initiative will provide users a new alternative, so that they can consolidate their backend.

    Secondly, providing such an alternative further increases Hive's adoption, as it exposes Spark users to a viable, feature-rich, de facto standard SQL tool on Hadoop.

    【Performs well when a query has multiple reducer stages】

    Finally, allowing Hive to run on Spark also has performance benefits: Hive queries, especially those involving multiple reducer stages, will run faster, improving the user experience just as Tez does.
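
    As a made-up example of such a query, a join followed by an aggregation and a sort needs several shuffle (reducer) stages, which is exactly where a DAG engine like Spark or Tez beats stage-by-stage MapReduce. This reuses the SparkSession from the earlier sketches; the orders and customers tables are hypothetical.

        // Join + aggregation + sort: typically compiles to multiple shuffle stages.
        spark.sql("""
          SELECT c.region, count(*) AS order_count
          FROM orders o JOIN customers c ON o.cust_id = c.id
          GROUP BY c.region
          ORDER BY order_count DESC
        """).show()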

    This is an umbrella JIRA which will cover many coming subtasks. A design doc will be attached here shortly, and will be on the wiki as well. Feedback from the community is greatly appreciated!

    【Shares Hive's metastore】

    Does Spark SQL have no metadata of its own? Does it create a temporary catalog on the fly, or does it use Hive's metastore directly? In fact it does both: without Hive support, Spark SQL keeps a session-scoped in-memory catalog; with Hive support enabled, it reads and writes Hive's existing metastore.
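
    A sketch of the shared-metastore mode (the builder calls are standard Spark API; the app name is arbitrary):

        import org.apache.spark.sql.SparkSession

        // Without enableHiveSupport(), Spark SQL keeps a session-local
        // in-memory catalog; with it, Spark reads and writes the existing
        // Hive metastore, located via hive-site.xml on the classpath.
        val spark = SparkSession.builder()
          .appName("shared-metastore")
          .enableHiveSupport()
          .getOrCreate()

        // Tables already defined in Hive are visible to Spark SQL directly:
        spark.sql("SHOW TABLES").show()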
