  • Spark SQL notes

    1. Spark pivot tables:

    pivot(pivot_col, values=None)

    Pivots a column of the current DataFrame and performs the specified aggregation. There are two versions of the pivot function: one that requires the caller to specify the list of distinct values to pivot on, and one that does not. The latter is more concise but less efficient, because Spark needs to first compute the list of distinct values internally.

    Parameters:
    • pivot_col – Name of the column to pivot.
    • values – List of values that will be translated to columns in the output DataFrame.

    # Compute the sum of earnings for each year by course with each course as a separate column

    >>> df4.groupBy("year").pivot("course", ["dotNET", "Java"]).sum("earnings").collect()
    [Row(year=2012, dotNET=15000, Java=20000), Row(year=2013, dotNET=48000, Java=30000)]
    

    # Or without specifying column values (less efficient)

    >>> df4.groupBy("year").pivot("course").sum("earnings").collect()
    [Row(year=2012, Java=20000, dotNET=15000), Row(year=2013, Java=30000, dotNET=48000)]
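    To illustrate what pivot computes without needing a Spark cluster, here is a plain-Python sketch of the same aggregation. The `pivot_sum` helper and the sample rows are illustrative assumptions (the rows are chosen to reproduce the sums shown above); this is not Spark API code.

    ```python
    # Plain-Python sketch of groupBy("year").pivot("course").sum("earnings").
    # pivot_sum and the sample data are illustrative, not Spark APIs.
    from collections import defaultdict

    rows = [
        (2012, "dotNET", 10000),
        (2012, "dotNET", 5000),
        (2012, "Java", 20000),
        (2013, "dotNET", 48000),
        (2013, "Java", 30000),
    ]

    def pivot_sum(rows, values=None):
        # Without an explicit values list, first scan for the distinct pivot
        # values -- this extra pass is why the values-less form is less efficient.
        if values is None:
            values = sorted({course for _, course, _ in rows})
        sums = defaultdict(lambda: {v: 0 for v in values})
        for year, course, earnings in rows:
            if course in values:
                sums[year][course] += earnings
        return {year: cols for year, cols in sorted(sums.items())}

    print(pivot_sum(rows, ["dotNET", "Java"]))
    # {2012: {'dotNET': 15000, 'Java': 20000}, 2013: {'dotNET': 48000, 'Java': 30000}}
    ```

    Passing an explicit `values` list skips the distinct-value scan, which mirrors why the first Spark form above is more efficient than the second.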
  • Original post: https://www.cnblogs.com/arachis/p/spark_sql.html