  • Spark: group by key, then compute the max, min, and average for each key, using groupByKey or reduceByKey

    What you're getting back is an object which allows you to iterate over the results. You can turn the results of groupByKey into a list by calling list() on the values, e.g.
    
    example = sc.parallelize([(0, u'D'), (0, u'D'), (1, u'E'), (2, u'F')])
    
    example.groupByKey().collect()
    # Gives [(0, <pyspark.resultiterable.ResultIterable object ......]
    
    example.groupByKey().map(lambda x : (x[0], list(x[1]))).collect()
    # Gives [(0, [u'D', u'D']), (1, [u'E']), (2, [u'F'])]

    # Or, equivalently:
    example.groupByKey().mapValues(list).collect()
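
    Building on that, here is a minimal sketch (my own, not from the quoted
    answer) of the per-key max/min/average the title asks for, once the
    grouped values are materialized as a list:

    nums = sc.parallelize([(0, 2.0), (0, 4.0), (1, 1.0), (1, 5.0), (1, 3.0)])

    def summarize(vs):
        xs = list(vs)  # materialize the grouped iterable once
        return (min(xs), max(xs), sum(xs) / len(xs))

    nums.groupByKey().mapValues(summarize).collect()
    # Gives [(0, (2.0, 4.0, 3.0)), (1, (1.0, 5.0, 3.0))]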
     
    Hey Ron, 
    
    It was pretty much exactly as Sean had depicted. I just needed to provide
    count with an anonymous function to tell it which elements to count. Since
    I wanted to count them all, the function is simply "true".
    
            val grouped = rdd.groupByKey().mapValues { mcs =>
              val values = mcs.map(_.foo.toDouble)
              val n = values.count(x => true)  // count every element
              val sum = values.sum
              val sumSquares = values.map(x => x * x).sum
              // population stddev: sqrt(n * sum(x^2) - (sum(x))^2) / n
              val stddev = math.sqrt(n * sumSquares - sum * sum) / n
              print("stddev: " + stddev)
              stddev
            }
    
    
    I hope that helps.
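
    For reference, the same single-pass statistics can be computed without
    materializing the groups at all. A sketch (mine, not from the thread)
    that reduces each key to a (count, sum, sum-of-squares) triple and then
    applies the same population-stddev formula, in PySpark:

    import math

    pairs = sc.parallelize([("a", 1.0), ("a", 3.0), ("b", 2.0), ("b", 6.0)])

    triples = pairs.mapValues(lambda x: (1, x, x * x)).reduceByKey(
        lambda a, b: (a[0] + b[0], a[1] + b[1], a[2] + b[2]))

    triples.mapValues(
        lambda t: math.sqrt(t[0] * t[2] - t[1] * t[1]) / t[0]).collect()
    # Gives [('a', 1.0), ('b', 2.0)]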
    Just don't. Use reduceByKey:
    
    # Duplicate each (name, value) pair so the running tuple carries (min, max)
    lines.map(lambda x: (x[1][0:4], (x[0], float(x[3])))) \
        .map(lambda x: (x[0], (x[1], x[1]))) \
        .reduceByKey(lambda x, y: (
            min(x[0], y[0], key=lambda x: x[1]),
            max(x[1], y[1], key=lambda x: x[1])))
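
    A hypothetical extension of the same pattern (not from the answer above)
    that also yields the average: carry a (min, max, sum, count) tuple
    through one reduceByKey, then divide at the end:

    data = sc.parallelize([("2015", 3.0), ("2015", 7.0), ("2016", 4.0)])

    agg = data.mapValues(lambda v: (v, v, v, 1)).reduceByKey(
        lambda a, b: (min(a[0], b[0]), max(a[1], b[1]),
                      a[2] + b[2], a[3] + b[3]))

    agg.mapValues(lambda t: (t[0], t[1], t[2] / t[3])).collect()
    # Gives [('2015', (3.0, 7.0, 5.0)), ('2016', (4.0, 4.0, 4.0))]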
  • Original article: https://www.cnblogs.com/bonelee/p/7156188.html