  • Spark: improving execution efficiency by setting spark.default.parallelism appropriately

    Spark has the notion of a partition (the same concept as a slice; the Spark 1.2 documentation makes this explicit), and in general each partition corresponds to one task. In my tests, when spark.default.parallelism was not set, Spark produced an enormous number of partitions, far out of proportion to my cores. On two machines (8 cores × 2, 6 GB × 2), Spark came up with about 28,000 partitions, i.e. roughly 29,000 tasks, each finishing in a few milliseconds or even a fraction of a millisecond, and the whole job ran very slowly. After I set spark.default.parallelism, the number of tasks dropped to 10 and a single computation went from minutes down to about 20 seconds.
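    Besides the configuration file shown below, the same setting can be applied programmatically when building the SparkContext. The following is a minimal sketch (the master URL is taken from the configuration example below; the app name and input path are made-up placeholders):

     // Minimal sketch: apply spark.default.parallelism through SparkConf
     // (app name and input path are placeholders, not from the original post).
     import org.apache.spark.{SparkConf, SparkContext}

     object ParallelismDemo {
       def main(args: Array[String]): Unit = {
         val conf = new SparkConf()
           .setAppName("parallelism-demo")                 // placeholder app name
           .setMaster("spark://master:7077")
           .set("spark.default.parallelism", "10")         // cap the default partition/task count

         val sc = new SparkContext(conf)

         // reduceByKey without an explicit partition count now defaults to 10 partitions
         val counts = sc.textFile("hdfs:///path/to/input") // placeholder input path
           .flatMap(_.split("\\s+"))
           .map(word => (word, 1))
           .reduceByKey(_ + _)

         println(s"partitions after reduceByKey: ${counts.partitions.length}")
         sc.stop()
       }
     }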

    The parameter can also be set in the SPARK_HOME/conf/spark-defaults.conf configuration file, for example:

     spark.master                       spark://master:7077
     spark.default.parallelism          10
     spark.driver.memory                2g
     spark.serializer                   org.apache.spark.serializer.KryoSerializer
     spark.sql.shuffle.partitions       50
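
     The same properties can also be passed on the command line when submitting a job, e.g. spark-submit --conf spark.default.parallelism=10, which overrides the value in spark-defaults.conf for that application.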

    Below is the relevant description from the official documentation:

    From: http://spark.apache.org/docs/latest/configuration.html

    Property Name: spark.default.parallelism
    Default: For distributed shuffle operations like reduceByKey and join, the largest number of partitions in a parent RDD. For operations like parallelize with no parent RDDs, it depends on the cluster manager:
    • Local mode: number of cores on the local machine
    • Mesos fine grained mode: 8
    • Others: total number of cores on all executor nodes or 2, whichever is larger
    Meaning: Default number of partitions in RDDs returned by transformations like join, reduceByKey, and parallelize when not set by user.
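
    To see which of these defaults actually applies in a given deployment, the effective value can be read back from the SparkContext at runtime. A small sketch (the app name is a placeholder; it runs in local mode, where the reported value should match the local core count per the table above; in spark-shell the existing sc can be used directly):

     // Inspect the effective default parallelism (sketch; app name is a placeholder)
     import org.apache.spark.{SparkConf, SparkContext}

     val conf = new SparkConf()
       .setAppName("check-parallelism")   // placeholder app name
       .setMaster("local[*]")             // local mode, just for a quick check
     val sc = new SparkContext(conf)

     // Effective value of spark.default.parallelism for this application
     println(s"defaultParallelism = ${sc.defaultParallelism}")

     // parallelize with no explicit partition count falls back to this default
     val rdd = sc.parallelize(1 to 1000)
     println(s"partitions of parallelize(1 to 1000) = ${rdd.partitions.length}")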

    From: http://spark.apache.org/docs/latest/tuning.html

    Level of Parallelism

    Clusters will not be fully utilized unless you set the level of parallelism for each operation high enough. Spark automatically sets the number of “map” tasks to run on each file according to its size (though you can control it through optional parameters to SparkContext.textFile, etc), and for distributed “reduce” operations, such as groupByKey and reduceByKey, it uses the largest parent RDD’s number of partitions. You can pass the level of parallelism as a second argument (see the spark.PairRDDFunctions documentation), or set the config property spark.default.parallelism to change the default. In general, we recommend 2-3 tasks per CPU core in your cluster.
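
    As a concrete illustration of the second-argument option mentioned above, a shuffle operation such as reduceByKey can take an explicit partition count that overrides spark.default.parallelism for that one operation (a sketch; the input path comes in as a parameter and the value 20 is only an example):

     // Sketch: pass the level of parallelism directly to a shuffle operation
     import org.apache.spark.SparkContext

     def wordCount(sc: SparkContext, path: String): Unit = {
       val pairs = sc.textFile(path)
         .flatMap(_.split("\\s+"))
         .map(word => (word, 1))

       // The second argument (20 here, i.e. roughly 2-3 tasks per core on a
       // small cluster) sets the partition count for this reduce only.
       val counts = pairs.reduceByKey(_ + _, 20)
       println(s"partitions = ${counts.partitions.length}")
     }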

  • Original post: https://www.cnblogs.com/wrencai/p/4231966.html