  • 1.3 Cluster resource management

    spark-submit help:

    [centos@s101 ~/myspark]$ spark-submit --help
    Usage: spark-submit [options] <app jar | python file> [app arguments]
    Usage: spark-submit --kill [submission ID] --master [spark://...]
    Usage: spark-submit --status [submission ID] --master [spark://...]
    Usage: spark-submit run-example [options] example-class [example args]
    
    Options:
      --master MASTER_URL         spark://host:port, mesos://host:port, yarn, or local.
      --deploy-mode DEPLOY_MODE   Whether to launch the driver program locally ("client") or
                                  on one of the worker machines inside the cluster ("cluster")
                                  (Default: client).
      --class CLASS_NAME          Your application's main class (for Java / Scala apps).
      --name NAME                 A name of your application.
      --jars JARS                 Comma-separated list of local jars to include on the driver
                                  and executor classpaths.
      --packages                  Comma-separated list of maven coordinates of jars to include
                                  on the driver and executor classpaths. Will search the local
                                  maven repo, then maven central and any additional remote
                                  repositories given by --repositories. The format for the
                                  coordinates should be groupId:artifactId:version.
      --exclude-packages          Comma-separated list of groupId:artifactId, to exclude while
                                  resolving the dependencies provided in --packages to avoid
                                  dependency conflicts.
      --repositories              Comma-separated list of additional remote repositories to
                                  search for the maven coordinates given with --packages.
      --py-files PY_FILES         Comma-separated list of .zip, .egg, or .py files to place
                                  on the PYTHONPATH for Python apps.
      --files FILES               Comma-separated list of files to be placed in the working
                                  directory of each executor.
    
      --conf PROP=VALUE           Arbitrary Spark configuration property.
      --properties-file FILE      Path to a file from which to load extra properties. If not
                                  specified, this will look for conf/spark-defaults.conf.
    
      --driver-memory MEM         Memory for driver (e.g. 1000M, 2G) (Default: 1024M).
      --driver-java-options       Extra Java options to pass to the driver.
      --driver-library-path       Extra library path entries to pass to the driver.
      --driver-class-path         Extra class path entries to pass to the driver. Note that
                                  jars added with --jars are automatically included in the
                                  classpath.
    
      --executor-memory MEM       Memory per executor (e.g. 1000M, 2G) (Default: 1G).
    
      --proxy-user NAME           User to impersonate when submitting the application.
                                  This argument does not work with --principal / --keytab.
    
      --help, -h                  Show this help message and exit.
      --verbose, -v               Print additional debug output.
      --version,                  Print the version of current Spark.
    
     Spark standalone with cluster deploy mode only:
      --driver-cores NUM          Cores for driver (Default: 1).
    
     Spark standalone or Mesos with cluster deploy mode only:
      --supervise                 If given, restarts the driver on failure.
      --kill SUBMISSION_ID        If given, kills the driver specified.
      --status SUBMISSION_ID      If given, requests the status of the driver specified.
    
     Spark standalone and Mesos only:
      --total-executor-cores NUM  Total cores for all executors.
    
     Spark standalone and YARN only:
      --executor-cores NUM        Number of cores per executor. (Default: 1 in YARN mode,
                                  or all available cores on the worker in standalone mode)
    
     YARN-only:
      --driver-cores NUM          Number of cores used by the driver, only in cluster mode
                                  (Default: 1).
      --queue QUEUE_NAME          The YARN queue to submit to (Default: "default").
      --num-executors NUM         Number of executors to launch (Default: 2).
                                  If dynamic allocation is enabled, the initial number of
                                  executors will be at least NUM.
      --archives ARCHIVES         Comma separated list of archives to be extracted into the
                                  working directory of each executor.
      --principal PRINCIPAL       Principal to be used to login to KDC, while running on
                                  secure HDFS.
      --keytab KEYTAB             The full path to the file that contains the keytab for the
                                  principal specified above. This keytab will be copied to
                                  the node running the Application Master via the Secure
                                  Distributed Cache, for renewing the login tickets and the
                                  delegation tokens periodically.

    core : a worker offers all cores of its node by default; the core count is checked against the physical hardware.
    memory : 1g is used by default; the memory setting is not checked against physical RAM.

    --driver-memory 2g //sets the driver heap size (default: 1g)
    --executor-memory MEM //memory per executor (default: 1G)
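
    For example, both memory flags can be passed straight on the command line. A minimal sketch, assuming the bundled SparkPi example jar sits under /soft/spark (the exact path and version suffix are assumptions and may differ on your install):

    # submit SparkPi with an explicit driver heap and per-executor memory
    spark-submit \
      --master spark://s101:7077 \
      --class org.apache.spark.examples.SparkPi \
      --driver-memory 2g \
      --executor-memory 1g \
      /soft/spark/examples/jars/spark-examples_2.11-2.1.0.jar 100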

    [standalone + cluster]
    --driver-cores NUM //number of cores for the driver (cluster deploy mode only)
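
    In standalone cluster deploy mode the driver itself runs on a worker, so --driver-cores takes effect and the application jar must be reachable from every node. A sketch, where the HDFS path is an assumption:

    # launch the driver on a worker with 2 cores
    spark-submit \
      --master spark://s101:7077 \
      --deploy-mode cluster \
      --driver-cores 2 \
      --class org.apache.spark.examples.SparkPi \
      hdfs://s101/user/centos/spark-examples_2.11-2.1.0.jar 100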

    [Spark standalone and Mesos]
    --total-executor-cores NUM //total number of cores across all executors
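
    For instance, capping the whole application at 4 cores (a sketch; jar path assumed as above):

    # Spark stops allocating executors once 4 cores are in use across the application
    spark-submit \
      --master spark://s101:7077 \
      --total-executor-cores 4 \
      --class org.apache.spark.examples.SparkPi \
      /soft/spark/examples/jars/spark-examples_2.11-2.1.0.jar 100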

    [Spark standalone | YARN]
    --executor-cores NUM //cores per executor; defaults to 1 in YARN mode and to all available cores on the worker in standalone mode
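
    Combined with --total-executor-cores, this flag fixes the executor count: 4 total cores at 2 cores each yields at most 4 / 2 = 2 executors. A sketch:

    # at most 2 executors, each holding 2 cores
    spark-submit \
      --master spark://s101:7077 \
      --total-executor-cores 4 \
      --executor-cores 2 \
      --class org.apache.spark.examples.SparkPi \
      /soft/spark/examples/jars/spark-examples_2.11-2.1.0.jar 100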

    [YARN-only]
    --driver-cores NUM //driver cores, cluster mode only (Default: 1)
    --num-executors NUM //number of executors to launch (Default: 2)
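
    A YARN submit sketch (assumes HADOOP_CONF_DIR points at the cluster configuration and that the target queue exists):

    # 3 executors of 1g each on the default queue
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --queue default \
      --num-executors 3 \
      --executor-memory 1g \
      --class org.apache.spark.examples.SparkPi \
      /soft/spark/examples/jars/spark-examples_2.11-2.1.0.jar 100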

    Standalone mode

    [spark/conf/spark-env.sh]
    export JAVA_HOME=/soft/jdk
    
    # number of cores each worker may use
    export SPARK_WORKER_CORES=2
    # amount of memory each worker may use
    export SPARK_WORKER_MEMORY=2g
    # number of worker processes to start on each node
    export SPARK_WORKER_INSTANCES=2
    # memory for the master and worker daemons themselves
    export SPARK_DAEMON_MEMORY=200m

    After editing spark-env.sh, distribute it to every node; one option is shown below.
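
    One way to push the file out (a sketch; assumes passwordless ssh from s101 and Spark installed at /soft/spark on every node):

    # copy spark-env.sh to each worker host
    for h in s102 s103 s104; do
      scp /soft/spark/conf/spark-env.sh $h:/soft/spark/conf/
    done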

    //start spark-shell: s102-s104 each run two workers, and each worker launches two CoarseGrainedExecutorBackend (executor) processes
    spark-shell --master spark://s101:7077 --driver-memory 2g --executor-memory 1g --driver-cores 2 --executor-cores 1
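
    The per-worker executor count follows from the limits above: each worker advertises 2 cores and 2g, each executor asks for 1 core and 1g, so a worker hosts min(2/1, 2g/1g) = 2 executors; with two workers on each of s102-s104 that is up to 12 executors in total. This can be verified on the master web UI (http://s101:8080 by default) or by listing the JVMs on a worker host, assuming jps is on the remote PATH:

    # CoarseGrainedExecutorBackend processes are the executors
    ssh s102 jps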