Spark Tutorial (6): Python Programming and the spark-submit Command

    Hadoop is written in Java and natively supports Java; Spark is written in Scala and natively supports Scala.

    Spark also supports Java, Python, and R. This article covers only Python.

    Spark 1.x and Spark 2.x usage differs slightly, but most Spark 1.x usage also applies to Spark 2.x.

    PySpark

    PySpark is a Python library: Python + Spark. Simply put, if you want to drive Spark from Python, you have to use the pyspark module.

    Programming Logic

    Environment

    First, configure /etc/profile:

    # python can call pyspark directly
    export PYTHONPATH=$SPARK_HOME/python:$SPARK_HOME/python/pyspark:$SPARK_HOME/python/lib/py4j-0.10.4-src.zip:$PYTHONPATH

    This adds Spark's python and pyspark directories, plus py4j-0.10.4-src.zip, to Python's search path; py4j is what handles the conversion between Python and Java. (The py4j version in the filename varies by Spark release; check $SPARK_HOME/python/lib for the exact name.)
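    After reloading the profile (source /etc/profile), a quick sanity check, not part of the original article, is to import pyspark from a plain Python shell:

    # If PYTHONPATH is set correctly, this import succeeds without pip-installing pyspark
    import pyspark
    print(pyspark.__file__)  # should resolve to a path under $SPARK_HOME/python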

    Programming

    Step 1: create a SparkSession or SparkContext.

    In Spark 1.x you create a SparkContext.

    In Spark 2.x you create a SparkSession; or, put another way, you create a SparkSession in Spark SQL applications (see the sketch after these two steps).

    Step 2: create an RDD and operate on it.
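    A minimal sketch of the two entry points, reusing the master URL spark://hadoop10:7077 from the full example below; the app name "demo" is an arbitrary placeholder:

    # Spark 1.x style: create a SparkContext directly
    from pyspark import SparkConf, SparkContext
    conf = SparkConf().setAppName("demo").setMaster("spark://hadoop10:7077")
    sc = SparkContext(conf=conf)
    sc.stop()  # only one SparkContext may be active at a time

    # Spark 2.x style: create a SparkSession (the Spark SQL entry point)
    from pyspark.sql import SparkSession
    spark = SparkSession.builder \
        .appName("demo") \
        .master("spark://hadoop10:7077") \
        .getOrCreate()
    sc = spark.sparkContext  # the underlying SparkContext is still available
    spark.stop()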

    Complete example

    from __future__ import print_function
    from pyspark import SparkContext
    import os

    print(os.environ['SPARK_HOME'])
    print(os.environ['HADOOP_HOME'])

    if __name__ == '__main__':
        # connect to the standalone master
        sc = SparkContext("spark://hadoop10:7077")
        # build an RDD of words
        rdd = sc.parallelize("hello Pyspark world".split(' '))
        # word count: map each word to (word, 1), then sum counts per word
        counts = rdd.map(lambda word: (word, 1)) \
            .reduceByKey(lambda a, b: a + b)
        counts.saveAsTextFile('/usr/lib/spark/out')
        # note: foreach(print) runs on the executors, so output appears in worker logs
        counts.foreach(print)

        sc.stop()

    How to run it

    1. With the python command

    2. With the spark-submit command

    bin/spark-submit test1.py

    This is only the simplest invocation; the spark-submit command is covered in detail below.

    Monitoring jobs

    In script mode, jobs can be monitored through the standalone master's web UI at http://192.168.10.10:8080/.

    spark-submit

    [root@hadoop10 hadoop-2.6.5]# spark-submit --help
    Options:
      --master MASTER_URL         spark://host:port, mesos://host:port, yarn,       specifies the Spark run mode; even if the master is set in the code, it must be specified here again
                                  k8s://https://host:port, or local (Default: local[*]).
      --deploy-mode DEPLOY_MODE   Whether to launch the driver program locally ("client") or        client mode or cluster mode
                                  on one of the worker machines inside the cluster ("cluster")
                                  (Default: client).
      --class CLASS_NAME          Your application's main class (for Java / Scala apps).
      --name NAME                 A name of your application.
      --jars JARS                 Comma-separated list of jars to include on the driver
                                  and executor classpaths.
      --packages                  Comma-separated list of maven coordinates of jars to include
                                  on the driver and executor classpaths. Will search the local
                                  maven repo, then maven central and any additional remote
                                  repositories given by --repositories. The format for the
                                  coordinates should be groupId:artifactId:version.
      --exclude-packages          Comma-separated list of groupId:artifactId, to exclude while
                                  resolving the dependencies provided in --packages to avoid
                                  dependency conflicts.
      --repositories              Comma-separated list of additional remote repositories to
                                  search for the maven coordinates given with --packages.
      --py-files PY_FILES         Comma-separated list of .zip, .egg, or .py files to place
                                  on the PYTHONPATH for Python apps.
      --files FILES               Comma-separated list of files to be placed in the working
                                  directory of each executor. File paths of these files
                                  in executors can be accessed via SparkFiles.get(fileName).
    
      --conf PROP=VALUE           Arbitrary Spark configuration property.
      --properties-file FILE      Path to a file from which to load extra properties. If not
                                  specified, this will look for conf/spark-defaults.conf.
    
      --driver-memory MEM         Memory for driver (e.g. 1000M, 2G) (Default: 1024M).      sets the driver memory
      --driver-java-options       Extra Java options to pass to the driver.
      --driver-library-path       Extra library path entries to pass to the driver.
      --driver-class-path         Extra class path entries to pass to the driver. Note that
                                  jars added with --jars are automatically included in the
                                  classpath.
    
      --executor-memory MEM       Memory per executor (e.g. 1000M, 2G) (Default: 1G).       sets the executor memory
    
      --proxy-user NAME           User to impersonate when submitting the application.
                                  This argument does not work with --principal / --keytab.
    
      --help, -h                  Show this help message and exit.      lists all options
      --verbose, -v               Print additional debug output.
      --version,                  Print the version of current Spark.
    
     Cluster deploy mode only:
      --driver-cores NUM          Number of cores used by the driver, only in cluster mode      sets the number of driver CPU cores
                                  (Default: 1).
    
     Spark standalone or Mesos with cluster deploy mode only:
      --supervise                 If given, restarts the driver on failure.
      --kill SUBMISSION_ID        If given, kills the driver specified.
      --status SUBMISSION_ID      If given, requests the status of the driver specified.
    
     Spark standalone and Mesos only:
      --total-executor-cores NUM  Total cores for all executors.
    
     Spark standalone and YARN only:
      --executor-cores NUM        Number of cores per executor. (Default: 1 in YARN mode,
                                  or all available cores on the worker in standalone mode)
    
     YARN-only:
      --queue QUEUE_NAME          The YARN queue to submit to (Default: "default").
      --num-executors NUM         Number of executors to launch (Default: 2).
                                  If dynamic allocation is enabled, the initial number of
                                  executors will be at least NUM.
      --archives ARCHIVES         Comma separated list of archives to be extracted into the
                                  working directory of each executor.
      --principal PRINCIPAL       Principal to be used to login to KDC, while running on
                                  secure HDFS.
      --keytab KEYTAB             The full path to the file that contains the keytab for the
                                  principal specified above. This keytab will be copied to
                                  the node running the Application Master via the Secure
                                  Distributed Cache, for renewing the login tickets and the
                                  delegation tokens periodically.

    Note that the options come first and the file to run comes last, for example:

    spark-submit --master yarn-client  --driver-memory 512m  xx.py

    (In Spark 2.x, yarn-client is deprecated in favor of --master yarn --deploy-mode client.)
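    As a fuller, hedged illustration combining several of the options documented above (deps.zip is a hypothetical archive of extra Python modules; the other values are placeholders):

    # Submit test1.py to the standalone master with explicit resource settings.
    # deps.zip is hypothetical; extra Python code is shipped with --py-files.
    spark-submit \
      --master spark://hadoop10:7077 \
      --deploy-mode client \
      --driver-memory 1G \
      --executor-memory 1G \
      --total-executor-cores 4 \
      --py-files deps.zip \
      test1.py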