  • Running Spark in Local Mode

    1. Download Scala and set SCALA_HOME and PATH.
    2. Download Hadoop and set HADOOP_HOME and PATH. I downloaded hadoop2.6_Win_x64-master; it is mainly needed for the winutils.exe utility.
    3. Download Spark, mainly to obtain spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar.
    4. Download the Scala IDE (scala-SDK) and configure the JRE Library and Scala Library.
    5. Create a new Scala Project and add spark-assembly-1.6.0-hadoop2.6.0.jar to the project's Libraries.
    6. Call val conf = new SparkConf().setMaster("local").setAppName("FileWordCount") and the Spark application will run locally; a minimal sketch follows below.
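
      A minimal sketch of such a local application (the input path below is a placeholder chosen for illustration, not taken from the original post):

      import org.apache.spark.{SparkConf, SparkContext}

      object FileWordCount {
        def main(args: Array[String]): Unit = {
          // "local" runs with a single worker thread; "local[*]" would use all logical cores
          val conf = new SparkConf().setMaster("local").setAppName("FileWordCount")
          val sc = new SparkContext(conf)

          // input.txt is a placeholder; point this at any local text file
          val counts = sc.textFile("input.txt")
            .flatMap(_.split("\\s+"))
            .map(word => (word, 1))
            .reduceByKey(_ + _)

          counts.collect().foreach(println)
          sc.stop()
        }
      }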

      You can also select local mode through the --master option of spark-submit (a command-line example follows the list of master URLs below):

    local
    Run Spark locally with one worker thread (i.e. no parallelism at all).


    local[K]
    Run Spark locally with K worker threads (ideally, set this to the number of cores on your machine).


    local[*]
    Run Spark locally with as many worker threads as logical cores on your machine.


    spark://HOST:PORT
    Connect to the given Spark standalone cluster master. The port must be whichever one your master is configured to use, which is 7077 by default.


    mesos://HOST:PORT
    Connect to the given Mesos cluster. The port must be whichever one your cluster is configured to use, which is 5050 by default. Or, for a Mesos cluster using ZooKeeper, use mesos://zk://.... To submit with --deploy-mode cluster, the HOST:PORT should be configured to connect to the MesosClusterDispatcher.


    yarn
    Connect to a YARN cluster in client or cluster mode depending on the value of --deploy-mode. The cluster location will be found based on the HADOOP_CONF_DIR or YARN_CONF_DIR variable.
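
    For example, the application could be submitted with an explicit master URL roughly like this (the class and jar names are placeholders, not from the original post):

    spark-submit \
      --class FileWordCount \
      --master local[4] \
      target/file-word-count.jar

    Note that values set explicitly in SparkConf (such as a hard-coded setMaster("local")) take precedence over spark-submit flags, so omit setMaster from the code if you want to choose the master on the command line.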

    For details, see:

    http://spark.apache.org/docs/latest/submitting-applications.html

  • Original article: https://www.cnblogs.com/machong/p/5806284.html