  • Spark installation (hands-on)

    Spark SQL + Hive reference: http://lxw1234.com/archives/2015/06/294.htm

    1. Install Scala

    http://scala-lang.org/download/2.11.8.html

    scala-2.11.8.tgz

    Place it under /usr/bigdata and extract:

    tar -zxvf scala-2.11.8.tgz

    vi /etc/profile

    export SCALA_HOME=/usr/bigdata/scala-2.11.8
    export PATH=$PATH:$SCALA_HOME/bin

    source /etc/profile
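
    To confirm the Scala installation is picked up from the updated PATH (a quick sanity check, not part of the original steps):

    scala -version
    # should report: Scala code runner version 2.11.8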

    2. Install Spark

    Install location: /usr/bigdata/spark-1.6.2-bin-hadoop2.6

    Version: spark-1.6.2-bin-hadoop2.6.tgz

    Place it under /usr/bigdata and extract:

    tar -zxvf  spark-1.6.2-bin-hadoop2.6.tgz

    vi /etc/profile

    export SPARK_HOME=/usr/bigdata/spark-1.6.2-bin-hadoop2.6
    export PATH=$PATH:$SPARK_HOME/bin

    source /etc/profile
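
    To check that the Spark binaries are on the PATH (again just a sanity check, not in the original steps):

    spark-submit --version
    # should print the Spark 1.6.2 version banner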

    3. Configure Spark

    vi /usr/bigdata/spark-1.6.2-bin-hadoop2.6/conf/spark-env.sh

    export JAVA_HOME=/usr/java/jdk1.7.0_80
    export SCALA_HOME=/usr/bigdata/scala-2.11.8
    export HADOOP_HOME=/usr/bigdata/hadoop-2.6.2
    #HADOOP_OPTS=-Djava.library.path=/usr/bigdata/hadoop-2.6.2/lib/native
    export HADOOP_CONF_DIR=/usr/bigdata/hadoop-2.6.2/etc/hadoop

    export SPARK_CLASSPATH=$SPARK_CLASSPATH:$SPARK_HOME/lib/mysql-connector-java-5.1.38.jar
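
    Note: in a freshly extracted distribution, conf/spark-env.sh and conf/slaves usually do not exist yet; Spark ships them as templates, so a common preliminary step (assuming a stock download) is:

    cd /usr/bigdata/spark-1.6.2-bin-hadoop2.6/conf
    cp spark-env.sh.template spark-env.sh
    cp slaves.template slaves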

    vi slaves

    vm-10-112-29-172
    vm-10-112-29-174

    Start the cluster (from $SPARK_HOME):

    ./sbin/start-all.sh

    Apply the same configuration on every node:

    scp -r /usr/bigdata/scala-2.11.8 root@vm-10-112-29-172:/usr/bigdata/

    scp -r /usr/bigdata/spark-1.6.2-bin-hadoop2.6 root@vm-10-112-29-172:/usr/bigdata/

    scp /usr/bigdata/spark-1.6.2-bin-hadoop2.6/conf/slaves root@vm-10-112-29-172:/usr/bigdata/spark-1.6.2-bin-hadoop2.6/conf
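
    start-all.sh expects the master to be able to ssh to each host listed in conf/slaves without a password. If that is not already in place, it is typically set up along these lines (hostnames as used above):

    ssh-keygen -t rsa                      # on the master, accept the defaults
    ssh-copy-id root@vm-10-112-29-172
    ssh-copy-id root@vm-10-112-29-174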

     

     

    4. Check that the configuration succeeded

    Run the jps command:

    The master node should show a "Master" process, and each slave node should show a "Worker" process.
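
    For reference, a healthy cluster looks roughly like this (process IDs will differ); the standalone master web UI is also reachable on port 8080 by default:

    # on the master node
    jps
    # 12345 Master

    # on each worker node
    jps
    # 23456 Worker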

    5. Run a quick test

    cd bin/

    run-example SparkPi

    Expected output:

    Pi is roughly 3.14506
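
    The same example can also be submitted explicitly through spark-submit; the examples jar name below is assumed from the 1.6.2 / Hadoop 2.6 layout and may need adjusting:

    ./spark-submit --class org.apache.spark.examples.SparkPi \
        --master local[2] \
        ../lib/spark-examples-1.6.2-hadoop2.6.0.jar 100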

    6. Run one of the examples bundled with Spark:

     ./bin/run-example org.apache.spark.examples.sql.JavaSparkSQL

    7. Hands-on Spark examples:

    http://my.oschina.net/scipio/blog/284957

    Start the JDBC (Thrift) service:

    ./start-thriftserver.sh --master yarn --hiveconf hive.server2.thrift.port=10009
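
    Once the Thrift server is up, a JDBC client such as beeline (shipped under $SPARK_HOME/bin) can connect to the port configured above; the host below is a placeholder:

    ./bin/beeline
    beeline> !connect jdbc:hive2://<thrift-server-host>:10009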

    Start the spark-sql client:

    ./bin/spark-sql --master yarn-client --jars /usr/bigdata/spark-1.6.2-bin-hadoop2.6/lib/mysql-connector-java-5.1.17.jar
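
    A quick way to confirm the client can reach the Hive metastore is to list databases directly from the CLI (a minimal check, not from the original post):

    ./spark-sql --master yarn-client -e "show databases;"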

     -------------------------Commands-----------------------------------

    1: ./spark-sql --master yarn-client
    2: ./spark-sql --master yarn-client --total-executor-cores 20 --driver-memory 1g --executor-memory 6g --executor-cores 6 --num-executors 100 --conf spark.default.parallelism=1000 --conf spark.storage.memoryFraction=0.5 --conf spark.shuffle.memoryFraction=0.3

    3: ./spark-sql --master yarn-client --total-executor-cores 20 --driver-memory 1g --executor-memory 6g --executor-cores 6 --num-executors 200 --conf spark.default.parallelism=1200 --conf spark.storage.memoryFraction=0.4 --conf spark.shuffle.memoryFraction=0.4

    ./spark-sql --master yarn-client --total-executor-cores 20 --driver-memory 1g --executor-memory 6g --executor-cores 6 --num-executors 200 --conf spark.default.parallelism=1200 --conf spark.storage.memoryFraction=0.4 --conf spark.shuffle.memoryFraction=0.4 --conf spark.sql.shuffle.partitions=300

    ./start-thriftserver.sh  --hiveconf hive.server2.thrift.port=10009

    Then, from a beeline session:

    !connect jdbc:hive2://node6:10009

  • Original source: https://www.cnblogs.com/8899man/p/5795642.html