HA Distributed Cluster Configuration (3): Spark Cluster Configuration

    (1) Configuring Spark under HA

    1. Spark version: spark-2.1.0-bin-hadoop2.7

    2. Extract the archive and configure environment variables

    tar -zxvf spark-2.1.0-bin-hadoop2.7.tgz
    mv spark-2.1.0-bin-hadoop2.7 /usr/spark-2.1.0
    
    vim /etc/profile
    export JAVA_HOME=/usr/java
    export SCALA_HOME=/usr/scala
    export HADOOP_HOME=/usr/hadoop-2.7.3
    export ZK_HOME=/usr/zookeeper-3.4.8
    export MYSQL_HOME=/usr/local/mysql
    export HIVE_HOME=/usr/hive-2.1.1
    export SPARK_HOME=/usr/spark-2.1.0
    export PATH=$SPARK_HOME/bin:$HIVE_HOME/bin:$MYSQL_HOME/bin:$ZK_HOME/bin:$JAVA_HOME/bin:$SCALA_HOME/bin:$HADOOP_HOME/bin:$PATH 
    export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar 
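    After saving /etc/profile, reload it so the new variables take effect in the current shell, and run a quick sanity check (a minimal sketch, assuming the paths above exist on this node):

    source /etc/profile
    echo $SPARK_HOME        # should print /usr/spark-2.1.0
    spark-submit --version  # should report Spark 2.1.0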

    3. Edit spark-env.sh

    cd $SPARK_HOME/conf
    cp spark-env.sh.template spark-env.sh
    vim spark-env.sh
    # add the following
    export JAVA_HOME=/usr/java
    export SCALA_HOME=/usr/scala
    export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=ha1:2181,ha2:2181,ha3:2181 -Dspark.deploy.zookeeper.dir=/spark"  
    export HADOOP_CONF_DIR=/usr/hadoop-2.7.3/etc/hadoop
    export SPARK_MASTER_PORT=7077
    export SPARK_EXECUTOR_INSTANCES=1
    export SPARK_WORKER_INSTANCES=1
    export SPARK_WORKER_CORES=1
    export SPARK_WORKER_MEMORY=1024M
    export SPARK_MASTER_WEBUI_PORT=8080
    export SPARK_CONF_DIR=/usr/spark-2.1.0/conf
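    With spark.deploy.recoveryMode=ZOOKEEPER, the masters keep their recovery state under the znode set by spark.deploy.zookeeper.dir. Once the cluster is up (step 5), you can confirm the masters registered in ZooKeeper; a sketch, assuming zkCli.sh from ZK_HOME and that the child znode names (leader_election, master_status) match this Spark version:

    $ZK_HOME/bin/zkCli.sh -server ha1:2181
    # inside the ZooKeeper shell:
    ls /spark
    ls /spark/leader_election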

    4. Edit the slaves file

    cp slaves.template slaves
    vim slaves
    # add the worker hosts
    ha2
    ha3
    ha4
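    start-all.sh launches a Worker over SSH on every host listed in slaves, so passwordless SSH from ha1 to the workers must already be in place (set up in the earlier HA parts). A quick check, assuming that key setup:

    for h in ha2 ha3 ha4; do ssh $h hostname; done   # should print each hostname without a password prompt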

    5. Distribute and start

    cd /usr
    scp -r spark-2.1.0 root@ha4:/usr
    scp -r spark-2.1.0 root@ha3:/usr
    scp -r spark-2.1.0 root@ha2:/usr
    scp -r spark-2.1.0 root@ha1:/usr
    # on ha1
    $SPARK_HOME/sbin/start-all.sh
    # on ha2 and ha3 (standby masters)
    $SPARK_HOME/sbin/start-master.sh
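    Each master's web UI (port 8080, per SPARK_MASTER_WEBUI_PORT) reports its status; with ZooKeeper recovery, exactly one master should be ALIVE and the others STANDBY. A rough command-line check (assuming curl is available and that grepping the UI's HTML works for this version):

    for h in ha1 ha2 ha3; do
        echo -n "$h: "; curl -s http://$h:8080 | grep -o 'ALIVE\|STANDBY' | head -1
    done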

    jps output on each node:

    [root@ha1 spark-2.1.0]# jps
    2464 NameNode
    2880 ResourceManager
    2771 DFSZKFailoverController
    3699 Jps
    2309 QuorumPeerMain
    3622 Master
    [root@ha2 zookeeper-3.4.8]# jps
    2706 NodeManager
    3236 Jps
    2485 JournalNode
    3189 Worker
    2375 DataNode
    2586 DFSZKFailoverController
    2236 QuorumPeerMain
    2303 NameNode
    3622 Master
    [root@ha3 zookeeper-3.4.8]# jps
    2258 DataNode
    2466 NodeManager
    2197 QuorumPeerMain
    2920 Jps
    2873 Worker
    2331 JournalNode
    3622 Master
    [root@ha4 ~]# jps
    2896 Jps
    2849 Worker
    2307 JournalNode
    2443 NodeManager
    2237 DataNode

    6. Shut down and take a snapshot (sparkok)

    # cluster startup order
    # on ha1, ha2, and ha3
    cd $ZK_HOME
    ./bin/zkServer.sh start
    # on ha1
    cd $HADOOP_HOME
    ./sbin/start-all.sh
    cd $SPARK_HOME
    ./sbin/start-all.sh
    # on ha2 and ha3
    cd $SPARK_HOME
    ./sbin/start-master.sh
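    To exercise the HA setup, submit a job that lists all three masters and then stop the ALIVE master; the workers and the running application should fail over to a standby. A sketch, assuming the examples jar shipped with spark-2.1.0-bin-hadoop2.7 (the exact jar name may differ):

    # submit against all masters; the client connects to whichever is ALIVE
    spark-submit --master spark://ha1:7077,ha2:7077,ha3:7077 \
        --class org.apache.spark.examples.SparkPi \
        $SPARK_HOME/examples/jars/spark-examples_2.11-2.1.0.jar 100

    # on the currently ALIVE master, trigger a failover
    $SPARK_HOME/sbin/stop-master.sh
    # a standby master should become ALIVE and the workers should re-register with it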
    Original post: https://www.cnblogs.com/ksWorld/p/7295646.html