Big Data Operations (43): Spark 2.4.6 Cluster Deployment

    Install the Hadoop cluster before installing Spark.
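
    A quick way to confirm the Hadoop cluster is up before proceeding (a minimal check; run it as the user that owns the Hadoop daemons, since jps only lists that user's JVMs):

    jps                    # should list NameNode on the master, DataNode on the workers
    hdfs dfsadmin -report  # should show every DataNode reporting in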

    Spark download address:

    https://downloads.apache.org/spark/

    Download the installation package:

    wget https://downloads.apache.org/spark/spark-2.4.6/spark-2.4.6-bin-hadoop2.7.tgz
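
    Optionally verify the download before distributing it. A minimal sketch, assuming the .sha512 file sits next to the tarball in Apache's usual layout:

    wget https://downloads.apache.org/spark/spark-2.4.6/spark-2.4.6-bin-hadoop2.7.tgz.sha512
    sha512sum spark-2.4.6-bin-hadoop2.7.tgz
    # Compare the printed digest with the contents of the .sha512 file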

    Copy the installation package to each node:

    scp spark-2.4.6-bin-hadoop2.7.tgz root@hadoop-node1:/root
    scp spark-2.4.6-bin-hadoop2.7.tgz root@hadoop-node2:/root

    Extract and install:

    tar -xf spark-2.4.6-bin-hadoop2.7.tgz -C /usr/local/
    cd /usr/local/
    ln -sv spark-2.4.6-bin-hadoop2.7/ spark

    Configure environment variables:

    cat > /etc/profile.d/spark.sh <<'EOF'
    export SPARK_HOME=/usr/local/spark
    export PATH=$PATH:$SPARK_HOME/bin
    EOF
    source /etc/profile.d/spark.sh
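
    A quick sanity check that the variables took effect in the current shell:

    echo $SPARK_HOME          # should print /usr/local/spark
    spark-submit --version    # should report Spark version 2.4.6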

    Configure the worker nodes; here the master node also serves as a worker.

    cat > /usr/local/spark/conf/slaves <<EOF
    hadoop-master
    hadoop-node1
    hadoop-node2
    EOF
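
    Note that the cluster scripts (start-slaves.sh / start-all.sh) reach every host listed in slaves over SSH, so the hadoop user needs passwordless SSH from the master to each of them. This is usually already in place from the Hadoop install; if not, a minimal sketch:

    su - hadoop
    ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa    # skip if a key already exists
    for h in hadoop-master hadoop-node1 hadoop-node2; do
        ssh-copy-id hadoop@$h                   # push the public key to each node
    done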

    Copy the configuration file template:

    cp /usr/local/spark/conf/spark-env.sh.template /usr/local/spark/conf/spark-env.sh

    Set environment variables: Spark loads the variables in this file first at startup.

    cat >> /usr/local/spark/conf/spark-env.sh <<EOF
    export SPARK_MASTER_HOST=hadoop-master
    EOF
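
    spark-env.sh also accepts resource settings for the standalone daemons. The values below are illustrative assumptions, not from the original post; tune them to the hardware:

    cat >> /usr/local/spark/conf/spark-env.sh <<EOF
    export SPARK_MASTER_PORT=7077     # master RPC port (7077 is the default)
    export SPARK_WORKER_CORES=2       # CPU cores each worker offers
    export SPARK_WORKER_MEMORY=2g     # memory each worker offers
    EOF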

    Change the owner and group:

    cd /usr/local/
    chown -R hadoop:hadoop spark-2.4.6-bin-hadoop2.7/
    chown -h hadoop:hadoop spark

    Copy the configuration to the other nodes:

    cd /usr/local/spark/conf/
    scp ./* root@hadoop-node1:/usr/local/spark/conf/
    scp ./* root@hadoop-node2:/usr/local/spark/conf/

    Start the master node, running as the hadoop user:

    $ su - hadoop
    $ cd /usr/local/spark/sbin/
    $ ./start-master.sh
    starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-hadoop-master.out

    Check the processes running on the master node:

    $ jps
    5078 Master
    5163 Worker
    ...

    Start the worker nodes:

    $ ./start-slaves.sh
    hadoop-node1: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-hadoop-node1.out
    hadoop-node2: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-hadoop-node2.out

    On node1:

    $ jps
    2898 Worker
    ...

    Alternatively, start the master and all worker nodes in one step:

    $ ./start-all.sh
    starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-hadoop-master.out
    hadoop-master: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-hadoop-master.out
    hadoop-node2: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-hadoop-node2.out
    hadoop-node1: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-hadoop-node1.out
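
    With all daemons up, a common smoke test is to submit the bundled SparkPi example to the standalone master. A minimal sketch; the examples jar name assumes the Scala 2.11 build that this package ships with, so adjust the filename if yours differs:

    spark-submit \
      --master spark://hadoop-master:7077 \
      --class org.apache.spark.examples.SparkPi \
      $SPARK_HOME/examples/jars/spark-examples_2.11-2.4.6.jar 100
    # On success the driver output includes a line like "Pi is roughly 3.14..."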

    Web UI:

    http://192.168.0.54:8080/
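
    If no browser is handy, master liveness can be checked from a shell (a minimal check; 192.168.0.54 is the master's address from the original post):

    curl -sf http://192.168.0.54:8080/ >/dev/null && echo "master web UI is reachable"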


    Author: 大码王


    Original post: https://www.cnblogs.com/huanghanyu/p/13784064.html