Downloads for all Apache software and versions: http://archive.apache.org/dist/
1. Download Spark.
sudo tar -zxf ~/Downloads/spark-2.0.2-bin-without-hadoop.tgz -C /usr/local/
cd /usr/local
sudo mv ./spark-2.0.2-bin-without-hadoop/ ./spark
sudo chown -R hadoop ./spark    # use the account that will run Spark; the later steps assume user "hadoop"
2. On the Master node, run the following commands in a terminal:
vim ~/.bashrc
export SPARK_HOME=/usr/local/spark
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
Run the following command to make the configuration take effect immediately:
source ~/.bashrc
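The two export lines can be sanity-checked right after sourcing. A minimal sketch (assuming Spark was unpacked to /usr/local/spark as in step 1):

```shell
# Re-apply the two .bashrc lines and confirm PATH now contains Spark's bin directory
export SPARK_HOME=/usr/local/spark
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin

case ":$PATH:" in
  *":/usr/local/spark/bin:"*) echo "spark bin on PATH" ;;
  *) echo "spark bin missing" ;;
esac
```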
3. On the Master node, do the following:
Configure the slaves file
Copy slaves.template to slaves:
- cd /usr/local/spark/
- cp ./conf/slaves.template ./conf/slaves
The slaves file lists the Worker nodes. Edit slaves and replace the default localhost with the hostnames of all Worker nodes (the later steps copy Spark to both slave01 and slave02):
slave01
slave02
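The resulting conf/slaves file is just one Worker hostname per line. A sketch that writes it with a here-document, assuming workers slave01 and slave02 (writing to /tmp here so it can be tried anywhere; on the Master the target is /usr/local/spark/conf/slaves):

```shell
# One Worker hostname per line; blank lines and lines starting with # are ignored
cat > /tmp/slaves <<'EOF'
slave01
slave02
EOF
cat /tmp/slaves
```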
Configure the spark-env.sh file
Copy spark-env.sh.template to spark-env.sh:
cp ./conf/spark-env.sh.template ./conf/spark-env.sh
Edit spark-env.sh and add the following lines:
export SPARK_DIST_CLASSPATH=$(/usr/local/hadoop/bin/hadoop classpath)
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SPARK_MASTER_IP=192.168.1.104
SPARK_MASTER_IP specifies the IP address of the Spark cluster's Master node (replace 192.168.1.104 with your own Master's address);
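The three settings can also be appended in one here-document. A sketch writing to /tmp for illustration (on the Master the target is /usr/local/spark/conf/spark-env.sh, and 192.168.1.104 stands in for your own Master IP):

```shell
# Quote 'EOF' so that $(...) is written literally into the file and expanded
# by Spark at startup, not by this shell right now
cat >> /tmp/spark-env.sh <<'EOF'
export SPARK_DIST_CLASSPATH=$(/usr/local/hadoop/bin/hadoop classpath)
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SPARK_MASTER_IP=192.168.1.104
EOF
grep SPARK_MASTER_IP /tmp/spark-env.sh
```

Quoting the here-document delimiter matters: an unquoted EOF would run `hadoop classpath` on the spot, baking a stale classpath into the file.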
Once configured, copy the /usr/local/spark folder from the Master host to each Worker node. On the Master host, run:
- cd /usr/local/
- tar -zcf ~/spark.master.tar.gz ./spark
- cd ~
- scp ./spark.master.tar.gz slave01:/home/hadoop
- scp ./spark.master.tar.gz slave02:/home/hadoop
On slave01 and slave02, perform the same steps:
- sudo rm -rf /usr/local/spark/
- sudo tar -zxf ~/spark.master.tar.gz -C /usr/local
- sudo chown -R hadoop /usr/local/spark
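The copy-and-unpack steps above can be folded into one loop over the Worker hostnames. A dry-run sketch (echo prints the commands instead of running them; drop the echo on a real cluster where passwordless ssh to slave01/slave02 is set up):

```shell
# For each Worker: ship the tarball, then replace the old install and fix ownership
for node in slave01 slave02; do
  echo scp ~/spark.master.tar.gz "$node:/home/hadoop"
  echo ssh "$node" "sudo rm -rf /usr/local/spark && sudo tar -zxf ~/spark.master.tar.gz -C /usr/local && sudo chown -R hadoop /usr/local/spark"
done
```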
4. Start the Hadoop cluster (run on the Master node):
- cd /usr/local/hadoop/
- sbin/start-all.sh
5. Start the Spark cluster (run on the Master node):
cd /usr/local/spark/
sbin/start-master.sh
Run the jps command on the Master node; you should now see a Master process:
15093 Jps
14343 SecondaryNameNode
14121 NameNode
14891 Master
14509 ResourceManager
Start all Slave (Worker) nodes by running the following command on the Master node:
sbin/start-slaves.sh
Running jps on each Slave node should now show a Worker process:
37553 DataNode
37684 NodeManager
37876 Worker
37924 Jps
http://172.19.57.221:8080/ — the Spark web UI (the Master host's address, port 8080).
6. Shut down the Spark cluster
Stop the Master node:
sbin/stop-master.sh
Stop the Worker nodes:
sbin/stop-slaves.sh
Shut down the Hadoop cluster:
- cd /usr/local/hadoop/
- sbin/stop-all.sh