I. Install the JDK
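The JDK setup is not detailed here; a minimal sketch, assuming the Oracle JDK 7u79 Linux x64 tarball and the /data prefix used in spark-env.sh below:
# Extract the JDK under /data (tarball name assumed)
tar -zxvf jdk-7u79-linux-x64.tar.gz -C /data
# Make JAVA_HOME available system-wide, e.g. via /etc/profile
echo 'export JAVA_HOME=/data/jdk1.7.0_79' >> /etc/profile
echo 'export PATH=$JAVA_HOME/bin:$PATH' >> /etc/profile
source /etc/profile
java -version   # verify the installation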
II. Install Scala
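Similarly, a sketch for Scala 2.10.6 (tarball name assumed; the target path matches SCALA_HOME in spark-env.sh below):
tar -zxvf scala-2.10.6.tgz -C /data
echo 'export SCALA_HOME=/data/scala-2.10.6' >> /etc/profile
echo 'export PATH=$SCALA_HOME/bin:$PATH' >> /etc/profile
source /etc/profile
scala -version   # verify the installation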
III. Install Spark
1. Extract the archive:
tar -zxvf spark-1.5.1-bin-hadoop2.6.tgz
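The paths in spark-env.sh below assume Spark lives under /data, so move the extracted directory there if you unpacked it elsewhere, then change into the conf directory for the next steps:
mv spark-1.5.1-bin-hadoop2.6 /data/
cd /data/spark-1.5.1-bin-hadoop2.6/conf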
2. Create spark-env.sh from the template and edit it:
cp spark-env.sh.template spark-env.sh
vi spark-env.sh
# Hadoop configuration directory
export HADOOP_CONF_DIR=/data/hadoop-2.7.1/etc/hadoop
# Scala installation path
export SCALA_HOME=/data/scala-2.10.6
export JAVA_HOME=/data/jdk1.7.0_79
export SPARK_LOCAL_DIRS=/data/spark-1.5.1-bin-hadoop2.6
export SPARK_CONF_DIR=/data/spark-1.5.1-bin-hadoop2.6/conf
export SPARK_MASTER_IP=192.168.1.105
export SPARK_MASTER_PORT=7077
# Local disk directory used by workers when executing tasks
export SPARK_WORKER_DIR=/data/spark-1.5.1-bin-hadoop2.6/tmp
3. Configure the worker nodes; enter one hostname or IP per line:
cp slaves.template slaves
vi slaves
# A Spark Worker will be started on each of the machines listed below.
node1
node2
node3
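start-all.sh expects the same Spark installation and configuration on every node, so sync the directory to the workers first. A sketch using the hostnames from slaves above, assuming passwordless SSH from the master to each worker:
# Copy the configured Spark directory to every worker node
scp -r /data/spark-1.5.1-bin-hadoop2.6 node1:/data/
scp -r /data/spark-1.5.1-bin-hadoop2.6 node2:/data/
scp -r /data/spark-1.5.1-bin-hadoop2.6 node3:/data/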
4. Start the cluster:
sbin/start-all.sh
Check the running Java processes (e.g., with jps). On master1:
7438 Master
On the other worker nodes:
21454 Worker
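Beyond the process list, you can verify the cluster through the master web UI (default port 8080) at http://192.168.1.105:8080, or by connecting an interactive shell to the master; the master URL below is built from SPARK_MASTER_IP and SPARK_MASTER_PORT configured above:
bin/spark-shell --master spark://192.168.1.105:7077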
5. Stop the cluster:
sbin/stop-all.sh