Step0: Install Java (the JDK).
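To confirm that Java is installed and visible on the PATH, here is a quick check from Python (note that java -version writes its banner to stderr, not stdout):

import subprocess

# Run "java -version"; the version banner is printed on stderr.
result = subprocess.run(["java", "-version"], capture_output=True, text=True)
print(result.stderr)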
Step1: Download the Hadoop and Spark packages:
Step2: Extract Hadoop and Spark, then set up environment variables for both:
Add the new entries (typically the Spark and Hadoop bin directories) to the system Path variable.
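After setting the variables, it is worth confirming that a freshly started process actually sees them; a minimal check, assuming you named the variables JAVA_HOME, HADOOP_HOME, and SPARK_HOME:

import os

# Print what this process sees; None means the variable is not visible,
# e.g. because the shell/IDE was started before the variable was created.
for name in ("JAVA_HOME", "HADOOP_HOME", "SPARK_HOME"):
    print(name, "=", os.environ.get(name))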
Step3: Install py4j with pip: pip install py4j
(If pip itself is not installed, install that first.)
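A quick sanity check that the install succeeded:

import py4j  # raises ImportError if the pip install failed

# __file__ shows which installation is actually being imported.
print("py4j imported from", py4j.__file__)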
Example: wordcount.py
Running the wordcount example failed with a KeyError on SPARK_HOME, so I set a temporary environment variable via os.environ. Argh ~ it turned out that simply restarting PyCharm (so it picks up the new system variables) also fixes it.
from pyspark import SparkContext
import os

# Work around the KeyError: set SPARK_HOME for this process.
# Adjust the path to your own Spark install directory.
os.environ["SPARK_HOME"] = r"H:\Spark\spark-2.0.1-bin-hadoop2.7"

sc = SparkContext('local')

# Two toy "documents", each a list of words.
doc = sc.parallelize([['a', 'b', 'c'], ['b', 'd', 'd']])

# Build a vocabulary: every distinct word gets an integer index.
words = doc.flatMap(lambda d: d).distinct().collect()
word_dict = {w: i for i, w in enumerate(words)}

# Broadcast the vocabulary so each worker gets one read-only copy.
word_dict_b = sc.broadcast(word_dict)

def word_count_per_doc(d):
    # Count occurrences per document, keyed by vocabulary index.
    dict_tmp = {}
    wd = word_dict_b.value
    for w in d:
        dict_tmp[wd[w]] = dict_tmp.get(wd[w], 0) + 1
    return dict_tmp

print(doc.map(word_count_per_doc).collect())
print("successful!")
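The output is one dict per document, mapping word index to count, e.g. something like [{0: 1, 1: 1, 2: 1}, {1: 1, 3: 2}] (the exact indices depend on the order distinct() returns the words). As an alternative to hard-coding SPARK_HOME in every script, the findspark package can locate the install before pyspark is imported; a minimal sketch, assuming the same install directory as above:

import findspark

# Point findspark at the Spark install; with no argument it falls
# back to the SPARK_HOME environment variable.
findspark.init(r"H:\Spark\spark-2.0.1-bin-hadoop2.7")

from pyspark import SparkContext
sc = SparkContext('local')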