Spark 0.9 Distributed Installation

    http://blog.csdn.net/myboyliu2007/article/details/18990277

Spark package: spark-0.9.0-incubating-bin-hadoop2.tgz

Operating system: CentOS 6.4

JDK version: jdk1.7.0_21

1. Cluster Mode

1.1 Install Hadoop

Create three CentOS virtual machines with VMware Workstation and set their hostnames to master, slaver01, and slaver02. Configure passwordless SSH login between them, install Hadoop, and start the Hadoop cluster. For details, see my earlier post on the hadoop-2.2.0 distributed installation.
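For reference, the passwordless-SSH step can be sketched as follows. This is a minimal sketch assuming root accounts and the hostnames used in this guide; adapt it to your own layout.

```shell
# Minimal passwordless-SSH sketch (assumes root on all hosts; slaver01 and
# slaver02 are the hostnames used in this guide). Run on the master.
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
# Generate a key pair without a passphrase, if one does not exist yet
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -N "" -f "$HOME/.ssh/id_rsa" -q
# Push the public key to each node; each command prompts for that host's
# password once (uncomment to run against a live cluster):
# ssh-copy-id root@slaver01
# ssh-copy-id root@slaver02
```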

    1.2 Scala

Install Scala 2.10.3 (the version used throughout this guide) on all three machines, following the steps in my Spark installation post. The JDK was already installed along with Hadoop. On the master node:

    $ cd
    $ scp -r scala-2.10.3 root@slaver01:~
    $ scp -r scala-2.10.3 root@slaver02:~

     

1.3 Install and Configure Spark on the Master

Unpack the tarball:

    $ tar -zxf spark-0.9.0-incubating-bin-hadoop2.tgz
    $ mv spark-0.9.0-incubating-bin-hadoop2 spark-0.9

Set SCALA_HOME (and the other variables below) in conf/spark-env.sh:

    $ cd ~/spark-0.9/conf
    $ mv spark-env.sh.template spark-env.sh
    $ vi spark-env.sh
    # add the following line
    export SCALA_HOME=/root/scala-2.10.3
    export JAVA_HOME=/usr/java/jdk1.7.0_21
    export SPARK_MASTER_IP=192.168.159.129

    export SPARK_WORKER_MEMORY=1000m

    # save and exit
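For context, a slightly fuller spark-env.sh sketch is shown below. The two extra knobs are optional standalone-mode settings; their values here are illustrative, not taken from the original setup, so tune them for your own VMs.

```shell
# spark-env.sh sketch: paths and IP are from this guide; the last two
# variables are optional standalone-mode knobs with illustrative values
export SCALA_HOME=/root/scala-2.10.3
export JAVA_HOME=/usr/java/jdk1.7.0_21
export SPARK_MASTER_IP=192.168.159.129
export SPARK_WORKER_MEMORY=1000m
export SPARK_WORKER_CORES=1        # cores each worker process may use
export SPARK_WORKER_INSTANCES=1    # worker processes to launch per node
```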

In conf/slaves, add the hostname of each Spark worker, one per line:

    $ vim slaves
    slaver01
    slaver02
    master
    # save and exit
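Equivalently, the file can be written without an editor; the heredoc below produces the same three-line slaves file (the directory path assumes the ~/spark-0.9 layout from this guide):

```shell
# Write conf/slaves non-interactively (path assumes ~/spark-0.9 as above)
CONF_DIR="$HOME/spark-0.9/conf"
mkdir -p "$CONF_DIR"
cat > "$CONF_DIR/slaves" <<'EOF'
master
slaver01
slaver02
EOF
cat "$CONF_DIR/slaves"
```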

(Optional) Set the SPARK_HOME environment variable and add SPARK_HOME/bin to PATH:

    $ vim /etc/profile
    # add the following lines at the end
    export SPARK_HOME=$HOME/spark-0.9
    export PATH=$PATH:$SPARK_HOME/bin
    # save and exit vim
    #make the bash profile take effect immediately
    $ source /etc/profile

1.4 Install and Configure Spark on All Workers

Since this directory on the master is already fully configured, simply copy it to every worker. Note that Spark must live at the same path on all three machines: the master logs in to each worker to run commands and assumes the worker's Spark path is identical to its own.

    $ cd
    $ scp -r spark-0.9 root@slaver01:~
    $ scp -r spark-0.9 root@slaver02:~

1.5 Start the Spark Cluster

On the master, run:

    $ cd ~/spark-0.9
    $ ./sbin/start-all.sh

Check that the processes have started:

    [root@master ~]# jps
    3200 SecondaryNameNode
    7350 Master
    7470 Worker
    3025 NameNode
    8022 Jps
    3332 ResourceManager
     

Browse the master's web UI (http://master:8080 by default). You should see all the worker nodes there, along with their CPU counts, memory, and other details.
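If you script the cluster startup, a small polling helper can confirm the web UI is reachable before moving on. This is a hypothetical convenience function, not part of Spark; it assumes curl is installed and uses the guide's default port 8080.

```shell
# Poll a URL until it answers or the attempt budget runs out (assumes curl)
wait_for_url() {
    url="$1"
    tries="${2:-30}"
    i=0
    while [ "$i" -lt "$tries" ]; do
        curl -sf -o /dev/null "$url" && return 0
        i=$((i + 1))
        sleep 1
    done
    return 1
}
# Example (on the cluster): wait_for_url http://master:8080 && echo "UI is up"
```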


1.6 Run the Bundled Spark Examples

Run SparkPi:

    $ cd ~/spark-0.9

[root@master spark-0.9]# ./bin/run-example org.apache.spark.examples.SparkPi local
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/root/spark-0.9/examples/target/scala-2.10/spark-examples_2.10-assembly-0.9.0-incubating.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/root/spark-0.9/assembly/target/scala-2.10/spark-assembly_2.10-0.9.0-incubating-hadoop2.2.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
    1 [spark-akka.actor.default-dispatcher-2] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
    101 [spark-akka.actor.default-dispatcher-5] INFO Remoting - Starting remoting
    439 [spark-akka.actor.default-dispatcher-4] INFO Remoting - Remoting started; listening on addresses :[akka.tcp://spark@master:37615]
    444 [spark-akka.actor.default-dispatcher-4] INFO Remoting - Remoting now listens on addresses: [akka.tcp://spark@master:37615]
    510 [main] INFO org.apache.spark.SparkEnv - Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
    510 [main] INFO org.apache.spark.SparkEnv - Registering BlockManagerMaster
    590 [main] INFO org.apache.spark.storage.DiskBlockManager - Created local directory at /tmp/spark-local-20140209203346-b5da
    596 [main] INFO org.apache.spark.storage.MemoryStore - MemoryStore started with capacity 148.5 MB.
    634 [main] INFO org.apache.spark.network.ConnectionManager - Bound socket to port 57070 with id = ConnectionManagerId(master,57070)
    642 [main] INFO org.apache.spark.storage.BlockManagerMaster - Trying to register BlockManager
    648 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.storage.BlockManagerMasterActor$BlockManagerInfo - Registering block manager master:57070 with 148.5 MB RAM
    651 [main] INFO org.apache.spark.storage.BlockManagerMaster - Registered BlockManager
    693 [main] INFO org.apache.spark.HttpServer - Starting HTTP Server
    781 [main] INFO org.eclipse.jetty.server.Server - jetty-7.x.y-SNAPSHOT
    810 [main] INFO org.eclipse.jetty.server.AbstractConnector - Started SocketConnector@0.0.0.0:59449
    813 [main] INFO org.apache.spark.broadcast.HttpBroadcast - Broadcast server started at http://192.168.159.129:59449
    823 [main] INFO org.apache.spark.SparkEnv - Registering MapOutputTracker
    831 [main] INFO org.apache.spark.HttpFileServer - HTTP File server directory is /tmp/spark-d98132d9-6758-47ef-83ea-5bb749e14f4c
    831 [main] INFO org.apache.spark.HttpServer - Starting HTTP Server
    833 [main] INFO org.eclipse.jetty.server.Server - jetty-7.x.y-SNAPSHOT
    837 [main] INFO org.eclipse.jetty.server.AbstractConnector - Started SocketConnector@0.0.0.0:55967
    1135 [main] INFO org.eclipse.jetty.server.Server - jetty-7.x.y-SNAPSHOT
    1136 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - started o.e.j.s.h.ContextHandler{/storage/rdd,null}
    1136 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - started o.e.j.s.h.ContextHandler{/storage,null}
    1136 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - started o.e.j.s.h.ContextHandler{/stages/stage,null}
    1136 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - started o.e.j.s.h.ContextHandler{/stages/pool,null}
    1136 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - started o.e.j.s.h.ContextHandler{/stages,null}
    1136 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - started o.e.j.s.h.ContextHandler{/environment,null}
    1136 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - started o.e.j.s.h.ContextHandler{/executors,null}
    1136 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - started o.e.j.s.h.ContextHandler{/metrics/json,null}
    1136 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - started o.e.j.s.h.ContextHandler{/static,null}
    1136 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - started o.e.j.s.h.ContextHandler{/,null}
    1149 [main] INFO org.eclipse.jetty.server.AbstractConnector - Started SelectChannelConnector@0.0.0.0:4040
    1150 [main] INFO org.apache.spark.ui.SparkUI - Started Spark Web UI at http://master:4040
    14/02/09 20:33:47 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    2224 [main] INFO org.apache.spark.SparkContext - Added JAR /root/spark-0.9/examples/target/scala-2.10/spark-examples_2.10-assembly-0.9.0-incubating.jar at http://192.168.159.129:55967/jars/spark-examples_2.10-assembly-0.9.0-incubating.jar with timestamp 1391949228471
    2681 [main] INFO org.apache.spark.SparkContext - Starting job: reduce at SparkPi.scala:39
    2712 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.DAGScheduler - Got job 0 (reduce at SparkPi.scala:39) with 2 output partitions (allowLocal=false)
    2713 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.DAGScheduler - Final stage: Stage 0 (reduce at SparkPi.scala:39)
    2714 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.DAGScheduler - Parents of final stage: List()
    2723 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.DAGScheduler - Missing parents: List()
    2732 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.DAGScheduler - Submitting Stage 0 (MappedRDD[1] at map at SparkPi.scala:35), which has no missing parents
    2851 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.DAGScheduler - Submitting 2 missing tasks from Stage 0 (MappedRDD[1] at map at SparkPi.scala:35)
    2854 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Adding task set 0.0 with 2 tasks
    2882 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 0.0:0 as TID 0 on executor localhost: localhost (PROCESS_LOCAL)
    2899 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.scheduler.TaskSetManager - Serialized task 0.0:0 as 1426 bytes in 8 ms
    2913 [Executor task launch worker-0] INFO org.apache.spark.executor.Executor - Running task ID 0
    2935 [Executor task launch worker-0] INFO org.apache.spark.executor.Executor - Fetching http://192.168.159.129:55967/jars/spark-examples_2.10-assembly-0.9.0-incubating.jar with timestamp 1391949228471
    2937 [Executor task launch worker-0] INFO org.apache.spark.util.Utils - Fetching http://192.168.159.129:55967/jars/spark-examples_2.10-assembly-0.9.0-incubating.jar to /tmp/fetchFileTemp2801445921033654743.tmp
    5486 [Executor task launch worker-0] INFO org.apache.spark.executor.Executor - Adding file:/tmp/spark-539f9793-313d-4293-b83f-3bcfda6be08a/spark-examples_2.10-assembly-0.9.0-incubating.jar to class loader
    5568 [Executor task launch worker-0] INFO org.apache.spark.executor.Executor - Serialized size of result for 0 is 641
    5569 [Executor task launch worker-0] INFO org.apache.spark.executor.Executor - Sending result for 0 directly to driver
    5572 [Executor task launch worker-0] INFO org.apache.spark.executor.Executor - Finished task ID 0
    5573 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 0.0:1 as TID 1 on executor localhost: localhost (PROCESS_LOCAL)
    5575 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.scheduler.TaskSetManager - Serialized task 0.0:1 as 1426 bytes in 0 ms
    5576 [Executor task launch worker-0] INFO org.apache.spark.executor.Executor - Running task ID 1
    5583 [Result resolver thread-0] INFO org.apache.spark.scheduler.TaskSetManager - Finished TID 0 in 2701 ms on localhost (progress: 0/2)
    5591 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.scheduler.DAGScheduler - Completed ResultTask(0, 0)
    5630 [Executor task launch worker-0] INFO org.apache.spark.executor.Executor - Serialized size of result for 1 is 641
    5630 [Executor task launch worker-0] INFO org.apache.spark.executor.Executor - Sending result for 1 directly to driver
    5631 [Executor task launch worker-0] INFO org.apache.spark.executor.Executor - Finished task ID 1
    5636 [Result resolver thread-1] INFO org.apache.spark.scheduler.TaskSetManager - Finished TID 1 in 63 ms on localhost (progress: 1/2)
    5640 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.scheduler.DAGScheduler - Completed ResultTask(0, 1)
    5643 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.scheduler.DAGScheduler - Stage 0 (reduce at SparkPi.scala:39) finished in 2.765 s
    5648 [Result resolver thread-1] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Remove TaskSet 0.0 from pool
    5661 [main] INFO org.apache.spark.SparkContext - Job finished: reduce at SparkPi.scala:39, took 2.979330573 s
    Pi is roughly 3.143
    5681 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - stopped o.e.j.s.h.ContextHandler{/,null}
    5681 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - stopped o.e.j.s.h.ContextHandler{/static,null}
    5682 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - stopped o.e.j.s.h.ContextHandler{/metrics/json,null}
    5682 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - stopped o.e.j.s.h.ContextHandler{/executors,null}
    5682 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - stopped o.e.j.s.h.ContextHandler{/environment,null}
    5682 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - stopped o.e.j.s.h.ContextHandler{/stages,null}
    5682 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - stopped o.e.j.s.h.ContextHandler{/stages/pool,null}
    5682 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - stopped o.e.j.s.h.ContextHandler{/stages/stage,null}
    5682 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - stopped o.e.j.s.h.ContextHandler{/storage,null}
    5682 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - stopped o.e.j.s.h.ContextHandler{/storage/rdd,null}
    6817 [spark-akka.actor.default-dispatcher-14] INFO org.apache.spark.MapOutputTrackerMasterActor - MapOutputTrackerActor stopped!
    6873 [connection-manager-thread] INFO org.apache.spark.network.ConnectionManager - Selector thread was interrupted!
    6875 [main] INFO org.apache.spark.network.ConnectionManager - ConnectionManager stopped
    6882 [main] INFO org.apache.spark.storage.MemoryStore - MemoryStore cleared
    6884 [main] INFO org.apache.spark.storage.BlockManager - BlockManager stopped
    6886 [spark-akka.actor.default-dispatcher-14] INFO org.apache.spark.storage.BlockManagerMasterActor - Stopping BlockManagerMaster
    6889 [main] INFO org.apache.spark.storage.BlockManagerMaster - BlockManagerMaster stopped
    6901 [main] INFO org.apache.spark.SparkContext - Successfully stopped SparkContext
    6924 [spark-akka.actor.default-dispatcher-2] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Shutting down remote daemon.
    6947 [spark-akka.actor.default-dispatcher-13] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remote daemon shut down; proceeding with flushing remote transports.
    7025 [spark-akka.actor.default-dispatcher-5] INFO Remoting - Remoting shut down
    7027 [spark-akka.actor.default-dispatcher-2] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remoting shut down.

    [root@master spark-0.9]#
     

Run SparkLR (logistic regression):

[root@master spark-0.9]# ./bin/run-example org.apache.spark.examples.SparkLR spark://192.168.159.129:7077
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/root/spark-0.9/examples/target/scala-2.10/spark-examples_2.10-assembly-0.9.0-incubating.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/root/spark-0.9/assembly/target/scala-2.10/spark-assembly_2.10-0.9.0-incubating-hadoop2.2.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
    0 [spark-akka.actor.default-dispatcher-4] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
    95 [spark-akka.actor.default-dispatcher-4] INFO Remoting - Starting remoting
    441 [spark-akka.actor.default-dispatcher-2] INFO Remoting - Remoting started; listening on addresses :[akka.tcp://spark@master:59496]
    441 [spark-akka.actor.default-dispatcher-2] INFO Remoting - Remoting now listens on addresses: [akka.tcp://spark@master:59496]
    494 [main] INFO org.apache.spark.SparkEnv - Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
    494 [main] INFO org.apache.spark.SparkEnv - Registering BlockManagerMaster
    565 [main] INFO org.apache.spark.storage.DiskBlockManager - Created local directory at /tmp/spark-local-20140208155527-140e
    569 [main] INFO org.apache.spark.storage.MemoryStore - MemoryStore started with capacity 148.5 MB.
    607 [main] INFO org.apache.spark.network.ConnectionManager - Bound socket to port 46771 with id = ConnectionManagerId(master,46771)
...[log truncated]...SchedulerBackend - Registered executor: Actor[akka.tcp://sparkExecutor@slaver02:37832/user/Executor#1651743406] with ID 3
    45103 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend - Executor 1 disconnected, so removing it
    45103 [spark-akka.actor.default-dispatcher-4] ERROR org.apache.spark.scheduler.TaskSchedulerImpl - Lost executor 1 on slaver01: remote Akka client disassociated
    45103 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.TaskSetManager - Re-queueing tasks for 1 from TaskSet 0.0
    45104 [spark-akka.actor.default-dispatcher-4] WARN org.apache.spark.scheduler.TaskSetManager - Lost TID 1 (task 0.0:0)
    45104 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 0.0:0 as TID 3 on executor 3: slaver02 (PROCESS_LOCAL)
    45105 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.DAGScheduler - Executor lost: 1 (epoch 1)
    45105 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.storage.BlockManagerMasterActor - Trying to remove executor 1 from BlockManagerMaster.
    45106 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.storage.BlockManagerMaster - Removed 1 successfully in removeExecutor
    45433 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.TaskSetManager - Serialized task 0.0:0 as 551899 bytes in 329 ms
    45453 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.deploy.client.AppClient$ClientActor - Executor updated: app-20140208155019-0001/1 is now FAILED (Command exited with code 1)
    45453 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend - Executor app-20140208155019-0001/1 removed: Command exited with code 1
    45468 [spark-akka.actor.default-dispatcher-14] INFO org.apache.spark.deploy.client.AppClient$ClientActor - Executor added: app-20140208155019-0001/5 on worker-20140208153257-slaver01-49982 (slaver01:49982) with 1 cores
    45469 [spark-akka.actor.default-dispatcher-14] INFO org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend - Granted executor ID app-20140208155019-0001/5 on hostPort slaver01:49982 with 1 cores, 512.0 MB RAM
    45472 [spark-akka.actor.default-dispatcher-5] ERROR akka.remote.EndpointWriter - AssociationError [akka.tcp://spark@master:52897] -> [akka.tcp://sparkExecutor@slaver01:57904]: Error [Association failed with [akka.tcp://sparkExecutor@slaver01:57904]] [
    akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkExecutor@slaver01:57904]
    Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: slaver01/192.168.159.130:57904
    ]
    45483 [spark-akka.actor.default-dispatcher-4] ERROR akka.remote.EndpointWriter - AssociationError [akka.tcp://spark@master:52897] -> [akka.tcp://sparkExecutor@slaver01:57904]: Error [Association failed with [akka.tcp://sparkExecutor@slaver01:57904]] [
    akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkExecutor@slaver01:57904]
    Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: slaver01/192.168.159.130:57904
    ]
    45780 [spark-akka.actor.default-dispatcher-5] ERROR akka.remote.EndpointWriter - AssociationError [akka.tcp://spark@master:52897] -> [akka.tcp://sparkExecutor@slaver01:57904]: Error [Association failed with [akka.tcp://sparkExecutor@slaver01:57904]] [
    akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkExecutor@slaver01:57904]
    Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: slaver01/192.168.159.130:57904
    ]
    45781 [spark-akka.actor.default-dispatcher-15] INFO org.apache.spark.deploy.client.AppClient$ClientActor - Executor updated: app-20140208155019-0001/5 is now RUNNING
    46589 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.storage.BlockManagerMasterActor$BlockManagerInfo - Added rdd_0_1 in memory on master:44542 (size: 717.5 KB, free: 296.3 MB)
    46762 [Result resolver thread-0] INFO org.apache.spark.scheduler.TaskSetManager - Finished TID 2 in 20510 ms on master (progress: 0/2)
    46767 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.scheduler.DAGScheduler - Completed ResultTask(0, 1)
    50080 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend - Registered executor: Actor[akka.tcp://sparkExecutor@slaver01:60056/user/Executor#1166039681] with ID 5
    54828 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.storage.BlockManagerMasterActor$BlockManagerInfo - Registering block manager slaver01:43329 with 297.0 MB RAM
    62254 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.storage.BlockManagerMasterActor$BlockManagerInfo - Registering block manager slaver02:55569 with 297.0 MB RAM
    89782 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.storage.BlockManagerMasterActor$BlockManagerInfo - Added rdd_0_0 in memory on slaver02:55569 (size: 717.5 KB, free: 296.3 MB)
    90043 [Result resolver thread-1] INFO org.apache.spark.scheduler.TaskSetManager - Finished TID 3 in 44939 ms on slaver02 (progress: 1/2)
    90043 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Completed ResultTask(0, 0)
    90061 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Stage 0 (reduce at SparkLR.scala:64) finished in 84.257 s
    90066 [Result resolver thread-1] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Remove TaskSet 0.0 from pool
    90075 [main] INFO org.apache.spark.SparkContext - Job finished: reduce at SparkLR.scala:64, took 84.881569465 s
    On iteration 2
    90100 [main] INFO org.apache.spark.SparkContext - Starting job: reduce at SparkLR.scala:64
    90100 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Got job 1 (reduce at SparkLR.scala:64) with 2 output partitions (allowLocal=false)
    90100 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Final stage: Stage 1 (reduce at SparkLR.scala:64)
    90101 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Parents of final stage: List()
    90103 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Missing parents: List()
    90104 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Submitting Stage 1 (MappedRDD[2] at map at SparkLR.scala:62), which has no missing parents
    90164 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Submitting 2 missing tasks from Stage 1 (MappedRDD[2] at map at SparkLR.scala:62)
    90165 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Adding task set 1.0 with 2 tasks
    90169 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 1.0:1 as TID 4 on executor 4: master (PROCESS_LOCAL)
    90191 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.scheduler.TaskSetManager - Serialized task 1.0:1 as 551895 bytes in 21 ms
    90194 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 1.0:0 as TID 5 on executor 3: slaver02 (PROCESS_LOCAL)
    90220 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.scheduler.TaskSetManager - Serialized task 1.0:0 as 551895 bytes in 25 ms
    91222 [Result resolver thread-2] INFO org.apache.spark.scheduler.TaskSetManager - Finished TID 4 in 1053 ms on master (progress: 0/2)
    91224 [spark-akka.actor.default-dispatcher-16] INFO org.apache.spark.scheduler.DAGScheduler - Completed ResultTask(1, 1)
    91609 [Result resolver thread-3] INFO org.apache.spark.scheduler.TaskSetManager - Finished TID 5 in 1415 ms on slaver02 (progress: 1/2)
    91610 [spark-akka.actor.default-dispatcher-16] INFO org.apache.spark.scheduler.DAGScheduler - Completed ResultTask(1, 0)
    91610 [spark-akka.actor.default-dispatcher-16] INFO org.apache.spark.scheduler.DAGScheduler - Stage 1 (reduce at SparkLR.scala:64) finished in 1.437 s
    91611 [main] INFO org.apache.spark.SparkContext - Job finished: reduce at SparkLR.scala:64, took 1.510467278 s
    On iteration 3
    91616 [Result resolver thread-3] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Remove TaskSet 1.0 from pool
    91622 [main] INFO org.apache.spark.SparkContext - Starting job: reduce at SparkLR.scala:64
    91623 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.scheduler.DAGScheduler - Got job 2 (reduce at SparkLR.scala:64) with 2 output partitions (allowLocal=false)
    91623 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.scheduler.DAGScheduler - Final stage: Stage 2 (reduce at SparkLR.scala:64)
    91623 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.scheduler.DAGScheduler - Parents of final stage: List()
    91624 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.scheduler.DAGScheduler - Missing parents: List()
    91624 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.scheduler.DAGScheduler - Submitting Stage 2 (MappedRDD[3] at map at SparkLR.scala:62), which has no missing parents
    91777 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.scheduler.DAGScheduler - Submitting 2 missing tasks from Stage 2 (MappedRDD[3] at map at SparkLR.scala:62)
    91777 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Adding task set 2.0 with 2 tasks
    91779 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 2.0:1 as TID 6 on executor 4: master (PROCESS_LOCAL)
    91899 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.TaskSetManager - Serialized task 2.0:1 as 551896 bytes in 119 ms
    91899 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 2.0:0 as TID 7 on executor 3: slaver02 (PROCESS_LOCAL)
    91922 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.TaskSetManager - Serialized task 2.0:0 as 551896 bytes in 23 ms
    92290 [Result resolver thread-0] INFO org.apache.spark.scheduler.TaskSetManager - Finished TID 6 in 511 ms on master (progress: 0/2)
    92291 [spark-akka.actor.default-dispatcher-14] INFO org.apache.spark.scheduler.DAGScheduler - Completed ResultTask(2, 1)
    92694 [Result resolver thread-1] INFO org.apache.spark.scheduler.TaskSetManager - Finished TID 7 in 794 ms on slaver02 (progress: 1/2)
    92694 [Result resolver thread-1] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Remove TaskSet 2.0 from pool
    92694 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.DAGScheduler - Completed ResultTask(2, 0)
    92695 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.DAGScheduler - Stage 2 (reduce at SparkLR.scala:64) finished in 0.913 s
    92695 [main] INFO org.apache.spark.SparkContext - Job finished: reduce at SparkLR.scala:64, took 1.072671482 s
    On iteration 4
    92704 [main] INFO org.apache.spark.SparkContext - Starting job: reduce at SparkLR.scala:64
    92704 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Got job 3 (reduce at SparkLR.scala:64) with 2 output partitions (allowLocal=false)
    92704 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Final stage: Stage 3 (reduce at SparkLR.scala:64)
    92704 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Parents of final stage: List()
    92707 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Missing parents: List()
    92707 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Submitting Stage 3 (MappedRDD[4] at map at SparkLR.scala:62), which has no missing parents
    92734 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Submitting 2 missing tasks from Stage 3 (MappedRDD[4] at map at SparkLR.scala:62)
    92734 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Adding task set 3.0 with 2 tasks
    92736 [spark-akka.actor.default-dispatcher-14] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 3.0:1 as TID 8 on executor 4: master (PROCESS_LOCAL)
    92759 [spark-akka.actor.default-dispatcher-14] INFO org.apache.spark.scheduler.TaskSetManager - Serialized task 3.0:1 as 551899 bytes in 23 ms
    92760 [spark-akka.actor.default-dispatcher-14] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 3.0:0 as TID 9 on executor 3: slaver02 (PROCESS_LOCAL)
    92789 [spark-akka.actor.default-dispatcher-14] INFO org.apache.spark.scheduler.TaskSetManager - Serialized task 3.0:0 as 551899 bytes in 28 ms
    93091 [Result resolver thread-2] INFO org.apache.spark.scheduler.TaskSetManager - Finished TID 8 in 356 ms on master (progress: 0/2)
    93092 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.scheduler.DAGScheduler - Completed ResultTask(3, 1)
    96638 [Result resolver thread-3] INFO org.apache.spark.scheduler.TaskSetManager - Finished TID 9 in 3878 ms on slaver02 (progress: 1/2)
    96638 [Result resolver thread-3] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Remove TaskSet 3.0 from pool
    96639 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.scheduler.DAGScheduler - Completed ResultTask(3, 0)
    96639 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.scheduler.DAGScheduler - Stage 3 (reduce at SparkLR.scala:64) finished in 3.899 s
    96639 [main] INFO org.apache.spark.SparkContext - Job finished: reduce at SparkLR.scala:64, took 3.935444196 s
    On iteration 5
    96646 [main] INFO org.apache.spark.SparkContext - Starting job: reduce at SparkLR.scala:64
    96646 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.DAGScheduler - Got job 4 (reduce at SparkLR.scala:64) with 2 output partitions (allowLocal=false)
    96646 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.DAGScheduler - Final stage: Stage 4 (reduce at SparkLR.scala:64)
    96646 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.DAGScheduler - Parents of final stage: List()
    96649 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.DAGScheduler - Missing parents: List()
    96649 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.DAGScheduler - Submitting Stage 4 (MappedRDD[5] at map at SparkLR.scala:62), which has no missing parents
    96677 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.DAGScheduler - Submitting 2 missing tasks from Stage 4 (MappedRDD[5] at map at SparkLR.scala:62)
    96677 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Adding task set 4.0 with 2 tasks
    96678 [spark-akka.actor.default-dispatcher-16] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 4.0:1 as TID 10 on executor 4: master (PROCESS_LOCAL)
    96702 [spark-akka.actor.default-dispatcher-16] INFO org.apache.spark.scheduler.TaskSetManager - Serialized task 4.0:1 as 551896 bytes in 24 ms
    96703 [spark-akka.actor.default-dispatcher-16] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 4.0:0 as TID 11 on executor 3: slaver02 (PROCESS_LOCAL)
    96726 [spark-akka.actor.default-dispatcher-16] INFO org.apache.spark.scheduler.TaskSetManager - Serialized task 4.0:0 as 551896 bytes in 23 ms
    97554 [Result resolver thread-0] INFO org.apache.spark.scheduler.TaskSetManager - Finished TID 10 in 876 ms on master (progress: 0/2)
    97555 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.DAGScheduler - Completed ResultTask(4, 1)
    97810 [Result resolver thread-1] INFO org.apache.spark.scheduler.TaskSetManager - Finished TID 11 in 1108 ms on slaver02 (progress: 1/2)
    97811 [Result resolver thread-1] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Remove TaskSet 4.0 from pool
    97811 [spark-akka.actor.default-dispatcher-14] INFO org.apache.spark.scheduler.DAGScheduler - Completed ResultTask(4, 0)
    97811 [spark-akka.actor.default-dispatcher-14] INFO org.apache.spark.scheduler.DAGScheduler - Stage 4 (reduce at SparkLR.scala:64) finished in 1.130 s
    97811 [main] INFO org.apache.spark.SparkContext - Job finished: reduce at SparkLR.scala:64, took 1.165505777 s
    Final w: (5816.075967498865, 5222.008066011391, 5754.751978607454, 3853.1772062206846, 5593.565827145932, 5282.387874201054, 3662.9216051953435, 4890.78210340607, 4223.371512250292, 5767.368579668863)
    [root@master spark-0.9]#

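The log above comes from running Spark's bundled SparkLR example (logistic regression via gradient descent), whose `reduce at SparkLR.scala:64` computes one gradient per iteration. As an illustrative sketch only (plain Scala on a local collection, not the actual distributed example code), the per-iteration update looks roughly like this:

```scala
import scala.math.exp

object LRSketch {
  // One gradient step of the kind the SparkLR example computes each
  // iteration with points.map(...).reduce(_ + _): for every point
  // (features x, label y), accumulate the logistic-loss gradient,
  // then move w against it.
  def step(w: Array[Double], points: Seq[(Array[Double], Double)]): Array[Double] = {
    val grad = new Array[Double](w.length)
    for ((x, y) <- points) {
      val dot = w.zip(x).map { case (wi, xi) => wi * xi }.sum
      val scale = (1.0 / (1.0 + exp(-y * dot)) - 1.0) * y
      for (i <- grad.indices) grad(i) += scale * x(i)
    }
    w.zip(grad).map { case (wi, gi) => wi - gi } // w := w - gradient
  }

  def main(args: Array[String]): Unit = {
    // single 1-D point with label +1, starting from w = 0
    println(step(Array(0.0), Seq((Array(1.0), 1.0))).mkString(","))
  }
}
```

In the real example the `points` collection is an RDD cached across iterations, which is what makes Spark much faster than Hadoop MapReduce for this kind of iterative job.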
    1.7 Read a file from HDFS and run WordCount

    $ cd ~/spark-0.9
    $ MASTER=spark://192.168.159.129:7077 bin/spark-shell
    [root@master spark-0.9]# MASTER=spark://192.168.159.129:7077 bin/spark-shell
    14/02/08 16:17:57 INFO HttpServer: Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
    14/02/08 16:17:57 INFO HttpServer: Starting HTTP Server
    Welcome to
          ____              __
         / __/__  ___ _____/ /__
        _\ \/ _ \/ _ `/ __/  '_/
       /___/ .__/\_,_/_/ /_/\_\   version 0.9.0
          /_/
     
    Using Scala version 2.10.3 (Java HotSpot(TM) Client VM, Java 1.7.0_21)
    Type in expressions to have them evaluated.
    Type :help for more information.
    14/02/08 16:18:04 INFO Slf4jLogger: Slf4jLogger started
    14/02/08 16:18:04 INFO Remoting: Starting remoting
    14/02/08 16:18:04 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://spark@master:57338]
    14/02/08 16:18:04 INFO Remoting: Remoting now listens on addresses: [akka.tcp://spark@master:57338]
    14/02/08 16:18:04 INFO SparkEnv: Registering BlockManagerMaster
    14/02/08 16:18:04 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20140208161804-d96e
    14/02/08 16:18:04 INFO MemoryStore: MemoryStore started with capacity 297.0 MB.
    14/02/08 16:18:04 INFO ConnectionManager: Bound socket to port 54967 with id = ConnectionManagerId(master,54967)
    14/02/08 16:18:04 INFO BlockManagerMaster: Trying to register BlockManager
    14/02/08 16:18:04 INFO BlockManagerMasterActor$BlockManagerInfo: Registering block manager master:54967 with 297.0 MB RAM
    14/02/08 16:18:04 INFO BlockManagerMaster: Registered BlockManager
    14/02/08 16:18:04 INFO HttpServer: Starting HTTP Server
    14/02/08 16:18:04 INFO HttpBroadcast: Broadcast server started at http://192.168.159.129:51193
    14/02/08 16:18:04 INFO SparkEnv: Registering MapOutputTracker
    14/02/08 16:18:04 INFO HttpFileServer: HTTP File server directory is /tmp/spark-a63d283e-90dd-4a09-b2de-d9feda31f607
    14/02/08 16:18:04 INFO HttpServer: Starting HTTP Server
    14/02/08 16:18:05 INFO SparkUI: Started Spark Web UI at http://master:4040
    14/02/08 16:18:05 INFO AppClient$ClientActor: Connecting to master spark://192.168.159.129:7077...
    14/02/08 16:18:06 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20140208161806-0007
    14/02/08 16:18:06 INFO AppClient$ClientActor: Executor added: app-20140208161806-0007/0 on worker-20140208153258-master-37524 (master:37524) with 1 cores
    14/02/08 16:18:06 INFO SparkDeploySchedulerBackend: Granted executor ID app-20140208161806-0007/0 on hostPort master:37524 with 1 cores, 512.0 MB RAM
    14/02/08 16:18:06 INFO AppClient$ClientActor: Executor added: app-20140208161806-0007/1 on worker-20140208153257-slaver01-49982 (slaver01:49982) with 1 cores
    14/02/08 16:18:06 INFO SparkDeploySchedulerBackend: Granted executor ID app-20140208161806-0007/1 on hostPort slaver01:49982 with 1 cores, 512.0 MB RAM
    14/02/08 16:18:06 INFO AppClient$ClientActor: Executor added: app-20140208161806-0007/2 on worker-20140208153257-slaver02-53193 (slaver02:53193) with 1 cores
    14/02/08 16:18:06 INFO SparkDeploySchedulerBackend: Granted executor ID app-20140208161806-0007/2 on hostPort slaver02:53193 with 1 cores, 512.0 MB RAM
    14/02/08 16:18:06 INFO AppClient$ClientActor: Executor updated: app-20140208161806-0007/1 is now RUNNING
    14/02/08 16:18:06 INFO AppClient$ClientActor: Executor updated: app-20140208161806-0007/0 is now RUNNING
    14/02/08 16:18:06 INFO AppClient$ClientActor: Executor updated: app-20140208161806-0007/2 is now RUNNING
    14/02/08 16:18:07 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Created spark context..
    Spark context available as sc.
    scala> val file = sc.textFile("hdfs://192.168.159.129:9000//tmp-output101/part-r-00000")
    14/02/08 17:16:29 INFO MemoryStore: ensureFreeSpace(132668) called with curMem=132636, maxMem=311387750
    14/02/08 17:16:29 INFO MemoryStore: Block broadcast_1 stored as values to memory (estimated size 129.6 KB, free 296.7 MB)
    file: org.apache.spark.rdd.RDD[String] = MappedRDD[3] at textFile at <console>:12
     
    scala>  val count = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_+_)
    14/02/08 17:16:42 INFO FileInputFormat: Total input paths to process : 1
    count: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[8] at reduceByKey at <console>:14
     
    scala>  count.collect()
    14/02/08 17:16:49 INFO SparkContext: Starting job: collect at <console>:17
    14/02/08 17:16:49 INFO DAGScheduler: Registering RDD 6 (reduceByKey at <console>:14)
    14/02/08 17:16:49 INFO DAGScheduler: Got job 0 (collect at <console>:17) with 2 output partitions (allowLocal=false)
    14/02/08 17:16:49 INFO DAGScheduler: Final stage: Stage 0 (collect at <console>:17)
    14/02/08 17:16:49 INFO DAGScheduler: Parents of final stage: List(Stage 1)
    14/02/08 17:16:49 INFO DAGScheduler: Missing parents: List(Stage 1)
    14/02/08 17:16:49 INFO DAGScheduler: Submitting Stage 1 (MapPartitionsRDD[6] at reduceByKey at <console>:14), which has no missing parents
    14/02/08 17:16:49 INFO DAGScheduler: Submitting 2 missing tasks from Stage 1 (MapPartitionsRDD[6] at reduceByKey at <console>:14)
    14/02/08 17:16:49 INFO TaskSchedulerImpl: Adding task set 1.0 with 2 tasks
    14/02/08 17:16:49 INFO TaskSetManager: Starting task 1.0:0 as TID 0 on executor 2: slaver02 (NODE_LOCAL)
    14/02/08 17:16:49 INFO TaskSetManager: Serialized task 1.0:0 as 1955 bytes in 47 ms
    14/02/08 17:16:49 INFO TaskSetManager: Starting task 1.0:1 as TID 1 on executor 1: slaver01 (NODE_LOCAL)
    14/02/08 17:16:49 INFO TaskSetManager: Serialized task 1.0:1 as 1955 bytes in 0 ms
    14/02/08 17:17:02 INFO TaskSetManager: Finished TID 1 in 13049 ms on slaver01 (progress: 0/2)
    14/02/08 17:17:02 INFO DAGScheduler: Completed ShuffleMapTask(1, 1)
    14/02/08 17:17:05 INFO TaskSetManager: Finished TID 0 in 15876 ms on slaver02 (progress: 1/2)
    14/02/08 17:17:05 INFO DAGScheduler: Completed ShuffleMapTask(1, 0)
    14/02/08 17:17:05 INFO DAGScheduler: Stage 1 (reduceByKey at <console>:14) finished in 15.877 s
    14/02/08 17:17:05 INFO DAGScheduler: looking for newly runnable stages
    14/02/08 17:17:05 INFO DAGScheduler: running: Set()
    14/02/08 17:17:05 INFO DAGScheduler: waiting: Set(Stage 0)
    14/02/08 17:17:05 INFO DAGScheduler: failed: Set()
    14/02/08 17:17:05 INFO DAGScheduler: Missing parents for Stage 0: List()
    14/02/08 17:17:05 INFO DAGScheduler: Submitting Stage 0 (MapPartitionsRDD[8] at reduceByKey at <console>:14), which is now runnable
    14/02/08 17:17:05 INFO TaskSchedulerImpl: Remove TaskSet 1.0 from pool
    14/02/08 17:17:05 INFO DAGScheduler: Submitting 2 missing tasks from Stage 0 (MapPartitionsRDD[8] at reduceByKey at <console>:14)
    14/02/08 17:17:05 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
    14/02/08 17:17:05 INFO TaskSetManager: Starting task 0.0:0 as TID 2 on executor 2: slaver02 (PROCESS_LOCAL)
    14/02/08 17:17:05 INFO TaskSetManager: Serialized task 0.0:0 as 1807 bytes in 1 ms
    14/02/08 17:17:05 INFO TaskSetManager: Starting task 0.0:1 as TID 3 on executor 1: slaver01 (PROCESS_LOCAL)
    14/02/08 17:17:05 INFO TaskSetManager: Serialized task 0.0:1 as 1807 bytes in 0 ms
    14/02/08 17:17:05 INFO MapOutputTrackerMasterActor: Asked to send map output locations for shuffle 0 to spark@slaver01:43048
    14/02/08 17:17:05 INFO MapOutputTrackerMaster: Size of output statuses for shuffle 0 is 147 bytes
    14/02/08 17:17:05 INFO MapOutputTrackerMasterActor: Asked to send map output locations for shuffle 0 to spark@slaver02:42393
    14/02/08 17:17:07 INFO TaskSetManager: Finished TID 2 in 1741 ms on slaver02 (progress: 0/2)
    14/02/08 17:17:07 INFO DAGScheduler: Completed ResultTask(0, 0)
    14/02/08 17:17:07 INFO TaskSetManager: Finished TID 3 in 2311 ms on slaver01 (progress: 1/2)
    14/02/08 17:17:07 INFO DAGScheduler: Completed ResultTask(0, 1)
    14/02/08 17:17:07 INFO DAGScheduler: Stage 0 (collect at <console>:17) finished in 2.321 s
    14/02/08 17:17:07 INFO TaskSchedulerImpl: Remove TaskSet 0.0 from pool
    14/02/08 17:17:07 INFO SparkContext: Job finished: collect at <console>:17, took 18.921360252 s
    res1: Array[(String, Int)] = Array((Fate        1,1), (wayside. 1,1), (energetic        3,1), (fourth   7,1), (gouging       1,1), (Frances  1,1), (NOVEMBER 1,1), (about.   2,1), (propel   1,1), (beginning.       2,1), (faunas        1,1), (natural  26,1), (calves, 1,1), (tends    5,1), (sea-level.       1,1), (molluscs.    1,1), (supported 3,1), (stoneless        1,1), (Here     9,1), (Planet   1,1), (dwell    1,1), (behaviour--experimenting,     1,1), (Actual   1,1), (Sluggish 1,1), (Looked   1,1), (mysterious       12,1), (fauna,  2,1), (primaries     1,1), (Mercury's        2,1), (_instinctively_  2,1), (females  2,1), (ELECTRICITY?     1,1), (claws,        1,1), (sedimentary      6,1), (Vertebrates      4,1), ("little-brain"   4,1), (fertilises   3,1), (word.     1,1), (all;     3,1), ("atomism."       1,1), (reactions.       3,1), (away.    10,1), (beginning,   2,1), (groove.  1,1), (Egg-eating       1,1), (depreciate       1,1), (know,    ...
    scala>
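The shell session above builds the classic word count with `textFile`, `flatMap`, `map`, and `reduceByKey`. As a minimal sketch of the same pipeline that needs no cluster (plain Scala collections, with `groupBy` plus a per-group sum standing in for `reduceByKey`):

```scala
object WordCountSketch {
  // Same pipeline as the spark-shell session, on a local Seq[String]
  // instead of an RDD: split each line into words, pair each word
  // with 1, then sum the counts per word.
  def wordCount(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.split(" "))
      .map(word => (word, 1))
      .groupBy(_._1)
      .map { case (word, pairs) => (word, pairs.map(_._2).sum) }

  def main(args: Array[String]): Unit = {
    val counts = wordCount(Seq("a b a", "b c"))
    println(counts.toSeq.sortBy(_._1).mkString(", ")) // (a,2), (b,2), (c,1)
  }
}
```

On an RDD the grouping and summing happen in a distributed shuffle, which is why `reduceByKey(_+_)` produces the two-stage job (ShuffleMapTask then ResultTask) visible in the log.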

    1.8 Stop the Spark cluster

    $ cd ~/spark-0.9
    $ ./sbin/stop-all.sh
  • Original article: https://www.cnblogs.com/DjangoBlog/p/3725177.html