Caused by: java.net.ConnectException: Connection refused: master/192.168.3.129:7077

    1: Start the Spark Shell. spark-shell is the interactive shell that ships with Spark; it makes interactive programming convenient, and you can write Spark programs in Scala directly at its prompt.
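
    For instance, once the shell is up you can run a small word count straight from the prompt. This is a minimal sketch: sc is the SparkContext the shell creates for you, and the element order of the collected result may vary:

    scala> // tiny word count; yields Array((hello,2), (spark,1), (shell,1)), element order may vary
    scala> val lines = sc.parallelize(Seq("hello spark", "hello shell"))
    scala> lines.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _).collect()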

    When the Spark Shell was started this way, the following error appeared:

    [root@master spark-1.6.1-bin-hadoop2.6]# bin/spark-shell --master spark://master:7077 --executor-memory 512M --total-executor-cores 2
    18/02/22 01:42:10 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    18/02/22 01:42:10 INFO SecurityManager: Changing view acls to: root
    18/02/22 01:42:10 INFO SecurityManager: Changing modify acls to: root
    18/02/22 01:42:10 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
    18/02/22 01:42:11 INFO HttpServer: Starting HTTP Server
    18/02/22 01:42:11 INFO Utils: Successfully started service 'HTTP class server' on port 52961.
    Welcome to
          ____              __
         / __/__  ___ _____/ /__
        _\ \/ _ \/ _ `/ __/  '_/
       /___/ .__/\_,_/_/ /_/\_\   version 1.6.1
          /_/

    Using Scala version 2.10.5 (Java HotSpot(TM) Client VM, Java 1.7.0_65)
    Type in expressions to have them evaluated.
    Type :help for more information.
    18/02/22 01:42:15 INFO SparkContext: Running Spark version 1.6.1
    18/02/22 01:42:15 INFO SecurityManager: Changing view acls to: root
    18/02/22 01:42:15 INFO SecurityManager: Changing modify acls to: root
    18/02/22 01:42:15 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
    18/02/22 01:42:15 INFO Utils: Successfully started service 'sparkDriver' on port 43566.
    18/02/22 01:42:16 INFO Slf4jLogger: Slf4jLogger started
    18/02/22 01:42:16 INFO Remoting: Starting remoting
    18/02/22 01:42:16 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.3.129:43806]
    18/02/22 01:42:16 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 43806.
    18/02/22 01:42:16 INFO SparkEnv: Registering MapOutputTracker
    18/02/22 01:42:16 INFO SparkEnv: Registering BlockManagerMaster
    18/02/22 01:42:16 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-7face114-24b5-4f0e-adb6-8a104e387c78
    18/02/22 01:42:16 INFO MemoryStore: MemoryStore started with capacity 517.4 MB
    18/02/22 01:42:16 INFO SparkEnv: Registering OutputCommitCoordinator
    18/02/22 01:42:16 INFO Utils: Successfully started service 'SparkUI' on port 4040.
    18/02/22 01:42:16 INFO SparkUI: Started SparkUI at http://192.168.3.129:4040
    18/02/22 01:42:17 INFO AppClient$ClientEndpoint: Connecting to master spark://master:7077...
    18/02/22 01:42:17 WARN AppClient$ClientEndpoint: Failed to connect to master master:7077
    java.io.IOException: Failed to connect to master/192.168.3.129:7077
        at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:216)
        at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:167)
        at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:200)
        at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:187)
        at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:183)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
    Caused by: java.net.ConnectException: Connection refused: master/192.168.3.129:7077
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
        at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224)
        at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:289)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
        ... 1 more
    [the "Connecting to master spark://master:7077..." attempt, warning, and identical stack trace repeat several more times between 01:42:37 and 01:43:17]
    18/02/22 01:43:17 ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
    18/02/22 01:43:17 WARN SparkDeploySchedulerBackend: Application ID is not initialized yet.
    18/02/22 01:43:17 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 50513.
    18/02/22 01:43:17 INFO NettyBlockTransferService: Server created on 50513
    18/02/22 01:43:17 INFO BlockManagerMaster: Trying to register BlockManager
    18/02/22 01:43:17 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.3.129:50513 with 517.4 MB RAM, BlockManagerId(driver, 192.168.3.129, 50513)
    18/02/22 01:43:17 INFO BlockManagerMaster: Registered BlockManager
    18/02/22 01:43:17 INFO SparkUI: Stopped Spark web UI at http://192.168.3.129:4040
    18/02/22 01:43:17 INFO SparkDeploySchedulerBackend: Shutting down all executors
    18/02/22 01:43:17 INFO SparkDeploySchedulerBackend: Asking each executor to shut down
    18/02/22 01:43:17 WARN AppClient$ClientEndpoint: Drop UnregisterApplication(null) because has not yet connected to master
    18/02/22 01:43:17 ERROR MapOutputTrackerMaster: Error communicating with MapOutputTracker
    java.lang.InterruptedException
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1038)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
        at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:208)
        at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
        at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
        at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
        at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
        at scala.concurrent.Await$.result(package.scala:107)
        at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
        at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
        at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77)
        at org.apache.spark.MapOutputTracker.askTracker(MapOutputTracker.scala:110)
        at org.apache.spark.MapOutputTracker.sendTracker(MapOutputTracker.scala:120)
        at org.apache.spark.MapOutputTrackerMaster.stop(MapOutputTracker.scala:462)
        at org.apache.spark.SparkEnv.stop(SparkEnv.scala:93)
        at org.apache.spark.SparkContext$$anonfun$stop$12.apply$mcV$sp(SparkContext.scala:1756)
        at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1229)
        at org.apache.spark.SparkContext.stop(SparkContext.scala:1755)
        at org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend.dead(SparkDeploySchedulerBackend.scala:127)
        at org.apache.spark.deploy.client.AppClient$ClientEndpoint.markDead(AppClient.scala:264)
        at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2$$anonfun$run$1.apply$mcV$sp(AppClient.scala:134)
        at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1163)
        at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2.run(AppClient.scala:129)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
    18/02/22 01:43:17 ERROR Utils: Uncaught exception in thread appclient-registration-retry-thread
    org.apache.spark.SparkException: Error communicating with MapOutputTracker
        at org.apache.spark.MapOutputTracker.askTracker(MapOutputTracker.scala:114)
        at org.apache.spark.MapOutputTracker.sendTracker(MapOutputTracker.scala:120)
        at org.apache.spark.MapOutputTrackerMaster.stop(MapOutputTracker.scala:462)
        at org.apache.spark.SparkEnv.stop(SparkEnv.scala:93)
        at org.apache.spark.SparkContext$$anonfun$stop$12.apply$mcV$sp(SparkContext.scala:1756)
        at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1229)
        at org.apache.spark.SparkContext.stop(SparkContext.scala:1755)
        at org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend.dead(SparkDeploySchedulerBackend.scala:127)
        at org.apache.spark.deploy.client.AppClient$ClientEndpoint.markDead(AppClient.scala:264)
        at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2$$anonfun$run$1.apply$mcV$sp(AppClient.scala:134)
        at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1163)
        at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2.run(AppClient.scala:129)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
    Caused by: java.lang.InterruptedException
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1038)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
        at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:208)
        at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
        at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
        at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
        at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
        at scala.concurrent.Await$.result(package.scala:107)
        at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
        at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
        at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77)
        at org.apache.spark.MapOutputTracker.askTracker(MapOutputTracker.scala:110)
        ... 18 more
    18/02/22 01:43:17 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
    18/02/22 01:43:17 INFO SparkContext: Successfully stopped SparkContext
    18/02/22 01:43:17 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[appclient-registration-retry-thread,5,main]
    org.apache.spark.SparkException: Exiting due to error from cluster scheduler: All masters are unresponsive! Giving up.
        at org.apache.spark.scheduler.TaskSchedulerImpl.error(TaskSchedulerImpl.scala:438)
        at org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend.dead(SparkDeploySchedulerBackend.scala:124)
        at org.apache.spark.deploy.client.AppClient$ClientEndpoint.markDead(AppClient.scala:264)
        at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2$$anonfun$run$1.apply$mcV$sp(AppClient.scala:134)
        at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1163)
        at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2.run(AppClient.scala:129)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
    18/02/22 01:43:17 INFO DiskBlockManager: Shutdown hook called
    18/02/22 01:43:17 INFO ShutdownHookManager: Shutdown hook called
    18/02/22 01:43:17 INFO ShutdownHookManager: Deleting directory /tmp/spark-bf09944d-9867-4256-89c6-e8b415c9c315/userFiles-12e582c1-0438-490f-a8a2-64264d764463
    18/02/22 01:43:17 INFO ShutdownHookManager: Deleting directory /tmp/spark-7c648867-90b8-4d3c-af09-b1f3d16d1b30
    18/02/22 01:43:17 INFO ShutdownHookManager: Deleting directory /tmp/spark-bf09944d-9867-4256-89c6-e8b415c9c315

    2: The fix is to start your Spark cluster first; after that, the Spark Shell starts without the error.

    On the master node, start the Spark cluster as follows:

    [root@master spark-1.6.1-bin-hadoop2.6]# sbin/start-all.sh
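
    If the shell still cannot connect after this, it is worth confirming that the Master process is actually running and listening on port 7077 before retrying. A quick sanity check (jps ships with the JDK; netstat comes from the net-tools package, and the standalone Master also serves a status page on port 8080 by default):

    [root@master spark-1.6.1-bin-hadoop2.6]# jps                         # should list a Master process (and a Worker on each worker node)
    [root@master spark-1.6.1-bin-hadoop2.6]# netstat -tlnp | grep 7077   # the Master should be listening here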

    Then start the Spark Shell again; the error above is gone:

    [root@master spark-1.6.1-bin-hadoop2.6]# bin/spark-shell --master spark://master:7077 --executor-memory 512M --total-executor-cores 2

    In the startup output, pay attention to a few key lines: the successful connection to the master ("Connected to Spark cluster with app ID app-20180222015108-0000"), "Spark context available as sc.", and "SQL context available as sqlContext.".

    [root@master spark-1.6.1-bin-hadoop2.6]# bin/spark-shell --master spark://master:7077 --executor-memory 512M --total-executor-cores 2
    18/02/22 01:51:00 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    18/02/22 01:51:00 INFO SecurityManager: Changing view acls to: root
    18/02/22 01:51:00 INFO SecurityManager: Changing modify acls to: root
    18/02/22 01:51:00 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
    18/02/22 01:51:00 INFO HttpServer: Starting HTTP Server
    18/02/22 01:51:00 INFO Utils: Successfully started service 'HTTP class server' on port 58729.
    Welcome to
          ____              __
         / __/__  ___ _____/ /__
        _\ \/ _ \/ _ `/ __/  '_/
       /___/ .__/\_,_/_/ /_/\_\   version 1.6.1
          /_/
    
    Using Scala version 2.10.5 (Java HotSpot(TM) Client VM, Java 1.7.0_65)
    Type in expressions to have them evaluated.
    Type :help for more information.
    18/02/22 01:51:06 INFO SparkContext: Running Spark version 1.6.1
    18/02/22 01:51:06 INFO SecurityManager: Changing view acls to: root
    18/02/22 01:51:06 INFO SecurityManager: Changing modify acls to: root
    18/02/22 01:51:06 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
    18/02/22 01:51:06 INFO Utils: Successfully started service 'sparkDriver' on port 45298.
    18/02/22 01:51:06 INFO Slf4jLogger: Slf4jLogger started
    18/02/22 01:51:06 INFO Remoting: Starting remoting
    18/02/22 01:51:07 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.3.129:36868]
    18/02/22 01:51:07 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 36868.
    18/02/22 01:51:07 INFO SparkEnv: Registering MapOutputTracker
    18/02/22 01:51:07 INFO SparkEnv: Registering BlockManagerMaster
    18/02/22 01:51:07 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-3b5e312e-7f6b-491d-8539-4a5f38d3839a
    18/02/22 01:51:07 INFO MemoryStore: MemoryStore started with capacity 517.4 MB
    18/02/22 01:51:07 INFO SparkEnv: Registering OutputCommitCoordinator
    18/02/22 01:51:07 INFO Utils: Successfully started service 'SparkUI' on port 4040.
    18/02/22 01:51:07 INFO SparkUI: Started SparkUI at http://192.168.3.129:4040
    18/02/22 01:51:07 INFO AppClient$ClientEndpoint: Connecting to master spark://master:7077...
    18/02/22 01:51:08 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20180222015108-0000
    18/02/22 01:51:08 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 46282.
    18/02/22 01:51:08 INFO NettyBlockTransferService: Server created on 46282
    18/02/22 01:51:08 INFO BlockManagerMaster: Trying to register BlockManager
    18/02/22 01:51:08 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.3.129:46282 with 517.4 MB RAM, BlockManagerId(driver, 192.168.3.129, 46282)
    18/02/22 01:51:08 INFO BlockManagerMaster: Registered BlockManager
    18/02/22 01:51:08 INFO AppClient$ClientEndpoint: Executor added: app-20180222015108-0000/0 on worker-20180222174932-192.168.3.130-39616 (192.168.3.130:39616) with 1 cores
    18/02/22 01:51:08 INFO SparkDeploySchedulerBackend: Granted executor ID app-20180222015108-0000/0 on hostPort 192.168.3.130:39616 with 1 cores, 512.0 MB RAM
    18/02/22 01:51:08 INFO AppClient$ClientEndpoint: Executor added: app-20180222015108-0000/1 on worker-20180222174932-192.168.3.131-58163 (192.168.3.131:58163) with 1 cores
    18/02/22 01:51:08 INFO SparkDeploySchedulerBackend: Granted executor ID app-20180222015108-0000/1 on hostPort 192.168.3.131:58163 with 1 cores, 512.0 MB RAM
    18/02/22 01:51:09 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
    18/02/22 01:51:09 INFO SparkILoop: Created spark context..
    Spark context available as sc.
    18/02/22 01:51:10 INFO AppClient$ClientEndpoint: Executor updated: app-20180222015108-0000/1 is now RUNNING
    18/02/22 01:51:10 INFO AppClient$ClientEndpoint: Executor updated: app-20180222015108-0000/0 is now RUNNING
    18/02/22 01:51:16 INFO HiveContext: Initializing execution hive, version 1.2.1
    18/02/22 01:51:17 INFO ClientWrapper: Inspected Hadoop version: 2.6.0
    18/02/22 01:51:17 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0
    18/02/22 01:51:21 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
    18/02/22 01:51:22 INFO ObjectStore: ObjectStore, initialize called
    18/02/22 01:51:26 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
    18/02/22 01:51:26 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
    18/02/22 01:51:26 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
    18/02/22 01:51:30 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
    18/02/22 01:51:35 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
    18/02/22 01:51:38 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
    18/02/22 01:51:38 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
    18/02/22 01:51:39 INFO SparkDeploySchedulerBackend: Registered executor NettyRpcEndpointRef(null) (slaver2:55056) with ID 1
    18/02/22 01:51:39 INFO BlockManagerMasterEndpoint: Registering block manager slaver2:57607 with 146.2 MB RAM, BlockManagerId(1, slaver2, 57607)
    18/02/22 01:51:39 INFO SparkDeploySchedulerBackend: Registered executor NettyRpcEndpointRef(null) (slaver1:47165) with ID 0
    18/02/22 01:51:40 INFO BlockManagerMasterEndpoint: Registering block manager slaver1:38278 with 146.2 MB RAM, BlockManagerId(0, slaver1, 38278)
    18/02/22 01:51:40 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
    18/02/22 01:51:40 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
    18/02/22 01:51:40 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
    18/02/22 01:51:40 INFO ObjectStore: Initialized ObjectStore
    18/02/22 01:51:41 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
    18/02/22 01:51:42 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
    Java HotSpot(TM) Client VM warning: You have loaded library /tmp/libnetty-transport-native-epoll870809507217922299.so which might have disabled stack guard. The VM will try to fix the stack guard now.
    It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
    18/02/22 01:51:44 INFO HiveMetaStore: Added admin role in metastore
    18/02/22 01:51:44 INFO HiveMetaStore: Added public role in metastore
    18/02/22 01:51:44 INFO HiveMetaStore: No user is added in admin role, since config is empty
    18/02/22 01:51:45 INFO HiveMetaStore: 0: get_all_databases
    18/02/22 01:51:45 INFO audit: ugi=root    ip=unknown-ip-addr    cmd=get_all_databases    
    18/02/22 01:51:45 INFO HiveMetaStore: 0: get_functions: db=default pat=*
    18/02/22 01:51:45 INFO audit: ugi=root    ip=unknown-ip-addr    cmd=get_functions: db=default pat=*    
    18/02/22 01:51:45 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
    18/02/22 01:51:46 INFO SessionState: Created HDFS directory: /tmp/hive/root
    18/02/22 01:51:46 INFO SessionState: Created local directory: /tmp/root
    18/02/22 01:51:46 INFO SessionState: Created local directory: /tmp/afacd186-3b65-4cf9-a9b3-dad36055ed80_resources
    18/02/22 01:51:46 INFO SessionState: Created HDFS directory: /tmp/hive/root/afacd186-3b65-4cf9-a9b3-dad36055ed80
    18/02/22 01:51:46 INFO SessionState: Created local directory: /tmp/root/afacd186-3b65-4cf9-a9b3-dad36055ed80
    18/02/22 01:51:46 INFO SessionState: Created HDFS directory: /tmp/hive/root/afacd186-3b65-4cf9-a9b3-dad36055ed80/_tmp_space.db
    18/02/22 01:51:46 INFO HiveContext: default warehouse location is /user/hive/warehouse
    18/02/22 01:51:46 INFO HiveContext: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
    18/02/22 01:51:46 INFO ClientWrapper: Inspected Hadoop version: 2.6.0
    18/02/22 01:51:46 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0
    18/02/22 01:51:47 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
    18/02/22 01:51:47 INFO ObjectStore: ObjectStore, initialize called
    18/02/22 01:51:47 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
    18/02/22 01:51:47 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
    18/02/22 01:51:47 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
    18/02/22 01:51:48 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
    18/02/22 01:51:49 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
    18/02/22 01:51:51 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
    18/02/22 01:51:51 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
    18/02/22 01:51:51 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
    18/02/22 01:51:51 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
    18/02/22 01:51:51 INFO Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
    18/02/22 01:51:51 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
    18/02/22 01:51:51 INFO ObjectStore: Initialized ObjectStore
    18/02/22 01:51:51 INFO HiveMetaStore: Added admin role in metastore
    18/02/22 01:51:51 INFO HiveMetaStore: Added public role in metastore
    18/02/22 01:51:51 INFO HiveMetaStore: No user is added in admin role, since config is empty
    18/02/22 01:51:52 INFO HiveMetaStore: 0: get_all_databases
    18/02/22 01:51:52 INFO audit: ugi=root    ip=unknown-ip-addr    cmd=get_all_databases    
    18/02/22 01:51:52 INFO HiveMetaStore: 0: get_functions: db=default pat=*
    18/02/22 01:51:52 INFO audit: ugi=root    ip=unknown-ip-addr    cmd=get_functions: db=default pat=*    
    18/02/22 01:51:52 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
    18/02/22 01:51:52 INFO SessionState: Created local directory: /tmp/e209230b-e230-4688-9b83-b04d182b952d_resources
    18/02/22 01:51:52 INFO SessionState: Created HDFS directory: /tmp/hive/root/e209230b-e230-4688-9b83-b04d182b952d
    18/02/22 01:51:52 INFO SessionState: Created local directory: /tmp/root/e209230b-e230-4688-9b83-b04d182b952d
    18/02/22 01:51:52 INFO SessionState: Created HDFS directory: /tmp/hive/root/e209230b-e230-4688-9b83-b04d182b952d/_tmp_space.db
    18/02/22 01:51:52 INFO SparkILoop: Created sql context (with Hive support)..
    SQL context available as sqlContext.
    
    scala> 
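
    With the shell attached to the cluster, a quick check at the prompt confirms everything is wired up. A minimal sketch: sc.master reports the master URL the context is bound to, and the tiny job below runs on the cluster's executors:

    scala> sc.master                          // should print spark://master:7077
    scala> sc.parallelize(1 to 100).sum()     // 5050.0, computed on the executors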
    