  • Debugging Spark SQL in the IDEA Development Environment and How to Fix the Problems Encountered

    1. Problem

     java.lang.OutOfMemoryError: PermGen space

      java.lang.OutOfMemoryError: Java heap space

    17/04/17 17:00:05 WARN NettyRpcEndpointRef: Error sending message [message = Heartbeat(driver,[Lscala.Tuple2;@631e6c90,BlockManagerId(driver, localhost, 53273))] in 1 attempts
    org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [10 seconds]. This timeout is controlled by spark.executor.heartbeatInterval
        at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48)
        at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63)
        at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
        at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33)
        at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:76)
        at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
        at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$reportHeartBeat(Executor.scala:449)
        at org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply$mcV$sp(Executor.scala:470)
        at org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply(Executor.scala:470)
        at org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply(Executor.scala:470)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1765)
        at org.apache.spark.executor.Executor$$anon$1.run(Executor.scala:470)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    Caused by: java.util.concurrent.TimeoutException: Futures timed out after [10 seconds]
        at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
        at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
        at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
        at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
        at scala.concurrent.Await$.result(package.scala:107)
        at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
        ... 14 more

    17/04/17 17:46:36 ERROR TaskSetManager: Task 1 in stage 3.0 failed 1 times; aborting job
    Exception in thread "qtp502891368-59" java.lang.OutOfMemoryError: Java heap space
    17/04/17 17:57:36 ERROR Utils: uncaught error in thread Spark Context Cleaner, stopping SparkContext
    java.lang.OutOfMemoryError: Java heap space
    17/04/17 17:57:36 WARN TaskSetManager: Lost task 0.0 in stage 3.0 (TID 413, localhost): ExecutorLostFailure (executor driver exited caused by one of the running tasks) Reason: Executor heartbeat timed out after 182499 ms
    17/04/17 17:57:36 INFO TaskSchedulerImpl: Removed TaskSet 3.0, whose tasks have all completed, from pool
    Exception in thread "qtp502891368-62" java.lang.OutOfMemoryError: Java heap space
    17/04/17 17:57:36 WARN SingleThreadEventExecutor: Unexpected exception from an event executor:
    java.lang.OutOfMemoryError: Java heap space
    17/04/17 17:57:36 ERROR Executor: Exception in task 0.0 in stage 3.0 (TID 413)
    java.lang.OutOfMemoryError: Java heap space
    17/04/17 17:57:36 WARN NettyRpcEnv: Ignored message: true

    Suspected cause:

    Spark's memory consumption has three main components (all of which depend on what your application does):

    1. the size of the objects in your dataset (measurable with Spark's SizeEstimator; see the sketch after this list)
    2. the cost of accessing those objects
    3. the overhead of garbage collection (GC)
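
    As a rough check on item 1, Spark ships a public utility, org.apache.spark.util.SizeEstimator, that estimates an object's footprint on the JVM heap. The Record case class below is a made-up stand-in for one row of your data, not something from the original post:

    ```scala
    import org.apache.spark.util.SizeEstimator

    // Hypothetical record type standing in for one row of the dataset.
    case class Record(id: Long, name: String, scores: Array[Double])

    object SizeCheck {
      def main(args: Array[String]): Unit = {
        val sample = Record(1L, "example", Array.fill(100)(0.0))
        // Approximate bytes this object occupies on the heap; multiply by the
        // row count to gauge how much memory caching the dataset will need.
        println(s"Estimated size: ${SizeEstimator.estimate(sample)} bytes")
      }
    }
    ```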

    The heartbeat failures are caused by the network or by GC pauses: the heartbeat from the executor or task is not received in time, so the executor and its tasks are marked lost (Executor & Task Lost). In that case, raise spark.network.timeout, e.g. to 300s (5 min) or higher as the situation requires; see the sketch below.
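
    A minimal sketch of setting these timeouts when building the context; the app name, master, and exact values are illustrative assumptions, not from the original post. Note that spark.network.timeout must be at least as large as spark.executor.heartbeatInterval:

    ```scala
    import org.apache.spark.{SparkConf, SparkContext}

    object TimeoutDemo {
      def main(args: Array[String]): Unit = {
        // Raise the RPC/network timeout so that long GC pauses no longer
        // trigger "Futures timed out" heartbeat failures.
        val conf = new SparkConf()
          .setAppName("SparkSQLDebug")   // illustrative app name
          .setMaster("local[*]")         // local debugging inside IDEA
          .set("spark.network.timeout", "300s")           // default is 120s
          .set("spark.executor.heartbeatInterval", "30s") // default is 10s

        val sc = new SparkContext(conf)
        // ... run the Spark SQL job under test ...
        sc.stop()
      }
    }
    ```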

    Solution:

    For the OutOfMemoryError itself, set the JVM options of the IDEA run configuration: -Xms256m -Xmx512m -XX:PermSize=256m -XX:MaxPermSize=256M
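
    In IDEA these typically go under Run → Edit Configurations → VM options for the application being debugged. Note that -XX:PermSize and -XX:MaxPermSize exist only on Java 7 and earlier; on Java 8+ the permanent generation was replaced by Metaspace, so the rough equivalent of the PermGen settings is -XX:MaxMetaspaceSize=256m.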

    If you launch from the command line on Linux instead:
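
    The same settings can be passed to spark-submit; for example (standard flags, illustrative values): `spark-submit --driver-memory 512m --conf spark.network.timeout=300s --conf spark.executor.heartbeatInterval=30s <your-app.jar>`.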

    Reference: Summary of common Hadoop and Spark configuration parameters (http://www.tuicool.com/articles/naaAzq2)
