Container [pid=6263,containerID=container_1494900155967_0001_02_000001] is running beyond virtual memory limits

    Running Spark in client mode, spark-submit failed with the following error:

    User:  hadoop  
    Name:  Spark Pi  
    Application Type:  SPARK  
    Application Tags:   
    YarnApplicationState:  FAILED  
    FinalStatus Reported by AM:  FAILED  
    Started:  16-May-2017 10:03:02  
    Elapsed:  14sec  
    Tracking URL:  History  
    Diagnostics:  Application application_1494900155967_0001 failed 2 times due to AM Container for appattempt_1494900155967_0001_000002 exited with exitCode: -103 
    For more detailed output, check application tracking page:http://master:8088/proxy/application_1494900155967_0001/Then, click on links to logs of each attempt. 
    Diagnostics: Container [pid=6263,containerID=container_1494900155967_0001_02_000001] is running beyond virtual memory limits. Current usage: 107.3 MB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used. Killing container.  

    This means the container needed 2.2 GB of virtual memory while the limit was only 2.1 GB, so YARN killed the container.
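The arithmetic behind the diagnostic can be sketched as follows (values taken from the error message above; 2.1 is YARN's default `yarn.nodemanager.vmem-pmem-ratio`):

```python
# Sketch of YARN's virtual-memory check, using the numbers from the
# diagnostics above. The NodeManager computes the virtual limit as
# physical memory * vmem-pmem ratio and kills any container that exceeds it.
physical_limit_gb = 1.0   # container physical memory (1 GB)
vmem_pmem_ratio = 2.1     # YARN default yarn.nodemanager.vmem-pmem-ratio
virtual_limit_gb = physical_limit_gb * vmem_pmem_ratio  # 2.1 GB

virtual_used_gb = 2.2     # usage reported by the diagnostics

# The container is killed because usage exceeds the limit.
print(virtual_used_gb > virtual_limit_gb)  # True
```

Raising the ratio to 2.5 lifts the limit to 1 GB × 2.5 = 2.5 GB, comfortably above the 2.2 GB actually used.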

    My spark.executor.memory was set to 1 GB, i.e. 1 GB of physical memory. YARN's default virtual-to-physical memory ratio is 2.1, which gives a virtual memory limit of 2.1 GB, less than the 2.2 GB required. The fix is to raise the virtual-to-physical ratio by adding a property to yarn-site.xml:

        <property>
            <name>yarn.nodemanager.vmem-pmem-ratio</name>
            <value>2.5</value>
        </property>

    Then restart YARN. The container is now allowed 2.5 GB of virtual memory and the job runs without error.
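Note that a commonly used alternative (not from the original post) is to disable YARN's virtual-memory check entirely, since the virtual-memory accounting is often considered unreliable on modern JVMs. If you prefer that route, the property is:

        <property>
            <name>yarn.nodemanager.vmem-check-enabled</name>
            <value>false</value>
        </property>

Either change goes in yarn-site.xml on the NodeManager hosts and takes effect after a YARN restart.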

Original post: https://www.cnblogs.com/mstk/p/6860035.html