  • [hadoop] "running beyond virtual memory" error: cause and fix

    Problem description:

       While running an application on Hadoop, the job failed with a "running beyond virtual memory" error. The message looks like this:

    Container [pid=28920,containerID=container_1389136889967_0001_01_000121] is running beyond virtual memory limits. Current usage: 1.2 GB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used. Killing container.

    Cause: a Container running on a worker node tried to use more memory than its limit and was killed by the NodeManager. The virtual memory limit in the message (2.1 GB) is the physical limit (1 GB) multiplied by yarn.nodemanager.vmem-pmem-ratio, whose default value is 2.1.
    [Excerpt] The NodeManager is killing your container. It sounds like you are trying to use hadoop streaming which is running as a child process of the map-reduce task. The NodeManager monitors the entire process tree of the task and if it eats up more memory than the maximum set in mapreduce.map.memory.mb or mapreduce.reduce.memory.mb respectively, we would expect the Nodemanager to kill the task, otherwise your task is stealing memory belonging to other containers, which you don't want.
    Solution:
    Set the memory limits for map and reduce tasks in mapred-site.xml as follows (adjust the values to match your machine's memory and your workload):

    <property>
      <name>mapreduce.map.memory.mb</name>
      <value>1536</value> <!-- physical memory limit for each map container -->
    </property>
    <property>
      <name>mapreduce.map.java.opts</name>
      <value>-Xmx1024M</value> <!-- JVM heap for map tasks; keep it below memory.mb to leave headroom for off-heap usage -->
    </property>
    <property>
      <name>mapreduce.reduce.memory.mb</name>
      <value>3072</value> <!-- physical memory limit for each reduce container -->
    </property>
    <property>
      <name>mapreduce.reduce.java.opts</name>
      <value>-Xmx2560M</value> <!-- JVM heap for reduce tasks -->
    </property>
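
    If raising the container sizes is not desirable, the virtual-memory check itself can be tuned in yarn-site.xml instead. A minimal sketch, assuming the standard YARN NodeManager properties; the value of 4 for the ratio is an illustrative choice, and whether to raise the ratio or disable the check entirely depends on your cluster policy:

    <!-- yarn-site.xml: raise the virtual/physical memory ratio (default is 2.1)... -->
    <property>
      <name>yarn.nodemanager.vmem-pmem-ratio</name>
      <value>4</value>
    </property>
    <!-- ...or turn off the virtual memory check altogether -->
    <property>
      <name>yarn.nodemanager.vmem-check-enabled</name>
      <value>false</value>
    </property>

    Disabling the check is common on JVM-heavy workloads, since the JVM can reserve large amounts of virtual address space that it never actually touches.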


    Appendix:

    Container is running beyond memory limits

    http://stackoverflow.com/questions/21005643/container-is-running-beyond-memory-limits

  • Original post: https://www.cnblogs.com/scw2901/p/4331682.html