  • hdfs OutOfMemoryError

    Pushing a large number of local files to HDFS with

    hadoop fs -put ${local_path} ${hdfs_path}

    fails with the following error:

    Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.util.Arrays.copyOfRange(Arrays.java:3664)
        at java.lang.StringBuffer.toString(StringBuffer.java:669)
        at java.net.URI.toString(URI.java:1945)
        at java.net.URI.<init>(URI.java:742)
        at org.apache.hadoop.fs.Path.initialize(Path.java:202)
        at org.apache.hadoop.fs.Path.<init>(Path.java:196)
        at org.apache.hadoop.fs.Path.getPathWithoutSchemeAndAuthority(Path.java:80)
        at org.apache.hadoop.fs.shell.CommandWithDestination.checkPathsForReservedRaw(CommandWithDestination.java:355)
        at org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:322)
        at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
        at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
        at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
        at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
        at org.apache.hadoop.fs.shell.CommandWithDestination.recursePath(CommandWithDestination.java:291)
        at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
        at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
        at org.apache.hadoop.fs.shell.CommandWithDestination.recursePath(CommandWithDestination.java:291)
        at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
        at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
        at org.apache.hadoop.fs.shell.CommandWithDestination.recursePath(CommandWithDestination.java:291)
        at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
        at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
        at org.apache.hadoop.fs.shell.CommandWithDestination.recursePath(CommandWithDestination.java:291)
        at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
        at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
        at org.apache.hadoop.fs.shell.CommandWithDestination.recursePath(CommandWithDestination.java:291)
        at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
        at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
        at org.apache.hadoop.fs.shell.CommandWithDestination.recursePath(CommandWithDestination.java:291)
        at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
        at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
        at org.apache.hadoop.fs.shell.CommandWithDestination.recursePath(CommandWithDestination.java:291)
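
    The client JVM is what runs out of memory here: the trace shows the shell command looping through Command.recursePath / processPaths, creating a Path (and URI) object for every entry it walks, so with a very large source tree the default heap cannot keep up. As a rough first check, counting the files to be pushed gives a sense of the scale (reusing the ${local_path} placeholder from the command above):

    # Each file found here roughly corresponds to Path/URI objects the
    # FsShell client must create while recursing during the put.
    find ${local_path} -type f | wc -l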

    Looking inside the hadoop launcher script, we find:

    hadoop-2.7.3/bin/hadoop:    exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
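
    That exec line means whatever lands in $JAVA_HEAP_MAX caps the heap of every command launched through bin/hadoop, including hadoop fs (which runs the org.apache.hadoop.fs.FsShell class). One way to confirm the -Xmx a running copy actually received is to look at the client process; something along these lines should work:

    # Print the -Xmx flag of the running FsShell client, if one is in flight.
    ps -ef | grep '[F]sShell' | grep -o -- '-Xmx[^ ]*'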

    Searching for JAVA_HEAP_MAX turns up:

    hadoop-2.7.3/libexec/hadoop-config.sh:  JAVA_HEAP_MAX="-Xmx""$HADOOP_HEAPSIZE""m"

    Searching further for HADOOP_HEAPSIZE turns up the same assignment:

    hadoop-2.7.3/libexec/hadoop-config.sh:  JAVA_HEAP_MAX="-Xmx""$HADOOP_HEAPSIZE""m"

    as well as this passage in the bundled documentation:

    hadoop-2.7.3/share/doc/hadoop/hadoop-project-dist/hadoop-common/ClusterSetup.html:<li><tt>HADOOP_HEAPSIZE</tt> / <tt>YARN_HEAPSIZE</tt> - The maximum amount of heapsize to use, in MB e.g. if the varibale is set to 1000 the heap will be set to 1000MB. This is used to configure the heap size for the daemon. By default, the value is 1000. If you want to configure the values separately for each deamon you can use.</li>
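
    Putting the pieces together: although the documentation above talks about daemons, the same HADOOP_HEAPSIZE variable sizes the JVM behind hadoop fs, because hadoop-config.sh builds JAVA_HEAP_MAX from it and bin/hadoop passes that to every command. The relevant logic in libexec/hadoop-config.sh is roughly the following (lightly abridged from the 2.7.3 script):

    # default maximum heap for any JVM started through bin/hadoop
    JAVA_HEAP_MAX=-Xmx1000m

    # check envvars which might override default args
    if [ "$HADOOP_HEAPSIZE" != "" ]; then
      JAVA_HEAP_MAX="-Xmx""$HADOOP_HEAPSIZE""m"
    fi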

    Raise the JVM maximum heap for the client:

    export HADOOP_HEAPSIZE=100000
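
    Exported in the same shell before re-running the command, this makes bin/hadoop start the client with -Xmx100000m. The value is in MB; 100000 (roughly 100 GB) is what was used here, and a much smaller figure is often enough, so treat the exact number as something to tune for the size of the tree being pushed. For example (8192 is only an illustrative value, and ${local_path} / ${hdfs_path} are the same placeholders as above):

    # Raise the client heap (in MB) and retry the recursive copy.
    export HADOOP_HEAPSIZE=8192
    hadoop fs -put ${local_path} ${hdfs_path}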
  • Original post: https://www.cnblogs.com/suanec/p/9878612.html