  • Hadoop problems and solutions

    Reposted from http://www.bkjia.com/ASPjc/931209.html

    Fixing Exception: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z and a series of related problems


      

    1. Introduction

       While debugging Hadoop 2 code in Eclipse on Windows, we set up the hadoop-eclipse-plugin-2.6.0.jar plugin in Eclipse and then hit a series of problems when running the Hadoop code; it took several days of fiddling before the code finally ran. Below are the problems and how we solved them, offered as a reference for anyone who runs into the same issues.

      The WordCount.java word-count code for Hadoop 2 is as follows:

         

    import java.io.IOException;
    import java.util.StringTokenizer;
    
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    
    public class WordCount {
    
      public static class TokenizerMapper
           extends Mapper<Object, Text, Text, IntWritable>{
    
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();
    
        public void map(Object key, Text value, Context context
                        ) throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
          }
        }
      }
    
      public static class IntSumReducer
           extends Reducer<Text,IntWritable,Text,IntWritable> {
        private IntWritable result = new IntWritable();
    
        public void reduce(Text key, Iterable<IntWritable> values,
                           Context context
                           ) throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }
    
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }
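
    A note on running it: main() expects two arguments, the HDFS input directory (args[0]) and the output directory (args[1]); note that MapReduce normally refuses to start if the output directory already exists. In an Eclipse run configuration the arguments would look something like hdfs://localhost:9000/user/root/input and hdfs://localhost:9000/user/root/output (the host and port here are assumptions; use whatever fs.defaultFS is set to in your core-site.xml).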
    


    Problem 1: An internal error occurred during: "Map/Reduce location status updater". java.lang.NullPointerException

       We put hadoop-eclipse-plugin-2.6.0.jar into Eclipse's plugins directory (our Eclipse directory is F:\tool\eclipse-jee-juno-SR2\eclipse-jee-juno-SR2\plugins), restarted Eclipse, and opened Window --> Preferences, where the Hadoop Map/Reduce option now appears. Clicking it produced "An internal error occurred during: "Map/Reduce location status updater". java.lang.NullPointerException", as shown in the screenshot:

       

      Solution:

       We realized that the freshly deployed Hadoop 2 had no input or output directories yet, so first create these folders on HDFS:

       #bin/hdfs dfs -mkdir -p /user/root/input

       #bin/hdfs dfs -mkdir -p /user/root/output
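
     If you prefer to create these directories from Java instead of the shell, the FileSystem API does the same job. A minimal sketch, where the hdfs://localhost:9000 URI is an assumption (substitute your fs.defaultFS value):

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class MkDirs {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // hdfs://localhost:9000 is an assumed NameNode address -- adjust to your setup
        FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9000"), conf);
        fs.mkdirs(new Path("/user/root/input"));   // same effect as: hdfs dfs -mkdir -p /user/root/input
        fs.mkdirs(new Path("/user/root/output"));  // same effect as: hdfs dfs -mkdir -p /user/root/output
        fs.close();
      }
    }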

     We can now see these two directories under DFS Locations in Eclipse, as shown in the screenshot:

      

    Problem 2: Exception in thread "main" java.lang.NullPointerException at java.lang.ProcessBuilder.start(Unknown Source)
      

    Running Hadoop 2's WordCount.java code produced this error:

         

      log4j:WARN Please initialize the log4j system properly.
    log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
    Exception in thread "main" java.lang.NullPointerException
           at java.lang.ProcessBuilder.start(Unknown Source)
           at org.apache.hadoop.util.Shell.runCommand(Shell.java:482)
           at org.apache.hadoop.util.Shell.run(Shell.java:455)
           at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
           at org.apache.hadoop.util.Shell.execCommand(Shell.java:808)
           at org.apache.hadoop.util.Shell.execCommand(Shell.java:791)
           at ...

    Analysis:

      The bin directory of Hadoop 2 (and later) downloads does not ship with winutils.exe.

    Solution:

      1. Download hadoop-common-2.2.0-bin-master.zip from https://codeload.github.com/srccodes/hadoop-common-2.2.0-bin/zip/master, unpack it, and copy everything under the bin directory of hadoop-common-2.2.0-bin-master into the Hadoop2/bin directory of the Hadoop 2 we downloaded. As shown in the screenshot:

         

      2. In Eclipse, under Window -> Preferences -> Hadoop Map/Reduce, point the plugin at the Hadoop directory we placed on disk, as shown in the screenshot:

        

      3. Configure the HADOOP_HOME and PATH environment variables for Hadoop 2, as shown in the screenshot:
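
      As an alternative to a system-wide HADOOP_HOME, Hadoop's Shell class also consults the hadoop.home.dir system property when it looks for bin\winutils.exe, so the location can be set from inside the program. A sketch, where the D:\hadoop path is an assumption (point it at wherever you unpacked Hadoop):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class WordCountLauncher {
      public static void main(String[] args) throws Exception {
        // Must run before any Hadoop class triggers Shell's static initializer;
        // D:\hadoop is an assumed unpack location containing bin\winutils.exe
        System.setProperty("hadoop.home.dir", "D:\\hadoop");
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        // ... then configure mapper, reducer, and paths exactly as in WordCount above ...
      }
    }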

     

     Problem 3: Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z

      After solving Problem 2 and running the WordCount.java code again, this problem appeared:

        

    log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
    log4j:WARN Please initialize the log4j system properly.
    log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
    Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
           at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
           at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:557)
           at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:977)
           at org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:187)
           at org.apache.hadoop.util.DiskChecker.checkDirAccess(DiskChecker.java:174)
           at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:108)
           at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:285)
           at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:344)
           at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
           at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
           at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)
           at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:131)

     Analysis:

        C:\Windows\System32 is missing hadoop.dll; copying that file into C:\Windows\System32 should fix it.

     Solution:

        Put the hadoop.dll from the bin directory of hadoop-common-2.2.0-bin-master into C:\Windows\System32, then reboot. It may not be that simple, though; the same problem can still appear.

      Continuing the analysis:

        The stack trace points at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:557), so let's look at line 557 of the NativeIO class, as shown in the screenshot:

           

       This Windows-only method checks whether the current process has the requested access rights to the given path. To let execution proceed, we grant access ourselves by modifying the source so that it returns true. Download the matching Hadoop source archive hadoop-2.6.0-src.tar.gz, unpack it, copy NativeIO.java from hadoop-2.6.0-src\hadoop-common-project\hadoop-common\src\main\java\org\apache\hadoop\io\nativeio into the Eclipse project (keeping the same package path), and change line 557 to return true, as shown in the screenshot:
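
       For reference, the patch shown in the screenshot amounts to the following: a sketch of the method inside the Windows inner class of the copied NativeIO.java (Hadoop 2.6.0), suitable for local debugging only:

    // Inside org.apache.hadoop.io.nativeio.NativeIO.Windows, around line 557:
    public static boolean access(String path, AccessRight desiredAccess)
        throws IOException {
      // Original body: return access0(path, desiredAccess.accessRight());
      // Patched so the JNI call (which needs a matching hadoop.dll) is never
      // reached -- every access check simply succeeds. Debug-only workaround.
      return true;
    }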

      

       

      Problem 4: org.apache.hadoop.security.AccessControlException: Permission denied: user=zhengcy, access=WRITE, inode="/user/root/output":root:supergroup:drwxr-xr-x

      Running the WordCount.java code produced this problem:

        

    2014-12-18 16:03:24,092  WARN (org.apache.hadoop.mapred.LocalJobRunner:560) - job_local374172562_0001
    org.apache.hadoop.security.AccessControlException: Permission denied: user=zhengcy, access=WRITE, inode="/user/root/output":root:supergroup:drwxr-xr-x
    	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
    	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
    	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:238)
    	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)
    	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6512)
    	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6494)
    	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6446)
    	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4248)
    	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4218)
    	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
    	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
    

       

     Analysis:

      Our local user has no write permission on the output directory in HDFS.

    Solution:

        The directories where HDFS stores its files are configured in hdfs-site.xml; I covered this in the post on pseudo-distributed Hadoop deployment. As a quick refresher, shown in the screenshot:

    Add the following to hdfs-site.xml under etc/hadoop:

      <property>
         <name>dfs.permissions</name>
         <value>false</value>
      </property>

    This switches HDFS permission checking off. Do not configure a production server this way.
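
    A less drastic alternative, which leaves HDFS permission checking on, is to have the client identify itself as the owner of /user/root. With simple (non-Kerberos) authentication, Hadoop's UserGroupInformation honors the HADOOP_USER_NAME environment variable and, as far as I know, the system property of the same name, so adding this at the very top of main() in WordCount should work:

    // Impersonate the HDFS user that owns /user/root; must run before the
    // first FileSystem/Job call creates the login user. Simple-auth only.
    System.setProperty("HADOOP_USER_NAME", "root");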

      Problem 5: File /usr/root/input/file01._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation

         As shown in the screenshot:

          

      Analysis:

      The first time, we ran #hadoop namenode -format and then #sbin/start-all.sh; at that point #jps showed the DataNode. After running #hadoop namenode -format a second time, #jps no longer shows the DataNode, as shown in the screenshot:

          

       Then, when we tried to load text into the input directory with bin/hdfs dfs -put /usr/local/hadoop/hadoop-2.6.0/test/* /user/root/input, that is, uploading the files under /test/* to /user/root/input on HDFS, this problem appeared.

     Solution:

      We ran hadoop namenode -format too many times: each format gives the NameNode a new cluster ID while the DataNode keeps the old one, so they no longer match and the DataNode will not start. Delete the DataNode and NameNode storage directories configured in hdfs-site.xml, then format again.

