  • My first MapReduce program

    Finished my first MapReduce example; recording it here.

    Environment:

    Hadoop deployed on three Ubuntu machines
    Development done on Windows 7
    Hadoop version 2.2.0

    Downloaded hadoop-eclipse-plugin-2.2.0.jar and put it into Eclipse's plugin folder; after restarting Eclipse, the plugin's indicator appears.

    Right-click in the panel at the bottom: add hadoop location

    At this point, the HDFS file tree appears on the left side of Eclipse.

    This view is effectively a simple HDFS client embedded in Eclipse, from which you can create, delete, modify, and browse files.
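
    The same operations the plugin's DFS view performs are available programmatically. Below is a minimal sketch using the org.apache.hadoop.fs.FileSystem API; the namenode URI hdfs://namenode:9000 and all paths are placeholders, so substitute your own cluster's fs.defaultFS and files.

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsClientDemo {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder namenode address -- substitute your cluster's fs.defaultFS
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:9000"), conf);

        Path dir = new Path("/demo");
        fs.mkdirs(dir);                                   // create a directory
        fs.copyFromLocalFile(new Path("local.txt"),       // upload a local file
                             new Path("/demo/local.txt"));
        for (FileStatus st : fs.listStatus(dir)) {        // list directory contents
          System.out.println(st.getPath() + " " + st.getLen() + " bytes");
        }
        fs.delete(new Path("/demo/local.txt"), false);    // delete a file (non-recursive)
        fs.close();
      }
    }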

    -------------------------------

    Now let's dig into the programming side...

    1. Create a new Map/Reduce Project. In this example, Eclipse and Hadoop are on different hosts, which really cost me some time.

    As shown in the figure below, you should choose the second option (specify Hadoop library location) and point it at the lib/native folder under the Hadoop directory. If you pick the wrong location, the project cannot be created!

    2. Take hadoop-mapreduce-examples-2.2.0-sources.jar from hadoop-2.2.0/share/hadoop/mapreduce/sources/, unpack it, and import it into Eclipse. Now we can start studying the code.

    The basic flow is as follows:

    (input) <k1, v1> -> map -> <k2, v2> -> combine -> <k2, v2> -> reduce -> <k3, v3> (output)
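
    As a small worked example, feeding a single line "hello world hello" through WordCount gives (the input key 0 is the line's byte offset, supplied by TextInputFormat):

    (input)    <0, "hello world hello">
    map     -> <hello, 1>, <world, 1>, <hello, 1>
    combine -> <hello, 2>, <world, 1>
    reduce  -> <hello, 2>, <world, 1>   (output)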

    /**
     * Licensed to the Apache Software Foundation (ASF) under one
     * or more contributor license agreements.  See the NOTICE file
     * distributed with this work for additional information
     * regarding copyright ownership.  The ASF licenses this file
     * to you under the Apache License, Version 2.0 (the
     * "License"); you may not use this file except in compliance
     * with the License.  You may obtain a copy of the License at
     *
     *     http://www.apache.org/licenses/LICENSE-2.0
     *
     * Unless required by applicable law or agreed to in writing, software
     * distributed under the License is distributed on an "AS IS" BASIS,
     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     * See the License for the specific language governing permissions and
     * limitations under the License.
     */
    package org.apache.hadoop.examples;
    
    import java.io.IOException;
    import java.util.StringTokenizer;
    
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.util.GenericOptionsParser;
    
    public class WordCount {
    /* The map method in the Mapper processes one line at a time, as delivered by the configured TextInputFormat. It splits the line into whitespace-separated tokens with StringTokenizer, then emits a <word, 1> key-value pair for each token. */
      public static class TokenizerMapper 
           extends Mapper<Object, Text, Text, IntWritable>{    
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();
        public void map(Object key, Text value, Context context
                        ) throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
          }
        }
      }
      
    /*
    Combiner: after each map task runs, its output is sorted by key and handed to the local combiner (configured on the job; here the same class as the Reducer) for local aggregation.
    The reduce method in the Reducer simply sums the number of occurrences of each key (each word, in this example).
    */
      public static class IntSumReducer
           extends Reducer<Text,IntWritable,Text,IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
                           Context context
                           ) throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
          System.err.println("Usage: wordcount <in> <out>");
          System.exit(2);
        }
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // the combiner uses the same class as the reducer, lightening the reducer's load
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }
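
    One note on the driver: in Hadoop 2.x the Job(Configuration, String) constructor is marked deprecated, and the static factory Job.getInstance is the replacement, so the job can equivalently be created as:

    Job job = Job.getInstance(conf, "word count");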

    3. The WordCount run looks like this:

    root@kali:/data/hadoop/share/hadoop# hadoop jar ./mapreduce/hadoop-mapreduce-examples-2.2.0.jar  wordcount /abc/start-all.sh /abd/out
    14/03/09 21:24:43 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
    14/03/09 21:24:43 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
    14/03/09 21:24:44 INFO input.FileInputFormat: Total input paths to process : 1
    14/03/09 21:24:44 INFO mapreduce.JobSubmitter: number of splits:1
    14/03/09 21:24:44 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
    14/03/09 21:24:44 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
    14/03/09 21:24:44 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
    14/03/09 21:24:44 INFO Configuration.deprecation: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
    14/03/09 21:24:44 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class

    http://phz50.iteye.com/blog/932373

    http://www.ibm.com/developerworks/cn/java/j-javadev2-15/

    http://www.cnblogs.com/flyoung2008/archive/2011/12/09/2281400.html

    http://f.dataguru.cn/thread-167597-1-1.html  (files uploaded to HDFS from Eclipse end up with size 0)

    http://cs.smith.edu/dftwiki/index.php/Hadoop_Tutorial_1_--_Running_WordCount  step by step!

  • Original post: https://www.cnblogs.com/vigarbuaa/p/3580236.html