  • Hadoop_17_MapReduce_Case 2_Per-user mobile traffic statistics (controlling ReduceTask parallelism)

    Requirement 1: For each user (phone number), compute the total upstream traffic, total downstream traffic, and overall total traffic.

    1. The data below is saved as a .dat file (the fields are separated by tab characters, so the file format must preserve them). The host/category columns are optional, so rows differ in field count; this is why the Mapper later reads the traffic fields counting from the end of each row:

    1363157985066     13726230503    00-FD-07-A4-72-B8:CMCC    120.196.100.82    i02.c.aliimg.com        24    27    2481    24681    200
    1363157995052     13826544101    5C-0E-8B-C7-F1-E0:CMCC    120.197.40.4            4    0    264    0    200
    1363157991076     13926435656    20-10-7A-28-CC-0A:CMCC    120.196.100.99            2    4    132    1512    200
    1363154400022     13926251106    5C-0E-8B-8B-B1-50:CMCC    120.197.40.4            4    0    240    0    200
    1363157993044     18211575961    94-71-AC-CD-E6-18:CMCC-EASY    120.196.100.99    iface.qiyi.com    视频网站    15    12    1527    2106    200
    1363157995074     84138413       5C-0E-8B-8C-E8-20:7DaysInn    120.197.40.4    122.72.52.12        20    16    4116    1432    200
    1363157993055     13560439658    C4-17-FE-BA-DE-D9:CMCC    120.196.100.99            18    15    1116    954    200
    1363157995033     15920133257    5C-0E-8B-C7-BA-20:CMCC    120.197.40.4    sug.so.360.cn    信息安全    20    20    3156    2936    200
    1363157983019     13719199419    68-A1-B7-03-07-B1:CMCC-EASY    120.196.100.82            4    0    240    0    200
    1363157984041     13660577991    5C-0E-8B-92-5C-20:CMCC-EASY    120.197.40.4    s19.cnzz.com    站点统计    24    9    6960    690    200
    1363157973098     15013685858    5C-0E-8B-C7-F7-90:CMCC    120.197.40.4    rank.ie.sogou.com    搜索引擎    28    27    3659    3538    200
    1363157986029     15989002119    E8-99-C4-4E-93-E0:CMCC-EASY    120.196.100.99    www.umeng.com    站点统计    3    3    1938    180    200
    1363157992093     13560439658    C4-17-FE-BA-DE-D9:CMCC    120.196.100.99            15    9    918    4938    200
    1363157986041     13480253104    5C-0E-8B-C7-FC-80:CMCC-EASY    120.197.40.4            3    3    180    180    200
    1363157984040     13602846565    5C-0E-8B-8B-B6-00:CMCC    120.197.40.4    2052.flash2-http.qq.com    综合门户    15    12    1938    2910    200
    1363157995093     13922314466    00-FD-07-A2-EC-BA:CMCC    120.196.100.82    img.qfc.cn        12    12    3008    3720    200
    1363157982040     13502468823    5C-0A-5B-6A-0B-D4:CMCC-EASY    120.196.100.99    y0.ifengimg.com    综合门户    57    102    7335    110349    200
    1363157986072     18320173382    84-25-DB-4F-10-1A:CMCC-EASY    120.196.100.99    input.shouji.sogou.com    搜索引擎    21    18    9531    2412    200
    1363157990043     13925057413    00-1F-64-E1-E6-9A:CMCC    120.196.100.55    t3.baidu.com    搜索引擎    69    63    11058    48243    200
    1363157988072     13760778710    00-FD-07-A4-7B-08:CMCC    120.196.100.82            2    2    120    120    200
    1363157985066     13726238888    00-FD-07-A4-72-B8:CMCC    120.196.100.82    i02.c.aliimg.com        24    27    2481    24681    200
    1363157993055     13560436666    C4-17-FE-BA-DE-D9:CMCC    120.196.100.99            18    15    1116    954    200

    2. Implementation approach:

      1. First, extract the phone number, upstream traffic, and downstream traffic from the map input (the custom map method is called once per line of input); the framework then dispatches records by key, so that all records with the same key reach the same ReduceTask.

      2. The map output is <手机号, bean>: a custom JavaBean encapsulates the traffic fields and is transmitted as the map output value. The bean must implement Hadoop's Writable serialization interface, which means implementing two methods (shown below).

      3. The Reducer receives <手机号, list> and accumulates the values, then writes out the result (the framework calls the reduce method once per incoming key group).
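    For reference, the interface mentioned in step 2 is org.apache.hadoop.io.Writable, which declares exactly the two methods FlowBean implements further down:

    public interface Writable {
        /** serialize this object's fields to out */
        void write(DataOutput out) throws IOException;
        /** deserialize this object's fields from in */
        void readFields(DataInput in) throws IOException;
    }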

    3. Code: FlowCount.java

    package cn.bigdata.hdfs.flowsum;
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    
    public class FlowCount {
        static class FlowCountMapper extends Mapper<LongWritable, Text, Text, FlowBean>{
            @Override
            protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
                //convert the line to a String
                String line = value.toString();
                //split into fields on the tab delimiter
                String[] fields = line.split("\t");
                //extract the phone number
                String phoneNbr = fields[1];
                //extract the up/down traffic, indexed from the end of the row because
                //the optional host/category columns shift the middle field positions
                long upFlow = Long.parseLong(fields[fields.length-3]);
                long dFlow = Long.parseLong(fields[fields.length-2]);
                
                context.write(new Text(phoneNbr), new FlowBean(upFlow, dFlow));
            }
        }
        
        static class FlowCountReducer extends Reducer<Text, FlowBean, Text, FlowBean>{
            
            //<183323,bean1><183323,bean2><183323,bean3><183323,bean4>.......
            @Override
            protected void reduce(Text key, Iterable<FlowBean> values, Context context) throws IOException, InterruptedException {
                long sum_upFlow = 0;
                long sum_dFlow = 0;
                
                //iterate over all beans for this phone number, accumulating the up and down traffic
                for(FlowBean bean: values){
                    sum_upFlow += bean.getUpFlow();
                    sum_dFlow += bean.getdFlow();
                }
                
                FlowBean resultBean = new FlowBean(sum_upFlow, sum_dFlow);
                context.write(key, resultBean);
            }
        }
        
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            /*conf.set("mapreduce.framework.name", "yarn");
            conf.set("yarn.resoucemanager.hostname", "mini1");*/
            Job job = Job.getInstance(conf);
            
            /*job.setJar("/home/hadoop/wc.jar");*/
            //locate the jar containing this program via its driver class
            job.setJarByClass(FlowCount.class);
            
            //specify the Mapper/Reducer classes this job uses
            job.setMapperClass(FlowCountMapper.class);
            job.setReducerClass(FlowCountReducer.class);
            
            //specify the kv types of the mapper output
            job.setMapOutputKeyClass(Text.class);
            job.setMapOutputValueClass(FlowBean.class);
            
            //specify the kv types of the final output
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(FlowBean.class);
            
            //specify the directory of the job's raw input files
            FileInputFormat.setInputPaths(job, new Path(args[0]));
            //specify the directory for the job's output
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            
            //submit the job's configuration, and the jar containing the job's classes, to YARN to run
            /*job.submit();*/
            boolean res = job.waitForCompletion(true);
            System.exit(res?0:1);
        }
    }

    FlowBean.java

    To have the Reducer's output render the custom data type as text, simply override FlowBean's toString() method.

    package cn.bigdata.hdfs.flowsum;
    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.Writable;
    public class FlowBean implements Writable{  
        private long upFlow;
        private long dFlow;
        private long sumFlow;
        
        //deserialization instantiates the bean reflectively through the no-arg constructor, so one must be defined explicitly
        public FlowBean(){}
        
        public FlowBean(long upFlow, long dFlow) {
            this.upFlow = upFlow;
            this.dFlow = dFlow;
            this.sumFlow = upFlow + dFlow;
        }
        
        public long getUpFlow() {
            return upFlow;
        }
        public void setUpFlow(long upFlow) {
            this.upFlow = upFlow;
        }
        public long getdFlow() {
            return dFlow;
        }
        public void setdFlow(long dFlow) {
            this.dFlow = dFlow;
        }
    
    
        public long getSumFlow() {
            return sumFlow;
        }
    
    
        public void setSumFlow(long sumFlow) {
            this.sumFlow = sumFlow;
        }
    
        /**
     * Serialization method
         */
        @Override
        public void write(DataOutput out) throws IOException {
            out.writeLong(upFlow);
            out.writeLong(dFlow);
            out.writeLong(sumFlow);
            
        }
    
        /**
     * Deserialization method
     * Note: the fields must be read back in exactly the order they were written
         */
        @Override
        public void readFields(DataInput in) throws IOException {
             upFlow = in.readLong();
             dFlow = in.readLong();
             sumFlow = in.readLong();
        }
        
        @Override
        public String toString() {   
            return upFlow + "	" + dFlow + "	" + sumFlow;
        }
    }
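    With the toString() above and the default TextOutputFormat (which writes the key, a tab, then value.toString()), each result line is phone, upFlow, dFlow, sumFlow. For example, 13726230503 occurs once in the sample data, with 2481 bytes up and 24681 bytes down, so its result line is:

    13726230503    2481    24681    27162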

    4. Run the program:

      4.1. Create the HDFS input directory: hadoop fs -mkdir -p /wordcount/phoneFlum, then upload the .dat file from step 1 into it (the upload isn't shown in the post; hadoop fs -put would do it)

      4.2. Run the MapReduce program's jar:

        hadoop jar flowsum.jar cn.bigdata.hdfs.flowsum.FlowCount /wordcount/phoneFlum /wordcount/phoneFlumOut

    5. Check the results:
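      (The original post shows a screenshot of the result here.) From the command line, the output can be inspected with the standard HDFS commands, e.g.:

        hadoop fs -cat /wordcount/phoneFlumOut/part-r-00000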

      


     Requirement 2: Write the traffic statistics to different output files according to the province each phone number belongs to (controlling ReduceTask parallelism with a custom Partitioner)

    2. Implementation approach:

      1. MapReduce takes the kv pairs emitted by the map side, assigns each one a partition by key (calling getPartition), and dispatches the partitions to different ReduceTasks.

      2. The Partitioner component is invoked as the map output is written; the partition number it returns decides which partition (and therefore which ReduceTask) each record lands in. The default rule is to dispatch by the key's hashCode modulo the number of ReduceTasks; the & Integer.MAX_VALUE mask clears the sign bit so a negative hashCode cannot produce a negative partition. The source:

    public class HashPartitioner<K, V> extends Partitioner<K, V> {
      /** Use {@link Object#hashCode()} to partition. */
      public int getPartition(K key, V value,int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
      }
    }
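    To see why this default doesn't fit the requirement, note that phone numbers sharing a province prefix typically scatter across partitions. A standalone sketch (plain JVM, no cluster needed; it uses String.hashCode() as a stand-in for Hadoop's Text.hashCode(), so the exact partition numbers differ from a real job, but the scattering behavior is the same):

    public class HashPartitionDemo {
        public static void main(String[] args) {
            String[] phones = {"13726230503", "13726238888", "13926435656", "13926251106"};
            int numReduceTasks = 5;
            for (String phone : phones) {
                //same formula as HashPartitioner above
                int partition = (phone.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
                System.out.println(phone + " -> partition " + partition);
            }
        }
    }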

      3. Therefore, to dispatch data by our own rule, we replace the default data-dispatch component by writing a CustomPartitioner that extends the abstract class Partitioner and returns our own partition number.

      4. Then register the custom partitioner on the job object: job.setPartitionerClass(CustomPartitioner.class)

      5. Finally, set a number of ReduceTasks that matches the custom partitioner's logic.

    3. Code: a custom Partitioner implementing the partition rule

    package cn.bigdata.hdfs.flowsum;
    import java.util.HashMap;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;
    /**
     * The type parameters of Partitioner<Text, FlowBean> are the map output kv types
     */
    public class ProvincePartitioner extends Partitioner<Text, FlowBean>{
        public static HashMap<String, Integer> proviceDict = new HashMap<String, Integer>();
        static { // five partitions: 0-3 for the prefixes below, plus 4 as the default
            proviceDict.put("136", 0);
            proviceDict.put("137", 1);
            proviceDict.put("138", 2);
            proviceDict.put("139", 3);
        }
        
        @Override
        public int getPartition(Text key, FlowBean value, int numPartitions) {
            String prefix = key.toString().substring(0, 3);
            Integer provinceId = proviceDict.get(prefix);
            return provinceId==null?4:provinceId;
        }
    }
    These two lines then go into the driver (the main method of FlowCount above):

    //register our custom data partitioner
    job.setPartitionerClass(ProvincePartitioner.class);
    //and set a matching number of ReduceTasks, one per partition
    job.setNumReduceTasks(5);
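    With this dictionary, for example, 13726230503 (prefix 137) goes to partition 1 and ends up in part-r-00001; 13926435656 (prefix 139) goes to partition 3 / part-r-00003; and 15013685858 (prefix 150, absent from the map) falls into the default partition 4 / part-r-00004.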

    Run the program: hadoop jar flowsum.jar cn.bigdata.hdfs.flowsum.FlowCount /wordcount/phoneFlum /wordcount/phoneFlumOut1

     

    This produces five partition files, part-r-00000 through part-r-00004 (the original post shows a screenshot of the output directory here).

     

    Note: if the number of ReduceTasks >= the number of partition numbers getPartition returns, the surplus ReduceTasks just produce a few empty output files part-r-000xx.

        If 1 < number of ReduceTasks < number of partition numbers, part of the partitioned data has nowhere to go and the job fails with an Exception.

        If the number of ReduceTasks == 1, then no matter how many partitions the map side produces, everything is handed to that single ReduceTask, and only one result file, part-r-00000, is produced.
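    The three cases, expressed as settings against the five-partition ProvincePartitioner above (a sketch; the "Illegal partition" message is what Hadoop's map-side output collector raises when getPartition returns a partition with no ReduceTask behind it):

    job.setNumReduceTasks(6); // > 5: runs fine, but part-r-00005 comes out empty
    job.setNumReduceTasks(3); // 1 < 3 < 5: partitions 3 and 4 are unroutable ->
                              // IOException "Illegal partition for ..." while collecting map output
    job.setNumReduceTasks(1); // single ReduceTask: the partitioner is effectively bypassed,
                              // everything lands in one part-r-00000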


    Requirement 3: Sort the statistics by total traffic in descending order
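    The post stops here without code for this requirement. Below is a minimal sketch of the standard approach, not the author's implementation (the names FlowCountSort, SortMapper, and SortReducer are mine): run a second job over the first job's output, make the bean the map output key so the shuffle sorts it, and change FlowBean from "implements Writable" to "implements WritableComparable<FlowBean>" with a compareTo that orders by total flow descending, e.g. return Long.compare(o.getSumFlow(), this.getSumFlow());

    package cn.bigdata.hdfs.flowsum;
    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    public class FlowCountSort {
        static class SortMapper extends Mapper<LongWritable, Text, FlowBean, Text> {
            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                //each input line is one result line of the first job: phone \t up \t down \t sum
                String[] fields = value.toString().split("\t");
                long upFlow = Long.parseLong(fields[1]);
                long dFlow = Long.parseLong(fields[2]);
                //the bean is the map output key, so the shuffle sorts by its compareTo
                context.write(new FlowBean(upFlow, dFlow), new Text(fields[0]));
            }
        }

        static class SortReducer extends Reducer<FlowBean, Text, Text, FlowBean> {
            @Override
            protected void reduce(FlowBean bean, Iterable<Text> phones, Context context)
                    throws IOException, InterruptedException {
                //keys arrive already sorted; swap back to <phone, bean> on output
                for (Text phone : phones) {
                    context.write(phone, bean);
                }
            }
        }
    }

    The driver is the same as FlowCount's main with the Mapper/Reducer classes and kv types swapped in; leaving the ReduceTask count at its default of 1 yields a single, globally sorted part-r-00000.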

     

     

