  • Spark: word count with the Java API

    Approach 1: reduceByKey

    Input data, word.txt:

    张三
    李四
    王五
    李四
    王五
    李四
    王五
    李四
    王五
    王五
    李四
    李四
    李四
    李四
    李四

    Code:

    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.api.java.function.Function2;
    import org.apache.spark.api.java.function.PairFunction;
    import org.apache.spark.rdd.RDD;
    import org.apache.spark.sql.SparkSession;
    import scala.Tuple2;
    
    public class HelloWord {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder().master("local[*]").appName("Spark").getOrCreate();
            final JavaSparkContext ctx = JavaSparkContext.fromSparkContext(spark.sparkContext());
    
            // Backslashes in the Windows path must be escaped in a Java string literal.
            RDD<String> rdd = spark.sparkContext().textFile("C:\\Users\\boco\\Desktop\\word.txt", 1);
            JavaRDD<String> javaRDD = rdd.toJavaRDD();
    
            JavaPairRDD<String, Integer> javaRDDMap = javaRDD.mapToPair(new PairFunction<String, String, Integer>() {
                public Tuple2<String, Integer> call(String s) {
                    return new Tuple2<String, Integer>(s, 1);
                }
            });
    
            JavaPairRDD<String, Integer> result = javaRDDMap.reduceByKey(new Function2<Integer, Integer, Integer>() {
                @Override
                public Integer call(Integer integer, Integer integer2) throws Exception {
                    return integer + integer2;
                }
            });
    
            System.out.println(result.collect());
        }
    }

    Output:

    [(张三,1), (李四,9), (王五,5)]
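
    The same job can be written more compactly with Java 8 lambdas in place of the anonymous PairFunction and Function2 classes. A minimal sketch, assuming the same word.txt path (the class name HelloWordLambda is just for illustration):

    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.sql.SparkSession;
    import scala.Tuple2;

    public class HelloWordLambda {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder().master("local[*]").appName("Spark").getOrCreate();
            JavaSparkContext ctx = JavaSparkContext.fromSparkContext(spark.sparkContext());

            // One name per line; each line is one record.
            JavaRDD<String> lines = ctx.textFile("C:\\Users\\boco\\Desktop\\word.txt", 1);

            // Map every line to (word, 1), then sum the counts per key.
            JavaPairRDD<String, Integer> pairs = lines.mapToPair(s -> new Tuple2<>(s, 1));
            JavaPairRDD<String, Integer> counts = pairs.reduceByKey((a, b) -> a + b);

            System.out.println(counts.collect());
            spark.stop();
        }
    }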

    Approach 2: Spark SQL

    Implementation with Spark SQL:

    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.types.DataTypes;
    import org.apache.spark.sql.types.StructField;
    import org.apache.spark.sql.types.StructType;
    
    import java.util.ArrayList;
    
    public class HelloWord {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder().master("local[*]").appName("Spark").getOrCreate();
            final JavaSparkContext ctx = JavaSparkContext.fromSparkContext(spark.sparkContext());
    
            // Backslashes in the Windows path must be escaped in a Java string literal.
            JavaRDD<Row> rows = spark.read().text("C:\\Users\\boco\\Desktop\\word.txt").toJavaRDD();
    
            ArrayList<StructField> fields = new ArrayList<StructField>();
            StructField field = null;
            field = DataTypes.createStructField("key", DataTypes.StringType, true);
            fields.add(field);
    
            StructType schema = DataTypes.createStructType(fields);
    
            Dataset<Row> ds = spark.createDataFrame(rows, schema);
    
            ds.createOrReplaceTempView("words");
    
            Dataset<Row> result = spark.sql("select key,count(0) as key_count from words group by key");
    
            result.show();
        }
    }

    Result:

    +---+---------+
    |key|key_count|
    +---+---------+
    | 王五|        5|
    | 李四|        9|
    | 张三|        1|
    +---+---------+
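
    The same aggregation can also be expressed through the DataFrame API instead of registering a temporary view. A minimal sketch, reusing the ds Dataset built above:

    // Group by the "key" column and count the rows per group (equivalent to the SQL above).
    Dataset<Row> grouped = ds.groupBy("key").count().withColumnRenamed("count", "key_count");
    grouped.show();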

    Approach 3: real-time stream processing with Spark Streaming

    Reference: http://spark.apache.org/docs/latest/streaming-programming-guide.html

    First, we create a JavaStreamingContext object, which is the main entry point for all streaming functionality. We create a local StreamingContext with two execution threads, and a batch interval of 1 second.

    import org.apache.spark.*;
    import org.apache.spark.api.java.function.*;
    import org.apache.spark.streaming.*;
    import org.apache.spark.streaming.api.java.*;
    import scala.Tuple2;
    
    // Create a local StreamingContext with two working thread and batch interval of 1 second
    SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount");
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(1));

    Using this context, we can create a DStream that represents streaming data from a TCP source, specified as hostname (e.g. localhost) and port (e.g. 9999).

    // Create a DStream that will connect to hostname:port, like localhost:9999
    JavaReceiverInputDStream<String> lines = jssc.socketTextStream("localhost", 9999);

    This lines DStream represents the stream of data that will be received from the data server. Each record in this stream is a line of text. Then, we want to split the lines by space into words.

    // Split each line into words
    JavaDStream<String> words = lines.flatMap(x -> Arrays.asList(x.split(" ")).iterator());

    flatMap is a DStream operation that creates a new DStream by generating multiple new records from each record in the source DStream. In this case, each line will be split into multiple words and the stream of words is represented as the words DStream. Note that we defined the transformation using a FlatMapFunction object. As we will discover along the way, there are a number of such convenience classes in the Java API that help define DStream transformations.
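
    For reference, the flatMap lambda above corresponds to this anonymous FlatMapFunction form (Spark 2.x signature, where call returns an Iterator); java.util.Arrays and java.util.Iterator are assumed to be imported in addition to the imports shown earlier:

    // Anonymous-class form of the flatMap lambda shown above.
    JavaDStream<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
        @Override
        public Iterator<String> call(String x) {
            return Arrays.asList(x.split(" ")).iterator();
        }
    });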

    Next, we want to count these words.

    // Count each word in each batch
    JavaPairDStream<String, Integer> pairs = words.mapToPair(s -> new Tuple2<>(s, 1));
    JavaPairDStream<String, Integer> wordCounts = pairs.reduceByKey((i1, i2) -> i1 + i2);
    
    // Print the first ten elements of each RDD generated in this DStream to the console
    wordCounts.print();

    The words DStream is further mapped (one-to-one transformation) to a DStream of (word, 1) pairs, using a PairFunction object. Then, it is reduced to get the frequency of words in each batch of data, using a Function2 object. Finally, wordCounts.print() will print a few of the counts generated every second.
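
    As an aside, DStreams also offer countByValue as a shorthand for this map-then-reduce pattern; a minimal sketch (note that it returns Long counts):

    // Same per-batch word frequencies in a single call; counts come back as Long.
    JavaPairDStream<String, Long> wordCountsByValue = words.countByValue();
    wordCountsByValue.print();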

    Note that when these lines are executed, Spark Streaming only sets up the computation it will perform after it is started, and no real processing has started yet. To start the processing after all the transformations have been set up, we finally call the start method.

    jssc.start();              // Start the computation
    jssc.awaitTermination();   // Wait for the computation to terminate

    The complete code can be found in the Spark Streaming example JavaNetworkWordCount.

    If you have already downloaded and built Spark, you can run this example as follows. You will first need to run Netcat (a small utility found in most Unix-like systems) as a data server by using

    $ nc -lk 9999

    Then, in a different terminal, you can start the example by using

    $ ./bin/run-example streaming.JavaNetworkWordCount localhost 9999

    Complete code:

    import java.util.Arrays;
    
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaDStream;
    import org.apache.spark.streaming.api.java.JavaPairDStream;
    import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    
    import scala.Tuple2;
    
    public class HelloWord {
        public static void main(String[] args) throws InterruptedException {
            // Create a local StreamingContext with a 60-second batch interval
            SparkConf conf = new SparkConf().setMaster("local[*]").setAppName("NetworkWordCount");
            JavaSparkContext jsc=new JavaSparkContext(conf);
            jsc.setLogLevel("WARN");
            JavaStreamingContext jssc = new JavaStreamingContext(jsc, Durations.seconds(60));
            
            // Create a DStream that will connect to hostname:port, like localhost:9999
            JavaReceiverInputDStream<String> lines = jssc.socketTextStream("xx.xx.xx.xx", 19999);
    
            // Split each line into words
            JavaDStream<String> words = lines.flatMap(x -> Arrays.asList(x.split(" ")).iterator());
    
            // Count each word in each batch
            JavaPairDStream<String, Integer> pairs = words.mapToPair(s -> new Tuple2<>(s, 1));
            JavaPairDStream<String, Integer> wordCounts = pairs.reduceByKey((i1, i2) -> i1 + i2);
    
            // Print the first ten elements of each RDD generated in this DStream to the
            // console
            wordCounts.print();
    
            jssc.start(); // Start the computation
            jssc.awaitTermination(); // Wait for the computation to terminate
        }
    }

    Test:

    [root@abced dx]# nc -lk 19999
    hellow wrd
    hello word
    hello word
    hello dkk
    hl
    hello
    hello
    hello word
    hello word
    hello java
    hello c@
    hello hadoop]
    hello spark
    hello word
    hello kafka
    hello c
    hello c#
    hello .net core
    net cre
    workd
    hle
    hello words
    hke hjh
    hek 23
    hel 23
    hl3 323
    hhk 68
    hke 84

    Program output:

    -------------------------------------------
    Time: 1533781920000 ms
    -------------------------------------------
    (c,1)
    (spark,1)
    (kafka,1)
    (c#,1)
    (hello,9)
    (java,1)
    (c@,1)
    (hadoop],1)
    (word,2)
    
    18/08/09 10:32:05 WARN RandomBlockReplicationPolicy: Expecting 1 replicas with only 0 peer/s.
    18/08/09 10:32:05 WARN BlockManager: Block input-0-1533781925200 replicated to only 0 peer(s) instead of 1 peers
    18/08/09 10:32:08 WARN RandomBlockReplicationPolicy: Expecting 1 replicas with only 0 peer/s.
    18/08/09 10:32:08 WARN BlockManager: Block input-0-1533781928000 replicated to only 0 peer(s) instead of 1 peers
    18/08/09 10:32:11 WARN RandomBlockReplicationPolicy: Expecting 1 replicas with only 0 peer/s.
    18/08/09 10:32:11 WARN BlockManager: Block input-0-1533781931200 replicated to only 0 peer(s) instead of 1 peers
    18/08/09 10:32:14 WARN RandomBlockReplicationPolicy: Expecting 1 replicas with only 0 peer/s.
    18/08/09 10:32:14 WARN BlockManager: Block input-0-1533781934600 replicated to only 0 peer(s) instead of 1 peers
    -------------------------------------------
    Time: 1533781980000 ms
    -------------------------------------------
    (hle,1)
    (words,1)
    (.net,1)
    (hello,2)
    (workd,1)
    (cre,1)
    (net,1)
    (core,1)
    
    18/08/09 10:33:08 WARN RandomBlockReplicationPolicy: Expecting 1 replicas with only 0 peer/s.
    18/08/09 10:33:08 WARN BlockManager: Block input-0-1533781988000 replicated to only 0 peer(s) instead of 1 peers
    18/08/09 10:33:11 WARN RandomBlockReplicationPolicy: Expecting 1 replicas with only 0 peer/s.
    18/08/09 10:33:11 WARN BlockManager: Block input-0-1533781991000 replicated to only 0 peer(s) instead of 1 peers
    18/08/09 10:33:14 WARN RandomBlockReplicationPolicy: Expecting 1 replicas with only 0 peer/s.
    18/08/09 10:33:14 WARN BlockManager: Block input-0-1533781994200 replicated to only 0 peer(s) instead of 1 peers
    18/08/09 10:33:17 WARN RandomBlockReplicationPolicy: Expecting 1 replicas with only 0 peer/s.
    18/08/09 10:33:17 WARN BlockManager: Block input-0-1533781997400 replicated to only 0 peer(s) instead of 1 peers
    18/08/09 10:33:20 WARN RandomBlockReplicationPolicy: Expecting 1 replicas with only 0 peer/s.
    18/08/09 10:33:20 WARN BlockManager: Block input-0-1533782000400 replicated to only 0 peer(s) instead of 1 peers
    18/08/09 10:33:25 WARN RandomBlockReplicationPolicy: Expecting 1 replicas with only 0 peer/s.
    18/08/09 10:33:25 WARN BlockManager: Block input-0-1533782005600 replicated to only 0 peer(s) instead of 1 peers
    -------------------------------------------
    Time: 1533782040000 ms
    -------------------------------------------
    (68,1)
    (hhk,1)
    (hek,1)
    (hel,1)
    (84,1)
    (hjh,1)
    (23,2)
    (hke,2)
    (323,1)
    (hl3,1)

    Conclusion: the data is processed batch by batch and the counts are not accumulated across batches; each batch's statistics cover only the data received in that batch, not the data from earlier batches.
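
    If a running total across batches is wanted instead, Spark Streaming provides stateful operators such as updateStateByKey. A minimal sketch that could be slotted into the program above, assuming a checkpoint directory (the /tmp/spark-checkpoint path is only an example) and imports for java.util.List and org.apache.spark.api.java.Optional:

    // updateStateByKey keeps per-key state across batches; it requires a checkpoint directory.
    jssc.checkpoint("/tmp/spark-checkpoint"); // example path only

    JavaPairDStream<String, Integer> runningCounts = pairs.updateStateByKey(
            (List<Integer> newValues, Optional<Integer> state) -> {
                int sum = state.isPresent() ? state.get() : 0; // previous total, or 0 for a new word
                for (Integer v : newValues) {
                    sum += v; // add this batch's occurrences
                }
                return Optional.of(sum);
            });

    runningCounts.print(); // prints running totals instead of per-batch counts

    With this variant, the print output would show totals that keep growing as new batches arrive, rather than resetting every batch interval.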
