  • 3. Scala-Spark wordCount example

    1. Create a Maven project

    2. Dependencies and plugins

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>2.1.1</version>
        </dependency>
    </dependencies>
        <build>
            <finalName>wordCount</finalName>
            <plugins>
                <plugin>
                    <groupId>net.alchim31.maven</groupId>
                    <artifactId>scala-maven-plugin</artifactId>
                    <version>4.2.0</version>
                    <executions>
                        <execution>
                            <goals>
                                <goal>compile</goal>
                                <goal>testCompile</goal>
                            </goals>
                        </execution>
                    </executions>
                </plugin>
    
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-assembly-plugin</artifactId>
                    <version>3.1.0</version>
                    <configuration>
                        <archive>
                            <manifest>
                            <mainClass>com.atgu.bigdata.spark.wordCount</mainClass>
                            </manifest>
                        </archive>
                        <descriptorRefs>
                            <descriptorRef>jar-with-dependencies</descriptorRef>
                        </descriptorRefs>
                    </configuration>
                    <executions>
                        <execution>
                            <id>make-assembly</id>
                            <phase>package</phase>
                            <goals>
                                <goal>single</goal>
                            </goals>
                        </execution>
                    </executions>
                </plugin>
            </plugins>
        </build>
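
    With this configuration, mvn package should compile the Scala sources via the scala-maven-plugin (bound to the compile and testCompile goals) and, through the maven-assembly-plugin bound to the package phase, additionally produce a self-contained wordCount-jar-with-dependencies.jar that bundles Spark and the transitive dependencies, with the manifest main class pointing at the wordCount object defined below.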

    3. wordCount example

    package com.atgu.bigdata.spark

    import org.apache.spark._
    import org.apache.spark.rdd.RDD

    object wordCount extends App {
      // Local mode
      // 1. Create the SparkConf object
      val conf: SparkConf = new SparkConf().setMaster("local[*]").setAppName("wordCount")
      // 2. Create the Spark context
      val sc: SparkContext = new SparkContext(conf)
      // 3. Read the input file
      val lines: RDD[String] = sc.textFile("file:///opt/data/1.txt")
      // 4. Split each line into words
      val words: RDD[String] = lines.flatMap(_.split(" "))
      // words.collect().foreach(println)
      // 5. Map each word to a (word, 1) pair
      val keycounts: RDD[(String, Int)] = words.map((_, 1))
      // 6. Sum the counts for each word
      val results: RDD[(String, Int)] = keycounts.reduceByKey(_ + _)
      // 7. Collect the results to the driver and print them
      val res: Array[(String, Int)] = results.collect()
      res.foreach(println)

      // 8. Release the SparkContext
      sc.stop()
    }
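
    The example above hard-codes local[*] and a local file path, which is fine for running inside the IDE. For submitting the assembled jar to a cluster, a minimal sketch of a variant (not from the original post; the object name wordCountCluster and the use of args(0) as the input path are illustrative assumptions) could look like this:

    package com.atgu.bigdata.spark

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.rdd.RDD

    // Hypothetical cluster-friendly variant: the master URL is supplied by
    // spark-submit rather than hard-coded, and the input path comes from the
    // first command-line argument.
    object wordCountCluster {
      def main(args: Array[String]): Unit = {
        val conf: SparkConf = new SparkConf().setAppName("wordCount")
        val sc: SparkContext = new SparkContext(conf)

        val results: RDD[(String, Int)] = sc.textFile(args(0))
          .flatMap(_.split(" "))
          .map((_, 1))
          .reduceByKey(_ + _)

        results.collect().foreach(println)
        sc.stop()
      }
    }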

     4. Project directory structure
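
     Assuming the standard Maven/Scala source layout and the package declared in the source file, the structure is roughly:

     pom.xml
     src/
       main/
         scala/
           com/atgu/bigdata/spark/
             wordCount.scala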

  • Original post: https://www.cnblogs.com/knighterrant/p/13780982.html