  • Spark wholeTextFiles() for small files

    Scenario: data files are pushed to us in large numbers, each only 10-30 MB in size.

    Spark normally reads HDFS with textFile(), but in this situation textFile() by default produces one partition per file (each file is smaller than an HDFS block), creating a huge number of tasks.
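
    To see the problem concretely, here is a minimal sketch. The path and file count are hypothetical, and sc is a JavaSparkContext created as in the sample code further down:

    // Hypothetical directory holding ~1000 files, each 10-30 MB (well under one HDFS block).
    JavaRDD<String> lines = sc.textFile("hdfs://namenode:8020/data/small_files");
    // textFile() creates at least one partition per file here, so ~1000 tasks get scheduled.
    System.out.println(lines.getNumPartitions());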

    For small files like these, Spark provides a dedicated API, wholeTextFiles(), intended for processing large numbers of small files. Its source is as follows:

      /**
       * Read a directory of text files from HDFS, a local file system (available on all nodes), or any
       * Hadoop-supported file system URI. Each file is read as a single record and returned in a
       * key-value pair, where the key is the path of each file, the value is the content of each file.
       *
       * <p> For example, if you have the following files:
       * {{{
       *   hdfs://a-hdfs-path/part-00000
       *   hdfs://a-hdfs-path/part-00001
       *   ...
       *   hdfs://a-hdfs-path/part-nnnnn
       * }}}
       *
       * Do `val rdd = sparkContext.wholeTextFiles("hdfs://a-hdfs-path")`,
       *
       * <p> then `rdd` contains
       * {{{
       *   (a-hdfs-path/part-00000, its content)
       *   (a-hdfs-path/part-00001, its content)
       *   ...
       *   (a-hdfs-path/part-nnnnn, its content)
       * }}}
       *
       * @note Small files are preferred, large file is also allowable, but may cause bad performance.
       * @note On some filesystems, `.../path/&#42;` can be a more efficient way to read all files
       *       in a directory rather than `.../path/` or `.../path`
       * @note Partitioning is determined by data locality. This may result in too few partitions
       *       by default.
       *
       * @param path Directory to the input data files, the path can be comma separated paths as the
       *             list of inputs.
       * @param minPartitions A suggestion value of the minimal splitting number for input data.
       * @return RDD representing tuples of file path and the corresponding file content
       */
      def wholeTextFiles(
          path: String,
          minPartitions: Int = defaultMinPartitions): RDD[(String, String)] = withScope {
        assertNotStopped()
        val job = NewHadoopJob.getInstance(hadoopConfiguration)
        // Use setInputPaths so that wholeTextFiles aligns with hadoopFile/textFile in taking
        // comma separated files as input. (see SPARK-7155)
        NewFileInputFormat.setInputPaths(job, path)
        val updateConf = job.getConfiguration
        new WholeTextFileRDD(
          this,
          classOf[WholeTextFileInputFormat],
          classOf[Text],
          classOf[Text],
          updateConf,
          minPartitions).map(record => (record._1.toString, record._2.toString)).setName(path)
      }

    wholeTextFiles() takes a path as its input parameter, and multiple paths may be supplied, separated by commas. Reading with wholeTextFiles() produces Tuple2 records: the first element is the file's full path, and the second element is the file's text content. For example, take a file with two lines of data:
      jack,1011,shanghai

      kevin,2022,beijing

    The returned content is a single string in which the source file's lines are separated by the newline character \n, i.e.: jack,1011,shanghai\nkevin,2022,beijing
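
    For instance, if those two lines sit in a single file, a quick check along these lines (a sketch; the directory and file name are made up) shows the shape of the result:

    JavaPairRDD<String, String> files = sc.wholeTextFiles("hdfs://namenode:8020/demo");
    for (Tuple2<String, String> t : files.collect()) {
        System.out.println(t._1); // hdfs://namenode:8020/demo/district_a.txt
        System.out.println(t._2); // jack,1011,shanghai\nkevin,2022,beijing
    }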

    The number of partitions can be customized. If it is not explicitly specified, the default is defined as follows:

    def defaultMinPartitions: Int = math.min(defaultParallelism, 2)

    That is, without an explicit partition count, the data is in most cases processed with just 2 partitions.
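
    If two partitions are too few for the downstream work, the minPartitions hint can be passed explicitly. A sketch, where the value 16 is arbitrary; note it is only a suggestion, since the actual split also depends on data locality and file sizes:

    // Ask for at least 16 partitions; Spark may still settle on a different number.
    JavaPairRDD<String, String> rdd =
            sc.wholeTextFiles("hdfs://master01.xx.xx.cn:8020/kong/capacityLusunData_bak", 16);
    System.out.println(rdd.getNumPartitions());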

    Sample code:

    The processing logic can be understood like this: each small file holds all the road data for one district of a city (real data is not organized this way, of course; no city has tens or hundreds of thousands of districts). The file name is the district's name, and the file content is road names and related data; the job appends the district name to every line of road data.

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.api.java.function.Function;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.util.SizeEstimator;
    import scala.Tuple2;
    
    public class TestWholeTextFiles {
    
        public static void main(String[] args) {
            SparkConf conf = new SparkConf();
            SparkSession spark = SparkSession
                    .builder()
                    .appName("TestWholeTextFiles")
                    .master("local")
                    .config(conf)
                    .enableHiveSupport()
                    .getOrCreate();
            JavaSparkContext sc = JavaSparkContext.fromSparkContext(spark.sparkContext());
            JavaPairRDD<String, String> javaPairRDD =
                    sc.wholeTextFiles("hdfs://master01.xx.xx.cn:8020/kong/capacityLusunData_bak");
    
            System.out.println("javaPairRDD分区数:"+javaPairRDD.getNumPartitions());//2
            JavaRDD<String> map = javaPairRDD.map((Function<Tuple2<String, String>, String>) v1 -> {
                int index = v1._1.lastIndexOf("/");
                String road_id = v1._1.substring(index+1).split("\.")[0];
                return v1._2.replace("
    ", "\|"+road_id + "
    ");
            });
            System.out.println("mapRDD分区数:"+map.getNumPartitions());//2
            map.saveAsTextFile("hdfs://master01.xx.xx.cn:8020/kong/data/testwholetextfiles/out");
        }
    }
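
    One practical note on the sample above: the two input partitions flow through map() unchanged, so the job writes only two output files, and each whole file is held in memory as a single record. If more write parallelism is wanted, the result can be repartitioned before saving (a sketch; the value 8 is arbitrary):

    map.repartition(8)
       .saveAsTextFile("hdfs://master01.xx.xx.cn:8020/kong/data/testwholetextfiles/out");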


  • Original article: https://www.cnblogs.com/zz-ksw/p/12221219.html