  • Spark Structured Streaming: schemes for partitioning output data by a data field

    Scheme 1 (using a ForeachWriter sink):

    import java.io.{File, FileWriter}

    import org.apache.commons.io.FileUtils
    import org.apache.spark.sql.{ForeachWriter, Row}
    import org.apache.spark.sql.streaming.Trigger

    val query = wordCounts.writeStream
      .trigger(Trigger.ProcessingTime("5 seconds"))
      .outputMode("complete")
      .foreach(new ForeachWriter[Row] {
          var fileWriter: FileWriter = _

          // Called once per task partition and epoch, before any rows arrive.
          override def open(partitionId: Long, version: Long): Boolean = {
            FileUtils.forceMkdir(new File(s"/tmp/example/${partitionId}"))
            fileWriter = new FileWriter(new File(s"/tmp/example/${partitionId}/temp"))
            true
          }

          // Called once per row; append a newline so records stay separate.
          override def process(value: Row): Unit = {
            fileWriter.append(value.toSeq.mkString(",")).append("\n")
          }

          // Called when the epoch finishes, whether it succeeded or failed.
          override def close(errorOrNull: Throwable): Unit = {
            fileWriter.close()
          }
        }).start()
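
    The writer above splits the output only by Spark's task partitionId, not by a data field. To reach the article's goal with this sink, the target directory can instead be derived from a column value inside process. Below is a minimal sketch under assumptions not in the original: the rows carry a string column named "day" (hypothetical), and /tmp/example is writable on every executor.

    import java.io.{File, FileWriter}

    import org.apache.commons.io.FileUtils
    import org.apache.spark.sql.{ForeachWriter, Row}

    // Sketch: route each row to a Hive-style directory named after its "day"
    // field (hypothetical column), keeping one open file per value per task.
    class FieldPartitionedWriter extends ForeachWriter[Row] {
      private var writers: Map[String, FileWriter] = Map.empty
      private var partitionId: Long = 0L

      override def open(partitionId: Long, version: Long): Boolean = {
        this.partitionId = partitionId
        writers = Map.empty
        true
      }

      override def process(value: Row): Unit = {
        val day = value.getAs[String]("day") // the field that drives the layout
        val writer = writers.getOrElse(day, {
          val dir = new File(s"/tmp/example/day=$day")
          FileUtils.forceMkdir(dir)
          val w = new FileWriter(new File(dir, s"part-$partitionId"), true) // append mode
          writers += day -> w
          w
        })
        writer.append(value.toSeq.mkString(",")).append("\n")
      }

      override def close(errorOrNull: Throwable): Unit = {
        writers.values.foreach(_.close())
      }
    }

    Unlike partitionBy, this gives full control over the directory layout, but it forgoes the built-in file sink's exactly-once file semantics; the version argument of open is what a production writer would use to skip retried epochs.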

    Scheme 2 (ds.writeStream().partitionBy("field")):

    import org.apache.spark.sql.streaming.Trigger

    val query =
      streamingSelectDF
        .writeStream
        .format("parquet")
        .option("path", "/mnt/sample/test-data")
        .option("checkpointLocation", "/mnt/sample/check")
        .partitionBy("zip", "day")
        .trigger(Trigger.ProcessingTime("25 seconds"))
        .start()
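
    With partitionBy("zip", "day"), each micro-batch lands under Hive-style directories such as .../zip=.../day=.../, so a later batch job can prune whole directories instead of scanning files. A short sketch of reading the output back (the filter values are placeholders, not from the original):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("read-partitioned").getOrCreate()

    // Spark rediscovers zip and day as partition columns from the directory
    // names; the filter is answered by skipping directories, not reading rows.
    val df = spark.read.parquet("/mnt/sample/test-data")
    df.where("zip = '94107' AND day = '2018-10-01'").show()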

    Java code:

            // Required imports (besides the project's own classes):
            // import java.util.concurrent.TimeUnit;
            // import org.apache.spark.sql.streaming.Trigger;

            // Write new data to Parquet files
            // (the format can also be "orc", "json", "csv", etc.)
            String hdfsFileFormat = SparkHelper.getInstance().getLTEBaseSaveHdfsFileFormat();
            String queryName = "save" + this.getTopicEncodeName(topicName) + "DataToHdfs";
            String saveHdfsPath = SparkHelper.getInstance().getLTEBaseSaveHdfsPath();
            // The output path is partitioned by scan_start_time (format: yyyyMMddHH0000)
            dsParsed.writeStream()
                    .format(hdfsFileFormat)
                    .option("path", saveHdfsPath + topicName + "/")
                    .option("checkpointLocation", this.checkPointPath + queryName + "/")
                    .outputMode("append")
                    .partitionBy("scan_start_time")
                    .trigger(Trigger.ProcessingTime(5, TimeUnit.MINUTES))
                    .start();
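
    For partitionBy("scan_start_time") to work, scan_start_time must already exist as a column on dsParsed before the sink sees it. A minimal sketch (in Scala; the names dsRaw and "event_time" are assumptions, not from the original code) of deriving the hourly yyyyMMddHH0000 value from an event timestamp:

    import org.apache.spark.sql.functions.{col, date_format}

    // Assumption: dsRaw carries a timestamp column "event_time" (hypothetical).
    // Truncate to the hour and append a literal "0000" so every row in the
    // same hour lands in the same scan_start_time=... directory.
    val dsParsed = dsRaw.withColumn(
      "scan_start_time",
      date_format(col("event_time"), "yyyyMMddHH'0000'"))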

    For more approaches, see "readStream/writeStream input and output, and in-flight ETL, in Spark Structured Streaming".

  • Original post: https://www.cnblogs.com/yy3b2007com/p/9776876.html