  • Spring Batch Framework – Introduction Chapter (Part 2)

    Extract, Transform, and Load (ETL)

    Briefly stated, ETL is a process in the database and data-warehousing world that performs the following steps:

    1. Extracts data from an external data source
    2. Transforms the extracted data to match a specific purpose
    3. Loads the transformed data into a data target, such as a database or data warehouse

    Many products, both free and commercial, can help create ETL processes. This is a bigger topic than we can address here, but it isn't always as simple as these three steps. Writing an ETL process can present its own set of challenges involving parallel processing, rerunnability, and recoverability. The ETL community has developed its own set of best practices to meet these and other requirements.

    For the purpose of our discussion, this ETL process is a black box; it could be implemented with an ETL tool (like Talend) or even with another Spring Batch job.
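    To make that last idea concrete, here is a hedged sketch of an ETL process implemented as another Spring Batch job: three steps chained in sequence. The job and step names and the builder factory usage are illustrative assumptions, not code from this chapter.

        import org.springframework.batch.core.Job;
        import org.springframework.batch.core.Step;
        import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
        import org.springframework.context.annotation.Bean;
        import org.springframework.context.annotation.Configuration;

        // Hypothetical sketch: ETL as a Spring Batch job with three sequential
        // steps. The step beans (extractStep, transformStep, loadStep) are
        // assumed to be defined elsewhere.
        @Configuration
        public class EtlJobConfig {

            @Bean
            public Job etlJob(JobBuilderFactory jobs,
                              Step extractStep, Step transformStep, Step loadStep) {
                return jobs.get("etlJob")
                        .start(extractStep)   // 1. extract from the external data source
                        .next(transformStep)  // 2. transform for the target purpose
                        .next(loadStep)       // 3. load into the database or warehouse
                        .build();
            }
        }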

    Spring Batch includes many ready-to-use components to read from and write to data stores like files and databases.

    Chunk processing is particularly well suited to handling large data operations because a job handles items in small chunks instead of processing them all at once. Practically speaking, a large file won't be loaded in memory; instead it's streamed, which is more efficient in terms of memory consumption. Chunk processing allows more flexibility to manage the data flow in a job. Spring Batch also handles transactions and errors around read and write operations.
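    As a rough mental model (a sketch of the behavior just described, not the framework's actual source code), each chunk is read item by item, optionally processed, and then written and committed in a single transaction:

        import java.util.ArrayList;
        import java.util.List;

        import org.springframework.batch.item.ItemProcessor;
        import org.springframework.batch.item.ItemReader;
        import org.springframework.batch.item.ItemWriter;

        // Conceptual sketch of one chunk-oriented transaction. Items are
        // streamed one at a time, buffered up to the commit interval, then
        // written together so there is one commit per chunk.
        public final class ChunkLoopSketch {

            static <I, O> void processOneChunk(ItemReader<I> reader,
                                               ItemProcessor<I, O> processor,
                                               ItemWriter<O> writer,
                                               int commitInterval) throws Exception {
                List<O> outputs = new ArrayList<>();
                for (int i = 0; i < commitInterval; i++) {
                    I item = reader.read();   // returns null when input is exhausted
                    if (item == null) {
                        break;
                    }
                    outputs.add(processor.process(item));
                }
                writer.write(outputs);        // one write, one commit per chunk
            }
        }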

    Spring Batch provides the FlatFileItemReader class to read records from a flat file. To use a FlatFileItemReader, you need to configure some Spring beans and implement a component that creates domain objects from what the FlatFileItemReader reads; Spring Batch will handle the rest.
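    Here is a hedged sketch of that wiring in Java. The Product domain class, the file path, and the column names are assumptions made for illustration; only FlatFileItemReader and its collaborators come from Spring Batch.

        import java.math.BigDecimal;

        import org.springframework.batch.item.file.FlatFileItemReader;
        import org.springframework.batch.item.file.mapping.DefaultLineMapper;
        import org.springframework.batch.item.file.transform.DelimitedLineTokenizer;
        import org.springframework.core.io.FileSystemResource;

        public class ProductReaderFactory {

            // Minimal stand-in domain class (an assumption, not from the text).
            public static class Product {
                final String id;
                final String name;
                final BigDecimal price;

                Product(String id, String name, BigDecimal price) {
                    this.id = id;
                    this.name = name;
                    this.price = price;
                }
            }

            public static FlatFileItemReader<Product> productReader() {
                DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer();
                tokenizer.setNames("id", "name", "price");  // columns of each record

                DefaultLineMapper<Product> lineMapper = new DefaultLineMapper<>();
                lineMapper.setLineTokenizer(tokenizer);
                // The component you implement: map a tokenized line to a domain object.
                lineMapper.setFieldSetMapper(fieldSet -> new Product(
                        fieldSet.readString("id"),
                        fieldSet.readString("name"),
                        fieldSet.readBigDecimal("price")));

                FlatFileItemReader<Product> reader = new FlatFileItemReader<>();
                reader.setResource(new FileSystemResource("work/products.txt"));
                reader.setLineMapper(lineMapper);
                return reader;
            }
        }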

    Choosing a chunk size and commit interval

    First, the size of a chunk and the commit interval are the same thing! Second, there's no definitive value to choose. Our recommendation is a value between 10 and 200. Too small a chunk size creates too many transactions, which is costly and makes the job run slowly. Too large a chunk size makes transactional resources, such as databases, run slowly too, because a database must be able to roll back operations. The best value for the commit interval depends on many factors: data, processing, nature of the resources, and so on. The commit interval is a parameter in Spring Batch, so don't hesitate to change it to find the most appropriate value for your jobs.
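    Because the commit interval is just a parameter, one common approach (sketched below with an assumed property name and a default of 100) is to externalize it so it can be tuned without recompiling:

        import org.springframework.batch.core.Step;
        import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
        import org.springframework.batch.item.ItemReader;
        import org.springframework.batch.item.ItemWriter;
        import org.springframework.beans.factory.annotation.Value;
        import org.springframework.context.annotation.Bean;
        import org.springframework.context.annotation.Configuration;

        // Hedged sketch: the chunk size (commit interval) read from a property.
        // "Product" is the hypothetical domain class from the reader sketch above.
        @Configuration
        public class ImportStepConfig {

            @Bean
            public Step importStep(StepBuilderFactory steps,
                                   ItemReader<Product> reader,
                                   ItemWriter<Product> writer,
                                   @Value("${import.commit-interval:100}") int commitInterval) {
                return steps.get("importStep")
                        // chunk size and commit interval are the same setting
                        .<Product, Product>chunk(commitInterval)
                        .reader(reader)
                        .writer(writer)
                        .build();
            }
        }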

    Decompressing a file isn't a read-write step, but Spring Batch is flexible enough to implement such a task as part of a job. A 1-GB flat file can compress to 100 MB, which is a more reasonable size for file transfers over the internet.

    Note that you could encrypt the file as well, ensuring that no one could read the product data if the file were intercepted during transfer. The encryption could be done before the compression or as part of it. Spring Batch provides an extension point to handle processing in a batch process step: the Tasklet. You implement a Tasklet that decompresses a ZIP archive into its source flat file.
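    A minimal sketch of such a Tasklet follows. The archive and target paths are hard-coded assumptions for brevity (a real job would pass them in as job parameters), and a flat archive layout is assumed:

        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.nio.file.StandardCopyOption;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipInputStream;

        import org.springframework.batch.core.StepContribution;
        import org.springframework.batch.core.scope.context.ChunkContext;
        import org.springframework.batch.core.step.tasklet.Tasklet;
        import org.springframework.batch.repeat.RepeatStatus;

        public class DecompressTasklet implements Tasklet {

            @Override
            public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext)
                    throws Exception {
                Path archive = Paths.get("input/products.zip");  // assumed location
                Path targetDir = Paths.get("work");              // assumed destination
                Files.createDirectories(targetDir);
                try (ZipInputStream zip = new ZipInputStream(Files.newInputStream(archive))) {
                    ZipEntry entry;
                    while ((entry = zip.getNextEntry()) != null) {
                        // Stream each entry to disk; the file is never held in memory.
                        Files.copy(zip, targetDir.resolve(entry.getName()),
                                   StandardCopyOption.REPLACE_EXISTING);
                    }
                }
                return RepeatStatus.FINISHED;  // the tasklet ran once and is done
            }
        }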

    How does a job refer to the job repository?

    You may have noticed that we say a job needs the job repository to run, but we don't make any reference to the job repository bean in the job configuration. The XML job element can have its job-repository attribute refer to a job repository bean. This attribute isn't mandatory, because by default the job uses a jobRepository bean. As long as you declare a jobRepository bean of type JobRepository, you don't need to explicitly refer to it in your job configuration.
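    For example, here is a hedged Java-configuration sketch of the same default: the factory class and setters come from Spring Batch, while the surrounding class is illustrative. Because the bean is named jobRepository, jobs find it without any explicit reference.

        import javax.sql.DataSource;

        import org.springframework.batch.core.repository.JobRepository;
        import org.springframework.batch.core.repository.support.JobRepositoryFactoryBean;
        import org.springframework.context.annotation.Bean;
        import org.springframework.context.annotation.Configuration;
        import org.springframework.transaction.PlatformTransactionManager;

        @Configuration
        public class RepositoryConfig {

            // The bean is named "jobRepository" after the method, so jobs use it
            // by default without an explicit job-repository reference.
            @Bean
            public JobRepository jobRepository(DataSource dataSource,
                                               PlatformTransactionManager transactionManager)
                    throws Exception {
                JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
                factory.setDataSource(dataSource);
                factory.setTransactionManager(transactionManager);
                factory.afterPropertiesSet();
                return factory.getObject();
            }
        }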


  • Original article: https://www.cnblogs.com/snake-hand/p/3184559.html