PipelineWise illustrates the power of Singer

    Stitch is based on Singer, an open source standard for moving data between databases, web APIs, files, queues, and just about anything else. Because it's open source, anyone can use Singer to write data extraction and loading scripts or more comprehensive utilities. TransferWise, the company I work for, used Singer to create a data pipeline framework called PipelineWise that replicates data from multiple sources to multiple destinations.
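    A minimal Singer tap is just a script that writes newline-delimited JSON messages to stdout: SCHEMA to describe a stream, RECORD for each row, and STATE to checkpoint progress. The sketch below is illustrative only — the "users" stream, its fields, and the bookmark value are made up — but the message shapes follow the Singer specification.

        import json
        import sys

        def emit(message):
            # Singer messages are newline-delimited JSON written to stdout
            sys.stdout.write(json.dumps(message) + "\n")
            sys.stdout.flush()

        # Describe the stream (hypothetical "users" stream)
        emit({
            "type": "SCHEMA",
            "stream": "users",
            "key_properties": ["id"],
            "schema": {
                "type": "object",
                "properties": {
                    "id": {"type": "integer"},
                    "email": {"type": "string"},
                    "updated_at": {"type": "string", "format": "date-time"},
                },
            },
        })

        # Emit one message per row extracted from the source
        emit({
            "type": "RECORD",
            "stream": "users",
            "record": {"id": 1, "email": "jane@example.com", "updated_at": "2019-09-16T00:00:00Z"},
        })

        # Checkpoint progress so the next run can resume where this one stopped
        emit({
            "type": "STATE",
            "value": {"bookmarks": {"users": {"updated_at": "2019-09-16T00:00:00Z"}}},
        })

    A target reads the same messages on stdin, so a tap and a target compose with an ordinary Unix pipe, roughly of the form some-tap --config tap.json | some-target --config target.json.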

    TransferWise uses more than a hundred microservices, which means we have hundreds of different types of data sources (MySQL, PostgreSQL, Kafka, Zendesk, Jira, etc.). We wanted to create a centralised analytics data store that could hold data from all of our sources, with due attention paid to security and scalability. We wanted to use change data capture (CDC) wherever possible to keep lag low. In addition, our solution had to:

    • Apply schema changes automatically
    • Avoid vendor lock-in — we wanted access to the source code to develop new features and fix issues quickly
    • Keep configuration as code (a rough illustration follows this list)
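    To make the last requirement concrete: every source-to-target mapping lives in version-controlled configuration files rather than in a vendor UI, so changes go through code review like any other change. The snippet below is only a rough Python analogue of that idea — the keys and values are hypothetical, not the actual PipelineWise configuration format, which is documented separately.

        # Hypothetical, illustrative "configuration as code" entry; not the real
        # PipelineWise configuration schema.
        FRAUD_DB_SOURCE = {
            "id": "fraud_service_db",           # unique name for this data source
            "tap": "tap-mysql",                 # Singer tap used for extraction
            "target": "snowflake_analytics",    # destination defined elsewhere in the repo
            "replication_method": "LOG_BASED",  # CDC, where the source supports it
            "tables": ["transactions", "chargebacks"],
        }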

    We looked at traditional ETL tools, commercial replication tools, and Kafka streaming ETL. None of them met all of our needs. (You can read more details in my post on Medium.)

    After several months we found the Singer specification and realised that we could get to a solution more quickly by building on this great work.

    A data pipeline is born

    Our analytics platform team created PipelineWise as an experiment in close cooperation with our data analysts and some of the product teams that use the data. It proved to be successful — PipelineWise now meets all of our initial requirements. We use it to replicate hundreds of gigabytes of data every day from 120 microservices, 1,500+ tables, and a bunch of external tools into our Snowflake data warehouse, with only minutes of lag.

    [PipelineWise console screenshot — monitoring with Grafana: replicating 120 data sources and 1,500+ tables into Snowflake with PipelineWise on three c5.2xlarge EC2 instances]

    Like any tool, PipelineWise has limitations:

    • Not real-time: The currently supported target connectors are microbatch-oriented. We have to load data from S3 via the COPY command into Snowflake or Amazon Redshift because individual INSERT statements are inefficient. Creating these batches adds an extra layer to the process, so replication is not real-time. The replication lag from source to target is between 5 and 30 minutes, depending on the data source. (A sketch of this microbatch load pattern follows this list.)
    • Very active transactional tables: PipelineWise tries to do parallel processing wherever possible. Microbatches are created in parallel as well, one batch per table, but currently we can't parallelise the creation of a single batch. This means that replicating extremely large tables that receive millions of INSERTs and UPDATEs (but no DELETEs) can be slow when the CDC replication method is enabled. In this case key-based incremental replication is faster and still reliable, as there are no deleted rows in the source. (A sketch of this approach also follows the list.)
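    To illustrate the microbatch pattern from the first point above: rows are buffered, written to a file, staged on S3, and then loaded with a single COPY statement instead of row-by-row INSERTs. This is only a bare sketch — the bucket, table, credentials, and file layout are placeholders, and the real target connectors also handle schema changes, encryption, and retries.

        import csv
        import boto3
        import snowflake.connector

        def flush_batch(rows, batch_id):
            """Write one microbatch to CSV, stage it on S3, then COPY it into Snowflake."""
            local_file = f"/tmp/users_batch_{batch_id}.csv"
            s3_key = f"pipelinewise/users/{batch_id}.csv"

            # 1. Dump the buffered rows to a local CSV file
            with open(local_file, "w", newline="") as f:
                csv.writer(f).writerows(rows)

            # 2. Stage the file on S3 (placeholder bucket name)
            boto3.client("s3").upload_file(local_file, "example-staging-bucket", s3_key)

            # 3. Load the whole file with one COPY instead of thousands of INSERTs
            conn = snowflake.connector.connect(
                account="example_account", user="loader", password="***",
                warehouse="LOAD_WH", database="ANALYTICS", schema="PUBLIC",
            )
            try:
                conn.cursor().execute(
                    "COPY INTO analytics.public.users "
                    "FROM 's3://example-staging-bucket/pipelinewise/users/' "
                    "CREDENTIALS = (AWS_KEY_ID='...' AWS_SECRET_KEY='...') "
                    "FILE_FORMAT = (TYPE = CSV) "
                    f"FILES = ('{batch_id}.csv')"
                )
            finally:
                conn.close()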
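    And for the second point, key-based incremental replication boils down to remembering the highest replication-key value seen so far and selecting only newer rows on the next run. Another rough sketch, assuming a MySQL source accessed with pymysql and a hypothetical users table keyed on updated_at:

        import json
        import pymysql  # assumption: a MySQL source; any DB-API driver works the same way

        def incremental_sync(conn, bookmark_file="state.json"):
            """Yield only rows whose replication key advanced since the last run."""
            # Load the last bookmark (e.g. a persisted Singer STATE value)
            try:
                with open(bookmark_file) as f:
                    last_seen = json.load(f)["users_updated_at"]
            except FileNotFoundError:
                last_seen = "1970-01-01 00:00:00"

            cur = conn.cursor()
            # Hypothetical table and replication key column
            cur.execute(
                "SELECT id, email, updated_at FROM users "
                "WHERE updated_at > %s ORDER BY updated_at",
                (last_seen,),
            )
            for row in cur:
                last_seen = str(row[2])
                yield row  # a real tap would emit this as a Singer RECORD message
            cur.close()

            # Persist the new bookmark so the next run resumes from here
            with open(bookmark_file, "w") as f:
                json.dump({"users_updated_at": last_seen}, f)

    Because the query only ever looks for new or updated rows, this approach cannot detect deletes — which is exactly why it is only suitable for the insert/update-only tables described above.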

    An evolving solution

    PipelineWise is likely to evolve for some time to come, but it’s mature enough to release back to the open source community. Our hope is that others might benefit from and contribute toward the project, and possibly open up new and exciting ways of analysing data.

    For detailed information on PipelineWise features and architecture, check out the documentation.
