
    Kafka

          Build real-time data pipelines and streaming applications.

         Horizontally scalable, fault-tolerant, wicked fast.

    Kafka® is used for building real-time data pipelines and streaming apps. It is horizontally scalable, fault-tolerant, wicked fast, and runs in production in thousands of companies.

           A distributed streaming platform.

    Apache Kafka® is a distributed streaming platform. What exactly does that mean?

           Three key capabilities of a streaming platform:

           - Publish and subscribe to streams of records

           - Store streams of records in a fault-tolerant, durable way

           - Process streams of records

    A streaming platform has three key capabilities:

    • Publish and subscribe to streams of records, similar to a message queue or enterprise messaging system.
    • Store streams of records in a fault-tolerant durable way.
    • Process streams of records as they occur.

          Two broad classes of applications:

          - Building real-time data pipelines that reliably move data

          - Building real-time streaming applications that transform and react to streams of data

    Kafka is generally used for two broad classes of applications:

    • Building real-time streaming data pipelines that reliably get data between systems or applications
    • Building real-time streaming applications that transform or react to the streams of data

    To understand how Kafka does these things, let's dive in and explore Kafka's capabilities from the bottom up.

    Architecture

         Kafka achieves high availability and fault tolerance because it runs as a cluster; it relies on ZooKeeper for coordination.

        Five kinds of APIs are exposed:

        Producer API

        Consumer API

        Streams API

        Connector API, for connecting to other applications and writing to other data systems

        Admin API

    First a few concepts:

    • Kafka is run as a cluster on one or more servers that can span multiple datacenters.
    • The Kafka cluster stores streams of records in categories called topics.
    • Each record consists of a key, a value, and a timestamp.
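
    To make the shape of a record concrete, here is a minimal Java sketch; the topic name, key, and JSON payload are invented for illustration:

```java
import org.apache.kafka.clients.producer.ProducerRecord;

public class RecordAnatomy {
    public static void main(String[] args) {
        // Each record carries a key, a value, and a timestamp; the topic
        // "page-views" and the payload below are made-up examples.
        ProducerRecord<String, String> record = new ProducerRecord<>(
                "page-views",                 // topic
                null,                         // partition: null lets the partitioner decide
                System.currentTimeMillis(),   // timestamp
                "user-42",                    // key: same key -> same partition
                "{\"page\": \"/home\"}");     // value
        System.out.println(record);
    }
}
```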


    Kafka has five core APIs:

    • The Producer API allows an application to publish a stream of records to one or more Kafka topics.
    • The Consumer API allows an application to subscribe to one or more topics and process the stream of records produced to them.
    • The Streams API allows an application to act as a stream processor, consuming an input stream from one or more topics and producing an output stream to one or more output topics, effectively transforming the input streams to output streams.
    • The Connector API allows building and running reusable producers or consumers that connect Kafka topics to existing applications or data systems. For example, a connector to a relational database might capture every change to a table.
    • The Admin API allows managing and inspecting topics, brokers and other Kafka objects.
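
    As a rough sketch of the first two APIs in action, the following program publishes a single record and reads it back; the broker address localhost:9092, the topic name "demo", and the group id are placeholder assumptions:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProduceConsumeDemo {
    public static void main(String[] args) {
        Properties prod = new Properties();
        prod.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        prod.put("key.serializer", StringSerializer.class.getName());
        prod.put("value.serializer", StringSerializer.class.getName());

        // Producer API: publish one record to the topic "demo".
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(prod)) {
            producer.send(new ProducerRecord<>("demo", "key", "hello"));
        }

        Properties cons = new Properties();
        cons.put("bootstrap.servers", "localhost:9092");
        cons.put("group.id", "demo-group"); // consumers in one group split the partitions
        cons.put("auto.offset.reset", "earliest");
        cons.put("key.deserializer", StringDeserializer.class.getName());
        cons.put("value.deserializer", StringDeserializer.class.getName());

        // Consumer API: subscribe and poll for records.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cons)) {
            consumer.subscribe(List.of("demo"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("%s -> %s%n", r.key(), r.value());
            }
        }
    }
}
```

    The group id is what gives queue-style semantics: consumers sharing a group divide a topic's partitions among themselves, while consumers in different groups each receive the full stream.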

    Use Cases

         Message queuing; the comparable application is RabbitMQ.

    Messaging

    Kafka works well as a replacement for a more traditional message broker. Message brokers are used for a variety of reasons (to decouple processing from data producers, to buffer unprocessed messages, etc). In comparison to most messaging systems Kafka has better throughput, built-in partitioning, replication, and fault-tolerance which makes it a good solution for large scale message processing applications.

    In our experience messaging uses are often comparatively low-throughput, but may require low end-to-end latency and often depend on the strong durability guarantees Kafka provides.

    In this domain Kafka is comparable to traditional messaging systems such as ActiveMQ or RabbitMQ.

         Web activity tracking: page views, searches.

    Website Activity Tracking

    The original use case for Kafka was to be able to rebuild a user activity tracking pipeline as a set of real-time publish-subscribe feeds. This means site activity (page views, searches, or other actions users may take) is published to central topics with one topic per activity type. These feeds are available for subscription for a range of use cases including real-time processing, real-time monitoring, and loading into Hadoop or offline data warehousing systems for offline processing and reporting.

    Activity tracking is often very high volume as many activity messages are generated for each user page view.
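
    As a hedged sketch of topic-per-activity-type publishing with the Java producer (the topic names "page-views" and "searches" and the event fields are assumptions, not from the Kafka docs):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ActivityTracker {
    private final KafkaProducer<String, String> producer;

    public ActivityTracker(String bootstrapServers) {
        Properties p = new Properties();
        p.put("bootstrap.servers", bootstrapServers);
        p.put("key.serializer", StringSerializer.class.getName());
        p.put("value.serializer", StringSerializer.class.getName());
        this.producer = new KafkaProducer<>(p);
    }

    // One topic per activity type; keying by user id keeps each user's
    // events ordered within a partition.
    public void trackPageView(String userId, String page) {
        producer.send(new ProducerRecord<>("page-views", userId, page));
    }

    public void trackSearch(String userId, String query) {
        producer.send(new ProducerRecord<>("searches", userId, query));
    }

    public void close() {
        producer.close();
    }

    public static void main(String[] args) {
        ActivityTracker tracker = new ActivityTracker("localhost:9092");
        tracker.trackPageView("user-42", "/home");
        tracker.trackSearch("user-42", "kafka streams");
        tracker.close();
    }
}
```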

         Log aggregation, similar in function to the ELK stack.

    Log Aggregation

    Many people use Kafka as a replacement for a log aggregation solution. Log aggregation typically collects physical log files off servers and puts them in a central place (a file server or HDFS perhaps) for processing. Kafka abstracts away the details of files and gives a cleaner abstraction of log or event data as a stream of messages. This allows for lower-latency processing and easier support for multiple data sources and distributed data consumption. In comparison to log-centric systems like Scribe or Flume, Kafka offers equally good performance, stronger durability guarantees due to replication, and much lower end-to-end latency.

          Stream processing.

    Stream Processing

    Many users of Kafka process data in processing pipelines consisting of multiple stages, where raw input data is consumed from Kafka topics and then aggregated, enriched, or otherwise transformed into new topics for further consumption or follow-up processing. For example, a processing pipeline for recommending news articles might crawl article content from RSS feeds and publish it to an "articles" topic; further processing might normalize or deduplicate this content and publish the cleansed article content to a new topic; a final processing stage might attempt to recommend this content to users. Such processing pipelines create graphs of real-time data flows based on the individual topics. Starting in 0.10.0.0, a light-weight but powerful stream processing library called Kafka Streams is available in Apache Kafka to perform such data processing as described above. Apart from Kafka Streams, alternative open source stream processing tools include Apache Storm and Apache Samza.
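
    A minimal Kafka Streams sketch of the middle (cleansing) stage of such a pipeline; the topic names are assumed, the trivial normalization stands in for real cleansing logic, and deduplication (which requires state) is omitted for brevity:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class ArticleCleanser {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "article-cleanser");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Consume raw crawled articles, normalize the text, and publish the
        // cleansed stream to a new topic for downstream recommendation jobs.
        KStream<String, String> raw = builder.stream("articles");
        raw.mapValues(body -> body.trim().toLowerCase())
           .to("articles-cleansed");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```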

    Fault-Tolerance Design: Partitions

    https://www.cnblogs.com/xjh713/p/7388262.html

    https://www.cnblogs.com/yitianyouyitian/p/10287293.html
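
    The two linked posts cover the details; in brief, partitions spread a topic across brokers, and a replication factor greater than one is what makes each partition fault-tolerant. A sketch using the Admin API (the topic name and sizing below are arbitrary):

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateReplicatedTopic {
    public static void main(String[] args) throws Exception {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092"); // assumed broker address

        try (AdminClient admin = AdminClient.create(p)) {
            // 6 partitions spread load across brokers; replication factor 3
            // keeps 3 copies of each partition, so the topic survives the
            // loss of up to 2 brokers.
            NewTopic topic = new NewTopic("orders", 6, (short) 3);
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}
```

    Each partition has one leader and, here, two follower replicas; if the leader's broker fails, a follower is promoted and clients carry on against the new leader.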
