  • [Original] Big Data Fundamentals: Logstash (4) High Availability

    High availability for Logstash means not losing data (assuming the server is only briefly unavailable and can be recovered, e.g. by restarting the server or the process). It involves two aspects:

    • Process restart (or server restart)
    • Failure to process an event message

    The corresponding solutions in Logstash are:

    • Persistent Queues
    • Dead Letter Queues

    Both are disabled by default.

    In addition, automatic process restart can be achieved with Docker, Marathon, or systemd.
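    As one hedged illustration of the systemd approach (the unit name and paths are assumptions based on a default package install, not taken from this post), a minimal unit that restarts the Logstash process automatically might look like:

    ```ini
    # /etc/systemd/system/logstash.service -- example unit; adjust paths for your install
    [Unit]
    Description=Logstash
    After=network.target

    [Service]
    User=logstash
    # Assumed default package install path; change if Logstash lives elsewhere
    ExecStart=/usr/share/logstash/bin/logstash --path.settings /etc/logstash
    Restart=always
    RestartSec=5

    [Install]
    WantedBy=multi-user.target
    ```

    Enabling it with `systemctl enable --now logstash` brings the process back after a crash or a reboot.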

    As data flows through the event processing pipeline, Logstash may encounter situations that prevent it from delivering events to the configured output. For example, the data might contain unexpected data types, or Logstash might terminate abnormally.
    To guard against data loss and ensure that events flow through the pipeline without interruption, Logstash provides the following data resiliency features.

    • Persistent Queues protect against data loss by storing events in an internal queue on disk.
    • Dead Letter Queues provide on-disk storage for events that Logstash is unable to process. You can easily reprocess events in the dead letter queue by using the dead_letter_queue input plugin.

    These resiliency features are disabled by default.

    1 Persistent Queues

    By default, Logstash uses in-memory bounded queues between pipeline stages (inputs → pipeline workers) to buffer events. The size of these in-memory queues is fixed and not configurable. If Logstash experiences a temporary machine failure, the contents of the in-memory queue will be lost. Temporary machine failures are scenarios where Logstash or its host machine are terminated abnormally but are capable of being restarted.
    In order to protect against data loss during abnormal termination, Logstash has a persistent queue feature which will store the message queue on disk. Persistent queues provide durability of data within Logstash.

    By default, Logstash buffers event messages in an in-memory queue; if the process restarts, everything in that queue is lost.

    Benefits

    • Absorbs bursts of events without needing an external buffering mechanism like Redis or Apache Kafka.
    • Provides an at-least-once delivery guarantee against message loss during a normal shutdown as well as when Logstash is terminated abnormally.

    How it works

    The queue sits between the input and filter stages in the same process:

    input → queue → filter + output

    When an input has events ready to process, it writes them to the queue. When the write to the queue is successful, the input can send an acknowledgement to its data source.
    When processing events from the queue, Logstash acknowledges events as completed, within the queue, only after filters and outputs have completed. The queue keeps a record of events that have been processed by the pipeline. An event is recorded as processed (in this document, called "acknowledged" or "ACKed") if, and only if, the event has been processed completely by the Logstash pipeline.

    Configuration

    queue.type: persisted
    path.queue: "path/to/data/persistent_queue"

    Other settings

    queue.page_capacity
    queue.drain
    queue.max_events
    queue.max_bytes
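    As a sketch, these settings might appear together in logstash.yml as follows (the values shown are the documented defaults at the time of writing; verify them against your Logstash version):

    ```yaml
    queue.type: persisted
    queue.page_capacity: 64mb      # size of each on-disk page file
    queue.max_events: 0            # 0 = unlimited events in the queue
    queue.max_bytes: 1024mb       # total disk capacity of the queue
    queue.drain: false             # if true, drain the queue fully before shutdown
    ```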

    Going deeper

    First, the queue itself is a set of pages. There are two kinds of pages: head pages and tail pages. The head page is where new events are written. There is only one head page. When the head page is of a certain size (see queue.page_capacity), it becomes a tail page, and a new head page is created. Tail pages are immutable, and the head page is append-only. Second, the queue records details about itself (pages, acknowledgements, etc) in a separate file called a checkpoint file.
    When recording a checkpoint, Logstash will:

    • Call fsync on the head page.
    • Atomically write to disk the current state of the queue.

    The process of checkpointing is atomic, which means any update to the file is saved if successful.

    If Logstash is terminated, or if there is a hardware-level failure, any data that is buffered in the persistent queue, but not yet checkpointed, is lost.
    You can force Logstash to checkpoint more frequently by setting queue.checkpoint.writes. This setting specifies the maximum number of events that may be written to disk before forcing a checkpoint. The default is 1024. To ensure maximum durability and avoid losing data in the persistent queue, you can set queue.checkpoint.writes: 1 to force a checkpoint after each event is written. Keep in mind that disk writes have a resource cost. Setting this value to 1 can severely impact performance.

    Even with the persistent queue enabled, data loss is still possible; the determining factor is the checkpoint (flush) interval. By default a checkpoint is forced every 1024 events. Setting it to 1 forces a checkpoint after every event, which avoids message loss but significantly hurts performance:

    queue.checkpoint.writes: 1


    2 Dead Letter Queues

    By default, when Logstash encounters an event that it cannot process because the data contains a mapping error or some other issue, the Logstash pipeline either hangs or drops the unsuccessful event. In order to protect against data loss in this situation, you can configure Logstash to write unsuccessful events to a dead letter queue instead of dropping them.
    Each event written to the dead letter queue includes the original event, along with metadata that describes the reason the event could not be processed, information about the plugin that wrote the event, and the timestamp for when the event entered the dead letter queue.
    To process events in the dead letter queue, you simply create a Logstash pipeline configuration that uses the dead_letter_queue input plugin to read from the queue.

    When Logstash encounters data it cannot process (a mapping error, etc.), the pipeline either hangs or drops the unsuccessful event. To avoid data loss in this situation, Logstash can be configured to write unsuccessful events to a dead letter queue instead of dropping them.

    Limitations

    The dead letter queue feature is currently supported for the elasticsearch output only. Additionally, the dead letter queue is only used where the response code is either 400 or 404, both of which indicate an event that cannot be retried. Support for additional outputs will be available in future releases of the Logstash plugins. Before configuring Logstash to use this feature, refer to the output plugin documentation to verify that the plugin supports the dead letter queue feature.

    Currently the dead letter queue supports only the elasticsearch output; other outputs will be supported in the future.

    Configuration

    dead_letter_queue.enable: true
    path.dead_letter_queue: "path/to/data/dead_letter_queue"
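    To reprocess events from the dead letter queue, a minimal pipeline using the dead_letter_queue input plugin might look like this (the path and pipeline_id here are illustrative assumptions; the path must match path.dead_letter_queue above):

    ```
    input {
      dead_letter_queue {
        path => "path/to/data/dead_letter_queue"  # must match path.dead_letter_queue
        commit_offsets => true                    # remember position across restarts
        pipeline_id => "main"                     # id of the pipeline that wrote the events
      }
    }
    output {
      stdout { codec => rubydebug }               # inspect failed events before re-indexing
    }
    ```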

    References:
    https://www.elastic.co/guide/en/logstash/current/resiliency.html
    https://www.elastic.co/guide/en/logstash/current/persistent-queues.html
    https://www.elastic.co/guide/en/logstash/current/dead-letter-queues.html

  • Original article: https://www.cnblogs.com/barneywill/p/10671946.html