  • Kafka: a distributed, publish/subscribe-based messaging system

    Kafka is a distributed messaging system based on the publish/subscribe model. Its main design goals are:

    • Message persistence through an O(1) on-disk data structure, which keeps performance stable over long periods even with terabytes of stored messages.

    • High throughput: Kafka can handle hundreds of thousands of messages per second even on very ordinary hardware.

    • Consumers pull data, with random reads, zero-copy transfers via the sendfile system call, and batched fetches.

    • Consumption state (offsets) is kept on the client side.

    • Message partitioning across Kafka servers and distributed consumption, while preserving message order within each partition.

    • Data migration and capacity expansion are transparent to users.

    • Parallel data loading into Hadoop.

    • Support for both online and offline scenarios.

    • Durability: data is persisted to disk and replicated to prevent loss.

    • Scale-out: machines can be added without taking the cluster down.

    • Periodic deletion: the retention time of each partition's segment files is configurable.
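    The zero-copy point above can be illustrated with plain Java NIO: FileChannel.transferTo delegates to the kernel's sendfile on Linux, so file bytes move to the destination without passing through a user-space buffer. This is a minimal sketch using temp files, not Kafka's actual log code:

    ```java
    import java.io.IOException;
    import java.nio.channels.FileChannel;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class ZeroCopyDemo {
        // Copy a file channel-to-channel; on Linux this maps to sendfile,
        // so the data never enters a user-space buffer.
        static long transfer(Path src, Path dst) throws IOException {
            try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
                 FileChannel out = FileChannel.open(dst, StandardOpenOption.CREATE,
                         StandardOpenOption.WRITE, StandardOpenOption.TRUNCATE_EXISTING)) {
                long pos = 0, size = in.size();
                while (pos < size) {
                    // transferTo may move fewer bytes than requested, so loop
                    pos += in.transferTo(pos, size - pos, out);
                }
                return pos;
            }
        }

        public static void main(String[] args) throws IOException {
            Path src = Files.createTempFile("seg", ".log");
            Path dst = Files.createTempFile("out", ".log");
            Files.write(src, "hello kafka".getBytes());
            long n = transfer(src, dst);
            System.out.println("transferred " + n + " bytes: "
                    + new String(Files.readAllBytes(dst)));
        }
    }
    ```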

    Sample project: https://github.com/windwant/kafka-demo

     

    kafka.properties

    # producer settings
    value.serializer=org.apache.kafka.common.serialization.StringSerializer
    key.serializer=org.apache.kafka.common.serialization.StringSerializer
    request.required.acks=1
    bootstrap.servers=localhost:9092

    # consumer settings
    value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
    key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
    group.id=test-consumer-group
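    Note that request.required.acks is the old Scala producer's key; the Java producer shipped with recent kafka-clients ignores it with a warning and expects acks instead. A sketch of the equivalent setting, assuming a newer client:

    ```properties
    # equivalent acknowledgement setting for the new Java producer (kafka-clients):
    # 1 = the partition leader must acknowledge each write
    acks=1
    ```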

    Producer:

    package org.windwant.kafka;
    
    import org.apache.commons.configuration.ConfigurationException;
    import org.apache.commons.configuration.PropertiesConfiguration;
    import org.apache.commons.configuration.reloading.FileChangedReloadingStrategy;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;
    
    import java.io.IOException;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;
    
    /**
     * Producer
     */
    public class MyKafkaProducer {
    
        private Properties props;
        public static void main(String[] args) throws ConfigurationException {
            new MyKafkaProducer().start();
        }
    
        public MyKafkaProducer() throws ConfigurationException {
            props = new Properties();
            PropertiesConfiguration config = new PropertiesConfiguration("kafka.properties");
            config.setReloadingStrategy(new FileChangedReloadingStrategy());
            // auto-save changes back to kafka.properties
            config.setAutoSave(true);
            props.put("value.serializer", config.getString("value.serializer"));
            props.put("key.serializer", config.getString("key.serializer"));
            // the Java producer expects "acks"; map the legacy key, defaulting to 1
            props.put("acks", config.getString("request.required.acks", "1"));
            props.put("bootstrap.servers", config.getString("bootstrap.servers"));
        }
    
        public void start(){
            // try-with-resources ensures the producer is closed even if a send fails
            try (Producer<String, String> producer = new KafkaProducer<>(props)) {
                for(int i = 0; i < 100; i++) {
                    // get() blocks until the broker acknowledges the record
                    RecordMetadata result = producer.send(new ProducerRecord<>("mykafka",
                            "kafka key: " + i,
                            "kafka value: " + i)).get();
                    System.out.println("producer send: " + result);
                    Thread.sleep(1000);
                }
            } catch (InterruptedException | ExecutionException e) {
                e.printStackTrace();
            }
        }
    }
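    The producer above sets a key on every record. Kafka's default partitioner hashes the key (with murmur2) modulo the partition count, so all records with the same key land in the same partition and stay ordered relative to each other. A rough sketch of the idea, using String.hashCode only as a stand-in for murmur2:

    ```java
    public class KeyPartitionSketch {
        // Pick a partition from the key, mimicking hash(key) % numPartitions.
        // Kafka's default partitioner uses murmur2; hashCode() here is a stand-in.
        static int partitionFor(String key, int numPartitions) {
            // mask the sign bit so the result is always a valid partition index
            return (key.hashCode() & 0x7fffffff) % numPartitions;
        }

        public static void main(String[] args) {
            int p1 = partitionFor("kafka key: 1", 3);
            int p2 = partitionFor("kafka key: 1", 3);
            // the same key always maps to the same partition
            System.out.println("same partition: " + (p1 == p2));
        }
    }
    ```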

    Consumer:

    package org.windwant.kafka;
    
    import org.apache.commons.configuration.ConfigurationException;
    import org.apache.commons.configuration.PropertiesConfiguration;
    import org.apache.commons.configuration.reloading.FileChangedReloadingStrategy;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    
    import java.util.Arrays;
    import java.util.Properties;
    
    /**
     * Consumer.
     */
    public class MyKafkaConsumer {
        private Properties props;
        public static void main(String[] args) throws ConfigurationException {
            new MyKafkaConsumer().start();
        }
    
        public MyKafkaConsumer() throws ConfigurationException {
            props = new Properties();
            PropertiesConfiguration config = new PropertiesConfiguration("kafka.properties");
            config.setReloadingStrategy(new FileChangedReloadingStrategy());
            // auto-save changes back to kafka.properties
            config.setAutoSave(true);
            props.put("value.deserializer", config.getString("value.deserializer"));
            props.put("key.deserializer", config.getString("key.deserializer"));
            props.put("bootstrap.servers", config.getString("bootstrap.servers"));
            props.put("group.id", config.getString("group.id"));
        }
    
        public void start(){
            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Arrays.asList("mykafka"));
            while (true) {
                // block for up to 100 ms waiting for new records
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset = %d, key = %s, value = %s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
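    As the design goals note, consumption state lives on the client: a consumer only needs to remember, per partition, the next offset to read, and can resume from there after a restart. A toy illustration of that bookkeeping (not Kafka's API; offsets here are just an in-memory map):

    ```java
    import java.util.HashMap;
    import java.util.Map;

    public class OffsetTracker {
        // next offset to consume, keyed by partition id
        private final Map<Integer, Long> nextOffset = new HashMap<>();

        // Record that the message at 'offset' in 'partition' was processed.
        void markProcessed(int partition, long offset) {
            nextOffset.put(partition, offset + 1);
        }

        // Where to resume reading this partition; new partitions start at 0.
        long resumeFrom(int partition) {
            return nextOffset.getOrDefault(partition, 0L);
        }

        public static void main(String[] args) {
            OffsetTracker t = new OffsetTracker();
            t.markProcessed(0, 0);
            t.markProcessed(0, 1);
            // prints 2: offsets 0 and 1 are done, so reading resumes at 2
            System.out.println("resume partition 0 at offset " + t.resumeFrom(0));
        }
    }
    ```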
  • Original post: https://www.cnblogs.com/niejunlei/p/5978279.html