Setting Up a Kafka Environment on Windows 7

1. Install ZooKeeper

Download link: http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.14/

Installation steps:

1) Extract the zookeeper-3.4.14.tar.gz archive.

2) Go into the conf directory and rename "zoo_sample.cfg" to "zoo.cfg".

Note: ZooKeeper listens on port 2181 by default; to change it, edit zoo.cfg.
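A minimal standalone zoo.cfg looks roughly like the sketch below; the dataDir path is only an example and should point at an existing directory on your machine:

    tickTime=2000
    dataDir=E:/tmp/zookeeper
    clientPort=2181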

3) Go into the bin directory and run zkServer.cmd; if the process starts and keeps running without errors, the installation succeeded.
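Alternatively, start ZooKeeper from a command prompt opened in the ZooKeeper root directory, so the output stays visible if startup fails:

    bin\zkServer.cmd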

2. Install Kafka

Download link: http://kafka.apache.org/downloads

Installation steps:

1) Extract the kafka_2.11-1.0.2.tgz archive.

2) After extracting, go into the Kafka root directory.

3) Open a cmd window in this directory and run the following command:

.\bin\windows\kafka-server-start.bat .\config\server.properties

If the startup log appears and the process keeps running without errors, the broker has started successfully.

Note: before the next start, delete the kafka-logs folder (its location is set by log.dirs in config\server.properties); otherwise the broker may fail with "ERROR Error while loading log dir E:\tmp\kafka-logs (kafka.log.LogManager)".
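The examples below send to topics named demo and kafka_test. With the default server.properties Kafka creates topics automatically on first use, but you can also create one explicitly; a sketch for Kafka 1.0.x, assuming ZooKeeper is still running on localhost:2181:

    .\bin\windows\kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic demo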

3. Testing the connection with Python

Producer and consumer

Open PyCharm, create a directory named kafka, and create test_producer.py and test_consumer.py inside it.
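Both scripts use the kafka-python client library, so install it first if it is not already available:

    pip install kafka-python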

test_producer.py:

# coding=utf-8
# Producer
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers='localhost:9092')
msg = "HelloWorld".encode('utf-8')
print(msg)
# send() is asynchronous; close() flushes any pending messages before exiting
producer.send('demo', msg, partition=0)
producer.close()

test_consumer.py:

# coding=utf-8
# Consumer
from kafka import KafkaConsumer

consumer = KafkaConsumer('demo', bootstrap_servers=['localhost:9092'])
for msg in consumer:
    info = "%s:%d:%d: key=%s value=%s" % (msg.topic, msg.partition, msg.offset, msg.key, msg.value)
    print(info)

Run test_consumer.py first, then test_producer.py, and watch the consumer's terminal: it should print the topic, partition, offset, key, and value of the received message.

4. Testing the connection with Java

Method 1:

Producer code:

    package kafka;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    import java.util.Date;
    import java.text.SimpleDateFormat;

    import java.util.Properties;

public class ProducerTest {
    public static void main(String[] args) {
        Properties properties = new Properties();
        // List of broker addresses; at least two are recommended so one broker failure does not block the producer
        properties.put("bootstrap.servers", "localhost:9092");
        /*
         * acks: how many partition replicas must receive a record before the producer considers
         * the write successful; it controls the likelihood of losing data.
         * acks=0: the producer does not wait for any response from the server, so data loss cannot be
         *         detected, but messages can be sent as fast as the network allows (highest throughput).
         * acks=1: the producer gets a success response as soon as the partition leader receives the message.
         * acks=all: the producer only gets a success response once all in-sync replicas have received
         *           the message; this is the safest mode.
         */
        properties.put("acks", "all");
        // retries: how many times the producer retries sending after a transient error from the server
        properties.put("retries", 0);
        // batch.size: memory available to each batch, counted in bytes (not in messages)
        properties.put("batch.size", 16384);
        // linger.ms: how long the producer waits for more messages to join a batch before sending;
        // it adds latency but improves throughput
        properties.put("linger.ms", 1);
        // buffer.memory: size of the producer's memory buffer for messages waiting to be sent to the server
        properties.put("buffer.memory", 33554432);
        // compression.type: snappy, gzip or lz4; snappy is a good balance, gzip uses more CPU but compresses better
        // key and value serializers
        properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        /*
         * Other settings worth knowing:
         * client.id: arbitrary string the server uses to identify the source of requests
         * max.in.flight.requests.per.connection: how many messages may be in flight before a response
         *     is received; larger values use more memory but increase throughput
         * timeout.ms: how long the broker waits for in-sync replicas to acknowledge a message
         * request.timeout.ms: how long the producer waits for a response after sending data
         * metadata.fetch.timeout.ms: how long the producer waits for metadata (e.g. who leads the target partition)
         * max.block.ms: how long send() or partitionsFor() may block while fetching metadata
         * max.request.size: maximum size of a request sent by the producer
         * receive.buffer.bytes and send.buffer.bytes: TCP socket receive/send buffer sizes; -1 uses the OS default
         */
        Producer<String, String> producer = null;
        // Date format used in the message payload
        SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        try {
            producer = new KafkaProducer<>(properties);
            for (int i = 0; i < 10; i++) {
                String msg = "test" + i + " " + df.format(new Date()) + " ";
                producer.send(new ProducerRecord<String, String>("kafka_test", msg));
                Thread.sleep(500);
                System.out.println("Sent:" + msg);
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (producer != null) {
                producer.close();
            }
        }
    }
}

Consumer code:

    package kafka;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    import java.util.Arrays;
    import java.util.Properties;

public class ConsumerTest {
    public static void main(String[] args) throws InterruptedException {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", "localhost:9092");
        properties.put("group.id", "group-2");
        // session.timeout.ms: how long the consumer may be out of contact with the broker before it is considered dead
        properties.put("session.timeout.ms", "30000");
        // enable.auto.commit: whether offsets are committed automatically (default true);
        // set it to false and commit manually to avoid duplicated or lost records
        properties.put("enable.auto.commit", "false");
        properties.put("auto.commit.interval.ms", "1000");
        // auto.offset.reset: what to do when a partition has no committed offset or the offset is invalid
        // earliest: start reading from the beginning of the partition
        // latest: start reading from the newest records
        properties.put("auto.offset.reset", "earliest");
        properties.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // max.partition.fetch.bytes: maximum bytes the server returns to the consumer per partition
        // fetch.max.wait.ms: how long the broker waits before answering a fetch, default 500 ms
        // fetch.min.bytes: minimum bytes the consumer fetches from the server per request
        // client.id: arbitrary string the server uses to identify the source of requests
        // max.poll.records: maximum number of records a single poll() call may return
        // receive.buffer.bytes and send.buffer.bytes: TCP socket receive/send buffer sizes; -1 uses the OS default

        KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<>(properties);
        kafkaConsumer.subscribe(Arrays.asList("kafka_test"));
        while (true) {
            ConsumerRecords<String, String> records = kafkaConsumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, value = %s", record.offset(), record.value());
                System.out.println("=====================>");
            }
        }
    }
}

Run the producer first, then the consumer; because auto.offset.reset is set to earliest, the consumer still reads the messages produced before it started and prints each offset and value to the terminal.

Method 2:

    package kafka;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.kafka.common.serialization.StringSerializer;
    import org.junit.Test;

    import java.util.Collections;
    import java.util.Properties;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;

public class KafkaDemo {
    // Broker address
    private static final String SERVERS = "localhost:9092";
    // Topic
    private static final String TOPIC = "test-kafka";
    // Consumer group
    private static final String CONSUMER_GROUP = "test-consumer";

    @Test
    public void TestProduct() throws Exception {
        Properties properties = new Properties();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, SERVERS);
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        KafkaProducer<String, String> kafkaProducer = new KafkaProducer<>(properties);
        for (int i = 0; i <= 5; i++) {
            String msg = "hello kafka" + i;
            ProducerRecord<String, String> record = new ProducerRecord<>(TOPIC, msg);

            // send() returns a Future; blocking on it makes each send synchronous
            Future<RecordMetadata> future = kafkaProducer.send(record);
            RecordMetadata recordMetadata = future.get(1, TimeUnit.SECONDS);
            System.out.println(recordMetadata.offset());
        }

        kafkaProducer.close();
    }

    @Test
    public void TestConsumer() {
        Properties properties = new Properties();
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, SERVERS);
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, CONSUMER_GROUP);

        KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<>(properties);
        kafkaConsumer.subscribe(Collections.singletonList(TOPIC));
        while (true) {
            ConsumerRecords<String, String> records = kafkaConsumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());
            }
        }
    }
}
The Maven dependencies for the code above are listed below (for these examples only kafka-clients and junit are strictly required; the remaining entries come from the original project):
<dependencies>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>0.11.0.0</version>
    </dependency>

    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.11</artifactId>
        <version>0.11.0.0</version>
    </dependency>
    <dependency>
        <groupId>com.github.eljah</groupId>
        <artifactId>xmindjbehaveplugin</artifactId>
        <version>0.8</version>
    </dependency>
    <dependency>
        <groupId>cn.hutool</groupId>
        <artifactId>hutool-all</artifactId>
        <version>4.5.6</version>
    </dependency>
    <dependency>
        <groupId>com.google.guava</groupId>
        <artifactId>guava</artifactId>
        <version>21.0</version>
    </dependency>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.11</version>
    </dependency>
</dependencies>

This completes the Kafka environment setup on Windows.