  【Kafka】Common Kafka commands and Kafka stress testing

    Preface

    The Kafka commands in this article apply to Kafka 0.10 and later.

    Demo environment: Kafka 0.11.0.2, Scala 2.11

    List all topics

    kafka-topics.sh --zookeeper hadoop111:2181 --list
    

    Option descriptions:

    --zookeeper : the ZooKeeper connection string

    --list : print the list of topics

    Create a topic

    kafka-topics.sh --zookeeper hadoop111:2181 --create --replication-factor 3 --partitions 1 --topic test
    

    Option descriptions:

    --create : create a topic

    --topic : the topic name

    --replication-factor : the number of replicas

    --partitions : the number of partitions

    Options prefixed with a double dash (--) may appear in any order in Kafka commands; write them in whatever order you prefer.
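
    The --partitions value controls how messages are spread across the topic. As a rough sketch of the idea (Kafka's real default partitioner hashes the message key with murmur2; Python's built-in hash() stands in for it here purely for illustration):

```python
# Simplified model of keyed partition assignment.
# Kafka's default partitioner uses murmur2(key) % num_partitions;
# plain hash() is used here only to illustrate the idea.
def choose_partition(key: str, num_partitions: int) -> int:
    return hash(key) % num_partitions

# With --partitions 1, as in the create command above,
# every key necessarily lands in partition 0.
partition = choose_partition("user-42", 1)
print(partition)  # 0
```

    With more partitions, keys spread across them, which is what lets multiple consumers in a group read in parallel.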

    Delete a topic

    kafka-topics.sh --zookeeper hadoop111:2181 --delete --topic test
    

    Note:

    The delete only takes effect immediately if delete.topic.enable=true is set in config/server.properties. The default is false, in which case the topic is merely marked for deletion and is not actually removed until the Kafka service is restarted.
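
    A minimal server.properties fragment enabling immediate deletion might look like this (a sketch; only the one key is required):

```properties
# config/server.properties
# Allow kafka-topics.sh --delete to remove topics immediately.
# The 0.11 default is false: the topic is only marked for deletion.
delete.topic.enable=true
```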

    Produce messages

    kafka-console-producer.sh --broker-list hadoop111:9092 --topic test
    

    Parameter description:

    --broker-list : the address and port of any Kafka server in the cluster

    Consume messages

    kafka-console-consumer.sh --bootstrap-server hadoop111:9092 --from-beginning --topic test
    

    Parameter descriptions:

    --bootstrap-server : the address and port of any Kafka server in the cluster

    --from-beginning : consume all messages in the topic from the beginning

    Describe a topic

    kafka-topics.sh --zookeeper hadoop111:2181 --describe --topic test 
    

    Start Kafka

    kafka-server-start.sh config/server.properties &
    

    The trailing & runs the process in the background.

    Stop Kafka

    kafka-server-stop.sh
    

    Kafka producer stress test

    kafka-producer-perf-test.sh  --topic test --record-size 100 --num-records 100000 --throughput 1000 --producer-props bootstrap.servers=hadoop102:9092,hadoop103:9092,hadoop104:9092
    

    Parameter descriptions:

    --record-size : the size of each message, in bytes

    --num-records : the total number of messages to send

    --throughput : the target number of messages per second (-1 means no throttling)
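
    As a sanity check on these numbers, the expected duration and data volume of the run can be derived from the three parameters (a quick sketch, independent of Kafka itself):

```python
# Parameters from the kafka-producer-perf-test.sh invocation above
record_size = 100        # bytes per message (--record-size)
num_records = 100_000    # total messages (--num-records)
throughput = 1000        # target messages/sec (--throughput)

expected_seconds = num_records / throughput            # how long the run should take
total_mb = record_size * num_records / (1024 * 1024)   # total payload volume
mb_per_sec = record_size * throughput / (1024 * 1024)  # expected write rate

print(expected_seconds)        # 100.0 -> the test runs for about 100 s
print(round(total_mb, 2))      # 9.54  -> about 9.54 MB of payload in total
print(round(mb_per_sec, 2))    # 0.1   -> matches the 0.10 MB/sec in the output below
```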

    The producer stress test output looks like this:

    5002 records sent, 1000.2 records/sec (0.10 MB/sec), 3.1 ms avg latency, 165.0 max latency.
    5033 records sent, 1006.6 records/sec (0.10 MB/sec), 1.0 ms avg latency, 35.0 max latency.
    5001 records sent, 1000.0 records/sec (0.10 MB/sec), 1.5 ms avg latency, 66.0 max latency.
    5002 records sent, 1000.4 records/sec (0.10 MB/sec), 0.9 ms avg latency, 14.0 max latency.
    4998 records sent, 998.4 records/sec (0.10 MB/sec), 0.9 ms avg latency, 34.0 max latency.
    5008 records sent, 1001.6 records/sec (0.10 MB/sec), 0.7 ms avg latency, 13.0 max latency.
    5003 records sent, 1000.6 records/sec (0.10 MB/sec), 0.9 ms avg latency, 46.0 max latency.
    5001 records sent, 1000.0 records/sec (0.10 MB/sec), 0.9 ms avg latency, 50.0 max latency.
    5002 records sent, 1000.2 records/sec (0.10 MB/sec), 0.5 ms avg latency, 5.0 max latency.
    5003 records sent, 1000.2 records/sec (0.10 MB/sec), 0.8 ms avg latency, 22.0 max latency.
    5002 records sent, 1000.2 records/sec (0.10 MB/sec), 0.6 ms avg latency, 7.0 max latency.
    5001 records sent, 1000.2 records/sec (0.10 MB/sec), 0.7 ms avg latency, 31.0 max latency.
    5002 records sent, 1000.0 records/sec (0.10 MB/sec), 0.7 ms avg latency, 15.0 max latency.
    5003 records sent, 1000.6 records/sec (0.10 MB/sec), 0.8 ms avg latency, 15.0 max latency.
    5002 records sent, 1000.4 records/sec (0.10 MB/sec), 0.8 ms avg latency, 14.0 max latency.
    5001 records sent, 1000.0 records/sec (0.10 MB/sec), 0.6 ms avg latency, 15.0 max latency.
    5001 records sent, 1000.2 records/sec (0.10 MB/sec), 0.8 ms avg latency, 18.0 max latency.
    5003 records sent, 1000.4 records/sec (0.10 MB/sec), 0.8 ms avg latency, 13.0 max latency.
    5001 records sent, 1000.2 records/sec (0.10 MB/sec), 0.9 ms avg latency, 31.0 max latency.
    100000 records sent, 999.970001 records/sec (0.10 MB/sec), 0.94 ms avg latency, 165.00 ms max latency, 1 ms 50th, 2 ms 95th, 7 ms 99th, 42 ms 99.9th.
    

    Interpreting the results:

    In this run, 100,000 messages were written in total at an average of 999.970001 messages/sec (0.10 MB/sec written to Kafka), with an average write latency of 0.94 ms and a maximum latency of 165 ms.
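
    The summary line can also be checked programmatically; a small sketch that parses the final line of the output above (the regex is an assumption based on the format shown, not part of the Kafka tooling):

```python
import re

# Final summary line from the producer perf test output above
summary = ("100000 records sent, 999.970001 records/sec (0.10 MB/sec), "
           "0.94 ms avg latency, 165.00 ms max latency, "
           "1 ms 50th, 2 ms 95th, 7 ms 99th, 42 ms 99.9th.")

m = re.match(
    r"(?P<sent>\d+) records sent, (?P<rate>[\d.]+) records/sec "
    r"\((?P<mb>[\d.]+) MB/sec\), (?P<avg>[\d.]+) ms avg latency, "
    r"(?P<max>[\d.]+) ms max latency",
    summary,
)
stats = {k: float(v) for k, v in m.groupdict().items()}

print(stats["rate"])  # 999.970001 messages/sec on average
print(stats["avg"])   # 0.94 ms average latency
```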

    Kafka consumer stress test

    kafka-consumer-perf-test.sh --zookeeper hadoop111:2181 --topic test --fetch-size 10000 --messages 10000000 --threads 1
    

    Parameter descriptions:

    --zookeeper : the ZooKeeper connection string (the address and port of a ZooKeeper node, not a Kafka broker)

    --topic : the topic name

    --fetch-size : the amount of data to fetch per request, in bytes

    --messages : the total number of messages to consume

    The consumer stress test output looks like this:

    start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec
    2020-05-15 17:41:57:339, 2020-05-15 17:41:59:243, 9.5367, 5.0088, 100000, 52521.0084
    

    Interpreting the results:

    The columns are the start time, the end time, the total data consumed (9.5367 MB), the consumption throughput (5.0088 MB/sec), the total number of messages consumed (100,000), and the message throughput (52,521.0084 messages/sec).
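
    The two throughput columns follow directly from the totals and the elapsed time; a quick check using the timestamps from the row above:

```python
from datetime import datetime

# Timestamps and totals from the consumer perf test row above
start = datetime.strptime("2020-05-15 17:41:57:339", "%Y-%m-%d %H:%M:%S:%f")
end = datetime.strptime("2020-05-15 17:41:59:243", "%Y-%m-%d %H:%M:%S:%f")
elapsed = (end - start).total_seconds()  # 1.904 s

data_mb = 9.5367   # data.consumed.in.MB
messages = 100000  # data.consumed.in.nMsg

print(round(data_mb / elapsed, 4))   # 5.0088     -> the MB.sec column
print(round(messages / elapsed, 4))  # 52521.0084 -> the nMsg.sec column
```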

    List all consumer groups

    [ssrs@hadoop112 bin]$ kafka-consumer-groups.sh --bootstrap-server hadoop111:9092 --list
    Note: This will only show information about consumers that use the Java consumer API (non-ZooKeeper-based consumers).
    
    flume
    KMOffsetCache-hadoop111
    

    Note:

    This only shows consumer groups that use the Java consumer API (not ZooKeeper-based consumers).

    [ssrs@hadoop112 bin]$ kafka-consumer-groups.sh --zookeeper hadoop111:2181 --list
    Note: This will only show information about consumers that use ZooKeeper (not those using the Java consumer API).
    
    mygroup
    perf-consumer-52487
    console-consumer-20318
    console-consumer-44724
    perf-consumer-49290
    

    Note:

    This only shows consumer groups that use ZooKeeper (not those using the Java consumer API).

  Original post: https://www.cnblogs.com/ShadowFiend/p/12916938.html