  • Build a Kafka cluster with password authentication on Debian

    Prepare three machines:

    172.17.0.2

    172.17.0.12

    172.17.0.13

    Step 1: download and install Kafka

    mkdir /data

    cd /data

    wget https://mirrors.tuna.tsinghua.edu.cn/apache/kafka/2.4.1/kafka_2.12-2.4.1.tgz

    Recent Kafka releases bundle ZooKeeper, so no separate ZooKeeper download is needed.

    tar xf kafka_2.12-2.4.1.tgz

    mv kafka_2.12-2.4.1 kafka
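
    Kafka needs a Java runtime. If one is not already present on the hosts, something like the following should work (the package name assumes Debian 10/11 repositories; adjust for your release):

    apt-get update && apt-get install -y openjdk-11-jre-headless
    java -version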

    Step 2: ZooKeeper configuration file (identical on all three machines)

    cd kafka

    egrep -v "(^#|^$)" config/zookeeper.properties

    dataDir=/data/kafka/zookeeper_data
    tickTime=2000
    initLimit=10
    syncLimit=5
    clientPort=2181
    maxClientCnxns=0
    admin.enableServer=false
    server.1=172.17.0.2:2888:3888
    server.2=172.17.0.12:2888:3888
    server.3=172.17.0.13:2888:3888

    Step 3: create the myid file

    mkdir -p /data/kafka/zookeeper_data

    cd /data/kafka/zookeeper_data

    vim myid

    Set the contents to 1, 2, and 3 respectively, one value per machine.
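
    Equivalently, the file can be written with a one-liner; run the matching line on each host (paths as configured above):

    echo 1 > /data/kafka/zookeeper_data/myid   # on 172.17.0.2
    echo 2 > /data/kafka/zookeeper_data/myid   # on 172.17.0.12
    echo 3 > /data/kafka/zookeeper_data/myid   # on 172.17.0.13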

    Step 4: start ZooKeeper

    cd /data/kafka/bin

    ./zookeeper-server-start.sh -daemon ../config/zookeeper.properties

    The ZooKeeper ensemble is now up. Remember to open TCP ports 2181, 2888, 3888, and 9095 in the firewall.
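
    As an optional sanity check (not part of the original steps), the zookeeper-shell tool bundled with Kafka can confirm each node answers on 2181:

    ./zookeeper-shell.sh 172.17.0.2:2181 ls /
    # the root znodes should be listed, e.g. [zookeeper]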

    Step 5: configure the Kafka brokers. Every broker.id must be unique and listeners must use the local machine's IP; a per-broker sketch follows the config below.

    cd ..

    egrep -v "(^#|^$)" config/server.properties

    broker.id=0
    listeners=SASL_PLAINTEXT://172.17.0.2:9095
    security.inter.broker.protocol=SASL_PLAINTEXT
    sasl.enabled.mechanisms=PLAIN
    sasl.mechanism.inter.broker.protocol=PLAIN
    num.network.threads=3
    num.io.threads=8
    socket.send.buffer.bytes=102400
    socket.receive.buffer.bytes=102400
    socket.request.max.bytes=104857600
    log.dirs=/tmp/kafka-logs
    num.partitions=1
    num.recovery.threads.per.data.dir=1
    offsets.topic.replication.factor=1
    transaction.state.log.replication.factor=1
    transaction.state.log.min.isr=1
    log.retention.hours=168
    log.segment.bytes=1073741824
    log.retention.check.interval.ms=300000
    zookeeper.connect=172.17.0.2:2181,172.17.0.12:2181,172.17.0.13:2181
    zookeeper.connection.timeout.ms=6000
    group.initial.rebalance.delay.ms=0
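
    The file above is for 172.17.0.2. A sketch of the only lines that should differ on the other two brokers, assuming the same ports and data paths:

    # on 172.17.0.12
    broker.id=1
    listeners=SASL_PLAINTEXT://172.17.0.12:9095

    # on 172.17.0.13
    broker.id=2
    listeners=SASL_PLAINTEXT://172.17.0.13:9095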

    Step 6: create kafka_server_jaas.conf (on all three hosts)

    vim config/kafka_server_jaas.conf

    KafkaServer {
        org.apache.kafka.common.security.plain.PlainLoginModule required
        username="kafka"
        password="kafkapswd"
        user_kafka="kafkapswd"
        user_mooc="moocpswd";
    };
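
    With the PLAIN login module, username/password are the credentials the broker itself uses for inter-broker connections, and each user_<name>="<password>" entry defines a client account. To add another client later, the file would look like this (a sketch; user_analytics is a made-up example, not from the original):

    KafkaServer {
        org.apache.kafka.common.security.plain.PlainLoginModule required
        username="kafka"
        password="kafkapswd"
        user_kafka="kafkapswd"
        user_mooc="moocpswd"
        user_analytics="analyticspswd";
    };

    Brokers read this file at startup, so they must be restarted for changes to take effect.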

    Step 7: create kafka_client_jaas.conf (on all three hosts). The username/password here must match one of the user_<name> entries in kafka_server_jaas.conf.

    vim config/kafka_client_jaas.conf

    KafkaClient {
        org.apache.kafka.common.security.plain.PlainLoginModule required
        username="mooc"
        password="moocpswd";
    };

    Step 8: add the following to bin/kafka-server-start.sh (on all three hosts)

    if [ "x$KAFKA_OPTS" ]; then

      export KAFKA_OPTS="-Djava.security.auth.login.config=/data/kafka/config/kafka_server_jaas.conf"

    fi
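
    An alternative (not part of the original walkthrough) is to leave the script untouched and export KAFKA_OPTS in the shell that launches the broker; kafka-run-class.sh picks it up from the environment:

    export KAFKA_OPTS="-Djava.security.auth.login.config=/data/kafka/config/kafka_server_jaas.conf"
    /data/kafka/bin/kafka-server-start.sh -daemon /data/kafka/config/server.properties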

    Step 9: add the following to kafka-console-producer.sh and kafka-console-consumer.sh (on all three hosts)

    if [ "x$KAFKA_OPTS" ]; then

      export KAFKA_OPTS="-Djava.security.auth.login.config=/data/kafka/config/kafka_client_jaas.conf"

    fi
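
    Another option (a sketch, not from the original) is to skip editing the console scripts and put the SASL settings, including an inline JAAS entry, in a client properties file passed via --producer.config / --consumer.config:

    # /data/kafka/config/client.properties
    security.protocol=SASL_PLAINTEXT
    sasl.mechanism=PLAIN
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="mooc" password="moocpswd";

    bin/kafka-console-producer.sh --broker-list 172.17.0.2:9095 --topic topicTest --producer.config /data/kafka/config/client.properties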

    cd bin/

    ./kafka-server-start.sh -daemon ../config/server.properties

    Perform all of the above on all three machines; the Kafka cluster is now running as well.
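
    An optional check that all three brokers registered with ZooKeeper (paths as above):

    cd /data/kafka
    bin/zookeeper-shell.sh 172.17.0.2:2181 ls /brokers/ids
    # the ids 0, 1 and 2 should appear in the output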

    # Create a topic with 3 partitions and replication factor 3
    bin/kafka-topics.sh --create --zookeeper 172.17.0.2:2181,172.17.0.12:2181,172.17.0.13:2181 --replication-factor 3 --partitions 3 --topic topicTest

    # List topics in the cluster
    bin/kafka-topics.sh --list --zookeeper 172.17.0.2:2181,172.17.0.12:2181,172.17.0.13:2181

    # Console producer
    bin/kafka-console-producer.sh --broker-list 172.17.0.2:9095,172.17.0.12:9095,172.17.0.13:9095 --topic topicTest --producer-property security.protocol=SASL_PLAINTEXT --producer-property sasl.mechanism=PLAIN

    # Console consumer
    bin/kafka-console-consumer.sh --bootstrap-server 172.17.0.2:9095,172.17.0.12:9095,172.17.0.13:9095 --topic topicTest --consumer-property security.protocol=SASL_PLAINTEXT --consumer-property sasl.mechanism=PLAIN --from-beginning
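
    To verify the new topic's partition and replica placement (optional; the flag is standard in Kafka 2.4):

    # Describe the topic
    bin/kafka-topics.sh --describe --zookeeper 172.17.0.2:2181,172.17.0.12:2181,172.17.0.13:2181 --topic topicTest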
