  • Kafka leader balancing across brokers.

    Whenever a broker stops or crashes, leadership for that broker's partitions transfers to other replicas. This means that by default when the broker is restarted it will only be a follower for all its partitions, meaning it will not be used for client reads and writes.

    Any time a broker stops or crashes, leadership for the partitions hosted on that broker is handed over to other replicas. This means that by default, after the broker restarts, every partition it hosts is only a follower, so the broker no longer serves client reads or writes (which leaves leadership unevenly distributed across the cluster).
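    A quick way to observe this is to describe a topic and compare the Leader column against the first broker id in the Replicas column; whenever they differ, that partition is no longer led by its preferred broker. A minimal check, assuming a topic named my-topic and the same zk_host:port/chroot connection string used further below:

    > bin/kafka-topics.sh --describe --zookeeper zk_host:port/chroot --topic my-topic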

    To avoid this imbalance, Kafka has a notion of preferred replicas. If the list of replicas for a partition is 1,5,9 then node 1 is preferred as the leader to either node 5 or 9 because it is earlier in the replica list. You can have the Kafka cluster try to restore leadership to the restored replicas by running the command:

    To avoid this situation, Kafka introduces the notion of preferred replicas. If the replica list for a partition is 1,5,9, then node 1 is preferred as the leader over nodes 5 and 9 because it appears earlier in the replica list. You can run the following command on the cluster to have leadership restored to the way it was before the broker failure:

    > bin/kafka-preferred-replica-election.sh --zookeeper zk_host:port/chroot
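    If you only want to move leadership back for a few specific partitions rather than the whole cluster, the same script also accepts a --path-to-json-file argument in the Kafka versions that ship this tool. A small sketch, where the file name preferred-partitions.json and the topic name my-topic are only placeholders:

    > cat preferred-partitions.json
    {"partitions": [{"topic": "my-topic", "partition": 0}, {"topic": "my-topic", "partition": 1}]}
    > bin/kafka-preferred-replica-election.sh --zookeeper zk_host:port/chroot --path-to-json-file preferred-partitions.json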

    Since running this command can be tedious you can also configure Kafka to do this automatically by setting the following configuration:

    Since running this command by hand is tedious, you can also have Kafka do it automatically by setting the following property in the broker configuration to enable automatic leader rebalancing:

    auto.leader.rebalance.enable=true
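    In the broker's server.properties this option usually sits alongside two related settings that control how often the controller checks for leader imbalance and how much imbalance it tolerates before triggering a preferred-replica election. The values shown below are the defaults; treat them as a sketch to adjust for your own cluster:

    # how often the controller checks whether leadership is balanced (seconds)
    leader.imbalance.check.interval.seconds=300
    # trigger an election for a broker once more than this percentage of its
    # partitions are led by a non-preferred replica
    leader.imbalance.per.broker.percentage=10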

  • Original article: https://www.cnblogs.com/dongxiao-yang/p/5216583.html