  • Kafka cluster deployment

    Download Kafka:

    http://kafka.apache.org/downloads

    Version:

    kafka_2.11-1.1.1.tgz

    Extract:

    tar  -zxvf  kafka_2.11-1.1.1.tgz   -C /opt/install

    [root@hadoop002 kafka_2.11-1.1.1]# cd  config/

    Edit the configuration:

    Parameter notes:

    dataDir is, as the name suggests, the directory where ZooKeeper stores its data; by default ZooKeeper also writes its transaction logs into this directory.
    
    clientPort is the port clients use to connect to the ZooKeeper server; ZooKeeper listens on it for client requests.
    
    In server.A=B:C:D, A is a number identifying which server this is; B is the server's IP address; C is the port used for exchanging cluster membership information, i.e. the port this server uses to talk to the cluster leader; D is the port used specifically for leader election when the leader goes down.

    vi zookeeper.properties ########### cluster config ###########

    server.1=192.168.25.143:2888:3888
    server.2=192.168.25.144:2888:3888
    server.3=192.168.25.145:2888:3888
    server.4=192.168.25.146:2888:3888
    
    dataLogDir=/data/zookeeper/logs
    # The number of milliseconds of each tick
    tickTime=2000
    # The number of ticks that the initial
    # synchronization phase can take
    initLimit=10
    # The number of ticks that can pass between
    # sending a request and getting an acknowledgement
    syncLimit=5
    # the directory where the snapshot is stored.
    # do not use /tmp for storage, /tmp here is just
    # example sakes.
    # Under dataDir, create a file named myid containing a single number,
    # unique per ZooKeeper node; here they are 1, 2, 3, 4 respectively.
    # For example: echo "1" > /data/zookeeper/myid
    dataDir=/data/zookeeper
    # the port at which the clients will connect
    clientPort=2181
    # the maximum number of client connections.
    # increase this if you need to handle more clients
    #maxClientCnxns=60
    #
    # Be sure to read the maintenance section of the
    # administrator guide before turning on autopurge.
    #
    # http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
    #
    # The number of snapshots to retain in dataDir
    #autopurge.snapRetainCount=3
    # Purge task interval in hours
    # Set to "0" to disable auto purge feature
    #autopurge.purgeInterval=1
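    The comment above asks for a myid file under dataDir on every node. A minimal sketch of that step (a temporary directory stands in for /data/zookeeper here so the snippet runs anywhere):

    ```shell
    # Each ZooKeeper node needs a myid file under its dataDir whose number
    # matches that node's server.N entry. In production set
    # DATA_DIR=/data/zookeeper; a temp directory is used here as a stand-in.
    DATA_DIR="$(mktemp -d)"
    echo "1" > "$DATA_DIR/myid"    # write 2, 3, 4 on the other three nodes
    cat "$DATA_DIR/myid"           # prints: 1
    ```

    If the number in myid does not match the server.N line for that host, the node will fail to join the quorum.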

     vi server.properties 

    Edit:

    ############################# Server Basics #############################
    
    # The id of the broker. This must be set to a unique integer for each
    # broker; every broker.id in the cluster must be unique.
    broker.id=0
    delete.topic.enable=true
    
    
    ############################# Log Basics #############################
    
    # A comma separated list of directories under which to store log files
    # (all Kafka data is stored under log.dirs)
    log.dirs=/data/kafka/log
    
    # The default number of log partitions per topic. More partitions allow greater
    # parallelism for consumption, but this will also result in more files across
    # the brokers.
    num.partitions=1
    
    # The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
    # This value is recommended to be increased for installations with data dirs located in RAID array.
    num.recovery.threads.per.data.dir=1
    
    ############################# Zookeeper #############################
    
    # Zookeeper connection string (see zookeeper docs for details).
    # This is a comma separated host:port pairs, each corresponding to a zk
    # server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
    # You can also append an optional chroot string to the urls to specify the
    # root directory for all kafka znodes.
    zookeeper.connect=hadoop002:2181,hadoop003:2181,hadoop004:2181
    
    # Timeout in ms for connecting to zookeeper
    zookeeper.connection.timeout.ms=6000
    
    ############################# Group Coordinator Settings #############################
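    Since broker.id must differ on every node, one common trick is to derive it from the last octet of the host's IP (143/144/145/146 in this cluster). A sketch, patching a stand-in copy of server.properties so it runs anywhere:

    ```shell
    # Patch broker.id using the last octet of the node's IP address.
    # In production point CONF at $KAFKA_HOME/config/server.properties;
    # a temp file with the relevant lines is used here as a stand-in.
    CONF="$(mktemp)"
    printf 'broker.id=0\ndelete.topic.enable=true\n' > "$CONF"
    IP="192.168.25.143"            # on a real node: IP=$(hostname -I | awk '{print $1}')
    sed -i "s/^broker\.id=.*/broker.id=${IP##*.}/" "$CONF"
    grep '^broker.id=' "$CONF"     # prints: broker.id=143
    ```

    Run once per node (or bake it into a deployment script) and every broker gets a stable, unique id without hand-editing four files.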

    Start the cluster:

    # stop first, if already running
    /opt/Kafka/kafka_2.11-1.1.1/bin/kafka-server-stop.sh
    /opt/Kafka/kafka_2.11-1.1.1/bin/zookeeper-server-stop.sh
    
    # start ZooKeeper first, then Kafka; -daemon runs each in the background
    /opt/Kafka/kafka_2.11-1.1.1/bin/zookeeper-server-start.sh -daemon /opt/Kafka/kafka_2.11-1.1.1/config/zookeeper.properties
    /opt/Kafka/kafka_2.11-1.1.1/bin/kafka-server-start.sh -daemon /opt/Kafka/kafka_2.11-1.1.1/config/server.properties
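    Once the brokers are up, a quick smoke test with the Kafka 1.1.x CLI confirms the cluster is working (this requires the running cluster above, so it is not runnable standalone; the topic name `test` is just an example):

    ```shell
    # Create a replicated topic, then list and describe it.
    /opt/Kafka/kafka_2.11-1.1.1/bin/kafka-topics.sh --create \
      --zookeeper hadoop002:2181,hadoop003:2181,hadoop004:2181 \
      --replication-factor 3 --partitions 3 --topic test
    /opt/Kafka/kafka_2.11-1.1.1/bin/kafka-topics.sh --list \
      --zookeeper hadoop002:2181
    /opt/Kafka/kafka_2.11-1.1.1/bin/kafka-topics.sh --describe \
      --zookeeper hadoop002:2181 --topic test
    ```

    The --describe output shows the leader and in-sync replicas for each partition; if all four brokers registered correctly, replicas should be spread across different broker ids.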

    Or use a script:

    /etc/init.d$ vi kafka-start-up.sh

    #!/bin/bash
    
    
    #export KAFKA_HOME=$PATH
    export KAFKA_HOME=/opt/Kafka/kafka_2.11-1.1.1
    #chkconfig:2345 30 80  
    #description:kafka  
    #processname:kafka  
    case $1 in
      start)
            chmod -R 777 $KAFKA_HOME/logs
            chmod -R 777 /data/kafka
            # -daemon keeps each service in the background so the script returns
            $KAFKA_HOME/bin/zookeeper-server-start.sh -daemon $KAFKA_HOME/config/zookeeper.properties
            $KAFKA_HOME/bin/kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties
            ;;
      stop)
            $KAFKA_HOME/bin/kafka-server-stop.sh
            $KAFKA_HOME/bin/zookeeper-server-stop.sh
            ;;
      *)
            echo "Usage: $0 {start|stop}"
            ;;
    esac
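    The chkconfig header lines above suggest a SysV-style init setup; a sketch of registering the script as a service under that assumption (the service name simply reuses the filename from this section):

    ```shell
    # Make the script executable, register it with the init system,
    # and drive it through the service command.
    chmod +x /etc/init.d/kafka-start-up.sh
    chkconfig --add kafka-start-up.sh
    service kafka-start-up.sh start
    service kafka-start-up.sh stop
    ```

    With chkconfig levels 2345 the service then starts automatically on boot at those runlevels.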
  • Original post: https://www.cnblogs.com/lshan/p/11433731.html