    Setting up ZooKeeper and Kafka clusters with Docker

    Setting up a ZooKeeper cluster with Docker

    Installing the docker-compose orchestration tool

    About Compose

    Docker Compose is one of Docker's official orchestration projects; it is used to deploy distributed applications quickly across a group of containers.

    Compose is an official open-source Docker project that provides fast orchestration of groups of Docker containers. Its stated purpose is "defining and running multi-container Docker applications", and it evolved from the open-source project Fig.

    A Dockerfile template makes it easy to define a single application container. In day-to-day work, however, a task often needs several containers working together: a web project, for instance, usually needs a database container behind the web service container, and sometimes a load balancer as well.

    Compose fills exactly this need. It lets you define a group of related application containers as one project in a single docker-compose.yml template file (YAML format).

    Compose has two key concepts:

    • Service: a container for one component of the application; in practice it may comprise several container instances running the same image.
    • Project: a complete business unit made up of a group of associated application containers, defined in the docker-compose.yml file.
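
    For illustration, here is a minimal docker-compose.yml sketch with two services forming one project (the service names and images are placeholders, not part of this tutorial):

    version: '3'
    services:
     web:
      image: nginx:alpine # one service: a web container
      ports:
       - "8080:80"
     cache:
      image: redis:alpine # a second service backing the web tier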

    Compose's default unit of management is the project, and its subcommands manage the life cycle of a project's containers as a group. A project, then, is a set of associated services (containers), and Compose manages at the project level.

    Compose is written in Python and works by calling the API exposed by the Docker daemon to manage containers. It can therefore orchestrate containers on any platform that supports the Docker API.

    Installation and removal

    Compose can be installed with Python's package manager pip, downloaded as a prebuilt binary, or even run directly inside a Docker container. The first two are the traditional routes and suit local installs; the last leaves the host system untouched and is better suited to cloud environments. Docker for Mac and Docker for Windows ship with the docker-compose binary, so it is available as soon as Docker is installed. On Linux, use one of the methods described below.
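
    As a sketch of the third option: the docker/compose image on Docker Hub can run Compose itself in a container (the tag below mirrors the version used later in this guide; mounting the Docker socket and the project directory is required):

    docker run --rm \
        -v /var/run/docker.sock:/var/run/docker.sock \
        -v "$PWD:$PWD" -w "$PWD" \
        docker/compose:1.17.1 up -d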

    Binary installation

    Installation on Linux is just as simple: download the prebuilt binary from the official GitHub Releases page.

    For example, on 64-bit Linux, download the matching binary directly:

    sudo curl -L https://github.com/docker/compose/releases/download/1.17.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
    sudo chmod +x /usr/local/bin/docker-compose
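
    To confirm the binary works, print its version (the exact build hash will vary):

    docker-compose --version
    # docker-compose version 1.17.1, build <hash>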
    

    To uninstall a binary installation, simply delete the binary:

    sudo rm /usr/local/bin/docker-compose
    

    pip installation

    This installs Compose as a Python application from the pip index. Run:

    sudo pip install -U docker-compose
    

    When installed via pip, Compose can be removed with:

    sudo pip uninstall docker-compose
    

    Setting up the ZooKeeper cluster with docker-compose

    Create the docker-compose.yml file

    In the working directory /docker-compose/zookeeper, create a docker-compose.yml file.

    Add the following content:

    
    version: '3.4'
    services:
     zoo1:
      image: zookeeper:3.4 # image name
      restart: always # restart the container automatically on failure
      hostname: zoo1
      container_name: zoo1
      privileged: true
      ports: # host:container port mapping
       - 2184:2181
      volumes: # mounted data volumes
       - ./zoo1/data:/data
       - ./zoo1/datalog:/datalog
      environment:
       TZ: Asia/Shanghai
       ZOO_MY_ID: 1 # unique node ID
       ZOO_PORT: 2181 # ZooKeeper client port
       ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888 # ensemble member list
      networks:
       mynetwork:
        ipv4_address: 172.18.0.4
     
     zoo2:
      image: zookeeper:3.4
      restart: always
      hostname: zoo2
      container_name: zoo2
      privileged: true
      ports:
       - 2182:2181
      volumes:
       - ./zoo2/data:/data
       - ./zoo2/datalog:/datalog
      environment:
       TZ: Asia/Shanghai
       ZOO_MY_ID: 2
       ZOO_PORT: 2181
       ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
      networks:
       mynetwork:
        ipv4_address: 172.18.0.5
     
     zoo3:
      image: zookeeper:3.4
      restart: always
      hostname: zoo3
      container_name: zoo3
      privileged: true
      ports:
       - 2183:2181
      volumes:
       - ./zoo3/data:/data
       - ./zoo3/datalog:/datalog
      environment:
       TZ: Asia/Shanghai
       ZOO_MY_ID: 3
       ZOO_PORT: 2181
       ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
      networks:
       mynetwork:
        ipv4_address: 172.18.0.6
     
    networks:
     mynetwork:
      external:
       name: mynetwork
    
    

    Create the custom network

    docker network ls #list existing networks
    docker network create --subnet=172.18.0.0/16 mynetwork #create a network with subnet 172.18.0.0/16
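
    Before starting any containers, it is worth confirming the network exists with the expected subnet:

    docker network inspect mynetwork | grep Subnet # should print "Subnet": "172.18.0.0/16"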
    

    Start the ZooKeeper cluster

    docker-compose up -d
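
    If a node fails to come up, its logs usually explain why; for example, to follow the first node's output:

    docker-compose logs -f zoo1 # Ctrl-C stops following; the container keeps running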
    

    The docker-compose subcommands

      build              Build or rebuild services
      bundle             Generate a Docker bundle from the Compose file
      config             Validate and view the Compose file
      create             Create services
      down               Stop and remove containers, networks, images, and volumes
      events             Receive real time events from containers
      exec               Execute a command in a running container
      help               Get help on a command
      images             List images
      kill               Kill containers
      logs               View output from containers
      pause              Pause services
      port               Print the public port for a port binding
      ps                 List containers
      pull               Pull service images
      push               Push service images
      restart            Restart services
      rm                 Remove stopped containers
      run                Run a one-off command
      scale              Set number of containers for a service
      start              Start services
      stop               Stop services
      top                Display the running processes
      unpause            Unpause services
      up                 Create and start containers
      version            Show the Docker-Compose version information
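
      A handful of these cover most day-to-day needs; all are run from the directory containing docker-compose.yml:

      docker-compose ps      # list this project's containers
      docker-compose logs -f # follow logs from all services
      docker-compose stop    # stop containers without removing them
      docker-compose down    # stop and remove containers (external networks are left alone)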
    
    

    Check that the cluster started successfully

    docker-compose ps
    Name              Command               State                     Ports                   
    ------------------------------------------------------------------------------------------
    zoo1   /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2184->2181/tcp, 2888/tcp, 3888/tcp
    zoo2   /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2182->2181/tcp, 2888/tcp, 3888/tcp
    zoo3   /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2183->2181/tcp, 2888/tcp, 3888/tcp
    

    Check each node's role

    zoo1

    $ docker exec -it zoo1 /bin/sh
    
    /zookeeper-3.4.11 # zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /conf/zoo.cfg
    Mode: follower                  // this node is a follower
    

    zoo2

    $ docker exec -it zoo2 /bin/sh
    
    /zookeeper-3.4.11 # zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /conf/zoo.cfg
    Mode: leader                // this node is the leader
    

    zoo3

    $ docker exec -it zoo3 /bin/sh
    
    /zookeeper-3.4.11 # zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /conf/zoo.cfg
    Mode: follower            // this one is also a follower
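
    The same roles can also be checked from the host through the mapped client ports, using ZooKeeper's four-letter-word commands (this assumes nc/netcat is installed on the host):

    echo srvr | nc localhost 2184 # zoo1: output includes "Mode: follower"
    echo srvr | nc localhost 2182 # zoo2: "Mode: leader"
    echo srvr | nc localhost 2183 # zoo3: "Mode: follower"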
    

    The ZooKeeper cluster is up and running. That's it for this part!

    Setting up a Kafka cluster with docker-compose

    Create the docker-compose.yml file

    In the working directory /docker-compose/kafka, create a docker-compose.yml file.

    Add the following content:

    version: '2'
     
    services:
     broker1:
      image: wurstmeister/kafka
      restart: always
      hostname: broker1
      container_name: broker1
      privileged: true
      ports:
       - "9091:9092"
      environment:
       KAFKA_BROKER_ID: 1
       KAFKA_LISTENERS: PLAINTEXT://broker1:9092
       KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker1:9092
       KAFKA_ADVERTISED_HOST_NAME: broker1
       KAFKA_ADVERTISED_PORT: 9092
       KAFKA_ZOOKEEPER_CONNECT: zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1
       JMX_PORT: 9988
      volumes:
       - /var/run/docker.sock:/var/run/docker.sock
       - ./broker1:/kafka/kafka-logs-broker1
      external_links:
      - zoo1
      - zoo2
      - zoo3
      networks:
       mynetwork:
        ipv4_address: 172.18.0.14
     
     broker2:
      image: wurstmeister/kafka
      restart: always
      hostname: broker2
      container_name: broker2
      privileged: true
      ports:
       - "9092:9092"
      environment:
       KAFKA_BROKER_ID: 2
       KAFKA_LISTENERS: PLAINTEXT://broker2:9092
       KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker2:9092
       KAFKA_ADVERTISED_HOST_NAME: broker2
       KAFKA_ADVERTISED_PORT: 9092
       KAFKA_ZOOKEEPER_CONNECT: zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1
       JMX_PORT: 9977
      volumes:
       - /var/run/docker.sock:/var/run/docker.sock
       - ./broker2:/kafka/kafka-logs-broker2
      external_links: # link to containers outside this compose file
      - zoo1
      - zoo2
      - zoo3
      networks:
       mynetwork:
        ipv4_address: 172.18.0.15
     
     broker3:
      image: wurstmeister/kafka
      restart: always
      hostname: broker3
      container_name: broker3
      privileged: true
      ports:
       - "9093:9092"
      environment:
       KAFKA_BROKER_ID: 3
       KAFKA_LISTENERS: PLAINTEXT://broker3:9092
       KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker3:9092
       KAFKA_ADVERTISED_HOST_NAME: broker3
       KAFKA_ADVERTISED_PORT: 9092
       KAFKA_ZOOKEEPER_CONNECT: zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1
       JMX_PORT: 9999
      volumes:
       - /var/run/docker.sock:/var/run/docker.sock
       - ./broker3:/kafka/kafka-logs-broker3
      external_links: # link to containers outside this compose file
      - zoo1
      - zoo2
      - zoo3
      networks:
       mynetwork:
        ipv4_address: 172.18.0.16
     
     kafka-manager:
      image: sheepkiller/kafka-manager:latest
      restart: always
      container_name: kafka-manager
      hostname: kafka-manager
      ports:
       - "9000:9000"
      links:      # link to containers created by this compose file
       - broker1
       - broker2
       - broker3
      external_links:  # link to containers outside this compose file
       - zoo1
       - zoo2
       - zoo3
      environment:
       ZK_HOSTS: zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1
       KAFKA_BROKERS: broker1:9092,broker2:9092,broker3:9092
       APPLICATION_SECRET: letmein
       KM_ARGS: -Djava.net.preferIPv4Stack=true
      networks:
       mynetwork:
        ipv4_address: 172.18.0.10
     
    networks:
     mynetwork:
      external:  # use the pre-created network
       name: mynetwork
    
    

    This compose file reuses the network created for the ZooKeeper cluster.

    Start the cluster

    docker-compose up -d
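
    Once the brokers are up, they register themselves under the /kafka1 chroot configured in KAFKA_ZOOKEEPER_CONNECT; this can be confirmed by querying ZooKeeper through one of the zoo containers:

    docker exec -it zoo1 zkCli.sh -server localhost:2181 ls /kafka1/brokers/ids
    # the last line of output should be: [1, 2, 3]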
    

    Verify the cluster

    docker exec -it broker1 bash
    cd /opt/kafka_2.11-2.0.0/bin/
    ./kafka-topics.sh --create --zookeeper zoo1:2181 --replication-factor 1 --partitions 8 --topic test
    ./kafka-console-producer.sh --broker-list localhost:9092 --topic test
    

    Normally the commands above are enough to verify the cluster, but here they throw the following exception:

    bash-4.4# kafka-topics.sh --create --zookeeper zoo1:2181 --replication-factor 1 --partitions 1 --topic mykafka
    Error: Exception thrown by the agent : java.rmi.server.ExportException: Port already in use: 9977; nested exception is: 
            java.net.BindException: Address in use (Bind failed)
    sun.management.AgentConfigurationError: java.rmi.server.ExportException: Port already in use: 9977; nested exception is: 
            java.net.BindException: Address in use (Bind failed)
            at sun.management.jmxremote.ConnectorBootstrap.startRemoteConnectorServer(ConnectorBootstrap.java:480)
            at sun.management.Agent.startAgent(Agent.java:262)
            at sun.management.Agent.startAgent(Agent.java:452)
    Caused by: java.rmi.server.ExportException: Port already in use: 9977; nested exception is: 
            java.net.BindException: Address in use (Bind failed)
            at sun.rmi.transport.tcp.TCPTransport.listen(TCPTransport.java:346)
            at sun.rmi.transport.tcp.TCPTransport.exportObject(TCPTransport.java:254)
            at sun.rmi.transport.tcp.TCPEndpoint.exportObject(TCPEndpoint.java:411)
            at sun.rmi.transport.LiveRef.exportObject(LiveRef.java:147)
            at sun.rmi.server.UnicastServerRef.exportObject(UnicastServerRef.java:237)
            at sun.rmi.registry.RegistryImpl.setup(RegistryImpl.java:213)
            at sun.rmi.registry.RegistryImpl.<init>(RegistryImpl.java:173)
            at sun.management.jmxremote.SingleEntryRegistry.<init>(SingleEntryRegistry.java:49)
            at sun.management.jmxremote.ConnectorBootstrap.exportMBeanServer(ConnectorBootstrap.java:816)
            at sun.management.jmxremote.ConnectorBootstrap.startRemoteConnectorServer(ConnectorBootstrap.java:468)
            ... 2 more
    Caused by: java.net.BindException: Address in use (Bind failed)
            at java.net.PlainSocketImpl.socketBind(Native Method)
            at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
            at java.net.ServerSocket.bind(ServerSocket.java:375)
            at java.net.ServerSocket.<init>(ServerSocket.java:237)
            at java.net.ServerSocket.<init>(ServerSocket.java:128)
            at sun.rmi.transport.proxy.RMIDirectSocketFactory.createServerSocket(RMIDirectSocketFactory.java:45)
            at sun.rmi.transport.proxy.RMIMasterSocketFactory.createServerSocket(RMIMasterSocketFactory.java:345)
            at sun.rmi.transport.tcp.TCPEndpoint.newServerSocket(TCPEndpoint.java:666)
            at sun.rmi.transport.tcp.TCPTransport.listen(TCPTransport.java:335)
            ... 11 more
    
    

    Strange, isn't it? Why a JMX error?

    After a long search online, I found that some people solved it like this:

    Their approach:

    • Add the environment variable JMX_PORT=<port> to every Kafka node.
    • After adding it they still could not connect; that turned out to be a network issue, so they also exposed each JMX port and opened it in the firewall, which solved it for them.
    • KAFKA_ADVERTISED_HOST_NAME is best set to the host machine's IP so that code or tools outside the host can connect; the advertised port likewise needs to be the exposed port.

    None of this worked when I tried it, though.

    The fix that worked

    unset JMX_PORT;bin/kafka-topics.sh --list --zookeeper zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1

    That is, run unset JMX_PORT; before the command. The compose file sets JMX_PORT for every broker, so each kafka-*.sh tool launched inside a container also tries to start a JMX agent on that same port, which the broker process already occupies; unsetting the variable avoids the clash.

    Tested: this works!

    For example:

    unset JMX_PORT; bin/kafka-topics.sh --create --zookeeper zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1 --replication-factor 1 --partitions 1 --topic mykafka
    Created topic mykafka.
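
    With JMX_PORT unset, the remaining CLI tools work the same way. A quick produce/consume round trip on the new topic (run inside broker1 from /opt/kafka_2.11-2.0.0; the message text is arbitrary):

    unset JMX_PORT
    bin/kafka-topics.sh --describe --zookeeper zoo1:2181/kafka1 --topic mykafka
    echo "hello kafka" | bin/kafka-console-producer.sh --broker-list broker1:9092 --topic mykafka
    bin/kafka-console-consumer.sh --bootstrap-server broker1:9092 --topic mykafka --from-beginning --max-messages 1
    # prints "hello kafka", then exits after one message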
    

    Verify the Kafka Manager UI

    Open http://localhost:9000 in a browser and check that the Kafka Manager page appears.

    (Screenshot: the Kafka Manager UI. In this screenshot a Kafka cluster has already been added; on a fresh install the page is empty.)
