  • Apache Kafka Monitoring: Kafka Web Console

     

    Kafka Web Console is an open-source system whose source code is available at https://github.com/claudemamo/kafka-web-console. Kafka Web Console is also a Java web application, written in Scala, for monitoring Apache Kafka. Its functionality is very similar to KafkaOffsetMonitor, but judging from the source code its implementation is considerably more complex, and it is also more troublesome to build and configure than KafkaOffsetMonitor.
      To run this system, the prerequisites are:

    1. Play Framework 2.2.x
    2. Apache Kafka 0.8.x
    3. Zookeeper 3.3.3 or 3.3.4

      As before, we download the source code from https://github.com/claudemamo/kafka-web-console and build it with sbt. Before building, we need to make the following changes:
      1. Kafka Web Console uses H2 as its default database. It supports the following databases:

     

    
    
    • H2 (default)
    • PostgreSQL
    • Oracle
    • DB2
    • MySQL
    • Apache Derby
    • Microsoft SQL Server

     

    For convenience we can use a MySQL database. Only the following change is needed: open conf/application.conf and modify it as follows.
    
    
    Change this:

    db.default.driver=org.h2.Driver
    db.default.url="jdbc:h2:file:play"
    # db.default.user=sa
    # db.default.password=""

    to this:

    db.default.driver=com.mysql.jdbc.Driver
    db.default.url="jdbc:mysql://localhost:3306/test"
    db.default.user=root
    db.default.pass=123456
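    The database named in db.default.url must exist before the console starts. A minimal sketch of creating it with the mysql client (assuming a local MySQL server and root credentials matching the settings above; adjust the database name and character set to your needs):

    mysql -uroot -p -e "CREATE DATABASE IF NOT EXISTS test DEFAULT CHARACTER SET utf8;"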
    We also need to modify build.sbt to add the MySQL dependency:
    
    
    "mysql" % "mysql-connector-java" % "5.1.31"
     

     The modified build.sbt:

    
    
    name := "kafka-web-console"

    version := "2.1.0-SNAPSHOT"

    libraryDependencies ++= Seq(
      jdbc,
      cache,
      "org.squeryl" % "squeryl_2.10" % "0.9.5-6",
      "com.twitter" % "util-zk_2.10" % "6.11.0",
      "com.twitter" % "finagle-core_2.10" % "6.15.0",
      "org.quartz-scheduler" % "quartz" % "2.2.1",
      "org.apache.kafka" % "kafka_2.10" % "0.8.1.1"
        exclude("javax.jms", "jms")
        exclude("com.sun.jdmk", "jmxtools")
        exclude("com.sun.jmx", "jmxri"),
      // MySQL JDBC driver matching db.default.driver in application.conf
      "mysql" % "mysql-connector-java" % "5.1.31"
    )

    play.Project.playScalaSettings
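    After editing build.sbt, dependency resolution can be checked on its own with sbt's standard update task (optional — the sbt package step below resolves dependencies too, but update fails faster when an artifact cannot be fetched):

    # sbt update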

     2. Run the three files 1.sql, 2.sql and 3.sql under the conf/evolutions/default/bak directory. Note that these three SQL files cannot be run as-is: they contain syntax errors and need a few modifications (a sketch of applying the corrected scripts with the mysql client follows the third script below).

    The modified 1.sql:

    
    
    CREATE TABLE zookeepers (
      name VARCHAR(100),
      host VARCHAR(100),
      port INT(100),
      statusId INT(100),
      groupId INT(100),
      PRIMARY KEY (name)
    );

    CREATE TABLE groups (
      id INT(100),
      name VARCHAR(100),
      PRIMARY KEY (id)
    );

    CREATE TABLE status (
      id INT(100),
      name VARCHAR(100),
      PRIMARY KEY (id)
    );

    INSERT INTO groups (id, name) VALUES (0, 'ALL');
    INSERT INTO groups (id, name) VALUES (1, 'DEVELOPMENT');
    INSERT INTO groups (id, name) VALUES (2, 'PRODUCTION');
    INSERT INTO groups (id, name) VALUES (3, 'STAGING');
    INSERT INTO groups (id, name) VALUES (4, 'TEST');

    INSERT INTO status (id, name) VALUES (0, 'CONNECTING');
    INSERT INTO status (id, name) VALUES (1, 'CONNECTED');
    INSERT INTO status (id, name) VALUES (2, 'DISCONNECTED');
    INSERT INTO status (id, name) VALUES (3, 'DELETED');

    The modified 2.sql:

    
    
    ALTER TABLE zookeepers ADD COLUMN chroot VARCHAR(100);

    The modified 3.sql:

    
    
    ALTER TABLE zookeepers DROP PRIMARY KEY;
    ALTER TABLE zookeepers ADD COLUMN id int(100) NOT NULL AUTO_INCREMENT PRIMARY KEY;

    ALTER TABLE zookeepers MODIFY COLUMN name VARCHAR(100) NOT NULL;
    ALTER TABLE zookeepers MODIFY COLUMN host VARCHAR(100) NOT NULL;
    ALTER TABLE zookeepers MODIFY COLUMN port INT(100) NOT NULL;
    ALTER TABLE zookeepers MODIFY COLUMN statusId INT(100) NOT NULL;
    ALTER TABLE zookeepers MODIFY COLUMN groupId INT(100) NOT NULL;
    ALTER TABLE zookeepers ADD UNIQUE (name);

    CREATE TABLE offsetHistory (
      id int(100) AUTO_INCREMENT PRIMARY KEY,
      zookeeperId int(100),
      topic VARCHAR(255),
      FOREIGN KEY (zookeeperId) REFERENCES zookeepers(id),
      UNIQUE (zookeeperId, topic)
    );

    CREATE TABLE offsetPoints (
      id int(100) AUTO_INCREMENT PRIMARY KEY,
      consumerGroup VARCHAR(255),
      timestamp TIMESTAMP,
      offsetHistoryId int(100),
      -- `partition` is a reserved word in MySQL 5.6+, so it is quoted here
      `partition` int(100),
      offset int(100),
      logSize int(100),
      FOREIGN KEY (offsetHistoryId) REFERENCES offsetHistory(id)
    );

    CREATE TABLE settings (
      key_ VARCHAR(255) PRIMARY KEY,
      value VARCHAR(255)
    );

    INSERT INTO settings (key_, value) VALUES ('PURGE_SCHEDULE', '0 0 0 ? * SUN *');
    INSERT INTO settings (key_, value) VALUES ('OFFSET_FETCH_INTERVAL', '30');
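    One way to apply the three corrected scripts, sketched here with the mysql command-line client (this assumes the test database and root credentials from application.conf above, and that the scripts still sit under conf/evolutions/default/bak; adjust paths and credentials to your setup):

    cd kafka-web-console
    mysql -uroot -p test < conf/evolutions/default/bak/1.sql
    mysql -uroot -p test < conf/evolutions/default/bak/2.sql
    mysql -uroot -p test < conf/evolutions/default/bak/3.sql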

    In project/build.properties, sbt.version=0.13.0 should be changed to the sbt version you actually have installed; for example, I use sbt.version=0.13.15.

    Once all of the points above have been taken care of, we can build the downloaded source:

    # sbt package

      The build is fairly slow, and some dependencies download very slowly, so please be patient.

      During the build, some dependencies may fail to download, producing errors such as the following:
    
    
    [warn] module not found: com.typesafe.play#sbt-plugin;2.2.1
    [warn] ==== typesafe-ivy-releases: tried
    [warn]   http://repo.typesafe.com/typesafe/ivy-releases/com.typesafe.play/sbt-plugin/scala_2.9.2/sbt_0.12/2.2.1/ivys/ivy.xml
    [warn] ==== sbt-plugin-releases: tried
    [warn]   http://scalasbt.artifactoryonline.com/scalasbt/sbt-plugin-releases/com.typesafe.play/sbt-plugin/scala_2.9.2/sbt_0.12/2.2.1/ivys/ivy.xml
    [warn] ==== local: tried
    [warn]   /home/iteblog/.ivy2/local/com.typesafe.play/sbt-plugin/scala_2.9.2/sbt_0.12/2.2.1/ivys/ivy.xml
    [warn] ==== Typesafe repository: tried
    [warn]   http://repo.typesafe.com/typesafe/releases/com/typesafe/play/sbt-plugin_2.9.2_0.12/2.2.1/sbt-plugin-2.2.1.pom
    [warn] ==== public: tried
    [warn]   http://repo1.maven.org/maven2/com/typesafe/play/sbt-plugin_2.9.2_0.12/2.2.1/sbt-plugin-2.2.1.pom
    [warn] ::::::::::::::::::::::::::::::::::::::::::::::

    ==== local: tried

      /home/iteblog/.ivy2/local/org.scala-sbt/collections/0.13.0/jars/collections.jar

    ::::::::::::::::::::::::::::::::::::::::::::::

    :: FAILED DOWNLOADS ::

    :: ^ see resolution messages for details ^ ::

    ::::::::::::::::::::::::::::::::::::::::::::::

    :: org.scala-sbt#collections;0.13.0!collections.jar

    ::::::::::::::::::::::::::::::::::::::::::::::
      We can manually download the missing dependencies and put them under the corresponding directory, e.g. /home/iteblog/.ivy2/local/org.scala-sbt/collections/0.13.0/jars/, and then build again.
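    A sketch of that workaround for the collections.jar failure shown above (the download URL is illustrative only; substitute whichever repository or mirror actually hosts the artifact in your environment, and your own home directory for /home/iteblog):

    # recreate the local Ivy layout sbt expects, then drop the jar in place
    mkdir -p /home/iteblog/.ivy2/local/org.scala-sbt/collections/0.13.0/jars/
    wget -O /home/iteblog/.ivy2/local/org.scala-sbt/collections/0.13.0/jars/collections.jar \
        http://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/collections/0.13.0/jars/collections.jar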

    Before starting the console, the .sql files need to be removed, otherwise it will fail with an error on startup.
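    A hedged sketch of that step, assuming the files in question are the evolution scripts under conf/evolutions/default (moving them aside is a safer alternative to deleting them outright):

    mkdir -p conf/evolutions/default/bak
    mv conf/evolutions/default/*.sql conf/evolutions/default/bak/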


    Finally, we can start the Kafka Web Console monitoring system with the following command:

    # sbt run
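    If port 9000 is already taken, Play's run task accepts an alternative port (a sketch; the exact invocation can vary with your sbt/Play setup):

    # sbt "run 9001"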

    The console can then be viewed at http://localhost:9000 (or whichever port you chose). Below is a screenshot of the Zookeeper registration page, followed by the project's README:

     

    Before you can monitor a broker, you need to register the Zookeeper server associated with it:

    [screenshot: register zookeeper]

     

    Kafka Web Console

    Kafka Web Console is a Java web application for monitoring Apache Kafka. With a modern web browser, you can view from the console:

    • Registered brokers

    [screenshot: brokers]


    • Topics, partitions, log sizes, and partition leaders

    [screenshot: topics]


    • Consumer groups, individual consumers, consumer owners, partition offsets and lag

    [screenshot: topic]


    • Graphs showing consumer offset and lag history as well as consumer/producer message throughput history.

    [screenshot: topic]


    • Latest published topic messages (requires web browser support for WebSocket)

    [screenshot: topic feed]


    Furthermore, the console provides a JSON API described in RAML. The API can be tested using the embedded API Console accessible through the URL http://[hostname]:[port]/api/console.

    Requirements

    • Play Framework 2.2.x
    • Apache Kafka 0.8.x
    • Zookeeper 3.3.3 or 3.3.4

    Deployment

    Consult Play!'s documentation for deployment options and instructions.

    Getting Started

    1. Kafka Web Console requires a relational database. By default, the server connects to an embedded H2 database and no database installation or configuration is needed. Consult Play!'s documentation to specify a database for the console. The following databases are supported:

      • H2 (default)
      • PostgreSql
      • Oracle
      • DB2
      • MySQL
      • Apache Derby
      • Microsoft SQL Server

      Changing the database might necessitate making minor modifications to the DDL to accommodate the new database.

    2. Before you can monitor a broker, you need to register the Zookeeper server associated with it:

    [screenshot: register zookeeper]

    Filling in the form and clicking on Connect will register the Zookeeper server. Once the console has successfully established a connection with the registered Zookeeper server, it can retrieve all necessary information about brokers, topics, and consumers:

    [screenshot: zookeepers]

     
