  • Collecting nginx logs into Kafka with Logstash

    Lab overview:

        Logstash collects nginx logs and writes them to Kafka; Logstash on a second host then reads the logs from Kafka and writes them to Elasticsearch.

    Part 1: Logstash collects logs and writes them to Kafka

    1.1.1 Write the Logstash configuration file

    [root@localhost ~]# cat /etc/logstash/conf.d/nginx-kafka.conf
    input {
        file {
            path => "/opt/vhosts/fatai/logs/access_json.log"    # nginx access log, written as JSON
            start_position => "beginning"                       # read the file from the start on first run
            type => "nginx-accesslog"                           # type tag, matched on the consumer side
            codec => "json"                                     # parse each line as a JSON event
            stat_interval => "2"                                # check the file for new data every 2 seconds
        }
    }
    output {
        kafka {
            bootstrap_servers => "192.168.10.10:9092"           # Kafka broker address
            topic_id => "nginx-access-kafkaceshi"               # Kafka topic to write to
            codec => "json"                                     # serialize events as JSON
        }
    }
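
    The file input above assumes nginx is already writing access logs as one JSON object per line. A minimal nginx `log_format` along these lines would produce such a file; the field names here are illustrative, not taken from the original setup, and `escape=json` (nginx 1.11.8+) keeps embedded quotes valid JSON:

    ```
    log_format access_json escape=json
        '{"@timestamp":"$time_iso8601",'
        '"clientip":"$remote_addr",'
        '"request":"$request",'
        '"status":"$status",'
        '"size":"$body_bytes_sent",'
        '"referer":"$http_referer",'
        '"useragent":"$http_user_agent"}';

    access_log /opt/vhosts/fatai/logs/access_json.log access_json;
    ```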

    1.1.2 Validate the configuration and restart Logstash

    [root@localhost ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx-kafka.conf -t
    WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
    Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
    Configuration OK
    [root@localhost ~]# systemctl restart logstash.service 

    1.1.3 Verify the topic on the Kafka side

    [root@DNS-Server tools]# /tools/kafka/bin/kafka-topics.sh --list  --zookeeper 192.168.10.10:2181,192.168.10.167:2181,192.168.10.171:2181
    nginx-access-kafkaceshi
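
    Listing the topic only shows that it exists. To confirm that messages are actually arriving, a console consumer can be pointed at the same topic; a sketch against the cluster above (paths and addresses as in this lab):

    ```
    /tools/kafka/bin/kafka-console-consumer.sh \
        --bootstrap-server 192.168.10.10:9092 \
        --topic nginx-access-kafkaceshi \
        --from-beginning --max-messages 5
    ```

    Each printed message should be one JSON access-log event as produced by the collecting Logstash.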

    Part 2: Logstash reads the logs from Kafka and writes them to Elasticsearch

    2.1.1 Write the Logstash configuration file

    [root@Docker ~]# cat /etc/logstash/conf.d/nginx_kafka.conf
    input {
        kafka {
            bootstrap_servers => "192.168.10.10:9092"     # Kafka broker address
            topics => ["nginx-access-kafkaceshi"]         # topic(s) to subscribe to (takes an array)
            group_id => "nginx-access-kafkaceshi"         # consumer group id, free to choose
            codec => "json"                               # decode messages as JSON
            consumer_threads => 1                         # number of consumer threads
            decorate_events => true                       # add Kafka metadata (topic, offset, ...) to events
        }
    }
    output {
        if [type] == "nginx-accesslog" {                  # "type" was set by the collecting Logstash
            elasticsearch {
                hosts => ["192.168.10.10:9200"]
                index => "nginx-accesslog-kafka-test-%{+YYYY.MM.dd}"
            }
        }
    }
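
    The `%{+YYYY.MM.dd}` suffix in the index name makes Logstash write one index per day, derived from each event's `@timestamp`. A small Python sketch of how that pattern expands (the date is just an example):

    ```python
    from datetime import datetime, timezone

    def daily_index(prefix: str, ts: datetime) -> str:
        """Expand a Logstash-style %{+YYYY.MM.dd} suffix for a given event timestamp."""
        return f"{prefix}-{ts.strftime('%Y.%m.%d')}"

    # An event stamped 2018-08-08 lands in that day's index.
    print(daily_index("nginx-accesslog-kafka-test", datetime(2018, 8, 8, tzinfo=timezone.utc)))
    # -> nginx-accesslog-kafka-test-2018.08.08
    ```

    Daily indices keep each index small and make retention simple: old days can be dropped by deleting whole indices.
    
    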

    2.1.2 Validate the configuration and restart

    [root@Docker ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx_kafka.conf -t
    WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
    Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
    Configuration OK
    [root@Docker ~]# systemctl restart logstash.service

    2.1.3 Verify in Elasticsearch
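
    The original post shows this step as a screenshot. From the command line, the daily index can be checked with the `_cat` API (the index date in the output will depend on when events were written):

    ```
    curl -s 'http://192.168.10.10:9200/_cat/indices/nginx-accesslog-kafka-test-*?v'
    ```

    A healthy setup lists one `nginx-accesslog-kafka-test-YYYY.MM.dd` index with a growing document count.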

    Author: 闫世成

    Source: http://cnblogs.com/yanshicheng

    Contact: yans121@sina.com

    The copyright of this article is shared by the author and cnblogs (博客园). Reposting is welcome, but this notice must be retained unless the author agrees otherwise, and a clearly visible link to the original must appear on the page. For questions or suggestions, please contact the email above. Many thanks.
  • Original article: https://www.cnblogs.com/yanshicheng/p/9443129.html