  • Collecting nginx logs with Logstash and writing them to Kafka (1)

    Experiment overview:

        Logstash collects nginx access logs and writes them to Kafka; a Logstash instance on another host then reads the logs from Kafka and writes them to Elasticsearch.

    1 Collecting logs with Logstash and writing them to Kafka

    1.1.1 Write the Logstash configuration file

    [root@localhost ~]# cat /etc/logstash/conf.d/nginx-kafka.conf
    input {
      file {
        path => "/opt/vhosts/fatai/logs/access_json.log"
        start_position => "beginning"
        type => "nginx-accesslog"
        codec => "json"
        stat_interval => "2"
      }
    }
    output {
      kafka {
        bootstrap_servers => "192.168.10.10:9092"
        topic_id => "nginx-access-kafkaceshi"
        codec => "json"
      }
    }
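    The file input above uses `codec => "json"`, which assumes nginx writes each access log entry as one JSON object per line. The original post does not show the nginx side; a hedged sketch of such a `log_format` (the field names and variable choices here are illustrative assumptions, not taken from the original):

```nginx
# Hypothetical nginx.conf fragment: write access log entries as JSON,
# one object per line, so Logstash's json codec can parse them directly.
log_format access_json '{"@timestamp":"$time_iso8601",'
    '"clientip":"$remote_addr",'
    '"host":"$host",'
    '"url":"$uri",'
    '"status":"$status",'
    '"size":$body_bytes_sent,'
    '"responsetime":$request_time}';

server {
    listen 80;
    access_log /opt/vhosts/fatai/logs/access_json.log access_json;
}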

    1.1.2 Validate the configuration and restart Logstash

    [root@localhost ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx-kafka.conf -t
    WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
    Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
    Configuration OK
    [root@localhost ~]# systemctl restart logstash.service 

    1.1.3 Verify the topic on the Kafka side

    [root@DNS-Server tools]# /tools/kafka/bin/kafka-topics.sh --list  --zookeeper 192.168.10.10:2181,192.168.10.167:2181,192.168.10.171:2181
    nginx-access-kafkaceshi
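    Beyond confirming that the topic exists, you can verify that events are actually arriving by consuming a message from it (a sketch, assuming the same Kafka install path and a broker reachable at 192.168.10.10:9092 as above):

```shell
# Consume a single message from the topic to confirm Logstash is producing.
/tools/kafka/bin/kafka-console-consumer.sh \
    --bootstrap-server 192.168.10.10:9092 \
    --topic nginx-access-kafkaceshi \
    --from-beginning --max-messages 1
```

    If the pipeline works, this prints one JSON access-log event and exits.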

    2 Reading logs from Kafka with Logstash and writing them to Elasticsearch

    2.1.1 Write the Logstash configuration file

    [root@Docker ~]# cat /etc/logstash/conf.d/nginx_kafka.conf
    input {
        kafka {
          bootstrap_servers => "192.168.10.10:9092"   # Kafka broker address
          topics => ["nginx-access-kafkaceshi"]       # topic(s) to subscribe to
          group_id => "nginx-access-kafkaceshi"       # consumer group id, user-defined
          codec => "json"                             # decode messages as JSON
          consumer_threads => 1                       # number of consumer threads
          decorate_events => true                     # add Kafka metadata (topic, offset, ...) to events
        }
    }
    output {
      if [type] == "nginx-accesslog" {                # "type" was set by the collecting Logstash
        elasticsearch {
          hosts => ["192.168.10.10:9200"]
          index => "nginx-accesslog-kafka-test-%{+YYYY.MM.dd}"
        }
      }
    }
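    The `%{+YYYY.MM.dd}` sprintf pattern in the index name is expanded per event, so Elasticsearch receives one index per day. Today's index name can be previewed in the shell (a rough equivalent; Logstash itself expands the pattern from the event's @timestamp, not the local clock):

```shell
# Preview what the daily Elasticsearch index name looks like for today.
idx="nginx-accesslog-kafka-test-$(date +%Y.%m.%d)"
echo "$idx"
```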

    2.1.2 Validate the configuration and restart

    [root@Docker ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx_kafka.conf -t
    WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
    Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
    Configuration OK
    [root@Docker ~]# systemctl restart logstash.service

    2.1.3 Verify in Elasticsearch
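    The original post leaves this verification step implicit; a common check is to query the `_cat/indices` API for the index pattern configured above (a sketch, assuming Elasticsearch listens on 192.168.10.10:9200 as in the output block):

```shell
# List indices matching the nginx-accesslog index pattern; a healthy
# pipeline shows one index per day with a growing docs.count.
curl -s 'http://192.168.10.10:9200/_cat/indices/nginx-accesslog-kafka-test-*?v'
```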

    Author: Yan Shicheng (闫世成)

    Source: http://cnblogs.com/yanshicheng

    Contact: yans121@sina.com

    The copyright of this article belongs jointly to the author and cnblogs. Reposting is welcome, but this notice must be retained without the author's consent, and a link to the original must appear in a prominent place on the page. For questions or suggestions, please contact the email above. Many thanks.
  • Original post: https://www.cnblogs.com/yanshicheng/p/9443149.html