  • Filebeat + Logstash: custom multiple indices

    Option 1 (recommended)

    [root@elk-node-1 filebeat]# cat filebeat.yml | egrep -v "^$|^#|#"
    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - /opt/app/nginx/logs/elk.log
      fields:
        service: nginx
    - type: log
      enabled: true
      paths:
        - /var/log/cron
      fields:
        service: cron
    filebeat.config.modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: false
    setup.template.settings:
      index.number_of_shards: 1
    setup.kibana:
    output.logstash:
      hosts: ["10.0.0.61:5044"]
    [root@elk-node-1 filebeat]#
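    Before restarting Filebeat it is worth confirming that the YAML parses and that the Logstash endpoint is reachable. A minimal check, assuming the file above is the one Filebeat loads (adjust the -c path if your install keeps it elsewhere):

    # Syntax-check the configuration file
    [root@elk-node-1 filebeat]# filebeat test config -c /etc/filebeat/filebeat.yml
    # Verify that the Logstash output (10.0.0.61:5044) accepts connections
    [root@elk-node-1 filebeat]# filebeat test output -c /etc/filebeat/filebeat.yml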

    [root@elk-node-1 config]# cat logstash.conf
    input {
      beats {
        port => "5044"
      }
    }

    output {
      # On output: if fields.service equals "nginx", write to the daily nginx index
      if [fields][service] == "nginx" {
        elasticsearch {
          hosts => ["10.0.0.61:9200"]
          index => "test-yunshi-ht-ngin-%{+YYYY.MM.dd}"
        }
      }
      else if [fields][service] == "cron" {
        elasticsearch {
          hosts => ["10.0.0.61:9200"]
          index => "test-yunshi-ht-cron-%{+YYYY.MM.dd}"
        }
      }
    }
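    Once events are flowing, the per-service indices can be confirmed directly against Elasticsearch. A quick check, assuming the test-yunshi-ht- index prefix from the pipeline above:

    # List the daily indices created by the two conditional outputs
    [root@elk-node-1 config]# curl -s 'http://10.0.0.61:9200/_cat/indices/test-yunshi-ht-*?v'
    # Fetch one nginx document to confirm the fields.service tag survived
    [root@elk-node-1 config]# curl -s 'http://10.0.0.61:9200/test-yunshi-ht-ngin-*/_search?size=1&pretty'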

    Option 2 (not recommended): these deprecated settings still work, but they are scheduled to be removed from Logstash in a future release. Document types were deprecated in Elasticsearch 6.0 and removed entirely in 7.0.

    In Filebeat, add a document_type setting to each input to define an identifier:

    - input_type: log
      # Paths that should be crawled and fetched. Glob based paths.
      paths:
        - /var/logs/xx.log
      document_type: xx

    - input_type: log
      paths:
        - /data/logs/aa.log
      document_type: aa

    Then match the corresponding type in the Logstash output:

    output {
        if [type] == "xx" {
            elasticsearch {
                hosts => ["*.*.*.*:9200"]
                index => "xx-%{+YYYY.MM.dd}"
                document_type => "log"
            }
        }
        if [type] == "aa" {
            elasticsearch {
                hosts => ["*.*.*.*:9200"]
                index => "aa-%{+YYYY.MM.dd}"
                document_type => "log"
            }
        }
    }
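    If the [type] conditionals are still needed on Filebeat 6.x or later, where document_type no longer exists, roughly the same routing can be reproduced by promoting a custom field to the event root. A sketch, not from the original post:

    # filebeat.yml (Filebeat 6+): document_type is gone, so set a top-level
    # "type" value via fields + fields_under_root so the [type] conditionals
    # in the Logstash output above keep matching.
    filebeat.inputs:
    - type: log
      paths:
        - /var/logs/xx.log
      fields:
        type: xx
      fields_under_root: true

    This keeps the Logstash side unchanged apart from dropping the deprecated document_type option in the elasticsearch output.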
