  • Displaying nginx logs from ES in Grafana, with a world-map view

    1. nginx configuration

    Log format: define a JSON log format so each access-log line can be parsed directly downstream. Two formats are shown; they differ only in the xff field (main records $http_x_forwarded_for, aka_logs copies $remote_addr into it). On nginx 1.11.8+ you may also add escape=json to log_format to keep the output valid JSON when values contain quotes.
    log_format main
        '{"@timestamp":"$time_iso8601",'
        '"host":"$hostname",'
        '"server_ip":"$server_addr",'
        '"client_ip":"$remote_addr",'
        '"xff":"$http_x_forwarded_for",'
        '"domain":"$host",'
        '"url":"$uri",'
        '"referer":"$http_referer",'
        '"args":"$args",'
        '"upstreamtime":"$upstream_response_time",'
        '"responsetime":"$request_time",'
        '"request_method":"$request_method",'
        '"status":"$status",'
        '"size":"$body_bytes_sent",'
        '"request_body":"$request_body",'
        '"request_length":"$request_length",'
        '"protocol":"$server_protocol",'
        '"upstreamhost":"$upstream_addr",'
        '"file_dir":"$request_filename",'
        '"http_user_agent":"$http_user_agent"'
      '}';
    log_format aka_logs
        '{"@timestamp":"$time_iso8601",'
        '"host":"$hostname",'
        '"server_ip":"$server_addr",'
        '"client_ip":"$remote_addr",'
        '"xff":"$remote_addr",'
        '"domain":"$host",'
        '"url":"$uri",'
        '"referer":"$http_referer",'
        '"args":"$args",'
        '"upstreamtime":"$upstream_response_time",'
        '"responsetime":"$request_time",'
        '"request_method":"$request_method",'
        '"status":"$status",'
        '"size":"$body_bytes_sent",'
        '"request_body":"$request_body",'
        '"request_length":"$request_length",'
        '"protocol":"$server_protocol",'
        '"upstreamhost":"$upstream_addr",'
        '"file_dir":"$request_filename",'
        '"http_user_agent":"$http_user_agent"'
      '}';
    
        access_log  /var/log/nginx/access.log  aka_logs;
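
    To sanity-check the format, reload nginx and confirm each new access-log line parses as JSON. A minimal check, assuming jq is installed and at least one request has already been logged:

    nginx -t && nginx -s reload
    tail -n 1 /var/log/nginx/access.log | jq .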
    
    

    2. Install filebeat on the nginx host

    Any version that matches your ES cluster is fine; there is no hard requirement beyond that.
    Installation steps omitted; a hedged sketch follows below.
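
    A hedged rpm install sketch; the 7.6.2 version here is an assumption, pick the release matching your ES cluster:

    rpm -ivh https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.6.2-x86_64.rpm
    systemctl enable filebeat
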
    filebeat configuration
    # cat /etc/filebeat/filebeat.yml
    
    #=========================== Filebeat inputs =============================
    filebeat.inputs:                   # inputs is plural; more than one input type can be defined
    - type: log                        # input type
      enabled: true                    # enable this input
      # enable the two json options below when the log lines are JSON
      json.keys_under_root: true       # defaults to false, which nests the parsed JSON under a json key; true puts all keys at the document root
      json.overwrite_keys: true        # whether to overwrite existing keys; this is the key setting: with keys_under_root true, setting this to true lets the JSON fields override filebeat's default keys
      max_bytes: 20480                 # size limit for a single log line, worth setting (default 10MB; queue.mem.events * max_bytes bounds part of the memory footprint)
      paths:
        - /var/log/nginx/access.log    # watch the nginx access log

    #  json.keys_under_root: true
    #  json.overwrite_keys: true
    #  json.add_error_key: true
      fields:                          # extra fields
        source: nginx           # custom source field, used to build the ES index name (keep the value lowercase; as I recall, uppercase did not work)
    
    # custom ES index names require ILM to be disabled
    setup.ilm.enabled: false
    
    #-------------------------- Kafka output ------------------------------
    output.kafka:            # output to kafka
      enabled: true          # whether this output is enabled
      hosts: ["10.0.0.11:9092", "10.0.0.12:9092", "10.0.0.13:9092"]  # kafka broker list; adjust to your environment
      topic: "elk-%{[fields.source]}"   # kafka creates this topic; logstash (which can filter and modify events) passes it on to ES as the index name; adjust to your needs
      partition.hash:
        reachable_only: true # publish only to reachable partitions
      compression: gzip      # compression
      max_message_bytes: 1000000  # maximum bytes per event; default 1000000; must not exceed the kafka broker's message.max.bytes
      required_acks: 1  # kafka ack level
      worker: 1  # maximum concurrency of the kafka output
      bulk_max_size: 2048    # maximum number of events sent to kafka in one batch
    logging.to_files: true   # write filebeat's own logs to files (default true); files are rotated when the size limit is reached; details: https://www.cnblogs.com/qinwengang/p/10982424.html
    close_older: 30m         # close the handle of a file that has not been updated within this window; default 1h
    force_close_files: false # close a file handle when the file name changes; recommended true only on Windows
    
    # how long after the last new log line to close the file handle; default 5m, lowered to 1m here to release handles faster
    close_inactive: 1m
    
    # if a file has still not finished transmitting after 3h, force-close its handle; this setting is the key point for the slow-file case described above
    close_timeout: 3h
    
    # should also be set; the default 0 means never clean up, i.e. registry entries for harvested files are kept forever, so the registry grows over time and can cause problems
    clean_inactive: 72h
    
    # once clean_inactive is set, ignore_older must be set too, and ignore_older must be less than clean_inactive
    ignore_older: 70h
    
    # limit CPU and memory usage
    max_procs: 1 # restrict filebeat to one CPU core so it does not crowd out the application
    queue.mem.events: 256 # number of events held in the in-memory queue awaiting dispatch (default 4096)
    queue.mem.flush.min_events: 128
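
    Before starting filebeat, verify the config parses and the kafka output is reachable (both subcommands ship with filebeat):

    filebeat test config -c /etc/filebeat/filebeat.yml
    filebeat test output -c /etc/filebeat/filebeat.yml
    systemctl restart filebeat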
    
    
    

    3. Install your own Kafka and ES clusters (steps omitted)
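
    Once filebeat is running, confirm events are actually landing in Kafka. A quick hedged check, assuming a Kafka version whose CLI tools accept --bootstrap-server and the topic name produced by the filebeat config above:

    kafka-topics.sh --bootstrap-server 10.0.0.11:9092 --list | grep elk-
    kafka-console-consumer.sh --bootstrap-server 10.0.0.11:9092 --topic elk-nginx --from-beginning --max-messages 1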

    4. Configure logstash

    Installation steps omitted.
    cat /etc/logstash/conf.d/  # the commented-out sections are kept on purpose; they still contain a few bugs
    
    input {                                        # input section
        kafka {                                    # consume from kafka
            bootstrap_servers => ["10.0.0.11:9092,10.0.0.12:9092,10.0.0.13:9092"]
            #topics => "%{[@metadata][topic]}"     # use the topic passed along from kafka
            topics_pattern => "elk-.*"             # match topics by regex
            codec => "json"                        # data format
            consumer_threads => 3                  # number of consumer threads
            decorate_events => true                # add kafka metadata (topic, message size, ...) to each logstash event, in a field named kafka
            auto_offset_reset => "latest"          # reset the offset to the latest offset
            group_id => "logstash-groups1"         # consumer group ID; logstash instances sharing a group_id form one consumer group
            client_id => "logstash1"               # client ID
            fetch_max_wait_ms => "1000"            # how long the server blocks a fetch request when there is not enough data to satisfy fetch_min_bytes
      }
    }
    
    
    #filter {
    #   # nginx sits behind a load balancer here, so $remote_addr is not the user's real IP
    #   # instead, take $http_x_forwarded_for; its first IP is the user's real address
    #   # on top of the nginx fields, add a real_remote_addr field to hold the real client IP
    #   if ([fields][source] =~ "nginx-access") {
    #     if "," in [xff] {
    #        mutate {
    #          split => ["xff", ","]
    #          add_field => { "real_remote_addr" => "%{[xff][0]}" }
    #        }
    #     } else if ([xff] == "-") {
    #        mutate {
    #          add_field => { "real_remote_addr" => "-" }
    #        }
    #     } else {
    #        mutate {
    #          add_field => { "real_remote_addr" => "%{xff}" }
    #        }
    #     }
    #
    #     geoip {
    #       target => "geoip"
    #       source => "real_remote_addr"
    #       database => "/usr/share/logstash/data/GeoLite2-City/GeoLite2-City.mmdb"
    #       add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    #       add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    #       # drop the extra geoip fields from the output
    #       remove_field => ["[geoip][latitude]", "[geoip][longitude]", "[geoip][country_code]", "[geoip][country_code2]", "[geoip][country_code3]", "[geoip][timezone]", "[geoip][continent_code]", "[geoip][region_code]"]
    #     }
    #
    #     mutate {
    #       convert => {
    #         "[size]" => "integer"
    #         "[status]" => "integer"
    #         "[responsetime]" => "float"
    #         "[upstreamtime]" => "float"
    #         "[geoip][coordinates]" => "float"
    #       }
    #     }
    #
    #     # derive the client OS and browser version from http_user_agent
    #     useragent {
    #       source => "http_user_agent"
    #       target => "ua"
    #       # drop the useragent fields we don't need
    #       remove_field => [ "[ua][minor]","[ua][major]","[ua][build]","[ua][patch]","[ua][os_minor]","[ua][os_major]" ]
    #     }
    #   }
    #}
    
    
    filter {
      geoip {
        #multiLang => "zh-CN"
        target => "geoip"
        source => "client_ip"
        database => "/usr/share/logstash/data/GeoLite2-City/GeoLite2-City.mmdb"
        add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
        add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
        # drop the extra geoip fields from the output
        remove_field => ["[geoip][latitude]", "[geoip][longitude]", "[geoip][country_code]", "[geoip][country_code2]", "[geoip][country_code3]", "[geoip][timezone]", "[geoip][continent_code]", "[geoip][region_code]"]
      }
      mutate {
        convert => [ "size", "integer" ]
        convert => [ "status", "integer" ]
        convert => [ "responsetime", "float" ]
        convert => [ "upstreamtime", "float" ]
        convert => [ "[geoip][coordinates]", "float" ]
        # drop filebeat fields we don't need; be careful which fields the ES output still relies on, since removed fields can no longer be used in conditionals
        remove_field => [ "ecs","agent","host","cloud","@version","input","logs_type" ]
      }
      # derive the client OS and browser version from http_user_agent
      useragent {
        source => "http_user_agent"
        target => "ua"
        # drop the useragent fields we don't need
        remove_field => [ "[ua][minor]","[ua][major]","[ua][build]","[ua][patch]","[ua][os_minor]","[ua][os_major]" ]
      }
    }
    
    
    
    
    output {                                       # output section
        elasticsearch {
            # logstash output to ES
            hosts => ["10.0.0.11:9200", "10.0.0.12:9200", "10.0.0.13:9200"]
            index => "logstash-%{[fields][source]}-%{+YYYY-MM-dd}"      # built from a field carried in the event itself; note the index name drops the elk- prefix used for the kafka topic
            user => "elastic"
            password => "xxxxxxxxxx"
        }
       # stdout {
       # }
    }
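
    A hedged way to validate the pipeline before enabling it; the nginx.conf file name is an assumption, use whatever you saved the config as:

    /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/nginx.conf
    # once logstash has been running for a while, confirm the daily index exists
    curl -s -u elastic:xxxxxxxxxx 'http://10.0.0.11:9200/_cat/indices/logstash-*?v'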
    
    

    5. Install GeoLite2-City.mmdb

    https://github.com/wp-statistics/GeoLite2-City  # easy to find on GitHub
    Place the database at /usr/share/logstash/data/GeoLite2-City/GeoLite2-City.mmdb, as sketched below.
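
    A minimal sketch of fetching and placing the database; it assumes the repo ships the database gzipped as GeoLite2-City.mmdb.gz, so check the repo layout first:

    git clone https://github.com/wp-statistics/GeoLite2-City /tmp/GeoLite2-City
    mkdir -p /usr/share/logstash/data/GeoLite2-City
    gunzip -c /tmp/GeoLite2-City/GeoLite2-City.mmdb.gz > /usr/share/logstash/data/GeoLite2-City/GeoLite2-City.mmdb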
    

    6. Install Grafana

    Download Grafana. Docker would be more convenient, but this article patches some map plugin files on disk, so the rpm install is used instead.
    https://grafana.com/grafana/download/6.6.2  ## download 6.6.2; dashboard template 11190 works best with 6.6.2, so that is the version chosen here (see the install sketch below)
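
    A hedged install sketch; the rpm URL follows Grafana's usual release layout, so verify it on the download page first:

    wget https://dl.grafana.com/oss/release/grafana-6.6.2-1.x86_64.rpm
    yum localinstall -y grafana-6.6.2-1.x86_64.rpm
    systemctl enable --now grafana-server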
    

    7. Install the map-related plugins

    grafana-cli plugins install grafana-worldmap-panel
    grafana-cli plugins install grafana-piechart-panel
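
    List the installed plugins to confirm both are present, then restart Grafana so it loads them:

    grafana-cli plugins ls
    systemctl restart grafana-server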
    

    8. Fixing map tiles that do not display

    cd /var/lib/grafana/plugins
    
    
    # point the worldmap panel's tile URLs at basemaps.cartocdn.com instead of the fastly host
    sed -i 's|https://cartodb-basemaps-{s}.global.ssl.fastly.net/light_all/{z}/{x}/{y}.png|http://{s}.basemaps.cartocdn.com/light_all/{z}/{x}/{y}.png|' grafana-worldmap-panel/src/worldmap.ts grafana-worldmap-panel/dist/module.js grafana-worldmap-panel/dist/module.js.map
    
    
    sed -i 's|https://cartodb-basemaps-{s}.global.ssl.fastly.net/dark_all/{z}/{x}/{y}.png|http://{s}.basemaps.cartocdn.com/dark_all/{z}/{x}/{y}.png|' grafana-worldmap-panel/src/worldmap.ts grafana-worldmap-panel/dist/module.js grafana-worldmap-panel/dist/module.js.map
    
    Restart Grafana so the patched plugin files take effect.
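
    Assuming the rpm install from step 6, Grafana runs under systemd:

    systemctl restart grafana-server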
    

    9. Configure Grafana

    Add an Elasticsearch data source in Grafana pointing at the logstash indices, then import dashboard template 11190 and bind it to that data source.
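
    The data source can also be created through Grafana's HTTP API instead of the UI. A hedged example; the admin credentials, data-source name, and index pattern below are assumptions to adapt (the pattern follows the logstash index naming above, and esVersion 70 assumes an ES 7.x cluster):

    curl -s -u admin:admin -H 'Content-Type: application/json' \
      -X POST http://localhost:3000/api/datasources -d '{
        "name": "es-nginx",
        "type": "elasticsearch",
        "access": "proxy",
        "url": "http://10.0.0.11:9200",
        "basicAuth": true,
        "basicAuthUser": "elastic",
        "secureJsonData": {"basicAuthPassword": "xxxxxxxxxx"},
        "database": "logstash-nginx-*",
        "jsonData": {"timeField": "@timestamp", "esVersion": 70}
      }'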
    
