
    ELK Supplement: Filebeat

    https://www.elastic.co/cn/downloads/past-releases/filebeat-6-5-4
    https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.5.4-x86_64.rpm

    Filebeat is written in Go and needs no JDK, which makes it a good fit for hosts without a Java environment and for very large log volumes.

    Using Filebeat instead of Logstash to collect logs

    Installation

    yum install filebeat-6.5.4-x86_64.rpm
    cd /etc/filebeat && cp filebeat.yml{,.bak}

    Configure Filebeat

    grep -E -v "#|^$" filebeat.yml

    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - /var/log/messages
      exclude_files: ['.gz$']   # skip rotated, compressed logs
      fields:                   # custom fields, used later for conditional routing in Logstash
        type: syslog
        host: "192.168.10.102"
    
    filebeat.config.modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: false
    setup.template.settings:
      index.number_of_shards: 3
    setup.kibana:
    processors:
      - add_host_metadata: ~
      - add_cloud_metadata: ~
    
    output.file:
      path: /tmp
      filename: filebeat.log
      
    systemctl restart filebeat
    ls /tmp/filebeat.log
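
    The file output writes each event as one JSON document per line, so a quick tail is enough to confirm that collection is working:

    tail -1 /tmp/filebeat.log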

    Collecting a single log type with Filebeat and writing it to Redis

    Configure Filebeat to write logs to Redis

    grep -E -v "#|^$" filebeat.yml
    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - /var/log/messages
      exclude_files: ['.gz$']
      fields:
        type: syslog
        host: "192.168.10.102"
    
    filebeat.config.modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: false
    setup.template.settings:
      index.number_of_shards: 3
    setup.kibana:
    processors:
      - add_host_metadata: ~
      - add_cloud_metadata: ~
    
    output.redis:
      hosts: ["192.168.10.254:6379"]
      key: "filebeat"
      db: 1
      timeout: 5
      password: password
      
    systemctl restart filebeat
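
    To confirm that events are reaching Redis, check the length of the list with redis-cli (LLEN returns the number of queued events; -n 1 selects db 1 to match the config above):

    redis-cli -h 192.168.10.254 -a password -n 1 LLEN filebeat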

    Configure another Logstash server to read the data from Redis

    cat redis-es.conf 
    input {
      redis {
        data_type => "list"
        key => "filebeat"
        host => "192.168.10.254"
        port => "6379"
        db => "1"
        password => "password"
      }
    }
    
    output {
      if [fields][type] == "syslog" {
        elasticsearch {
          hosts => ["192.168.10.100:9200"]
          index => "filebeat-syslog-%{+YYYY.MM.dd}"
        }
      }
    }
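
    Before restarting, the -t flag asks Logstash to test the configuration and exit, which catches syntax errors early:

    /usr/share/logstash/bin/logstash -f redis-es.conf -t
    systemctl restart logstash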

    Add the index

    (Kibana screenshots: creating the index pattern)

    Verify

    (Kibana screenshots)

    Collecting multiple log types with Filebeat and writing them to Redis

    Configure Filebeat to write logs to Redis

    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - /var/log/messages
      exclude_files: ['.gz$']
      fields:
        type: syslog
        host: "192.168.10.102"
    
    - type: log
      enabled: true
      paths:
        - /usr/local/nginx/logs/access_json.log
      fields:
        type: nginx-accesslog
        host: "192.168.10.102"
    
    filebeat.config.modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: false
    setup.template.settings:
      index.number_of_shards: 3
    setup.kibana:
    processors:
      - add_host_metadata: ~
      - add_cloud_metadata: ~
    
    output.redis:
      hosts: ["192.168.10.254:6379"]
      key: "filebeat"
      db: 1
      timeout: 5
      password: password
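
    Restart Filebeat to apply the new inputs:

    systemctl restart filebeat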

    Configure another Logstash server to read the data from Redis

    input {
      redis {
        data_type => "list"
        key => "filebeat"
        host => "192.168.10.254"
        port => "6379"
        db => "1"
        password => "password"
        codec => "json"
      }
    }
    
    output {
      if [fields][type] == "nginx-accesslog" {
        elasticsearch {
          hosts => ["192.168.10.100:9200"]
          index => "filebeat-nginx-accesslog-%{+YYYY.MM.dd}"
          codec => "json"
        }
      }
      if [fields][type] == "syslog" {
        elasticsearch {
          hosts => ["192.168.10.100:9200"]
          index => "filebeat-syslog-%{+YYYY.MM.dd}"
        }
      }
    }
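
    Once both pipelines are running, both indices should appear in Elasticsearch:

    curl -s http://192.168.10.100:9200/_cat/indices?v | grep filebeat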

    Add the index

    (Kibana screenshots: creating the index patterns)

    Verify

    (Kibana screenshots)

    Log collection in practice

    (diagram: the Filebeat → Logstash → Redis → Logstash → Elasticsearch pipeline built below)

    Official documentation: https://www.elastic.co/guide/en/beats/filebeat/current/logstash-output.html

    Configure Filebeat to write logs to Logstash on port 5044

    [root@logstash1 filebeat]# cat filebeat.yml
    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - /var/log/messages
      exclude_files: ['.gz$']
      fields:
        type: syslog
        host: "192.168.10.102"
    
    - type: log
      enabled: true
      paths:
        - /usr/local/nginx/logs/access_json.log
      fields:
        type: nginx-accesslog
        host: "192.168.10.102"
    
    filebeat.config.modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: false
    setup.template.settings:
      index.number_of_shards: 3
    setup.kibana:
    processors:
      - add_host_metadata: ~
      - add_cloud_metadata: ~
    
    output.logstash:
      hosts: ["192.168.10.102:5044"]
      #hosts: ["localhost:5044", "localhost:5045"] # Logstash server addresses; more than one may be listed
      enabled: true   # enable the Logstash output (true is the default)
      worker: 1  # number of worker threads
      compression_level: 3 # gzip compression level
      #loadbalance: true # load-balance when several outputs are configured
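
    Filebeat 6.x also ships a test subcommand, which is worth running before a restart; test output attempts a real connection to the configured Logstash endpoint:

    filebeat test config -c /etc/filebeat/filebeat.yml
    filebeat test output -c /etc/filebeat/filebeat.yml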

    Logstash: listen on port 5044 and write the logs to the Redis cache

    cat redis.conf 
    input {
      beats {
        port => 5044
        codec => "json"
      }
    }
    
    output {
      if [fields][type] == "nginx-accesslog" {
        redis {
          data_type => "list"
          key => "filebeat"
          host => "192.168.10.254"
          port => "6379"
          db => "1"
          password => "password"
          codec => "json"
        }
      }
      if [fields][type] == "syslog" {   # without this branch, the syslog events received on 5044 would be dropped
        redis {
          data_type => "list"
          key => "filebeat"
          host => "192.168.10.254"
          port => "6379"
          db => "1"
          password => "password"
          codec => "json"
        }
      }
    }
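
    After restarting Logstash, confirm it is listening on the Beats port:

    systemctl restart logstash
    ss -tnlp | grep 5044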

    Configure another Logstash server to read the data from Redis

    input {
      redis {
        data_type => "list"
        key => "filebeat"
        host => "192.168.10.254"
        port => "6379"
        db => "1"
        password => "password"
        codec => "json"
      }
    }
    
    output {
      if [fields][type] == "nginx-accesslog" {
        elasticsearch {
          hosts => ["192.168.10.100:9200"]
          index => "filebeat-nginx-accesslog-%{+YYYY.MM.dd}"
          codec => "json"
        }
      }
      if [fields][type] == "syslog" {
        elasticsearch {
          hosts => ["192.168.10.100:9200"]
          index => "filebeat-syslog-%{+YYYY.MM.dd}"
        }
      }
    }

    Verify in Kibana

    (Kibana screenshots)

    Proxying Kibana through Nginx with login authentication

    vim /usr/lib/systemd/system/nginx.service
    PIDFile=/run/nginx.pid # must match the pid path set in the nginx configuration file

    Generate the login credentials

    yum install httpd-tools -y
    htpasswd -bc /usr/local/nginx/conf/htpasswd.users user1 pass
    htpasswd -b /usr/local/nginx/conf/htpasswd.users user2 pass
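
    The resulting file stores one user:encrypted-password line per account:

    cat /usr/local/nginx/conf/htpasswd.users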

    Configure login authentication in Nginx

    vim /usr/local/nginx/conf/nginx.conf
    include /usr/local/nginx/conf/conf.d/*.conf;   # add inside the http {} block
    
    mkdir  /usr/local/nginx/conf/conf.d/
    vim /usr/local/nginx/conf/conf.d/kibana.conf
    upstream kibana_server {
        server 127.0.0.1:5601 weight=1 max_fails=3 fail_timeout=60;
    }
    
    server {
        listen 80;
        server_name www.to-kibana.com;
        auth_basic "Restricted Access";
        auth_basic_user_file /usr/local/nginx/conf/htpasswd.users;
        location / {
            proxy_pass http://kibana_server;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }
    
    chown  www.www /usr/local/nginx/ -R
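
    Check the configuration and restart Nginx (paths assume the source-built layout used above):

    /usr/local/nginx/sbin/nginx -t
    systemctl restart nginx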

    Kibana configuration file

    [root@e2 sbin]# grep "^[a-zA-Z]" /etc/kibana/kibana.yml
    server.port: 5601
    server.host: "127.0.0.1"
    elasticsearch.url: "http://192.168.10.100:9200"
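
    Restart Kibana and confirm it listens only on the loopback address, so it can only be reached through the Nginx proxy:

    systemctl restart kibana
    ss -tnl | grep 5601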

    Verify

    firefox www.to-kibana.com   # the client needs a hosts entry resolving www.to-kibana.com to the Nginx server
    (Kibana screenshots)

    Using a map to show the cities client IPs come from

    http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz
    Note: the index name must start with logstash-, e.g. logstash-xxxxxxxxxxxxxxxxx, because the default Logstash index template, which maps the geoip fields to the geo_point type, only applies to logstash-* indices.

    tar xf GeoLite2-City.tar.gz
    chown logstash.logstash GeoLite2-City_20190820/ -R
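
    The Logstash configuration further down references the database under /etc/logstash, so move the extracted directory there:

    mv GeoLite2-City_20190820 /etc/logstash/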

    (Kibana screenshots: building the map visualization)

    Kibana visualizations

    Nginx status codes for the current day

    (Kibana screenshot)

    Top 10 client IPs for the current day

    (Kibana screenshot)

    Dashboard of the day's Nginx status codes

    (Kibana screenshots)

    Writing logs to a database

    Writing to a database persists important data, such as status codes, client IPs, and client browser versions, for later use in monthly statistics and similar reports.

    Create the database and grant access

    create database elk  character set utf8 collate utf8_bin;
    grant all privileges on elk.* to elk@"%" identified by 'pass';
    flush  privileges;

    Connect to the database and create the table

    mysql -hnode -uelk -ppass
    
    create table elklog (
      clientip varchar(128),
      upstreamtime varchar(128),
      url varchar(256),
      status int(16),
      time TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
    ) DEFAULT CHARSET=utf8 COLLATE=utf8_bin;

    Set up the mysql-connector-java package for Logstash

    https://dev.mysql.com/downloads/connector/j/

    yum install mysql-connector-java-8.0.17-1.el7.noarch.rpm
    rpm -qpl mysql-connector-java-8.0.17-1.el7.noarch.rpm   # list the files the package ships
    mkdir -pv /usr/share/logstash/vendor/jar/jdbc
    cp /usr/share/java/mysql-connector-java.jar /usr/share/logstash/vendor/jar/jdbc/
    chown logstash: /usr/share/logstash/vendor/jar/ -R

    Install and configure the plugin

    /usr/share/logstash/bin/logstash-plugin list # list all currently installed plugins
    /usr/share/logstash/bin/logstash-plugin   install  logstash-output-jdbc
    Validating logstash-output-jdbc
    Installing logstash-output-jdbc
    Installation successful
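
    Confirm that the output plugin is now present:

    /usr/share/logstash/bin/logstash-plugin list | grep jdbc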

    Configure Logstash to write logs to the database

    [root@logstash1 conf.d]# cat redis.conf 
    input {
      redis {
        data_type => "list"
        key => "filebeat"
        host => "192.168.10.254"
        port => "6379"
        db => "1"
        password => "password"
        codec => "json"
      }
    }
    
    filter {
      if [fields][type] == "nginx-accesslog" {
        geoip {
          source => "clientip"
          target => "geoip"
          database => "/etc/logstash/GeoLite2-City_20190820/GeoLite2-City.mmdb"
          add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
          add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
        }
        mutate {
          convert => [ "[geoip][coordinates]", "float" ]
        }
      }
    }
    
    output {
      if [fields][type] == "nginx-accesslog" {
        elasticsearch {
          hosts => ["192.168.10.100:9200"]
          index => "logstash-nginx-accesslog-%{+YYYY.MM.dd}"
          codec => "json"
        }
        jdbc {
          connection_string => "jdbc:mysql://192.168.10.254/elk?user=elk&password=pass&useUnicode=true&characterEncoding=UTF8"
          statement => ["INSERT INTO elklog(clientip,upstreamtime,url,status) VALUES(?,?,?,?)","clientip","upstreamtime","url","status"]
        }
      }
      if [fields][type] == "syslog" {
        elasticsearch {
          hosts => ["192.168.10.100:9200"]
          index => "filebeat-syslog-%{+YYYY.MM.dd}"
        }
      }
    }
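
    As before, test the configuration before restarting:

    /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf -t
    systemctl restart logstash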

    Verify that data is written to the database

    (Navicat for MySQL screenshot)
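
    Without Navicat, a quick check from the mysql command line works just as well (the columns match the table created earlier):

    mysql -hnode -uelk -ppass elk -e 'select clientip,url,status,time from elklog order by time desc limit 5;'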

    Script to automatically delete old indices

    cat rm.sh
    #!/bin/bash
    
    IP=192.168.10.100
    DAYS=365
    DATE=$(date -d "${DAYS} days ago" +%Y.%m.%d)   # the date ${DAYS} days ago, e.g. for 2019.8.22 with DAYS=1 this is 2019.8.21
    LOG_NAME="logstash-nginx-accesslog"
    FILE_NAME=${LOG_NAME}-${DATE}   # name of the index to delete
    
    curl -XDELETE http://${IP}:9200/${FILE_NAME}   # ask Elasticsearch to delete the index
    echo "${FILE_NAME} delete success"
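
    To run the cleanup daily, add a cron entry; the script path /root/rm.sh here is an assumption:

    echo '30 1 * * * /bin/bash /root/rm.sh' >> /var/spool/cron/root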