  • 13 EBLK Log Analysis

    1. What EBLK stands for:

    • E  Elasticsearch (Java)
    • B  Filebeat (Go)
    • L  Logstash (Java/JRuby, runs on the JVM)
    • K  Kibana (JavaScript/Node.js)

    2. Log analysis requirements:

    1. Find the top 10 IPs and URLs by request count (see the query sketch after this list)
    2. Find the top 10 IPs and URLs between 10:00 and 12:00
    3. Compare with the same time window yesterday and see what changed
    4. Compare with the same day and time window last week
    5. Find how many requests came from search engines, and how many from each engine
    6. Request counts and response times for the key links of a given domain
    7. Breakdown of the site's HTTP status codes
    8. Find an attacker's IP: which pages it hit, when it arrived, when it left, and how many requests it made in total
    9. Deliver the results within 5 minutes
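
    A minimal query sketch for requirement 1, assuming the JSON field names (remote_addr, request) and the nginx-* index names that are set up later in this note; run it from Kibana Dev Tools:

    GET nginx-*/_search
    {
      "size": 0,
      "aggs": {
        "top10_ip":  { "terms": { "field": "remote_addr.keyword", "size": 10 } },
        "top10_url": { "terms": { "field": "request.keyword", "size": 10 } }
      }
    }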

    3. Configuration

    3.1 Single-node ES setup

    [Commands]

    systemctl stop elasticsearch
    
    rm -rf /var/lib/elasticsearch/*
    
    cat > /etc/elasticsearch/elasticsearch.yml << 'EOF'
    node.name: node-1
    path.data: /var/lib/elasticsearch
    path.logs: /var/log/elasticsearch
    network.host: 127.0.0.1,10.0.0.51
    http.port: 9200
    discovery.seed_hosts: ["10.0.0.51"]
    cluster.initial_master_nodes: ["10.0.0.51"]
    EOF
    
    systemctl start elasticsearch
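
    A quick health check after the restart (a minimal sketch, assuming ES is reachable on 10.0.0.51:9200 as configured above):

    curl -s 'http://10.0.0.51:9200/_cluster/health?pretty'
    curl -s 'http://10.0.0.51:9200/_cat/nodes?v'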
    

    3.2 Collecting nginx logs in the default format

    [Commands]

    vim /etc/filebeat/filebeat.yml
    
    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - /var/log/nginx/access.log
    output.elasticsearch:
      hosts: ["10.0.0.51:9200"]
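
    After saving the config, restart Filebeat and confirm that data is flowing (a rough check, assuming the default filebeat-* index name since no custom index is set here):

    systemctl restart filebeat
    curl -s 'http://10.0.0.51:9200/_cat/indices/filebeat-*?v'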
    

    3.3 Collecting nginx logs in JSON format

    [Commands]

    Current shortcomings:
    1. Log fields are not split out, so they cannot be displayed individually
    2. The index name cannot be customized
    
    What we want:
    1. Each log field can be displayed on its own
    
    $remote_addr             10.0.0.1
    -                        -
    $remote_user             -
    [$time_local]            [08/Oct/2020:10:27:44 +0800]
    $request                 GET /zhangya HTTP/1.1
    $status                  404
    $body_bytes_sent         555
    $http_referer            -
    $http_user_agent         Chrome
    $http_x_forwarded_for    -
    
    
    Steps:
    1. Stop filebeat and nginx
    systemctl stop filebeat nginx
    
    2. Truncate the nginx access log
    > /var/log/nginx/access.log
    
    3. Delete the existing ES index
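    For example, from Kibana Dev Tools (a hedged example: it assumes the default filebeat-* index created in 3.2, and wildcard deletes must be allowed by your cluster settings):
    DELETE filebeat-*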
    
    4. Change the nginx log format to JSON:
    log_format json '{ "time_local": "$time_local", '
                    '"remote_addr": "$remote_addr", '
                    '"referer": "$http_referer", '
                    '"request": "$request", '
                    '"status": $status, '
                    '"bytes": $body_bytes_sent, '
                    '"agent": "$http_user_agent", '
                    '"x_forwarded": "$http_x_forwarded_for", '
                    '"up_addr": "$upstream_addr",'
                    '"up_host": "$upstream_http_host",'
                    '"upstream_time": "$upstream_response_time",'
                    '"request_time": "$request_time"'
                    ' }';
    access_log  /var/log/nginx/access.log  json;
    
    
    5. Restart nginx
    nginx -t
    systemctl restart nginx
    
    6. Send a test request and check the log
    curl 127.0.0.1
    tail -f /var/log/nginx/access.log
    # The resulting log entry:
    { 
      "time_local": "08/Oct/2020:11:10:17 +0800", 
      "remote_addr": "127.0.0.1", 
      "referer": "-", 
      "request": "GET / HTTP/1.1", 
      "status": 200, 
      "bytes": 5, 
      "agent": "curl/7.29.0", 
      "x_forwarded": "-", 
      "up_addr": "-",
      "up_host": "-",
      "upstream_time": "-",
      "request_time": "0.000"
    }
    
    7. Update the filebeat config
    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - /var/log/nginx/access.log
      json.keys_under_root: true
      json.overwrite_keys: true
    
    output.elasticsearch:
      hosts: ["10.0.0.51:9200"]
    
    8. Restart filebeat
    systemctl restart filebeat
    
    9. Send a test request and verify
    
    10. In Kibana, delete the old index pattern and create a new one
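
    To confirm that the JSON fields are now split out, a quick check from Dev Tools (assuming the default filebeat-* index; a custom name is only added in 3.4):

    GET filebeat-*/_search
    {
      "size": 1,
      "sort": [ { "@timestamp": "desc" } ]
    }

    The returned document should carry remote_addr, request, status, etc. as individual fields.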
    

    3.4 Custom index name

    [Commands]

    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - /var/log/nginx/access.log
      json.keys_under_root: true
      json.overwrite_keys: true
    
    output.elasticsearch:
      hosts: ["10.0.0.51:9200"]
      index: "nginx-%{[agent.version]}-%{+yyyy.MM}"
    
    setup.ilm.enabled: false
    setup.template.enabled: false
    
    logging.level: info
    logging.to_files: true
    logging.files:
      path: /var/log/filebeat
      name: filebeat
      keepfiles: 7
      permissions: 0644
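
    After restarting Filebeat and sending a few requests, the custom index should show up (a quick check; the name follows the pattern configured above):

    curl -s 'http://10.0.0.51:9200/_cat/indices/nginx-*?v'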
    

    3.5 Index name per log type

    [Commands]

    Method 1: verbose
    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - /var/log/nginx/access.log
      json.keys_under_root: true
      json.overwrite_keys: true
    
    - type: log
      enabled: true
      paths:
        - /var/log/nginx/error.log
    
    processors:
      - drop_fields:
          fields: ["ecs","log"] 
    
    output.elasticsearch:
      hosts: ["10.0.0.51:9200"]
      indices:
        - index: "nginx-access-%{[agent.version]}-%{+yyyy.MM}"
          when.contains:
            log.file.path: "/var/log/nginx/access.log"
    
        - index: "nginx-error-%{[agent.version]}-%{+yyyy.MM}"
          when.contains:
            log.file.path: "/var/log/nginx/error.log"
    
    setup.ilm.enabled: false
    setup.template.enabled: false
    
    logging.level: info
    logging.to_files: true
    
    Method 2: elegant
    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - /var/log/nginx/access.log
      json.keys_under_root: true
      json.overwrite_keys: true
      tags: ["access"]
    
    - type: log
      enabled: true
      paths:
        - /var/log/nginx/error.log
      tags: ["error"]
    
    processors:
      - drop_fields:
          fields: ["ecs","log"]
    
    output.elasticsearch:
      hosts: ["10.0.0.51:9200"]
      indices:
        - index: "nginx-access-%{[agent.version]}-%{+yyyy.MM}"
          when.contains:
            tags: "access"
    
        - index: "nginx-error-%{[agent.version]}-%{+yyyy.MM}"
          when.contains:
            tags: "error"
    
    setup.ilm.enabled: false
    setup.template.enabled: false
    
    logging.level: info
    logging.to_files: true
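
    To exercise both outputs, restart Filebeat, generate some access and error traffic, and check that both indices appear (a rough sketch: /missing-page is a made-up URL, and requesting a file that does not exist is just one common way to get an entry into nginx's error.log):

    systemctl restart filebeat
    curl 127.0.0.1/
    curl 127.0.0.1/missing-page
    curl -s 'http://10.0.0.51:9200/_cat/indices/nginx-*?v'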
    
    

    3.6 Converting default nginx logs with an ES ingest pipeline

    [Commands]

    0. grok mapping (sample log field ==> grok pattern):
    127.0.0.1                     ==> %{IP:clientip}
    -                             ==> -
    -                             ==> -
    [08/Oct/2020:16:34:40 +0800]  ==> \[%{HTTPDATE:nginx.access.time}\]
    "GET / HTTP/1.1"              ==> "%{DATA:nginx.access.info}"
    200                           ==> %{NUMBER:http.response.status_code:long}
    5                             ==> %{NUMBER:http.response.body.bytes:long}
    "-"                           ==> "(-|%{DATA:http.request.referrer})"
    "curl/7.29.0"                 ==> "(-|%{DATA:user_agent.original})"
    "-"                           ==> "(-|%{IP:clientip})"
    
    1. Switch the nginx log back to the default format
    systemctl stop filebeat
    > /var/log/nginx/access.log
    vim /etc/nginx/nginx.conf
    systemctl restart nginx
    curl 127.0.0.1
    cat /var/log/nginx/access.log
    
    2. Create the ES ingest pipeline
    GET _ingest/pipeline
    PUT  _ingest/pipeline/pipeline-nginx-access
    {
      "description" : "nginx access log",
      "processors": [
        {
          "grok": {
            "field": "message",
            "patterns": ["%{IP:clientip} - - \[%{HTTPDATE:nginx.access.time}\] "%{DATA:nginx.access.info}" %{NUMBER:http.response.status_code:long} %{NUMBER:http.response.body.bytes:long} "(-|%{DATA:http.request.referrer})" "(-|%{DATA:user_agent.original})""]
          }
        },{
          "remove": {
            "field": "message"
          }
        }
      ]
    }
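
    Before wiring Filebeat to it, the pipeline can be dry-run with the simulate API (a minimal sketch; the sample message mirrors the log line from step 0):

    POST _ingest/pipeline/pipeline-nginx-access/_simulate
    {
      "docs": [
        { "_source": { "message": "127.0.0.1 - - [08/Oct/2020:16:34:40 +0800] \"GET / HTTP/1.1\" 200 5 \"-\" \"curl/7.29.0\" \"-\"" } }
      ]
    }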
    
    3. Update the filebeat config
    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - /var/log/nginx/access.log
      tags: ["access"]
    
    - type: log
      enabled: true
      paths:
        - /var/log/nginx/error.log
      tags: ["error"]
    
    processors:
      - drop_fields:
          fields: ["ecs","log"]
    
    output.elasticsearch:
      hosts: ["10.0.0.51:9200"]
    
      pipelines:
        - pipeline: "pipeline-nginx-access"
          when.contains:
            tags: "access"
    
      indices:
        - index: "nginx-access-%{[agent.version]}-%{+yyyy.MM}"
          when.contains:
            tags: "access"
    
        - index: "nginx-error-%{[agent.version]}-%{+yyyy.MM}"
          when.contains:
            tags: "error"
    
    setup.ilm.enabled: false
    setup.template.enabled: false
    
    logging.level: info
    logging.to_files: true
    

    3.7 Collecting tomcat JSON logs

    [Commands]

    1. Update the tomcat config
    [root@web01 ~]# /opt/tomcat/bin/shutdown.sh
    [root@web01 ~]# vim /opt/tomcat/conf/server.xml 
    	       pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;ClientUser&quot;:&quot;%l&quot;,&quot;authenticated&quot;:&quot;%u&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;method&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;Query?string&quot;:&quot;%q&quot;,&quot;partner&quot;:&quot;%{Referer}i&quot;,&quot;AgentVersion&quot;:&quot;%{User-Agent}i&quot;}"/>
    
    
    
    2. The filebeat config
    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - /opt/tomcat/logs/localhost_access_log.*.txt 
      json.keys_under_root: true
      json.overwrite_keys: true
    
    output.elasticsearch:
      hosts: ["10.0.0.51:9200"]
      index: "tomcat-%{[agent.version]}-%{+yyyy.MM}"
    
    setup.ilm.enabled: false
    setup.template.enabled: false
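
    After the change, start tomcat again, generate a request, and confirm the index (a rough sketch; startup.sh is the counterpart of the shutdown.sh used above, and 8080 is assumed to be tomcat's port):

    /opt/tomcat/bin/startup.sh
    systemctl restart filebeat
    curl 127.0.0.1:8080
    curl -s 'http://10.0.0.51:9200/_cat/indices/tomcat-*?v'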
    

    3.8 Collecting multi-line Java logs

    [Commands]

    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - /var/log/elasticsearch/elasticsearch.log
    
      # Treat any line that does NOT start with "[" as a continuation of the previous event
      multiline.pattern: '^\['
      multiline.negate: true
      multiline.match: after
    
    output.elasticsearch:
      hosts: ["10.0.0.51:9200"]
      index: "es-%{[agent.version]}-%{+yyyy.MM}"
    
    setup.ilm.enabled: false
    setup.template.enabled: false
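
    A hypothetical illustration of why this matters: a Java stack trace spans several physical lines, but only the first line starts with a bracketed timestamp, so with the settings above the whole trace is folded into a single event (the lines below are made up for illustration):

    [2020-10-08T11:10:17,000][ERROR][o.e.b.Bootstrap] [node-1] something went wrong
    java.lang.IllegalStateException: ...
            at org.elasticsearch.bootstrap.Bootstrap.init(...)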
    