ELK Stack Setup and Configuration

    ELK Stack Overview

    1. ELK is the combination of three open-source projects: Elasticsearch, Logstash, and Kibana. The three are normally used together for real-time data search and analytics, and all of them have ended up under the Elastic.co umbrella, hence the abbreviation.
    2. Elasticsearch is an open-source distributed search engine. Its features include distributed operation, zero configuration, automatic discovery, automatic index sharding, index replicas, a RESTful API, multiple data sources, and automatic balancing of search load.
    3. Logstash is a fully open-source tool that collects and parses your logs and stores them for later use.
    4. Kibana is a free, open-source tool that provides a friendly web UI for analyzing the logs held by Logstash and Elasticsearch, helping you summarize, analyze, and search important log data.

    Workflow

      Deploy logstash on every server whose logs need to be collected; there it runs as a logstash agent (logstash shipper) that watches, filters, and collects the logs and ships the filtered events to Redis. A logstash indexer then pulls the events out of Redis and hands them to the full-text search service Elasticsearch, where custom searches can be run; Kibana builds on those searches to present the data in web dashboards.
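      The data path is roughly:

      tomcat/nginx logs -> logstash agent (shipper) -> Redis (list queue) -> logstash indexer -> Elasticsearch -> Kibana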

    II. ELK Installation and Configuration

    1. System and software versions:
    OS: CentOS 6.5 x86_64
    elasticsearch: elasticsearch-5.6.3.zip
    logstash: logstash-2.3.4.tar.gz
    kibana: kibana-5.6.3-linux-x86_64.tar.gz
    redis: redis-3.2.tar.gz
    JDK: jdk-8u73-linux-x64.tar.gz (server) / jdk-8u131-linux-x64.tar.gz (client, deployed via Ansible)

    2. Server layout: the ELK components are installed on two servers
    A-client (the nginx/Tomcat server whose logs will be analyzed): logstash (logstash agent)
    B-master (the ELK server): elasticsearch, logstash (logstash indexer), kibana, redis

    Software installation directory: /data/elk

    3. Create the user

    # groupadd app
    # useradd -g app -d /data/elk elk

    4. Install and configure the JDK
    Both logstash and elasticsearch require a JDK.

    # su - elk
    $ tar zxf jdk-8u73-linux-x64.tar.gz
    $ vim .bash_profile    (add/adjust the following)
    
    JAVA_HOME=/data/elk/jdk1.8.0_73
    PATH=${JAVA_HOME}/bin:$PATH:$HOME/bin
    
    export PATH JAVA_HOME
    
    $ . .bash_profile
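    A quick sanity check that the JDK is now on the PATH (the reported version should match the unpacked JDK):

    $ java -version      # should report java version "1.8.0_73"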

    5. One-click install with Ansible: install and configure the logstash client on server A

    # Directory layout:
    # tree ansible-logstash-playbook
    ansible-logstash-playbook
    ├── hosts
    ├── jdk.retry
    ├── jdk.yml
    └── roles
        ├── jdk
        │   ├── defaults
        │   ├── files
        │   │   └── jdk-8u131-linux-x64.tar.gz
        │   ├── handlers
        │   ├── meta
        │   ├── tasks
        │   │   └── main.yml
        │   ├── templates
        │   └── vars
        └── logstash
            ├── defaults
            ├── files
            │   └── logstash-2.3.4.tar.gz
            ├── handlers
            ├── meta
            ├── tasks
            │   └── main.yml
            ├── templates
            │   └── logstash_agent.conf.j2
            └── vars
                └── main.yml
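    The hosts inventory file itself is not listed above; a minimal sketch (the address below is only a placeholder for the A-client) would be:

    # cat ansible-logstash-playbook/hosts
    [logstash]
    10.19.x.x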

    Ansible role: install the JDK

    # cat ansible-logstash-playbook/roles/jdk/tasks/main.yml 
    - name: create group
      group: name=app system=yes
    - name: create logstash-user
      user: name=elk group=app home=/data/elk system=yes
    - name: create a directory if it doesn't exist
      file:
        path: /data/elk/soft
        state: directory
        mode: 0755
        owner: elk
        group: app
    - name: create a directory if it doesn't exist
      file:
        path: /data/elk/scripts
        state: directory
        mode: 0755
        owner: elk
        group: app
    - name: jdk file to dest host
      copy:
        src: /data/soft/jdk-8u131-linux-x64.tar.gz
        dest: /data/elk/soft/
        owner: elk
        group: app
    - name: tar jdk-8u131-linux-x64.tar.gz
      shell: chdir=/data/elk/soft tar zxf jdk-8u131-linux-x64.tar.gz && chown -R elk.app /data/elk/
    - name: java_profile config
      shell: /bin/echo '{{ item }}' >> /data/elk/.bash_profile && source /data/elk/.bash_profile
      with_items:
        - "export JAVA_HOME=/data/elk/soft/jdk1.8.0_131"
        - "PATH=${JAVA_HOME}/bin:$PATH:$HOME/bin"
        - "export PATH"
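    Once the playbook further below has been run, the JDK install on the client can be verified with an Ansible ad-hoc command (a sketch, assuming the same hosts inventory):

    # ansible logstash -i hosts -m shell -a "su - elk -c 'java -version'"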

    Ansible role: install the logstash client

    # cat ansible-logstash-playbook/roles/logstash/tasks/main.yml 
    - name: logstash file to dest host
      copy:
        src: logstash-2.3.4.tar.gz
        dest: /data/elk/soft/
        owner: elk
        group: app
    - name: tar logstash-2.3.4.tar.gz
      shell: chdir=/data/elk/soft/ tar zxf logstash-2.3.4.tar.gz
    - name: mv logstash-2.3.4
      shell: mv /data/elk/soft/logstash-2.3.4 /data/elk/logstash && chown -R elk.app /data/elk
    - name: touch conf
      file: path=/data/elk/logstash/conf owner=elk group=app state=directory
    - name: logstash conf file to dest host
      template:
        src: logstash_agent.conf.j2
        dest: /data/elk/logstash/conf/logstash_agent.conf
        owner: elk
        group: app
    - name: start logstash client
      shell: su - elk -c "nohup /data/elk/logstash/bin/logstash agent -f /data/elk/logstash/conf/logstash_agent.conf &"
    # cat ansible-logstash-playbook/roles/logstash/templates/logstash_agent.conf.j2 
    input {
            file {
                    type => "tomcat log"
                    add_field => { "host" => "{{ IP }}" }       # {{ IP }} is defined in vars/main.yml
                    path => ["/data/tomcat/apache-tomcat-8088/logs/catalina.out"]   # path to the Tomcat log
            }
    }
    output {
            redis {
                    host => "10.19.182.215"    # Redis server IP
                    port => "6379"             # Redis server port
                    data_type => "list"        # Redis acts as the queue; the key type is list
                    key => "tomcat:redis"      # key name, can be customized
            }
    }
    # cat ansible-logstash-playbook/roles/logstash/vars/main.yml 
    IP: "{{ ansible_eth0['ipv4']['address'] }}"

     Run the Ansible deployment

    # cat jdk.yml 
    - hosts: logstash
      user: root
      roles:
        - jdk
        - logstash
    # ansible-playbook jdk.yml -i hosts 
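    After the playbook finishes, two quick checks: that the agent process is up on the client, and that events are being pushed into the Redis list (assumes redis-cli is available and Tomcat is writing to catalina.out):

    # ansible logstash -i hosts -m shell -a "ps -ef | grep [l]ogstash"
    $ redis-cli -h 10.19.182.215 -p 6379 llen tomcat:redis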

    6. Server side: install and configure elasticsearch and redis

    $ unzip elasticsearch-5.6.3.zip
    $ mv elasticsearch-5.6.3 elasticsearch
    $ mkdir elasticsearch/{logs,data}
    $ vim elasticsearch/config/elasticsearch.yml  # change the following
    cluster.name: server
    node.name: node-1
    path.data: /data/elk/elasticsearch/data
    path.logs: /data/elk/elasticsearch/logs
    network.host: 10.19.86.42
    http.port: 9200
    http.cors.enabled: true           # enable cross-origin access
    http.cors.allow-origin: "*"       # cross-origin access is required by elasticsearch-head
    discovery.zen.ping.unicast.hosts: ["10.19.33.42", "10.19.22.215","10.19.11.184"]
    discovery.zen.minimum_master_nodes: 2     # (number of master-eligible nodes / 2) + 1
    
    # Adjust the JVM heap size; the default is 2G. A common recommendation is half of the machine's physical RAM, capped at about 32G.
    $ vim elasticsearch/config/jvm.options 
    -Xms20g
    -Xmx20g
    # echo 511 > /proc/sys/net/core/somaxconn
    # cat /etc/security/limits.d/90-nproc.conf 
    # Default limit for number of user's processes to prevent
    # accidental fork bombs.
    # See rhbz #432903 for reasoning.
    
    *          soft    nproc     102400
    root       soft    nproc     unlimited
    # echo 262144 > /proc/sys/vm/max_map_count 
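    Note: values written to /proc do not survive a reboot, and Elasticsearch 5.x also enforces a minimum number of open file descriptors when it binds to a non-loopback address. A sketch of making these settings persistent (run as root; 65536 is the value the ES 5.x bootstrap check expects):

    # cat >> /etc/sysctl.conf << 'EOF'
    vm.max_map_count = 262144
    net.core.somaxconn = 511
    EOF
    # sysctl -p
    # echo "elk soft nofile 65536" >> /etc/security/limits.conf
    # echo "elk hard nofile 65536" >> /etc/security/limits.conf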
    $ ./bin/elasticsearch -d
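    Once the node is up, a quick check against the REST API confirms the cluster is reachable:

    $ curl 'http://10.19.86.42:9200/_cluster/health?pretty'
    $ curl 'http://10.19.86.42:9200/_cat/nodes?v'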

    7. Server side: configure the logstash indexer

    [elk@10-19-86-42 ~]$ cat logstash/conf/logstash_tomcat.conf 
    input {
        redis {
            host => "10.19.86.42"
            port => "6379"
            data_type => "list"
            key => "tomcat:redis"
            type => "redis-input"
        }
    }

    filter {
        grok {
            match => { "message" => "^%{TIMESTAMP_ISO8601:[@metadata][timestamp]}\s+%{LOGLEVEL:level}\s+%{GREEDYDATA:class}\s+-\s+%{GREEDYDATA:msg}" }
            #match => { "message" => "(^.+Exception:.+)|(^\s+at .+)|(^\s+\.\.\. \d+ more)|(^\s*Caused by:.+)" }   # matches Java stack-trace lines
        }
        mutate {
            split => { "fieldname" => "," }
        }
        date {
            match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
            remove_field => [ "timestamp" ]
        }
    }

    output {
        elasticsearch {
            hosts => ["10.19.86.42:9200","10.19.77.184:9200","10.19.182.215:9200"]
            workers => 2                    # number of output worker threads
            flush_size => 50000
            idle_flush_time => 1
            index => "catalina-%{+YYYY.MM.dd}"
        }
        stdout { codec => rubydebug }
    }
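    Before handing the indexer over to supervisor, the config file can be validated with Logstash's built-in syntax check (a Logstash 2.x flag):

    $ /data/elk/logstash/bin/logstash -f /data/elk/logstash/conf/logstash_tomcat.conf --configtest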

    Start logstash with supervisor

    $ cat /etc/supervisor/conf.d/logstash_tomcat.conf 
    # start logstash
    [program:logstash-tomcat]
    environment=JAVA_HOME="/data/elk/jdk1.8.0_131"
    
    directory=/data/elk/logstash
    command=/data/elk/logstash/bin/logstash  -w 24 -b 5000 -f /data/elk/logstash/conf/logstash_tomcat.conf
    autostart = true
    startsecs = 5
    user = elk
    group = app
    stdout_logfile = /data/elk/logs/logstash_tomcat.log
    stderr_logfile = /data/elk/logs/logstash_tomcat_err.log
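    Load the new program into supervisor and confirm it is running:

    # supervisorctl reread
    # supervisorctl update
    # supervisorctl status logstash-tomcat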

    Note: start multiple logstash indexer instances to consume the Redis queue in parallel.

    8. Install and configure Kibana

    $ tar zxf kibana-5.6.3-linux-x86_64.tar.gz 
    $ vim kibana-5.6.3-linux-x86_64/config/kibana.yml
    server.port: 5601
    server.host: "10.19.11.42"
    elasticsearch.url: "http://10.19.11.42:9200"

    Start Kibana with supervisor

    $ cat logstash_kibana.conf 
    # start kibana
    [program:kibana]
    environment=JAVA_HOME="/data/elk/jdk1.8.0_131"
    
    directory=/data/elk/kibana-5.6.3-linux-x86_64
    command=/data/elk/kibana-5.6.3-linux-x86_64/bin/kibana
    autostart = true
    startsecs = 5
    user = elk
    group = app
    stdout_logfile = /data/elk/logs/kibana.log
    stderr_logfile = /data/elk/logs/kibana_err.log
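    Reload supervisor and make sure Kibana answers on its port:

    # supervisorctl update
    # supervisorctl status
    $ curl -I http://10.19.11.42:5601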

    Additional notes:

    1. Install elasticsearch-head

    # yum install -y git npm
    # npm install -g grunt-cli
    # git clone git://github.com/mobz/elasticsearch-head.git
    # cd elasticsearch-head && npm install && cd ..
    # vim elasticsearch-head/Gruntfile.js
            connect: {
                server: {
                    options: {
                        hostname: '10.19.86.42',
                        port: 9100,
                        base: '.',
                        keepalive: true
                    }
                }
            }

    Start elasticsearch-head

    $ cd /data/elk/elasticsearch-5.6.3/elasticsearch-head/node_modules/grunt/bin && nohup ./grunt server &
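    elasticsearch-head should now be listening on port 9100, and thanks to the http.cors.* settings added to elasticsearch.yml earlier it can connect to the cluster. A quick check:

    $ curl -I http://10.19.86.42:9100

    Then open http://10.19.86.42:9100 in a browser and point it at http://10.19.86.42:9200.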
    Original article: https://www.cnblogs.com/patrick0715/p/7743383.html