  • Kafka+Zookeeper+Filebeat+ELK: Building a Log Collection System

    ELK

    ELK is currently one of the mainstream logging stacks, so it needs little introduction here.
    Filebeat collects the logs and ships them to Kafka, which buffers the messages so that network problems do not lose data.
    After Kafka receives the log messages, they are consumed directly by Logstash.
    Logstash forwards the logs from Kafka to Elasticsearch.
    Kibana displays the log data stored in Elasticsearch.
    
     
    [Image: architecture diagram - Filebeat -> Kafka -> Logstash -> Elasticsearch -> Kibana]

    Environment:

    Software versions:
    - Centos 7.4
    - java 1.8.0_45
    - Elasticsearch 6.4.0
    - Logstash 6.4.0
    - Filebeat 6.4.0
    - Kibana 6.4.0
    - Kafka 2.12
    - Zookeeper 3.4.13
    
    Servers:
    - 10.241.0.1  squid (package distribution, central control)
    - 10.241.0.10 node1
    - 10.241.0.11 node2
    - 10.241.0.12 node3
    
    Deployment roles
    - elasticsearch: 10.241.0.10 (master), 10.241.0.11, 10.241.0.12
      https://www.elastic.co/cn/products/elasticsearch
      Elasticsearch can run and combine many types of searches (structured, unstructured, geo, metric).
    
    - logstash: 10.241.0.10,10.241.0.11,10.241.0.12
      https://www.elastic.co/cn/products/logstash
      Logstash supports a wide variety of inputs and can capture events from many common sources at the same time.
    
    - filebeat: 10.241.0.10,10.241.0.11,10.241.0.12
      https://www.elastic.co/cn/products/beats/filebeat
      Filebeat ships with built-in modules (auditd, Apache, NGINX, System, and MySQL) that provide one-step collection, parsing, and visualization of common log formats.
    
    - kibana: 10.241.0.10
      https://www.elastic.co/cn/products/kibana
      Kibana lets you visualize the data in Elasticsearch and navigate the Elastic Stack.
    
    - kafka: 10.241.0.10,10.241.0.11,10.241.0.12
      http://kafka.apache.org/
      Kafka is a high-throughput distributed publish-subscribe messaging system that can handle all the activity-stream data of a consumer-scale website.
      Kafka cluster deployment is covered in an earlier post: https://www.jianshu.com/p/a9ff97dcfe4e
    

    Installing and Deploying ELK

    1. Download the packages and verify their integrity

    [root@squid ~]# cat /etc/hosts
    10.241.0.1  squid
    10.241.0.10 node1
    10.241.0.11 node2
    10.241.0.12 node3
    
    [root@squid ~]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.0.tar.gz
    [root@squid ~]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.0.tar.gz.sha512
    [root@squid ~]# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.4.0-linux-x86_64.tar.gz
    [root@squid ~]# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.4.0-linux-x86_64.tar.gz.sha512
    [root@squid ~]# wget https://artifacts.elastic.co/downloads/logstash/logstash-6.4.0.tar.gz
    [root@squid ~]# wget https://artifacts.elastic.co/downloads/logstash/logstash-6.4.0.tar.gz.sha512
    [root@squid ~]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.0-linux-x86_64.tar.gz
    [root@squid ~]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.0-linux-x86_64.tar.gz.sha512
    
    [root@squid ~]# yum install perl-Digest-SHA
    [root@squid ~]# shasum -a 512 -c  elasticsearch-6.4.0.tar.gz.sha512
    elasticsearch-6.4.0.tar.gz: OK
    [root@squid ~]# shasum -a 512 -c  filebeat-6.4.0-linux-x86_64.tar.gz.sha512
    filebeat-6.4.0-linux-x86_64.tar.gz: OK
    [root@squid ~]# shasum -a 512 -c  kibana-6.4.0-linux-x86_64.tar.gz.sha512
    kibana-6.4.0-linux-x86_64.tar.gz: OK
    [root@squid ~]# shasum -a 512 -c  logstash-6.4.0.tar.gz.sha512
    logstash-6.4.0.tar.gz: OK
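
    All four checks can also be run in one pass; a minimal shell sketch equivalent to the commands above:

    [root@squid ~]# for f in *.tar.gz.sha512; do shasum -a 512 -c "$f"; done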
    

    2. Deploy Elasticsearch

    1) Ansible inventory
    [root@squid ~]# cat /etc/ansible/hosts 
    [client]
    10.241.0.10 es_master=true
    10.241.0.11 es_master=false
    10.241.0.12 es_master=false
    
    2) Create the es user and group
    [root@squid ~]# ansible client -m group -a 'name=elk'
    [root@squid ~]# ansible client -m user -a 'name=es group=elk home=/home/es shell=/bin/bash'
    
    3) Extract Elasticsearch onto the target hosts
    [root@squid ~]# ansible client -m unarchive -a 'src=/root/elasticsearch-6.4.0.tar.gz  dest=/usr/local owner=es group=elk'
    
    4) The prepared Elasticsearch configuration template (distributed to every node in the next step)
    [root@squid ~]# cat elasticsearch.yml.j2 
    # Cluster name and data/log locations
    cluster.name: my_es_cluster
    node.name: es-{{ansible_hostname}}
    path.data: /data/elk/es/data
    path.logs: /data/elk/es/logs
    # Allow cross-origin access
    http.cors.enabled: true
    http.cors.allow-origin: "*"
    # Role within the cluster
    node.master: {{es_master}}
    node.data: true
    # Bind address and transport port
    network.host: 0.0.0.0
    transport.tcp.port: 9300
    # Compress TCP transport traffic
    transport.tcp.compress: true
    http.port: 9200
    # Use unicast discovery to reach the other nodes
    discovery.zen.ping.unicast.hosts: ["node1","node2","node3"]
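
    Note: with es_master=true only on node1, the cluster has a single master-eligible node, so it cannot elect a new master if node1 fails. A common alternative (not what this post does) is to make all three nodes master-eligible and add a quorum setting to prevent split-brain:

    discovery.zen.minimum_master_nodes: 2   # quorum = (number of master-eligible nodes / 2) + 1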
    
    5) Run Ansible to distribute the configuration file
    [root@squid ~]# ansible client -m template -a 'src=/root/elasticsearch.yml.j2 dest=/usr/local/elasticsearch-6.4.0/config/elasticsearch.yml owner=es group=elk'
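
    To confirm the per-host template variables rendered correctly, the generated files can be spot-checked (a verification step, not in the original post):

    [root@squid ~]# ansible client -m shell -a 'grep -E "node.name|node.master" /usr/local/elasticsearch-6.4.0/config/elasticsearch.yml'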
    
    6) Raise the system limits (maximum number of open file handles, etc.)
    [root@squid ~]# cat change_system_args.sh
    #!/bin/bash
    if [ "`grep 65536 /etc/security/limits.conf`" = "" ]
    then
    cat >> /etc/security/limits.conf << EOF
    # End of file
    * - nofile 1800000
    * soft nproc 65536
    * hard nproc 65536
    * soft nofile 65536
    * hard nofile 65536
    EOF
    fi
    
    if [ "`grep 655360 /etc/sysctl.conf`" = "" ]
    then
    echo "vm.max_map_count=655360"  >> /etc/sysctl.conf
    fi
    
    7) Run the script through Ansible
    [root@squid ~]# ansible client -m script -a '/root/change_system_args.sh'
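
    As an alternative to the reboot in the next step, the kernel parameter can be applied immediately with sysctl (the limits.conf changes then take effect for new login sessions):

    [root@squid ~]# ansible client -m shell -a 'sysctl -p'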
    
    8) Reboot the target hosts so the parameters take effect (the hosts go down immediately, so Ansible reports them as unreachable; this is expected)
    [root@squid ~]# ansible client -m shell -a 'reboot'
    10.241.0.11 | UNREACHABLE! => {
        "changed": false, 
        "msg": "SSH Error: data could not be sent to remote host "10.241.0.11". Make sure this host can be reached over ssh", 
        "unreachable": true
    }
    10.241.0.12 | UNREACHABLE! => {
        "changed": false, 
        "msg": "SSH Error: data could not be sent to remote host "10.241.0.12". Make sure this host can be reached over ssh",
        "unreachable": true
    }
    10.241.0.10 | UNREACHABLE! => {
        "changed": false, 
        "msg": "SSH Error: data could not be sent to remote host "10.241.0.10". Make sure this host can be reached over ssh",
        "unreachable": true
    }
    
    9) Create the elk data directory
    [root@squid ~]# ansible client -m file -a 'name=/data/elk/  state=directory owner=es group=elk'
    
    10) Start Elasticsearch
    [root@squid ~]# ansible client -m shell -a 'su - es -c "/usr/local/elasticsearch-6.4.0/bin/elasticsearch -d"' 
    
    10.241.0.11 | SUCCESS | rc=0 >>
    
    10.241.0.10 | SUCCESS | rc=0 >>
    
    10.241.0.12 | SUCCESS | rc=0 >>
    
    11) Check that it is running
    [root@squid ~]# ansible client -m shell -a 'ps -ef|grep elasticsearch' 
    10.241.0.12 | SUCCESS | rc=0 >>
    es        3553     1 19 20:35 ?        00:00:48 /usr/local/jdk1.8.0_45/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch.eFvx2dMC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=data -XX:ErrorFile=logs/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:logs/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/local/elasticsearch-6.4.0 -Des.path.conf=/usr/local/elasticsearch-6.4.0/config -Des.distribution.flavor=default -Des.distribution.type=tar -cp /usr/local/elasticsearch-6.4.0/lib/* org.elasticsearch.bootstrap.Elasticsearch -d
    es        3594  3553  0 20:35 ?        00:00:00 /usr/local/elasticsearch-6.4.0/modules/x-pack-ml/platform/linux-x86_64/bin/controller
    root      3711  3710  0 20:39 ?        00:00:00 /bin/sh -c ps -ef|grep elasticsearch
    root      3713  3711  0 20:39 ?        00:00:00 grep elasticsearch
    
    10.241.0.10 | SUCCESS | rc=0 >>
    es        4899     1 22 20:35 ?        00:00:54 /usr/local/jdk1.8.0_45/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch.1uRdvBGd -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=data -XX:ErrorFile=logs/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:logs/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/local/elasticsearch-6.4.0 -Des.path.conf=/usr/local/elasticsearch-6.4.0/config -Des.distribution.flavor=default -Des.distribution.type=tar -cp /usr/local/elasticsearch-6.4.0/lib/* org.elasticsearch.bootstrap.Elasticsearch -d
    es        4940  4899  0 20:35 ?        00:00:00 /usr/local/elasticsearch-6.4.0/modules/x-pack-ml/platform/linux-x86_64/bin/controller
    root      5070  5069  0 20:39 ?        00:00:00 /bin/sh -c ps -ef|grep elasticsearch
    root      5072  5070  0 20:39 ?        00:00:00 grep elasticsearch
    
    10.241.0.11 | SUCCESS | rc=0 >>
    es        3556     1 19 20:35 ?        00:00:47 /usr/local/jdk1.8.0_45/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch.fnAavDi0 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=data -XX:ErrorFile=logs/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:logs/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/local/elasticsearch-6.4.0 -Des.path.conf=/usr/local/elasticsearch-6.4.0/config -Des.distribution.flavor=default -Des.distribution.type=tar -cp /usr/local/elasticsearch-6.4.0/lib/* org.elasticsearch.bootstrap.Elasticsearch -d
    es        3597  3556  0 20:35 ?        00:00:00 /usr/local/elasticsearch-6.4.0/modules/x-pack-ml/platform/linux-x86_64/bin/controller
    root      3710  3709  0 20:39 ?        00:00:00 /bin/sh -c ps -ef|grep elasticsearch
    root      3712  3710  0 20:39 ?        00:00:00 grep elasticsearch
    
    12) Check the cluster status
    [root@squid ~]# curl -s http://node1:9200/_nodes/process?pretty |grep -C 5 _nodes
    {
      "_nodes" : {
        "total" : 3,
        "successful" : 3,
        "failed" : 0
      },
      "cluster_name" : "my_es_cluster",
    
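    The overall cluster health can be checked as well; expect "status" : "green" and "number_of_nodes" : 3 once all three nodes have joined (standard Elasticsearch API):

    [root@squid ~]# curl -s http://node1:9200/_cluster/health?pretty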

    3. Deploy Filebeat

    1) Distribute the package to the client hosts
    [root@squid ~]# ansible client -m unarchive -a 'src=/root/filebeat-6.4.0-linux-x86_64.tar.gz dest=/usr/local'
    
    2) Rename the extracted directory
    [root@squid ~]# ansible client -m shell -a 'mv /usr/local/filebeat-6.4.0-linux-x86_64 /usr/local/filebeat-6.4.0'
    10.241.0.12 | SUCCESS | rc=0 >>
    
    10.241.0.11 | SUCCESS | rc=0 >>
    
    10.241.0.10 | SUCCESS | rc=0 >>
    
    3) Edit the configuration file
    [root@squid ~]# cat filebeat.yml.j2 
    filebeat.prospectors:
    - type: log
      paths:
        - /var/log/supervisor/kafka
    
    output.kafka:
      enabled: true
      hosts: ["10.241.0.10:9092","10.241.0.11:9092","10.241.0.12:9092"]
      topic: kafka_run_log
    
    ## Parameter notes
    enabled  enables this output module
    hosts    the Kafka brokers that Filebeat sends data to
    topic    important: the Kafka topic to publish to; if the topic does not exist, it is created automatically
    
    4) Copy it to the client hosts, backing up the original configuration
    [root@squid ~]# ansible client -m copy -a 'src=/root/filebeat.yml.j2 dest=/usr/local/filebeat-6.4.0/filebeat.yml backup=yes'
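
    Before starting, the deployed configuration and the connection to Kafka can be sanity-checked with Filebeat's built-in test subcommands:

    [root@squid ~]# ansible client -m shell -a '/usr/local/filebeat-6.4.0/filebeat test config -c /usr/local/filebeat-6.4.0/filebeat.yml'
    [root@squid ~]# ansible client -m shell -a '/usr/local/filebeat-6.4.0/filebeat test output -c /usr/local/filebeat-6.4.0/filebeat.yml'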
    
    5) Start Filebeat
    [root@squid ~]# ansible client -m shell -a '/usr/local/filebeat-6.4.0/filebeat -c /usr/local/filebeat-6.4.0/filebeat.yml &'
    10.241.0.11 | SUCCESS | rc=0 >>
    
    10.241.0.10 | SUCCESS | rc=0 >>
    
    10.241.0.12 | SUCCESS | rc=0 >>
    
    6) Check the Filebeat processes
    [root@squid ~]# ansible client -m shell -a 'ps -ef|grep filebeat| grep -v grep'
    10.241.0.12 | SUCCESS | rc=0 >>
    root      4890     1  0 22:50 ?        00:00:00 /usr/local/filebeat-6.4.0/filebeat -c /usr/local/filebeat-6.4.0/filebeat.yml
    
    10.241.0.10 | SUCCESS | rc=0 >>
    root      6881     1  0 22:50 ?        00:00:00 /usr/local/filebeat-6.4.0/filebeat -c /usr/local/filebeat-6.4.0/filebeat.yml
    
    10.241.0.11 | SUCCESS | rc=0 >>
    root      4939     1  0 22:50 ?        00:00:00 /usr/local/filebeat-6.4.0/filebeat -c /usr/local/filebeat-6.4.0/filebeat.yml
    
    7) Verify that the topic was created
    [root@node1 local]# /usr/local/kafka/bin/kafka-topics.sh --list --zookeeper  10.241.0.10:2181
    ConsumerTest
    __consumer_offsets
    kafka_run_log     # the topic created by Filebeat
    topicTest
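
    To confirm that messages are actually flowing into the topic, consume a few of them (standard Kafka CLI; press Ctrl-C to stop):

    [root@node1 local]# /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 10.241.0.10:9092 --topic kafka_run_log --from-beginning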
    

    4. Deploy Logstash

    1) Extract the package onto the target hosts
    [root@squid ~]# ansible client -m unarchive -a 'src=/root/logstash-6.4.0.tar.gz dest=/usr/local owner=es group=elk'
    
    2) Logstash configuration file
    [root@squid ~]# cat logstash-kafka.conf.j2
    input {
        kafka {
            type => "kafka-logs"
            bootstrap_servers => "10.241.0.10:9092,10.241.0.11:9092,10.241.0.12:9092"
            group_id => "logstash"
            auto_offset_reset => "earliest"
            topics => "kafka_run_log"
            consumer_threads => 5
            decorate_events => true
            }
    }
    
    output {
        elasticsearch {
            index => "kafka-run-log-%{+YYYY.MM.dd}"
            hosts => ["10.241.0.10:9200","10.241.0.11:9200","10.241.0.12:9200"]
        }
    }
    
    3) Push the Logstash configuration file to the target hosts with Ansible
    [root@squid ~]# ansible client -m copy -a 'src=/root/logstash-kafka.conf.j2 dest=/usr/local/logstash-6.4.0/config/logstash.conf owner=es group=elk'
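
    The pipeline syntax can be validated before starting, using Logstash's --config.test_and_exit flag:

    [root@squid ~]# ansible client -m shell -a 'su - es -c "/usr/local/logstash-6.4.0/bin/logstash -f /usr/local/logstash-6.4.0/config/logstash.conf --config.test_and_exit"'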
    
    4) Start Logstash
    [root@squid ~]# ansible client -m shell -a 'su - es -c "/usr/local/logstash-6.4.0/bin/logstash -f /usr/local/logstash-6.4.0/config/logstash.conf &"'     
    
    5) Check the Logstash processes
    [root@squid ~]# ansible client -m shell -a 'ps -ef|grep logstash|grep -v grep'
    10.241.0.11 | SUCCESS | rc=0 >>
    es        6040     1 99 23:39 ?        00:02:11 /usr/local/jdk1.8.0_45/bin/java -Xms1g -Xmx1g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -Djruby.jit.threshold=0 -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/urandom -cp /usr/local/logstash-6.4.0/logstash-core/lib/jars/animal-sniffer-annotations-1.14.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/commons-codec-1.11.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/commons-compiler-3.0.8.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/error_prone_annotations-2.0.18.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/google-java-format-1.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/gradle-license-report-0.7.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/guava-22.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/j2objc-annotations-1.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-annotations-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-core-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-databind-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-dataformat-cbor-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/janino-3.0.8.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jruby-complete-9.1.13.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jsr305-1.3.9.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/log4j-api-2.9.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/log4j-core-2.9.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/log4j-slf4j-impl-2.9.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/logstash-core.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.commands-3.6.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.contenttype-3.4.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.expressions-3.4.300.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.filesystem-1.3.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.jobs-3.5.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.resources-3.7.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.runtime-3.7.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.app-1.3.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.common-3.6.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.preferences-3.4.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.registry-3.5.101.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.jdt.core-3.10.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.osgi-3.7.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.text-3.5.101.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/slf4j-api-1.7.25.jar org.logstash.Logstash -f /usr/local/logstash-6.4.0/config/logstash.conf
    
    10.241.0.12 | SUCCESS | rc=0 >>
    es        5970     1 99 23:39 ?        00:02:13 /usr/local/jdk1.8.0_45/bin/java -Xms1g -Xmx1g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -Djruby.jit.threshold=0 -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/urandom -cp /usr/local/logstash-6.4.0/logstash-core/lib/jars/animal-sniffer-annotations-1.14.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/commons-codec-1.11.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/commons-compiler-3.0.8.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/error_prone_annotations-2.0.18.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/google-java-format-1.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/gradle-license-report-0.7.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/guava-22.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/j2objc-annotations-1.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-annotations-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-core-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-databind-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-dataformat-cbor-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/janino-3.0.8.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jruby-complete-9.1.13.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jsr305-1.3.9.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/log4j-api-2.9.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/log4j-core-2.9.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/log4j-slf4j-impl-2.9.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/logstash-core.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.commands-3.6.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.contenttype-3.4.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.expressions-3.4.300.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.filesystem-1.3.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.jobs-3.5.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.resources-3.7.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.runtime-3.7.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.app-1.3.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.common-3.6.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.preferences-3.4.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.registry-3.5.101.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.jdt.core-3.10.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.osgi-3.7.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.text-3.5.101.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/slf4j-api-1.7.25.jar org.logstash.Logstash -f /usr/local/logstash-6.4.0/config/logstash.conf
    
    10.241.0.10 | SUCCESS | rc=0 >>
    es        9095     1 98 23:39 ?        00:02:10 /usr/local/jdk1.8.0_45/bin/java -Xms1g -Xmx1g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -Djruby.jit.threshold=0 -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/urandom -cp /usr/local/logstash-6.4.0/logstash-core/lib/jars/animal-sniffer-annotations-1.14.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/commons-codec-1.11.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/commons-compiler-3.0.8.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/error_prone_annotations-2.0.18.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/google-java-format-1.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/gradle-license-report-0.7.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/guava-22.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/j2objc-annotations-1.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-annotations-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-core-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-databind-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-dataformat-cbor-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/janino-3.0.8.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jruby-complete-9.1.13.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jsr305-1.3.9.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/log4j-api-2.9.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/log4j-core-2.9.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/log4j-slf4j-impl-2.9.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/logstash-core.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.commands-3.6.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.contenttype-3.4.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.expressions-3.4.300.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.filesystem-1.3.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.jobs-3.5.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.resources-3.7.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.runtime-3.7.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.app-1.3.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.common-3.6.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.preferences-3.4.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.registry-3.5.101.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.jdt.core-3.10.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.osgi-3.7.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.text-3.5.101.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/slf4j-api-1.7.25.jar org.logstash.Logstash -f /usr/local/logstash-6.4.0/config/logstash.conf
    

    5. Deploy Kibana

    1) Copy the package to node1
    [root@squid ~]# scp kibana-6.4.0-linux-x86_64.tar.gz root@10.241.0.10:/root
    kibana-6.4.0-linux-x86_64.tar.gz                 100%  179MB  59.7MB/s   00:03
    
    2) Extract Kibana
    [root@node1 ~]# tar  -zxf kibana-6.4.0-linux-x86_64.tar.gz  -C /usr/local
    [root@node1 ~]# mv /usr/local/kibana-6.4.0-linux-x86_64/ /usr/local/kibana-6.4.0
    
    3) Edit the configuration file
    [root@node1 ~]# cat /usr/local/kibana-6.4.0/config/kibana.yml
    server.port: 5601
    server.host: "10.241.0.10"
    kibana.index: ".kibana"
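
    Note: with no elasticsearch.url set, Kibana 6.x defaults to http://localhost:9200, which works here because Elasticsearch also runs on node1. If Kibana ran on a separate host, the URL would need to be set explicitly:

    elasticsearch.url: "http://10.241.0.10:9200"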
    
    4) Start Kibana (in the foreground)
    [root@node1 ~]# /usr/local/kibana-6.4.0/bin/kibana
    
    5) Access Kibana
    http://10.241.0.10:5601
    
    6) Add the index pattern
    In Kibana, go to Management -> Index Patterns -> Create index pattern and enter kafka-run-log-* to match the indices created by Logstash above.
    
    7) Send a message into the kafka_run_log topic and check that it can be displayed in Kibana
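
    A quick end-to-end test (a sketch; the watched path comes from the Filebeat configuration above): append a line to the monitored log file on any node, then confirm that it was indexed:

    [root@node1 ~]# echo "elk pipeline test $(date)" >> /var/log/supervisor/kafka
    [root@node1 ~]# curl -s 'http://node1:9200/kafka-run-log-*/_count?pretty'

    The document count should increase within a few seconds, and the message should then be searchable in Kibana's Discover view.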
    
     
    [Images: Kibana screenshots showing the index pattern setup and the ingested log messages]


    Author: baiyongjie
    Link: https://www.jianshu.com/p/d072a55aa844
    Source: Jianshu (简书)
    Copyright belongs to the author; for reproduction in any form, please contact the author for authorization and cite the source.