  • ELK Advanced Deployment

    Two virtual machines:

    192.168.1.42

    192.168.1.46

    Keep the system environment identical on both machines:

    cat /etc/redhat-release

    uname -a

    Keep the ELK prerequisite environment identical on both machines:

    Install elasticsearch:

    Download and install the GPG key

    rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

    Add the yum repository:
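
    The post does not show the repository file itself; for the 2.x series (matching the Logstash 2.1 packages used below) an elasticsearch.repo would plausibly look like this:

    [root@linux-node1 ~]# vim /etc/yum.repos.d/elasticsearch.repo
    [elasticsearch-2.x]
    name=Elasticsearch repository for 2.x packages
    baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
    gpgcheck=1
    gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
    enabled=1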

    Install elasticsearch

    yum install -y elasticsearch

    Install logstash

    Download and install the GPG key

    [root@linux-node2 ~]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

    Add the yum repository

    [root@linux-node2 ~]# vim /etc/yum.repos.d/logstash.repo
    [logstash-2.1]
    name=Logstash repository for 2.1.x packages
    baseurl=http://packages.elastic.co/logstash/2.1/centos
    gpgcheck=1
    gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
    enabled=1

    Install logstash

    [root@linux-node2 ~]# yum install -y logstash

    Install kibana

    [root@linux-node2 ~]# cd /usr/local/src
    [root@linux-node2 ~]# wget https://download.elastic.co/kibana/kibana/kibana-4.3.1-linux-x64.tar.gz
    tar zxf kibana-4.3.1-linux-x64.tar.gz
    [root@linux-node1 src]# mv kibana-4.3.1-linux-x64 /usr/local/
    [root@linux-node2 src]# ln -s /usr/local/kibana-4.3.1-linux-x64/ /usr/local/kibana

    Install Redis, nginx, and java

    yum install -y redis nginx java

    Configure and manage elasticsearch

    Manage elasticsearch on linux-node1

    Modify the elasticsearch configuration file and set ownership on the data directory

    [root@linux-node1 src]# grep -n '^[a-zA-Z]' /etc/elasticsearch/elasticsearch.yml
    17:cluster.name: chuck-cluster          # nodes with the same cluster name join the same cluster
    23:node.name: linux-node1               # this node's hostname
    33:path.data: /data/es-data             # data directory
    37:path.logs: /var/log/elasticsearch/   # log directory
    43:bootstrap.mlockall: true             # lock memory so it is not swapped out
    54:network.host: 0.0.0.0                # IPs allowed to access
    58:http.port: 9200                      # HTTP port
    transport.tcp.port: 9300
    node.master: true
    node.data: true
    discovery.zen.ping.unicast.hosts: ["192.168.1.46:9300", "192.168.1.42:9301"]
    discovery.zen.minimum_master_nodes: 1
    [root@linux-node1 ~]# mkdir -p /data/es-data
    [root@linux-node1 src]# chown elasticsearch.elasticsearch /data/es-data/

    Start elasticsearch

    [root@linux-node1 src]# systemctl start elasticsearch
    [root@linux-node1 src]# systemctl enable elasticsearch
    ln -s '/usr/lib/systemd/system/elasticsearch.service' '/etc/systemd/system/multi-user.target.wants/elasticsearch.service'
    [root@linux-node1 src]# systemctl status elasticsearch
    elasticsearch.service - Elasticsearch
       Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled)
       Active: active (running) since Thu 2016-01-14 09:30:25 CST; 14s ago
         Docs: http://www.elastic.co
     Main PID: 37954 (java)
       CGroup: /system.slice/elasticsearch.service
               └─37954 /bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConc...
    Jan 14 09:30:25 linux-node1 systemd[1]: Starting Elasticsearch...
    Jan 14 09:30:25 linux-node1 systemd[1]: Started Elasticsearch.
    [root@linux-node1 src]# netstat -lntup|grep 9200
    tcp6 0 0 :::9200 :::* LISTEN 37954/java

    Accessing port 9200 in a browser returns the node and cluster information
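
    The same check works from the command line with curl (output abridged; version fields depend on the packages installed):

    [root@linux-node1 ~]# curl http://192.168.1.46:9200
    {
      "name" : "linux-node1",
      "cluster_name" : "chuck-cluster",
      "version" : { ... },
      "tagline" : "You Know, for Search"
    }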

     

    Interacting with elasticsearch

    Two ways to interact

      • Java API : 
        node client 
        Transport client
      • RESTful API 
        Javascript 
        .NET 
        php 
        Perl 
        Python 
        Ruby

    Use the head plugin to show index and shard status

    /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head

    In the plugin's interface, create an index-demo/test index and submit the request
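
    The same document can be written from the command line with a RESTful request (the document body here is just an illustration):

    [root@linux-node1 ~]# curl -XPOST 'http://192.168.1.46:9200/index-demo/test' -d '{"user": "chuck", "message": "hello world"}'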

    Use the kopf plugin to monitor elasticsearch

    /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf

    Manage elasticsearch on linux-node2

    Copy the configuration file from linux-node1 to linux-node2, modify it, and set ownership on the data directory.
    cluster.name must be identical on both nodes; when nodes start they use multicast by default to discover the other members of the cluster.

    [root@linux-node1 src]# scp /etc/elasticsearch/elasticsearch.yml 192.168.1.42:/etc/elasticsearch/elasticsearch.yml
    [root@linux-node2 elasticsearch]# sed -i '23s#node.name: linux-node1#node.name: linux-node2#g' elasticsearch.yml
    [root@linux-node2 elasticsearch]# mkdir -p /data/es-data
    [root@linux-node2 elasticsearch]# chown elasticsearch.elasticsearch /data/es-data/

    Also change these settings on linux-node2:

    vim   elasticsearch.yml

    transport.tcp.port: 9301

    node.master: false
    node.data: true

    Start elasticsearch

    [root@linux-node2 elasticsearch]# systemctl enable elasticsearch.service
    ln -s '/usr/lib/systemd/system/elasticsearch.service' '/etc/systemd/system/multi-user.target.wants/elasticsearch.service'
    [root@linux-node2 elasticsearch]# systemctl start elasticsearch.service
    [root@linux-node2 elasticsearch]# systemctl status elasticsearch.service
    elasticsearch.service - Elasticsearch
       Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled)
       Active: active (running) since Thu 2016-01-14 02:56:35 CST; 4s ago
         Docs: http://www.elastic.co
      Process: 38519 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
     Main PID: 38520 (java)
       CGroup: /system.slice/elasticsearch.service
               └─38520 /bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConc...
    Jan 14 02:56:35 linux-node2 systemd[1]: Starting Elasticsearch...
    Jan 14 02:56:35 linux-node2 systemd[1]: Started Elasticsearch.

    Browse to the master node's IP again; information for both nodes now appears:
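
    The cluster state can also be verified with the health API (output abridged; status and counts depend on your cluster):

    [root@linux-node1 ~]# curl http://192.168.1.46:9200/_cluster/health?pretty
    {
      "cluster_name" : "chuck-cluster",
      "status" : "green",
      "number_of_nodes" : 2,
      "number_of_data_nodes" : 2,
      ...
    }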

    Configure logstash

    Use rubydebug to display detailed output; a codec is an encoder/decoder

    [root@linux-node1 bin]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug} }'
    Settings: Default filter workers: 1
    Logstash startup completed
    chuck                                          <== typed on stdin
    {
           "message" => "chuck",
          "@version" => "1",
        "@timestamp" => "2016-01-14T06:07:50.117Z",
              "host" => "linux-node1"
    }                                              <== event printed by rubydebug

    Use logstash to write events into elasticsearch

    [root@linux-node1 bin]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["192.168.1.46:9200"] } }'
    Settings: Default filter workers: 1
    Logstash startup completed
    maliang
    chuck
    chuck-blog.com
    www.chuck-bllog.com

    Start logstash from a configuration file; each event is written to elasticsearch and also printed with rubydebug

    [root@linux-node1 ~]# cat normal.conf
    input { stdin { } }
    output {
      elasticsearch { hosts => ["localhost:9200"] }
      stdout { codec => rubydebug }
    }
    [root@linux-node1 ~]# /opt/logstash/bin/logstash -f normal.conf
    Settings: Default filter workers: 1
    Logstash startup completed
    123
    {
           "message" => "123",
          "@version" => "1",
        "@timestamp" => "2016-01-14T06:51:13.411Z",
              "host" => "linux-node1"
    }

    A conf file for collecting the system log

    [root@linux-node1 ~]# cat system.conf
    input {
      file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
      }
    }
    output {
      elasticsearch {
        hosts => ["192.168.1.46:9200"]
        index => "system-%{+YYYY.MM.dd}"
      }
    }
    [root@linux-node1 ~]# /opt/logstash/bin/logstash -f system.conf

     

    Collect the elasticsearch error log

    Here the system log from the previous step and this error log (a Java application log) are collected together. An if conditional in the output section writes the two log types to different indices. The type value used here (the option name is fixed as type and cannot be changed) must not collide with the name of any field in the log format; in other words, the log events themselves must not contain a field named type.

    [root@linux-node1 ~]# cat all.conf
    input {
      file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
      }
      file {
        path => "/var/log/elasticsearch/chuck-cluster.log"
        type => "es-error"
        start_position => "beginning"
      }
    }
    output {
      if [type] == "system" {
        elasticsearch {
          hosts => ["192.168.1.46:9200"]
          index => "system-%{+YYYY.MM.dd}"
        }
      }
      if [type] == "es-error" {
        elasticsearch {
          hosts => ["192.168.1.46:9200"]
          index => "es-error-%{+YYYY.MM.dd}"
        }
      }
    }
    [root@linux-node1 ~]# /opt/logstash/bin/logstash -f all.conf

     

    Configure kibana

    Start a screen session and launch kibana inside it

    [root@linux-node1 ~]# screen
    [root@linux-node1 ~]# /usr/local/kibana/bin/kibana
    Detach from the screen session with Ctrl+a d

    Open 192.168.1.46:5601 in a browser
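
    For kibana to find elasticsearch, config/kibana.yml under the kibana install directory must point at the elasticsearch node; for this setup the relevant lines would plausibly be:

    [root@linux-node1 ~]# grep -Ev '^#|^$' /usr/local/kibana/config/kibana.yml
    server.port: 5601
    server.host: "0.0.0.0"
    elasticsearch.url: "http://192.168.1.46:9200"
    kibana.index: ".kibana"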

    Collect nginx, syslog, and tcp logs with logstash

    Collect the nginx access log

    Here the json codec plugin is used to split the log into fields as key-value pairs, which makes the format clearer and easier to search and also lowers CPU load.
    Change the log format in the nginx configuration file to JSON, as sketched below.
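
    The original post shows the nginx change as a screenshot; a typical JSON log_format (field names are illustrative) added to the http block of nginx.conf looks like this:

    log_format json '{"@timestamp":"$time_iso8601",'
                    '"host":"$server_addr",'
                    '"clientip":"$remote_addr",'
                    '"size":$body_bytes_sent,'
                    '"responsetime":$request_time,'
                    '"url":"$uri",'
                    '"status":"$status"}';
    access_log /var/log/nginx/access_log_json.log json;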

    Start nginx
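
    On CentOS 7 nginx is managed by systemd:

    [root@linux-node1 ~]# systemctl start nginx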

    Use logstash to collect the nginx access log by extending all.conf; a sketch of the added sections follows
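
    The updated all.conf is shown as a screenshot in the original; assuming the JSON log path from the nginx example above, the pieces merged into the existing input and output blocks would look roughly like this (the type and index names are illustrative):

    input {
      file {
        path => "/var/log/nginx/access_log_json.log"
        codec => json
        type => "nginx-log"
        start_position => "beginning"
      }
    }
    output {
      if [type] == "nginx-log" {
        elasticsearch {
          hosts => ["192.168.1.46:9200"]
          index => "nginx-log-%{+YYYY.MM.dd}"
        }
      }
    }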

    Collect the system syslog log

    Above, the file input collected the system log /var/log/messages, but in production the syslog input plugin is normally used to receive logs directly.
    Modify the syslog configuration so that log messages are forwarded to port 514, for example:
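
    On CentOS 7 this means appending a forwarding rule to rsyslog and restarting it (the @@ prefix forwards over TCP; a single @ would use UDP):

    [root@linux-node1 ~]# echo '*.* @@192.168.1.46:514' >> /etc/rsyslog.conf
    [root@linux-node1 ~]# systemctl restart rsyslog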

    Add a system-syslog section to all.conf (sketched below) and start logstash with all.conf
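
    The original shows this as a screenshot; the syslog pieces merged into all.conf would look roughly like this (the index name is illustrative):

    input {
      syslog {
        type => "system-syslog"
        host => "192.168.1.46"
        port => "514"
      }
    }
    output {
      if [type] == "system-syslog" {
        elasticsearch {
          hosts => ["192.168.1.46:9200"]
          index => "system-syslog-%{+YYYY.MM.dd}"
        }
      }
    }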

     

    The new system-syslog index then appears in the elasticsearch head plugin

    Write tcp.conf
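
    The original shows tcp.conf as a screenshot; based on the nc commands that follow, it would look roughly like this:

    [root@linux-node1 ~]# cat tcp.conf
    input {
      tcp {
        host => "192.168.1.46"
        port => "6666"
      }
    }
    output {
      stdout {
        codec => rubydebug
      }
    }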

    Use nc to write data to port 6666

    [root@linux-node1 ~]# nc 192.168.1.46 6666 </var/log/yum.log

    Alternatively, write a message through the tcp pseudo-device

     echo "chuck" >/dev/tcp/192.168.1.46/6666

    That is all for now....
