  Enterprise Log Analysis Platform (10): ELK Stack Operations in Depth (Part 2)

    1. Enterprise Elasticsearch Usage in Detail

     

    1.1 Basic Concepts

    Elasticsearch    MySQL
    Index            Database
    Type             Table
    Document         Row
    Field            Column

    - Node: a server running a single ES instance
    - Cluster: one or more nodes forming a cluster
    - Index: a collection of documents (the name must be lowercase)
    - Document: a single record in an Index; many documents make up an Index
    - Type: an Index can define one or more types to group documents logically
    - Field: the smallest unit of storage in ES
    - Shards: ES splits an Index into several pieces; each piece is a shard
    - Replicas: one or more copies of an Index
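The analogy above maps directly onto Elasticsearch's REST URL scheme, where a document is addressed as /index/type/id, much like database/table/row. A quick illustrative sketch (the index, type, and id names here are made up; no cluster is contacted):

```python
# Hypothetical illustration of the ES/MySQL analogy as a REST path:
# /<index>/<type>/<id> corresponds to database/table/row.
def doc_path(index, doc_type, doc_id):
    # Index names must be lowercase, as noted above.
    return "/{}/{}/{}".format(index.lower(), doc_type, doc_id)

print(doc_path("logs-test-2018.08.17", "doc", "1"))
```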

     

    1.2 Lab Environment

    Hostname          IP address        Purpose
    ES1               192.168.200.191   elasticsearch-node1
    ES2               192.168.200.192   elasticsearch-node2
    ES3               192.168.200.193   elasticsearch-node3
    Logstash-Kibana   192.168.200.194   log visualization server
     
    #Adjust the initial system environment
    [root@ES1 ~]# cat /etc/redhat-release
    CentOS Linux release 7.5.1804 (Core)
    [root@ES1 ~]# uname -r
    3.10.0-862.3.3.el7.x86_64
    [root@ES1 ~]# systemctl stop firewalld
    [root@ES1 ~]# setenforce 0
    setenforce: SELinux is disabled
    [root@ES1 ~]# sestatus
    SELinux status: disabled
    #Switch to the Asia/Shanghai time zone
    [root@ES1 ~]# /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
    #Install the time synchronization tool
    [root@ES1 ~]# yum -y install ntpdate
    #Synchronize the time
    [root@ES1 ~]# ntpdate ntp1.aliyun.com
     

    1.3 Enterprise Elasticsearch Cluster Deployment

     
    #Perform the following steps on all three ES nodes
    #Install JDK 1.8 via yum
    [root@ES1 ~]# yum -y install java-1.8.0-openjdk
    #Import the GPG key used for installing ES via yum
    [root@ES1 ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
    #Add the ES yum repository file
    [root@ES1 ~]# vim /etc/yum.repos.d/elastic.repo
    [root@ES1 ~]# cat /etc/yum.repos.d/elastic.repo
    [elastic-6.x]
    name=Elastic repository for 6.x packages
    baseurl=https://artifacts.elastic.co/packages/6.x/yum
    gpgcheck=1
    gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
    enabled=1
    autorefresh=1
    type=rpm-md
    #Install elasticsearch
    [root@ES1 ~]# yum -y install elasticsearch
    #Edit the elasticsearch configuration file
    #Modify the following settings
    [root@ES1 ~]# cat -n /etc/elasticsearch/elasticsearch.yml.bak | sed -n '17p;23p;33p;37p;55p;59p;68p;72p'
        17  #cluster.name: my-application
        23  #node.name: node-1
        33  path.data: /var/lib/elasticsearch
        37  path.logs: /var/log/elasticsearch
        55  #network.host: 192.168.0.1
        59  #http.port: 9200
        68  #discovery.zen.ping.unicast.hosts: ["host1", "host2"]
        72  #discovery.zen.minimum_master_nodes:
    [root@ES1 ~]# cat -n /etc/elasticsearch/elasticsearch.yml | sed -n '17p;23p;33p;37p;55p;59p;68p;72p'
        17  cluster.name: elk-cluster
        23  node.name: node-1
        33  path.data: /var/lib/elasticsearch
        37  path.logs: /var/log/elasticsearch
        55  network.host: 192.168.200.191
        59  http.port: 9200
        68  discovery.zen.ping.unicast.hosts: ["192.168.200.191", "192.168.200.192", "192.168.200.193"]
        72  discovery.zen.minimum_master_nodes: 2
    #Copy the ES1 configuration file to ES2 and ES3
    [root@ES1 ~]# scp /etc/elasticsearch/elasticsearch.yml 192.168.200.193:/etc/elasticsearch/
    root@192.168.200.193's password:
    elasticsearch.yml                100% 2903   3.8MB/s   00:00
    [root@ES1 ~]# scp /etc/elasticsearch/elasticsearch.yml 192.168.200.192:/etc/elasticsearch/
    root@192.168.200.192's password:
    elasticsearch.yml                100% 2903   5.0MB/s   00:00
    #On ES2 and ES3, only the node name and listen address need to change
    [root@ES2 elasticsearch]# sed -n '23p;55p' /etc/elasticsearch/elasticsearch.yml
    node.name: node-2
    network.host: 192.168.200.192
    [root@ES3 yum.repos.d]# sed -n '23p;55p' /etc/elasticsearch/elasticsearch.yml
    node.name: node-3
    network.host: 192.168.200.193
    #Start elasticsearch on all three ES nodes
    [root@ES1 ~]# systemctl start elasticsearch
    [root@ES2 ~]# systemctl start elasticsearch
    [root@ES3 ~]# systemctl start elasticsearch
    #Check the health of the cluster nodes
    [root@ES1 ~]# curl -X GET "192.168.200.191:9200/_cat/health?v"
    epoch      timestamp cluster     status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
    1534519567 23:26:07  elk-cluster green           3         3      0   0    0    0        0             0                  - 100.0%
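The value discovery.zen.minimum_master_nodes: 2 follows the usual quorum rule for a 3-node cluster: a majority of master-eligible nodes, floor(n/2) + 1, which prevents split-brain because two disjoint partitions can never both hold a majority. A quick sketch of the arithmetic:

```python
# Quorum rule behind discovery.zen.minimum_master_nodes:
# majority = floor(n / 2) + 1 of the master-eligible nodes.
def minimum_master_nodes(master_eligible_nodes):
    return master_eligible_nodes // 2 + 1

print(minimum_master_nodes(3))  # the 3-node cluster above -> 2
```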
     

    1.4 Elasticsearch Data Operations

    RESTful API format

     
    curl -X<verb> '<protocol>://<host>:<port>/<path>?<query_string>' -d '<body>'
    Parameter      Description
    verb           HTTP method, e.g. GET, POST, PUT, HEAD, DELETE
    host           hostname of any node in the ES cluster
    port           ES HTTP service port, 9200 by default
    path           index path
    query_string   optional query parameters; e.g. ?pretty pretty-prints the JSON output
    -d             carries a JSON-formatted request body
    body           the JSON-formatted request body you write
     
    #List all indexes in the database
    [root@ES1 ~]# curl -X GET "192.168.200.191:9200/_cat/indices?v"
    health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
    #Create an index
    [root@ES1 ~]# curl -X PUT "192.168.200.191:9200/logs-test-2018.08.17"
    {"acknowledged":true,"shards_acknowledged":true,"index":"logs-test-2018.08.17"}
    #List all indexes again
    [root@ES1 ~]# curl -X GET "192.168.200.191:9200/_cat/indices?v"
    health status index                uuid                   pri rep docs.count docs.deleted store.size pri.store.size
    green  open   logs-test-2018.08.17 a-M8lGYtSIqvahUeFqd8Vg   5   1          0            0      2.2kb          1.1kb

    A basic familiarity with these Elasticsearch operations is enough; see the official documentation for details:

    https://www.elastic.co/guide/en/elasticsearch/reference/current/_index_and_query_a_document.html

     

    1.5 Managing Elasticsearch Graphically with the Head Plugin

     
    #Download Node.js for the head plugin
    [root@ES1 ~]# wget https://npm.taobao.org/mirrors/node/latest-v4.x/node-v4.4.7-linux-x64.tar.gz
    [root@ES1 ~]# ls
    anaconda-ks.cfg  node-v4.4.7-linux-x64.tar.gz
    [root@ES1 ~]# tar xf node-v4.4.7-linux-x64.tar.gz -C /usr/local/
    [root@ES1 ~]# mv /usr/local/node-v4.4.7-linux-x64/ /usr/local/node-v4.4
    [root@ES1 ~]# echo -e 'NODE_HOME=/usr/local/node-v4.4\nPATH=$NODE_HOME/bin:$PATH\nexport NODE_HOME PATH' >> /etc/profile
    [root@ES1 ~]# tail -3 /etc/profile
    NODE_HOME=/usr/local/node-v4.4
    PATH=$NODE_HOME/bin:$PATH
    export NODE_HOME PATH
    [root@ES1 ~]# source /etc/profile
    #Install the git client
    [root@ES1 ~]# yum -y install git
    #Clone the elasticsearch-head source with git
    [root@ES1 ~]# git clone git://github.com/mobz/elasticsearch-head.git
    [root@ES1 ~]# cd elasticsearch-head/
    [root@ES1 elasticsearch-head]# npm install
    Note:
    Errors during this install step do not matter; they do not affect usage.
    #Edit the Gruntfile.js configuration file in the source tree
    #Add one line below line 95, as follows
    [root@ES1 elasticsearch-head]# cat -n Gruntfile.js | sed -n '90,97p'
        90  connect: {
        91      server: {
        92          options: {
        93              port: 9100,
        94              base: '.',
        95              keepalive: true,    #add this trailing comma
        96              hostname: '*'       #add this line
        97          }
    #Start the head plugin
    [root@ES1 elasticsearch-head]# npm run start

    Now open http://IP:9100 in a browser.


    The page loads in the browser, but the plugin cannot connect to the Elasticsearch API. Since ES 5.0, connecting to the API from the browser requires CORS authorization to be enabled first.

     
    #First add two lines to the ES configuration file
    [root@ES1 ~]# echo -e 'http.cors.enabled: true\nhttp.cors.allow-origin: "*"' >> /etc/elasticsearch/elasticsearch.yml
    [root@ES1 ~]# tail -2 /etc/elasticsearch/elasticsearch.yml
    http.cors.enabled: true
    http.cors.allow-origin: "*"
    #Restart elasticsearch
    [root@ES1 ~]# systemctl restart elasticsearch


     

    2. Enterprise Logstash Usage in Detail

     

    2.1 Installing Logstash and Common Input Plugins

     

    2.1.1 Installing Logstash

     
    #Install JDK 1.8 via yum
    [root@Logstash-Kibana ~]# yum -y install java-1.8.0-openjdk
    [root@Logstash-Kibana ~]# vim /etc/yum.repos.d/elastic.repo
    [root@Logstash-Kibana ~]# cat /etc/yum.repos.d/elastic.repo
    [elastic-6.x]
    name=Elastic repository for 6.x packages
    baseurl=https://artifacts.elastic.co/packages/6.x/yum
    gpgcheck=1
    gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
    enabled=1
    autorefresh=1
    type=rpm-md
    [root@Logstash-Kibana ~]# yum -y install logstash
     

    2.1.2 Logstash Conditionals

    • Comparison operators: 
      • equality: ==, !=, <, >, <=, >=
      • regex: =~ (matches), !~ (does not match)
      • inclusion: in (contains), not in (does not contain)
    • Boolean operators: 
      • and
      • or
      • nand (not and)
      • xor (exclusive or)
    • Unary operators: 
      • !: negation
      • (): compound expression
      • !(): negation of a compound expression
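For readers more comfortable with a general-purpose language, the operators above behave roughly like their Python counterparts. This is an illustrative sketch only; Logstash conditionals live in its own config language, and the sample field values here are made up:

```python
# Rough Python equivalents of the Logstash operators listed above.
import re

status = 404
tags = ["syslog", "test"]
message = "GET /index.html"

equality   = status == 404                                    # ==
regex      = re.search(r"/index\.html", message) is not None  # =~
membership = "syslog" in tags                                 # in
combined   = equality and (regex or membership)               # and / or / ()
negated    = not combined                                     # !
print(combined, negated)
```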
     

    2.1.3 Logstash Input Plugins: Stdin, File, Tcp, and Beats

     
    #(1) stdin example
    input {
        stdin {              #standard input (data typed interactively by the user)
        }
    }
    filter {                 #conditional filtering (field extraction)
    }
    output {
        stdout {
            codec => rubydebug    #debug output (for testing config file syntax)
        }
    }
    #(2) file example
    input {
        file {
            path => "/var/log/messages"    #path of the file to read
            tags => "123"                  #tag
            type => "syslog"               #type
        }
    }
    filter {                 #conditional filtering (field extraction)
    }
    output {
        stdout {
            codec => rubydebug    #debug output (for testing config file syntax)
        }
    }
    #(3) tcp example
    input {
        tcp {
            port => 12345
            type => "nc"
        }
    }
    filter {                 #conditional filtering (field extraction)
    }
    output {
        stdout {
            codec => rubydebug    #debug output (for testing config file syntax)
        }
    }
    #(4) beats example
    input {
        beats {              #covered separately later; not demonstrated here
            port => 5044
        }
    }
    filter {                 #conditional filtering (field extraction)
    }
    output {
        stdout {
            codec => rubydebug    #debug output (for testing config file syntax)
        }
    }

    (1) input ==> stdin{}: testing the standard-input plugin

     
    #Create the logstash configuration file
    [root@Logstash-Kibana ~]# vim /etc/logstash/conf.d/test.conf
    [root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/test.conf
    input {
        stdin {
        }
    }
    filter {
    }
    output {
        stdout {
            codec => rubydebug
        }
    }
    #Test whether the logstash configuration file is correct
    [root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf -t
    OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
    WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
    Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
    [WARN ] 2018-08-19 23:09:16.736 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
    Configuration OK    #the configuration file is correct
    [INFO ] 2018-08-19 23:09:19.018 [LogStash::Runner] runner - Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
    #Start logstash for the test
    [root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
    #Many lines omitted here
    sadadasdasa    #this is what the user typed
    {
              "host" => "Logstash-Kibana",
           "message" => "sadadasdasa",    #logstash stores the input in the message field
          "@version" => "1",
        "@timestamp" => 2018-08-19T15:14:48.678Z
    }
    13213121
    {
              "host" => "Logstash-Kibana",
           "message" => "13213121",
          "@version" => "1",
        "@timestamp" => 2018-08-19T15:14:52.212Z
    }
    Note:
    Letting the user type data directly is standard input, stdin{};
    storing the input in message and printing it straight to the screen for debugging is standard output, stdout { codec => rubydebug }.

    (2) input ==> file{}: reading data from a file

     
    #Edit the logstash configuration file
    [root@Logstash-Kibana ~]# vim /etc/logstash/conf.d/test.conf
    [root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/test.conf
    input {
        file {
            path => "/var/log/messages"
            tags => "123"
            type => "syslog"
        }
    }
    filter {
    }
    output {
        stdout {
            codec => rubydebug
        }
    }
    #Start logstash
    [root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
    #In another window, append a line to the log file
    [root@Logstash-Kibana ~]# echo "1111" >> /var/log/messages
    #Go back and check logstash's debug output
    {
        "@timestamp" => 2018-08-19T15:26:10.469Z,
          "@version" => "1",
              "host" => "Logstash-Kibana",
              "tags" => [
            [0] "123"
        ],
           "message" => "1111",
              "path" => "/var/log/messages",
              "type" => "syslog"
    }

    (3) input ==> tcp{}: receiving logs by listening on a TCP port

     
    #Edit the logstash configuration file
    [root@Logstash-Kibana ~]# vim /etc/logstash/conf.d/test.conf
    [root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/test.conf
    input {
        tcp {
            port => 12345
            type => "nc"
        }
    }
    filter {
    }
    output {
        stdout {
            codec => rubydebug
        }
    }
    #Start logstash
    [root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
    #In another window, check that port 12345 is listening
    [root@Logstash-Kibana ~]# netstat -antup | grep 12345
    tcp6       0      0 :::12345       :::*       LISTEN      12626/java
    #On ES1, install nc and send data to port 12345
    [root@ES1 ~]# yum -y install nc
    [root@ES1 ~]# echo "welcome to yunjisuan" | nc 192.168.200.194 12345
    #Go back and check logstash's debug output, as follows
    {
              "type" => "nc",
           "message" => "welcome to yunjisuan",
              "port" => 41650,
          "@version" => "1",
        "@timestamp" => 2018-08-19T15:43:50.543Z,
              "host" => "192.168.200.191"
    }
     

    2.1.4 For more input plugins, see the official documentation

    https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html

     

    2.2 Logstash Codec Plugins (Input/Output)

     
    #Json/Json_lines example
    input {
        stdin {
            codec => json {          #decode json-formatted input as UTF-8
                charset => ["UTF-8"]
            }
        }
    }
    filter {
    }
    output {
        stdout {
            codec => rubydebug
        }
    }

    (1) codec => json {}: decoding JSON-formatted data

     
    #Edit the logstash configuration file
    [root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/test.conf
    input {
        stdin {
            codec => json {
                charset => ["UTF-8"]
            }
        }
    }
    filter {
    }
    output {
        stdout {
            codec => rubydebug
        }
    }
    #Start logstash
    [root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
    #In another window, generate json-formatted data in an interactive python session
    >>> import json
    >>> data = [{'a':1,'b':2,'c':3,'d':4,'e':5}]
    >>> json = json.dumps(data)
    >>> print json
    [{"a": 1, "c": 3, "b": 2, "e": 5, "d": 4}]    #this is json-formatted data
    #Paste the json data as input, then check logstash's output
    [{"a": 1, "c": 3, "b": 2, "e": 5, "d": 4}]
    {
                 "b" => 2,
                 "a" => 1,
              "host" => "Logstash-Kibana",
                 "c" => 3,
                 "e" => 5,
                 "d" => 4,
          "@version" => "1",
        "@timestamp" => 2018-08-20T13:27:58.991Z
    }
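Conceptually, the json codec does what a plain json.loads would do before the event is built: one decoded object per array element, each becoming a set of event fields. A minimal sketch, using the same sample data as above (illustration only, not the codec's actual implementation):

```python
# Sketch of what the json codec does: decode a UTF-8 JSON line into
# structured fields before the Logstash event is assembled.
import json

line = '[{"a": 1, "c": 3, "b": 2, "e": 5, "d": 4}]'
events = json.loads(line)   # one event per array element
fields = events[0]          # the field dict for the first event
print(sorted(fields.items()))
```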
     

    2.3 Logstash Filter Plugins: Json and Kv

     
    #Json example
    input {
        stdin {
        }
    }
    filter {
        json {
            source => "message"    #parse the json data held in message into structured fields
            target => "content"    #store the parsed result in content
        }
    }
    output {
        stdout {
            codec => rubydebug
        }
    }
    #Kv example
    filter {
        kv {
            field_split => "&?"    #split the input on the & and ? characters
        }
    }

    (1) filter => json {}: parsing JSON-encoded data into structured fields

     
    #Edit the logstash configuration file
    [root@Logstash-Kibana ~]# vim /etc/logstash/conf.d/test.conf
    [root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/test.conf
    input {
        stdin {
        }
    }
    filter {
    }
    output {
        stdout {
            codec => rubydebug
        }
    }
    #Start the logstash service
    [root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
    #Type json-formatted data interactively
    {"a": 1, "c": 3, "b": 2, "e": 5, "d": 4}
    {
          "@version" => "1",
           "message" => "{\"a\": 1, \"c\": 3, \"b\": 2, \"e\": 5, \"d\": 4}",    #everything is stored in the message field
        "@timestamp" => 2018-08-20T14:08:54.275Z,
              "host" => "Logstash-Kibana"
    }
    #Edit the logstash configuration file again
    [root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/test.conf
    input {
        stdin {
        }
    }
    filter {
        json {
            source => "message"
            target => "content"
        }
    }
    output {
        stdout {
            codec => rubydebug
        }
    }
    #Start the logstash service
    [root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
    #Type the following input to be parsed
    {"a": 1, "c": 3, "b": 2, "e": 5, "d": 4}
    {
           "content" => {    #the json has been parsed into structured fields
            "a" => 1,
            "e" => 5,
            "d" => 4,
            "c" => 3,
            "b" => 2
        },
          "@version" => "1",
           "message" => "{\"a\": 1, \"c\": 3, \"b\": 2, \"e\": 5, \"d\": 4}",
        "@timestamp" => 2018-08-20T14:05:39.915Z,
              "host" => "Logstash-Kibana"
    }

    (2) filter => kv {}: splitting input on specified characters

     
    #Edit the logstash configuration file
    [root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/test.conf
    input {
        stdin {
        }
    }
    filter {
        kv {
            field_split => "&?"
        }
    }
    output {
        stdout {
            codec => rubydebug
        }
    }
    #Start logstash
    [root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
    #Type the following data, then check the parsed result
    name=123&yunjisuan=benet&yun=166
    {
              "host" => "Logstash-Kibana",
         "yunjisuan" => "benet",
               "yun" => "166",
          "@version" => "1",
           "message" => "name=123&yunjisuan=benet&yun=166",
        "@timestamp" => 2018-08-20T14:16:38.227Z,
              "name" => "123"
    }
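What the kv filter did above can be sketched in a few lines: break the message on the field_split characters (& and ?), then split each chunk into a key/value pair on "=". This is an illustration of the behavior, not the plugin's actual implementation:

```python
# Sketch of kv { field_split => "&?" }: split on & or ?, then on "=".
import re

message = "name=123&yunjisuan=benet&yun=166"
fields = dict(
    chunk.split("=", 1)                     # key=value -> (key, value)
    for chunk in re.split(r"[&?]", message)  # split on the field_split chars
    if "=" in chunk                          # skip chunks without a "="
)
print(fields)
```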
     

    2.4 Logstash Filter Plugin: Grok

     

    2.4.1 Grok data extraction with custom regular expressions

     
    #Sample log input:
    223.72.85.86 GET /index.html 15824 200
    #Grok data extraction with a custom regular expression
    input {
        stdin {
        }
    }
    filter {
        grok {
            match => {
                "message" => '(?<client>[0-9.]+)[ ]+(?<method>[A-Z]+)[ ]+(?<request>[a-zA-Z/.]+)[ ]+(?<bytes>[0-9]+)[ ]+(?<num>[0-9]+)'
            }
        }
    }
    output {
        stdout {
            codec => rubydebug
        }
    }
    #Demonstration
    #Edit the logstash configuration file
    [root@Logstash-Kibana ~]# vim /etc/logstash/conf.d/test.conf
    [root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/test.conf
    input {
        stdin {
        }
    }
    filter {
        grok {
            match => {
                "message" => '(?<client>[0-9.]+)[ ]+(?<method>[A-Z]+)[ ]+(?<request>[a-zA-Z/.]+)[ ]+(?<bytes>[0-9]+)[ ]+(?<num>[0-9]+)'
            }
        }
    }
    output {
        stdout {
            codec => rubydebug
        }
    }
    [root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
    #Type the sample log line to test the extraction
    223.72.85.86 GET /index.html 15824 200
    {
           "message" => "223.72.85.86 GET /index.html 15824 200",
             "bytes" => "15824",
               "num" => "200",
          "@version" => "1",
            "method" => "GET",
            "client" => "223.72.85.86",
           "request" => "/index.html",
              "host" => "Logstash-Kibana",
        "@timestamp" => 2018-08-21T13:50:27.029Z
    }
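Grok's (?&lt;name&gt;...) capture syntax is ordinary named-group regex, so the same custom pattern can be verified offline in Python, where the equivalent syntax is (?P&lt;name&gt;...):

```python
# The custom grok pattern above, rewritten with Python named groups.
import re

pattern = (r'(?P<client>[0-9.]+)[ ]+(?P<method>[A-Z]+)[ ]+'
           r'(?P<request>[a-zA-Z/.]+)[ ]+(?P<bytes>[0-9]+)[ ]+(?P<num>[0-9]+)')
line = "223.72.85.86 GET /index.html 15824 200"
fields = re.match(pattern, line).groupdict()
print(fields)
```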
     

    2.4.2 Grok data extraction with built-in patterns

    To make data extraction easier, the project ships a set of built-in grok patterns covering common formats. 
    The default built-in patterns are listed on the official page: 
    https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns

     
    #The built-in pattern file that logstash loads by default
    [root@Logstash-Kibana ~]# rpm -ql logstash | grep grok-patterns
    /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns/grok-patterns
    [root@Logstash-Kibana ~]# cat /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns/grok-patterns
    ...output too long; countless lines omitted here, view the file yourself...
    #Demonstration
    #Sample log input:
    223.72.85.86 GET /index.html 15824 200
    #Edit the logstash configuration file
    [root@Logstash-Kibana ~]# vim /etc/logstash/conf.d/test.conf
    [root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/test.conf
    input {
        stdin {
        }
    }
    filter {
        grok {
            match => {
                "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:num}"
            }
        }
    }
    output {
        stdout {
            codec => rubydebug
        }
    }
    #Start logstash
    [root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
    #Type the sample log line; the output is as follows
    223.72.85.86 GET /index.html 15824 200
    {
            "client" => "223.72.85.86",
            "method" => "GET",
             "bytes" => "15824",
              "host" => "Logstash-Kibana",
               "num" => "200",
           "message" => "223.72.85.86 GET /index.html 15824 200",
          "@version" => "1",
        "@timestamp" => 2018-08-21T14:19:04.960Z,
           "request" => "/index.html"
    }
     

    2.4.3 Grok data extraction with custom pattern files

    Example: convert the custom regular expression from 2.4.1 into a custom named pattern

     
    #Sample log input (with one new field):
    223.72.85.86 GET /index.html 15824 200 "welcome to yunjisuan"
    #Edit the logstash configuration file
    [root@Logstash-Kibana ~]# vim /etc/logstash/conf.d/test.conf
    [root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/test.conf
    input {
        stdin {
        }
    }
    filter {
        grok {
            patterns_dir => "/opt/patterns"    #path of the custom pattern file
            match => {
                "message" => '%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:num} "%{STRING:content}"'
            }
        }
    }
    output {
        stdout {
            codec => rubydebug
        }
    }
    #Create the custom pattern file
    [root@Logstash-Kibana ~]# vim /opt/patterns
    [root@Logstash-Kibana ~]# cat /opt/patterns
    STRING .*
    #Start logstash
    [root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
    #Type the sample log line and check the extraction result
    223.72.85.86 GET /index.html 15824 200 "welcome to yunjisuan"
    {
            "method" => "GET",
          "@version" => "1",
             "bytes" => "15824",
            "client" => "223.72.85.86",
        "@timestamp" => 2018-08-21T14:38:04.361Z,
              "host" => "Logstash-Kibana",
           "request" => "/index.html",
               "num" => "200",
           "content" => "welcome to yunjisuan",
           "message" => "223.72.85.86 GET /index.html 15824 200 \"welcome to yunjisuan\""
    }
     

    2.4.4 Grok multi-pattern data extraction

    Sometimes we need to extract data from more than one log format, 
    so we configure grok to match against multiple patterns.

     
    #Sample log input:
    223.72.85.86 GET /index.html 15824 200 "welcome to yunjisuan"
    223.72.85.86 GET /index.html 15824 200 《Mr.chen-2018-8-21》
    #Edit the logstash configuration file
    [root@Logstash-Kibana ~]# vim /etc/logstash/conf.d/test.conf
    [root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/test.conf
    input {
        stdin {
        }
    }
    filter {
        grok {
            patterns_dir => "/opt/patterns"
            match => [    #note the difference between multi-pattern and single-pattern matching
                "message",'%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:num} "%{STRING:content}"',
                "message",'%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:num} 《%{NAME:name}》'
            ]
        }
    }
    output {
        stdout {
            codec => rubydebug
        }
    }
    #Add another custom pattern variable
    [root@Logstash-Kibana ~]# vim /opt/patterns
    [root@Logstash-Kibana ~]# cat /opt/patterns
    STRING .*
    NAME .*
    #Start logstash
    [root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
    #Type the sample log lines and check the extraction results
    223.72.85.86 GET /index.html 15824 200 "welcome to yunjisuan"
    {
          "@version" => "1",
            "client" => "223.72.85.86",
           "request" => "/index.html",
               "num" => "200",
        "@timestamp" => 2018-08-21T14:47:26.971Z,
           "content" => "welcome to yunjisuan",
              "host" => "Logstash-Kibana",
             "bytes" => "15824",
           "message" => "223.72.85.86 GET /index.html 15824 200 \"welcome to yunjisuan\"",
            "method" => "GET"
    }
    223.72.85.86 GET /index.html 15824 200 《Mr.chen-2018-8-21》
    {
          "@version" => "1",
            "client" => "223.72.85.86",
           "request" => "/index.html",
               "num" => "200",
        "@timestamp" => 2018-08-21T14:47:40.430Z,
              "host" => "Logstash-Kibana",
             "bytes" => "15824",
              "name" => "Mr.chen-2018-8-21",
           "message" => "223.72.85.86 GET /index.html 15824 200 《Mr.chen-2018-8-21》",
            "method" => "GET"
    }
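With a match array, grok tries each pattern in order and keeps the captures of the first one that matches, tagging the event _grokparsefailure if none do. A rough Python sketch of that behavior (the patterns here are simplified equivalents of the ones above, and the sample lines are made up):

```python
# First-match-wins multi-pattern matching, as grok does with a match array.
import re

patterns = [
    r'(?P<client>[\d.]+) (?P<method>[A-Z]+) (?P<request>\S+) '
    r'(?P<bytes>\d+) (?P<num>\d+) "(?P<content>.*)"',
    r'(?P<client>[\d.]+) (?P<method>[A-Z]+) (?P<request>\S+) '
    r'(?P<bytes>\d+) (?P<num>\d+) (?P<name>.*)',
]

def grok_first_match(line):
    for pat in patterns:                # try each pattern in order
        m = re.match(pat, line)
        if m:
            return m.groupdict()        # first match wins
    return {"tags": ["_grokparsefailure"]}  # grok tags unmatched events

print(grok_first_match('1.2.3.4 GET /index.html 15824 200 "hello"'))
```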
     

    2.5 Logstash Filter Plugin: geoip

    The geoip plugin analyzes where an IP comes from, and Kibana's map feature can display the result visually.

     
    #Sample log input:
    223.72.85.86 GET /index.html 15824 200 "welcome to yunjisuan"
    119.147.146.189 GET /index.html 15824 200 《Mr.chen-2018-8-21》
    #Edit the logstash configuration file
    [root@Logstash-Kibana ~]# vim /etc/logstash/conf.d/test.conf
    [root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/test.conf
    input {
        stdin {
        }
    }
    filter {
        grok {
            patterns_dir => "/opt/patterns"
            match => [
                "message",'%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:num} "%{STRING:content}"',
                "message",'%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:num} 《%{NAME:name}》'
            ]
        }
        geoip {
            source => "client"
            database => "/opt/GeoLite2-City.mmdb"
        }
    }
    output {
        stdout {
            codec => rubydebug
        }
    }
    #Download the geoip database
    [root@Logstash-Kibana ~]# wget http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz
    #Unpack and install the geoip database
    [root@Logstash-Kibana ~]# tar xf GeoLite2-City.tar.gz
    [root@Logstash-Kibana ~]# ls
    anaconda-ks.cfg  GeoLite2-City_20180807  GeoLite2-City.tar.gz
    [root@Logstash-Kibana ~]# cp GeoLite2-City_20180807/GeoLite2-City.mmdb /opt/
    #Start logstash
    [root@Logstash-Kibana opt]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
    #Type the sample log lines and check the extraction results
    223.72.85.86 GET /index.html 15824 200 "welcome to yunjisuan"
    {
             "geoip" => {
            "country_code3" => "CN",         #country of the IP
                "city_name" => "Beijing",    #city of the IP
                "longitude" => 116.3889,
              "region_code" => "BJ",
            "country_code2" => "CN",
                 "location" => {
                "lon" => 116.3889,    #longitude on the map
                "lat" => 39.9288      #latitude on the map
            },
                 "timezone" => "Asia/Shanghai",
                 "latitude" => 39.9288,
              "region_name" => "Beijing",
           "continent_code" => "AS",
                       "ip" => "223.72.85.86",
             "country_name" => "China"
        },
           "message" => "223.72.85.86 GET /index.html 15824 200 \"welcome to yunjisuan\"",
        "@timestamp" => 2018-08-21T15:45:06.179Z,
           "content" => "welcome to yunjisuan",
            "client" => "223.72.85.86",
          "@version" => "1",
              "host" => "Logstash-Kibana",
            "method" => "GET",
             "bytes" => "15824",
               "num" => "200",
           "request" => "/index.html"
    }
    119.147.146.189 GET /index.html 15824 200 《Mr.chen-2018-8-21》
    {
             "geoip" => {
            "country_code3" => "CN",
                "longitude" => 113.25,
              "region_code" => "GD",
            "country_code2" => "CN",
                 "location" => {
                "lon" => 113.25,
                "lat" => 23.1167
            },
                 "timezone" => "Asia/Shanghai",
                 "latitude" => 23.1167,
              "region_name" => "Guangdong",
           "continent_code" => "AS",
                       "ip" => "119.147.146.189",
             "country_name" => "China"
        },
           "message" => "119.147.146.189 GET /index.html 15824 200 《Mr.chen-2018-8-21》",
              "name" => "Mr.chen-2018-8-21",
        "@timestamp" => 2018-08-21T15:45:55.386Z,
            "client" => "119.147.146.189",
          "@version" => "1",
              "host" => "Logstash-Kibana",
            "method" => "GET",
             "bytes" => "15824",
               "num" => "200",
           "request" => "/index.html"
    }
     

    2.6 Logstash Output Plugins

     
    #ES example
    output {
        elasticsearch {
            hosts => "localhost:9200"                           #write the data to elasticsearch
            index => "logstash-mr_chen-admin-%{+YYYY.MM.dd}"    #using this index name
        }
    }
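The %{+YYYY.MM.dd} in the index name is a sprintf-style date reference that Logstash resolves from the event's @timestamp (in UTC), producing one index per day. The equivalent formatting in Python, with a made-up timestamp:

```python
# How %{+YYYY.MM.dd} resolves: Logstash substitutes the event's UTC
# @timestamp using Joda-style tokens; strftime matches for this pattern.
from datetime import datetime, timezone

event_timestamp = datetime(2018, 8, 21, 15, 45, 6, tzinfo=timezone.utc)
index = "logstash-mr_chen-admin-" + event_timestamp.strftime("%Y.%m.%d")
print(index)
```

Daily indexes like this are what make it easy to expire old log data: dropping a whole day is a single index deletion.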
     

    3. Enterprise Kibana Usage in Detail

    Hostname          IP address        Purpose
    ES1               192.168.200.191   elasticsearch-node1
    ES2               192.168.200.192   elasticsearch-node2
    ES3               192.168.200.193   elasticsearch-node3
    Logstash-Kibana   192.168.200.194   log visualization server
     

    3.1 An ELK Stack Configuration Case Study

     
    #Install kibana from the yum repository
    [root@Logstash-Kibana ~]# ll /etc/yum.repos.d/elastic.repo
    -rw-r--r-- 1 root root 215 Aug 19 22:07 /etc/yum.repos.d/elastic.repo
    [root@Logstash-Kibana ~]# yum -y install kibana
    #Edit the logstash configuration file
    [root@Logstash-Kibana ~]# vim /etc/logstash/conf.d/test.conf
    [root@Logstash-Kibana ~]# cat /etc/logstash/conf.d/test.conf
    input {
        file {
            path => ["/var/log/messages"]
            type => "system"              #add a type to the data
            tags => ["syslog","test"]     #add tags to the data
            start_position => "beginning"
        }
        file {
            path => ["/var/log/audit/audit.log"]
            type => "system"              #add a type to the data
            tags => ["auth","test"]       #add tags to the data
            start_position => "beginning"
        }
    }
    filter {
    }
    output {
        if [type] == "system" {
            if [tags][0] == "syslog" {    #conditionals let us write different logs to different indexes
                elasticsearch {
                    hosts => ["http://192.168.200.191:9200","http://192.168.200.192:9200","http://192.168.200.193:9200"]
                    index => "logstash-mr_chen-syslog-%{+YYYY.MM.dd}"
                }
                stdout { codec => rubydebug }
            }
            else if [tags][0] == "auth" {
                elasticsearch {
                    hosts => ["http://192.168.200.191:9200","http://192.168.200.192:9200","http://192.168.200.193:9200"]
                    index => "logstash-mr_chen-auth-%{+YYYY.MM.dd}"
                }
                stdout { codec => rubydebug }
            }
        }
    }
    #Edit the kibana configuration file
    [root@Logstash-Kibana kibana]# cat -n kibana.yml.bak | sed -n '7p;28p'
         7  #server.host: "localhost"
        28  #elasticsearch.url: "http://localhost:9200"
    [root@Logstash-Kibana kibana]# cat -n kibana.yml | sed -n '7p;28p'
         7  server.host: "0.0.0.0"
        28  elasticsearch.url: "http://192.168.200.191:9200"    #one ES master node is enough
    #Start the kibana process
    [root@Logstash-Kibana ~]# systemctl start kibana
    #Start logstash
    [root@Logstash-Kibana ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf

    Note: 
    If Elasticsearch has no indexes at all, Kibana has nothing to retrieve, 
    so start Logstash first and write some data into Elasticsearch.

    Open Kibana in a browser:

    http://192.168.200.194:5601


    After creating the two index patterns, the indexes show up in Kibana.


     

    3.2 Common Kibana Query Expressions

    A quick hands-on look at Kibana's data search features.


     

    3.3 Kibana Access Authentication with Nginx

    The procedure is the same as in ELK (Part 1), so it is omitted here.

  • Original article: https://www.cnblogs.com/linyaonie/p/11231183.html