  • ELK Platform Setup

    System environment:
    
    System: Centos release 6.7 (Final)
    
    ElasticSearch: 2.1.0
    
    Logstash: 2.1.1
    
    Kibana: 4.3.0
    
    Java: openjdk version  "1.8.0_65"
    
    Note: Logstash depends on a Java runtime, and Logstash 1.5 and later requires at least Java 1.7, so the latest Java release is recommended. Since only the runtime is needed, the JRE alone would do, but I use the full JDK here; search for installation instructions yourself.
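A quick sanity check of the installed runtime (assumes java may or may not be on the PATH; a JRE is sufficient):

```shell
# Logstash 1.5+ needs Java 1.7 or later; print the installed version banner
if command -v java >/dev/null 2>&1; then
  java -version 2>&1 | head -n 1
else
  echo "java not found on PATH"
fi
```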
    
    
    Configure Elasticsearch (note: the environment listing above says 2.1.0, but the shell output below comes from a 2.3.4 install):
    
    tar -zxvf elasticsearch-2.1.0.tar.gz
    cd elasticsearch-2.1.0
    
    
    zjtest7-redis:/usr/local/elasticsearch-2.3.4# ./bin/plugin install mobz/elasticsearch-head
    -> Installing mobz/elasticsearch-head...
    Plugins directory [/usr/local/elasticsearch-2.3.4/plugins] does not exist. Creating...
    Trying https://github.com/mobz/elasticsearch-head/archive/master.zip ...
    Downloading .....................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................DONE
    Verifying https://github.com/mobz/elasticsearch-head/archive/master.zip checksums if available ...
    NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify)
    Installed head into /usr/local/elasticsearch-2.3.4/plugins/head
    
    
    Then edit the ES configuration file:
    
    vi config/elasticsearch.yml
    
    Add the following settings:
    cluster.name=es_cluster
    node.name=node0
    path.data=/tmp/elasticsearch/data
    path.logs=/tmp/elasticsearch/logs
    # current hostname or IP; mine is centos2
    network.host=192.168.32.80
    network.port=9200
    
    
    Startup error:
    zjtest7-redis:/usr/local/elasticsearch-2.3.4# ./bin/elasticsearch
    Exception in thread "main" SettingsException[Failed to load settings from [elasticsearch.yml]]; nested: ElasticsearchParseException[malformed, expected settings to start with 'object', instead was [VALUE_STRING]];
    Likely root cause: ElasticsearchParseException[malformed, expected settings to start with 'object', instead was [VALUE_STRING]]
    	at org.elasticsearch.common.settings.loader.XContentSettingsLoader.load(XContentSettingsLoader.java:65)
    	at org.elasticsearch.common.settings.loader.XContentSettingsLoader.load(XContentSettingsLoader.java:45)
    	at org.elasticsearch.common.settings.loader.YamlSettingsLoader.load(YamlSettingsLoader.java:46)
    	at org.elasticsearch.common.settings.Settings$Builder.loadFromStream(Settings.java:1080)
    	at org.elasticsearch.common.settings.Settings$Builder.loadFromPath(Settings.java:1067)
    	at org.elasticsearch.node.internal.InternalSettingsPreparer.prepareEnvironment(InternalSettingsPreparer.java:88)
    	at org.elasticsearch.common.cli.CliTool.<init>(CliTool.java:107)
    	at org.elasticsearch.common.cli.CliTool.<init>(CliTool.java:100)
    	at org.elasticsearch.bootstrap.BootstrapCLIParser.<init>(BootstrapCLIParser.java:48)
    	at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:226)
    	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
    Refer to the log for complete error details.
    
    
    
    The cause: the settings file is YAML, so each entry must be written as "key: value" (colon and space), not "key=value":
    cluster.name: es_cluster
    node.name: node01
    path.data: /tmp/elasticsearch/data
    path.logs: /tmp/elasticsearch/logs
    network.host: 192.168.32.80
    network.port: 9200
    
    
    zjtest7-redis:/usr/local/elasticsearch-2.3.4# ./bin/elasticsearch
    Exception in thread "main" java.lang.RuntimeException: don't run elasticsearch as root.
    	at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:93)
    	at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:144)
    	at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:270)
    	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
    Refer to the log for complete error details.
    
    
    Elasticsearch refuses to run as root, so create an elk user and group:
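The blog doesn't show the exact commands; one common way to do it, assuming the install path used above (run as root):

```shell
# Run as root: create the elk group/user (if absent) and hand over the
# install tree, then start elasticsearch as the elk user.
if [ "$(id -u)" -eq 0 ]; then
  getent group elk >/dev/null || groupadd elk
  id elk >/dev/null 2>&1 || useradd -g elk elk
  if [ -d /usr/local/elasticsearch-2.3.4 ]; then
    chown -R elk:elk /usr/local/elasticsearch-2.3.4
  fi
else
  echo "re-run this as root"
fi
```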
    
    
    [elk@zjtest7-redis local]$ cd elasticsearch-2.3.4
    [elk@zjtest7-redis elasticsearch-2.3.4]$ ./bin/elasticsearch
    [2016-07-22 14:03:22,797][WARN ][bootstrap                ] unable to install syscall filter: seccomp unavailable: requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in
    [2016-07-22 14:03:23,728][INFO ][node                     ] [node01] version[2.3.4], pid[1756], build[e455fd0/2016-06-30T11:24:31Z]
    [2016-07-22 14:03:23,728][INFO ][node                     ] [node01] initializing ...
    [2016-07-22 14:03:25,787][INFO ][plugins                  ] [node01] modules [reindex, lang-expression, lang-groovy], plugins [head], sites [head]
    [2016-07-22 14:03:25,878][INFO ][env                      ] [node01] using [1] data paths, mounts [[/ (/dev/mapper/vg00-lv_root)]], net usable_space [86.7gb], net total_space [96.2gb], spins? [possibly], types [ext4]
    [2016-07-22 14:03:25,882][INFO ][env                      ] [node01] heap size [1015.6mb], compressed ordinary object pointers [true]
    [2016-07-22 14:03:25,882][WARN ][env                      ] [node01] max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]
    [2016-07-22 14:03:33,299][INFO ][node                     ] [node01] initialized
    [2016-07-22 14:03:33,302][INFO ][node                     ] [node01] starting ...
    [2016-07-22 14:03:33,561][INFO ][transport                ] [node01] publish_address {192.168.32.80:9300}, bound_addresses {192.168.32.80:9300}
    [2016-07-22 14:03:33,576][INFO ][discovery                ] [node01] es_cluster/gYNVInstR16CujdeJ1T6YQ
    [2016-07-22 14:03:36,717][INFO ][cluster.service          ] [node01] new_master {node01}{gYNVInstR16CujdeJ1T6YQ}{192.168.32.80}{192.168.32.80:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
    [2016-07-22 14:03:36,759][INFO ][http                     ] [node01] publish_address {192.168.32.80:9200}, bound_addresses {192.168.32.80:9200}
    [2016-07-22 14:03:36,759][INFO ][node                     ] [node01] started
    [2016-07-22 14:03:36,867][INFO ][gateway                  ] [node01] recovered [0] indices into cluster_state
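Note the WARN about max file descriptors [4096] in the startup log. A common fix, assuming root access, is two lines in /etc/security/limits.conf; shown here without actually writing the file:

```shell
# Lines to append (as root) to /etc/security/limits.conf for the elk user;
# they take effect on the next login.
limits='elk soft nofile 65536
elk hard nofile 65536'
echo "$limits"
# Current limit for this shell, for comparison:
ulimit -n
```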
    
    
    
    
    As you can see, it uses port 9300 for transport with other nodes and port 9200 for accepting HTTP requests.
    
    Stop it with Ctrl+C. Alternatively, start ES as a background process:

    ./bin/elasticsearch &

    Then open the HTTP endpoint in a browser and you will see the following:
    
    
    http://192.168.32.80:9200/
    
    
    The response:
    
    {
      "name" : "node01",
      "cluster_name" : "es_cluster",
      "version" : {
        "number" : "2.3.4",
        "build_hash" : "e455fd0c13dceca8dbbdbb1665d068ae55dabe3f",
        "build_timestamp" : "2016-06-30T11:24:31Z",
        "build_snapshot" : false,
        "lucene_version" : "5.5.0"
      },
      "tagline" : "You Know, for Search"
    }
    
    
    The response shows the configured cluster_name and node name, along with the installed ES version and other details.
    
    The head plugin installed earlier is a browser-based tool for interacting with the ES cluster: it can show cluster state, browse the cluster's documents, and run searches and ordinary REST requests.
    
    Now you can also open http://192.168.32.80:9200/_plugin/head to view the ES cluster status:
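Besides the head UI, cluster state can be queried over the REST API (host as configured in elasticsearch.yml; adjust to your own):

```shell
# Query cluster health; the JSON includes a "status" of green/yellow/red.
# --max-time keeps the check from hanging when ES is unreachable.
ES=http://192.168.32.80:9200
curl -s --max-time 5 "$ES/_cluster/health?pretty" || echo "elasticsearch not reachable"
```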
    
    
    
    Logstash
    
    Running logstash without arguments prints its usage:
    
    
    [elk@zjtest7-redis logstash-2.3.4]$ ./bin/logstash
    No command given
    
    Usage: logstash <command> [command args]
    Run a command with the --help flag to see the arguments.
    For example: logstash agent --help
    
    Available commands:
      agent - runs the logstash agent
      version - emits version info about this logstash
    
    
    [elk@zjtest7-redis logstash-2.3.4]$ ./bin/logstash agent -f config/log4j_to_es.conf
    Settings: Default pipeline workers: 1
    log4j:WARN No appenders could be found for logger (org.apache.http.client.protocol.RequestAuthCache).
    log4j:WARN Please initialize the log4j system properly.
    log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
    Pipeline main started
    
    [elk@zjtest7-redis logstash-2.3.4]$ ./bin/logstash agent -f config/log4j_to_es.conf &
    
    
    
    
    Input and output of logs; the configuration is as follows:
    
    input {
            file {
                    type => "nginx_access"
                    path => ["/usr/share/nginx/logs/test.access.log"]
            }
    }
    output {
            redis {
                    host => "localhost"
                    data_type => "list"
                    key => "logstash:redis"
            }
    }
    
    
    
    
    
    output {
      # For detail config for elasticsearch as output, 
      # See: https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html
      elasticsearch {
        action => "index"          #The operation on ES
        hosts  => "192.168.32.80:9200"   #ElasticSearch host, can be array.
        index  => "applog"         #The index to write data to.
      }
    }
    
    
    
    How do logstash, elasticsearch, and kibana fit together for nginx log analysis? First, the architecture: nginx writes a log file, recording the status and details of every request. Second, we need a queue, and redis's list type serves well as one. Finally, elasticsearch handles the analysis and querying.
    
    What we want is a distributed log collection and analysis system. Logstash has two roles: agent and indexer. An agent runs on each web machine and continuously reads nginx's log file; whenever it reads new log entries, it ships them over the network to a redis queue. Several logstash indexers receive and parse the unprocessed entries from that queue, then store the results in elasticsearch for search and analysis. Kibana then presents the logs in a unified web interface.
    
    
    Start elasticsearch:

    /usr/local/elasticsearch-2.3.4# ./bin/elasticsearch
    
    
    [elk@zjtest7-redis logstash-2.3.4]$ ./bin/logstash agent -f config/log4j_to_es.conf
    
    
    
    logstash collects logs and distributes them to the elastic cluster,

    elasticsearch indexes the data,

    kibana provides structured query and display, and redis serves as the buffer queue.
    
    
    
    
    Installation:

    logstash: recent releases run straight from the download; just add the configuration files agent.conf and indexer.conf. The best route is the samples on the official site, which work as-is.
    
    Redis is running on port 6379:

    /usr/local/redis/bin/redis-server *:6379
    
    
    
    input {
            file {
                    type => "nginx_access"
                    path => ["/usr/local/nginx/logs/test.access.log"]
            }
    }
    output {
            redis {
                    host => "localhost"
                    data_type => "list"
                    key => "logstash:redis"
                    port => 6379
                    password => "1234567"
            }
    }
    
    zjtest7-redis:/usr/local/logstash-2.3.4# ./bin/logstash -f /usr/local/logstash-2.3.4/config/logstash_agent.conf 
    Settings: Default pipeline workers: 1
    Pipeline main started
    
    
    
    Nginx log format:
    
    log_format logstash '$http_host $server_addr $remote_addr [$time_local] "$request" '
                        '$request_body $status $body_bytes_sent "$http_referer" "$http_user_agent" '
                        '$request_time $upstream_response_time';
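A log_format definition does nothing on its own; an access_log directive has to reference it by name. A sketch, using the log path from the agent config above:

```nginx
http {
    log_format logstash '$http_host $server_addr $remote_addr [$time_local] "$request" '
                        '$request_body $status $body_bytes_sent "$http_referer" "$http_user_agent" '
                        '$request_time $upstream_response_time';

    server {
        listen 80;
        # write requests in the logstash format to the file the agent tails
        access_log /usr/local/nginx/logs/test.access.log logstash;
    }
}
```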
    
    
    
    
    
    Check the redis queue:
    
    127.0.0.1:6379> llen  "logstash:redis"
    (integer) 328
    
    
    
    At this point the data has been stored in the redis queue.
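A quick end-to-end check: append a line to the log file the agent tails and watch the redis list grow. Paths and password come from the configs above; the guards make this safe to run on a box where they don't exist:

```shell
LOG=/usr/local/nginx/logs/test.access.log
# Append a fake entry only if the log exists and is writable
if [ -w "$LOG" ]; then
  echo "test.example.com 127.0.0.1 127.0.0.1 [$(date)] \"GET / HTTP/1.1\" - 200 0 \"-\" \"check\" 0.001 -" >> "$LOG"
fi
# Queue length should increase once the agent ships the new line
if command -v redis-cli >/dev/null 2>&1; then
  redis-cli -a 1234567 llen logstash:redis
fi
```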
    
    
    logstash writes the data into the redis queue (agent) and reads it back out (indexer):
    
    
    zjtest7-redis:/usr/local/logstash-2.3.4# ./bin/logstash -f /usr/local/logstash-2.3.4/config/logstash_agent.conf    ### ships data into redis
    
    
    zjtest7-redis:/usr/local/logstash-2.3.4# ./bin/logstash -f /usr/local/logstash-2.3.4/config/logstash_indexer.conf
    
    
    Logstash has two roles, agent and indexer: the agent monitors and filters logs,

    and the indexer collects them and hands them to ElasticSearch for search.
    
    
    
    
    Shipper: sends events to Logstash; usually a remote agent only needs to run this component.

    Broker and Indexer: receive and index events.

    Search and Storage: allow events to be searched and stored.

    Web Interface: a web-based display layer.

    192.168.32.162 is the log-viewing server; it needs the four applications redis, elasticsearch, logstash, and kibana installed.
    
    
    
    
    
    192.168.32.161 runs the nginx application; this time we only collect its logs for analysis.
    
    
    
    
    
    
    
    
    
    
    
    
    This configuration file reads the nginx log and writes it to redis:
    
    zjtest7-redis:/usr/local/logstash-2.3.4/config# cat logstash_agent.conf 
    input {
            file {
                    type => "nginx_access"
                    path => ["/usr/local/nginx/logs/test.access.log"]
            }
    }
    output {
            redis {
                    host => "localhost"
                    data_type => "list"
                    key => "logstash:redis"
                    port => "6379"
                    password => "1234567"
            }
    }
    
    
    
    This configuration file reads data from the local redis and hands it over to elasticsearch:
    zjtest7-redis:/usr/local/logstash-2.3.4/config# cat logstash_indexer.conf 
    input {
            redis {
                    host => "localhost"
                    data_type => "list"
                    key => "logstash:redis"
                    type => "redis-input"
                    password => "1234567"
                    port =>"6379"
            }
    }
    output {
            elasticsearch {
                    # Logstash 2.x: the elasticsearch output takes "hosts";
                    # the old embedded/protocol/host/port options were removed.
                    hosts => ["localhost:9200"]
                    index => "access-%{+YYYY.MM.dd}"
                    document_type => "access"
            }
            stdout {
                    codec => rubydebug
            }
    }
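Once the indexer is running, the daily access-* indices it creates can be listed (ES host as in the indexer config; adjust to your own):

```shell
# _cat/indices shows index name, doc count and size; ?v adds a header row.
ES=http://localhost:9200
curl -s --max-time 5 "$ES/_cat/indices/access-*?v" || echo "elasticsearch not reachable"
```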
    
    
    
    Kibana
    
    Configure Kibana:
    
    tar -zxvf kibana-4.3.0-linux-x86.tar.gz
    cd kibana-4.3.0-linux-x86
    vi config/kibana.yml
    
    server.port: 5601
    server.host: "192.168.32.80"
    elasticsearch.url: "http://192.168.32.80:9200"
    kibana.index: ".kibana"
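With the config in place, start Kibana and browse to port 5601. The install path below is an assumption (the tarball was extracted in the current directory above); the guard keeps this runnable elsewhere:

```shell
KIBANA_HOME=/usr/local/kibana-4.3.0-linux-x86   # assumed install location
# Start Kibana in the background if the binary is present
if [ -x "$KIBANA_HOME/bin/kibana" ]; then
  "$KIBANA_HOME/bin/kibana" &
fi
echo "Kibana UI: http://192.168.32.80:5601/"
```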

  • Original: https://www.cnblogs.com/hzcya1995/p/13350476.html