  • 001_Building an ELK environment with docker-compose

         I plan to give my colleagues a walkthrough of the ELK stack, and building and configuring an ELK environment by hand is too much trouble, so I turned to Docker. Docker officially provides the docker-compose orchestration tool, which can bring up an ELK cluster with a single command. Let's get started.

    1. Clone docker-elk and bring up the stack

    https://github.com/deviantony/docker-elk

    $ git clone https://github.com/deviantony/docker-elk.git

    $ cd /006_xxxallproject/005_docker/001_elk/docker-elk

    $ docker-compose -f docker-compose.yml up
    
    Creating network "dockerelk_elk" with driver "bridge"
    Building elasticsearch
    Step 1/1 : FROM docker.elastic.co/elasticsearch/elasticsearch:5.3.0
    5.3.0: Pulling from elasticsearch/elasticsearch
    3690ec4760f9: Pull complete
    
    .......
    
    90f6e7841041: Pull complete
    Digest: sha256:56ac964338bc74f3874d63271433f6555648d55405a89c96f56a18dee48456eb
    Status: Downloaded newer image for docker.elastic.co/elasticsearch/elasticsearch:5.3.0
    ---> ccec59a7dd84
    Successfully built ccec59a7dd84
    WARNING: Image for service elasticsearch was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
    Building logstash
    Step 1/1 : FROM docker.elastic.co/logstash/logstash:5.3.0
    5.3.0: Pulling from logstash/logstash
    fec6b243e075: Pull complete
    
    ......
    
    0b2611cd5a87: Pull complete
    Digest: sha256:4e0255387c9c2bfcd2442343d3455417598faa1f2133b44276c4a2222f83a39d
    Status: Downloaded newer image for docker.elastic.co/logstash/logstash:5.3.0
    ---> b583a99a08a0
    Successfully built b583a99a08a0
    WARNING: Image for service logstash was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
    Building kibana
    Step 1/1 : FROM docker.elastic.co/kibana/kibana:5.3.0
    5.3.0: Pulling from kibana/kibana
    dd5dd61c1a5a: Pull complete
    
    .....
    
    464e6d8125d9: Pull complete
    Digest: sha256:ddeab1a2a3347ebf4ee59e0a3a209b6e48105e1e881419606378b5da1c4d0bf6
    Status: Downloaded newer image for docker.elastic.co/kibana/kibana:5.3.0
    ---> a21e19753b0c
    Successfully built a21e19753b0c
    WARNING: Image for service kibana was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
    Creating dockerelk_elasticsearch_1
    Creating dockerelk_logstash_1
    Creating dockerelk_kibana_1
    Attaching to dockerelk_elasticsearch_1, dockerelk_logstash_1, dockerelk_kibana_1
    elasticsearch_1 | [2017-04-19T10:28:31,280][INFO ][o.e.n.Node ] [] initializing ...
    elasticsearch_1 | [2017-04-19T10:28:31,436][INFO ][o.e.e.NodeEnvironment ] [S8f8ukX] using [1] data paths, mounts [[/ (none)]], net usable_space [54.3gb], net total_space [59gb], spins? [possibly], types [aufs]
    elasticsearch_1 | [2017-04-19T10:28:31,437][INFO ][o.e.e.NodeEnvironment ] [S8f8ukX] heap size [247.5mb], compressed ordinary object pointers [true]
    elasticsearch_1 | [2017-04-19T10:28:31,441][INFO ][o.e.n.Node ] node name [S8f8ukX] derived from node ID [S8f8ukXtTXuG806Cn0JqHw]; set [node.name] to override
    elasticsearch_1 | [2017-04-19T10:28:31,441][INFO ][o.e.n.Node ] version[5.3.0], pid[1], build[3adb13b/2017-03-23T03:31:50.652Z], OS[Linux/4.9.13-moby/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_92-internal/25.92-b14]
    elasticsearch_1 | [2017-04-19T10:28:35,236][INFO ][o.e.p.PluginsService ] [S8f8ukX] loaded module [aggs-matrix-stats]
    elasticsearch_1 | [2017-04-19T10:28:35,241][INFO ][o.e.p.PluginsService ] [S8f8ukX] loaded module [ingest-common]
    elasticsearch_1 | [2017-04-19T10:28:35,241][INFO ][o.e.p.PluginsService ] [S8f8ukX] loaded module [lang-expression]
    elasticsearch_1 | [2017-04-19T10:28:35,241][INFO ][o.e.p.PluginsService ] [S8f8ukX] loaded module [lang-groovy]
    elasticsearch_1 | [2017-04-19T10:28:35,241][INFO ][o.e.p.PluginsService ] [S8f8ukX] loaded module [lang-mustache]
    elasticsearch_1 | [2017-04-19T10:28:35,241][INFO ][o.e.p.PluginsService ] [S8f8ukX] loaded module [lang-painless]
    elasticsearch_1 | [2017-04-19T10:28:35,242][INFO ][o.e.p.PluginsService ] [S8f8ukX] loaded module [percolator]
    elasticsearch_1 | [2017-04-19T10:28:35,242][INFO ][o.e.p.PluginsService ] [S8f8ukX] loaded module [reindex]
    elasticsearch_1 | [2017-04-19T10:28:35,242][INFO ][o.e.p.PluginsService ] [S8f8ukX] loaded module [transport-netty3]
    elasticsearch_1 | [2017-04-19T10:28:35,242][INFO ][o.e.p.PluginsService ] [S8f8ukX] loaded module [transport-netty4]
    elasticsearch_1 | [2017-04-19T10:28:35,243][INFO ][o.e.p.PluginsService ] [S8f8ukX] loaded plugin [x-pack]
    kibana_1 | {"type":"log","@timestamp":"2017-04-19T10:28:35Z","tags":["info","optimize"],"pid":6,"message":"Optimizing and caching bundles for kibana, timelion and status_page. This may take a few minutes"}
    elasticsearch_1 | [2017-04-19T10:28:39,739][INFO ][o.e.n.Node ] initialized
    elasticsearch_1 | [2017-04-19T10:28:39,740][INFO ][o.e.n.Node ] [S8f8ukX] starting ...
    elasticsearch_1 | [2017-04-19T10:28:39,962][WARN ][i.n.u.i.MacAddressUtil ] Failed to find a usable hardware address from the network interfaces; using random bytes: bd:73:08:b5:53:64:8d:4d
    elasticsearch_1 | [2017-04-19T10:28:40,068][INFO ][o.e.t.TransportService ] [S8f8ukX] publish_address {172.18.0.2:9300}, bound_addresses {[::]:9300}
    elasticsearch_1 | [2017-04-19T10:28:40,078][INFO ][o.e.b.BootstrapChecks ] [S8f8ukX] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
    elasticsearch_1 | [2017-04-19T10:28:43,184][INFO ][o.e.c.s.ClusterService ] [S8f8ukX] new_master {S8f8ukX}{S8f8ukXtTXuG806Cn0JqHw}{Y-MCKyqBSAWOcKkGBIlaZg}{172.18.0.2}{172.18.0.2:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
    elasticsearch_1 | [2017-04-19T10:28:43,235][INFO ][o.e.h.n.Netty4HttpServerTransport] [S8f8ukX] publish_address {172.18.0.2:9200}, bound_addresses {[::]:9200}
    elasticsearch_1 | [2017-04-19T10:28:43,245][INFO ][o.e.n.Node ] [S8f8ukX] started
    elasticsearch_1 | [2017-04-19T10:28:43,265][INFO ][o.e.g.GatewayService ] [S8f8ukX] recovered [0] indices into cluster_state
    elasticsearch_1 | [2017-04-19T10:28:43,330][INFO ][o.e.l.LicenseService ] [S8f8ukX] license [ed2218ec-bbd8-46b6-8151-bb65198904f0] mode [trial] - valid
    logstash_1 | Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties
    logstash_1 | [2017-04-19T10:28:43,916][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
    logstash_1 | [2017-04-19T10:28:43,935][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"ea7ee9f6-294d-422e-93db-0c3edff2f0da", :path=>"/usr/share/logstash/data/uuid"}
    logstash_1 | [2017-04-19T10:28:44,434][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
    logstash_1 | [2017-04-19T10:28:44,435][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elasticsearch:9200/, :path=>"/"}
    logstash_1 | [2017-04-19T10:28:44,685][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x6c579f03 URL:http://elasticsearch:9200/>}
    logstash_1 | [2017-04-19T10:28:44,688][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
    logstash_1 | [2017-04-19T10:28:44,782][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
    logstash_1 | [2017-04-19T10:28:44,797][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
    elasticsearch_1 | [2017-04-19T10:28:44,936][WARN ][o.e.d.i.m.TypeParsers ] field [include_in_all] is deprecated, as [_all] is deprecated, and will be disallowed in 6.0, use [copy_to] instead.
    elasticsearch_1 | [2017-04-19T10:28:44,958][WARN ][o.e.d.i.m.TypeParsers ] field [include_in_all] is deprecated, as [_all] is deprecated, and will be disallowed in 6.0, use [copy_to] instead.
    logstash_1 | [2017-04-19T10:28:45,045][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x4c36694f URL://elasticsearch:9200>]}
    logstash_1 | [2017-04-19T10:28:45,047][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
    logstash_1 | [2017-04-19T10:28:45,075][INFO ][logstash.inputs.tcp ] Starting tcp input listener {:address=>"0.0.0.0:5000"}
    logstash_1 | [2017-04-19T10:28:45,084][INFO ][logstash.pipeline ] Pipeline main started
    logstash_1 | [2017-04-19T10:28:45,127][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
    kibana_1 | {"type":"log","@timestamp":"2017-04-19T10:29:52Z","tags":["info","optimize"],"pid":6,"message":"Optimization of bundles for kibana, timelion and status_page complete in 76.28 seconds"}
    kibana_1 | {"type":"log","@timestamp":"2017-04-19T10:29:52Z","tags":["status","plugin:kibana@5.3.0","info"],"pid":6,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    kibana_1 | {"type":"log","@timestamp":"2017-04-19T10:29:52Z","tags":["status","plugin:elasticsearch@5.3.0","info"],"pid":6,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
    kibana_1 | {"type":"log","@timestamp":"2017-04-19T10:29:52Z","tags":["status","plugin:xpack_main@5.3.0","info"],"pid":6,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
    kibana_1 | {"type":"log","@timestamp":"2017-04-19T10:29:52Z","tags":["status","plugin:searchprofiler@5.3.0","info"],"pid":6,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
    kibana_1 | {"type":"log","@timestamp":"2017-04-19T10:29:52Z","tags":["status","plugin:tilemap@5.3.0","info"],"pid":6,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
    kibana_1 | {"type":"log","@timestamp":"2017-04-19T10:29:52Z","tags":["status","plugin:console@5.3.0","info"],"pid":6,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    kibana_1 | {"type":"log","@timestamp":"2017-04-19T10:29:52Z","tags":["status","plugin:timelion@5.3.0","info"],"pid":6,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    kibana_1 | {"type":"log","@timestamp":"2017-04-19T10:29:52Z","tags":["listening","info"],"pid":6,"message":"Server running at http://0:5601"}
    kibana_1 | {"type":"log","@timestamp":"2017-04-19T10:29:52Z","tags":["status","ui settings","info"],"pid":6,"state":"yellow","message":"Status changed from uninitialized to yellow - Elasticsearch plugin is yellow","prevState":"uninitialized","prevMsg":"uninitialized"}
    kibana_1 | {"type":"log","@timestamp":"2017-04-19T10:29:57Z","tags":["status","plugin:xpack_main@5.3.0","info"],"pid":6,"state":"yellow","message":"Status changed from yellow to yellow - No existing Kibana index found","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
    kibana_1 | {"type":"log","@timestamp":"2017-04-19T10:29:57Z","tags":["status","plugin:searchprofiler@5.3.0","info"],"pid":6,"state":"yellow","message":"Status changed from yellow to yellow - No existing Kibana index found","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
    kibana_1 | {"type":"log","@timestamp":"2017-04-19T10:29:57Z","tags":["status","plugin:tilemap@5.3.0","info"],"pid":6,"state":"yellow","message":"Status changed from yellow to yellow - No existing Kibana index found","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
    kibana_1 | {"type":"log","@timestamp":"2017-04-19T10:29:57Z","tags":["status","plugin:elasticsearch@5.3.0","info"],"pid":6,"state":"yellow","message":"Status changed from yellow to yellow - No existing Kibana index found","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
    elasticsearch_1 | [2017-04-19T10:29:57,552][WARN ][o.e.d.i.m.StringFieldMapper$TypeParser] The [string] field is deprecated, please use [text] or [keyword] instead on [buildNum]
    elasticsearch_1 | [2017-04-19T10:29:57,582][INFO ][o.e.c.m.MetaDataCreateIndexService] [S8f8ukX] [.kibana] creating index, cause [api], templates [], shards [1]/[1], mappings [server, config]
    kibana_1 | {"type":"log","@timestamp":"2017-04-19T10:29:58Z","tags":["status","plugin:elasticsearch@5.3.0","info"],"pid":6,"state":"green","message":"Status changed from yellow to green - Kibana index ready","prevState":"yellow","prevMsg":"No existing Kibana index found"}
    kibana_1 | {"type":"log","@timestamp":"2017-04-19T10:29:58Z","tags":["status","ui settings","info"],"pid":6,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Elasticsearch plugin is yellow"}
    kibana_1 | {"type":"log","@timestamp":"2017-04-19T10:29:58Z","tags":["license","info","xpack"],"pid":6,"message":"Imported license information from Elasticsearch: mode: trial | status: active | expiry date: 2017-05-19T10:28:43+00:00"}
    kibana_1 | {"type":"log","@timestamp":"2017-04-19T10:29:58Z","tags":["status","plugin:xpack_main@5.3.0","info"],"pid":6,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"No existing Kibana index found"}
    kibana_1 | {"type":"log","@timestamp":"2017-04-19T10:29:58Z","tags":["status","plugin:searchprofiler@5.3.0","info"],"pid":6,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"No existing Kibana index found"}
    kibana_1 | {"type":"log","@timestamp":"2017-04-19T10:29:58Z","tags":["status","plugin:tilemap@5.3.0","info"],"pid":6,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"No existing Kibana index found"}
    

    2. The running containers

    $ docker ps -a
    CONTAINER ID   IMAGE                     COMMAND                  CREATED             STATUS             PORTS                                            NAMES
    7e261f00be84   dockerelk_kibana          "/bin/sh -c /usr/l..."   About an hour ago   Up About an hour   0.0.0.0:5601->5601/tcp                           dockerelk_kibana_1
    7d6a850abc97   dockerelk_logstash        "/usr/local/bin/do..."   About an hour ago   Up About an hour   0.0.0.0:5000->5000/tcp                           dockerelk_logstash_1
    a8948cfdf7b4   dockerelk_elasticsearch   "/bin/bash bin/es-..."   About an hour ago   Up About an hour   0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp   dockerelk_elasticsearch_1
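
    All three containers publish their ports on the host, so a quick check that everything is reachable (no authentication is needed because X-Pack security is disabled in the compose file shown below):

    $ curl http://localhost:9200                 # Elasticsearch REST API
    $ curl http://localhost:9200/_cat/health?v   # cluster health as a readable table

    Kibana is then available in the browser at http://localhost:5601.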
    

    3. The docker-compose.yml

    $ cat docker-compose.yml

    version: '2'
    
    services:
    
      elasticsearch:
        build: elasticsearch/
        ports:
          - "9200:9200"
          - "9300:9300"
        environment:
          ES_JAVA_OPTS: "-Xmx256m -Xms256m"
          # disable X-Pack
          # see https://www.elastic.co/guide/en/x-pack/current/xpack-settings.html
          #     https://www.elastic.co/guide/en/x-pack/current/installing-xpack.html#xpack-enabling
          xpack.security.enabled: "false"
          xpack.monitoring.enabled: "false"
          xpack.graph.enabled: "false"
          xpack.watcher.enabled: "false"
        networks:
          - elk
    
      logstash:
        build: logstash/
        volumes:
          - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
          - ./logstash/pipeline:/usr/share/logstash/pipeline
        ports:
          - "5000:5000"
        environment:
          LS_JAVA_OPTS: "-Xmx256m -Xms256m"
        networks:
          - elk
        depends_on:
          - elasticsearch
    
      kibana:
        build: kibana/
        volumes:
          - ./kibana/config/:/usr/share/kibana/config
        ports:
          - "5601:5601"
        networks:
          - elk
        depends_on:
          - elasticsearch
    
    networks:
    
      elk:
        driver: bridge
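
    The 5000:5000 mapping above is the Logstash TCP input that appeared in the startup log, so a quick way to push a test event through the whole pipeline is to pipe a line into that port (assuming netcat is available on the host):

    $ echo 'hello from docker-elk' | nc localhost 5000

    The event should show up in Elasticsearch under a logstash-* index and, once that index pattern is configured, in Kibana.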
    

    4. To save resources on my machine, stop the containers for now

    $ docker stop CONTAINER-NAMES
    $ docker start CONTAINER-NAMES
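
    For this stack the container names are the ones shown by docker ps above, for example:

    $ docker stop dockerelk_kibana_1 dockerelk_logstash_1 dockerelk_elasticsearch_1
    $ docker start dockerelk_elasticsearch_1 dockerelk_logstash_1 dockerelk_kibana_1

    Alternatively, docker-compose stop and docker-compose start run from the docker-elk directory act on all three services at once.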

    ----------------------------------------------------------------------------------------------------------------------------------------------------------------

    1. The ELK stack iterates quickly: the plugin installation method used below no longer works on 5.x, so I fell back to an earlier release.

    Documentation ==> http://elk-docker.readthedocs.io/#prerequisites
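
    The prerequisites section linked above points out that Elasticsearch needs the kernel setting vm.max_map_count raised on the Docker host; it cannot be changed from inside the container, which is why the startup log below shows a "Read-only file system" warning for it. The value the documentation suggests is 262144:

    $ sudo sysctl -w vm.max_map_count=262144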

    $ sudo docker pull sebp/elk:es235_l234_k454
    $ sudo docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -it --name elk sebp/elk:es235_l234_k454

    2. With ELK installed, let's look at how to put it to use. Follow me!
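
    The plugin commands below are run inside the running container, hence the root@a122726854cc prompt. One way to get a shell in there, given that the container was started with --name elk above:

    $ sudo docker exec -it elk /bin/bash
    root@a122726854cc:/# cd /usr/share/elasticsearch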

    (1) Install the elasticsearch-head plugin        GitHub ==> https://github.com/mobz/elasticsearch-head

    root@a122726854cc:/usr/share/elasticsearch# bin/plugin install mobz/elasticsearch-head

    (2) Install the lmenezes/elasticsearch-kopf plugin    GitHub ==> https://github.com/lmenezes/elasticsearch-kopf

    root@a122726854cc:/usr/share/elasticsearch# bin/plugin install lmenezes/elasticsearch-kopf/2.0
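
    head and kopf are site plugins, and on this Elasticsearch 2.x image they should be served by Elasticsearch itself once installed (restart Elasticsearch inside the container if they do not appear):

    http://localhost:9200/_plugin/head/
    http://localhost:9200/_plugin/kopf/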
    

    Appendix: ELK startup output

    $ sudo docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -it --name elk sebp/elk:es235_l234_k454
     * Starting periodic command scheduler cron                                                                                                          [ OK ]
     * Starting Elasticsearch Server                                                                                                                            sysctl: setting key "vm.max_map_count": Read-only file system
                                                                                                                                                         [ OK ]
    waiting for Elasticsearch to be up (1/30)
    waiting for Elasticsearch to be up (2/30)
    waiting for Elasticsearch to be up (3/30)
    waiting for Elasticsearch to be up (4/30)
    waiting for Elasticsearch to be up (5/30)
    logstash started.
     * Starting Kibana4                                                                                                                                  [ OK ]
    ==> /var/log/elasticsearch/elasticsearch.log <==
    [2017-04-22 21:56:07,164][INFO ][env                      ] [Karma] heap size [990.7mb], compressed ordinary object pointers [true]
    [2017-04-22 21:56:07,164][WARN ][env                      ] [Karma] max file descriptors [65535] for elasticsearch process likely too low, consider increasing to at least [65536]
    [2017-04-22 21:56:08,398][INFO ][node                     ] [Karma] initialized
    [2017-04-22 21:56:08,399][INFO ][node                     ] [Karma] starting ...
    [2017-04-22 21:56:08,475][INFO ][transport                ] [Karma] publish_address {172.17.0.2:9300}, bound_addresses {[::]:9300}
    [2017-04-22 21:56:08,479][INFO ][discovery                ] [Karma] elasticsearch/0pG46Uz1SKSMqq3y3zbvYw
    [2017-04-22 21:56:11,546][INFO ][cluster.service          ] [Karma] new_master {Karma}{0pG46Uz1SKSMqq3y3zbvYw}{172.17.0.2}{172.17.0.2:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
    [2017-04-22 21:56:11,557][INFO ][http                     ] [Karma] publish_address {172.17.0.2:9200}, bound_addresses {[::]:9200}
    [2017-04-22 21:56:11,557][INFO ][node                     ] [Karma] started
    [2017-04-22 21:56:11,627][INFO ][gateway                  ] [Karma] recovered [0] indices into cluster_state
    
    ==> /var/log/logstash/logstash.log <==
    
    ==> /var/log/kibana/kibana4.log <==
    {"type":"log","@timestamp":"2017-04-22T21:56:14+00:00","tags":["status","plugin:kibana","info"],"pid":192,"name":"plugin:kibana","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2017-04-22T21:56:14+00:00","tags":["status","plugin:elasticsearch","info"],"pid":192,"name":"plugin:elasticsearch","state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2017-04-22T21:56:14+00:00","tags":["status","plugin:kbn_vislib_vis_types","info"],"pid":192,"name":"plugin:kbn_vislib_vis_types","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2017-04-22T21:56:14+00:00","tags":["status","plugin:markdown_vis","info"],"pid":192,"name":"plugin:markdown_vis","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2017-04-22T21:56:14+00:00","tags":["status","plugin:metric_vis","info"],"pid":192,"name":"plugin:metric_vis","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2017-04-22T21:56:14+00:00","tags":["status","plugin:spyModes","info"],"pid":192,"name":"plugin:spyModes","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2017-04-22T21:56:14+00:00","tags":["status","plugin:statusPage","info"],"pid":192,"name":"plugin:statusPage","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2017-04-22T21:56:14+00:00","tags":["status","plugin:table_vis","info"],"pid":192,"name":"plugin:table_vis","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2017-04-22T21:56:14+00:00","tags":["listening","info"],"pid":192,"message":"Server running at http://0.0.0.0:5601"}
    
    ==> /var/log/logstash/logstash.log <==
    {:timestamp=>"2017-04-22T21:56:18.930000+0000", :message=>"Pipeline main started"}
    
    ==> /var/log/elasticsearch/elasticsearch.log <==
    [2017-04-22 21:56:19,517][INFO ][cluster.metadata         ] [Karma] [.kibana] creating index, cause [api], templates [], shards [1]/[1], mappings [config]
    [2017-04-22 21:56:19,755][INFO ][cluster.routing.allocation] [Karma] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
    
    ==> /var/log/kibana/kibana4.log <==
    {"type":"log","@timestamp":"2017-04-22T21:56:19+00:00","tags":["status","plugin:elasticsearch","info"],"pid":192,"name":"plugin:elasticsearch","state":"yellow","message":"Status changed from yellow to yellow - No existing Kibana index found","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
    {"type":"log","@timestamp":"2017-04-22T21:56:22+00:00","tags":["status","plugin:elasticsearch","info"],"pid":192,"name":"plugin:elasticsearch","state":"green","message":"Status changed from yellow to green - Kibana index ready","prevState":"yellow","prevMsg":"No existing Kibana index found"}
    

      
