Kibana > Getting Started > Building your own dashboard

    Ready to load some data and build a dashboard? This tutorial shows you how to:

    • Load a data set into Elasticsearch
    • Define an index pattern
    • Discover and explore the data
    • Visualize the data
    • Add visualizations to a dashboard
    • Inspect the data behind a visualization 

    Loading sample data

    This tutorial requires three data sets:

    • The complete works of William Shakespeare, suitably parsed into fields. Download shakespeare.json.
    • A set of fictitious accounts with randomly generated data. Download accounts.zip.
    • A set of randomly generated log files. Download logs.jsonl.gz.

    Two of the data sets are compressed. To extract the files, use these commands:

    unzip accounts.zip
    gunzip logs.jsonl.gz

    Structure of the data sets

    The Shakespeare data set has this structure:

    {
        "line_id": INT,
        "play_name": "String",
        "speech_number": INT,
        "line_number": "String",
        "speaker": "String",
        "text_entry": "String",
    }

    The accounts data set is structured as follows:

    {
        "account_number": INT,
        "balance": INT,
        "firstname": "String",
        "lastname": "String",
        "age": INT,
        "gender": "M or F",
        "address": "String",
        "employer": "String",
        "email": "String",
        "city": "String",
        "state": "String"
    }
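    As an illustration, a record that fits this schema can be built and serialized to a single JSON line, which is how each document appears in accounts.json (the field values below are invented for illustration, not taken from accounts.zip):

```python
import json

# A hypothetical account record matching the schema above;
# every value here is made up for illustration.
account = {
    "account_number": 1,
    "balance": 39225,
    "firstname": "Amber",
    "lastname": "Duke",
    "age": 32,
    "gender": "M",
    "address": "880 Holmes Lane",
    "employer": "Pyrami",
    "email": "amberduke@pyrami.com",
    "city": "Brogan",
    "state": "IL",
}

# Serialize to one JSON line, as documents are stored in accounts.json.
line = json.dumps(account)
print(line)
```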

    The logs data set has dozens of different fields. Here are the notable fields for this tutorial:

    {
        "memory": INT,
        "geo.coordinates": "geo_point"
        "@timestamp": "date"
    }

    Set up mappings

    Before you load the Shakespeare and logs data sets, you must set up mappings for the fields.

    Mappings divide the documents in the index into logical groups and specify the characteristics of the fields.

    These characteristics include the searchability of the field and whether it’s tokenized, or broken up into separate words.

    In Kibana Dev Tools > Console, set up a mapping for the Shakespeare data set:

    PUT /shakespeare
    {
     "mappings": {
      "doc": {
       "properties": {
        "speaker": {"type": "keyword"},
        "play_name": {"type": "keyword"},
        "line_id": {"type": "integer"},
        "speech_number": {"type": "integer"}
       }
      }
     }
    }

    This mapping specifies field characteristics for the data set:

    • The speaker and play_name fields are keyword fields. These fields are not analyzed. The strings are treated as a single unit even if they contain multiple words.
    • The line_id and speech_number fields are integers.
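    The difference between a keyword field and an analyzed text field can be sketched with a toy analyzer in Python. This is a deliberate simplification of Elasticsearch's standard analyzer, not its actual implementation:

```python
def analyze_text(value: str) -> list[str]:
    # Rough stand-in for the standard analyzer: lowercase and split on whitespace.
    return value.lower().split()

def analyze_keyword(value: str) -> list[str]:
    # A keyword field is indexed as a single, unmodified term.
    return [value]

play = "Henry IV"
print(analyze_keyword(play))  # one exact term: ['Henry IV']
print(analyze_text(play))     # separate tokens: ['henry', 'iv']
```

    A search for "henry" would match the analyzed version but not the keyword version, which only matches the exact string "Henry IV".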

    Response:

    {
        "acknowledged" : true,
        "shards_acknowledged" : true,
        "index" : "shakespeare"
    }

    The logs data set requires a mapping to label the latitude and longitude pairs as geographic locations by applying the geo_point type.

    PUT /logstash-2015.05.18
    {
      "mappings": {
        "log": {
          "properties": {
            "geo": {
              "properties": {
                "coordinates": {
                  "type": "geo_point"
                }
              }
            }
          }
        }
      }
    }

    Response:

    {
        "acknowledged" : true,
        "shards_acknowledged" : true,
        "index" : "logstash-2015.05.18"
    }

    The accounts data set doesn’t require any mappings.

    Query all of the current indices:

    GET /_cat/indices?v

    The newly created indices logstash-2015.05.18, logstash-2015.05.19, and logstash-2015.05.20 each have a docs.count of 0.

    The bank index was loaded earlier while learning Elasticsearch.

    health status index               uuid                   pri rep docs.count docs.deleted store.size pri.store.size
    yellow open   logstash-2015.05.18 dL2ZaIelR_uvKMnPYy_8Eg   5   1          0            0      1.2kb          1.2kb
    yellow open   logstash-2015.05.20 M1PWnqXLRgClt-iwqN4OUg   5   1          0            0      1.2kb          1.2kb
    yellow open   customer            p6H8gEOdQAWBuSN2HDEjZA   5   1          1            0      4.4kb          4.4kb
    yellow open   shakespeare         I8mqiFkkTdK9IlcarIZA4A   5   1          0            0      1.2kb          1.2kb
    yellow open   bank                l45mhl-7QNibqbmbi2Jmbw   5   1       1000            0    475.1kb        475.1kb
    green  open   .kibana_1           CUsQj9zkSCSC-XiDJgXYQQ   1   0          2            0      8.6kb          8.6kb
    yellow open   logstash-2015.05.19 14rDFdQFTQK-GNgDXtlmeQ   5   1          0            0      1.2kb          1.2kb

    Load the data sets

    At this point, you’re ready to use the Elasticsearch bulk API to load the data sets:

    curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/bank/account/_bulk?pretty' --data-binary @accounts.json
    curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/shakespeare/doc/_bulk?pretty' --data-binary @shakespeare_6.0.json
    curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/_bulk?pretty' --data-binary @logs.jsonl
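    The bulk API expects newline-delimited JSON (NDJSON): an action line followed by a source line for each document, with the body terminated by a final newline. A sketch of how such a body could be assembled in Python (the helper function is hypothetical; the index and type names follow this tutorial):

```python
import json

def build_bulk_body(docs, index, doc_type):
    # Pair each document with an "index" action line, NDJSON-style.
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index, "_type": doc_type}}))
        lines.append(json.dumps(doc))
    # The bulk API requires the body to end with a newline.
    return "\n".join(lines) + "\n"

body = build_bulk_body(
    [{"speaker": "KING HENRY IV", "line_id": 1}],
    index="shakespeare",
    doc_type="doc",
)
print(body)
```

    The pre-formatted files downloaded above (accounts.json, shakespeare_6.0.json, logs.jsonl) already contain these action/source line pairs, which is why they can be posted to _bulk as-is.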

    Or, for Windows users, in PowerShell:

    Invoke-RestMethod "http://localhost:9200/bank/account/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "accounts.json"
    Invoke-RestMethod "http://localhost:9200/shakespeare/doc/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "shakespeare_6.0.json"
    Invoke-RestMethod "http://localhost:9200/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "logs.jsonl"

    You can save these commands as a .ps1 script file and then run the script to perform the import.

    These commands might take some time to execute, depending on the available computing resources.

    Verify successful loading:

    Query all of the indices again:

    GET /_cat/indices?v

    Your output should look similar to this:

    health status index               uuid                   pri rep docs.count docs.deleted store.size pri.store.size
    yellow open   logstash-2015.05.18 dL2ZaIelR_uvKMnPYy_8Eg   5   1       4631            0     22.5mb         22.5mb
    yellow open   logstash-2015.05.20 M1PWnqXLRgClt-iwqN4OUg   5   1       4750            0     24.1mb         24.1mb
    yellow open   customer            p6H8gEOdQAWBuSN2HDEjZA   5   1          1            0      4.4kb          4.4kb
    yellow open   shakespeare         I8mqiFkkTdK9IlcarIZA4A   5   1     111396            0     21.5mb         21.5mb
    yellow open   bank                l45mhl-7QNibqbmbi2Jmbw   5   1       1000            0    475.1kb        475.1kb
    green  open   .kibana_1           CUsQj9zkSCSC-XiDJgXYQQ   1   0          2            0      8.6kb          8.6kb
    yellow open   logstash-2015.05.19 14rDFdQFTQK-GNgDXtlmeQ   5   1       4624            0     23.6mb         23.6mb

    Defining your index patterns

    Index patterns tell Kibana which Elasticsearch indices you want to explore. An index pattern can match the name of a single index, or include a wildcard (*) to match multiple indices.

    For example, Logstash typically creates a series of indices in the format logstash-YYYY.MM.DD. To explore all of the log data from May 2018, you could specify the index pattern logstash-2018.05*.

    You’ll create patterns for the Shakespeare data set, which has an index named shakespeare, and the accounts data set, which has an index named bank. These data sets don’t contain time-series data.

    1. In Kibana, open Management, and then click Index Patterns.
    2. If this is your first index pattern, the Create index pattern page opens automatically. Otherwise, click Create index pattern in the upper left.
    3. Enter shakes* in the Index pattern field.
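    Kibana matches index patterns with shell-style wildcards, and the behavior can be approximated with Python's fnmatch module (an analogy, not Kibana's actual matcher):

```python
from fnmatch import fnmatch

indices = ["shakespeare", "bank", "logstash-2015.05.18",
           "logstash-2015.05.19", "logstash-2015.05.20"]

# "shakes*" matches only the shakespeare index.
print([i for i in indices if fnmatch(i, "shakes*")])

# "logstash-2015.05*" matches all three daily log indices.
print([i for i in indices if fnmatch(i, "logstash-2015.05*")])
```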

Original article: https://www.cnblogs.com/chucklu/p/10559546.html