  • ELK Study Notes

     

    Lucene is an information-retrieval toolkit (a jar package); it does not include a search-engine system!

    What it contains: index structures, tools for reading and writing indexes, sorting, search rules... it is a toolkit.

    The relationship between Lucene and ElasticSearch:

    ElasticSearch builds on Lucene, adding encapsulation and enhancements!

    As long as studying doesn't kill you, study like it will!

     

    ElasticSearch Overview

    History

    Who uses it

    Differences between ES and Solr

    ElasticSearch Introduction

    ElasticSearch is a real-time, distributed search and analytics engine.

    It is used for full-text search, structured search, analytics, or any combination of the three.

    Solr Introduction

    Lucene Introduction

    Comparison of ElasticSearch and Solr

     

     

    Installing ElasticSearch

    JDK 1.8 is the minimum requirement!

    ES depends on Java, so make sure the Java environment is installed correctly.

    Download:

    https://www.elastic.co/cn/

    Windows installation

    1. Unzip it and it is ready to use

    2. Get familiar with the directory layout

     bin       # startup scripts
     config    # configuration files
       log4j2            # logging configuration
       jvm.options       # JVM-related settings
       elasticsearch.yml # elasticsearch configuration file, default port 9200
     lib       # dependency jar packages
     modules   # functional modules
     plugins   # plugins
    
    

    3. Start it

    In the bin directory, double-click elasticsearch.bat to start it!

    4. Test access: open http://127.0.0.1:9200 in a browser; a JSON response with the node name and version information means ES is running.

     Install the elasticsearch-head visualization plugin

    Steps:

    1. Open a cmd window in the head directory and run  cnpm install , then wait for the installation to finish.

     

     

     2. Then run the command: npm run start

     

     Problem: a cross-origin (CORS) error appears:

     Fixing the cross-origin problem

    1. Stop ElasticSearch first, then go to the ElasticSearch installation directory, find the elasticsearch.yml file, and add the following at the end:
    http.cors.enabled: true
    http.cors.allow-origin: "*"

           

    # ======================== Elasticsearch Configuration =========================
    #
    # NOTE: Elasticsearch comes with reasonable defaults for most settings.
    #       Before you set out to tweak and tune the configuration, make sure you
    #       understand what are you trying to accomplish and the consequences.
    #
    # The primary way of configuring a node is via this file. This template lists
    # the most important settings you may want to configure for a production cluster.
    #
    # Please consult the documentation for further information on configuration options:
    # https://www.elastic.co/guide/en/elasticsearch/reference/index.html
    #
    # ---------------------------------- Cluster -----------------------------------
    #
    # Use a descriptive name for your cluster:
    #
    #cluster.name: my-application
    #
    # ------------------------------------ Node ------------------------------------
    #
    # Use a descriptive name for the node:
    #
    #node.name: node-1
    #
    # Add custom attributes to the node:
    #
    #node.attr.rack: r1
    #
    # ----------------------------------- Paths ------------------------------------
    #
    # Path to directory where to store the data (separate multiple locations by comma):
    #
    #path.data: /path/to/data
    #
    # Path to log files:
    #
    #path.logs: /path/to/logs
    #
    # ----------------------------------- Memory -----------------------------------
    #
    # Lock the memory on startup:
    #
    #bootstrap.memory_lock: true
    #
    # Make sure that the heap size is set to about half the memory available
    # on the system and that the owner of the process is allowed to use this
    # limit.
    #
    # Elasticsearch performs poorly when the system is swapping the memory.
    #
    # ---------------------------------- Network -----------------------------------
    #
    # Set the bind address to a specific IP (IPv4 or IPv6):
    #
    #network.host: 192.168.0.1
    #
    # Set a custom port for HTTP:
    #
    #http.port: 9200
    #
    # For more information, consult the network module documentation.
    #
    # --------------------------------- Discovery ----------------------------------
    #
    # Pass an initial list of hosts to perform discovery when this node is started:
    # The default list of hosts is ["127.0.0.1", "[::1]"]
    #
    #discovery.seed_hosts: ["host1", "host2"]
    #
    # Bootstrap the cluster using an initial set of master-eligible nodes:
    #
    #cluster.initial_master_nodes: ["node-1", "node-2"]
    #
    # For more information, consult the discovery and cluster formation module documentation.
    #
    # ---------------------------------- Gateway -----------------------------------
    #
    # Block initial recovery after a full cluster restart until N nodes are started:
    #
    #gateway.recover_after_nodes: 3
    #
    # For more information, consult the gateway module documentation.
    #
    # ---------------------------------- Various -----------------------------------
    #
    # Require explicit names when deleting indices:
    #
    #action.destructive_requires_name: true
    
    
    http.cors.enabled: true
    http.cors.allow-origin: "*"

    Start elasticsearch.bat again. When the head page can connect to the node (as the screenshot showed), the es-head plugin setup is complete.

    An index is essentially a database! In head you can create indices (databases) and documents (the data inside a database).

    Installing Kibana

    Understanding ELK

    ELK is the acronym for the three frameworks ElasticSearch, Logstash, and Kibana; it is also called the Elastic Stack.

       1. ElasticSearch is a Lucene-based, distributed, near-real-time search platform that is accessed through a RESTful interface.

       2. Logstash is the central data-flow engine of ELK. It collects data in different formats from different sources (files, data stores, redis, MQ message queues) and, after filtering, can output it to different destinations (files, MQ, redis, elasticsearch, kafka, etc.).

       3. Kibana is used to visualize the data stored in ElasticSearch.

     Installing Kibana

    In Kibana's bin directory, double-click kibana.bat to start it. Note that kibana.bat starts via Node, so a Node environment must be installed first.

     

     Startup

    1. Double-click kibana.bat

     2. Visit  http://localhost:5601/  in the browser

     

     3. Switch the UI to Chinese!

    The Chinese translation files live under E:\ELK\kibana-7.6.1-windows-x86_64\kibana-7.6.1-windows-x86_64\x-pack\plugins\translations\translations

     Steps: set  i18n.locale: "zh-CN"  in config/kibana.yml (the locale provided by the translation files above)

     

     Close Kibana and restart it

     Result: the Kibana UI is displayed in Chinese

    ES Core Concepts

    1. Index

    2. Field types (mapping)

    3. Documents

    4. Shards (inverted index)

    ElasticSearch is document-oriented; everything is JSON!

    Relational DB          ElasticSearch
    database               index (indices)
    table                  type (deprecated since 7.x)
    row                    document
    column (field)         field

    Some concepts

    1. Documents

    Important properties of a document:  

       1. It consists of key:value pairs

       2. A document can contain sub-documents; that is how complex logical entities are built! (A document is just a JSON object; fastjson can convert it automatically.)
       3. Flexible structure: a document does not depend on a predefined schema. In a relational database the fields must be defined before they can be used, but in ES fields are flexible and can be added dynamically.

     Put simply, a document is like a single record, for example:

    User
      1  zhangsan         23
      2  lisi             35
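
    The API examples later in these notes build documents with new User("1", "zhangsan", "12") but never show the entity class. A minimal sketch of such a POJO (the package, field names, and String types are assumptions inferred from those calls, not shown in the original notes):

    package com.wang.pojo;

    /**
     * Minimal User entity assumed by the document examples below (sketch).
     * fastjson serializes it via the getters.
     */
    public class User {
        private String id;
        private String name;
        private String age;

        public User() {
        }

        public User(String id, String name, String age) {
            this.id = id;
            this.name = name;
            this.age = age;
        }

        public String getId() { return id; }
        public void setId(String id) { this.id = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public String getAge() { return age; }
        public void setAge(String age) { this.age = age; }
    }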

     

     2. Index

    An index is like a database!

     

     

    Exact-match query

     GET kuang/user/_search?q=name:狂神说java

     IK Analyzer 

    The IK analyzer splits a piece of text into separate words and phrases.

    Installing the IK analyzer

     Put the downloaded IK analyzer under ElasticSearch's plugins directory: create a new directory named ik under plugins, then copy everything from the downloaded IK analyzer into that ik directory.

     Restart ElasticSearch; the startup log shows that the newly added IK analyzer plugin has been loaded.

    Restart Kibana and verify the IK analyzer.

    The IK analyzer has two analysis modes: ik_smart (coarsest-grained splitting) and ik_max_word (finest-grained splitting). A small verification sketch from Java follows below.
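
    The notes verify the analyzer in Kibana; the same check can also be done from Java with the 7.x high-level client's AnalyzeRequest. A minimal sketch, assuming a local node on port 9200 and using the sample phrase from these notes:

    import org.apache.http.HttpHost;
    import org.elasticsearch.client.RequestOptions;
    import org.elasticsearch.client.RestClient;
    import org.elasticsearch.client.RestHighLevelClient;
    import org.elasticsearch.client.indices.AnalyzeRequest;
    import org.elasticsearch.client.indices.AnalyzeResponse;

    public class IkAnalyzeCheck {
        public static void main(String[] args) throws Exception {
            try (RestHighLevelClient client = new RestHighLevelClient(
                    RestClient.builder(new HttpHost("localhost", 9200, "http")))) {
                // ik_smart = coarsest-grained split; swap in "ik_max_word" for the finest-grained split
                AnalyzeRequest request = AnalyzeRequest.withGlobalAnalyzer("ik_smart", "狂神说java");
                AnalyzeResponse response = client.indices().analyze(request, RequestOptions.DEFAULT);
                // print each term produced by the analyzer
                for (AnalyzeResponse.AnalyzeToken token : response.getTokens()) {
                    System.out.println(token.getTerm());
                }
            }
        }
    }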

     

    Custom dictionary

    By default the phrase '狂神说' gets split apart. In that case we can define the words we commonly use ourselves.

     

    Step 1: under the ik plugin's config folder (elasticsearch/plugins/ik/config), create a new file ending in .dic

     

     Step 2: add the custom words to the newly created kuang.dic file

     Step 3: register the kuang.dic file in IKAnalyzer.cfg.xml (in the same config folder); a sketch of that file follows below
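
     A hedged sketch of what IKAnalyzer.cfg.xml typically looks like after this step; the ext_dict entry is the relevant line, and the exact layout may differ between IK versions:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
    <properties>
        <comment>IK Analyzer extension configuration</comment>
        <!-- register the custom dictionary file created in steps 1 and 2 -->
        <entry key="ext_dict">kuang.dic</entry>
        <!-- optional: custom stop-word dictionary -->
        <entry key="ext_stopwords"></entry>
    </properties>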

    Verification: the three characters 狂神说 are no longer split apart.

     

     Whenever we need custom terms in the future, we just configure them in our own .dic file.

    Integrating ES with SpringBoot

    1. The required dependencies (see the sketch after this list)

    2. How to create the client

    3. How to close the client (see the sketch after the configuration class below)
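
    The notes do not list the dependency. A commonly used choice (an assumption here, not something the original specifies) is the Spring Data Elasticsearch starter, which pulls in the RestHighLevelClient, with the client version pinned to match the local server:

    <!-- pom.xml (sketch) -->
    <properties>
        <!-- keep the client in line with the local ES server, 7.6.1 in these notes -->
        <elasticsearch.version>7.6.1</elasticsearch.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
        </dependency>
    </dependencies>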

    1. Configure a basic SpringBoot project

    2. Add an ES configuration class

    package com.wang.config;
    
    import org.apache.http.HttpHost;
    import org.elasticsearch.client.RestClient;
    import org.elasticsearch.client.RestHighLevelClient;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    
    /**
     * @author 王立朝
     * @date 2020-12
     * @description: ES configuration class. Two steps: 1. build the client object, 2. register it in the Spring container.
     */
    @Configuration
    public class ElasticSearchConfig {
    
        /**
         * Register the RestHighLevelClient bean in the Spring container.
         */
        @Bean
        public RestHighLevelClient restHighLevelClient(){
            RestHighLevelClient client = new RestHighLevelClient(
                    RestClient.builder(new HttpHost("localhost",9200,"http"))
            );
            return client;
        }
    
    }
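
    Item 3 above ("how to close the client") is never shown in the notes. A minimal sketch, assuming a test class in the same project (the package and class names are assumptions):

    package com.wang;
    
    import java.io.IOException;
    
    import org.elasticsearch.client.RestHighLevelClient;
    import org.junit.jupiter.api.Test;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.test.context.SpringBootTest;
    
    @SpringBootTest
    class ElasticSearchClientCloseTest {
    
        // the bean registered by ElasticSearchConfig above
        @Autowired
        private RestHighLevelClient client;
    
        @Test
        void testCloseClient() throws IOException {
            // ... run requests with the client here ...
    
            // RestHighLevelClient implements Closeable; close() releases the
            // underlying HTTP connections once the client is no longer needed.
            client.close();
        }
    }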

    Source code

     

     The important class is the one below:

    //
    // Source code recreated from a .class file by IntelliJ IDEA
    // (powered by Fernflower decompiler)
    //
    
    package org.springframework.boot.autoconfigure.elasticsearch.rest;
    
    import java.time.Duration;
    import org.apache.http.HttpHost;
    import org.apache.http.auth.AuthScope;
    import org.apache.http.auth.Credentials;
    import org.apache.http.auth.UsernamePasswordCredentials;
    import org.apache.http.client.CredentialsProvider;
    import org.apache.http.impl.client.BasicCredentialsProvider;
    import org.elasticsearch.client.RestClient;
    import org.elasticsearch.client.RestClientBuilder;
    import org.elasticsearch.client.RestHighLevelClient;
    import org.springframework.beans.factory.ObjectProvider;
    import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
    import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
    import org.springframework.boot.context.properties.PropertyMapper;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    
    class RestClientConfigurations {
        RestClientConfigurations() {
        }
    
        @Configuration(
            proxyBeanMethods = false
        )
        static class RestClientFallbackConfiguration {
            RestClientFallbackConfiguration() {
            }
    
            // Plain (low-level) client
            @Bean
            @ConditionalOnMissingBean
            RestClient elasticsearchRestClient(RestClientBuilder builder) {
                return builder.build();
            }
        }
    
        @Configuration(
            proxyBeanMethods = false
        )
        @ConditionalOnClass({RestHighLevelClient.class})
        static class RestHighLevelClientConfiguration {
            RestHighLevelClientConfiguration() {
            }
    
            // RestHighLevelClient -- the high-level client
            @Bean
            @ConditionalOnMissingBean
            RestHighLevelClient elasticsearchRestHighLevelClient(RestClientBuilder restClientBuilder) {
                return new RestHighLevelClient(restClientBuilder);
            }
    
            @Bean
            @ConditionalOnMissingBean
            RestClient elasticsearchRestClient(RestClientBuilder builder, ObjectProvider<RestHighLevelClient> restHighLevelClient) {
                RestHighLevelClient client = (RestHighLevelClient) restHighLevelClient.getIfUnique();
                return client != null ? client.getLowLevelClient() : builder.build();
            }
        }
    
        @Configuration(
            proxyBeanMethods = false
        )
        static class RestClientBuilderConfiguration {
            RestClientBuilderConfiguration() {
            }
    
            // RestClientBuilder
            @Bean
            @ConditionalOnMissingBean
            RestClientBuilder elasticsearchRestClientBuilder(RestClientProperties properties,
                    ObjectProvider<RestClientBuilderCustomizer> builderCustomizers) {
                HttpHost[] hosts = (HttpHost[]) properties.getUris().stream().map(HttpHost::create).toArray((x$0) -> {
                    return new HttpHost[x$0];
                });
                RestClientBuilder builder = RestClient.builder(hosts);
                PropertyMapper map = PropertyMapper.get();
                map.from(properties::getUsername).whenHasText().to((username) -> {
                    CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
                    Credentials credentials = new UsernamePasswordCredentials(properties.getUsername(), properties.getPassword());
                    credentialsProvider.setCredentials(AuthScope.ANY, credentials);
                    builder.setHttpClientConfigCallback((httpClientBuilder) -> {
                        return httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider);
                    });
                });
                builder.setRequestConfigCallback((requestConfigBuilder) -> {
                    properties.getClass();
                    map.from(properties::getConnectionTimeout).whenNonNull().asInt(Duration::toMillis).to(requestConfigBuilder::setConnectTimeout);
                    properties.getClass();
                    map.from(properties::getReadTimeout).whenNonNull().asInt(Duration::toMillis).to(requestConfigBuilder::setSocketTimeout);
                    return requestConfigBuilder;
                });
                builderCustomizers.orderedStream().forEach((customizer) -> {
                    customizer.customize(builder);
                });
                return builder;
            }
        }
    }

    Concrete ElasticSearch API operations

    1. Create an index

      // Test creating an index (CreateIndexRequest)
        @Test
        void testCreateIndex() throws IOException {
            // 1. Build the create-index request
            CreateIndexRequest request = new CreateIndexRequest("wang_index");
            // 2. Execute the request with the client and get the response
            CreateIndexResponse createIndexResponse = client.indices().create(request, RequestOptions.DEFAULT);
            System.out.println(createIndexResponse);
        }

    2. Check whether an index exists

     @Test
        void testExistIndex() throws IOException {
            GetIndexRequest getIndexRequest = new GetIndexRequest("wang_index2");
            boolean exists = client.indices().exists(getIndexRequest, RequestOptions.DEFAULT);
            System.out.println(exists);
        }

    3. Delete an index

    
    
    /**
    * Test deleting an index
    */
    @Test
        void testDeleteIndex() throws IOException {
            DeleteIndexRequest deleteIndexRequest = new DeleteIndexRequest("wang_index");
            AcknowledgedResponse response = client.indices().delete(deleteIndexRequest, RequestOptions.DEFAULT);
            System.out.println("response = " + response.isAcknowledged());
    
        }

    Document operations

    4. Create a document

     /**
         * Create a document
         */
        @Test
        void testAddDocument() throws IOException {
            // Create the entity object
            User zhangsan = new User("1", "zhangsan", "12");
            // Build the index (create-document) request
            IndexRequest indexRequest = new IndexRequest("wang_index");
            // Rule: corresponds to the command PUT /wang_index/_doc/1
            indexRequest.id("1");
            indexRequest.timeout(TimeValue.timeValueSeconds(1));
            // Put the data into the request as JSON
            indexRequest.source(JSON.toJSONString(zhangsan), XContentType.JSON);
            // Send the request with the client
            IndexResponse index = client.index(indexRequest, RequestOptions.DEFAULT);
            System.out.println("index = " + index.status());
        }

    5. Check whether a document exists

     /**
         * Check whether a document exists: GET /index/_doc/1
         * @throws IOException
         */
        @Test
        void existDocument() throws IOException {
            // Build the get-document request
            GetRequest getRequest = new GetRequest("wang_index", "1");
            // Do not fetch the _source context in the response
            getRequest.fetchSourceContext(new FetchSourceContext(false));
            getRequest.storedFields("_none_");
            boolean exists = client.exists(getRequest, RequestOptions.DEFAULT);
            System.out.println(exists);
        }

    6. Get a document

       /**
         * Get a document's information
         */
        @Test
        void testGetDocument() throws IOException {
            // Build the get-document request
            GetRequest getRequest = new GetRequest("wang_index", "1");
            GetResponse getResponse = client.get(getRequest, RequestOptions.DEFAULT);
            System.out.println(getResponse.getSourceAsString());
            System.out.println(getResponse);
        }

    7. Update a document

       /**
         * Update a document's information
         */
        @Test
        void testUpdateDocument() throws IOException {
            // Build the update-document request
            UpdateRequest updateRequest = new UpdateRequest("wang_index", "1");
            updateRequest.timeout("1s");
            User user1 = new User("2", "小王", "12");
            // Convert the Java object to JSON
            updateRequest.doc(JSON.toJSONString(user1), XContentType.JSON);
    
            UpdateResponse update = client.update(updateRequest, RequestOptions.DEFAULT);
            System.out.println(update);
        }

    8. Delete a document

        /**
         * Delete a document's information
         */
        @Test
        void testDeleteDocument() throws IOException {
            // Build the delete-document request
            DeleteRequest deleteRequest = new DeleteRequest("wang_index", "1");
            // Set the request timeout
            deleteRequest.timeout("1s");
            // Execute the delete
            DeleteResponse delete = client.delete(deleteRequest, RequestOptions.DEFAULT);
            System.out.println(delete);
        }

    9. Bulk-insert data

     /**
         * Bulk-insert data
         */
        @Test
        void testBulkRequest() throws IOException {
            // The bulk request that collects all operations
            BulkRequest bulkRequest = new BulkRequest();
            bulkRequest.timeout("10s");
    
            ArrayList<User> userArrayList = new ArrayList<>();
            userArrayList.add(new User("2", "小王", "12"));
            userArrayList.add(new User("1", "小王11", "12"));
            userArrayList.add(new User("2", "小王2", "12"));
            userArrayList.add(new User("3", "小王3", "13"));
            userArrayList.add(new User("4", "小王4", "14"));
            userArrayList.add(new User("5", "小王5", "15"));
    
            for (int i = 0; i < userArrayList.size(); i++) {
                bulkRequest.add(new IndexRequest("wang_index")
                        .id("" + (i + 1)).source(JSON.toJSONString(userArrayList.get(i)), XContentType.JSON));
            }
            BulkResponse bulk = client.bulk(bulkRequest, RequestOptions.DEFAULT);
            // hasFailures() returns false when every operation in the bulk succeeded
            System.out.println(bulk.hasFailures());
        }

    10. Search documents

        /**
         * Search
         */
        @Test
        void testSearch() throws IOException {
            SearchRequest searchRequest = new SearchRequest("wang_index");
            // 1. Build the SearchSourceBuilder that holds the search conditions
            SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
    
            // 2. Build the query condition; the QueryBuilders utility class helps here
            // QueryBuilders.termQuery       -- exact-match query
            // QueryBuilders.matchAllQuery() -- match all documents
            TermQueryBuilder termQueryBuilder = QueryBuilders.termQuery("age", "15");
            searchSourceBuilder.query(termQueryBuilder);
            searchSourceBuilder.timeout(new TimeValue(60, TimeUnit.MINUTES));
            // 3. Put the search conditions into the request
            searchRequest.source(searchSourceBuilder);
            // 4. Execute the search
            SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);
            System.out.println(JSON.toJSONString(searchResponse.getHits()));
            System.out.println("-------------");
            for (SearchHit documentFields: searchResponse.getHits().getHits()){
                System.out.println(documentFields.getSourceAsMap());
            }
    
        }
  • Original article: https://www.cnblogs.com/wanglichaoya/p/ELK.html