  • Performing nested sub-aggregations in Elasticsearch with Java

    The sub-aggregation query:

            // Inner aggregation: bucket documents by the dt_ids field (size 30000 so all terms are returned)
            TermsAggregationBuilder aggregation = AggregationBuilders.terms("dt_id").field("dt_ids").size(30000);
            // Outer aggregation: bucket by city_code, with the dt_id aggregation nested inside it
            String agg_area_field = "city_code";
            TermsAggregationBuilder area_field_builder = AggregationBuilders.terms("area_field").field(agg_area_field).size(30000);
            area_field_builder.subAggregation(aggregation);
            SearchResponse response = client.prepareSearch(index_name).setTypes("lw_devices")
                    .setQuery(boolQuery)
                    .addAggregation(area_field_builder)
                    .execute()
                    .actionGet();

    The demo above works as follows: first aggregate on the city_code field, then, within each resulting bucket, run a nested aggregation (sub-aggregation) on the dt_ids field. This is roughly equivalent to a SQL GROUP BY over two columns: one aggregation is nested inside the other, and the two are linked with the subAggregation method.
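
    The snippet references a client, a boolQuery, and an index_name that are defined elsewhere in the original code. A minimal sketch of that setup, assuming the ES 6.x TransportClient API, might look like the following; the cluster name, host, index name, and term filter are illustrative assumptions, not from the post:

            // Hypothetical setup for the pieces the demo assumes exist.
            Settings settings = Settings.builder().put("cluster.name", "my-cluster").build(); // assumed cluster name
            TransportClient client = new PreBuiltTransportClient(settings)
                    .addTransportAddress(new TransportAddress(InetAddress.getByName("127.0.0.1"), 9300)); // may throw UnknownHostException
            String index_name = "lw_devices"; // assumed index name
            BoolQueryBuilder boolQuery = QueryBuilders.boolQuery()
                    .filter(QueryBuilders.termQuery("status", "online")); // hypothetical filter condition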

    Then iterate over the aggregation results:

            Terms terms = response.getAggregations().get("area_field");
            for (Terms.Bucket bucket : terms.getBuckets()) {
                // Area (city) code of the outer bucket; getKeyAsString() avoids an unsafe cast of getKey()
                String code = bucket.getKeyAsString();
                System.out.println(code);
                // Drill into the nested sub-aggregation by the name it was registered with
                Terms dt_id = bucket.getAggregations().get("dt_id");
                for (Terms.Bucket bucket1 : dt_id.getBuckets()) {
                    System.out.println(bucket1.getKey() + ":" + bucket1.getDocCount());
                }
            }

    The approach: take the aggregations from the response, look them up by the name you registered to get the typed aggregation result, and from it get the list of buckets; from each bucket, take its own aggregations, and repeat the same steps level by level to extract the nested results.
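
    As a variant of the loop above, here is a hedged sketch that collects the nested counts into a map keyed by city code and then by dt_id (the map structure and variable names are illustrative, not from the original post):

            // Collect nested bucket counts as city_code -> (dt_id -> document count).
            // Requires java.util.Map and java.util.HashMap.
            Map<String, Map<String, Long>> countsByCity = new HashMap<>();
            Terms areaTerms = response.getAggregations().get("area_field");
            for (Terms.Bucket areaBucket : areaTerms.getBuckets()) {
                Map<String, Long> counts = new HashMap<>();
                Terms dtTerms = areaBucket.getAggregations().get("dt_id");
                for (Terms.Bucket dtBucket : dtTerms.getBuckets()) {
                    counts.put(dtBucket.getKeyAsString(), dtBucket.getDocCount());
                }
                countsByCity.put(areaBucket.getKeyAsString(), counts);
            }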

  • Original post: https://www.cnblogs.com/chenmz1995/p/11583915.html