
Druid Installation and Deployment

I. Environment Requirements

    Java 8 (8u92 +)

Linux, Mac OS X, or another Unix-like OS (Windows is not supported)

ZooKeeper (3.4+)
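
You can confirm the first two requirements from the shell. The 8u92 floor matters because the stock jvm.config files use a JVM flag introduced in that update (see Common Problems below):

    java -version    # should report 1.8.0_92 or later
    uname -s         # Linux or Darwin; Windows is not supported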

Downloading Druid:

Official site: https://druid.apache.org. From the home page, click "Download" to reach the download page; the latest version at the time of writing is 0.16. Here we download the pre-built binary release.

To download an older version, scroll down and click "Apache release archives" to reach the archive of historical releases.
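
If you prefer the command line, the release can also be fetched directly. The URL below follows the usual Apache release-archive layout for the 0.16.0-incubating version used in this guide, but it is an assumption; verify it against the download page:

    wget https://archive.apache.org/dist/incubator/druid/0.16.0-incubating/apache-druid-0.16.0-incubating-bin.tar.gz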

II. Single-Machine Installation

1. Upload the installation package to the server and extract it:

    tar -xzf apache-druid-0.16.0-incubating-bin.tar.gz -C /data

2. Download the ZooKeeper package, upload it to the server, extract it into the Druid root directory, and rename it to zk:

    tar -xzf zookeeper-3.4.6.tar.gz -C /data/apache-druid-0.16.0-incubating

    mv /data/apache-druid-0.16.0-incubating/zookeeper-3.4.6 /data/apache-druid-0.16.0-incubating/zk

3. Change into the Druid installation directory and run the single-machine startup script:

      cd /data/apache-druid-0.16.0-incubating

      ./bin/start-micro-quickstart

4. Visit http://localhost:8888 to open the Druid web console.
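
To check from the shell that the quickstart actually came up: every Druid process, including the Router that serves port 8888, exposes a /status/health endpoint that returns true once the process is ready:

    curl http://localhost:8888/status/health    # prints "true" when the Router is up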

III. Cluster Installation

Deployment plan:

    Role          Machine                      Cluster roles (processes)
    Master node   hadoop1 (192.168.252.111)    Coordinator, Overlord
    Data node     hadoop2 (192.168.252.112)    Historical, MiddleManager
    Query node    hadoop3 (192.168.252.113)    Broker, Router

External dependencies:

    Component    Machines                     Description
    ZooKeeper    hadoop1, hadoop2, hadoop3    Provides distributed coordination
    Hadoop       hadoop1, hadoop2, hadoop3    Provides deep storage for data files
    MySQL        hadoop1                      Provides the metadata store

Note: installing the external dependencies is beyond the scope of this guide; the steps below assume they are already installed and running.
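
One piece of MySQL preparation is worth showing, since the configuration below assumes it: the druid metadata database must exist before the cluster first starts. A minimal sketch, using the host and root credentials that appear later in common.runtime.properties (Druid's documentation recommends a utf8mb4 character set):

    # Run on hadoop1; the database name must match the JDBC URL configured below.
    mysql -u root -p -e "CREATE DATABASE druid DEFAULT CHARACTER SET utf8mb4;"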

1. Upload the installation package to hadoop1, extract it, and change into the directory:

    tar -xzf apache-druid-0.16.0-incubating-bin.tar.gz -C /data

    cd /data/apache-druid-0.16.0-incubating

2. Edit the core configuration file, shown below:

          vim conf/druid/cluster/_common/common.runtime.properties

#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#

# Extensions specified in the load list will be loaded by Druid
# We are using local fs for deep storage - not recommended for production - use S3, HDFS, or NFS instead
# We are using local derby for the metadata store - not recommended for production - use MySQL or Postgres instead

# If you specify `druid.extensions.loadList=[]`, Druid won't load any extension from file system.
# If you don't specify `druid.extensions.loadList`, Druid will load all the extensions under root extension directory.
# More info: https://druid.apache.org/docs/latest/operations/including-extensions.html
druid.extensions.loadList=["mysql-metadata-storage","druid-hdfs-storage","druid-kafka-indexing-service"]

# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory
# and uncomment the line below to point to your directory.
#druid.extensions.hadoopDependenciesDir=/my/dir/hadoop-dependencies


#
# Hostname
#
druid.host=localhost

#
# Logging
#

# Log all runtime properties on startup. Disable to avoid logging properties on startup:
druid.startup.logging.logProperties=true

#
# Zookeeper
#

druid.zk.service.host=hadoop1:2181,hadoop2:2181,hadoop3:2181
druid.zk.paths.base=/druid

#
# Metadata storage
#

# For Derby server on your Druid Coordinator (only viable in a cluster with a single Coordinator, no fail-over):
#druid.metadata.storage.type=derby
#druid.metadata.storage.connector.connectURI=jdbc:derby://localhost:1527/var/druid/metadata.db;create=true
#druid.metadata.storage.connector.host=localhost
#druid.metadata.storage.connector.port=1527

# For MySQL (make sure to include the MySQL JDBC driver on the classpath):
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://hadoop1:3306/druid
druid.metadata.storage.connector.user=root
druid.metadata.storage.connector.password=123456

# For PostgreSQL:
#druid.metadata.storage.type=postgresql
#druid.metadata.storage.connector.connectURI=jdbc:postgresql://db.example.com:5432/druid
#druid.metadata.storage.connector.user=...
#druid.metadata.storage.connector.password=...

#
# Deep storage
#

# For local disk (only viable in a cluster if this is a network mount):
#druid.storage.type=local
#druid.storage.storageDirectory=var/druid/segments

# For HDFS:
druid.storage.type=hdfs
druid.storage.storageDirectory=hdfs://testcluster/druid/segments

# For S3:
#druid.storage.type=s3
#druid.storage.bucket=your-bucket
#druid.storage.baseKey=druid/segments
#druid.s3.accessKey=...
#druid.s3.secretKey=...

#
# Indexing service logs
#

# For local disk (only viable in a cluster if this is a network mount):
#druid.indexer.logs.type=file
#druid.indexer.logs.directory=var/druid/indexing-logs

# For HDFS:
druid.indexer.logs.type=hdfs
druid.indexer.logs.directory=hdfs://testcluster/druid/indexing-logs

# For S3:
#druid.indexer.logs.type=s3
#druid.indexer.logs.s3Bucket=your-bucket
#druid.indexer.logs.s3Prefix=druid/indexing-logs

#
# Service discovery
#

druid.selectors.indexing.serviceName=druid/overlord
druid.selectors.coordinator.serviceName=druid/coordinator

#
# Monitoring
#

druid.monitoring.monitors=["org.apache.druid.java.util.metrics.JvmMonitor"]
druid.emitter=noop
druid.emitter.logging.logLevel=info

# Storage type of double columns
# omitting this will lead to index double as float at the storage layer

druid.indexing.doubleStorage=double

#
# Security
#
druid.server.hiddenProperties=["druid.s3.accessKey","druid.s3.secretKey","druid.metadata.storage.connector.password"]


#
# SQL
#
druid.sql.enable=true

#
# Lookups
#
druid.lookup.enableLookupSyncOnStartup=false

Pay particular attention to the settings changed from the shipped defaults: druid.extensions.loadList lists the external extensions to load (this example depends on ZooKeeper, Hadoop, and MySQL; the Kafka indexing service extension is optional), plus the ZooKeeper connection string, the MySQL metadata-store section, and the HDFS deep-storage and indexing-log sections. Note also that druid.host is still localhost in the listing; on a real cluster, either set it on each machine to that machine's own hostname after copying, or comment it out so Druid auto-detects it.

The bundled ZooKeeper client supports ZooKeeper 3.4.14 by default. If your external ZooKeeper version is incompatible, copy the matching ZooKeeper jars into Druid's lib directory, replacing the bundled ones.

The bundled Hadoop client is 2.8.3 by default. Copy your cluster's core Hadoop configuration files (core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml) into Druid's conf/druid/cluster/_common/ directory. If the bundled Hadoop client is incompatible with your Hadoop cluster, also copy a compatible Hadoop client's jars into Druid's extensions/druid-hdfs-storage directory, replacing the bundled ones.

MySQL is not supported out of the box: copy a MySQL JDBC driver jar matching your MySQL version into the extensions/mysql-metadata-storage directory.
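
Putting the three notes above together, the copy steps might look like the sketch below. The Hadoop configuration directory (/etc/hadoop/conf) and the driver jar name (mysql-connector-java-5.1.48.jar) are assumptions; substitute the paths and versions from your environment:

    # Run from the Druid root directory on hadoop1.
    # Hadoop client configs into Druid's common config directory (source path assumed):
    cp /etc/hadoop/conf/core-site.xml /etc/hadoop/conf/hdfs-site.xml \
       /etc/hadoop/conf/mapred-site.xml /etc/hadoop/conf/yarn-site.xml \
       conf/druid/cluster/_common/

    # MySQL JDBC driver into the metadata-storage extension (jar name assumed):
    cp mysql-connector-java-5.1.48.jar extensions/mysql-metadata-storage/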

3. Copy the entire Druid directory to hadoop2 and hadoop3, as shown below.
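
For example, with scp (assuming SSH access from hadoop1 to the other two machines):

    scp -r /data/apache-druid-0.16.0-incubating hadoop2:/data/
    scp -r /data/apache-druid-0.16.0-incubating hadoop3:/data/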

4. Adjust the master-node configuration on hadoop1 as needed (machine specs, performance requirements, and so on). The files live under conf/druid/cluster/master/coordinator-overlord: runtime.properties holds the node's Druid properties, and jvm.config holds the JVM options for the process. The default cluster JVM settings are fairly demanding; if you are only testing on under-powered machines, you can copy over the single-machine settings from conf/druid/single-server/micro-quickstart/coordinator-overlord (see the sketch after the note below).

5. Likewise adjust the data-node configuration on hadoop2, under conf/druid/cluster/data. This node runs two processes, Historical and MiddleManager; modify their settings as needed, and as in step 4 you can fall back to the single-machine settings for testing.

6. Likewise adjust the query-node configuration on hadoop3, under conf/druid/cluster/query. This node runs two processes, Broker and Router; modify their settings as needed, and as in step 4 you can fall back to the single-machine settings for testing.

Note: the default cluster settings demand far more resources than a typical learning/test VM provides; the author used the single-machine settings for all tests.
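
A sketch of that fallback, run from the Druid root on each machine. The directory names follow the 0.16.0 release layout (verify with ls before copying), and each machine only needs the directories for the processes it actually runs:

    # hadoop1 (master):
    cp conf/druid/single-server/micro-quickstart/coordinator-overlord/* \
       conf/druid/cluster/master/coordinator-overlord/

    # hadoop2 (data):
    cp conf/druid/single-server/micro-quickstart/historical/* \
       conf/druid/cluster/data/historical/
    cp conf/druid/single-server/micro-quickstart/middleManager/* \
       conf/druid/cluster/data/middleManager/

    # hadoop3 (query):
    cp conf/druid/single-server/micro-quickstart/broker/* \
       conf/druid/cluster/query/broker/
    cp conf/druid/single-server/micro-quickstart/router/* \
       conf/druid/cluster/query/router/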

7. Start the services:

    hadoop1: sh bin/start-cluster-master-no-zk-server > master.log &

    hadoop2: sh bin/start-cluster-data-server > data.log &

    hadoop3: sh bin/start-cluster-query-server > query.log &

8. Shutdown: on the relevant node, run bin/service --down to stop all Druid processes on that node.

9. Druid web console addresses:

          http://192.168.252.113:8888/

          http://192.168.252.111:8081/
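
Port 8888 is the Router's console on the query node, and 8081 is the Coordinator's console on the master node. The same /status/health endpoint used earlier works here for a quick liveness check:

    curl http://192.168.252.113:8888/status/health    # Router
    curl http://192.168.252.111:8081/status/health    # Coordinator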

Common Problems

1. Do not invoke the shutdown command as sh bin/service --down: the service script has a flaw in how it reads its arguments, so it must be run directly as bin/service --down.

2. bin/verify-default-ports is the script that checks for port conflicts at startup. If you have changed a component's port and startup still complains that the port is in use, inspect this script and update it to match (this typically comes up with the single-machine setup).

3. If the startup log shows Unrecognized VM option 'ExitOnOutOfMemoryError', the JVM does not recognize that flag, which was introduced in Java 8u92 (hence the 8u92+ requirement above). Upgrade Java, or remove the flag from the affected process's jvm.config.
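
For orientation, a jvm.config is simply one JVM flag per line. A sketch along the lines of the micro-quickstart defaults (the heap sizes are illustrative; drop the ExitOnOutOfMemoryError line only if you cannot upgrade Java):

    -server
    -Xms256m
    -Xmx256m
    -XX:+ExitOnOutOfMemoryError
    -XX:+UseG1GC
    -Duser.timezone=UTC
    -Dfile.encoding=UTF-8
    -Djava.io.tmpdir=var/tmp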

Reference: https://blog.csdn.net/cb2474600377/article/details/103577796
