  • ClickHouse cluster deployment

     

    ----------------------- ZooKeeper deployment -----------------------------------------

    See: https://www.cnblogs.com/hxlasky/p/14665752.html

     

    ------------------------ Single-node installation ----------------------------------------
    1. Download ClickHouse
    The following four packages are needed:
    clickhouse-client-20.8.3.18-1.el7.x86_64.rpm
    clickhouse-common-static-20.8.3.18-1.el7.x86_64.rpm
    clickhouse-server-20.8.3.18-1.el7.x86_64.rpm
    clickhouse-server-common-20.8.3.18-1.el7.x86_64.rpm

    Download from: https://packagecloud.io/Altinity/clickhouse

    or from:

    https://mirrors.tuna.tsinghua.edu.cn/clickhouse/rpm/stable/x86_64/


    2. Install the packages
    Install on one of the nodes first, in the following order:
    [root@localhost clickhouse]# rpm -ivh clickhouse-common-static-20.8.3.18-1.el7.x86_64.rpm
    [root@localhost clickhouse]# rpm -ivh clickhouse-server-common-20.8.3.18-1.el7.x86_64.rpm
    [root@localhost clickhouse]# rpm -ivh clickhouse-server-20.8.3.18-1.el7.x86_64.rpm
    [root@localhost clickhouse]# rpm -ivh clickhouse-client-20.8.3.18-1.el7.x86_64.rpm
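    Before moving on, you can confirm the packages registered with rpm (a quick sanity check; the list should match the four packages above):

    [root@localhost clickhouse]# rpm -qa | grep clickhouse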



    Copy the packages to the other nodes and install them there as well:
    scp clickhouse-client-20.8.3.18-1.el7.x86_64.rpm root@192.168.1.135:/soft/clickhouse/
    scp clickhouse-common-static-20.8.3.18-1.el7.x86_64.rpm root@192.168.1.135:/soft/clickhouse/
    scp clickhouse-server-20.8.3.18-1.el7.x86_64.rpm root@192.168.1.135:/soft/clickhouse/
    scp clickhouse-server-common-20.8.3.18-1.el7.x86_64.rpm root@192.168.1.135:/soft/clickhouse/

    scp clickhouse-client-20.8.3.18-1.el7.x86_64.rpm root@192.168.1.136:/soft/clickhouse/
    scp clickhouse-common-static-20.8.3.18-1.el7.x86_64.rpm root@192.168.1.136:/soft/clickhouse/
    scp clickhouse-server-20.8.3.18-1.el7.x86_64.rpm root@192.168.1.136:/soft/clickhouse/
    scp clickhouse-server-common-20.8.3.18-1.el7.x86_64.rpm root@192.168.1.136:/soft/clickhouse/
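    Then install on each remote node in the same order as above. If root SSH access between the nodes is available (an assumption; adapt to your setup), the remote installs can be scripted:

    for host in 192.168.1.135 192.168.1.136; do
      ssh root@$host 'cd /soft/clickhouse && \
        rpm -ivh clickhouse-common-static-20.8.3.18-1.el7.x86_64.rpm && \
        rpm -ivh clickhouse-server-common-20.8.3.18-1.el7.x86_64.rpm && \
        rpm -ivh clickhouse-server-20.8.3.18-1.el7.x86_64.rpm && \
        rpm -ivh clickhouse-client-20.8.3.18-1.el7.x86_64.rpm'
    done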


    3. After the RPMs are installed without errors, the clickhouse-server and clickhouse-client configuration directories look like this:
    [root@localhost clickhouse]# ls -al /etc/clickhouse-server/
    total 56
    drwxr-xr-x   2 root root    41 Apr 16 10:49 .
    drwxr-xr-x. 91 root root  8192 Apr 16 10:50 ..
    -rw-r--r--   1 root root 33738 Oct  6  2020 config.xml
    -rw-r--r--   1 root root  5587 Oct  6  2020 users.xml

    [root@localhost clickhouse]# ll /etc/clickhouse-client/
    total 4
    drwxr-xr-x 2 clickhouse clickhouse    6 Apr 16 10:50 conf.d
    -rw-r--r-- 1 clickhouse clickhouse 1568 Oct  6  2020 config.xml

    4. Change the data directory
    For the current performance testing, only /etc/clickhouse-server/config.xml matters,
    and only the data directory needs to be changed:
    <!-- Path to data directory, with trailing slash. -->
    <path>/home/clickhouse/data/clickhouse/</path>

    Note that the disk backing this directory must be large enough.
    Other settings can be tuned to your situation; also check that the configured ports are not already in use.
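    If the new data path does not exist yet, create it and give ownership to the clickhouse service user created by the RPMs before starting the server (a minimal sketch):

    mkdir -p /home/clickhouse/data/clickhouse
    chown -R clickhouse:clickhouse /home/clickhouse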

    5. Start clickhouse-server
    [root@localhost clickhouse-server]# /etc/init.d/clickhouse-server start
    Check that it is listening:
    [root@localhost clickhouse-server]# ss -nlp|grep 9000

    6. Stop clickhouse-server:
    [root@localhost clickhouse-server]# /etc/init.d/clickhouse-server stop

    7. Connect with clickhouse-client:
    [root@localhost /]# clickhouse-client --host localhost
    ClickHouse client version 20.8.3.18.
    Connecting to localhost:9000 as user default.
    Connected to ClickHouse server version 20.8.3 revision 54438.

    localhost :)

    localhost :) show databases;

    SHOW DATABASES

    ┌─name───────────────────────────┐
    │ _temporary_and_external_tables │
    │ default                        │
    │ system                         │
    └────────────────────────────────┘

    8. Enable remote connections
    [root@db_server_yayun_01 ~]# vi /etc/clickhouse-server/config.xml

    Find the line <listen_host>::</listen_host> and uncomment it.
    Then restart:
    /etc/init.d/clickhouse-server stop
    /etc/init.d/clickhouse-server start

    A remote client can now connect to the server:
    clickhouse-client --host 192.168.1.118 --port=9000 -m

    The -m flag lets SQL statements span multiple lines in the interactive client.
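    For example, with -m a statement only executes once the terminating semicolon is entered (the :-] continuation prompt is what clickhouse-client prints for an unfinished statement):

    localhost :) select name
    :-] from system.databases;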

    Start the service on all three nodes.

    -------------------------------------------- Cluster deployment -------------------------
    Once all three nodes have been set up as single nodes per the guide above, add the configuration the cluster needs.
    1. Taking one node as the example, run vi /etc/metrika.xml and add the configuration below.
    Note: creating a brand-new file at /etc/metrika.xml looks arbitrary at first,
    but a careful read of /etc/clickhouse-server/config.xml explains it; there is this commented note:
    <!-- If element has 'incl' attribute, then for it's value will be used corresponding substitution from another file.
             By default, path to file with substitutions is /etc/metrika.xml. It could be changed in config in 'include_from' element.
             Values for substitutions are specified in /yandex/name_of_substitution elements in that file.
          -->
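    In other words, the stock config.xml pulls the elements below in from /etc/metrika.xml through incl attributes, roughly like this (a sketch of the relevant default entries; exact wording may differ slightly between builds):

    <!-- in /etc/clickhouse-server/config.xml -->
    <remote_servers incl="clickhouse_remote_servers" />
    <zookeeper incl="zookeeper-servers" optional="true" />
    <macros incl="macros" optional="true" />

    With those hooks in place, /etc/metrika.xml for this cluster looks like this: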

     

    <yandex> 
      <clickhouse_remote_servers>
        <perftest_3shards_1replicas>
          <shard>
            <internal_replication>true</internal_replication>  
            <replica>
              <host>192.168.1.118</host>  
              <port>9000</port>
            </replica>
          </shard>  
          <shard>
            <internal_replication>true</internal_replication>  
            <replica>
              <host>192.168.1.135</host>  
              <port>9000</port>
            </replica>
          </shard>  
          <shard>
            <internal_replication>true</internal_replication>  
            <replica>
              <host>192.168.1.136</host>  
              <port>9000</port>
            </replica>
          </shard>
        </perftest_3shards_1replicas>
      </clickhouse_remote_servers>  
      <!-- ZooKeeper configuration -->  
      <zookeeper-servers>
        <node index="1">
          <host>192.168.1.118</host>  
          <port>2182</port>
        </node>  
        <node index="2">
          <host>192.168.1.135</host>  
          <port>2182</port>
        </node>  
        <node index="3">
          <host>192.168.1.136</host>  
          <port>2182</port>
        </node>
      </zookeeper-servers>  
      <macros>
        <shard_name>01</shard_name>  
        <replica>192.168.1.118</replica>
      </macros>  
      <networks>
        <ip>::/0</ip>
      </networks>  
      <clickhouse_compression>
        <case>
          <min_part_size>10000000000</min_part_size>  
          <min_part_size_ratio>0.01</min_part_size_ratio>  
          <method>lz4</method>
        </case>
      </clickhouse_compression>
    </yandex>

    On each node, adjust the <macros> block to that node's own shard number and IP; the rest of the file is identical on every node.
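    For example, on 192.168.1.135 the block would read (assuming shards are numbered 01/02/03 in the order they appear in the cluster definition):

      <macros>
        <shard_name>02</shard_name>  
        <replica>192.168.1.135</replica>
      </macros>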

     

    2. Restart
    After every node is configured as above, restart the ClickHouse service on each node, just as in the single-node setup. Then check the server log; if it is free of errors, the cluster is basically healthy.
    On each node run:
    /etc/init.d/clickhouse-server stop
    /etc/init.d/clickhouse-server start

    Check the log:
    more /var/log/clickhouse-server/clickhouse-server.log
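    A quick filter for problems (message formats vary by version, so treat this as a rough check):

    grep -iE 'error|exception' /var/log/clickhouse-server/clickhouse-server.log | tail -20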

     

    3. Verify
    On each node, start the ClickHouse client exactly as in the single-node setup and query the cluster metadata:
    [root@localhost init.d]# clickhouse-client --host 192.168.1.118 --port=9000
    ClickHouse client version 20.8.3.18.
    Connecting to 192.168.1.118:9000 as user default.
    Connected to ClickHouse server version 20.8.3 revision 54438.

    localhost :) select * from system.clusters;

    SELECT *
    FROM system.clusters

    ┌─cluster───────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name─────┬─host_address──┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─estimated_recovery_time─┐
    │ perftest_3shards_1replicas        │         1 │            1 │           1 │ 192.168.1.118 │ 192.168.1.118 │ 9000 │        1 │ default │                  │            0 │                       0 │
    │ perftest_3shards_1replicas        │         2 │            1 │           1 │ 192.168.1.135 │ 192.168.1.135 │ 9000 │        0 │ default │                  │            0 │                       0 │
    │ perftest_3shards_1replicas        │         3 │            1 │           1 │ 192.168.1.136 │ 192.168.1.136 │ 9000 │        0 │ default │                  │            0 │                       0 │
    │ test_cluster_two_shards           │         1 │            1 │           1 │ 127.0.0.1     │ 127.0.0.1     │ 9000 │        1 │ default │                  │            0 │                       0 │
    │ test_cluster_two_shards           │         2 │            1 │           1 │ 127.0.0.2     │ 127.0.0.2     │ 9000 │        0 │ default │                  │            0 │                       0 │
    │ test_cluster_two_shards_localhost │         1 │            1 │           1 │ localhost     │ ::1           │ 9000 │        1 │ default │                  │            0 │                       0 │
    │ test_cluster_two_shards_localhost │         2 │            1 │           1 │ localhost     │ ::1           │ 9000 │        1 │ default │                  │            0 │                       0 │
    │ test_shard_localhost              │         1 │            1 │           1 │ localhost     │ ::1           │ 9000 │        1 │ default │                  │            0 │                       0 │
    │ test_shard_localhost_secure       │         1 │            1 │           1 │ localhost     │ ::1           │ 9440 │        0 │ default │                  │            0 │                       0 │
    │ test_unavailable_shard            │         1 │            1 │           1 │ localhost     │ ::1           │ 9000 │        1 │ default │                  │            0 │                       0 │
    │ test_unavailable_shard            │         2 │            1 │           1 │ localhost     │ ::1           │    1 │        0 │ default │                  │            0 │                       0 │
    └───────────────────────────────────┴───────────┴──────────────┴─────────────┴───────────────┴───────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────────────┘

    11 rows in set. Elapsed: 0.004 sec.

    localhost :) exit
    Bye.
    [root@localhost init.d]# clickhouse-client --host 192.168.1.135 --port=9000
    ClickHouse client version 20.8.3.18.
    Connecting to 192.168.1.135:9000 as user default.
    Connected to ClickHouse server version 20.8.3 revision 54438.

    localhost :) select * from system.clusters;

    SELECT *
    FROM system.clusters

    ┌─cluster───────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name─────┬─host_address──┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─estimated_recovery_time─┐
    │ perftest_3shards_1replicas        │         1 │            1 │           1 │ 192.168.1.118 │ 192.168.1.118 │ 9000 │        0 │ default │                  │            0 │                       0 │
    │ perftest_3shards_1replicas        │         2 │            1 │           1 │ 192.168.1.135 │ 192.168.1.135 │ 9000 │        1 │ default │                  │            0 │                       0 │
    │ perftest_3shards_1replicas        │         3 │            1 │           1 │ 192.168.1.136 │ 192.168.1.136 │ 9000 │        0 │ default │                  │            0 │                       0 │
    │ test_cluster_two_shards           │         1 │            1 │           1 │ 127.0.0.1     │ 127.0.0.1     │ 9000 │        1 │ default │                  │            0 │                       0 │
    │ test_cluster_two_shards           │         2 │            1 │           1 │ 127.0.0.2     │ 127.0.0.2     │ 9000 │        0 │ default │                  │            0 │                       0 │
    │ test_cluster_two_shards_localhost │         1 │            1 │           1 │ localhost     │ ::1           │ 9000 │        1 │ default │                  │            0 │                       0 │
    │ test_cluster_two_shards_localhost │         2 │            1 │           1 │ localhost     │ ::1           │ 9000 │        1 │ default │                  │            0 │                       0 │
    │ test_shard_localhost              │         1 │            1 │           1 │ localhost     │ ::1           │ 9000 │        1 │ default │                  │            0 │                       0 │
    │ test_shard_localhost_secure       │         1 │            1 │           1 │ localhost     │ ::1           │ 9440 │        0 │ default │                  │            0 │                       0 │
    │ test_unavailable_shard            │         1 │            1 │           1 │ localhost     │ ::1           │ 9000 │        1 │ default │                  │            0 │                       0 │
    │ test_unavailable_shard            │         2 │            1 │           1 │ localhost     │ ::1           │    1 │        0 │ default │                  │            0 │                       0 │
    └───────────────────────────────────┴───────────┴──────────────┴─────────────┴───────────────┴───────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────────────┘

    11 rows in set. Elapsed: 0.003 sec.

    localhost :) exit
    Bye.
    [root@localhost init.d]# clickhouse-client --host 192.168.1.136 --port=9000
    ClickHouse client version 20.8.3.18.
    Connecting to 192.168.1.136:9000 as user default.
    Connected to ClickHouse server version 20.8.3 revision 54438.

    localhost :) select * from system.clusters;

    SELECT *
    FROM system.clusters

    ┌─cluster───────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name─────┬─host_address──┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─estimated_recovery_time─┐
    │ perftest_3shards_1replicas        │         1 │            1 │           1 │ 192.168.1.118 │ 192.168.1.118 │ 9000 │        0 │ default │                  │            0 │                       0 │
    │ perftest_3shards_1replicas        │         2 │            1 │           1 │ 192.168.1.135 │ 192.168.1.135 │ 9000 │        0 │ default │                  │            0 │                       0 │
    │ perftest_3shards_1replicas        │         3 │            1 │           1 │ 192.168.1.136 │ 192.168.1.136 │ 9000 │        1 │ default │                  │            0 │                       0 │
    │ test_cluster_two_shards           │         1 │            1 │           1 │ 127.0.0.1     │ 127.0.0.1     │ 9000 │        1 │ default │                  │            0 │                       0 │
    │ test_cluster_two_shards           │         2 │            1 │           1 │ 127.0.0.2     │ 127.0.0.2     │ 9000 │        0 │ default │                  │            0 │                       0 │
    │ test_cluster_two_shards_localhost │         1 │            1 │           1 │ localhost     │ ::1           │ 9000 │        1 │ default │                  │            0 │                       0 │
    │ test_cluster_two_shards_localhost │         2 │            1 │           1 │ localhost     │ ::1           │ 9000 │        1 │ default │                  │            0 │                       0 │
    │ test_shard_localhost              │         1 │            1 │           1 │ localhost     │ ::1           │ 9000 │        1 │ default │                  │            0 │                       0 │
    │ test_shard_localhost_secure       │         1 │            1 │           1 │ localhost     │ ::1           │ 9440 │        0 │ default │                  │            0 │                       0 │
    │ test_unavailable_shard            │         1 │            1 │           1 │ localhost     │ ::1           │ 9000 │        1 │ default │                  │            0 │                       0 │
    │ test_unavailable_shard            │         2 │            1 │           1 │ localhost     │ ::1           │    1 │        0 │ default │                  │            0 │                       0 │
    └───────────────────────────────────┴───────────┴──────────────┴─────────────┴───────────────┴───────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────────────┘

    11 rows in set. Elapsed: 0.004 sec.

     

    4. Data verification

    Create a database:
    Log on to one of the nodes (here we use 192.168.1.136) and run:
    [root@localhost init.d]# clickhouse-client --host localhost -m

    CREATE DATABASE db_test ON CLUSTER perftest_3shards_1replicas;

    Create the local table:
    -- ON CLUSTER creates it on every node; if a node is missing the table, create it there manually
    use db_test;
    CREATE TABLE user_local ON CLUSTER perftest_3shards_1replicas
    (
        id Int32,
        name String
    ) ENGINE=TinyLog;

    Create the distributed table. Distributed takes (cluster, database, local table, sharding key); reads fan out to user_local on every shard and writes are routed by id:
    CREATE TABLE user_all ON CLUSTER perftest_3shards_1replicas
    (
        id Int32,
        name String
    ) ENGINE = Distributed(perftest_3shards_1replicas, db_test, user_local, id);

    Write some rows into the local tables:
    On 192.168.1.118:
    insert into user_local(id, name) values (1, 'zhangsan');
    insert into user_local(id, name) values (2, 'lisi');

    On 192.168.1.135:
    insert into user_local(id, name) values (3, 'wangm');
    insert into user_local(id, name) values (4, 'lijing');

    On 192.168.1.136:
    insert into user_local(id, name) values (5, 'zhangquan');
    insert into user_local(id, name) values (6, 'lihua');


    Query the distributed table (from any node):
    localhost :) select * from user_all order by id;

    SELECT *
    FROM user_all
    ORDER BY id ASC

    ┌─id─┬─name─────┐
    │  1 │ zhangsan │
    │  2 │ lisi     │
    └────┴──────────┘
    ┌─id─┬─name───┐
    │  3 │ wangm  │
    │  4 │ lijing │
    └────┴────────┘
    ┌─id─┬─name──────┐
    │  5 │ zhangquan │
    │  6 │ lihua     │
    └────┴───────────┘

    Write through the distributed table:
    insert into user_all(id, name) values (7, 'wangmingquan');

    The new row is routed to a shard by the sharding key: with three shards of equal weight, id % 3 = 7 % 3 = 1 selects the second shard, so the row lands in the local table on 192.168.1.135:
    localhost :) select * from user_local;

    SELECT *
    FROM user_local

    ┌─id─┬─name─────────┐
    │  3 │ wangm        │
    │  4 │ lijing       │
    │  7 │ wangmingquan │
    └────┴──────────────┘

     

    5. Table DDL example

    CREATE TABLE metric_local ON CLUSTER perftest_3shards_1replicas
    (
       app String,
       block_qps Int64,
       count Int64,
       exception_qps Int64,
       gmt_create DateTime DEFAULT now(),
       gmt_modified DateTime DEFAULT now(),
       id Int64,
       machine_ip Nullable(String),
       pass_qps Int64,
       resource String,
       resource_code Int64,
       rt Float64,
       success_qps Int64,
       timestamp DateTime DEFAULT now(),
       day Date DEFAULT now()
    )
    engine = MergeTree
    partition by day
    order by id
    settings index_granularity = 8192;

    insert into metric_local(app,block_qps,count,exception_qps,id,machine_ip,pass_qps,resource,resource_code,rt,success_qps) values('1',1,1,1,1,'1',1,'1',1,1,1);

    CREATE TABLE metric_all ON CLUSTER perftest_3shards_1replicas
    (
       app String,
       block_qps Int64,
       count Int64,
       exception_qps Int64,
       gmt_create DateTime DEFAULT now(),
       gmt_modified DateTime DEFAULT now(),
       id Int64,
       machine_ip Nullable(String),
       pass_qps Int64,
       resource String,
       resource_code Int64,
       rt Float64,
       success_qps Int64,
       timestamp DateTime DEFAULT now(),
       day Date DEFAULT now()
    ) ENGINE = Distributed(perftest_3shards_1replicas, db_test, metric_local, id);
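    As with user_all, writes through metric_all are routed by id and reads fan out across the shards. A quick smoke test (illustrative values only):

    insert into metric_all(app,block_qps,count,exception_qps,id,machine_ip,pass_qps,resource,resource_code,rt,success_qps) values('1',1,1,1,2,'1',1,'1',1,1,1);

    select app, sum(pass_qps) as pass_qps, sum(block_qps) as block_qps from metric_all group by app;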

     

    6. Connecting with a client tool

    You can connect with the DBeaver client; pick the ClickHouse driver and use a JDBC URL such as:

    jdbc:clickhouse://192.168.1.118:8123
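    Port 8123 is ClickHouse's HTTP interface, so the endpoint can also be sanity-checked from the shell before configuring DBeaver (append user/password parameters once step 7 below is applied):

    curl 'http://192.168.1.118:8123/' -d 'SELECT version()'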

     

    7. Set a password for the default user

    Generate the password hash. The plaintext password here is 123456; its SHA-256 digest is:

    [root@elastic1 clickhouse-server]# echo -n 123456 | openssl dgst -sha256
    (stdin)= 8d969eef6ecad3c29a3a629280e686cf0c3f5d5a86aff3ca12020c923adc6c92

     

    Edit the configuration file:

    /etc/clickhouse-server/users.xml

    Inside <users><default></default></users>, add the following element (shown in context in the listing below):

    <password_sha256_hex>8d969eef6ecad3c29a3a629280e686cf0c3f5d5a86aff3ca12020c923adc6c92</password_sha256_hex>

     

    [root@localhost clickhouse-server]# more users.xml 
    <?xml version="1.0"?>
    <yandex>
        <!-- Profiles of settings. -->
        <profiles>
            <!-- Default settings. -->
            <default>
                <!-- Maximum memory usage for processing single query, in bytes. -->
                <max_memory_usage>10000000000</max_memory_usage>
    
                <!-- Use cache of uncompressed blocks of data. Meaningful only for processing many of very short queries. -->
                <use_uncompressed_cache>0</use_uncompressed_cache>
    
                <!-- How to choose between replicas during distributed query processing.
                     random - choose random replica from set of replicas with minimum number of errors
                     nearest_hostname - from set of replicas with minimum number of errors, choose replica
                      with minimum number of different symbols between replica's hostname and local hostname
                      (Hamming distance).
                     in_order - first live replica is chosen in specified order.
                     first_or_random - if first replica one has higher number of errors, pick a random one from replicas with minimum number of errors.
                -->
                <load_balancing>random</load_balancing>
            </default>
    
            <!-- Profile that allows only read queries. -->
            <readonly>
                <readonly>1</readonly>
            </readonly>
        </profiles>
    
        <!-- Users and ACL. -->
        <users>
            <!-- If user name was not specified, 'default' user is used. -->
            <default>
                <!-- Password could be specified in plaintext or in SHA256 (in hex format).
    
                     If you want to specify password in plaintext (not recommended), place it in 'password' element.
                     Example: <password>qwerty</password>.
                     Password could be empty.
    
                     If you want to specify SHA256, place it in 'password_sha256_hex' element.
                     Example: <password_sha256_hex>65e84be33532fb784c48129675f9eff3a682b27168c0ea744b2cf58ee02337c5</password_sha256_hex>
                     Restrictions of SHA256: impossibility to connect to ClickHouse using MySQL JS client (as of July 2019).
    
                     If you want to specify double SHA1, place it in 'password_double_sha1_hex' element.
                     Example: <password_double_sha1_hex>e395796d6546b1b65db9d665cd43f0e858dd4303</password_double_sha1_hex>
    
                     If you want to specify a previously defined LDAP server (see 'ldap_servers' in main config) for authentication, place its name in 'server' element inside 'ldap' element.
                     Example: <ldap><server>my_ldap_server</server></ldap>
    
                     How to generate decent password:
                     Execute: PASSWORD=$(base64 < /dev/urandom | head -c8); echo "$PASSWORD"; echo -n "$PASSWORD" | sha256sum | tr -d '-'
                     In first line will be password and in second - corresponding SHA256.
    
                     How to generate double SHA1:
                     Execute: PASSWORD=$(base64 < /dev/urandom | head -c8); echo "$PASSWORD"; echo -n "$PASSWORD" | sha1sum | tr -d '-' | xxd -r -p | sha1sum | tr -d '-'
                     In first line will be password and in second - corresponding double SHA1.
                -->
                <password_sha256_hex>8d969eef6ecad3c29a3a629280e686cf0c3f5d5a86aff3ca12020c923adc6c92</password_sha256_hex>
                <!-- List of networks with open access.
    
                     To open access from everywhere, specify:
                        <ip>::/0</ip>
    
                     To open access only from localhost, specify:
                        <ip>::1</ip>
                        <ip>127.0.0.1</ip>
    
                     Each element of list has one of the following forms:
                     <ip> IP-address or network mask. Examples: 213.180.204.3 or 10.0.0.1/8 or 10.0.0.1/255.255.255.0
                         2a02:6b8::3 or 2a02:6b8::3/64 or 2a02:6b8::3/ffff:ffff:ffff:ffff::.
                     <host> Hostname. Example: server01.yandex.ru.
                         To check access, DNS query is performed, and all received addresses compared to peer address.
                     <host_regexp> Regular expression for host names. Example, ^serverdd-dd-d.yandex.ru$
                         To check access, DNS PTR query is performed for peer address and then regexp is applied.
                         Then, for result of PTR query, another DNS query is performed and all received addresses compared to peer address.
                         Strongly recommended that regexp is ends with $
                     All results of DNS requests are cached till server restart.
                -->
                <networks incl="networks" replace="replace">
                    <ip>::/0</ip>
                </networks>
    
                <!-- Settings profile for user. -->
                <profile>default</profile>
    
                <!-- Quota for user. -->
                <quota>default</quota>
    
                <!-- User can create other users and grant rights to them. -->
                <!-- <access_management>1</access_management> -->
             </default>
        </users>
    
        <!-- Quotas. -->
        <quotas>
            <!-- Name of quota. -->
            <default>
                <!-- Limits for time interval. You could specify many intervals with different limits. -->
                <interval>
                    <!-- Length of interval. -->
                    <duration>3600</duration>
    
                    <!-- No limits. Just calculate resource usage for time interval. -->
                    <queries>0</queries>
                    <errors>0</errors>
                    <result_rows>0</result_rows>
                    <read_rows>0</read_rows>
                    <execution_time>0</execution_time>
                </interval>
            </default>
        </quotas>
    </yandex>

    Then restart:

    /etc/init.d/clickhouse-server stop
    /etc/init.d/clickhouse-server start

     

    Log in with the password:

    [root@localhost clickhouse-server]# clickhouse-client -h 192.168.1.118 -m -u default --password 123456
    ClickHouse client version 20.8.3.18.
    Connecting to 192.168.1.118:9000 as user default.
    Connected to ClickHouse server version 20.8.3 revision 54438.

    8. Add a user

    [root@localhost clickhouse-server]# more users.xml

    The file is identical to the listing in step 7 except for a new <dbaadmin> user added inside <users>, right after </default>:

      <dbaadmin>
        <password>123456</password>
        <access_management>1</access_management>
        <networks incl="networks" replace="replace">
                <ip>::/0</ip>
        </networks>
        <profile>default</profile>
        <quota>default</quota>
        <allow_databases>
                <database>db_test</database>
                <database>db_hxl</database>
        </allow_databases>
      </dbaadmin>

    The <dbaadmin> block is the newly added user; its <allow_databases> element limits which databases the user may access.
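    A quick check of the new account after restarting the servers (db_hxl need not exist yet; it is simply pre-authorized here):

    [root@localhost clickhouse-server]# clickhouse-client -h 192.168.1.118 -u dbaadmin --password 123456 -q "show databases"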

     

  • Original article: https://www.cnblogs.com/hxlasky/p/14666616.html