Ranger Kafka

    Authorizing Kafka access over non-authenticated channel via Ranger

    This section answers some questions one is likely to encounter when trying to authorize access to Kafka over a non-authenticated channel. This Kafka feature is available in HDP releases 2.3.4 (Dal-M20) or later.

    Can I authorize access to Kafka over a non-secure channel via Ranger?

    Yes, you can control access by IP address.

    Can I authorize access to Kafka over a non-secure channel by users or user groups?

    No, one can't use user/group-based access to authorize Kafka access over a non-secure channel. This is because it isn't possible to assert the client's identity over a non-secure channel.

    What is the recommended way to set up policies when trying to control access to Kafka over a non-secure channel?

    Ensure that all broker nodes have Kafka Admin access. This is a mandatory step; if you don't perform it, your cluster won't work properly.

    • Identify the nodes where brokers are running.

    • Create a policy where the resource is * (i.e. all topics) and grant the Kafka Admin access type to the public user group. Specify the IP addresses of all the brokers as the ip-range policy condition on the policy item. (If you prefer to script this, a REST API sketch follows the screenshot below.)

    [Screenshot: Ranger Kafka policy on topic * granting Kafka Admin to the public group, with the broker IPs as the ip-range condition]
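
    The same policy can be created through the Ranger Admin REST API instead of the UI. The sketch below is illustrative only: the Ranger host, credentials, Kafka service name (cl1_kafka) and broker IPs are placeholders, and the resource, access-type and condition names (topic, kafka_admin, ip-range) assume the stock Ranger Kafka service definition.

        # Minimal sketch: create the broker-admin policy via the Ranger public REST API.
        # Host, credentials, service name and broker IPs below are placeholders.
        curl -u admin:admin -H "Content-Type: application/json" \
          -X POST http://ranger.example.com:6080/service/public/v2/api/policy \
          -d '{
            "service": "cl1_kafka",
            "name": "kafka-broker-admin",
            "resources": { "topic": { "values": ["*"] } },
            "policyItems": [{
              "accesses": [{ "type": "kafka_admin", "isAllowed": true }],
              "groups": ["public"],
              "conditions": [{ "type": "ip-range", "values": ["10.0.0.11", "10.0.0.12", "10.0.0.13"] }]
            }]
          }'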

    Ensure publishers have appropriate access.

    • Identify the IP addresses of all nodes where publishers will run, along with their respective topics.

    • Create a policy where the resources are the respective topic names and grant the Publish access type to the public user group. Specify the IP addresses of the machines where those publishers will run as the ip-range policy condition on the policy item.

    • Specify the topic name(s) as the policy resource. Note that you can specify multiple topics or even regular expressions in topic names. (A quick way to verify publish access is shown after the screenshot below.)

    [Screenshot: Ranger Kafka policy on the publisher topics granting Publish to the public group, with the publisher IPs as the ip-range condition]
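
    Once the publish policy is in place, you can verify it from one of the whitelisted publisher machines with the console producer that ships with Kafka. The broker host and port (6667 is the usual HDP plaintext listener port) and the topic name below are placeholders; the default security protocol is PLAINTEXT, so no extra flag is needed.

        # Hypothetical publish test from a whitelisted publisher host; type a few
        # messages and check that they are accepted rather than rejected.
        bin/kafka-console-producer.sh --broker-list broker1.example.com:6667 --topic test_topic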

    Ensure consumers have appropriate access. The process is the same as for publishers, except the access type is Consume instead of Publish.

    [Screenshot: Ranger Kafka policy granting Consume to the public group, with the consumer IPs as the ip-range condition]

    Why do we have to specify the public user group on all policy items created for authorizing Kafka access over a non-secure channel?

    • Kafka can't assert the identity of the client user over a non-secure channel. Thus, Kafka treats all such access as coming from an anonymous user (a special user literally named ANONYMOUS).

    • Ranger's public user group is a means to model all users, which, of course, includes this anonymous user (ANONYMOUS).

    What are the specific things to watch out for when setting up authorization for accessing Kafka over a non-secure channel?

    • Make sure that all broker IPs have Kafka Admin access to all topics, i.e. *.
    • Make sure no publishers or consumers that need access control are running on the broker nodes. Since the broker IPs have open access, it isn't possible to control access from those nodes.

    I have the policies as specified above; however, I am still not able to consume over a non-authenticated channel using the bin/kafka-console-consumer.sh script that is part of the Kafka distribution! The consumer hangs and gives the error message “No brokers found in ZK.” What gives?

    • Ensure that /etc/kafka/conf/kafka_client_jaas.conf does not have a specification for serviceName="zookeeper". This is typically in the Client section.
    • Ensure that you are not specifying the --security-protocol PLAINTEXTSASL argument to the consumer. Either specify --security-protocol PLAINTEXT or leave --security-protocol unspecified, since its default value is PLAINTEXT. (A working invocation is sketched below.)
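
    A working invocation looks like the following sketch; the ZooKeeper host and topic name are placeholders, and --security-protocol can be omitted entirely since PLAINTEXT is the default.

        # Hypothetical console-consumer run over the non-authenticated channel.
        # Do NOT pass --security-protocol PLAINTEXTSASL here.
        bin/kafka-console-consumer.sh --zookeeper zk1.example.com:2181 \
          --topic test_topic --from-beginning --security-protocol PLAINTEXT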

    I can't edit the /etc/kafka/conf/kafka_client_jaas.conf file! What should I do to consume Kafka messages over a non-authenticated channel?

    • In that case, just do a kinit with a valid password (or keytab) to obtain a ticket.
    • That ticket will be used to authenticate you to ZooKeeper. After that you should be able to consume messages from Kafka over the non-authenticated channel. The connection to the Kafka brokers still happens over the non-authenticated channel and should get authorized as user ANONYMOUS. (See the sketch below.)
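
    A minimal sketch of this workaround; the principal, ZooKeeper host and topic name are placeholders.

        # Get a Kerberos ticket first (used only for the ZooKeeper connection), then
        # consume as usual; the broker connection stays PLAINTEXT and is authorized
        # as the ANONYMOUS user.
        kinit your_user@EXAMPLE.COM
        bin/kafka-console-consumer.sh --zookeeper zk1.example.com:2181 \
          --topic test_topic --from-beginning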

    Why do I need to edit the /etc/kafka/conf/kafka_client_jaas.conf file?

    The presence of a Client block for service zookeeper in /etc/kafka/conf/kafka_client_jaas.conf causes the console consumer to connect to ZooKeeper in secure mode. To do so it needs a Kerberos ticket, which won't exist in simple-auth mode, so it fails.
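
    For reference, the Client block in question typically looks something like the sketch below on an HDP install; the exact login-module options may differ on your cluster, so treat this as an illustration of what to look for rather than the canonical contents of the file.

        // Illustrative Client section of /etc/kafka/conf/kafka_client_jaas.conf.
        // Its presence makes the console consumer attempt a SASL (Kerberos)
        // connection to ZooKeeper.
        Client {
           com.sun.security.auth.module.Krb5LoginModule required
           useTicketCache=true
           serviceName="zookeeper";
        };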

    Authorizing topic creation

    This section describes the issues one might encounter while trying to authorize topic creation in Kafka using Ranger.

    Can I authorize topic creation via Ranger?

    Yes, but only if the topic is being auto-created by consumers or producers.

    What is the recommended policy setup to authorize topic auto-creation for producers or consumers?

    • Create a policy where the resource is all topics, i.e. *.
    • For producers, create a policy item under this policy that grants both Publish and Configure permissions to the relevant users or user groups.

    • For consumers, create a policy item under this policy that grants both Consume and Configure permissions to the relevant users or user groups. (A scripted version of this policy is sketched below.)
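
    If you script your policies, the auto-creation policy can be created through the same Ranger REST API as in the earlier sketch. The service name, group names and access-type names (publish, consume, configure) are again placeholders and assume the stock Kafka service definition.

        # Sketch: one policy on topic "*" with separate items for producers and consumers.
        curl -u admin:admin -H "Content-Type: application/json" \
          -X POST http://ranger.example.com:6080/service/public/v2/api/policy \
          -d '{
            "service": "cl1_kafka",
            "name": "kafka-topic-autocreate",
            "resources": { "topic": { "values": ["*"] } },
            "policyItems": [
              { "accesses": [{ "type": "publish",   "isAllowed": true },
                             { "type": "configure", "isAllowed": true }],
                "groups": ["producer_group"] },
              { "accesses": [{ "type": "consume",   "isAllowed": true },
                             { "type": "configure", "isAllowed": true }],
                "groups": ["consumer_group"] }
            ]
          }'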

    Can I authorize topic auto-creation for producers or consumers that connect over a non-authenticated channel?

    • Yes, create a policy similar to the one described above for a secure producer or consumer.
    • Either add the public user group to the policy item or specify an ip-address-based custom condition.
    • Refer to the FAQ above about authorizing Kafka access over a non-authenticated channel for additional details and rationale.

    Why do I have to grant create access to all topics (via *) for auto-creation to work for producers and/or consumers?

    Topic creation is currently a cluster-level privilege. Thus it requires access privileges over all topics in a cluster, i.e. *.

    I want to allow topic auto-creation for any topic that starts with finance, e.g. finance_1, finance_2, etc., to users that are part of the Finance user group. But I don't want them to be able to auto-create topics that start with other strings, say, marketing_123. Can I model this sort of authorization in the Ranger Kafka plugin?

    • No, because in Kafka topic creation is currently a cluster-level permission, i.e. it applies to all topics.
    • There is a pending proposal about hierarchical topics in Kafka which, if and when it is implemented, could help with that use case.

    I am using the Kafka-supplied console consumer to test topic auto-creation by a consumer, but it is not working. Shouldn't the new topic get auto-created the moment I start up the consumer? I have verified the recommended policy setup as indicated above! What gives?

    Make sure that you specify the following two arguments to the console consumer (a full invocation is sketched after the list).

    • --new-consumer
    • --bootstrap-server <broker host:port>: any single broker would do.
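
    A hypothetical invocation putting these together; the broker host/port and topic name are placeholders.

        # Consumer-driven topic auto-creation test with the new consumer; without
        # --new-consumer and --bootstrap-server the consumer goes through ZooKeeper
        # and the topic is not auto-created.
        bin/kafka-console-consumer.sh --new-consumer \
          --bootstrap-server broker1.example.com:6667 --topic brand_new_topic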

    The most common way of creating a topic involves using the bin/kafka-topics.sh script that is part of the Kafka distribution. Can I authorize topic creation via that mechanism?

    No.

    Why can't I authorize topic creation done via the bin/kafka-topics.sh script!?

    • This script talks directly to ZooKeeper, so the policies of the Kafka plugin don't come into the picture.
    • The script adds entries to ZooKeeper nodes; watchers inside the brokers monitor them and create the topics.

    So what are my options to authorize topic creation via the bin/kafka-topics.sh script?

    • Since the script interacts directly with ZooKeeper, this is best controlled via ZooKeeper ACLs (a sketch follows).
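
    The following is a hedged sketch of what that can look like with the ZooKeeper CLI. The znode paths Kafka uses (/brokers/topics, /config/topics, /admin, under your configured chroot), the SASL identity "kafka", and the assumption that your ZooKeeper ensemble has SASL authentication enabled are all things to adapt to your cluster.

        # Restrict the znodes kafka-topics.sh writes to so that only the broker
        # principal may modify them, while everyone else gets read-only access.
        # Run the setAcl commands inside the ZooKeeper shell:
        bin/zkCli.sh -server zk1.example.com:2181
        setAcl /brokers/topics sasl:kafka:cdrwa,world:anyone:r
        setAcl /config/topics sasl:kafka:cdrwa,world:anyone:r
        setAcl /admin sasl:kafka:cdrwa,world:anyone:r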

    Is there a Ranger plugin for Zookeeper?

    Not yet.

    Where can I learn more about Kafka's support for publish/consume over a non-authenticated channel?

    Please refer to KAFKA-1809, which implemented the multiple-listeners design.
