  • Hadoop mapred-queue-acls configuration (repost)

    When submitting a Hadoop job, you can target a specific queue, e.g. -Dmapred.job.queue.name=queue2.
    By configuring mapred-queue-acls.xml and mapred-site.xml, you can grant different users submit permission on different queues.
    First edit mapred-site.xml and define the queues (here, four queues in addition to default):

    <property> 
      <name>mapred.queue.names</name> 
      <value>default,queue1,queue2,queue3,queue4</value> 
    </property>

    Once the change takes effect, the configured queues are visible in the JobTracker web UI.
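    For a plain MapReduce job, the queue is chosen the same way on the command line. A minimal sketch, where the example jar name and input/output paths are placeholders:

    ```shell
    # Submit the bundled wordcount example to queue2 (jar name and paths are illustrative).
    hadoop jar hadoop-examples.jar wordcount \
      -Dmapred.job.queue.name=queue2 \
      /user/hadoop/input /user/hadoop/output
    ```

    The -D generic option must come after the program name so that ToolRunner picks it up.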

    To control access to a queue, you also need to edit mapred-queue-acls.xml:

    <property> 
      <name>mapred.queue.queue1.acl-submit-job</name> 
      <value> </value> 
      <description> Comma separated list of user and group names that are allowed 
       to submit jobs to the 'default' queue. The user list and the group list 
       are separated by a blank. For e.g. user1,user2 group1,group2. 
       If set to the special value '*', it means all users are allowed to 
       submit jobs. If set to ' '(i.e. space), no user will be allowed to submit 
       jobs. 
     
       It is only used if authorization is enabled in Map/Reduce by setting the 
       configuration property mapred.acls.enabled to true. 
       Irrespective of this ACL configuration, the user who started the cluster and 
       cluster administrators configured via 
       mapreduce.cluster.administrators can submit jobs. 
      </description> 
    </property> 
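    By contrast, a queue that admits only named users or groups uses the comma-and-space format from the description above. The user and group names below are hypothetical:

    ```xml
    <!-- Hypothetical example: only user1, user2, and members of group1 may submit to queue2. -->
    <property>
      <name>mapred.queue.queue2.acl-submit-job</name>
      <value>user1,user2 group1</value>
    </property>
    ```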

     To configure multiple queues, repeat the property above with the queue name and value changed. To make testing easy, queue1 is configured to reject submissions from all users.
     For the ACLs to take effect, you must also set mapred.acls.enabled to true in mapred-site.xml:

    <property> 
      <name>mapred.acls.enabled</name> 
      <value>true</value> 
    </property> 

     Restart Hadoop to apply the configuration, then test with Hive.

    First, submit through queue2:

    hive> set mapred.job.queue.name=queue2; 
    hive> select count(*) from t_aa_pc_log; 
    Total MapReduce jobs = 1 
    Launching Job 1 out of 1 
    Number of reduce tasks determined at compile time: 1 
    In order to change the average load for a reducer (in bytes): 
      set hive.exec.reducers.bytes.per.reducer=<number> 
    In order to limit the maximum number of reducers: 
      set hive.exec.reducers.max=<number> 
    In order to set a constant number of reducers: 
      set mapred.reduce.tasks=<number> 
    Starting Job = job_201205211843_0002, Tracking URL = http://192.168.189.128:50030/jobdetails.jsp?jobid=job_201205211843_0002 
    Kill Command = /opt/app/hadoop-0.20.2-cdh3u3/bin/hadoop job  -Dmapred.job.tracker=192.168.189.128:9020 -kill job_201205211843_0002 
    2012-05-21 18:45:01,593 Stage-1 map = 0%,  reduce = 0% 
    2012-05-21 18:45:04,613 Stage-1 map = 100%,  reduce = 0% 
    2012-05-21 18:45:12,695 Stage-1 map = 100%,  reduce = 100% 
    Ended Job = job_201205211843_0002 
    OK 
    136003 
    Time taken: 14.674 seconds 
    hive>  

    The job completed successfully.

    Now submit a job to queue1:

    hive> set mapred.job.queue.name=queue1; 
    hive> select count(*) from t_aa_pc_log; 
    Total MapReduce jobs = 1 
    Launching Job 1 out of 1 
    Number of reduce tasks determined at compile time: 1 
    In order to change the average load for a reducer (in bytes): 
      set hive.exec.reducers.bytes.per.reducer=<number> 
    In order to limit the maximum number of reducers: 
      set hive.exec.reducers.max=<number> 
    In order to set a constant number of reducers: 
      set mapred.reduce.tasks=<number> 
    org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.security.AccessControlException: User p_sdo_data_01 cannot perform operation SUBMIT_JOB on queue queue1. 
     Please run "hadoop queue -showacls" command to find the queues you have access to . 
        at org.apache.hadoop.mapred.ACLsManager.checkAccess(ACLsManager.java:179) 
        at org.apache.hadoop.mapred.ACLsManager.checkAccess(ACLsManager.java:136) 
        at org.apache.hadoop.mapred.ACLsManager.checkAccess(ACLsManager.java:113) 
        at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:3781) 
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) 
        at java.lang.reflect.Method.invoke(Method.java:597) 
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557) 
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434) 
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430) 
        at java.security.AccessController.doPrivileged(Native Method) 
        at javax.security.auth.Subject.doAs(Subject.java:396) 
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157) 
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428) 

    This time the submission is rejected, as intended.

    Finally, use the hadoop queue -showacls command to see which operations the current user may perform on each queue:

    [hadoop@localhost conf]$ hadoop queue -showacls 
    Queue acls for user :  hadoop 
     
    Queue  Operations 
    ===================== 
    queue1  administer-jobs 
    queue2  submit-job,administer-jobs 
    queue3  submit-job,administer-jobs 
    queue4  submit-job,administer-jobs 
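    To re-open queue1 later, the same ACL would be set back to '*' (all users) in mapred-queue-acls.xml and, as above, the change applied by restarting Hadoop. A sketch:

    ```xml
    <!-- Re-open queue1: '*' means any user may submit jobs. -->
    <property>
      <name>mapred.queue.queue1.acl-submit-job</name>
      <value>*</value>
    </property>
    ```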

    Reposted from http://yaoyinjie.blog.51cto.com/3189782/872294

  • Original (Chinese) post: https://www.cnblogs.com/ggjucheng/p/3352579.html