Today I tried setting up Kerberos on our Hadoop 2.x development cluster and ran into a few problems; notes below.
Set up Hadoop security
core-site.xml
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
hadoop.security.authentication defaults to simple, which just trusts the user name reported by the client's operating system; here we switch it to kerberos.
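The configs below all reference principals of the form hadoop/_HOST, host/_HOST and HTTP/_HOST stored in /etc/hadoop.keytab. As a reference point only (this post assumes the KDC and keytab already exist), on an MIT Kerberos KDC such a keytab could be produced roughly like this, with the realm and hostname matching this cluster:

# create one service principal per role, with random keys
kadmin.local -q "addprinc -randkey hadoop/dev80.hadoop@DIANPING.COM"
kadmin.local -q "addprinc -randkey host/dev80.hadoop@DIANPING.COM"
kadmin.local -q "addprinc -randkey HTTP/dev80.hadoop@DIANPING.COM"
# export all three into the single keytab the daemons will read
kadmin.local -q "ktadd -k /etc/hadoop.keytab hadoop/dev80.hadoop host/dev80.hadoop HTTP/dev80.hadoop"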
Set up HDFS security
hdfs-site.xml
<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.https.enable</name>
  <value>false</value>
</property>
<property>
  <name>dfs.namenode.https-address</name>
  <value>dev80.hadoop:50470</value>
</property>
<property>
  <name>dfs.https.port</name>
  <value>50470</value>
</property>
<property>
  <name>dfs.namenode.keytab.file</name>
  <value>/etc/hadoop.keytab</value>
</property>
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>hadoop/_HOST@DIANPING.COM</value>
</property>
<property>
  <name>dfs.namenode.kerberos.https.principal</name>
  <value>host/_HOST@DIANPING.COM</value>
</property>
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>dev80.hadoop:50090</value>
</property>
<property>
  <name>dfs.namenode.secondary.https-port</name>
  <value>50470</value>
</property>
<property>
  <name>dfs.namenode.secondary.keytab.file</name>
  <value>/etc/hadoop.keytab</value>
</property>
<property>
  <name>dfs.namenode.secondary.kerberos.principal</name>
  <value>hadoop/_HOST@DIANPING.COM</value>
</property>
<property>
  <name>dfs.namenode.secondary.kerberos.https.principal</name>
  <value>host/_HOST@DIANPING.COM</value>
</property>
<property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>700</value>
</property>
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:1003</value>
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:1007</value>
</property>
<property>
  <name>dfs.datanode.https.address</name>
  <value>0.0.0.0:1005</value>
</property>
<property>
  <name>dfs.datanode.keytab.file</name>
  <value>/etc/hadoop.keytab</value>
</property>
<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>hadoop/_HOST@DIANPING.COM</value>
</property>
<property>
  <name>dfs.datanode.kerberos.https.principal</name>
  <value>host/_HOST@DIANPING.COM</value>
</property>
<property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@DIANPING.COM</value>
</property>
<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/etc/hadoop.keytab</value>
  <description>
    The Kerberos keytab file with the credentials for the HTTP Kerberos principal used by Hadoop-Auth in the HTTP endpoint.
  </description>
</property>

dfs.datanode.address is the hostname or IP and port that the data transceiver RPC server binds to. With security enabled, the port must be below 1024 (a privileged port); otherwise the datanode fails at startup with "Cannot start secure cluster without privileged resources".
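Before restarting, a quick way to double-check which address the datanode will actually bind (assuming the hdfs CLI from this install is on the PATH) is hdfs getconf:

[hadoop@dev80 hadoop]$ hdfs getconf -confKey dfs.datanode.address
0.0.0.0:1003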
The namenode and secondary namenode are both started as the hadoop user.
The datanode must be started as root via jsvc, but the jsvc that ships with Hadoop 2.x is a 32-bit binary, so it has to be fetched from the jsvc site and rebuilt:
1. wget http://mirror.esocc.com/apache//commons/daemon/binaries/commons-daemon-1.0.15-bin.tar.gz
2. cd src/native/unix; ./configure --with-java=$JAVA_HOME; make
This produces a 64-bit jsvc executable; copy it to $HADOOP_HOME/libexec.
[hadoop@dev80 unix]$ file jsvc
jsvc: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), dynamically linked (uses shared libs), for GNU/Linux 2.6.18, not stripped
3. mvn package
This builds commons-daemon-1.0.15.jar; copy it into $HADOOP_HOME/share/hadoop/hdfs/lib and remove the commons-daemon jar that Hadoop ships with, as sketched below.
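A minimal sketch of the swap; the version of the bundled jar may differ (hence the glob), and the build-output path is illustrative:

cd $HADOOP_HOME/share/hadoop/hdfs/lib
# drop the commons-daemon jar Hadoop shipped with, whatever its version
rm -f commons-daemon-*.jar
# copy in the jar we just built from the commons-daemon source tree
cp /path/to/commons-daemon-1.0.15/target/commons-daemon-1.0.15.jar .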
Changes in hadoop-env.sh:
# The jsvc implementation to use. Jsvc is required to run secure datanodes.
export JSVC_HOME=/usr/local/hadoop/hadoop-2.1.0-beta/libexec

# On secure datanodes, user to run the datanode as after dropping privileges
export HADOOP_SECURE_DN_USER=hadoop

# The directory where pid files are stored. /tmp by default
export HADOOP_SECURE_DN_PID_DIR=/usr/local/hadoop

# Where log files are stored in the secure data environment.
export HADOOP_SECURE_DN_LOG_DIR=/data/logs
Distribute the configuration and jars to the whole cluster.
Start the namenode as the hadoop user, then switch to root and start the datanode; the namenode web UI now shows "Security is ON".
Set up YARN security
yarn-site.xml
<property>
  <name>yarn.resourcemanager.keytab</name>
  <value>/etc/hadoop.keytab</value>
</property>
<property>
  <name>yarn.resourcemanager.principal</name>
  <value>hadoop/_HOST@DIANPING.COM</value>
</property>
<property>
  <name>yarn.nodemanager.keytab</name>
  <value>/etc/hadoop.keytab</value>
</property>
<property>
  <name>yarn.nodemanager.principal</name>
  <value>hadoop/_HOST@DIANPING.COM</value>
</property>
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.group</name>
  <value>hadoop</value>
</property>
The default is DefaultContainerExecutor, which launches containers as the user the NodeManager runs as. Switching to LinuxContainerExecutor launches them as the user who submitted the application, using a setuid executable to launch and kill containers.
That executable lives at bin/container-executor, but here too Hadoop ships a 32-bit build, so it needs to be recompiled.
Download the Hadoop 2.x source code and run:
mvn package -Pdist,native -DskipTests -Dtar -Dcontainer-executor.conf.dir=/etc
Note: container-executor.conf.dir must be given explicitly. It is the directory where the setuid executable looks for its configuration file, by default $HADOOP_HOME/etc/hadoop. But that file's parent directory and every directory above it must be owned by root, or you get the error below, so the simplest fix is to point it at /etc.
Caused by: org.apache.hadoop.util.Shell$ExitCodeException: File /usr/local/hadoop/hadoop-2.1.0-beta/etc/hadoop must be owned by root, but is owned by 500
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:458)
        at org.apache.hadoop.util.Shell.run(Shell.java:373)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:578)
        at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.init(LinuxContainerExecutor.java:147)
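To see at a glance which directory in the chain breaks the root-ownership rule, util-linux's namei can print the owner of every path component (assuming namei is installed; the path is the default conf dir from the error above):

# -o lists the owner of each component from / down to the file
namei -o /usr/local/hadoop/hadoop-2.1.0-beta/etc/hadoop/container-executor.cfg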
The default configuration lookup path:
[root@dev80 bin]# strings container-executor | grep etc
../etc/hadoop/container-executor.cfg
which shows it loads $HADOOP_HOME/etc/hadoop/container-executor.cfg by default.
After rebuilding with container-executor.conf.dir=/etc:
[hadoop@dev80 bin]$ strings container-executor | grep etc
/etc/container-executor.cfg
In container-executor.cfg, set:

yarn.nodemanager.linux-container-executor.group=hadoop
min.user.id=499

min.user.id is the smallest uid allowed to launch containers; regular accounts on this box start at uid 500 (the hadoop user is 500), so 499 admits them while still shutting out system accounts.
Copy container-executor to $HADOOP_HOME/bin, then fix ownership and permissions:
chown root:hadoop container-executor /etc/container-executor.cfg
chmod 4750 container-executor
chmod 400 /etc/container-executor.cfg
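With the modes above in place, the two files should end up looking like this (sizes and dates elided):

[root@dev80 bin]# ls -l container-executor /etc/container-executor.cfg
-rwsr-x--- 1 root hadoop ... container-executor
-r-------- 1 root hadoop ... /etc/container-executor.cfg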
Sync the config files across the cluster and start the ResourceManager and NodeManagers as the hadoop user.
Set up JobHistory Server security
mapred-site.xml
<property>
  <name>mapreduce.jobhistory.keytab</name>
  <value>/etc/hadoop.keytab</value>
</property>
<property>
  <name>mapreduce.jobhistory.principal</name>
  <value>hadoop/_HOST@DIANPING.COM</value>
</property>
Start the JobHistoryServer:
sbin/mr-jobhistory-daemon.sh start historyserver
Run kinit to obtain a TGT (ticket granting ticket):
[hadoop@dev80 hadoop]$ kinit -r 24l -k -t /home/hadoop/.keytab hadoop
[hadoop@dev80 hadoop]$ klist
Ticket cache: FILE:/tmp/krb5cc_500
Default principal: hadoop@DIANPING.COM

Valid starting     Expires            Service principal
09/11/13 15:25:34  09/12/13 15:25:34  krbtgt/DIANPING.COM@DIANPING.COM
        renew until 09/12/13 15:25:34

Here /tmp/krb5cc_500 is the ticket cache file; 500 is the hadoop account's uid, and this is the path read by default.
You can also point at a different ticket cache path by setting export KRB5CCNAME=/tmp/krb5cc_500.
Once you are done, kdestroy discards the ticket cache.
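Putting it together, a minimal smoke test of the secured cluster looks like this (same keytab and principal as above):

# obtain a TGT from the keytab
kinit -k -t /home/hadoop/.keytab hadoop
# any HDFS operation now authenticates over Kerberos
hadoop fs -ls /
# throw the credentials away when done
kdestroy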
If there is no local ticket cache, you get an error like:
13/09/11 16:21:35 ERROR security.UserGroupInformation: PriviledgedActionException as:hadoop (auth:KERBEROS) cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
For reference, the principals in the keytab (each principal appears once per encryption type):
[hadoop@dev80 hadoop]$ klist -k -t /etc/hadoop.keytab
Keytab name: WRFILE:/etc/hadoop.keytab
KVNO Timestamp         Principal
---- ----------------- --------------------------------------------------------
   1 06/17/12 22:01:24 hadoop/dev80.hadoop@DIANPING.COM
   1 06/17/12 22:01:24 hadoop/dev80.hadoop@DIANPING.COM
   1 06/17/12 22:01:24 hadoop/dev80.hadoop@DIANPING.COM
   1 06/17/12 22:01:24 hadoop/dev80.hadoop@DIANPING.COM
   1 06/17/12 22:01:24 hadoop/dev80.hadoop@DIANPING.COM
   1 06/17/12 22:01:24 hadoop/dev80.hadoop@DIANPING.COM
   1 06/17/12 22:01:24 host/dev80.hadoop@DIANPING.COM
   1 06/17/12 22:01:24 host/dev80.hadoop@DIANPING.COM
   1 06/17/12 22:01:24 host/dev80.hadoop@DIANPING.COM
   1 06/17/12 22:01:24 host/dev80.hadoop@DIANPING.COM
   1 06/17/12 22:01:24 host/dev80.hadoop@DIANPING.COM
   1 06/17/12 22:01:24 host/dev80.hadoop@DIANPING.COM
   1 06/17/12 22:01:24 HTTP/dev80.hadoop@DIANPING.COM
   1 06/17/12 22:01:24 HTTP/dev80.hadoop@DIANPING.COM
   1 06/17/12 22:01:24 HTTP/dev80.hadoop@DIANPING.COM
   1 06/17/12 22:01:24 HTTP/dev80.hadoop@DIANPING.COM
   1 06/17/12 22:01:24 HTTP/dev80.hadoop@DIANPING.COM
   1 06/17/12 22:01:24 HTTP/dev80.hadoop@DIANPING.COM