Cloudera blog post: http://www.cloudera.com/blog/2011/07/hoop-hadoop-hdfs-over-http/
Hoop documentation: http://cloudera.github.com/hoop/docs/latest/index.html
1. Introduction to Hoop
What is Hoop?
Hoop is an improved rewrite of the Hadoop HDFS Proxy that provides an HTTP(S) interface to Hadoop HDFS. With Hoop you can:
- Access your HDFS cluster over the standard HTTP protocol
- Exchange data between clusters running different versions of HDFS (this avoids the RPC compatibility problems caused by version differences)
- Put access to HDFS behind a firewall. The Hoop Server can act as a gateway and be the only host allowed through the firewall into the cluster.
Components
Hoop consists of two parts, the Hoop Server and the Hoop Client:
- The Hoop Server is a REST HTTP service that lets you perform, over HTTP, all of the file system operations that HDFS supports.
- The Hoop Client is an implementation of the HDFS client interface; with it you can use the familiar HDFS API to operate on HDFS through Hoop.
Examples
Below are a few examples of accessing HDFS through Hoop with the standard curl command-line tool:
1. Get the home directory
$ curl -i "http://hoopbar:14000?op=homedir&user.name=babu" HTTP/1.1 200 OK Content-Type: application/json Transfer-Encoding: chunked {"homeDir":"http:\/\/hoopbar:14000\/user\/babu"} $
2. Read the contents of a file
$ curl -i "http://hoopbar:14000?/user/babu/hello.txt&user.name=babu" HTTP/1.1 200 OK Content-Type: application/octet-stream Transfer-Encoding: chunked Hello World! $
3. Write a file
$ curl -i -X POST "http://hoopbar:14000/user/babu/data.txt?op=create" --data-binary @mydata.txt --header "content-type: application/octet-stream"
HTTP/1.1 200 OK
Location: http://hoopbar:14000/user/babu/data.txt
Content-Type: application/json
Content-Length: 0
$
4. List the contents of a directory
$ curl -i "http://hoopbar:14000?/user/babu?op=list&user.name=babu" HTTP/1.1 200 OK Content-Type: application/json Transfer-Encoding: chunked [ { "path" : "http:\/\/hoopbar:14000\/user\/babu\/data.txt" "isDir" : false, "len" : 966, "owner" : "babu", "group" : "supergroup", "permission" : "-rw-r--r--", "accessTime" : 1310671662423, "modificationTime" : 1310671662423, "blockSize" : 67108864, "replication" : 3 } ] $
More operations are documented in the Hoop HTTP REST API: http://cloudera.github.com/hoop/docs/latest/HttpRestApi.html
Getting Hoop
Hoop is released under the Apache License 2.0. You can get the source code on GitHub (http://github.com/cloudera/hoop), the project documentation is at http://cloudera.github.com/hoop, and the server setup guide is at http://cloudera.github.com/hoop/docs/latest/ServerSetup.html.
2. Installation
The source at http://github.com/cloudera/hoop can be downloaded with git or as a zip archive; the latest version at the time of writing is cloudera-hoop-11bb221.zip.
Unpack it.
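A minimal sketch of fetching and unpacking the source (the clone URL and the unpacked directory name are assumptions; adjust to whatever GitHub actually gives you):
$ git clone git://github.com/cloudera/hoop.git
# or, using the zip download:
$ unzip cloudera-hoop-11bb221.zip
$ cd cloudera-hoop-*        # the unpacked directory name may vary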
3. Install Maven
See http://maven.apache.org/download.html for downloading and installing Maven.
Unpack it under /usr/local.
Set the following environment variables:
export M2_HOME=/usr/local/apache-maven-3.0.3
export M2=$M2_HOME/bin
export MAVEN_OPTS="-Xms256m -Xmx512m"    # JVM options for Maven
export PATH=$M2:$PATH                    # add Maven to PATH
export JAVA_HOME=/usr/java/jdk1.6.0_24/
Also make sure $JAVA_HOME/bin is on the PATH.
Run mvn -version to confirm the installation succeeded.
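Putting the last two steps together, a quick check that both the JDK and Maven are picked up might look like this:
$ export PATH=$JAVA_HOME/bin:$PATH
$ mvn -version        # should print the Maven version and the JDK it is using
$ java -version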
4. Install Hoop
Build the distribution with Maven (the flag skips the tests), then unpack the tarball it produces:
mvn clean package -Dmaven.test.skip=true site assembly:single
tar xzf hoop/hoop-distro/target/hoop-0.1.0-SNAPSHOT.tar.gz
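The rest of this guide works from inside the unpacked directory; the paths referenced below (bin/hoop.sh, conf/hoop-env.sh, conf/hoop-site.xml, tomcat/conf) are all relative to it:
$ cd hoop-0.1.0-SNAPSHOT
$ ls bin conf tomcat        # the directories used in the steps below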
Configure Hadoop
Edit Hadoop's core-site.xml and define the Unix user that will run the Hoop server as a proxyuser. For example:
... <property> <name>hadoop.proxyuser.#HOOPUSER#.hosts</name> <value>myhoop.foo.com</value> </property> <property> <name>hadoop.proxyuser.#HOOPUSER#.groups</name> <value>*</value> </property> ...
IMPORTANT: Replace #HOOPUSER# with the Unix user that will start the Hoop server.
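For instance, if the Hoop server will be started by the Unix user babu (the user from the curl examples above), the two entries become:
<property>
  <name>hadoop.proxyuser.babu.hosts</name>
  <value>myhoop.foo.com</value>
</property>
<property>
  <name>hadoop.proxyuser.babu.groups</name>
  <value>*</value>
</property>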
Restart Hadoop
You need to restart Hadoop for the proxyuser configuration to become active.
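How you restart depends on how the cluster is managed; on a plain single-node Apache Hadoop installation the stock scripts would be roughly:
$ bin/stop-dfs.sh        # run from the Hadoop installation directory
$ bin/start-dfs.sh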
Start/Stop Hoop
To start/stop Hoop use Hoop's bin/hoop.sh script. For example:
hoop-0.1.0-SNAPSHOT $ bin/hoop.sh start
NOTE: Invoking the script without any parameters lists all possible parameters (start, stop, run, etc.). The hoop.sh script is a wrapper for Tomcat's catalina.sh script that sets the environment variables and Java System properties required to run Hoop.
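For example, stopping the server and listing the available sub-commands:
hoop-0.1.0-SNAPSHOT $ bin/hoop.sh stop
hoop-0.1.0-SNAPSHOT $ bin/hoop.sh        # with no arguments it prints the available sub-commands (start, stop, run, ...)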
Test Hoop is working
~ $ curl -i "http://<HOOPHOSTNAME>:14000?user.name=babu&op=homedir"
HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked

{"homeDir":"http:\/\/<HOOP_HOST>:14000\/user\/babu"}
Embedded Tomcat Configuration
To configure the embedded Tomcat, go to the tomcat/conf directory.
Hoop preconfigures the HTTP and Admin ports in Tomcat's server.xml to 14000 and 14001.
Tomcat logs are also preconfigured to go to Hoop's logs/ directory.
The following environment variables (which can be set in Hoop's conf/hoop-env.sh script) can be used to alter those values:
- HOOP_HTTP_PORT
- HOOP_ADMIN_PORT
- HOOP_LOG
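For example, a conf/hoop-env.sh that moves both ports and the log directory (the values below are purely illustrative):
# conf/hoop-env.sh (illustrative values)
export HOOP_HTTP_PORT=24000
export HOOP_ADMIN_PORT=24001
export HOOP_LOG=/var/log/hoop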
Configure Hoop
Edit the hoop-0.1.0-SNAPSHOT/conf/hoop-site.xml file and set the hoop.hadoop.conf:fs.default.name property to the HDFS Namenode URI. For example:
hoop.hadoop.conf:fs.default.name=hdfs://localhost:8021
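In the hoop-site.xml file itself this is written as a regular Hadoop-style property entry; a minimal sketch with the example NameNode URI above:
<configuration>
  <property>
    <name>hoop.hadoop.conf:fs.default.name</name>
    <value>hdfs://localhost:8021</value>
  </property>
</configuration>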