Hadoop Learning Day 4: Hadoop Command Operations (Part 2)
1. hadoop dfsadmin #run the DFS admin client
![](//img-blog.csdn.net/20150311101908198?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
-report #report the current cluster's node information
![](//img-blog.csdn.net/20150311102102717?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
-safemode enter #enter safe mode
-safemode leave #leave safe mode
-safemode get #get the current safe mode state
-safemode wait #wait until safe mode is exited
![](//img-blog.csdn.net/20150311102132762?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
-saveNamespace #save the current namespace to the storage directories; safe mode must be enabled first
![](//img-blog.csdn.net/20150311102131032?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
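The two commands above are typically combined into a manual checkpoint sequence; a sketch, assuming a running cluster:

```shell
# saveNamespace refuses to run unless the NameNode is in safe mode,
# so enter safe mode first, checkpoint, then leave.
hadoop dfsadmin -safemode enter
hadoop dfsadmin -saveNamespace
hadoop dfsadmin -safemode leave
```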
-refreshNodes #re-read the hosts and exclude files to refresh the set of DataNodes in the cluster
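A common use of -refreshNodes is decommissioning a DataNode. A sketch, assuming the exclude file configured by dfs.hosts.exclude lives at a hypothetical path:

```shell
# Add the node to the exclude file, then tell the NameNode to
# re-read its host lists; re-replication off the node begins.
echo "datanode3.example.com" >> /opt/hadoop/conf/excludes   # hypothetical path
hadoop dfsadmin -refreshNodes
hadoop dfsadmin -report    # the node should show "Decommission in progress"
```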
-finalizeUpgrade #finalize a previous upgrade; DataNodes delete their backups of the previous version
![](//img-blog.csdn.net/20150311102329217?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
-upgradeProgress status #query the current status of a cluster upgrade
-upgradeProgress details #query detailed status information for the upgrade
-upgradeProgress force #force the upgrade to proceed
![](//img-blog.csdn.net/20150311102354847?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
![](//img-blog.csdn.net/20150311102408575?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
![](//img-blog.csdn.net/20150311102425657?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
-metasave <filename> #save the NameNode's primary data structures to <filename> in the directory specified by hadoop.log.dir
![](//img-blog.csdn.net/20150311102446078?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
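For example (the filename is arbitrary; note the file appears on the NameNode host, not in HDFS):

```shell
# Dump blocks being replicated, blocks awaiting deletion, and DataNode
# status to a file under the NameNode's hadoop.log.dir.
hadoop dfsadmin -metasave meta.log
```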
-refreshServiceAcl #refresh the service-level access control lists; the NameNode reloads the ACL file
![](//img-blog.csdn.net/20150311102353928?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
-refreshUserToGroupsMappings #refresh the user-to-groups mappings
![](//img-blog.csdn.net/20150311102525608?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
-refreshSuperUserGroupsConfiguration #refresh the superuser proxy-group mappings
![](//img-blog.csdn.net/20150311102425627?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
-setQuota <quota> <dirname>…<dirname> #set a name quota on each directory dirname; cannot be run in safe mode
![](//img-blog.csdn.net/20150311102558072?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
-clrQuota <dirname>…<dirname> #clear the name quota on each directory dirname
![](//img-blog.csdn.net/20150311102614156?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
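A name quota limits the number of file and directory names under a tree. A sketch with a hypothetical directory:

```shell
# Allow at most 1000 names under /user/hadoop/input.
hadoop dfsadmin -setQuota 1000 /user/hadoop/input
hadoop fs -count -q /user/hadoop/input    # first columns show the quota and remaining quota
hadoop dfsadmin -clrQuota /user/hadoop/input
```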
-setSpaceQuota <quota> <dirname>…<dirname> #set a disk space quota (in bytes) on each directory
![](//img-blog.csdn.net/20150311102641799?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
-clrSpaceQuota <dirname>…<dirname> #clear the disk space quota on each directory
![](//img-blog.csdn.net/20150311102654684?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
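Space quotas are given in bytes, so a size such as 10 GB has to be converted first. A sketch with a hypothetical directory:

```shell
# 10 GB = 10 * 1024^3 bytes = 10737418240 bytes.
hadoop dfsadmin -setSpaceQuota 10737418240 /user/hadoop/data
hadoop fs -count -q /user/hadoop/data     # shows the space quota and remaining space
hadoop dfsadmin -clrSpaceQuota /user/hadoop/data
```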
-setBalancerBandwidth <bandwidth in bytes per second> #set the bandwidth each DataNode may use while balancing; the example below sets it to 500 MB/s
![](//img-blog.csdn.net/20150311102711720?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
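Since the value is given in bytes per second, 500 MB/s works out as follows:

```shell
# 500 MB/s = 500 * 1024 * 1024 = 524288000 bytes per second.
hadoop dfsadmin -setBalancerBandwidth 524288000
```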
-help [cmd] #display help for the given command
![](//img-blog.csdn.net/20150311102611177?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
2. hadoop mradmin #run the MapReduce admin client
![](//img-blog.csdn.net/20150311102701097?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
-refreshServiceAcl #refresh the service-level access control lists; MapReduce reloads the ACL file
![](//img-blog.csdn.net/20150311102729536?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
-refreshQueues #refresh the MapReduce queue configuration
![](//img-blog.csdn.net/20150311102750580?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
-refreshUserToGroupsMappings #refresh the user-to-groups mappings
![](//img-blog.csdn.net/20150311102811500?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
-refreshSuperUserGroupsConfiguration #refresh the superuser proxy-group mappings
![](//img-blog.csdn.net/20150311102828707?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
-refreshNodes #refresh the set of TaskTracker nodes in the cluster
![](//img-blog.csdn.net/20150311103001994?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
-safemode enter #enter MapReduce safe mode
-safemode leave #leave MapReduce safe mode
-safemode get #get the MapReduce safe mode state
-safemode wait #wait until MapReduce safe mode is exited
![](//img-blog.csdn.net/20150311102904244?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
-help [cmd] #display help for the given command
3. hadoop namenode -format #format the HDFS NameNode
4. hadoop secondarynamenode #run the SecondaryNameNode; since one is already running, it must be stopped first and then restarted
![](//img-blog.csdn.net/20150311103014179?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
5. hadoop namenode #run the DFS NameNode by itself; if it is already running, stop it first and then restart it
![](//img-blog.csdn.net/20150311103154252?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
6. hadoop datanode #run a DFS DataNode by itself; if it is already running, stop it first and then restart it
![](//img-blog.csdn.net/20150311103119184?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
7. hadoop fsck #run the DFS filesystem checking utility
![](//img-blog.csdn.net/20150311103141102?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
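fsck is usually given a path plus flags controlling the level of detail; the path below is hypothetical:

```shell
hadoop fsck /    # check the whole filesystem and print a health summary
hadoop fsck /user/hadoop -files -blocks -locations   # per-file blocks and replica locations
```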
8. hadoop balancer #run the cluster balancing utility
![](//img-blog.csdn.net/20150311101957509?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
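The balancer accepts a -threshold option, the allowed deviation (in percent) of each DataNode's utilization from the cluster average:

```shell
# Move blocks until every DataNode is within 5% of the average utilization.
hadoop balancer -threshold 5
```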
9. hadoop fetchdt #fetch a delegation token from the NameNode
![](//img-blog.csdn.net/20150311103216655?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
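A sketch of fetching a delegation token into a local file; the NameNode web address is hypothetical, and delegation tokens only matter on secured clusters:

```shell
# Fetch a delegation token via the NameNode's HTTP interface and store it locally.
hadoop fetchdt --webservice http://namenode.example.com:50070 /tmp/my.token
```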
10. hadoop jobtracker #run the MapReduce JobTracker; if one is already running, it reports that the address is already in use.
![](//img-blog.csdn.net/20150311103235281?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
11. hadoop pipes #run a Pipes job
![](//img-blog.csdn.net/20150311103406571?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
12. hadoop tasktracker #run a MapReduce TaskTracker node; if one is already running on this node, it reports that the address is already in use
![](//img-blog.csdn.net/20150311103422733?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
13. hadoop historyserver #run the job history server as a standalone daemon
![](//img-blog.csdn.net/20150311103331316?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
14. hadoop job #manipulate MapReduce jobs
![](//img-blog.csdn.net/20150311103501452?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
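Typical subcommands for inspecting and killing jobs; the job ID below is hypothetical:

```shell
hadoop job -list                            # list currently running jobs
hadoop job -status job_201503110001_0001    # completion percentage and counters
hadoop job -kill job_201503110001_0001      # kill the job
```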
15. hadoop queue #get information about job queues
![](//img-blog.csdn.net/20150311103405090?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
16. hadoop version #print the version
![](//img-blog.csdn.net/20150311103532309?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQva2lzc3N1bjA2MDg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
17. hadoop jar <jar> #run a packaged jar file
18. hadoop distcp <srcurl> <desturl> #copy files and directories
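For example, copying a directory between two clusters (hostnames hypothetical; distcp itself runs as a MapReduce job):

```shell
hadoop distcp hdfs://nn1.example.com:9000/user/hadoop/logs \
              hdfs://nn2.example.com:9000/user/hadoop/logs
```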
19. hadoop archive -archiveName NAME -p <parent path> <src>* <dest> #create a Hadoop archive
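A sketch of creating an archive and reading it back through the har:// scheme; paths are hypothetical:

```shell
# Pack everything under /user/hadoop/input into input.har in /user/hadoop.
hadoop archive -archiveName input.har -p /user/hadoop input /user/hadoop
hadoop fs -ls har:///user/hadoop/input.har   # archives are read via the har filesystem
```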