  • Hadoop Cluster Routine Maintenance

    I. Backing up the namenode's metadata

    The metadata held by the namenode is critical: if it is lost or corrupted, the whole filesystem becomes unusable. Back it up frequently, ideally to an off-site location.

    1. Copy the metadata to a remote site

    (1) The following script copies the secondary namenode's metadata into a directory named after the current timestamp, then ships it to another machine with scp:

     #!/bin/bash
     # Snapshot the secondary namenode's current checkpoint into an
     # hour-stamped directory, ship it to slave1, then remove the local copy.
     export dirname=/mnt/tmphadoop/dfs/namesecondary/current/`date +%y%m%d%H`
     if [ ! -d ${dirname} ]
     then
         mkdir ${dirname}
         cp /mnt/tmphadoop/dfs/namesecondary/current/* ${dirname}
     fi
     scp -r ${dirname} slave1:/mnt/namenode_backup/
     rm -r ${dirname}

    (2) Configure crontab to run this job on a schedule (here at 00:00, 08:00, 14:00 and 20:00 every day):
    0 0,8,14,20 * * * bash /mnt/scripts/namenode_backup_script.sh

    2. On the remote site, start a local namenode daemon and try to load these backup files, to confirm that the backup was made correctly.
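
    A minimal sketch of such a check on the backup host, assuming a Hadoop 1.x setup in which fs.checkpoint.dir has been pointed at the copied files and dfs.name.dir at an empty test directory (all paths hypothetical):

     # -importCheckpoint makes the namenode load the image from
     # fs.checkpoint.dir and save it into the empty dfs.name.dir;
     # a clean start means the backup is loadable.
     hadoop namenode -importCheckpoint
     # From another terminal, sanity-check the restored namespace:
     hadoop fs -ls /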

    II. Data backup

    Do not rely entirely on HDFS for important data; keep separate backups, and note the following points (a distcp sketch follows this list):
    (1) Back up off-site whenever possible.
    (2) If you back up to another HDFS cluster with distcp, do not run the same Hadoop version on both clusters, so that a bug in Hadoop itself cannot corrupt the data in both places.
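
    A hedged distcp sketch; hdfs://master:9000 and hdfs://backupnn:9000 are placeholder namenode URIs for the source and backup clusters:

     # Copy /important-data to the backup cluster as a parallel MapReduce job.
     hadoop distcp hdfs://master:9000/important-data hdfs://backupnn:9000/backup/important-data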

    III. Filesystem checks

    Periodically run HDFS's fsck tool over the whole filesystem to proactively look for missing or corrupt blocks. Running it once a day is recommended; a scheduling sketch and a sample run follow.
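
    A possible cron entry, assuming the hypothetical paths /usr/bin/hadoop and /mnt/logs (% is special in crontab and must be escaped as \%):

     # Check the whole filesystem daily at 01:00 and keep a dated report.
     0 1 * * * /usr/bin/hadoop fsck / >> /mnt/logs/fsck_$(date +\%Y\%m\%d).log 2>&1

    A sample interactive run: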

     [jediael@master ~]$ hadoop fsck /  
     ...output omitted (errors, if any, would appear here; otherwise only dots are printed, one per file)...  
     .........Status: HEALTHY  
      Total size:    14466494870 B  
      Total dirs:    502  
      Total files:   1592 (Files currently being written: 2)  
      Total blocks (validated):      1725 (avg. block size 8386373 B)  
      Minimally replicated blocks:   1725 (100.0 %)  
      Over-replicated blocks:        0 (0.0 %)  
      Under-replicated blocks:       648 (37.565216 %)  
      Mis-replicated blocks:         0 (0.0 %)  
      Default replication factor:    2  
      Average block replication:     2.0  
      Corrupt blocks:                0  
      Missing replicas:              760 (22.028986 %)  
      Number of data-nodes:          2  
      Number of racks:               1  
     FSCK ended at Sun Mar 01 20:17:57 CST 2015 in 608 milliseconds  
       
     The filesystem under path '/' is HEALTHY  
    
    (1) If dfs.replication in hdfs-site.xml is set to 3 while there are in fact only 2 datanodes, fsck reports errors like the following:
    /hbase/Mar0109_webpage/59ad1be6884739c29d0624d1d31a56d9/il/43e6cd4dc61b49e2a57adf0c63921c09:  Under replicated blk_-4711857142889323098_6221. Target Replicas is 3 but found 2 replica(s).
    Note: dfs.replication was originally 3; later one datanode was taken offline and dfs.replication was lowered to 2, but files created earlier still carry a recorded replication factor of 3, which produces the error above and the figure "Under-replicated blocks: 648 (37.565216 %)".
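
    One way to clear these warnings is to reset the recorded replication factor of the existing files; a sketch, with the target factor and path adjusted to this cluster:

     # Recursively set the replication factor of existing files to 2;
     # -w waits until re-replication has completed.
     hadoop fs -setrep -w 2 -R /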

    (2) The fsck tool can also report which blocks a file consists of and where each block is located, among other details:

     [jediael@master conf]$ hadoop fsck /hbase/Feb2621_webpage/c23aa183c7cb86af27f15d4c2aee2795/s/30bee5fb620b4cd184412c69f70d24a7 -files -blocks -racks  
     FSCK started by jediael from /10.171.29.191 for path /hbase/Feb2621_webpage/c23aa183c7cb86af27f15d4c2aee2795/s/30bee5fb620b4cd184412c69f70d24a7 at Sun Mar 01 20:39:35 CST 2015  
     /hbase/Feb2621_webpage/c23aa183c7cb86af27f15d4c2aee2795/s/30bee5fb620b4cd184412c69f70d24a7 21507169 bytes, 1 block(s):  Under replicated blk_7117944555454804881_3655. Target Replicas is 3 but found 2 replica(s).  
     0. blk_7117944555454804881_3655 len=21507169 repl=2 [/default-rack/10.171.94.155:50010, /default-rack/10.251.0.197:50010]  
       
     Status: HEALTHY  
      Total size:    21507169 B  
      Total dirs:    0  
      Total files:   1  
      Total blocks (validated):      1 (avg. block size 21507169 B)  
      Minimally replicated blocks:   1 (100.0 %)  
      Over-replicated blocks:        0 (0.0 %)  
      Under-replicated blocks:       1 (100.0 %)  
      Mis-replicated blocks:         0 (0.0 %)  
      Default replication factor:    2  
      Average block replication:     2.0  
      Corrupt blocks:                0  
      Missing replicas:              1 (50.0 %)  
      Number of data-nodes:          2  
      Number of racks:               1  
     FSCK ended at Sun Mar 01 20:39:35 CST 2015 in 0 milliseconds  
       
      
     The filesystem under path '/hbase/Feb2621_webpage/c23aa183c7cb86af27f15d4c2aee2795/s/30bee5fb620b4cd184412c69f70d24a7' is HEALTHY  

    The usage of this command is as follows:

    [jediael@master ~]$ hadoop fsck -files  
     Usage: DFSck <path> [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks]]]  
             <path>  start checking from this path  
             -move   move corrupted files to /lost+found  
             -delete delete corrupted files  
             -files  print out files being checked  
             -openforwrite   print out files opened for write  
             -blocks print out block report  
             -locations      print out locations for every block  
             -racks  print out network topology for data-node locations  
                     By default fsck ignores files opened for write, use -openforwrite to report such files. They are usually  tagged CORRUPT or HEALTHY depending on their block allocation status  
     Generic options supported are  
     -conf <configuration file>     specify an application configuration file  
     -D <property=value>            use value for given property  
     -fs <local|namenode:port>      specify a namenode  
     -jt <local|jobtracker:port>    specify a job tracker  
     -files <comma separated list of files>    specify comma separated files to be copied to the map reduce cluster  
     -libjars <comma separated list of jars>    specify comma separated jar files to include in the classpath.  
     -archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.  
       
    The general command line syntax is  
    bin/hadoop command [genericOptions] [commandOptions]  
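
    When fsck does report corrupt files, the -move and -delete options listed above can act on them; a brief sketch:

     # Move corrupt files into /lost+found on HDFS while deciding
     # whether they can be restored from a backup:
     hadoop fsck / -move
     # Or, if the data is expendable, remove the corrupt files outright:
     # hadoop fsck / -delete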
    

      


    For a detailed explanation, see Hadoop: The Definitive Guide, p. 376.

