  • Hadoop: data loss after a sudden power outage

    HDFS-Could not obtain block

     

    MapReduce Total cumulative CPU time: 33 seconds 380 msec

    Ended Job = job_201308291142_4635 with errors

    Error during job, obtaining debugging information...

    Job Tracking URL: http://xxx /jobdetails.jsp?jobid=job_201308291142_4635

    Examining task ID: task_201308291142_4635_m_000019 (and more) from job job_201308291142_4635

    Examining task ID: task_201308291142_4635_m_000007 (and more) from job job_201308291142_4635

    Examining task ID: task_201308291142_4635_m_000009 (and more) from job job_201308291142_4635

     

    Task with the most failures(5):

    -----

    Task ID:

      task_201308291142_4635_m_000009

     

    URL:

      http://xxxxxxx:50030/taskdetails.jsp?jobid=job_201308291142_4635&tipid=task_201308291142_4635_m_000009

    -----

    Diagnostic Messages for this Task:

    java.io.IOException: java.io.IOException: org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-1555036314-10.115.5.16-1375773346340:blk_-2678705702538243931_541142 file=/user/hive/warehouse/playtime/dt=20131119/access_pt.log.2013111904.log

            at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)

            at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)

            at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:330)

            at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:246)

            at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:215)

            at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:200)

            at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)

            at org.apache.hadoop.mapred.MapTask.runOldMa 

     

    •  Reason

          A sudden power outage can destroy every replica of an HDFS block before the data is safely flushed to disk. When no replica of a block survives, any read of the file fails with BlockMissingException: Could not obtain block, which is what aborted the Hive job above.

    •  Solution 

          HDFS FILE 

                - If an HDFS block is missing 

             1. Confirm cluster status

                  Check whether any blocks are actually missing.

                  If one or more blocks are missing, the affected files cannot be read. 

     $ hadoop dfsadmin -report

     

     Configured Capacity: 411114887479296 (373.91 TB)

    Present Capacity: 411091477784158 (373.89 TB)

    DFS Remaining: 411068945908611 (373.87 TB)

    DFS Used: 22531875547 (20.98 GB)

    DFS Used%: 0.01%

    Under replicated blocks: 0

    Blocks with corrupt replicas: 0

    Missing blocks: 0

     

    -------------------------------------------------

    Datanodes available: 20 (20 total, 0 dead)
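The line to watch in the report is "Missing blocks". As a minimal sketch (assuming the report has been saved to a file; the heredoc below merely stands in for real dfsadmin output), the counter can be pulled out with awk:

```shell
# Extract the "Missing blocks" counter from a saved dfsadmin report.
# Assumes the report was captured with: hadoop dfsadmin -report > report.txt
# The heredoc is a stand-in for that file, using values from the report above.
cat > report.txt <<'EOF'
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
EOF

missing=$(awk -F': ' '/^Missing blocks:/ {print $2}' report.txt)
if [ "$missing" -gt 0 ]; then
    echo "blocks missing: $missing - affected files cannot be read"
else
    echo "no missing blocks"
fi
```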

     

                 2. List block details per file

                   Use hadoop fsck to see which files and blocks are affected:

          hadoop fsck / -files -blocks

        

    ...

    Status: HEALTHY

     Total size:    4056908575 B (Total open files size: 3505453 B)

     Total dirs:    533

     Total files:   15525 (Files currently being written: 2)

     Total blocks (validated):  15479 (avg. block size 262091 B) (Total open file blocks (not validated): 2)

     Minimally replicated blocks:   15479 (100.0 %)

     Over-replicated blocks:    0 (0.0 %)

     Under-replicated blocks:   0 (0.0 %)

     Mis-replicated blocks:     0 (0.0 %)

     Default replication factor:    3

     Average block replication: 3.0094967

     Corrupt blocks:        0

     Missing replicas:      0 (0.0 %)

     Number of data-nodes:      20

     Number of racks:       1

    FSCK ended at Tue Nov 19 10:17:19 KST 2013 in 351 milliseconds

     

    The filesystem under path '/' is HEALTHY
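When fsck does find damage, the whole-namespace listing is noisy; fsck marks damaged entries with CORRUPT or MISSING in its per-file output, so a captured report can be filtered down to just those files. The sample lines below are illustrative only (modeled on the file from the exception above, not real output from this cluster):

```shell
# Filter a captured fsck report (hadoop fsck / -files -blocks > fsck.txt)
# down to the damaged entries. The sample report content is illustrative.
cat > fsck.txt <<'EOF'
/user/hive/warehouse/playtime/dt=20131119/access_pt.log.2013111904.log: CORRUPT 1 blocks of total size 3505453 B
/user/hive/warehouse/playtime/dt=20131118/access_pt.log.2013111804.log 1048576 bytes, 1 block(s):  OK
EOF

grep -E 'CORRUPT|MISSING' fsck.txt
```

fsck also accepts a narrower target than "/", e.g. hadoop fsck /user/hive/warehouse/playtime -files -blocks, which scans much less of the namespace.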

     

                3.  Remove the corrupted files

                    fsck needs a target path, and -delete permanently removes the files whose blocks are lost (it cleans up the namespace; it does not recover the data):

     $ hadoop fsck / -delete

     

    .....

    .........................Status: HEALTHY

     Total size:    4062473881 B (Total open files size: 3505453 B)

     Total dirs:    533

     Total files:   15525 (Files currently being written: 2)

     Total blocks (validated):      15479 (avg. block size 262450 B) (Total open file blocks (not validated): 2)

     Minimally replicated blocks:   15479 (100.0 %)

     Over-replicated blocks:        0 (0.0 %)

     Under-replicated blocks:       0 (0.0 %)

     Mis-replicated blocks:         0 (0.0 %)

     Default replication factor:    3

     Average block replication:     3.0094967

     Corrupt blocks:                0

     Missing replicas:              0 (0.0 %)

     Number of data-nodes:          20

     Number of racks:               1

    FSCK ended at Tue Nov 19 10:21:41 KST 2013 in 294 milliseconds

     

     

    The filesystem under path '/' is HEALTHY

         

               HIVE FILE 

                   -  If the missing block belongs to a Hive table file 

           Drop the affected partition (its underlying data file is unreadable) and reload the data:

           alter table <table> drop partition (<partition_spec>);
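For the concrete file in the exception above, the warehouse path /user/hive/warehouse/playtime/dt=20131119 implies table playtime and partition dt='20131119'. A sketch of deriving the DROP PARTITION statement from such a path (table and partition names are inferred from the path layout, and the reload step depends on your ingestion pipeline):

```shell
# Build the DROP PARTITION statement from the damaged file's warehouse path.
# The path comes from the BlockMissingException earlier in the post; the
# standard layout is /user/hive/warehouse/<table>/<partcol>=<value>/<file>.
path="/user/hive/warehouse/playtime/dt=20131119/access_pt.log.2013111904.log"
table=$(echo "$path" | awk -F/ '{print $5}')
dt=$(echo "$path" | awk -F/ '{print $6}' | cut -d= -f2)

sql="ALTER TABLE $table DROP IF EXISTS PARTITION (dt='$dt');"
echo "$sql"
# On a live cluster this would then be run as: hive -e "$sql"
# followed by re-ingesting that day's data from the original source logs.
```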

     

  • Original article: https://www.cnblogs.com/fengwenit/p/5941552.html