  • UDE-00008 ORA-31626 ORA-06512 ORA-25254

    Today, while exporting a schema of about 140 GB, the following errors appeared:

    UDE-00008: operation generated ORACLE error 31626
    ORA-31626: job does not exist
    ORA-06512: at "SYS.KUPC$QUE_INT", line 536
    ORA-25254: time-out in LISTEN while waiting for a message
    ORA-06512: at "SYS.DBMS_DATAPUMP", line 2772
    ORA-06512: at "SYS.DBMS_DATAPUMP", line 3886
    ORA-06512: at line 1

    The explanation given on MOS (My Oracle Support):

    CAUSE

    The problem is due to the fact that there are so-called orphaned Datapump jobs (i.e. Datapump actions that failed but are not cleaned up properly) still in the database.

    SOLUTION

    The solution is to clean the traces of these orphaned jobs by using the steps outlined in 
    Note 336014.1 - How To Cleanup Orphaned DataPump Jobs In DBA_DATAPUMP_JOBS ?

    After that, restart the Datapump operation.
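    The restart can be done by re-running the original expdp command, or from PL/SQL via DBMS_DATAPUMP. A minimal sketch of restarting a schema-level export like the one above; the job name, directory object, and dump file name here are illustrative assumptions, not values from the failed run:

    ```sql
    -- Sketch: schema-level export via DBMS_DATAPUMP (run as a privileged user).
    -- Job name, directory object, and dump file name are assumptions.
    DECLARE
       h1 NUMBER;
    BEGIN
       h1 := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA',
                                job_name  => 'EXPDP_RETRY');
       DBMS_DATAPUMP.ADD_FILE(h1, 'scott_retry.dmp', 'DATA_PUMP_DIR');
       DBMS_DATAPUMP.METADATA_FILTER(h1, 'SCHEMA_EXPR', 'IN (''SCOTT'')');
       DBMS_DATAPUMP.START_JOB(h1);
       DBMS_DATAPUMP.DETACH(h1);   -- the job continues in the background
    END;
    /
    ```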

     


    How To Cleanup Orphaned DataPump Jobs In DBA_DATAPUMP_JOBS? (Doc ID 336014.1)

     

    In this Document


    Goal

    Solution

    Additional Resources

    APPLIES TO:

    Oracle Database - Enterprise Edition - Version 10.1.0.2 to 12.1.0.1 [Release 10.1 to 12.1]
    Oracle Database - Standard Edition - Version 10.1.0.2 to 12.1.0.1 [Release 10.1 to 12.1]
    Oracle Database - Personal Edition - Version 10.1.0.2 to 12.1.0.1 [Release 10.1 to 12.1]
    Enterprise Manager for Oracle Database - Version 10.1.0.2 to 12.1.0.6.0 [Release 10.1 to 12.1]
    Information in this document applies to any platform.
    ***Checked for relevance on 29-Apr-2014***

    GOAL

    How to cleanup orphaned Data Pump jobs in DBA_DATAPUMP_JOBS ?

    SOLUTION

    The jobs used in this example:
    - Export job SCOTT.EXPDP_20051121 is a schema level export that is running
    - Export job SCOTT.SYS_EXPORT_TABLE_01 is an orphaned table level export job
    - Export job SCOTT.SYS_EXPORT_TABLE_02 is a table level export job that was stopped
    - Export job SYSTEM.SYS_EXPORT_FULL_01 is a full database export job that is temporarily stopped


    Step 1. Determine in SQL*Plus which Data Pump jobs exist in the database:

    %sqlplus /nolog

     

    CONNECT / as sysdba
    SET lines 200
    COL owner_name FORMAT a10
    COL job_name FORMAT a20
    COL state FORMAT a12
    COL operation LIKE state
    COL job_mode LIKE state
    COL "OWNER.OBJECT" FORMAT a50

    -- locate Data Pump jobs:

    SELECT owner_name, job_name, rtrim(operation) "OPERATION", 
           rtrim(job_mode) "JOB_MODE", state, attached_sessions
      FROM dba_datapump_jobs
     WHERE job_name NOT LIKE 'BIN$%'
     ORDER BY 1,2;

    OWNER_NAME JOB_NAME            OPERATION JOB_MODE  STATE       ATTACHED
    ---------- ------------------- --------- --------- ----------- --------
    SCOTT      EXPDP_20051121      EXPORT    SCHEMA    EXECUTING          1
    SCOTT      SYS_EXPORT_TABLE_01 EXPORT    TABLE     NOT RUNNING        0 
    SCOTT      SYS_EXPORT_TABLE_02 EXPORT    TABLE     NOT RUNNING        0 
    SYSTEM     SYS_EXPORT_FULL_01  EXPORT    FULL      NOT RUNNING        0

    Step 2. Ensure that the listed jobs in dba_datapump_jobs are not export/import Data Pump jobs that are active: status should be 'NOT RUNNING'.

    Step 3. Check with the job owner that a job with status 'NOT RUNNING' in dba_datapump_jobs is not an export/import Data Pump job that has been temporarily stopped, but is actually a job that failed. (E.g. the full database export job by SYSTEM is not a job that failed, but was deliberately paused with STOP_JOB.)

    Step 4. Determine in SQL*Plus the related master tables:

    -- locate Data Pump master tables:

     

    SELECT o.status, o.object_id, o.object_type, 
           o.owner||'.'||object_name "OWNER.OBJECT" 
      FROM dba_objects o, dba_datapump_jobs j 
     WHERE o.owner=j.owner_name AND o.object_name=j.job_name 
       AND j.job_name NOT LIKE 'BIN$%' ORDER BY 4,2; 

    STATUS   OBJECT_ID OBJECT_TYPE  OWNER.OBJECT
    ------- ---------- ------------ -------------------------
    VALID        85283 TABLE        SCOTT.EXPDP_20051121 
    VALID        85215 TABLE        SCOTT.SYS_EXPORT_TABLE_02 
    VALID        85162 TABLE        SYSTEM.SYS_EXPORT_FULL_01

    Step 5. For jobs that were stopped in the past and won't be restarted anymore, delete the master table. E.g.:

    DROP TABLE scott.sys_export_table_02;

     

    -- For systems with a recycle bin, additionally run:
    PURGE dba_recyclebin;

    Step 6. Re-run the queries on dba_datapump_jobs and dba_objects (steps 1 and 4). If there are still jobs listed in dba_datapump_jobs, and these jobs no longer have a master table, clean up the job while connected as the job owner. E.g.:

    CONNECT scott/tiger 

     

    SET serveroutput on 
    SET lines 100 
    DECLARE 
       h1 NUMBER; 
    BEGIN 
       h1 := DBMS_DATAPUMP.ATTACH('SYS_EXPORT_TABLE_01','SCOTT'); 
       DBMS_DATAPUMP.STOP_JOB (h1); 
    END; 
    /
     

    Note that after the call to the STOP_JOB procedure, it may take some time for the job to be removed. Query the view user_datapump_jobs to check whether the job has been removed:

    CONNECT scott/tiger 

     

    SELECT * FROM user_datapump_jobs;
     

    Step 7. Confirm that the job has been removed:

    CONNECT / as sysdba
    SET lines 200
    COL owner_name FORMAT a10
    COL job_name FORMAT a20
    COL state FORMAT a12
    COL operation LIKE state
    COL job_mode LIKE state
    COL "OWNER.OBJECT" FORMAT a50

     

    -- locate Data Pump jobs:

    SELECT owner_name, job_name, rtrim(operation) "OPERATION", 
           rtrim(job_mode) "JOB_MODE", state, attached_sessions
      FROM dba_datapump_jobs
     WHERE job_name NOT LIKE 'BIN$%'
     ORDER BY 1,2;

    OWNER_NAME JOB_NAME            OPERATION JOB_MODE  STATE       ATTACHED
    ---------- ------------------- --------- --------- ----------- --------
    SCOTT      EXPDP_20051121      EXPORT    SCHEMA    EXECUTING          1 
    SYSTEM     SYS_EXPORT_FULL_01  EXPORT    FULL      NOT RUNNING        0 

    -- locate Data Pump master tables:

    SELECT o.status, o.object_id, o.object_type, 
           o.owner||'.'||object_name "OWNER.OBJECT" 
      FROM dba_objects o, dba_datapump_jobs j 
     WHERE o.owner=j.owner_name AND o.object_name=j.job_name 
       AND j.job_name NOT LIKE 'BIN$%' ORDER BY 4,2; 

    STATUS   OBJECT_ID OBJECT_TYPE  OWNER.OBJECT
    ------- ---------- ------------ -------------------------
    VALID        85283 TABLE        SCOTT.EXPDP_20051121 
    VALID        85162 TABLE        SYSTEM.SYS_EXPORT_FULL_01


    Remarks:
    1. Orphaned Data Pump jobs do not have an impact on new Data Pump jobs. dba_datapump_jobs is a view based on gv$datapump_job, obj$, com$, and user$. It shows Data Pump jobs that are still running, jobs for which the master table was deliberately kept in the database, and jobs that ended abnormally (the orphaned jobs). When a new Data Pump job is started, a new entry is created that has no relation to the old Data Pump jobs.

    2. When a new Data Pump job is started with a system-generated name, the names of existing Data Pump jobs in dba_datapump_jobs are checked in order to obtain a unique new system-generated job name. Naturally, there needs to be enough free space for the new master table to be created in the schema that started the new Data Pump job.

    3. A Data Pump job is not the same as a job defined with DBMS_JOBS. Jobs created with DBMS_JOBS use their own processes, whereas Data Pump jobs use a master process and worker process(es). If a Data Pump job is temporarily stopped (STOP_JOB while in interactive command mode), the job still exists in the database (status: NOT RUNNING), while the master and worker process(es) are stopped and no longer exist. The client can attach to the job at a later time and continue the job execution (START_JOB).
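    To make the contrast with step 6 concrete: resuming such a temporarily stopped job uses the same ATTACH pattern, but calls START_JOB instead of STOP_JOB. A sketch using the example full export job from above:

    ```sql
    CONNECT system

    SET serveroutput on
    DECLARE
       h1 NUMBER;
    BEGIN
       -- re-attach to the paused full export job
       h1 := DBMS_DATAPUMP.ATTACH('SYS_EXPORT_FULL_01', 'SYSTEM');
       -- resume where the job left off
       DBMS_DATAPUMP.START_JOB(h1);
       -- detach; the job keeps running in the background
       DBMS_DATAPUMP.DETACH(h1);
    END;
    /
    ```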

    4. Whether deleting the master table of an active Data Pump job can cause corruption depends on the type of job.

    4.a. If the job is an export job, corruption is unlikely: dropping the master table only causes the Data Pump master and worker processes to abort. This situation is similar to aborting an export with the original export client.

    4.b. If the job is an import job, the situation is different. When the master table is dropped, the Data Pump worker and master processes abort. This will probably leave an incomplete import: e.g. not all table data was imported, tables were imported incompletely, and indexes, views, etc. are missing. This situation is similar to aborting an import with the original import client.

    Dropping the master table itself does not lead to any data dictionary corruption. If you keep the master table after the job completes (using the undocumented parameter KEEP_MASTER=Y), then dropping the master table afterwards will not cause any corruption.
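    As an illustration of the KEEP_MASTER behavior: when the job is driven through DBMS_DATAPUMP, the equivalent is the KEEP_MASTER job parameter set via SET_PARAMETER. A sketch; the job name, directory object, and file name are assumptions for illustration:

    ```sql
    -- Sketch: table-level export that retains its master table afterwards.
    DECLARE
       h1 NUMBER;
    BEGIN
       h1 := DBMS_DATAPUMP.OPEN('EXPORT', 'TABLE', NULL, 'KEEP_MASTER_DEMO');
       DBMS_DATAPUMP.ADD_FILE(h1, 'emp.dmp', 'DATA_PUMP_DIR');
       DBMS_DATAPUMP.METADATA_FILTER(h1, 'NAME_EXPR', 'IN (''EMP'')');
       DBMS_DATAPUMP.SET_PARAMETER(h1, 'KEEP_MASTER', 1);  -- retain master table
       DBMS_DATAPUMP.START_JOB(h1);
       DBMS_DATAPUMP.DETACH(h1);
    END;
    /
    -- After the job completes, the master table (here KEEP_MASTER_DEMO in the
    -- invoking schema) remains and can safely be dropped later:
    -- DROP TABLE keep_master_demo;
    ```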

  • Original source: https://www.cnblogs.com/blfshiye/p/4513722.html