  • [Repost] Diagnosing Oracle Data Pump Failures with the Hidden Trace Parameter

    http://blog.itpub.net/17203031/viewspace-772718/

    Data Pump, introduced in Oracle 10g, is the backup and restore component that replaces the traditional exp/imp tools. After several releases of evolution and refinement, Data Pump is now very mature and has gradually been adopted by more and more DBAs and operations staff.

     

    Compared with traditional exp/imp, Data Pump brings many advantages, but also more complexity. Its most distinctive trait is that it runs server-side. Exp/Imp are small client-side tools: convenient to use, but you must reconcile version differences across four components, the server and the client at both the source and the target. That is why so many people online wrestle with Exp/Imp version mismatches. Moreover, running on the client makes Exp/Imp highly sensitive to the network: a long-running operation over an unstable link can simply end in failure. Exp/imp also falls short in performance, stability, and feature support.

     

     

    Data Pump runs on the server side, which in itself reduces the chance of version problems; even when they do appear, the version parameter gives effective control. Moreover, because the work runs as a standalone job, an unexpected client interruption does not abort the operation.
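    As a hedged illustration (the schema, directory, and file name below are placeholders, not from the original article), the version parameter lets a newer database write a dump file that an older impdp can read:

    # Hypothetical example: write a 10.2-compatible dump file from an 11.2 database
    expdp "/ as sysdba" directory=dumpdir schemas=scott dumpfile=scott_v102.dmp version=10.2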

     

     

    Even so, we still run into Data Pump failures, and the error messages alone often do not support a complete diagnosis. In those cases we can use Data Pump's hidden Trace parameter to generate trace files and track the problem down step by step.

     

    1. How Data Pump Works, and Environment Setup

     

    Data Pump's operation has two defining traits: job scheduling and multi-process cooperation. In Oracle, a Data Pump run is handled as a dedicated job that can be started, stopped, and paused; more importantly, the job runs independently of the client. The user neither has to "babysit" the screen as with Exp/Imp, nor push the command into the background with nohup ... &; the work is carried out in the background automatically. A brief job-control sketch follows.
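    For example (a sketch: the job name shown is the default one this article's export creates later, and may differ in your environment), you can detach from the client with CTRL+C and reattach to the running job later:

    # Reattach to a running job by name
    expdp "/ as sysdba" attach=SYS_EXPORT_SCHEMA_01

    # At the Export> prompt the job can be paused, resumed, and monitored:
    Export> STOP_JOB=IMMEDIATE
    Export> START_JOB
    Export> STATUS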

     

    At run time, Data Pump is a cooperation among multiple processes. As the job log shows, every Data Pump job creates a master table when it starts, in which the progress of the operation is recorded. While the job runs there are two kinds of processes: the master control process (MCP), which coordinates the overall run, manages the Worker process pool, and assigns tasks; and the Worker processes, which perform the actual export or import. If the parallel parameter is set, several Worker processes operate on the data at once. The queries sketched below show both pieces.
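    A minimal way to observe this (standard dictionary views; the master table name assumes the default job name that appears later in this article):

    -- List Data Pump jobs; the master table carries the same name as the job
    SELECT owner_name, job_name, operation, job_mode, state
      FROM dba_datapump_jobs;

    -- The master table itself can be queried while it exists
    SELECT * FROM "SYS"."SYS_EXPORT_SCHEMA_01" WHERE rownum <= 5;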

     

    Diagnosing Data Pump is, in essence, tracing the behavior of these processes. Oracle provides a hidden Trace parameter to help us do exactly that.

     

    First, let's prepare the Data Pump working environment, starting with the Directory object.

     

     

    [root@SimpleLinux /]# ls -l | grep dumpdata

    drwxr-xr-x   2 root   root      4096 Sep 11 09:01 dumpdata

    [root@SimpleLinux /]# chown -R oracle:oinstall dumpdata/

    [root@SimpleLinux /]# ls -l | grep dumpdata

    drwxr-xr-x   2 oracle oinstall  4096 Sep 11 09:01 dumpdata

     

    -- Create the directory object

    SQL> select * from v$version where rownum<2;

     

    BANNER

    -----------------------------------------------------

    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Producti

     

    SQL> create directory dumpdir as '/dumpdata';

    Directory created
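    The examples here run as SYSDBA, so no extra grant is needed; for an ordinary user, a sketch of the required privilege would be:

    -- Let a non-privileged user (scott is illustrative) read and write the directory
    GRANT read, write ON DIRECTORY dumpdir TO scott;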

     

     

    2. The Hidden Trace Parameter

     

    Trace is a hidden parameter used internally by Data Pump. It is specified like any other Data Pump parameter, but its value deserves some attention. Below is the Trace command used in our experiment.

     

     

    [oracle@SimpleLinux dumpdata]$ expdp "/ as sysdba" directory=dumpdir schemas=scott dumpfile=scott_dump.dmp parallel=2 trace=480300

     

    Export: Release 11.2.0.3.0 - Production on Wed Sep 11 09:45:07 2013

     

     

    Unlike other tracing facilities, Trace is not a y/n switch that turns tracing on or off. The Data Pump Trace parameter is a string of 7 hexadecimal digits, and different values trace different components. The 7 digits fall into two parts: the first three digits select a particular Data Pump component, and the last four digits should simply be 0300.

     

    According to Oracle MOS, the Trace value follows these rules:

     

    - Do not enter more than 7 digits;

    - Do not prefix the value with 0x to mark it as hexadecimal;

    - Do not convert the hexadecimal value to its decimal equivalent;

    - If the 7-digit value starts with 0, the leading 0 may be omitted;

    - The value is case-insensitive.

     

    Each component is represented by its own three-digit hexadecimal value, as the fragment below shows:

     

     

    -- Summary of Data Pump trace levels:
    -- ==================================

      Trace   DM   DW  ORA  Lines
      level  trc  trc  trc     in
      (hex) file file file  trace                                         Purpose
    ------- ---- ---- ---- ------ -----------------------------------------------
      10300    x    x    x  SHDW: To trace the Shadow process (API) (expdp/impdp)
      20300    x    x    x  KUPV: To trace Fixed table
      40300    x    x    x  'div' To trace Process services
      80300    x            KUPM: To trace Master Control Process (MCP)      (DM)
     100300    x    x       KUPF: To trace File Manager
     200300    x    x    x  KUPC: To trace Queue services
     400300         x       KUPW: To trace Worker process(es)                (DW)
     800300         x       KUPD: To trace Data Package
    1000300         x       META: To trace Metadata Package
    --- +
    1FF0300    x    x    x  'all' To trace all components          (full tracing)

     

     

    To trace several components at once, add up the hex values of the target components; the trailing four digits, 0300, stay the same. A worked example follows.
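    For instance, the 480300 value used throughout this article is just the sum of the MCP and Worker component digits:

      08        KUPM: Master Control Process (MCP)
    + 40        KUPW: Worker process(es)
    ----
      48  + 0300  =>  trace=480300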

     

    The trace files a Dump job generates are no different in nature from any other trace files; by default they land in the BACKGROUND_DUMP_DEST directory. Note, however, that a Data Pump trace produces several trace files, and locating them requires the process IDs of the dm and dw processes; the query sketched below can help.
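    A sketch for mapping the job's sessions to OS process IDs (the spid appears in the trace file name; this is the same join the SRDC script near the end of this page uses):

    SELECT d.job_name, s.sid, s.serial#, p.spid, s.program
      FROM v$session s, v$process p, dba_datapump_sessions d
     WHERE p.addr = s.paddr
       AND s.saddr = d.saddr;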

     

    One approach I suggest: if the system is not very busy, temporarily move the existing trc and trm files in that directory somewhere else. When the traced job runs, the newly generated files then stand out clearly.

     

    As for the Trace value itself, Oracle suggests 480300 to cover most situations: it traces the Dump job's Master Control Process (MCP) and the Worker processes. As a first pass at tracing, 480300 is usually enough.

     

    3. Tracing an Expdp Run

     

    Let's start with a traced data export (expdp) as a case study. First, clean out the trace file directory.

     

     

    [oracle@SimpleLinux trace]$ rm *.trc

    [oracle@SimpleLinux trace]$ rm *.trm

    [oracle@SimpleLinux trace]$ ls -l

    total 92

    -rw-r----- 1 oracle oinstall 86384 Sep 11 09:37 alert_ora11g.log

     

     

    Invoke the command, exporting with a parallelism of two.

     

     

    [oracle@SimpleLinux dumpdata]$ expdp "/ as sysdba" directory=dumpdir schemas=scott dumpfile=scott_dump.dmp parallel=2 trace=480300

     

    Export: Release 11.2.0.3.0 - Production on Wed Sep 11 09:45:07 2013

     

    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

     

    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production

    With the Partitioning, OLAP, Data Mining and Real Application Testing options

    Starting "SYS"."SYS_EXPORT_SCHEMA_01":  "/******** AS SYSDBA" directory=dumpdir schemas=scott dumpfile=scott_dump.dmp parallel=2 trace=480300

    Estimate in progress using BLOCKS method...

    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

    Total estimation using BLOCKS method: 32.18 MB

    Processing object type SCHEMA_EXPORT/USER

    . . exported "SCOTT"."T_MASTER":"P1"                     42.43 KB     982 rows

    Processing object type SCHEMA_EXPORT/SYSTEM_GRANT

    Processing object type SCHEMA_EXPORT/ROLE_GRANT

    (output truncated for brevity ...)

    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT

    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT

    . . exported "SCOTT"."T_MASTER":"P2"                     88.69 KB    1859 rows

    . . exported "SCOTT"."T_SLAVE":"P1"                      412.2 KB   11268 rows

    . . exported "SCOTT"."T_SLAVE":"P2"                      975.7 KB   21120 rows

    . . exported "SCOTT"."DEPT"                              5.929 KB       4 rows

    . . exported "SCOTT"."EMP"                               8.562 KB      14 rows

    . . exported "SCOTT"."SALGRADE"                          5.859 KB       5 rows

    . . exported "SCOTT"."BONUS"                                 0 KB       0 rows

    Master table "SYS"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded

    ******************************************************************************

    Dump file set for SYS.SYS_EXPORT_SCHEMA_01 is:

      /dumpdata/scott_dump.dmp

    Job "SYS"."SYS_EXPORT_SCHEMA_01" successfully completed at 09:45:36

     

     

    The log shows one effect of Parallel: the partition T_MASTER.P1 was exported ahead of the others, out of the usual order.

     

    The trace directory now contains the newly generated files.

     

     

    [oracle@SimpleLinux trace]$ ls -l

    total 260

    -rw-r----- 1 oracle oinstall 87421 Sep 11 09:45 alert_ora11g.log

    -rw-r----- 1 oracle oinstall 40784 Sep 11 09:45 ora11g_dm00_3894.trc

    -rw-r----- 1 oracle oinstall  1948 Sep 11 09:45 ora11g_dm00_3894.trm

    -rw-r----- 1 oracle oinstall 73971 Sep 11 09:45 ora11g_dw00_3896.trc

    -rw-r----- 1 oracle oinstall  1986 Sep 11 09:45 ora11g_dw00_3896.trm

    -rw-r----- 1 oracle oinstall 27366 Sep 11 09:45 ora11g_dw01_3898.trc

    -rw-r----- 1 oracle oinstall   982 Sep 11 09:45 ora11g_dw01_3898.trm

    -rw-r----- 1 oracle oinstall  3016 Sep 11 09:45 ora11g_ora_3890.trc

    -rw-r----- 1 oracle oinstall   209 Sep 11 09:45 ora11g_ora_3890.trm

     

     

    The files marked dm and dw are the trace files generated by the MCP and the Worker processes respectively; because of the parallel setting there are two worker traces, dw00 and dw01.

     

    During the export we can see the two workers' session information.

     

     

    SQL> select * from dba_datapump_sessions;

     

    OWNER_NAME  JOB_NAME              INST_ID  SADDR     SESSION_TYPE
    ----------  --------------------  -------  --------  -------------
    SYS         SYS_EXPORT_SCHEMA_01        1  35EB0580  DBMS_DATAPUMP
    SYS         SYS_EXPORT_SCHEMA_01        1  35E95280  MASTER
    SYS         SYS_EXPORT_SCHEMA_01        1  35E8A480  WORKER
    SYS         SYS_EXPORT_SCHEMA_01        1  35E84D80  WORKER

     

     

    At this point the trace files show us details of how Data Pump works. For example, in the MCP's trace file we can see a sequence of dispatch calls, as in the fragments below:

     

    -- Initialize the export job and set up the file system

    KUPM:09:45:08.720: ****IN DISPATCH at 35108, request type=1001

    KUPM:09:45:08.721: Current user is: SYS

    KUPM:09:45:08.721: hand := DBMS_DATAPUMP.OPEN ('EXPORT', 'SCHEMA', '', 'SYS_EXPORT_SCHEMA_01', '', '2');

    KUPM:09:45:08.791: Resumable enabled

    KUPM:09:45:08.799: Entered state: DEFINING

    KUPM:09:45:08.799: initing file system

     

    *** 2013-09-11 09:45:08.893

    KUPM:09:45:08.893: ****OUT DISPATCH, request type=1001, response type =2041

     

    -- Write a log message

    KUPM:09:45:12.135: ****IN DISPATCH at 35112, request type=3031

    KUPM:09:45:12.135: Current user is: SYS

    KUPM:09:45:12.136: Log message received from worker DG,KUPC$C_1_20130911094507,KUPC$A_1_094510040559000,MCP,3,Y

    KUPM:09:45:12.136: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

    kwqberlst rqan->lascn_kwqiia > 0 block

    kwqberlst rqan->lascn_kwqiia  4

    kwqberlst ascn 986758 lascn 0

    KUPM:09:45:12.137: ****OUT DISPATCH, request type=3031, response type =2041

     

     

    In a Worker Process trace, fragments like the following show the data being exported.

     

     

    KUPW:09:45:12.153: 1:

    KUPW:09:45:12.153: 1:

    KUPW:09:45:12.153: 1: TABLE

    KUPW:09:45:12.153: 1: SCOTT

    KUPW:09:45:12.153: 1: DEPT

    KUPW:09:45:12.154: 1: In procedure LOCATE_DATA_FILTERS

    KUPW:09:45:12.154: 1: In function NEXT_PO_NUMBER

    KUPW:09:45:12.161: 1: In procedure DETERMINE_METHOD_PARALLEL

    KUPW:09:45:12.161: 1: flags mask: 0

    KUPW:09:45:12.161: 1: dapi_possible_meth: 1

    KUPW:09:45:12.161: 1: data_size: 65536

    KUPW:09:45:12.161: 1: et_parallel: TRUE

    KUPW:09:45:12.161: 1: object: TABLE_DATA:"SCOTT"."DEPT"

    KUPW:09:45:12.164: 1: l_dapi_bit_mask: 7

    KUPW:09:45:12.164: 1: l_client_bit_mask: 7

    KUPW:09:45:12.164: 1: TABLE_DATA:"SCOTT"."DEPT" either, parallel: 1

    KUPW:09:45:12.164: 1: In function GATHER_PARSE_ITEMS

    KUPW:09:45:12.165: 1: In function CHECK_FOR_REMAP_NETWORK

    KUPW:09:45:12.165: 1: Nothing to remap

    KUPW:09:45:12.165: 1: In procedure BUILD_OBJECT_STRINGS

    KUPW:09:45:12.165: 1: In DETERMINE_BASE_OBJECT_INFO

    KUPW:09:45:12.165: 1: TABLE_DATA

    KUPW:09:45:12.165: 1: SCOTT

    KUPW:09:45:12.165: 1: EMP

     

     

    4. Tracing an Impdp Run

     

    While tracing, we can also add SQL tracing, just as with a 10046 trace. Under the covers, Data Pump's work is still a series of SQL statements, and the root of many performance problems is best approached from the SQL.

     

    Switching on SQL tracing is simple: add 1 to the Trace value, so 480300 becomes 480301. We will demonstrate with an import.

     

     

    -- Before the run

    [root@SimpleLinux trace]# ls -l

    total 4

    -rw-r----- 1 oracle oinstall 552 Sep 11 10:49 alert_ora11g.log

     

    [oracle@SimpleLinux dumpdata]$ impdp "/ as sysdba" directory=dumpdir dumpfile=scott_dump.dmp remap_schema=scott:test trace=480301 parallel=2

     

    Import: Release 11.2.0.3.0 - Production on Wed Sep 11 10:50:14 2013

     

    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

     

    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production

    With the Partitioning, OLAP, Data Mining and Real Application Testing options

    Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded

    Starting "SYS"."SYS_IMPORT_FULL_01":  "/******** AS SYSDBA" directory=dumpdir dumpfile=scott_dump.dmp remap_schema=scott:test trace=480301 parallel=2

    Processing object type SCHEMA_EXPORT/USER

    Processing object type SCHEMA_EXPORT/SYSTEM_GRANT

    Processing object type SCHEMA_EXPORT/ROLE_GRANT

    Processing object type SCHEMA_EXPORT/DEFAULT_ROLE

    Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

    Processing object type SCHEMA_EXPORT/TABLE/TABLE

    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

    . . imported "TEST"."T_MASTER":"P1"                      42.43 KB     982 rows

    . . imported "TEST"."T_MASTER":"P2"                      88.69 KB    1859 rows

    . . imported "TEST"."T_SLAVE":"P1"                       412.2 KB   11268 rows

    . . imported "TEST"."T_SLAVE":"P2"                       975.7 KB   21120 rows

    . . imported "TEST"."DEPT"                               5.929 KB       4 rows

    . . imported "TEST"."EMP"                                8.562 KB      14 rows

    . . imported "TEST"."SALGRADE"                           5.859 KB       5 rows

    . . imported "TEST"."BONUS"                                  0 KB       0 rows

    Processing object type SCHEMA_EXPORT/TABLE/COMMENT

    Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX

    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT

    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT

    Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at 10:50:24

     

     

    Check the trace directory.

     

     

    [root@SimpleLinux trace]# ls -l

    total 7588

    -rw-r----- 1 oracle oinstall     739 Sep 11 10:50 alert_ora11g.log

    -rw-r----- 1 oracle oinstall 1916394 Sep 11 10:50 ora11g_dm00_4422.trc

    -rw-r----- 1 oracle oinstall    9446 Sep 11 10:50 ora11g_dm00_4422.trm

    -rw-r----- 1 oracle oinstall 2706475 Sep 11 10:50 ora11g_dw00_4424.trc

    -rw-r----- 1 oracle oinstall   15560 Sep 11 10:50 ora11g_dw00_4424.trm

    -rw-r----- 1 oracle oinstall 2977812 Sep 11 10:50 ora11g_ora_4420.trc

    -rw-r----- 1 oracle oinstall   12266 Sep 11 10:50 ora11g_ora_4420.trm

    -rw-r----- 1 oracle oinstall   29795 Sep 11 10:50 ora11g_p000_4426.trc

    -rw-r----- 1 oracle oinstall     526 Sep 11 10:50 ora11g_p000_4426.trm

    -rw-r----- 1 oracle oinstall   30109 Sep 11 10:50 ora11g_p001_4428.trc

    -rw-r----- 1 oracle oinstall     524 Sep 11 10:50 ora11g_p001_4428.trm

    -rw-r----- 1 oracle oinstall    8430 Sep 11 10:50 ora11g_p002_4430.trc

    -rw-r----- 1 oracle oinstall     184 Sep 11 10:50 ora11g_p002_4430.trm

    -rw-r----- 1 oracle oinstall    8432 Sep 11 10:50 ora11g_p003_4432.trc

    -rw-r----- 1 oracle oinstall     204 Sep 11 10:50 ora11g_p003_4432.trm

     

     

    The trace files generated in the directory are raw 10046-format files. An excerpt:

     

     

    =====================

    PARSING IN CURSOR #13035136 len=51 dep=2 uid=0 oct=3 lid=0 tim=1378867817703043 hv=1523794037 ad='360b079c' sqlid='b1wc53ddd6h3p'

    select audit$,options from procedure$ where obj#=:1

    END OF STMT

    PARSE #13035136:c=0,e=96,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,plh=1637390370,tim=1378867817703039

    EXEC #13035136:c=0,e=79,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,plh=1637390370,tim=1378867817703178

    FETCH #13035136:c=0,e=53,p=0,cr=3,cu=0,mis=0,r=1,dep=2,og=4,plh=1637390370,tim=1378867817703248

    STAT #13035136 id=1 cnt=1 pid=0 pos=1 obj=221 op='TABLE ACCESS BY INDEX ROWID PROCEDURE$ (cr=3 pr=0 pw=0 time=53 us cost=2 size=47 card=1)'

    STAT #13035136 id=2 cnt=1 pid=1 pos=1 obj=231 op='INDEX UNIQUE SCAN I_PROCEDURE1 (cr=2 pr=0 pw=0 time=24 us cost=1 size=0 card=1)'

    CLOSE #13035136:c=0,e=7,dep=2,type=1,tim=1378867817703387

    =====================
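    Being ordinary 10046 raw files, they can be summarized with tkprof as usual; a sketch against the worker trace above (output file name and sort options are illustrative):

    tkprof ora11g_dw00_4424.trc dw00_report.txt sys=no sort=exeela,fchela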

     

     

    5. Conclusion

     

    Oracle Data Pump is mature and ever more widely accepted. The Trace parameter is largely a relic of particular historical needs, and occasions to use it should become rarer. As a way to study Data Pump internals, however, it remains quite useful.

    ##### Sample 0: importing a 20 GB+ table with CLOB columns over NETWORK_LINK is very slow


    -- Size (MB) of the LOB segments and LOB index segments of a given table
    select a.owner,
           a.table_name,
           a.column_name,
           b.segment_name,
           round(b.bytes / 1024 / 1024) size_mb
      from dba_lobs a, dba_segments b
     where a.segment_name = b.segment_name
       and a.owner = 'XXX'
       and a.table_name = 'YYYY'
    union all
    select a.owner,
           a.table_name,
           a.column_name,
           b.segment_name,
           round(b.bytes / 1024 / 1024)
      from dba_lobs a, dba_segments b
     where a.index_name = b.segment_name
       and a.owner = 'XXX'
       and a.table_name = 'YYYY';


    In this Document

      Symptoms
      Cause
      Solution
      References

    APPLIES TO:

    Oracle Database - Enterprise Edition - Version 10.1.0.3 and later
    Information in this document applies to any platform.
    ***Checked for relevance on 10-Feb-2016***

    SYMPTOMS

    A severe performance impact is experienced when using IMPDP with the NETWORK_LINK command line option to transfer a table which has 2 CLOB columns (900000 rows, average row length ~5 kB).

    CAUSE

    The cause of this problem has been identified in:
    Bug 4438573 - DATAPUMP RUNS VERY SLOW OVER NETWORK FOR TABLES WITH CLOBS
    closed with status "Not a Bug". 

    This is expected behavior when dealing with LOBs and the use of the NETWORK_LINK functionality. 
    The bug states:

    "IMPDP with NETWORK_LINK ultimately uses SQL of the form:
    INSERT INTO local_tab_name SELECT ... FROM remote_tab_name@network_link;


    Underneath this, the number of network round trips varies significantly for CLOB versus VARCHAR2 by necessity.

    For a table with VARCHAR2 columns the remote fetches can pull back several rows in one go in a single packet.

    For a table with CLOB columns the remote fetches pull back several rows in one go but the CLOB columns return a LOB LOCATOR. A LOB locator is like a handle to the LOB itself. Each of these LOBs has to be requested and read individually resulting in a lot more network round trips and these add significantly to the time taken.

    Example:

    In some situation we get 8 rows back in each fetch, so for VARCHAR2 we send a fetch request and get back a large packet with 8 rows of data for all columns.

    In the CLOB case we send a fetch request and get back a packet with 8 rows of data which includes 3 LOB locators per row. We then have to send a LOB READ request for each of these LOBs to the remote site, and get back that LOB data. 8*3 = 24 extra round trips to get that data."

    SOLUTION

    As this is the way LOB access is implemented, the only workaround available is to avoid network access to remote LOBs by using a dump file instead of the NETWORK_LINK functionality.
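    In practice that means the classic two-step transfer; a hedged sketch (directory, table, and file names are placeholders):

    # 1. On the source: export the LOB table to a dump file
    expdp "/ as sysdba" directory=dumpdir tables=scott.lob_table dumpfile=lob_table.dmp

    # 2. Copy lob_table.dmp to the target server, then import it there
    impdp "/ as sysdba" directory=dumpdir dumpfile=lob_table.dmp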

    ##### Sample 2: importing several 20 GB tables (no CLOB columns) via NETWORK_LINK fills up the TEMP tablespace: ORA-1652: unable to extend temp segment (also ORA-30036)

    ----

    DataPump Network Mode Import Consumes Lots Of Temporary Segments In TEMP Tablespace

    Oracle Database - Enterprise Edition - Version 10.1.0.2 to 11.2.0.4 [Release 10.1 to 11.2]
    Information in this document applies to any platform.
    ***Checked for relevance on 27-May-2014***
    SYMPTOMS

    You try to import a huge table with DataPump import (IMPDP) using a network link. During this procedure, lots of temporary segments are allocated in the TEMP tablespace and the import job may fail with errors like:

    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    ORA-39171: Job is experiencing a resumable wait.
    ORA-1652: unable to extend temp segment by 128 in tablespace TEMP
    CAUSE

    The issue was investigated in
    Bug 10396489 - SUSPECT BEHAVIOR AT DATA PUMP NETWORK IMPORT OF A HUGE PARTITIONED TABLE
    closed with status 'Not a Bug' (expected behavior).
    This is happening because the import is using the APPEND hint for an insert statement in network mode import to load the data fast.

    Each parallel execution server allocates a new temporary segment and inserts data into that temporary segment. When a COMMIT runs (at the end of table/partition), the parallel execution coordinator merges the new temporary segments into the primary table segment, where it is visible to users.

    SOLUTION

    1. Increase the TEMP tablespace size

    - OR -

    2. Generate export dumps with expdp on the source database and then import the dump files on the target database with impdp, instead of using the network-mode import.
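    A sketch of option 1 (the file name and sizes are placeholders):

    -- Enlarge TEMP before the network-mode import
    ALTER TABLESPACE temp ADD TEMPFILE '/u01/oradata/temp02.dbf'
      SIZE 8G AUTOEXTEND ON NEXT 1G MAXSIZE 32767M;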

    ##### Sample 3: importing several 20 GB tables (no CLOB columns) with impdp fills up the UNDO tablespace: ORA-30036

    Run Out Of Space On UNDO Tablespace Using DataPump Import/Export (Doc ID 735366.1)

    GOAL

    With the old import utility (imp) there is the option of using the parameters BUFFER and COMMIT=Y.

    That way, there are lower chances of running into issues with the UNDO tablespace. Is there anything similar in Import DataPump or it's necessary to increase the UNDO tablespace?

    A typical case where these issues appear is using DataPump to re-organize tables.

    SOLUTION

    Unlike the traditional Export and Import utilities, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, DataPump needs no tuning to achieve maximum performance.

    DataPump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

    However, you can receive the error:

    ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOTBS1'

    during the Import (impdp) if indexes are present in some cases.

    Impdp maintains indexes during import by default and does not use direct_path if tables and indexes are already created. However, if there is no index to enforce constraints and you specify:

    ACCESS_METHOD=DIRECT_PATH

    with the DataPump import command line, DataPump can use direct path method to do the import.

    To get around potential issues with the UNDO tablespace in this case:

    - load data by direct path by disabling primary key constraint (using ALTER TABLE ... MODIFY CONSTRAINT ... DISABLE NOVALIDATE) and using access_method=direct_path.
    - after loading data, enable primary key constraint (using ALTER TABLE ... MODIFY CONSTRAINT ... ENABLE VALIDATE)
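    A hedged sketch of that sequence (the table and constraint names are placeholders):

    -- 1. Disable the primary key so direct path can be used
    ALTER TABLE scott.big_table MODIFY CONSTRAINT big_table_pk DISABLE NOVALIDATE;

    -- 2. Import with the direct path access method to minimize undo
    --    (run from the OS shell)
    impdp "/ as sysdba" directory=dumpdir dumpfile=big_table.dmp access_method=direct_path

    -- 3. Re-enable and validate the primary key afterwards
    ALTER TABLE scott.big_table MODIFY CONSTRAINT big_table_pk ENABLE VALIDATE;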

     ############

    Error ORA-30036 DataPump Import (IMPDP) Exhausts Undo Tablespace (Doc ID 727894.1)

    In this Document
    Symptoms
    Changes
    Cause
    Solution
    References
    APPLIES TO:

    Oracle Database - Enterprise Edition - Version 10.1.0.2 and later
    Information in this document applies to any platform.

    SYMPTOMS

    The import DataPump session completes with the following errors:

    ORA-31693: Table data object "[schema]"."[table-name]" failed to load/unload and is being skipped due to error:
    ORA-30032: the suspended (resumable) statement has timed out
    ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOTBS1'
    Job "[user]"."SYS_IMPORT_TABLE_01" completed with 141 error(s) at 01:15:34
    This indicates that ROLLBACK was being performed during the time in which no progress was made. It appears there is excessive UNDO being generated.


    CHANGES

    CAUSE

    Excess undo generation can occur when there is a Primary Key (PK) constraint present on the system. Import DataPump will perform index maintenance and this can increase undo usage especially if there is other DML occurring on the database.


    SOLUTION

    Disable constraints for Primary Keys (PK) on the database during import datapump load. This will reduce undo as index maintenance will not be performed.


    REFERENCES

    NOTE:735366.1 - Run Out Of Space On UNDO Tablespace Using DataPump Import/Export

    NOTE:1670349.1 - Import DataPump - How To Limit The Amount Of UNDO Generation of an IMPDP job ?

    ##### Diagnostic SQL: srdc_impdp_performance.sql (gathers information for IMPDP performance issues)

    REM srdc_impdp_performance.sql - Gather Information for IMPDP Performance Issues
    define SRDCNAME='IMPDP_PERFORMANCE'
    SET MARKUP HTML ON PREFORMAT ON
    set TERMOUT off FEEDBACK off verify off TRIMSPOOL on HEADING off
    set lines 132 pages 10000
    COLUMN SRDCSPOOLNAME NOPRINT NEW_VALUE SRDCSPOOLNAME
    select 'SRDC_'||upper('&&SRDCNAME')||'_'||upper(instance_name)||'_'||to_char(sysdate,'YYYYMMDD_HH24MISS') SRDCSPOOLNAME from v$instance;
    set TERMOUT on MARKUP html preformat on
    REM
    spool &&SRDCSPOOLNAME..htm
    select '+----------------------------------------------------+' from dual
    union all
    select '| Diagnostic-Name: '||'&&SRDCNAME' from dual
    union all
    select '| Timestamp: '||to_char(systimestamp,'YYYY-MM-DD HH24:MI:SS TZH:TZM') from dual
    union all
    select '| Machine: '||host_name from v$instance
    union all
    select '| Version: '||version from v$instance
    union all
    select '| DBName: '||name from v$database
    union all
    select '| Instance: '||instance_name from v$instance
    union all
    select '+----------------------------------------------------+' from dual
    /

    set HEADING on MARKUP html preformat off
    REM === -- end of standard header -- ===

    set concat "#"
    SET PAGESIZE 9999
    SET LINESIZE 256
    SET TRIMOUT ON
    SET TRIMSPOOL ON
    Column sid format 99999 heading "SESS|ID"
    Column serial# format 9999999 heading "SESS|SER|#"
    Column session_id format 99999 heading "SESS|ID"
    Column session_serial# format 9999999 heading "SESS|SER|#"
    Column event format a40
    Column total_waits format 9,999,999,999 heading "TOTAL|TIME|WAITED|MICRO"
    Column pga_used_mem format 9,999,999,999
    Column pga_alloc_mem format 9,999,999,999
    Column status heading 'Status' format a20
    Column timeout heading 'Timeout' format 999999
    Column error_number heading 'Error Number' format 999999
    Column error_msg heading 'Message' format a44
    Column sql_text heading 'Current SQL statement' format a44
    Column Number_of_objects format 99999999
    Column object_type format a35
    ALTER SESSION SET nls_date_format='DD-MON-YYYY HH24:MI:SS';

    SET MARKUP HTML ON PREFORMAT ON

    --====================Retrieve sid, serial# information for the active DataPump process(es)===========================
    SET HEADING OFF
    SELECT '=================================================================================================================================' FROM dual
    UNION ALL
    SELECT 'Determine sid, serial# details for the active DataPump process(es):' FROM dual
    UNION ALL
    SELECT '=================================================================================================================================' FROM dual;
    SET HEADING ON
    set feedback on
    col program for a38
    col username for a10
    col spid for a7
    select to_char(sysdate,'YYYY-MM-DD HH24:MI:SS') "DATE", s.program, s.sid,
    s.status, s.username, d.job_name, p.spid, s.serial#, p.pid
    from v$session s, v$process p, dba_datapump_sessions d
    where p.addr=s.paddr and s.saddr=d.saddr and
    (UPPER (s.program) LIKE '%DM0%' or UPPER (s.program) LIKE '%DW0%');
    set feedback off

    --====================Retrieve sid, serial#, PGA details for the active DataPump process(es)===========================
    SET HEADING OFF
    SELECT '=================================================================================================================================' FROM dual
    UNION ALL
    SELECT 'Determine PGA details for the active DataPump process(es):' FROM dual
    UNION ALL
    SELECT '=================================================================================================================================' FROM dual;
    SET HEADING ON
    set feedback on
    SELECT sid, s.serial#, p.PGA_USED_MEM,p.PGA_ALLOC_MEM
    FROM v$process p, v$session s
    WHERE p.addr = s.paddr and
    (UPPER (s.program) LIKE '%DM0%' or UPPER (s.program) LIKE '%DW0%');
    set feedback off


    --====================Retrieve all wait events and time in wait for the running DataPump process(es)====================
    SET HEADING OFF
    SELECT '=================================================================================================================================' FROM dual
    UNION ALL
    SELECT 'All wait events and time in wait for the active DataPump process(es):' FROM dual
    UNION ALL
    SELECT '=================================================================================================================================' FROM dual;
    SET HEADING ON
    select session_id, session_serial#, Event, sum(time_waited) total_waits
    from v$active_session_history
    where sample_time > sysdate - 1 and
    (UPPER (program) LIKE '%DM0%' or UPPER (program) LIKE '%DW0%') and
    session_id in (select sid from v$session where UPPER (program) LIKE '%DM0%' or UPPER (program) LIKE '%DW0%') and
    session_state = 'WAITING' And time_waited > 0
    group by session_id, session_serial#, Event
    order by session_id, session_serial#, total_waits desc;

    --====================DataPump progress - retrieve current sql id and statement====================
    SET HEADING OFF
    SELECT '=================================================================================================================================' FROM dual
    UNION ALL
    SELECT 'DataPump progress - retrieve current SQL id and statement:' FROM dual
    UNION ALL
    SELECT '=================================================================================================================================' FROM dual;
    SET HEADING ON
    select sysdate, a.sid, a.sql_id, a.event, b.sql_text
    from v$session a, v$sql b
    where a.sql_id=b.sql_id and
    (UPPER (a.program) LIKE '%DM0%' or UPPER (a.program) LIKE '%DW0%')
    order by a.sid desc;

    SET HEADING OFF MARKUP HTML OFF
    SET SERVEROUTPUT ON FORMAT WRAP

    declare
    v_ksppinm varchar2(30);
    CURSOR c_fix IS select v.KSPPSTVL value FROM x$ksppi n, x$ksppsv v WHERE n.indx = v.indx and n.ksppinm = v_ksppinm;
    CURSOR c_count is select count(*) from DBA_OPTSTAT_OPERATIONS where operation in ('gather_dictionary_stats','gather_fixed_objects_stats');
    CURSOR c_stats is select operation, START_TIME, END_TIME from DBA_OPTSTAT_OPERATIONS
    where operation in ('gather_dictionary_stats','gather_fixed_objects_stats') order by 2 desc;
    v_long_op_flag number := 0 ;
    v_target varchar2(100);
    v_sid number;
    v_totalwork number;
    v_opname varchar2(200);
    v_sofar number;
    v_time_remain number;
    stmt varchar2(2000);
    v_fix c_fix%ROWTYPE;
    v_count number;

    begin
    stmt:='select count(*) from v$session_longops where sid in (select sid from v$session where UPPER (program) LIKE '||
    '''%DM0%'''||' or UPPER (program) LIKE '||'''%DW0%'')'||' and totalwork <> sofar';
    DBMS_OUTPUT.PUT_LINE ('<pre>');
    dbms_output.put_line ('=================================================================================================================================');
    dbms_output.put_line ('Check v$session_longops - DataPump pending work');
    dbms_output.put_line ('=================================================================================================================================');
    execute immediate stmt into v_long_op_flag;
    if (v_long_op_flag > 0) then
    dbms_output.put_line ('The number of long running DataPump processes is: '|| v_long_op_flag);
    dbms_output.put_line (chr (10));
    for longop in (select sid,target,opname, sum(totalwork) totwork, sum(sofar) sofar, sum(totalwork-sofar) blk_remain, Round(sum(time_remaining/60),2) time_remain
    from v$session_longops where sid in (select sid from v$session where UPPER (program) LIKE '%DM0%' or UPPER (program) LIKE '%DW0%') and
    opname NOT LIKE '%aggregate%' and totalwork <> sofar group by sid,target,opname) loop
    dbms_output.put_line (Rpad ('DataPump SID', 40, ' ')||chr (9)||':'||chr (9)||longop.sid);
    dbms_output.put_line (Rpad ('Object being read', 40, ' ')||chr (9)||':'||chr (9)||longop.target);
    dbms_output.put_line (Rpad ('Operation being executed', 40, ' ')||chr (9)||':'||chr (9)||longop.opname);
    dbms_output.put_line (Rpad ('Total blocks to be read', 40, ' ')||chr (9)||':'||chr (9)||longop.totwork);
    dbms_output.put_line (Rpad ('Total blocks already read', 40, ' ')||chr (9)||':'||chr (9)||longop.sofar);
    dbms_output.put_line (Rpad ('Remaining blocks to be read', 40, ' ')||chr (9)||':'||chr (9)||longop.blk_remain);
    dbms_output.put_line (Rpad ('Estimated time remaining for the process', 40, ' ')||chr (9)||':'||chr (9)||longop.time_remain|| ' Minutes');
    dbms_output.put_line (chr (10));
    end Loop;
    else
    DBMS_OUTPUT.PUT_LINE ('No DataPump session is found in v$session_longops');
    dbms_output.put_line (chr (10));
    end If;

    DBMS_OUTPUT.PUT_LINE ('=================================Have Dictionary and Fixed Objects statistics been gathered?====================================');
    open c_count;
    fetch c_count into v_count;
    if v_count>0 then
    BEGIN
    DBMS_OUTPUT.PUT_LINE (rpad ('OPERATION', 30)||' '||rpad ('START_TIME', 32)||' '||rpad ('END_TIME', 32));
    DBMS_OUTPUT.PUT_LINE (rpad ('--------------------------', 30)||' '||rpad ('-----------------------------', 32)||' '||rpad ('-----------------------------', 32));
    FOR v_stats IN c_stats LOOP
    DBMS_OUTPUT.PUT_LINE (rpad (v_stats.operation, 30)||' '||rpad (v_stats.start_time, 32)||' '||rpad (v_stats.end_time, 32));
    END LOOP;
    end;
    else
    DBMS_OUTPUT.PUT_LINE ('Dictionary and fixed objects statistics have not been gathered for this database.');
    dbms_output.put_line (chr (10));
    END IF;
    dbms_output.put_line ('=================================================================================================================================');
    dbms_output.put_line (chr (10));

    for i in 1..6 loop
    if i = 1 then
    v_ksppinm := 'fixed_date';
    elsif i = 2 then
    v_ksppinm := 'aq_tm_processes';
    elsif i = 3 then
    v_ksppinm := 'compatible';
    elsif i = 4 then
    v_ksppinm := 'optimizer_features_enable';
    elsif i = 5 then
    v_ksppinm := 'optimizer_index_caching';
    elsif i = 6 then
    v_ksppinm := 'optimizer_index_cost_adj';
    end if;

    dbms_output.put_line ('=================================================================================================================================');
    DBMS_OUTPUT.PUT_LINE ('Is the '||upper (v_ksppinm)||' parameter set?');
    dbms_output.put_line ('=================================================================================================================================');

    open c_fix;
    fetch c_fix into v_fix;
    close c_fix;
    if nvl (to_char (v_fix.value), '1') = to_char ('1') then
    DBMS_OUTPUT.PUT_LINE ('No value is found for '||upper (v_ksppinm)||' parameter.');
    else
    DBMS_OUTPUT.PUT_LINE ('The '||upper (v_ksppinm)||' parameter is set for this database and the value is: '||v_fix.value);
    end if;
    dbms_output.put_line('=================================================================================================================================');
    dbms_output.put_line (chr (10));
    end loop;
    end;
    /

    set feedback off
    begin
    dbms_output.put_line(chr(10));
    DBMS_OUTPUT.PUT_LINE ('=================================================Encountering space issues?======================================================');
    end;
    /

    begin
    dbms_output.put_line(chr(10));
    DBMS_OUTPUT.PUT_LINE ('Look at view DBA_RESUMABLE:');
    end;
    /

    set feedback on
    SET HEADING on
    set linesize 120
    set pagesize 120
    column name heading 'Name' format a20
    column status heading 'Status' format a20
    column timeout heading 'Timeout' format 999999
    column error_number heading 'Error Number' format 999999
    column error_msg heading 'Message' format a44

    select NAME,STATUS, TIMEOUT, ERROR_NUMBER, ERROR_MSG from DBA_RESUMABLE;

    set feedback off
    SET HEADING OFF

    begin
    dbms_output.put_line(chr(10));
    DBMS_OUTPUT.PUT_LINE ('Look at view DBA_OUTSTANDING_ALERTS:');
    end;
    /

    set feedback on
    SET HEADING on
    column object_name heading 'Object Name' format a14
    column object_type heading 'Object Type' format a14
    column reason heading 'Reason' format a40
    column suggested_action heading 'Suggested action' format a40

    select OBJECT_NAME,OBJECT_TYPE,REASON,SUGGESTED_ACTION from DBA_OUTSTANDING_ALERTS;

    set feedback off
    SET HEADING OFF
    SET LINESIZE 256
    begin
    dbms_output.put_line ('=================================================================================================================================');
    DBMS_OUTPUT.PUT_LINE ('</pre>');
    end;
    /

    spool off
    PROMPT
    PROMPT
    PROMPT REPORT GENERATED : &SRDCSPOOLNAME..htm

    exit

    ##### Sample 4: export/import of a table with a LOB column is much slower than one without


    APPLIES TO:

    Oracle Database - Personal Edition - Version 8.1.7.0 to 11.2.0.4 [Release 8.1.7 to 11.2]
    Oracle Database - Enterprise Edition - Version 8.1.7.0 to 11.2.0.4 [Release 8.1.7 to 11.2]
    Oracle Database - Standard Edition - Version 8.1.7.0 to 11.2.0.4 [Release 8.1.7 to 11.2]
    Information in this document applies to any platform.

    SYMPTOMS

    An export or import of a table with a Large Object (LOB) column, has slower performance than an export or import of a table without LOB columns.

    Tests done with table with CLOB and without CLOB. Both tables contained 500,000 rows of data.

                     No CLOB        No CLOB    With CLOB
    Version           DIRECT   CONVENTIONAL       column
    -----------------------------------------------------
    8.1.7.4.0           0:13           0:20         7:49
    9.2.0.4.0           0:14           0:18         7:37
    9.2.0.5.0           0:12           0:15         7:03
    10.1.0.2.0          0:16           0:31         7:15
    10.2.0.5.0          1:36           2:13         7:55
    11.1.0.7.0          0:20           0:10         2:46
    11.2.0.2.0          1:54           2:03        24:43

    NOTE:
    Above performance results should not be considered as a benchmark of the performance between different Oracle versions, as the test databases were located on different machines with different hardware, and the databases had a different parameter configuration. The main objective of these results is to give an indication of the difference in the time that is needed to export a table with a LOB column, and a table without a LOB column.

    CHANGES

    You recently created tables that have Large Object (LOB) columns.

    CAUSE

    This is expected behavior. The rows of a table with a LOB column are fetched one row at a time. Also note that rows in tables that contain objects and LOBs will be exported using conventional path, even if direct path was specified. Also during import, the rows of tables containing LOB columns are inserted individually.

    SOLUTION

    Although the performance of the export cannot be improved directly, possible alternative solutions are:

    • If not required, do not use LOB columns.

      - OR -

    • Use Transport Tablespace export instead of full/user/table level export.

      - OR -

    • Upgrade to Oracle 10g and use Export DataPump and Import DataPump.

    ##### Sample 5: migrating the 12-million-row docversion table between databases


    --> Source database IP: 10.198.109.1/2, user: os1opr, table: docversion (about 12 million rows).
    A new user and a new table will be created on the source database to hold part of the docversion table's data (about 12 million rows will be inserted).

    The table is about 8 GB, holds roughly 13.25 million rows in total, and has 5 indexes.

    SELECT bytes/1024/1024/1024 FROM dba_segments WHERE segment_name = UPPER('docversion');
    -- 8.5078125 GB

    SELECT COUNT(*) FROM os1opr.docversion;
    -- 13259053 rows (about 13.25 million)

    1. The source database's tablespaces are listed below; choose a large tablespace to hold the table.

    #   Tablespace            Max Size (GB)   Used %
    1   PE_DATA                    31.99998    42.77
    2   PE_INDEX                   31.99998    36.20
    3   OBJECTSTORE1_DATA          63.99997    34.23
    4   SYSTEM                     31.99998     6.15
    5   SYSAUX                     31.99998     4.26
    6   UNDOTBS1                   31.99998     0.13
    7   GCD_DATA                   31.99998     0.03
    8   USERS                      31.99998     0
    9   OBJECTSTORE1_INDEX         31.99998     0

    2. The new (target) database's tablespaces are listed below; choose a large tablespace to hold the table.

    #   Tablespace    Max Size (GB)   Used %
    1   IVMS_DATA          63.99997    68.74
    2   IVMS_INDX          31.99998    22.54
    3   SYSTEM             31.99998     6.44
    4   SYSAUX             31.99998     3.45
    5   UNDOTBS2           31.99998     0.13
    6   UNDOTBS1           31.99998     0.11
    7   UNDOTBS            31.99998     0.01
    8   USERS              31.99998     0



    The DBA's recommendations follow; run a complete rehearsal in the test environment before doing this for real:



    1. At 8 GB of data, the redo generated is not expected to exceed 16 GB. Insert in batches, committing every 500K to 1M rows; do not insert 10M+ rows in a single transaction before committing (a sketch follows this list).
    2. Consider creating the table first, loading the data, and only then creating the indexes.
    3. While testing, watch the consumption of the temporary and undo tablespaces to avoid running out of space in either.
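    A minimal PL/SQL sketch of recommendation 1 (the target user and table name are hypothetical; the batch size is illustrative):

    DECLARE
      CURSOR c_src IS SELECT * FROM os1opr.docversion;
      TYPE t_tab IS TABLE OF os1opr.docversion%ROWTYPE;
      l_rows t_tab;
    BEGIN
      OPEN c_src;
      LOOP
        FETCH c_src BULK COLLECT INTO l_rows LIMIT 500000;  -- one batch per commit
        EXIT WHEN l_rows.COUNT = 0;
        FORALL i IN 1 .. l_rows.COUNT
          INSERT INTO newuser.docversion_copy VALUES l_rows(i);
        COMMIT;  -- cap the undo/redo held by any single transaction
      END LOOP;
      CLOSE c_src;
    END;
    /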

  • Original post: https://www.cnblogs.com/feiyun8616/p/9272705.html