Oracle: a recyclebin grown too large causes insert logical reads to spike [study / test]

1. References (learning from DBA 范计杰)

     

    https://www.modb.pro/db/40753
    http://www.360doc.com/content/21/0121/05/70704971_958071999.shtml
Purging RECYCLEBIN Running Forever (Doc ID 2284986.1)

2. Overview

A customer's database had an insert statement that suddenly became slow over a certain period, with average logical reads per execution spiking to about 200,000. The root cause was that, at insert time, free space was insufficient and Oracle recursively purged objects from the recycle bin. The recursive delete from RECYCLEBIN$ had no suitable index and ran as a full scan; normally that is harmless, but in this customer's database SYS.RECYCLEBIN$ had grown to 700 MB, which ultimately caused the problem.
Since this is a study case that I did not handle myself, it needs to be worked through again from scratch.

Facts of the original case:
1. The customer's insert statement suddenly became slow.
2. Diagnosis later showed that, among the recursive SQL, a delete from RECYCLEBIN$ had very high logical reads; the problem was resolved by creating a suitable index and gathering statistics on the base table.

Questions to answer:
1. Under what conditions does this problem occur?
2. What is the fastest way to confirm it? (A diagnostic sketch follows this list.)
3. How can it be avoided?
4. How do an ordinary drop table and drop table ... purge relate to the data in the RECYCLEBIN$ base table?

The tests below work through these questions and finish with a short summary.
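A minimal diagnostic sketch (my own addition, not from the original case): when an insert suddenly slows down, check whether the recursive recycle-bin statements sit near the top of v$sql by buffer gets, and whether SYS.RECYCLEBIN$ has grown unusually large.

    -- Recursive recycle-bin SQL and its logical-read cost
    select sql_id, executions, buffer_gets,
           round(buffer_gets / nullif(executions, 0)) gets_per_exec,
           substr(sql_text, 1, 60) sql_text
      from v$sql
     where lower(sql_text) like '%recyclebin$%'
     order by buffer_gets desc;

    -- Size of the recycle-bin base table vs. the rows it currently holds
    select s.bytes/1024/1024 mb,
           (select count(*) from sys.recyclebin$) rows_in_bin
      from dba_segments s
     where s.owner = 'SYS' and s.segment_name = 'RECYCLEBIN$';

If the delete from RECYCLEBIN$ shows tens of thousands of gets per execution while the segment is hundreds of MB, it is very likely the problem described here.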

3. Tests

     

3.1 How do drop table and drop table purge relate to the data in the RECYCLEBIN$ base table?

1) Empty the recycle bin
    SQL> select count(*) from RECYCLEBIN$;
      COUNT(*)
    ----------
            14
    SQL> purge dba_recyclebin;
    DBA Recyclebin purged.
    
    SQL>  select count(*) from RECYCLEBIN$;
      COUNT(*)
    ----------
             0
As shown, after the recycle bin is purged, the rows in the base table RECYCLEBIN$ are deleted.
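For reference, the purge command has several variants (standard commands, listed here only as a reminder; not all of them are used in the original test):

    purge recyclebin;           -- empty only the current user's recycle bin
    purge dba_recyclebin;       -- empty every user's recycle bin (requires suitable privileges)
    purge table a;              -- purge a single dropped table from the bin
    purge tablespace testtbs;   -- purge all recycle-bin objects stored in one tablespace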
    
    
    2)drop table xx purge;
    SQL> conn scott/tiger
    SQL> drop table a purge;
    Table dropped.
    SQL> show recyclebin;
(no rows in the recycle bin)
    
3) With a plain drop table (no purge), how many rows end up in the base table?
One? Two?
    SQL> create table a as select * from dba_objects;
    SQL> select count(extent_id) from user_extents where segment_name='A';
    COUNT(EXTENT_ID)
    ----------------
                  25
    SQL> drop table a;
    
    SQL>  select count(*) from RECYCLEBIN$;
    
      COUNT(*)
    ----------
             1
    
    SQL> select * from RECYCLEBIN$;
    
          OBJ#     OWNER# ORIGINAL_NAME                     OPERATION      TYPE#
    ---------- ---------- -------------------------------- ---------- ----------
           TS#      FILE#     BLOCK# DROPTIME     DROPSCN
    ---------- ---------- ---------- --------- ----------
    PARTITION_NAME                        FLAGS    RELATED         BO   PURGEOBJ
    -------------------------------- ---------- ---------- ---------- ----------
      BASE_TS# BASE_OWNER#      SPACE       CON#     SPARE1     SPARE2     SPARE3
    ---------- ----------- ---------- ---------- ---------- ---------- ----------
         90660         83 A                                         0          1
             4          4       4154 17-JAN-21   35784550
                                             30      90660      90660      90660
                                 1280          0
    
    SQL> show recyclebin;
    ORIGINAL NAME    RECYCLEBIN NAME                OBJECT TYPE  DROP TIME
    ---------------- ------------------------------ ------------ -------------------
    A                BIN$uQgoIhFLEGngU0U8qMB9Sg==$0 TABLE        2021-01-17:00:41:12

So a table consisting of multiple extents still occupies only one row in the base table?

4) Repeat the drop test with a partitioned table

    SQL> select SEGMENT_NAME,PARTITION_NAME from user_segments where segment_name='RANGE_PART_TAB1' order by 2;

SEGMENT_NAME                   PARTITION_NAME
------------------------------ ------------------------------
RANGE_PART_TAB1                P1
······
RANGE_PART_TAB1                P_MAX

    13 rows selected.

    SQL> drop table RANGE_PART_TAB1;

    SQL> show recyclebin;
ORIGINAL NAME    RECYCLEBIN NAME                OBJECT TYPE  DROP TIME
---------------- ------------------------------ ------------ -------------------
A                BIN$uQgoIhFLEGngU0U8qMB9Sg==$0 TABLE        2021-01-17:00:41:12
RANGE_PART_TAB1  BIN$uQgoIhFMEGngU0U8qMB9Sg==$0 TABLE        2021-01-17:02:49:25

    
    

    SQL> select count(*) from RECYCLEBIN$;

    COUNT(*)
    ----------
    15

In other words, a plain drop of a non-partitioned table adds a single row to RECYCLEBIN$, while dropping the partitioned table here pushed the count from 1 to 15: 14 new rows, matching the 13 partition segments plus the entry for the table itself.
So when a database frequently drops tables, or drops large partitioned tables, without the purge keyword and without the recycle bin ever being emptied, the recycle-bin base table can easily grow very large. A monitoring sketch follows.
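A simple way to keep an eye on this (my own sketch, using the documented DBA views rather than the SYS base table):

    -- How many dropped objects each owner keeps in the recycle bin, and how much space they hold
    select owner, count(*) objects, sum(space) blocks
      from dba_recyclebin
     group by owner
     order by objects desc;

    -- Size of the recycle-bin base table and its indexes
    select segment_name, segment_type, bytes/1024/1024 mb
      from dba_segments
     where segment_name like 'RECYCLEBIN%';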

3.2 Reproducing a slow insert whose root cause is a slow recursive delete against the recycle-bin base table

     

First, inflate the base table segment. The goal is that a later insert, unable to find enough free space, will trigger the recycle-bin cleanup; if the delete against the base table is slow, the insert is slowed down with it.

    SQL> select count(*) from recyclebin$;
      COUNT(*)
    ----------
            15
    
    SQL> select bytes/1024/1024 from user_segments where segment_name='RECYCLEBIN$';
    BYTES/1024/1024
    ---------------
              .0625
    
    SQL>insert into sys.RECYCLEBIN$  select t.* from sys.RECYCLEBIN$ t connect by level<2000000
                    *
    ERROR at line 1:
    ORA-01013: user requested cancel of current operation
    
    SQL> select count(*) from recyclebin$;
      COUNT(*)
    ----------
            15

SQL> select segment_name,segment_type,bytes/1024/1024 from user_segments where segment_name like 'RECYCLEBIN%';

SEGMENT_NAME              SEGMENT_TYPE       BYTES/1024/1024
------------------------- ------------------ ---------------
RECYCLEBIN$               TABLE                          480
RECYCLEBIN$_OBJ           INDEX                          160
RECYCLEBIN$_TS            INDEX                          136
RECYCLEBIN$_OWNER         INDEX                          144

The base table segment has now reached 480 MB. (The big insert was cancelled and the row count is still 15, but the extents allocated while it ran are not released by the rollback, so the segment keeps that size.)
    SQL> purge dba_recyclebin;

SQL> select segment_name,segment_type,bytes/1024/1024 from user_segments where segment_name like 'RECYCLEBIN%';

SEGMENT_NAME              SEGMENT_TYPE       BYTES/1024/1024
------------------------- ------------------ ---------------
RECYCLEBIN$               TABLE                          480
RECYCLEBIN$_OBJ           INDEX                          160
RECYCLEBIN$_TS            INDEX                          136
RECYCLEBIN$_OWNER         INDEX                          144

No change. Purging the recycle bin deletes the rows but does not shrink the base table segment: the space stays allocated below the high-water mark.

      

Create a very small tablespace:

    SQL> create tablespace testtbs datafile '/11.2.0.4/app/oracle/tt/testtbs.dbf' size 5m;
    SQL> create table scott.tmp1 tablespace testtbs as select * from dba_objects where rownum<=10000;
SQL> SELECT OWNER,SEGMENT_NAME,SEGMENT_TYPE,TABLESPACE_NAME,BYTES/1024/1024,EXTENTS FROM DBA_SEGMENTS WHERE OWNER='SCOTT' AND SEGMENT_NAME='TMP1';

OWNER      SEGMENT_NAME    SEGMENT_TYPE    TABLESPACE_NAME    BYTES/1024/1024    EXTENTS
---------- --------------- --------------- ------------------ --------------- ----------
SCOTT      TMP1            TABLE           TESTTBS                          2         17

SQL> create table SCOTT.tmp2 tablespace testtbs as select * from dba_objects where 1=0;

SQL> drop table SCOTT.tmp1;

SQL> SELECT * FROM DBA_FREE_SPACE WHERE TABLESPACE_NAME='TESTTBS';

TABLESPACE_NAME                FILE_ID   BLOCK_ID      BYTES     BLOCKS RELATIVE_FNO
------------------------------ ------- ---------- ---------- ---------- ------------
TESTTBS                             10        384    2097152        256           10
TESTTBS                             10        128      65536          8           10
TESTTBS                             10        136      65536          8           10
TESTTBS                             10        144      65536          8           10
TESTTBS                             10        152      65536          8           10
TESTTBS                             10        160      65536          8           10
TESTTBS                             10        168      65536          8           10
TESTTBS                             10        176      65536          8           10
TESTTBS                             10        184      65536          8           10
TESTTBS                             10        192      65536          8           10
TESTTBS                             10        200      65536          8           10
TESTTBS                             10        208      65536          8           10
TESTTBS                             10        216      65536          8           10
TESTTBS                             10        224      65536          8           10
TESTTBS                             10        232      65536          8           10
TESTTBS                             10        240      65536          8           10
TESTTBS                             10        248      65536          8           10
TESTTBS                             10        256    1048576        128           10

18 rows selected.

SQL> SELECT SUM(BYTES)/1024/1024 FROM DBA_FREE_SPACE WHERE TABLESPACE_NAME='TESTTBS';

SUM(BYTES)/1024/1024
--------------------
                   4

Reading dba_free_space: block_id 248 is a run of 8 blocks (248 + 7 = 255), and the next extent starts right after it at block_id 256 with 128 contiguous blocks. So the datafile header takes the first 1 MB; the 2 MB originally allocated to tmp1 is reported as free again after the drop; and the remaining 2 MB of contiguous, never-allocated space in the datafile starts at block_id 384.
Per the test above, inserting 10,000 rows consumes about 2 MB, so a first insert of 10,000 rows can be satisfied straight from free space.
A second insert of 10,000 rows cannot: the data of the dropped table has not really been deleted, it is still in the recycle bin, so Oracle has to run its internal recursive SQL to purge it, and that SQL, which includes the delete against the base table, runs very slowly. (The dropped object, and whether it can be purged for space, can be seen with the query sketched below.)
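A quick way to see which dropped objects still hold space in the small tablespace and are eligible for automatic purging when space runs out (my own sketch, via the documented DBA_RECYCLEBIN view):

    select owner, original_name, object_name, can_purge, space
      from dba_recyclebin
     where ts_name = 'TESTTBS';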

Compare the execution efficiency of the insert:

    SQL> select count(*) from recyclebin$;
      COUNT(*)
    ----------
             1
SQL> oradebug setmypid
Statement processed.
SQL> oradebug event 10046 trace name context forever,level 12
Statement processed.
SQL> oradebug tracefile_name
/u01/app/oracle/diag/rdbms/tt/tt/trace/tt_ora_3996.trc

SQL> set timing on
SQL> insert /*+ gather_plan_statistics */ into scott.tmp2 select * from dba_objects where rownum<=10000;
    10000 rows created.
    Elapsed: 00:00:00.28

SQL> set linesize 200 pagesize 999
SQL> select * from table(dbms_xplan.display_cursor(null,null,'advanced -PROJECTION -bytes iostats,last'));

    Plan hash value: 310638681

     

------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name        | Starts | E-Rows | Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers | Reads  |
------------------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                |             |      1 |        |   303 (100)|          |      0 |00:00:00.10 |    2580 |     15 |
|   1 |  LOAD TABLE CONVENTIONAL        |             |      1 |        |            |          |      0 |00:00:00.10 |    2580 |     15 |
|*  2 |   COUNT STOPKEY                 |             |      1 |        |            |          |  10000 |00:00:00.07 |     124 |      0 |
|   3 |    VIEW                         | DBA_OBJECTS |      1 |  86957 |   303   (1)| 00:00:04 |  10000 |00:00:00.07 |     124 |      0 |
|   4 |     UNION-ALL                   |             |      1 |        |            |          |  10000 |00:00:00.07 |     124 |      0 |
|*  5 |      TABLE ACCESS BY INDEX ROWID| SUM$        |      0 |      1 |     1   (0)| 00:00:01 |      0 |00:00:00.01 |       0 |      0 |
|*  6 |       INDEX UNIQUE SCAN         | I_SUM$_1    |      0 |      1 |     0   (0)|          |      0 |00:00:00.01 |       0 |      0 |
|   7 |      TABLE ACCESS BY INDEX ROWID| OBJ$        |      0 |      1 |     3   (0)| 00:00:01 |      0 |00:00:00.01 |       0 |      0 |
|*  8 |       INDEX RANGE SCAN          | I_OBJ1      |      0 |      1 |     2   (0)| 00:00:01 |      0 |00:00:00.01 |       0 |      0 |
|*  9 |      FILTER                     |             |      1 |        |            |          |  10000 |00:00:00.04 |     124 |      0 |
|* 10 |       HASH JOIN                 |             |      1 |  86955 |   299   (2)| 00:00:04 |  10000 |00:00:00.03 |     124 |      0 |
|  11 |        TABLE ACCESS FULL        | USER$       |      1 |     99 |     3   (0)| 00:00:01 |    104 |00:00:00.01 |       6 |      0 |
|* 12 |        HASH JOIN                |             |      1 |  86955 |   295   (1)| 00:00:04 |  10000 |00:00:00.02 |     118 |      0 |
|  13 |         INDEX FULL SCAN         | I_USER2     |      1 |     99 |     1   (0)| 00:00:01 |    104 |00:00:00.01 |       1 |      0 |
|* 14 |         TABLE ACCESS FULL       | OBJ$        |      1 |  86955 |   294   (1)| 00:00:04 |  10000 |00:00:00.01 |     117 |      0 |
|  15 |       NESTED LOOPS              |             |      0 |      1 |     2   (0)| 00:00:01 |      0 |00:00:00.01 |       0 |      0 |
|* 16 |        INDEX SKIP SCAN          | I_USER2     |      0 |      1 |     1   (0)| 00:00:01 |      0 |00:00:00.01 |       0 |      0 |
|* 17 |        INDEX RANGE SCAN         | I_OBJ4      |      0 |      1 |     1   (0)| 00:00:01 |      0 |00:00:00.01 |       0 |      0 |
|  18 |      NESTED LOOPS               |             |      0 |      2 |     4   (0)| 00:00:01 |      0 |00:00:00.01 |       0 |      0 |
|  19 |       TABLE ACCESS FULL         | LINK$       |      0 |      2 |     2   (0)| 00:00:01 |      0 |00:00:00.01 |       0 |      0 |
|  20 |       TABLE ACCESS CLUSTER      | USER$       |      0 |      1 |     1   (0)| 00:00:01 |      0 |00:00:00.01 |       0 |      0 |
|* 21 |        INDEX UNIQUE SCAN        | I_USER#     |      0 |      1 |     0   (0)|          |      0 |00:00:00.01 |       0 |      0 |
------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(ROWNUM<=10000)
   5 - filter(BITAND("S"."XPFLAGS",8388608)=8388608)
   6 - access("S"."OBJ#"=:B1)
   8 - access("EO"."OBJ#"=:B1)
   9 - filter((("O"."TYPE#"<>4 AND "O"."TYPE#"<>5 AND "O"."TYPE#"<>7 AND "O"."TYPE#"<>8 AND "O"."TYPE#"<>9 AND "O"."TYPE#"<>11
              AND "O"."TYPE#"<>12 AND "O"."TYPE#"<>13 AND "O"."TYPE#"<>14 AND "O"."TYPE#"<>22 AND "O"."TYPE#"<>87 AND "O"."TYPE#"<>88) OR
              BITAND("U"."SPARE1",16)=0 OR (INTERNAL_FUNCTION("O"."TYPE#") AND ((SYS_CONTEXT('userenv','current_edition_name')='ORA$BASE' AND
              "U"."TYPE#"<>2) OR ("U"."TYPE#"=2 AND "U"."SPARE2"=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))) OR  IS NOT NULL))))
  10 - access("O"."SPARE3"="U"."USER#")
  12 - access("O"."OWNER#"="U"."USER#")
  14 - filter(("O"."TYPE#"<>10 AND "O"."NAME"<>'_NEXT_OBJECT' AND "O"."NAME"<>'_default_auditing_options_' AND "O"."LINKNAME" IS
              NULL AND BITAND("O"."FLAGS",128)=0))
  16 - access("U2"."TYPE#"=2 AND "U2"."SPARE2"=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id')))
       filter(("U2"."TYPE#"=2 AND "U2"."SPARE2"=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))))
  17 - access("O2"."DATAOBJ#"=:B1 AND "O2"."TYPE#"=88 AND "O2"."OWNER#"="U2"."USER#")
  21 - access("L"."OWNER#"="U"."USER#")

Insert again:

    SQL>   insert /*+gather_plan_statistics */into scott.tmp2 select * from dba_objects where rownum<=10000;
    10000 rows created.
    Elapsed: 00:00:00.63

The elapsed times do not look very different because the data volume is small, so let's look at the execution plan.
Plan hash value: 310638681

------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name        | Starts | E-Rows | Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers | Reads  |
------------------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                |             |      1 |        |   303 (100)|          |      0 |00:00:00.63 |    123K |     81 |
|   1 |  LOAD TABLE CONVENTIONAL        |             |      1 |        |            |          |      0 |00:00:00.63 |    123K |     81 |
|*  2 |   COUNT STOPKEY                 |             |      1 |        |            |          |  10000 |00:00:00.07 |     124 |      0 |
|   3 |    VIEW                         | DBA_OBJECTS |      1 |  86957 |   303   (1)| 00:00:04 |  10000 |00:00:00.07 |     124 |      0 |
|   4 |     UNION-ALL                   |             |      1 |        |            |          |  10000 |00:00:00.07 |     124 |      0 |
|*  5 |      TABLE ACCESS BY INDEX ROWID| SUM$        |      0 |      1 |     1   (0)| 00:00:01 |      0 |00:00:00.01 |       0 |      0 |
|*  6 |       INDEX UNIQUE SCAN         | I_SUM$_1    |      0 |      1 |     0   (0)|          |      0 |00:00:00.01 |       0 |      0 |
|   7 |      TABLE ACCESS BY INDEX ROWID| OBJ$        |      0 |      1 |     3   (0)| 00:00:01 |      0 |00:00:00.01 |       0 |      0 |
|*  8 |       INDEX RANGE SCAN          | I_OBJ1      |      0 |      1 |     2   (0)| 00:00:01 |      0 |00:00:00.01 |       0 |      0 |
|*  9 |      FILTER                     |             |      1 |        |            |          |  10000 |00:00:00.04 |     124 |      0 |
|* 10 |       HASH JOIN                 |             |      1 |  86955 |   299   (2)| 00:00:04 |  10000 |00:00:00.03 |     124 |      0 |
|  11 |        TABLE ACCESS FULL        | USER$       |      1 |     99 |     3   (0)| 00:00:01 |    104 |00:00:00.01 |       6 |      0 |
|* 12 |        HASH JOIN                |             |      1 |  86955 |   295   (1)| 00:00:04 |  10000 |00:00:00.02 |     118 |      0 |
|  13 |         INDEX FULL SCAN         | I_USER2     |      1 |     99 |     1   (0)| 00:00:01 |    104 |00:00:00.01 |       1 |      0 |
|* 14 |         TABLE ACCESS FULL       | OBJ$        |      1 |  86955 |   294   (1)| 00:00:04 |  10000 |00:00:00.01 |     117 |      0 |
|  15 |       NESTED LOOPS              |             |      0 |      1 |     2   (0)| 00:00:01 |      0 |00:00:00.01 |       0 |      0 |
|* 16 |        INDEX SKIP SCAN          | I_USER2     |      0 |      1 |     1   (0)| 00:00:01 |      0 |00:00:00.01 |       0 |      0 |
|* 17 |        INDEX RANGE SCAN         | I_OBJ4      |      0 |      1 |     1   (0)| 00:00:01 |      0 |00:00:00.01 |       0 |      0 |
|  18 |      NESTED LOOPS               |             |      0 |      2 |     4   (0)| 00:00:01 |      0 |00:00:00.01 |       0 |      0 |
|  19 |       TABLE ACCESS FULL         | LINK$       |      0 |      2 |     2   (0)| 00:00:01 |      0 |00:00:00.01 |       0 |      0 |
|  20 |       TABLE ACCESS CLUSTER      | USER$       |      0 |      1 |     1   (0)| 00:00:01 |      0 |00:00:00.01 |       0 |      0 |
|* 21 |        INDEX UNIQUE SCAN        | I_USER#     |      0 |      1 |     0   (0)|          |      0 |00:00:00.01 |       0 |      0 |
------------------------------------------------------------------------------------------------------------------------------------------
Same plan hash value, so why is Buffers 2,580 for the first insert and 123K for the second?

    SQL>oradebug event 10046 trace name context off
    
    
    $tkprof  /u01/app/oracle/diag/rdbms/tt/tt/trace/tt_ora_3996.trc tk.txt
    $ cat tk.txt |grep -A5 disk >disk.txt
Scanning the output by hand, a statement with 60,835 query (consistent) gets stands out:
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        2      0.16       0.16          0      60835          0          1
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        4      0.16       0.16          0      60835          0          1

$ vi xx   (locate this region in the formatted output)
********************************************************************************
SQL ID: 0hhmdwwgxbw0r Plan Hash: 652194454

select obj#, type#, flags, related, bo, purgeobj, con#
from
 RecycleBin$ where ts#=:1 and to_number(bitand(flags, 16)) = 16 order by dropscn

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        2      0.16       0.16          0      60835          0          1
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        4      0.16       0.16          0      60835          0          1

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT ORDER BY (cr=60835 pr=0 pw=0 time=163601 us cost=3 size=37 card=1)
         1          1          1  TABLE ACCESS FULL RECYCLEBIN$ (cr=60835 pr=0 pw=0 time=163579 us cost=2 size=37 card=1)
********************************************************************************
    drop table "SCOTT"."BIN$uQz/Q0JVEMfgU0U8qMDA7w==$0" purge

You can see that Oracle first queries this base table and then runs the drop table ... purge that clears the recycle-bin entry.

********************************************************************************
SQL ID: b52m6vduutr8j Plan Hash: 716146596

delete from RecycleBin$ where bo=:1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.13       0.13          0      60835         10          1
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        2      0.13       0.13          0      60835         10          1

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  DELETE  RECYCLEBIN$ (cr=60835 pr=0 pw=0 time=136017 us)
         1          1          1  TABLE ACCESS FULL RECYCLEBIN$ (cr=60835 pr=0 pw=0 time=135694 us cost=2 size=16 card=1)
********************************************************************************


In this trace file there are two recursive SQL statements against RECYCLEBIN$, both running full table scans with high logical reads. (They can also be examined without a 10046 trace; see the sketch below.)
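As an alternative to the 10046 trace, while the statements are still in the shared pool their plans can be pulled directly with dbms_xplan (my own sketch; the sql_ids are the ones captured in this test and will differ in other databases):

    select * from table(dbms_xplan.display_cursor('0hhmdwwgxbw0r', null, 'TYPICAL'));
    select * from table(dbms_xplan.display_cursor('b52m6vduutr8j', null, 'TYPICAL'));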

4. Fixing the problem

    create index RecycleBin$_bo on RecycleBin$(bo);
    exec dbms_stats.gather_table_stats(ownname=>'SYS',tabname=>'RecycleBin$', cascade=>TRUE);
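After creating the index and gathering statistics, a quick sanity check of the indexes on the base table (my own sketch, not part of the original fix):

    select index_name, status, num_rows, last_analyzed
      from dba_indexes
     where table_name = 'RECYCLEBIN$';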

Pull representative bind values from AWR so the two recursive statements can be re-explained with literals:

select instance_number,sql_id,name,datatype_string,last_captured,value_string
  from dba_hist_sqlbind
 where sql_id='&sql' and rownum<10
 order by LAST_CAPTURED,POSITION;

INSTANCE_NUMBER SQL_ID          NAME       DATATYPE_STRI LAST_CAPTURED            VALUE_STRING
--------------- --------------- ---------- ------------- ------------------------ ------------------------------
              1 0hhmdwwgxbw0r   :1         NUMBER        17-JAN-21                12
              1 b52m6vduutr8j   :1         NUMBER        17-JAN-21                90695

Test:
    EXPLAIN PLAN FOR select obj#, type#, flags, related, bo, purgeobj, con#
    from
     RecycleBin$    where ts#=12 and to_number(bitand(flags, 16)) = 16    order
      by dropscn;

SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------
Plan hash value: 885754405

-----------------------------------------------------------------------------------------------
| Id  | Operation                    | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                |     1 |   117 |     3  (34)| 00:00:01 |
|   1 |  SORT ORDER BY               |                |     1 |   117 |     3  (34)| 00:00:01 |
|*  2 |   TABLE ACCESS BY INDEX ROWID| RECYCLEBIN$    |     1 |   117 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | RECYCLEBIN$_TS |     1 |       |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(TO_NUMBER(TO_CHAR(BITAND("FLAGS",16)))=16)
   3 - access("TS#"=12)
Wait, the RECYCLEBIN$_TS index already existed, so why wasn't it used before? The cost estimate was off: in this test the tablespace has only a single dropped object in the recycle bin, so once statistics are gathered the index path looks cheap. In a real environment the ts# index will not necessarily help. (A quick way to check whether the base table statistics are missing or stale is sketched below.)
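A minimal staleness check (my own sketch, via the standard DBA_TAB_STATISTICS view):

    select owner, table_name, num_rows, last_analyzed, stale_stats
      from dba_tab_statistics
     where owner = 'SYS' and table_name = 'RECYCLEBIN$';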


Now look at the explain plan of the second statement, the delete:
EXPLAIN PLAN FOR delete from RecycleBin$ where bo=90695;
    SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    ------------------------------------------------------------------------------------
    | Id  | Operation         | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    ------------------------------------------------------------------------------------
    |   0 | DELETE STATEMENT  |                |     1 |    52 |     1   (0)| 00:00:01 |
    |   1 |  DELETE           | RECYCLEBIN$    |       |       |            |          |
    |*  2 |   INDEX RANGE SCAN| RECYCLEBIN$_BO |     1 |    52 |     1   (0)| 00:00:01 |
    ------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       2 - access("BO"=90695)
    

Try a third approach: shrink?

SQL> select segment_name,segment_type,bytes/1024/1024 from user_segments where segment_name like 'RECYCLEBIN%';

SEGMENT_NAME              SEGMENT_TYPE       BYTES/1024/1024
------------------------- ------------------ ---------------
RECYCLEBIN$               TABLE                          480
RECYCLEBIN$_OBJ           INDEX                          160
RECYCLEBIN$_TS            INDEX                          136
RECYCLEBIN$_OWNER         INDEX                          144
RECYCLEBIN$_BO            INDEX                        .0625

    SQL> alter table RECYCLEBIN$ ENABLE ROW MOVEMENT;

    SQL> ALTER TABLE RECYCLEBIN$ SHRINK SPACE COMPACT CASCADE;

    ORA-10635: Invalid segment or tablespace type

SQL> select tablespace_name from dba_segments where segment_name='RECYCLEBIN$';

TABLESPACE_NAME
------------------------------
SYSTEM

SQL> select tablespace_name,CONTENTS,EXTENT_MANAGEMENT,SEGMENT_SPACE_MANAGEMENT from dba_tablespaces;

TABLESPACE_NAME                CONTENTS  EXTENT_MAN SEGMEN
------------------------------ --------- ---------- ------
SYSTEM                         PERMANENT LOCAL      MANUAL
SYSAUX                         PERMANENT LOCAL      AUTO
UNDOTBS1                       UNDO      LOCAL      MANUAL
TEMP                           TEMPORARY LOCAL      MANUAL
USERS                          PERMANENT LOCAL      AUTO
EXAMPLE                        PERMANENT LOCAL      AUTO

Shrink is not allowed here: RECYCLEBIN$ lives in SYSTEM, which uses manual segment space management, and shrink requires an ASSM tablespace. So the only option left is alter table ... move.

    SQL> alter table RECYCLEBIN$ disable ROW MOVEMENT;

    SQL> alter table RECYCLEBIN$ move;

SQL> select segment_name,segment_type,bytes/1024/1024 from user_segments where segment_name like 'RECYCLEBIN%';

SEGMENT_NAME              SEGMENT_TYPE       BYTES/1024/1024
------------------------- ------------------ ---------------
RECYCLEBIN$_OBJ           INDEX                          160
RECYCLEBIN$_TS            INDEX                          136
RECYCLEBIN$_OWNER         INDEX                          144
RECYCLEBIN$_BO            INDEX                        .0625
RECYCLEBIN$               TABLE                        .0625

The table shrank, but the indexes did not: after an ALTER TABLE ... MOVE the dependent indexes are left unusable and must be rebuilt.

    SQL> alter index RECYCLEBIN$_OBJ rebuild;

    SQL> alter index RECYCLEBIN$_TS rebuild;

    SQL> alter index RECYCLEBIN$_OWNER rebuild;

    SQL> alter index RECYCLEBIN$_BO rebuild;

SQL> select segment_name,segment_type,bytes/1024/1024 from user_segments where segment_name like 'RECYCLEBIN%';

SEGMENT_NAME              SEGMENT_TYPE       BYTES/1024/1024
------------------------- ------------------ ---------------
RECYCLEBIN$_TS            INDEX                        .0625
RECYCLEBIN$_OWNER         INDEX                        .0625
RECYCLEBIN$_BO            INDEX                        .0625
RECYCLEBIN$_OBJ           INDEX                        .0625
RECYCLEBIN$               TABLE                        .0625

SQL> select status,count(*) from dba_indexes group by status;

STATUS     COUNT(*)
-------- ----------
VALID          5000
N/A             100

Run the space-reclamation scenario once more; this time we do not examine the execution plan, only verify that nothing misbehaves.

    SQL> drop table scott.tmp2 purge;

SQL> SELECT SUM(BYTES)/1024/1024 FROM DBA_FREE_SPACE WHERE TABLESPACE_NAME='TESTTBS';

SUM(BYTES)/1024/1024
--------------------
                   4

    SQL> create table scott.tmp1 tablespace testtbs as select * from dba_objects where rownum<=10000;
    insert into scott.tmp1 select * from scott.tmp1;
    insert into scott.tmp1 select * from scott.tmp1;
    ORA-01653: unable to extend table SCOTT.TMP1 by 128 in tablespace TESTTBS

    SQL> alter database datafile '/11.2.0.4/app/oracle/tt/testtbs.dbf' resize 10m;
SQL> insert into scott.tmp1 select * from scott.tmp1;
20000 rows created.
SQL> commit;
No adverse impact. Clean-up strategy in short: if there is a window when the recycle bin is not in use, move the base table, rebuild its indexes and re-gather statistics; during business hours it is enough to create the index and gather statistics, after which the recursive SQL is fast via the index. (A sketch of a preventive scheduled purge follows.)
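To keep the recycle bin from growing unbounded in the first place, a scheduled purge can be considered. A minimal sketch using dbms_scheduler; the job name and interval are hypothetical, and a blanket purge gives up flashback drop for everything it removes, so weigh that against your recovery requirements before adopting anything like it:

    begin
      dbms_scheduler.create_job(
        job_name        => 'PURGE_DBA_RECYCLEBIN_WEEKLY',   -- hypothetical name
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'begin execute immediate ''purge dba_recyclebin''; end;',
        repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN; BYHOUR=3',
        enabled         => TRUE,
        comments        => 'Periodically empty all recycle bins');
    end;
    /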

