  • CHAPTER 1 Architectural Overview of Oracle Database 11g

    Which SGA structures are required, and which are optional? The database buffer cache, log buffer, and shared pool are required; the large pool, Java pool, and Streams pool are optional.
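To see the sizes of the individual components, and which of them can be resized, one option (assuming release 10g or later, where V$SGAINFO was introduced) is:

    SQL> select name, bytes, resizeable from v$sgainfo;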

It is not possible to create a log buffer smaller than the default. If you attempt to,
it will be set to the default size anyway. (This point is worth noting.) It is possible to create a log buffer larger than
    the default, but this is often not a good idea. The problem is that when a COMMIT
    statement is issued, part of the commit processing involves writing the contents of the
    log buffer to the redo log files on disk. This write occurs in real time, and while it is
    in progress, the session that issued the COMMIT will hang. Commit processing is a
    critical part of the Oracle architecture. The guarantee that a committed transaction
    will never be lost is based on this: the commit-complete message is not returned to
    the session until the data blocks in the cache have been changed (which means that
    the transaction has been completed) and the change vectors have been written to the
    redo log on disk (and therefore the transaction could be recovered if necessary). A
    large log buffer means that potentially there is more to write when a COMMIT is
    issued, and therefore it may take a longer time before the commit-complete message
can be sent, and the session can resume work. (This paragraph is important.)

    The size of the log buffer is static, fixed at instance startup. It
cannot be automatically managed. (Not only can it not be changed, it is also static; I used to assume that every memory area in the SGA could be resized dynamically.)
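This can be verified from the instance itself: the ISSYS_MODIFIABLE column of V$PARAMETER shows whether a parameter can be changed while the instance is running, and for LOG_BUFFER it should report FALSE:

    SQL> select value, issys_modifiable from v$parameter where name = 'log_buffer';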

    The shared pool size is dynamic and can be automatically
    managed.
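For example, it can be resized while the instance is running (200M is an arbitrary value; under automatic SGA management a manually set value acts as a minimum rather than a fixed size):

    SQL> alter system set shared_pool_size = 200M;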

    The shared pool is the most complex of the SGA structures. It is divided into dozens of substructures, all of which are managed internally by the Oracle server. This discussion of architecture will briefly discuss only four of the shared pool components:
    • The library cache
    • The data dictionary cache
    • The PL/SQL area
    • The SQL query and PL/SQL function result cache

(Note: these four sub-areas are discussed here, but that does not mean the shared pool consists of only these four.)

(Here we discuss only the SQL query and PL/SQL function result cache, because the others are easy to understand, while this sub-area is new in 11g.)

The result cache is a new feature of release 11g. Often the same query is executed many times, by either the same session or many different sessions. Creating a result cache lets the Oracle server store the results of such queries in memory; the next time the query is issued, rather than running it the server can retrieve the cached result. (The result cache stores the results of SQL statements and PL/SQL functions, so the next execution of an identical statement can return its result directly.) The result cache mechanism is intelligent enough to track whether the tables against which the query was run have been updated. If this has happened, the query results will be invalidated, and the next time the query is issued it will be rerun. There is therefore no danger of ever receiving an out-of-date cached result. (The mechanism is very smart: if a table involved in the SQL or PL/SQL changes, the cached result is marked invalid.) By default, use of the SQL query and PL/SQL function result cache is disabled, but if enabled programmatically, it can often dramatically improve performance. The cache is within the shared pool, and unlike the other memory areas described previously, it does afford the DBA some control, as a maximum size can be specified. (By default this area is disabled, and unlike the other areas, which are sized automatically, the DBA can do some manual work here.)
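As a minimal sketch of enabling it programmatically (the EMPLOYEES table is just an example; RESULT_CACHE_MODE defaults to MANUAL, so individual statements opt in with a hint):

    SQL> show parameter result_cache_max_size
    SQL> select /*+ RESULT_CACHE */ department_id, count(*)
         from employees
         group by department_id;

The DBMS_RESULT_CACHE package (its MEMORY_REPORT procedure, for example) reports on the contents of the cache.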

    PL/SQL can be issued from user processes, rather than being stored
    in the data dictionary. This is called anonymous PL/SQL. Anonymous PL/SQL
    cannot be cached and reused but must be compiled dynamically. It will
    therefore always perform worse than stored PL/SQL. Developers should
    be encouraged to convert all anonymous PL/SQL into stored PL/SQL.
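The contrast is easy to see in a sketch (SAY_HELLO is a hypothetical procedure name):

    SQL> -- anonymous PL/SQL: compiled afresh on every execution
    SQL> begin
           dbms_output.put_line('hello');
         end;
         /

    SQL> -- stored PL/SQL: compiled once, cached in the shared pool
    SQL> create or replace procedure say_hello as
         begin
           dbms_output.put_line('hello');
         end;
         /
    SQL> exec say_hello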

The large pool size is dynamic and can be automatically managed.

The Java pool is only required if your application is going to run Java stored procedures within the database: it is used for the heap space needed to instantiate the Java objects. However, a number of Oracle options are written in Java, so the Java pool is considered standard nowadays. Note that Java code is not cached in the Java pool: it is cached in the shared pool, in the same way that PL/SQL code is cached.

    The Java pool size is dynamic and can be automatically managed.

The Streams pool size is dynamic and can be automatically
managed. (The Streams pool is generally used to propagate change vectors to a remote database.)

The statement below shows the SGA components that can be dynamically resized. Notice that the redo log buffer does not appear in the list.

    SQL> select * from v$sga_dynamic_components;
    
    COMPONENT                    CURRENT_SIZE   MIN_SIZE   MAX_SIZE USER_SPECIFIED_SIZE OPER_COUNT LAST_OPER_TYP LAST_OPER LAST_OPER GRANULE_SIZE
    ---------------------------- ------------ ---------- ---------- ------------------- ---------- ------------- --------- --------- ------------
    shared pool                     331350016  209715200  331350016                   0         12 GROW          IMMEDIATE 23-JUN-13      4194304
    large pool                        4194304          0    4194304                   0          1 GROW          IMMEDIATE 07-JUN-13      4194304
    java pool                         4194304    4194304    4194304                   0          0 STATIC                                 4194304
    streams pool                            0          0          0                   0          0 STATIC                                 4194304
    DEFAULT buffer cache            205520896  205520896  285212672                   0         13 SHRINK        IMMEDIATE 23-JUN-13      4194304
    KEEP buffer cache                       0          0          0                   0          0 STATIC                                 4194304
    RECYCLE buffer cache                    0          0          0                   0          0 STATIC                                 4194304
    DEFAULT 2K buffer cache                 0          0          0                   0          0 STATIC                                 4194304
    DEFAULT 4K buffer cache                 0          0          0                   0          0 STATIC                                 4194304
    DEFAULT 8K buffer cache                 0          0          0                   0          0 STATIC                                 4194304
    DEFAULT 16K buffer cache                0          0          0                   0          0 STATIC                                 4194304
    DEFAULT 32K buffer cache                0          0          0                   0          0 STATIC                                 4194304
    Shared IO Pool                          0          0          0                   0          0 STATIC                                 4194304
    ASM Buffer Cache                        0          0          0                   0          0 STATIC                                 4194304
    

      

The statement below shows PGA statistics:

    SQL> select * from v$pgastat;
    
    NAME                                                                  VALUE UNIT
    ---------------------------------------------------------------- ---------- ------------
    aggregate PGA target parameter                                    293601280 bytes
    aggregate PGA auto target                                          83755008 bytes
    global memory bound                                                58720256 bytes
    total PGA inuse                                                   200630272 bytes
    total PGA allocated                                               247346176 bytes
    maximum PGA allocated                                             368234496 bytes
    total freeable PGA memory                                          25755648 bytes
    process count                                                            56
    max processes count                                                      72
    PGA memory freed back to OS                                      1.0700E+11 bytes
    total PGA used for auto workareas                                         0 bytes
    maximum PGA used for auto workareas                                58116096 bytes
    total PGA used for manual workareas                                       0 bytes
    maximum PGA used for manual workareas                                539648 bytes
    over allocation count                                                     0
    bytes processed                                                  1.2445E+11 bytes
    extra bytes read/written                                           48525312 bytes
    cache hit percentage                                                  99.96 percent
    recompute count (total)                                             1139862
    

      

    There are five background processes
    that have a long history with Oracle; these are the first five described in the sections
    that follow: System Monitor (SMON), Process Monitor (PMON), Database Writer
    (DBWn), Log Writer (LGWR), and Checkpoint Process (CKPT).

    SMON initially has the task of mounting and opening a database.

    SMON mounts a database by
    locating and validating the database controlfile. It then opens a database by locating
    and validating all the datafiles and online log files. Once the database is opened and
    in use, SMON is responsible for various housekeeping tasks, such as coalescing free
    space in datafiles.

    A user session is a user process that is connected to a server process. The server process
    is launched when the session is created and destroyed when the session ends. An
    orderly exit from a session involves the user logging off. When this occurs, any work
    done will be completed in an orderly fashion, and the server process will be terminated.
    If the session is terminated in a disorderly manner (perhaps because the user’s PC is
    rebooted), then the session will be left in a state that must be cleared up. PMON
    monitors all the server processes and detects any problems with the sessions. If a
    session has terminated abnormally, PMON will destroy the server process, return its
    PGA memory to the operating system’s free memory pool, and roll back any incomplete
    transaction that may have been in progress.

    If a session terminates abnormally, what will happen to an active
    transaction? It will be rolled back, by the PMON background process.

    DBWn writes according to a very lazy algorithm: as little as possible, as rarely as
    possible. There are four circumstances that will cause DBWn to write: no free buffers,
too many dirty buffers, a three-second timeout, and when there is a checkpoint.

    DBWn writes dirty buffers from the database buffer cache to the datafiles—but
    it does not write the buffers as they become dirty. On the contrary: it writes as few
    buffers as possible. The general idea is that disk I/O is bad for performance, so don’t
    do it unless it really is needed. If a block in a buffer has been written to by a session,
    there is a reasonable possibility that it will be written to again—by that session, or a
    different one. Why write the buffer to disk, if it may well be dirtied again in the near
    future? The algorithm DBWn uses to select dirty buffers for writing to disk (which will
    clean them) will select only buffers that have not been recently used. So if a buffer is
    very busy, because sessions are repeatedly reading or writing it, DBWn will not write
    it to disk. There could be hundreds or thousands of writes to a buffer before DBWn
    cleans it. It could be that in a buffer cache of a million buffers, a hundred thousand of
    them are dirty—but DBWn might only write a few hundred of them to disk at a time.
    These will be the few hundred that no session has been interested in for some time.

What will cause DBWn to write? No free buffers, too many dirty
buffers, a three-second timeout, or a checkpoint.

    First, when there are no free buffers. If a server process needs to copy a block into
    the database buffer cache, it must find a free buffer. A free buffer is a buffer that is
    neither dirty (updated, and not yet written back to disk) nor pinned (a pinned buffer
    is one that is being used by another session at that very moment). A dirty buffer must
    not be overwritten because if it were changed, data would be lost, and a pinned buffer
    cannot be overwritten because the operating system’s memory protection mechanisms
    will not permit this. If a server process takes too long (this length of time is internally
    determined by Oracle) to find a free buffer, it signals the DBWn to write some dirty
    buffers to disk. Once this is done, these will be clean, free, and available for use.

    Second, there may be too many dirty buffers—”too many” being another internal
    threshold. No one server process may have had a problem finding a free buffer, but
    overall, there could be a large number of dirty buffers: this will cause DBWn to write
some of them to disk. (Note: some of them, not all; DBWn is always as lazy as possible.)

    Third, there is a three-second timeout: every three seconds, DBWn will clean a few
buffers. (A few, not all.) In practice, this event may not be significant in a production system because
    the two previously described circumstances will be forcing the writes, but the timeout
    does mean that even if the system is idle, the database buffer cache will eventually be
    cleaned.

    Fourth, there may be a checkpoint requested. The three reasons already given will
    cause DBWn to write a limited number of dirty buffers to the datafiles. When a
checkpoint occurs, all dirty buffers are written. (Pay attention here: this is not a limited write; everything dirty gets written.) This could mean hundreds of thousands
    of them. During a checkpoint, disk I/O rates may hit the roof, CPU usage may go to 100
    percent, end user sessions may experience degraded performance, and people may start
    complaining. Then when the checkpoint is complete (which may take several minutes),
performance will return to normal. So why have checkpoints? The short answer is, don't
have them unless you have to. (But aren't checkpoints issued every few seconds? That is the incremental checkpoint mechanism, described in the CKPT section below.)

    What does DBWn do when a transaction is committed? It does
    absolutely nothing.

    The only moment when a checkpoint is absolutely necessary is as the database is
    closed and the instance is shut down. A checkpoint writes all dirty buffers to disk: this synchronizes the buffer
    cache with the datafiles, the instance with the database. During normal operation,
    the datafiles are always out of date, as they may be missing changes (committed and
    uncommitted). This does not matter, because the copies of blocks in the buffer cache
are up to date, and it is these that the sessions work on. But on shutdown, it is necessary
to write everything to disk. Automatic full checkpoints occur only on shutdown (is it really only at shutdown that one is issued automatically? yes, for full checkpoints; partial and incremental checkpoints are another matter), but a
checkpoint can be forced at any time with this statement:
    alter system checkpoint;

    Note that from release 8i onward, checkpoints do not occur on log switch (log
    switches are discussed in Chapter 14).
The checkpoint described so far is a full checkpoint. Partial checkpoints occur more
frequently (so what exactly is a partial checkpoint, and is it issued automatically? read on); they force DBWn to write all the dirty buffers containing blocks from just
    one or more datafiles rather than the whole database: when a datafile or tablespace is
    taken offline; when a tablespace is put into backup mode; when a tablespace is made
    read only. These are less drastic than full checkpoints and occur automatically
    whenever the relevant event happens.
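For example, each of the following triggers a partial checkpoint on the datafiles concerned (USERS is just an example tablespace name):

    SQL> alter tablespace users begin backup;
    SQL> alter tablespace users read only;
    SQL> alter tablespace users offline normal;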

    There are
    three circumstances that will cause LGWR to flush the log buffer: if a session issues
    a COMMIT; if the log buffer is one-third full; if DBWn is about to write dirty buffers. 

    First, the write-on-commit. To process a COMMIT, the server process inserts a
    commit record into the log buffer. It will then hang, while LGWR flushes the log
    buffer to disk. Only when this write has completed is a commit-complete message
    returned to the session, and the server process can then continue working. This is the
    guarantee that transactions will never be lost: every change vector for a committed
    transaction will be available in the redo log on disk and can therefore be applied to
    datafile backups. Thus, if the database is ever damaged, it can be restored from backup
    and all work done since the backup was made can be redone.

    It is in fact possible to prevent the LGWR write-on-commit. If this is
    done, sessions will not have to wait for LGWR when they commit: they issue
    the command and then carry on working. This will improve performance
    but also means that work can be lost. It becomes possible for a session to
    COMMIT, then for the instance to crash before LGWR has saved the change
    vectors. Enable this with caution! It is dangerous, and hardly ever necessary.
    There are only a few applications where performance is more important than
data loss. (Although this is hardly ever necessary, I still want to know how it is done; a sketch follows.)
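For the record, a sketch of how it is done: from release 10g the COMMIT statement accepts WRITE options (and release 11g adds the COMMIT_WAIT and COMMIT_LOGGING instance parameters). With BATCH NOWAIT, the session does not wait for LGWR:

    SQL> commit write batch nowait;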

    Second, when the log buffer is one-third full, LGWR will flush it to disk. This is
    done primarily for performance reasons. If the log buffer is small (as it usually should
    be) this one-third-full trigger will force LGWR to write the buffer to disk in very nearly
    real time even if no one is committing transactions. The log buffer for many applications
    will be optimally sized at only a few megabytes. The application will generate enough
    redo to fill one third of this in a fraction of a second, so LGWR will be forced to
    stream the change vectors to disk continuously, in very nearly real time. Then, when
    a session does COMMIT, there will be hardly anything to write: so the COMMIT will
    complete almost instantaneously.

    Third, when DBWn needs to write dirty buffers from the database buffer cache to
    the datafiles, before it does so it will signal LGWR to flush the log buffer to the online
    redo log files.

    This is to ensure that it will always be possible to reverse an uncommitted
    transaction.

    For now, it is necessary to know that it is entirely possible for DBWn to write an
    uncommitted transaction to the datafiles. This is fine, so long as the undo data needed
    to reverse the transaction is guaranteed to be available. Generating undo data also
    generates change vectors. As these will be in the redo log files before the datafiles are
    updated, the undo data needed to roll back a transaction (should this be necessary)
    can be reconstructed if necessary.
    Note that it can be said that there is a three-second timeout that causes LGWR
to write. In fact, the timeout is on DBWn, but because LGWR will always write just
    before DBWn, in effect there is a three-second timeout on LGWR as well.

    EXAM TIP When will LGWR flush the log buffer to disk? On COMMIT; when
    the buffer is one-third full; just before DBWn writes.

    CKPT, the Checkpoint Process
    The purpose of the CKPT changed dramatically between release 8 and release 8i of the
    Oracle database. In release 8 and earlier, checkpoints were necessary at regular intervals
    to make sure that in the event of an instance failure (for example, if the server machine
    should be rebooted) the database could be recovered quickly. These checkpoints were
    initiated by CKPT. The process of recovery is repairing the damage done by an instance
    failure; it is fully described in Chapter 14.
    After a crash, all change vectors referring to dirty buffers (buffers that had not
    been written to disk by DBWn at the time of the failure) must be extracted from
    the redo log, and applied to the data blocks. This is the recovery process. Frequent
    checkpoints would ensure that dirty buffers were written to disk quickly, thus
    minimizing the amount of redo that would have to be applied after a crash and
    therefore minimizing the time taken to recover the database. CKPT was responsible
    for signaling regular checkpoints.
    From release 8i onward, the checkpoint mechanism changed. Rather than letting
    DBWn get a long way behind and then signaling a checkpoint (which forces DBWn to
catch up and get right up to date, with a dip in performance while this is going on),
from 8i onward the DBWn performs incremental checkpoints instead of full checkpoints.
    The incremental checkpoint mechanism instructs DBWn to write out dirty buffers at a
    constant rate, so that there is always a predictable gap between DBWn (which writes
    blocks on a lazy algorithm) and LGWR (which writes change vectors in near real
    time). Incremental checkpointing results in much smoother performance and more
    predictable recovery times than the older full checkpoint mechanism.

    The CKPT no longer has to signal full checkpoints, but it does have to keep track
    of where in the redo stream the incremental checkpoint position is, and if necessary
    instruct DBWn to write out some dirty buffers in order to push the checkpoint
    position forward. The current checkpoint position, also known as the RBA (the redo
    byte address), is the point in the redo stream at which recovery must begin in the
    event of an instance crash. CKPT continually updates the controlfile with the current
    checkpoint position.
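The checkpoint position that CKPT records can be observed from the data dictionary (the values returned will of course be instance specific):

    SQL> select checkpoint_change# from v$database;
    SQL> select file#, checkpoint_change#, checkpoint_time from v$datafile_header;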

(So the modern incremental checkpoint does not write the whole buffer cache to disk at once; it writes only a little at a time.)

    When do full checkpoints occur? Only on request, or as part of an
    orderly database shutdown.

    MMON, the Manageability Monitor
    MMON is a process that was introduced with database release 10g and is the enabling
    process for many of the self-monitoring and self-tuning capabilities of the database.
    The database instance gathers a vast number of statistics about activity and
performance. These statistics are accumulated in the SGA, and their current values can
    be interrogated by issuing SQL queries. For performance tuning and also for trend
    analysis and historical reporting, it is necessary to save these statistics to long-term
    storage. MMON regularly (by default, every hour) captures statistics from the SGA and
writes them to the data dictionary (once per hour by default), where they can be stored indefinitely (though by
default, they are kept there for only eight days).
    Every time MMON gathers a set of statistics (known as a snapshot), it also launches
    the Automatic Database Diagnostic Monitor, the ADDM. The ADDM is a tool that
    analyses database activity using an expert system developed over many years by many
    DBAs. It observes two snapshots (by default, the current and previous snapshots) and
    makes observations and recommendations regarding performance. Chapter 5
    describes the use of ADDM (and other tools) for performance tuning.
EXAM TIP By default, MMON gathers a snapshot and launches the ADDM
every hour.
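Both the interval and the retention can be adjusted through the DBMS_WORKLOAD_REPOSITORY package; this sketch simply sets the documented defaults explicitly (both arguments are in minutes, and 11520 minutes is eight days):

    SQL> exec dbms_workload_repository.modify_snapshot_settings(interval => 60, retention => 11520)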

    As well as gathering snapshots, MMON continuously monitors the database and
the instance to check whether any alerts should be raised. (MMON is also responsible for raising alert messages.) Use of the alert system is
covered in the second OCP exam (so there is a second OCP exam? I am curious about that) and discussed in Chapter 24. Some alert conditions
    (such as warnings when limits on storage space are reached) are enabled by default;
    others can be configured by the DBA.

    MMNL, the Manageability Monitor Light
    MMNL is a process that assists the MMON. There are times when MMON’s scheduled
    activity needs to be augmented. For example, MMON flushes statistical information
    accumulated in the SGA to the database according to an hourly schedule by default. If
    the memory buffers used to accumulate this information fill before MMON is due to
    flush them, MMNL will take responsibility for flushing the data.

    MMAN, the Memory Manager
    MMAN is a process that was introduced with database release 10g. It enables the
    automatic management of memory allocations.

    Release 11g takes memory management a step further: all the DBA need do is set
    an overall target for memory usage, and MMAN will observe the demand for PGA
    memory and SGA memory, and allocate memory to sessions and to SGA structures
    as needed, while keeping the total allocated memory within a limit set by the DBA.
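A sketch of what this looks like in practice (800M is an arbitrary figure; MEMORY_TARGET can be raised dynamically only up to the static MEMORY_MAX_TARGET limit):

    SQL> show parameter memory_target
    SQL> alter system set memory_target = 800M;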

    LGWR writes the online log files; ARCn reads them. In normal
    running, no other processes touch them at all.

    RECO, the Recoverer Process
    A distributed transaction involves updates to two or more databases. Distributed
    transactions are designed by programmers and operate through database links.
    Consider this example:
    update orders set order_status=complete where customer_id=1000;
    update orders@mirror set order_status=complete where customer_id=1000;
    commit;
    The first update applies to a row in the local database; the second applies to a row in a
    remote database identified by the database link MIRROR.
    The COMMIT command instructs both databases to commit the transaction,
    which consists of both statements. A full description of commit processing appears
    in Chapter 8. Distributed transactions require a two-phase commit. The commit in each
    database must be coordinated: if one were to fail and the other were to succeed, the
    data overall would be in an inconsistent state. A two-phase commit prepares each
database by instructing its LGWRs to flush the log buffer to disk (the first phase), and
once this is confirmed, the transaction is flagged as committed everywhere (the second
    phase). If anything goes wrong anywhere between the two phases, RECO takes action
to cancel the commit and roll back the work in all databases. (I thought this was a process used during recovery; it turns out it is for distributed transactions.)
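In-doubt distributed transactions waiting for RECO can be inspected through the DBA_2PC_PENDING view:

    SQL> select local_tran_id, state, fail_time from dba_2pc_pending;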
