Multi-Version Read Consistency in Oracle

Multi-Version Read Consistency ("MVRC") is a great feature. It allows multiple users to view data even as it is being updated, while guaranteeing that each user sees the data as it existed at a specific point in time -- the moment the query began executing. Read Consistency is, by default, at the Statement level.

Imagine User "Q1" starting a query on a large data-set at Time T0. The query takes 25 time intervals (say, 25 seconds) to read the entire data-set. The data may be from one table or a join of multiple tables. Imagine User "Q2" starting another query at T6 on a data-set that overlaps with the data Q1 is reading, with the proviso that this query takes 32 time intervals.

Now, imagine User "U1" updating some of the data at Time T10 and issuing a commit at Time T18. Neither Q1 nor Q2 will see the changes made by U1, because U1's changes are made permanent *after* their queries began -- even though the data is modified while they are reading it.
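The visibility rule here can be sketched as a simple predicate: a change is visible to a query only if the transaction committed before the query began. A minimal sketch in Python (the function name and the timeline values are invented for illustration; Oracle actually compares SCNs, not wall-clock times):

```python
def change_visible(commit_time, query_start_time):
    """A change is visible to a query only if the transaction
    committed before the query (statement) began.
    commit_time is None while the transaction is uncommitted."""
    return commit_time is not None and commit_time < query_start_time

# U1 updates at T10 and commits at T18.
U1_COMMIT = 18

# Q1 (started T0) and Q2 (started T6) began before the commit,
# so neither sees U1's change.
for start in (0, 6):
    assert not change_visible(U1_COMMIT, start)

# A fresh statement started at T20 does see it.
assert change_visible(U1_COMMIT, 20)
```

Note that an uncommitted change (commit_time of None) is never visible to another session, no matter when the query starts.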

If Q1's query reaches a datablock that has been updated by U1, it finds that some of the data has changed, looks up the Undo segment and block containing the change, and, in memory, generates a read-consistent version of the data it needs by "applying" the undo to it. (Q1 identifies the Undo segment by looking up the ITL entry in the header of the datablock.) Q2 obtains a read-consistent view of its data in the same manner. In this case, both Q1 and Q2 see the same read-consistent data.
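The clone-and-apply-undo step might be sketched like this -- a toy model only: real blocks, ITL entries and undo records are binary structures located by SCN, not the dictionaries used here:

```python
import copy

def consistent_read(block, undo_records, query_start):
    """Build a read-consistent clone of a data block by applying
    undo for every change the query must not see (committed after
    the query began, or not committed at all)."""
    cr_block = copy.deepcopy(block)   # clone the whole block in memory
    # Walk undo newest-first, reversing invisible changes.
    for undo in reversed(undo_records):
        committed = undo["commit_time"]
        if committed is None or committed > query_start:
            cr_block["rows"][undo["row"]] = undo["old_value"]
    return cr_block

# Current block image: U1 updated row 2 at T10 (committed T18).
block = {"rows": {1: "alpha", 2: "beta-NEW", 3: "gamma"}}
undo = [{"row": 2, "old_value": "beta", "commit_time": 18}]

# Q1 began at T0: it must see the pre-update value.
cr = consistent_read(block, undo, query_start=0)
print(cr["rows"][2])   # -> beta
```

A statement that begins after T18 would skip the undo entirely and read "beta-NEW" straight from the current block.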

Imagine User "Q3" beginning a query at Time T12. Unlike Q1 and Q2, this user's query actually began after the update (at T10) but before the commit. Q3 generates a read-consistent version of the data as at Time T12 (i.e. reversing U1's update, which is not committed as at T12) by applying undo to the datablock in memory.

The datablock in this case has 4 different logical versions: one as changed by U1, and one read-consistent version for each of the users Q1 to Q3. To understand the real-world complexity, imagine users U2 to U5 also having updated other rows in the same block but not having committed before T12. Users Q1 to Q3 may each have to apply *multiple* undo records (representing the different transactions by U1 to U5) to the same block to get a read-consistent version of the datablock.
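The multi-transaction case can be illustrated with the same kind of toy model: each uncommitted transaction contributes its own undo record, and all of them are applied to the one cloned block (a simplified sketch with invented row values; real undo is chained per transaction and located through the block's ITL entries):

```python
import copy

def apply_all_undo(block, undo_records, query_start):
    """Apply every undo record whose change is not visible
    as at query_start; None means still uncommitted."""
    cr = copy.deepcopy(block)
    for u in undo_records:
        if u["commit_time"] is None or u["commit_time"] > query_start:
            cr["rows"][u["row"]] = u["old_value"]
    return cr

# U1..U5 each updated a different row of the same block;
# none has committed as of T12.
block = {"rows": {i: f"row{i}-new" for i in range(1, 6)}}
undo = [{"row": i, "old_value": f"row{i}-old", "commit_time": None}
        for i in range(1, 6)]

# Q3 (started T12) must reverse all five changes on its clone.
cr = apply_all_undo(block, undo, query_start=12)
assert all(cr["rows"][i] == f"row{i}-old" for i in range(1, 6))
```

The current block image itself is untouched -- only the per-query clone carries the reversed rows.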

This logic extends to multiple blocks being changed by users U1 to U5 and having to be "undone" by users Q1 to Q3.

Note that undo is applied to the block even if only 1 column in 1 row has to be "reversed". This is done in a separate, in-memory version (a clone) of the block.

Edit : Clarifying "undo is applied to the block". User Q1 needs to apply undo only as far as required for the query being run. If 5 rows in that datablock have been changed by 5 different transactions but User Q1 needs only one of those 5 rows, the server process will identify the transaction and undo related to that single row and apply only that much undo, ignoring the transactions that modified other rows. But this change is made at the level of the cloned datablock. That is why, even if only 1 column in 1 row was updated, the entire block is cloned in memory and the undo for that 1 row is applied to the clone.
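This row-targeted behaviour could be sketched as follows: the whole block is cloned, but only the undo belonging to the rows the query actually touches is applied (again a toy model; a real server process finds the relevant transaction through the row's ITL reference, not a row-number filter):

```python
import copy

def cr_for_rows(block, undo_records, needed_rows):
    """Clone the whole block, but apply undo only for the rows
    this query actually needs."""
    cr = copy.deepcopy(block)            # the entire block is cloned
    for u in undo_records:
        if u["row"] in needed_rows:      # ignore other rows' transactions
            cr["rows"][u["row"]] = u["old_value"]
    return cr

# Five uncommitted transactions touched rows 1..5; Q1 needs only row 3.
block = {"rows": {i: f"row{i}-new" for i in range(1, 6)}}
undo = [{"row": i, "old_value": f"row{i}-old"} for i in range(1, 6)]

cr = cr_for_rows(block, undo, needed_rows={3})
assert cr["rows"][3] == "row3-old"   # undone for the needed row
assert cr["rows"][1] == "row1-new"   # other rows' undo ignored
```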

If Users Q1 to Q3 re-run the same queries after T18 (say, Q1 at T25 and Q2 at T38, when their first executions complete), they will see the changes made permanent by U1 and will not have to undo the block (for U1's changes), because the commit by U1 precedes the start of their queries. This is because Read Consistency is at the Statement level: a fresh execution (even of the same query) is a new Statement. A statement begun after the commit by U1 reads the modified data.



Thank you very much Hemant for that very clear explanation of the Oracle engine's MVRC mechanism. I understand why the UNDO tablespace has to be sized properly to avoid ORA-01555. The memory also has to be properly sized if too many blocks need to be undone. Regards. Jean-michel PS : This comment area is very small under Firefox (v17.0.7). Do you recommend using another browser ?
