Consistent Gets Myth…..

At the request of some of my regular readers, this is a reposting of my previous blog on viveklsharma.blogspot.com (at present inaccessible).

Any query executed against a database gets the required rows, which are then fetched to the end user. Everyone is aware of the fact that table rows are stored in Oracle database blocks; hence, when Oracle has to get even a single row, the block containing that row has to be read into the buffer cache. From this, it is quite evident that Oracle reads a block at a minimum. Also, as an optimization, for every query, blocks are served to the user from the buffer cache (Parallel Queries are an exception). Hence, if a block is not found in the buffer cache, it is first read from disk into the cache and then given to the user.

In Oracle terminology, a read from disk is termed a Physical Read (or Disk Read), and a read from the cache is termed a Consistent Get (or Logical I/O). As mentioned in my previous paragraph, if a block is not found in the cache, it is first read from disk into the cache and then fetched to the user. This also means that the logical read count includes the blocks that were read from disk.
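
For reference, both counters can be checked at the session level. A quick sketch (assuming access to v$mystat and v$statname) is:

select sn.name, ms.value
from   v$mystat ms, v$statname sn
where  ms.statistic# = sn.statistic#
and    sn.name in ('session logical reads', 'physical reads');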

In this blog, I am going to write about a myth in the calculation that DBAs assume and derive from the value of Consistent Gets.

At one customer site, we were discussing Application tuning, wherein a DBA gave a list of Top Queries to the Vendor. The list contained the queries along with a statistic called Total I/O in GB, and the values in this column were in the range of hundreds of GB to tens of TB. I was surprised to see these values and, after the discussion, out of curiosity, asked the DBA how this column was calculated. The DBA immediately gave me access to the query that generated the output. The calculation in this column was (buffer_gets*db_block_size)/(1024*1024*1024). It seems the DBAs assumed each logical I/O to be a read of a new block, hence these high values. So, if a query does 1,549,087 logical I/Os, then based on this calculation, the DBAs assumed that the query had read about 11.8 GB (assuming an 8k block size) worth of data into the cache. This is not true.
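
For illustration, the report was built with something along these lines (a sketch only, not the DBA's actual query, and with 8192 assumed as the block size):

select sql_id, buffer_gets,
       round((buffer_gets * 8192) / (1024*1024*1024), 2) total_io_gb   -- the misleading "Total I/O in GB"
from   v$sql
order  by buffer_gets desc;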

BLOCKS READ ALGORITHM

A conceptual view of Oracle's block read algorithm is as under:

  1. User Issues a Query.
  2. Query is Parsed and Optimal Execution path is generated and stored in the Library Cache.
  3. Based on the Execution Path, the required Index or Table block is searched for in the Cache.
  4. If the block is not found in the cache, a disk read request is made, and the block is read into the Cache.
  5. The block, from the Cache is then read into a private memory area (UGA) of the User.
  6. Once the Block is read into the private memory of the User, the required row is fetched.

The value of Consistent Gets is incremented each time a block is read into the private memory area of the User, which also means that a single block, if read multiple times, inflates the value of Consistent Gets. While an inefficient SQL contributes largely to Consistent Gets, one factor that also has a significant impact is ARRAYSIZE. I am not going to discuss inefficient queries here, but will write about the calculation behind Consistent Gets and the impact ARRAYSIZE has on it.

A better way to understand the calculation is by way of an example. As always, I will run through some simple queries and change the ARRAYSIZE to check its impact on Consistent Gets.

I will create a table with 3000 rows and run a simple query from the SQL*Plus prompt. Without an index on this table, it will be a full table scan.

SQL> create table test_cg  as select * from all_objects where rownum between 1 and 3000;

Table created.

Elapsed: 00:00:01.82

SQL> exec dbms_stats.gather_table_stats(user,'TEST_CG');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.46

SQL> @table_stats TEST_CG
old   2: where table_name='&1'
new   2: where table_name='TEST_CG'

OWNER                          PAR   NUM_ROWS     BLOCKS LAST_ANAL GLO
------------------------------ --- ---------- ---------- --------- ---
VIVEK                          NO        3000         38 02-MAR-10 YES

1 row selected.

SQL> set autot trace
SQL> select * from test_cg;

3000 rows selected.

Elapsed: 00:00:00.17

Execution Plan
----------------------------------------------------------
Plan hash value: 2626677675

-----------------------------------------------------------------------------
| Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |         |  3000 |   249K|    12   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| TEST_CG |  3000 |   249K|    12   (0)| 00:00:01 |
-----------------------------------------------------------------------------


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        238  consistent gets
          0  physical reads
          0  redo size
     300791  bytes sent via SQL*Net to client
       2608  bytes received via SQL*Net from client
        201  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
       3000  rows processed

A simple query against a 38-block table accounted for 238 consistent gets. Does it mean 238*8192, i.e. roughly 1.9 MB, of data was read into the cache? The answer is a plain NO. As mentioned earlier, each block is read into the cache only once and is then sent to the PGA of the user that requires it. Internally, the approximate calculation for consistent gets is (NUM_ROWS / ARRAYSIZE)+NUM_BLOCKS. In my case, when I executed this query, the arraysize was unchanged and therefore at its default, which is 15.

Rows in my Table	: 3000
Blks in my Table	: 38
Default Arraysize	: 15

(3000/15)+38 = 200 + 38 = 238

The calculation matches the Consistent Gets. One value worth discussing here is the number 200 derived from (3000/15). It means that each of the blocks is touched multiple times. There are 38 blocks in the table; each of these 38 blocks is read into the cache only once, but is touched more than once, and therefore the consistent gets for a single block depend on the number of times it is visited to read all of its rows. Incidentally, another statistic that matches our calculation is the “SQL*Net roundtrips to/from client” from the Autotrace output above. Its value, with an arraysize of 15, is 201, which means 200 round trips to fetch the 3000 rows, plus one last visit required to check and confirm whether more rows are to be fetched.
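
As a quick sanity check, the same estimate can be derived straight from the optimizer statistics gathered above (a sketch, with the default arraysize of 15 hard-coded):

select num_rows, blocks,
       ceil(num_rows/15) + blocks estimated_consistent_gets
from   user_tables
where  table_name = 'TEST_CG';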

Back to our example, let us get into more detail on the calculation of Consistent Gets. The table has 3000 rows and 38 blocks. The count of rows in each of these 38 blocks is shown below.

select dbms_rowid.ROWID_BLOCK_NUMBER(rowid) blkno, count(*) cnt
from test_cg
group by dbms_rowid.ROWID_BLOCK_NUMBER(rowid) order by 1;
SQL> /

     BLKNO        CNT
---------- ----------
       641         88
       642         84
       643         81
       644         76
       645         81
       646         80
       647         82
       648         77
.... some lines deleted
.... some lines deleted
       677         78
       678         82
           ----------
sum              3000

The Sequence of Consistent Gets calculation is as under (with default arraysize) :

  1. Get Block 641 in the PGA, fetch 15 Rows – Consistent Gets = 1
  2. Get Block 641 again in the PGA, fetch another 15 rows – Consistent Gets = 2
  3. Get Block 641 again in the PGA, fetch another 15 rows – Consistent Gets = 3
  4. Get Block 641 again in the PGA, fetch another 15 rows – Consistent Gets = 4
  5. Get Block 641 again in the PGA, fetch another 15 rows – Consistent Gets = 5
  6. Get Block 641 again in the PGA, fetch another 13 rows – Consistent Gets = 6
  7. Get Block 642 in the PGA, fetch 2 rows – Consistent Gets = 7
  8. Get Block 642 again in the PGA, fetch another 15 rows – Consistent Gets = 8
  9. and so on….

It is clear from this explanation that a single block is read multiple times, in our case 6 times on average, and therefore the consistent gets for each of these blocks was around 6 to 7. Run this test case with different arraysizes and the calculation should match that shown in this blog. Let us run the query with different values of Arraysize and validate the calculation.

ArraySize = 35 Consistent Gets will be (3000/35)+38 = 86+38 = 124
ArraySize = 40 Consistent Gets will be (3000/40)+38 = 75+38 = 113
ArraySize = 50 Consistent Gets will be (3000/50)+38 = 60+38 = 98

SQL> set arrays 35
SQL> select * from test_cg;

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        124  consistent gets
         87  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
       3000  rows processed

SQL> set arrays 40
SQL> select * from test_cg;

3000 rows selected.

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        113  consistent gets
         76  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
       3000  rows processed

SQL> set arrays 50
SQL> select * from test_cg;

3000 rows selected.

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         98  consistent gets
         61  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
       3000  rows processed

Below is a query that can be executed to check the approximate value of Consistent Gets and that should help you understand the way this value is calculated. The value of the bind variable is to be set to the ARRAYSIZE. The query and the details of the columns used are as under:

variable b1 number;
exec :b1:=15;
SQL> compute sum of total_cnt on report
SQL> break on report

select	blkno, total_cnt, final_cnt, rows_remaining,
	case when rows_remaining=0 then touch_cnt+1 else touch_cnt end touch_cnt
from (
select	blkno, total_cnt, final_cnt, rows_remaining,
	case when total_cnt = final_cnt then ceil(final_cnt/:b1) else ceil(final_cnt/:b1)+1 end touch_cnt
from	(
select	blkno, cnt total_cnt, 
	case when rownum=1 or lag(rows_remaining) over (order by blkno)=0 
                     then cnt else (cnt-(:b1-lag(rows_remaining) over (order by blkno))) end final_cnt,
	rows_remaining 
from (
select blkno, cnt, rr, 
lead(rr) over(order by blkno) next_rr,
lead(blkno) over(order by blkno) next_blk,
ceil(rr/:b1) touch_cnt,
mod(rr,:b1) rows_remaining
from (
select dbms_rowid.ROWID_BLOCK_NUMBER(rowid) blkno, count(*) cnt, 
	sum(count(*)) over(order by dbms_rowid.ROWID_BLOCK_NUMBER(rowid)) rr 
from test_cg
group by dbms_rowid.ROWID_BLOCK_NUMBER(rowid) order by 1))));

     BLKNO  TOTAL_CNT  FINAL_CNT ROWS_REMAINING  TOUCH_CNT
---------- ---------- ---------- -------------- ----------
       641         88         88             13          6
       642         84         82              7          7
       643         81         73             13          6
       644         76         74             14          6
       645         81         80              5          7
       646         80         70             10          6
       647         82         77              2          7
       648         77         64              4          6
       649         73         62              2          6
       650         79         66              6          6
       651         79         70             10          6
       652         79         74             14          6
       653         81         80              5          7
       654         82         72             12          6
       655         77         74             14          6
       656         81         80              5          7
       657         80         70             10          6
       658         81         76              1          7
       659         78         64              4          6
       660         78         67              7          6
       661         76         68              8          6
       662         78         71             11          6
       663         78         74             14          6
       664         77         76              1          7
       665         81         67              7          6
       666         79         71             11          6
       667         79         75              0          7
       668         77         77              2          6
       669         77         64              4          6
       670         76         65              5          6
       671         80         70             10          6
       672         77         72             12          6
       673         76         73             13          6
       674         75         73             13          6
       675         79         77              2          7
       676         78         65              5          6
       677         78         68              8          6
       678         82         75              0          7
           ---------- ---------- -------------- ----------
sum              3000       2744            284        238

BLKNO		: Block Number
TOTAL_CNT	: Total Rows in the Block
FINAL_CNT	: Final Number of Rows fetched by array fetches starting in this block
		(Example: Block 642 has 84 Rows in total, but its final row count is 82, because the fetch that read the
		last 13 rows of Block 641 also picked up 2 Rows from this block, which gives 84-2=82)
ROWS_REMAINING	: Rows of the current block left over after the last complete Arraysize fetch
TOUCH_CNT	: Touch Count for the current block, which is our Consistent Gets per Block. Final total at the end.

Consistent Gets are not a measure of the number of blocks read into the cache, but of the number of times a block was read into the PGA. A block is read into the cache only once and is touched multiple times. Our example above showed that with an arraysize of 15, an 8k block was read once but was touched 6-7 times, and hence had 6 or 7 consistent gets per block. This does not mean 6*8192 bytes worth of data; the data was read only once.
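
If the intention is to report how much distinct data a full scan actually touches, the segment size is a more honest figure than buffer_gets multiplied by the block size. A rough sketch using user_segments:

select segment_name, blocks, round(bytes/1024) size_kb
from   user_segments
where  segment_name = 'TEST_CG';

The whole TEST_CG segment is only a few hundred KB, nowhere near the 1.9 MB suggested by multiplying its consistent gets by the block size.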

About Vivek Sharma
I am an Oracle Professional from Mumbai (India). I blog on issues that I feel are interesting for my readers. Some of these are my real life examples, which I hope you would find interesting. Comments are always welcome. The Technical Observations & Views here are my own and not necessarily those of Oracle or its affiliates. These are purely based on my understandings, learnings and resolutions of various customer issues.

25 Responses to Consistent Gets Myth…..

  1. Anand Prakash says:

    Hi Vivek,
    Nice one. Good learning for me. What is the difference between consistent gets and db block gets?

    Regards,
    Anand

    • Vivek says:

      Anand,

      db block gets are gets in the current mode (as of now). The blocks are required in the current mode usually for updates/deletes. SELECT statements require blocks to be read in consistent mode, as of the point in time when the query was started, and this may require reconstruction of blocks from the undo segments.
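
      A minimal way to see both counters move (a sketch only, reusing the test_cg table from the post):

      set autot trace stat
      select * from test_cg;                        -- reads show up as consistent gets
      update test_cg set object_name = object_name; -- current-mode reads show up as db block gets
      rollback;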

      Regards
      Vivek

  2. coskan says:

    Good one Vivek but I think formula only works for manual segment space management otherwise there is a slight difference with the formula

  3. Pingback: Blogroll Report 26/02/2010 – 05/03/2010 « Coskan’s Approach to Oracle

  4. Jigar Doshi says:

    New Follower to your blog 🙂

  5. roberto says:

    You wrote:
    “Once the Block is read into the private memory of the User, the required row is fetched.”
    I always thought that the server process works directly on the buffer; it does not copy the whole buffer into the PGA, it extracts rows and takes into the PGA only those rows. Am I wrong?

    • Vivek says:

      Hi Roberto,

      Oracle reads a block at a minimum; therefore, a block is copied into the private memory of the User. The sequence is (Example: 100 Rows in a Block and Arraysize 15)

      1. If the block is found in memory, Increment Consistent Gets to 1
      2. Read the block from the memory into the private memory of the user (UGA).
      3. Fetch 15 rows to the user
      4. Give up the Block
      5. Next 15 rows are required, re-read the block and increment Consistent Gets to 2
      and so on…

      When you say rows are read into the private memory, this holds true only for Hash Joins, where the rows of the build table are hashed and kept in the memory. Similar Hash algorithm is applied to the rows of the probe table (based on the Join Columns) and a perfect match is done.

      Regards
      Vivek

      • Roberto says:

        Ok.
        And if I have to run a block update (for simplicity, without a subquery)?
        – get block into UGA (current read)
        – modify block in UGA
        – OVERWRITE block on buffer cache?
        NO!

        In this case I believe that Oracle works on buffer cache: it applies change vectors to buffer.

      • Vivek says:

        Roberto,

        Will have to check this (maybe by way of an example). As I understand, it has to read the block into the private memory area of the user.

        Maybe my interpretation of the way Oracle fetches the data is wrong. I need to verify this. If I get something that matches my understanding, I will surely post it here.

        Regards
        Vivek

  6. Vitek says:

    Great article! It’s a pity it describes the formula for a full table scan only. I wonder what formula is used for calculating consistent gets when it comes to an index range scan.

  7. anil says:

    i think this site has some design issues… like the header font is coming out of its allotted space and users are unable to read it…

    • Vivek says:

      Hi Anil,

      It could be a browser issue, because before posting a blog, I usually check the font and alignment. Do let me know in which specific posting you find the font to be an issue.

      Regards
      Vivek

  8. Nakul says:

    Hi

    It’s an eye opener! From the illustration above, does it mean that if we increase the value of arraysize, we can get better performance? Because with an increase in the value of arraysize, the consistent gets value decreases, so Oracle has to do less work to get the required number of rows.

    Thanks ,
    Nakul.

    • Vivek says:

      Hi Nakul,

      Yes, increasing the Arraysize will improve the performance but then there has to be some limitation on this. A very high value can also cause a performance bottleneck.

      Regards
      Vivek

  9. Pingback: Consistent Gets Myth ! Correction… « Real Life Database / SQL Experiences : An Oracle Blog from Vivek Sharma

  10. Balaji says:

    Hi Vivek,

    Thanks for such a good example and detailed explanation… just wanted to know if this holds good for all versions, and I have a query on this…

    I tried creating DUAL in my schema and results are as follows


    SQL> create table dual as select * from dual;

    Table created.

    SQL> exec dbms_stats.gather_table_stats(user,'DUAL');

    PL/SQL procedure successfully completed.

    SQL> set autot trace stat
    SQL> select * from dual;

    Statistics
    ----------------------------------------------------------
    0 recursive calls
    12 db block gets
    5 consistent gets
    0 physical reads
    0 redo size
    201 bytes sent via SQL*Net to client
    90 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed

    SQL> /

    Statistics
    ----------------------------------------------------------
    0 recursive calls
    12 db block gets
    5 consistent gets
    0 physical reads
    0 redo size
    201 bytes sent via SQL*Net to client
    179 bytes received via SQL*Net from client
    3 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed

    SQL> /

    Statistics
    ----------------------------------------------------------
    0 recursive calls
    12 db block gets
    5 consistent gets
    0 physical reads
    0 redo size
    201 bytes sent via SQL*Net to client
    179 bytes received via SQL*Net from client
    3 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed

    SQL> set autot off
    SQL> select num_rows,blocks from user_tables where table_name='DUAL';

    NUM_ROWS BLOCKS
    ---------- ----------
    1 1

    SQL>

    According to the above formula, the consistent gets should be around 1, but here the consistent gets are 5. Why is this?

    • Vivek says:

      Hi Balaji,

      Sorry for replying late to your query. Was a bit busy for the last few months and therefore my blog access was less. Anyway, to answer your question, please note that during a Full Table Scan, the gets are incremented because of the visits to the Segment header and other blocks before actually hitting the blocks where rows are stored. On your database, if you create a small table, like:

      create table emp_test as select * from emp where rownum<=1;
      select sysdate from emp_test;

      This query would also get the results in 5 consistent gets. In my case, it was 3 gets. The header blocks, which are an overhead, are scanned only once for every Full Table Scan.

      Regards
      Vivek

  11. Vineet Arya says:

    Hi Vivek,

    Good Article!! But could you please elaborate more on Balaji’s scenario: “during a Full Table Scan, the gets are incremented because of the visit to Segment header and other blocks before actually hitting the blocks where rows are stored”. Could you please provide some practical solution to this, the same as you did for the touch count on blocks.

    SQL> SELECT * FROM TEST where x<300;

    299 rows selected.

    Statistics
    ----------------------------------------------------------
    1 recursive calls
    0 db block gets
    20 consistent gets
    0 physical reads
    0 redo size
    3144 bytes sent via SQL*Net to client
    520 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    299 rows processed

    Arraysize is 5000.

    As per your query, I am getting consistent gets as 18, but according to the trace it is 20. I am the only user working on this, so there is nothing in the way of undo segment generation.

         BLKNO  TOTAL_CNT  FINAL_CNT ROWS_REMAINING  TOUCH_CNT
    ---------- ---------- ---------- -------------- ----------
        298473        660        660            660          1
        298474        660      -3680           1320          1
        298475        660      -3020           1980          1
        298476        660      -2360           2640          1
        298477        660      -1700           3300          1
       1185841        660      -1040           3960          1
       1185842        660       -380           4620          1
       1259889        660        280            280          2
       1259890        660      -4060            940          1
       1259891        100      -3960           1040          1
       1259892        660      -3300           1700          1
       3179188        660      -2640           2360          1
       3179189        660      -1980           3020          1
       3179190        660      -1320           3680          1
       3179191        660       -660           4340          1
       3179192        660          0              0          2
               ---------- ---------- -------------- ----------
    sum             10000                                   18

    Thanks In Advance!!!

    • Vivek says:

      Hi Vineet,

      Yes, it’s true that for a Full Table Scan, the Segment Header block is also visited once. I recreated my test_cg table with 3000 rows. dba_tables shows that there were 44 blocks in the table. If you query the dba_segments.header_block column, it will give you the segment header block number. See below:

      SQL> select HEADER_BLOCK, blocks, EXTENTS from dba_segments where segment_name='TEST_CG';
      
      HEADER_BLOCK     BLOCKS    EXTENTS
      ------------ ---------- ----------
               242         48          6
      
      Elapsed: 00:00:00.13
      
      SQL> select EXTENT_ID, FILE_ID, BLOCK_ID, BLOCKS from dba_extents where SEGMENT_NAME='TEST_CG';
      
       EXTENT_ID    FILE_ID   BLOCK_ID     BLOCKS
      ---------- ---------- ---------- ----------
               0          4        240          8
               1          4        248          8
               2          4       1536          8
               3          4       1544          8
               4          4       1552          8
               5          4       1560          8
      
      6 rows selected.
      
           BLKNO  TOTAL_CNT  FINAL_CNT ROWS_REMAINING  TOUCH_CNT
      ---------- ---------- ---------- -------------- ----------
             243         88         88             13          6
             244         84         82              7          7
             245         81         73             13          6
             246         76         74             14          6
             247         81         80              5          7
             248         80         70             10          6
             249         82         77              2          7
             250         77         64              4          6
             251         73         62              2          6
             252         78         65              5          6
             253         79         69              9          6
             254         79         73             13          6
             255         81         79              4          7
            1537         82         71             11          6
            1538         77         73             13          6
            1539         81         79              4          7
            1540         80         69              9          6
            1541         81         75              0          7
            1542         78         78              3          6
            1543         78         66              6          6
            1544         76         67              7          6
            1545         78         70             10          6
            1546         78         73             13          6
            1547         76         74             14          6
            1548         81         80              5          7
            1549         79         69              9          6
            1550         79         73             13          6
            1551         77         75              0          7
            1553         78         78              3          6
            1554         76         64              4          6
            1555         79         68              8          6
            1556         77         70             10          6
            1557         76         71             11          6
            1558         75         71             11          6
            1559         79         75              0          7
            1560         77         77              2          6
            1561         78         65              5          6
            1562         82         72             12          6
            1563          3          0              0          2
                 ----------
      sum              3000
      
      39 rows selected.
      

      The query that shows the touch_count lists 39 blocks in the table that actually contained data (see “39 rows selected”). The starting block number with rows is 243, whereas the dba_extents view shows the segment starting at block 240, which means blocks 240, 241 and 242 are visited once each and, therefore, a query with an arraysize of 15 shows 242 consistent gets.

      I am not sure what practical solution you are looking for, as this is normal and cannot be avoided. Please feel free to write to me on this.

      Regards
      Vivek

  12. praveen says:

    Vivek, what’s the calculation logic behind PR? I am noticing one block is being read per visit to the disk => Physical Reads are almost equal to the number of blocks in the table. Any insights on this?

    • Vivek Sharma says:

      Praveen, Physical Reads will be incremented every time a block is read from disk into the buffer cache. Once read, this block can be touched multiple times. A block, once loaded into the cache, stays there until it is aged out. Therefore, you would see physical reads to be more or less equal to the number of blocks, while consistent gets will depend on the number of times the block is accessed from the cache.
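
      One way to observe this on a test system (a sketch only; flushing the buffer cache should never be done on a production database):

      alter system flush buffer_cache;
      set autot trace stat
      select * from test_cg;   -- first run: physical reads roughly equal to the table's blocks
      select * from test_cg;   -- second run: physical reads drop to 0, consistent gets stay the same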

  13. satya says:

    Awesome article sir. A real eye opener.

    Thanks
    Satya

  14. emre says:

    i’m trying to test this calculation but the result in the trace is 2 consistent gets less than the query result.

    what can

    • emre says:

      what can be the reason for this?

      • Vivek Sharma says:

        Hi Emre… Sorry for the delay. I went through the test case sent to me on my email. It surely looks pretty neat and accurate. I will validate this at my end. Currently, my test database is down due to some hardware upgrade. However, would it be possible to check the calculation using Manual Segment Space Management? As Coskan commented on this blog, maybe MSSM vs. ASSM is causing the difference.
