Oracle Autonomous Database ! World’s First Autonomous Database

Sangam 2017 was a great hit, with 800+ attendees, and the organizers did a great job of managing the show effectively. As mentioned in my earlier post, this year I presented sessions on "Indexing : Facts and Myths" on 8th December and "Autonomous Database" on 9th December. Apart from these, I also hosted a "Fireside Chat on Database/Application Performance" along with other speakers, including Tirthankar Lahiri, VP for Oracle In-Memory Technologies Product Management, and Andrew Holdsworth, VP for Oracle Real World Performance. Together, we addressed the queries raised by the participants. The fireside chat drew around 100+ attendees.

While there is always demand for a technical session, and I had rightly guessed a decent crowd for my Indexing session, I was surprised to see a full house (around 200+) for the session on Autonomous Database. This clearly means the technical community wants to know more about this interesting new technology, the world's first autonomous database. The session was very interactive and I tried to respond to many of the queries, including the topmost concern: the future of the DBA role.

My presentation kicked off with a Q&A on some of the automated features Oracle has introduced since Oracle 9i. In my opinion, Automatic Segment Space Management (ASSM), introduced in Oracle 9i, was the very first self-tuning feature, as it dynamically adapts to the (re)configuration of RAC nodes without any changes required. This shows that Oracle's journey towards the Autonomous Database started more than a decade ago – remember, Oracle 9i was released in 2001. Since then, Oracle has introduced many features that take the burden off the DBAs, and all of these features have one thing in common – AUTOMATIC. Automation is obviously one of the key drivers when it comes to the Autonomous Database.

During the session, I also discussed the difference between Automatic and Autonomous. Many organizations have introduced some automation or the other to reduce or eliminate mundane tasks. A certain amount of automation can be done this way; however, to make a critical database entirely autonomous, full end-to-end automation is required – including automation of complex tasks.

The underlying database for the Autonomous Database is Oracle 18c. However, many were under the impression that Oracle 18c by itself is an Autonomous Database. It is important to know that Oracle 18c alone is not an Autonomous Database.

Autonomous Database = Oracle 18c + Oracle Cloud Automation and Innovation + Secret Sauce

So, the Autonomous Database is made up of multiple components. The core underlying database version is 18c, which is integrated with Oracle Cloud and combined with specialized tooling and automation that Oracle has created on the cloud, some of it developed over the years. Machine learning algorithms are used at every layer to make it more proactive.

Exadata has been a proven innovation when it comes to running an Oracle Database. The Autonomous Database runs on Exadata, which further provides a healthy, highly available and high-performance database for any kind of workload.

I can write much more about the Autonomous Database, but will hold that for another post. I thought of writing this one because the topic generated huge interest during Sangam, which excited me a lot. 🙂

I would be happy to respond to any queries related to the Autonomous Database.

Sangam 2017 ! Largest Annual Event of All India Oracle User Group

Sangam, the largest annual event of the All India Oracle User Group, is back – this time in Hyderabad. Oracle experts from across the globe will be speaking, so this is the best opportunity to learn from them and collaborate with other Oracle professionals.

I will be presenting 2 sessions and will be hosting a "Fireside Chat with Performance Tuning Experts" round-table discussion. Alongside me will be Tirthankar Lahiri, VP-Product Management for In-Memory Technologies, Oracle, Karan Dodwal and Kishore Surve.

Oracle Autonomous Database is one of the most talked-about technologies, and many of you will be eager to know more about it. I will be speaking on the Autonomous Database on 9th December 2017. At present, it is scheduled for 2 pm India time. Please do check the schedule, as it can change. Seats are limited, so to avoid disappointment, kindly book your seat in advance.

My other session is on "Indexing : Facts and Myths". This session will walk you through some of the common myths and a few facts – a must-attend session for technical folks. Again, due to the limited capacity, it is better to book your seat in advance. This one is scheduled for 8th December 2017.

On 9th December 2017, join me and other Oracle experts for a fireside chat on performance. Bring your questions and get responses from the experts. Performance queries are difficult to address without some data or artifacts, so if you can carry some of these along, it will help us and make the chat more meaningful.

See you in Hyderabad on 8th December.

Index – Ordering of Columns

Following my previous blog on Consistent Gets for an Index Scan, I received a few more queries over chat. These queries were about the Clustering Factor of an index, and the recommendation (or comment) was that the columns should be ordered so as to keep the Clustering Factor as low as possible. The other comment was that high or low cardinality plays a critical role, with an exception for Unique or Primary Key indexes – and again, the Clustering Factor was cited as the reason. I thought of clarifying this as well. In my opinion, this too is a MYTH.

The myth I am going to address here is: Indexes and column ordering should be designed such that the Clustering Factor is as low as possible. This means an index on (B,A) should be preferred over (A,B) if the clustering factor of (A,B) is higher than that of (B,A).

Assume I have application queries like:

Query#1 : select C, D from TABLE_A where A=:b1;

Query#2 : select C, D from TABLE_A where B=:b1 and A=:b2;

If I go by the above myth, should I then create an index on (B,A)? Will this index be used for Query#1? Remember, an Index Skip Scan is an optimizer decision and is chosen only if the leading column is a low-cardinality column – and "low cardinality" is itself an optimizer decision, based on costing. If the index on (B,A) is not used by Query#1, should I then go ahead and create another index on A? Indexes improve query performance but are an overhead for DMLs, so the fewer the indexes, the better the DML performance. Further, if a column is badly clustered, it does not matter whether it is the leading or the trailing column, and we will see this with an example. Therefore, as I mentioned earlier and in my previous blog, the column ordering of an index should be based on your application queries, and should not be designed on the basis of any other table- or column-level statistics.
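
To verify this yourself, a minimal sketch along the following lines can be used. TABLE_A here is hypothetical (the column names and data distribution are my assumptions, not from an actual application), but it lets you check whether the optimizer picks the (B,A) index for a predicate on A alone:

-- hypothetical TABLE_A : B is low cardinality, A is high cardinality
create table table_a as
select mod(rownum,5)     b,
       rownum            a,
       rpad('x',30,'x')  c,
       rpad('y',30,'y')  d
from   dual connect by level <= 10000;

create index table_a_ba on table_a(b, a);

exec dbms_stats.gather_table_stats(user,'TABLE_A');

set autotrace traceonly explain
-- Query#1 : does the optimizer cost an INDEX SKIP SCAN on TABLE_A_BA, or a full scan?
select c, d from table_a where a = 58;
-- Query#2 : a straightforward INDEX RANGE SCAN on TABLE_A_BA
select c, d from table_a where b = 1 and a = 58;
set autotrace off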

In this case, I am creating a table VIVEK_TEST with 10K rows. The column BC stands for BAD_CLUSTERED and will dominate the clustering factor. There are 2 other columns of interest – OBJECT_ID, which is a non-unique column, and RNUM_UNQ, which is a unique column.


exec dbms_random.seed(0);

create table vivek_test as
with test as
(select * from all_objects where rownum between 1 and 10000)
select  trunc((rownum-1)/1000) object_id,
	rownum	rnum_unq,
        round(dbms_random.value(1,100),0) bc,
        a.object_name,
        a.temporary,
        a.created
from    test a,
        test b
where  rownum between 1 and 10000;

exec dbms_stats.gather_table_stats(user,'VIVEK_TEST');

column owner for a30
select owner, num_rows, blocks from dba_tables where table_name='VIVEK_TEST';

OWNER                  NUM_ROWS     BLOCKS
-------------------- ---------- ----------
SCOTT                     10000         62

First, I will create 4 Indexes – 2 of these are Single Column Indexes and 2 are Multi-Column.

create index vivek_test_oid on vivek_test(object_id);
create index vivek_test_oid_bc on vivek_test(object_id, bc);
create index vivek_test_bc on vivek_test(bc);
create index vivek_test_bc_oid on vivek_test(bc,object_id);

column index_name for a30
select index_name, num_rows, blevel, leaf_blocks, distinct_keys, clustering_factor
from    dba_indexes
where   table_name='VIVEK_TEST'
order by 1;

INDEX_NAME                       NUM_ROWS     BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR
------------------------------ ---------- ---------- ----------- ------------- -----------------
VIVEK_TEST_BC                       10000          1          20           100              4642 <-- Single Column on BC (Bad_Clustered)
VIVEK_TEST_BC_OID                   10000          1          24          1000              4642 <-- BC is the Leading Column
VIVEK_TEST_OID                      10000          1          20            10                56
VIVEK_TEST_OID_BC                   10000          1          24          1000              4856 <-- BC is the Trailing Column

As seen in the example above, column ordering hardly matters when it comes to the Clustering Factor – the badly clustered column dominates it whether it is the leading or the trailing column. My choice of indexing would be:

VIVEK_TEST_BC_OID if most of my queries are on 
	BC=:b1;
	BC =:b1 and object_id=:b2;

VIVEK_TEST_OID_BC if most of my queries are on 
	object_id=:b1;
	BC=:b1 and object_id=:b2;

Now, I will drop these indexes and create new indexes on the combination of BC and RNUM_UNQ. RNUM_UNQ is a 100% unique column.

create index vivek_test_rnum on vivek_test(rnum_unq);
create index vivek_test_rnum_bc on vivek_test(rnum_unq, bc);
create index vivek_test_bc on vivek_test(bc);
create index vivek_test_bc_rnum on vivek_test(bc, rnum_unq);


column index_name for a30
select index_name, num_rows, blevel, leaf_blocks, distinct_keys, clustering_factor
from    dba_indexes
where   table_name='VIVEK_TEST'
order by 1;

INDEX_NAME                       NUM_ROWS     BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR
------------------------------ ---------- ---------- ----------- ------------- -----------------
VIVEK_TEST_BC                       10000          1          20           100              4642 <-- Single Column on BC (Bad_Clustered)
VIVEK_TEST_BC_RNUM                  10000          1          26         10000              4642 <-- Leading Column dominates CF
VIVEK_TEST_RNUM                     10000          1          21         10000                56
VIVEK_TEST_RNUM_BC                  10000          1          26         10000                56


For an index that includes a 100% unique column, whether leading or trailing, the leading column dictates the Clustering Factor calculation. This is expected, because for the Clustering Factor calculation the values are sorted on the first column and then on the second column. For a better idea of how it is calculated, you may run the following queries:

## CF Calculation for BC, RNUM_UNQ (it matches the value as per DBA_INDEXES)

select  sum(block_change) from (
select  block_fno, bc, RNUM_UNQ, prev_bfno,
        (case when nvl(prev_bfno,0)!=block_fno then 1 else 0 end) block_change from (
        select  block_fno, bc, RNUM_UNQ, lag(block_fno) over (order by bc, rnum_unq) prev_bfno from (
                select  dbms_rowid.rowid_block_number(rowid)||'.'||
                        dbms_rowid.ROWID_TO_ABSOLUTE_FNO(rowid,'&Schema','&table_name') block_fno,
                        bc, RNUM_UNQ
                from  VIVEK_TEST
                order by bc, RNUM_UNQ, block_fno)
        )
);

SUM(BLOCK_CHANGE)
-----------------
             4642


## CF Calculation for RNUM_UNQ, BC (it matches the value as per DBA_INDEXES)

select  sum(block_change) from (
select  block_fno, RNUM_UNQ, bc, prev_bfno,
        (case when nvl(prev_bfno,0)!=block_fno then 1 else 0 end) block_change from (
        select  block_fno, RNUM_UNQ, bc, lag(block_fno) over (order by rnum_unq, bc) prev_bfno from (
                select  dbms_rowid.rowid_block_number(rowid)||'.'||
                        dbms_rowid.ROWID_TO_ABSOLUTE_FNO(rowid,'&Schema','&table_name') block_fno,
                        bc, RNUM_UNQ
                from  VIVEK_TEST
                order by RNUM_UNQ, bc, block_fno)
        )
);

SUM(BLOCK_CHANGE)
-----------------
               56

This query can be modified further to see the row-by-row calculation: remove the outer SUM(BLOCK_CHANGE) SELECT and the output shows, for every index entry, BLOCK_CHANGE set to 1 whenever the table block changes. This query will help you understand the calculation, and it holds true for a combination of non-unique columns as well – the calculation is no different just because a unique column is involved, as a closer look at the queries pasted above will make clear.
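
For reference, this is the (BC, RNUM_UNQ) variant of the query with the outer SUM removed; every row shows the BLOCK_CHANGE flag that gets summed into the clustering_factor:

select  block_fno, bc, RNUM_UNQ, prev_bfno,
        (case when nvl(prev_bfno,0)!=block_fno then 1 else 0 end) block_change from (
        select  block_fno, bc, RNUM_UNQ, lag(block_fno) over (order by bc, rnum_unq) prev_bfno from (
                select  dbms_rowid.rowid_block_number(rowid)||'.'||
                        dbms_rowid.ROWID_TO_ABSOLUTE_FNO(rowid,'&Schema','&table_name') block_fno,
                        bc, RNUM_UNQ
                from  VIVEK_TEST
                order by bc, RNUM_UNQ, block_fno)
        )
order by bc, RNUM_UNQ;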

During my session, I also mentioned a FLAW in the Clustering Factor: it does not take the caching effect into consideration, and to address this, Oracle introduced the TABLE_CACHED_BLOCKS preference. I am against manually setting clustering_factor to a lower value, as it will be overwritten by the next statistics gathering. I am also STRONGLY against using CTAS with ORDER BY to improve the clustering_factor – you may improve the clustering_factor of one index, but you will worsen it for the others. The better way is to use the TABLE_CACHED_BLOCKS preference, as depicted below.

I will use the queries pasted above to come up with the actual clustering_factor and will then set the TABLE_CACHED_BLOCKS preference. For the demonstration, I will work on an index on (BC, RNUM_UNQ); the same process can be used for any of the indexes.

## I will create a temporary table to hold the clustering_factor calculation


create table cluf_factor as
select bc, blkno,
        lag(blkno,1,blkno) over(order by bc) prev_blkno,
        case when blkno!=lag(blkno,1,blkno) over(order by bc) or rownum=1
           then 1 else null end cluf_ft from
(select bc, rnum_unq, dbms_rowid.rowid_block_number(rowid) blkno
from    VIVEK_TEST
where   BC is not null
order by bc);

compute sum of cnt on report
break on report
select blkno, count(*) cnt from cluf_factor group by blkno order by 1;

     BLKNO        CNT
---------- ----------
    255883        188
    255884        185
    255885        185
    255886        185
    255887        185
    255936        182
    255937        180
    255938        180
    255939        180
    255940        180
    255941        180
    255942        180
    255943        180
    255945        180
    255946        180
    255947        180
    255948        180
    255949        180
    255950        180
    255951        180
    258896        180
    258897        180
    258898        180
    258899        180
    258900        180
    258901        180
    258902        180
    258903        180
    258905        180
    258906        180
    258907        180
    258908        180
    258909        180
    258910        180
    258911        180
    260480        180
    260481        180
    260482        180
    260483        180
    260484        180
    260485        180
    260486        180
    260487        180
    260489        180
    260490        180
    260491        180
    260492        180
    260493        180
    260494        180
    260495        180
    260496        180
    260497        180
    260498        180
    260499        180
    260500        180
    260501         70
           ----------
sum             10000

56 rows selected.

As per the output, each block is touched around 180 times, and each of these touches is counted as if the block were read from disk. However, Oracle works largely in memory (buffer cache, shared pool etc.): once a block is read from disk, it is cached in the buffer cache, avoiding further disk I/O. This flaw causes an unnecessarily high clustering_factor, and it is addressed by way of TABLE_CACHED_BLOCKS.

TABLE_CACHED_BLOCKS defaults to 1. Increasing this value (to, say, 250) means the clustering_factor for an index is not incremented if the table block referenced by the current index entry has been referenced by any of the previous 250 index entries. In our case, each table block was referenced by around 180 previous index entries, so I will change this value to 180.

select index_name, leaf_blocks, clustering_factor from dba_indexes
where table_name = 'VIVEK_TEST';

INDEX_NAME                     LEAF_BLOCKS CLUSTERING_FACTOR
------------------------------ ----------- -----------------
VIVEK_TEST_RNUM                         21                56
VIVEK_TEST_RNUM_BC                      26                56
VIVEK_TEST_BC                           20              4642
VIVEK_TEST_BC_RNUM                      26              4642

select dbms_stats.get_prefs(pname=>'TABLE_CACHED_BLOCKS',ownname=>'SCOTT',tabname=>'VIVEK_TEST') preference from dual;

PREFERENCE
-----------------------------------------------------------
1

## Setting TABLE_CACHED_BLOCKS to 180
exec dbms_stats.set_table_prefs(ownname=>'SCOTT',tabname=>'VIVEK_TEST',pname=>'TABLE_CACHED_BLOCKS',PVALUE=>180);

select dbms_stats.get_prefs(pname=>'TABLE_CACHED_BLOCKS',ownname=>'SCOTT',tabname=>'VIVEK_TEST') preference from dual;

PREFERENCE
------------------------------------------------------------
180

exec dbms_stats.gather_table_stats(user,'VIVEK_TEST',method_opt=>'for all columns size 1', no_invalidate=>false);

select index_name, leaf_blocks, clustering_factor from dba_indexes
where table_name = 'VIVEK_TEST';

INDEX_NAME                     LEAF_BLOCKS CLUSTERING_FACTOR
------------------------------ ----------- -----------------
VIVEK_TEST_RNUM                         21                56
VIVEK_TEST_RNUM_BC                      26                56
VIVEK_TEST_BC                           20                56
VIVEK_TEST_BC_RNUM                      26                56

The Clustering Factor is calculated for unique indexes as well and is critical for costing. The only time the clustering_factor is not considered, or not relevant, for a Unique or Primary Key index is when the query references all the columns of the Unique or Primary Key, i.e. an INDEX UNIQUE SCAN. The clustering_factor plays a critical role when the Unique or Primary Key index is accessed via an INDEX RANGE SCAN, as the sketch below illustrates.
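
A quick sketch of this on the VIVEK_TEST table from above: recreate the index on RNUM_UNQ as a unique index and use a range predicate, so the plan falls back to an INDEX RANGE SCAN whose table-access cost is driven by the clustering_factor even though the index is unique. (The index name VIVEK_TEST_RNUM_UQ is mine, for illustration.)

drop index vivek_test_rnum;
create unique index vivek_test_rnum_uq on vivek_test(rnum_unq);

set autotrace traceonly explain
-- range predicate on the unique column : INDEX RANGE SCAN, costed using the clustering_factor
select object_name, created
from   vivek_test
where  rnum_unq between 100 and 200;
set autotrace off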

With this blog, I tried to address the following myths:

- 	Indexes and Column Ordering should be designed such that the Clustering Factor is as low as possible. 
- 	Clustering_Factor is irrelevant for Unique and Primary Key Indexes.
-	High Cardinality or Low Cardinality play important roles with an exception for Unique & Primary Key.

Be it a Unique Index, a Primary Key Index or any other index, the column ordering has to be based on your application queries.

One interesting fact on TABLE_CACHED_BLOCKS: while testing this, I initially created the table with 100K rows. The table had around 469 blocks, and each table block was referenced by around 230 previous index entries, so I set the preference to 230 and regathered the statistics – it didn't work. With the same table creation script and 10K rows, everything worked as expected. I am investigating this and will update the blog once I have something to share.

Consistent Gets for an Index Scan

This question was raised by a participant during my user group session on "Indexing : Facts and Myths". Therefore, I thought of writing about it with the same example that I demonstrated.

I was demonstrating the myth that "a high-cardinality column should be the leading column of an index", and for this I created the following table with 2 indexes.

create table t1 as
select * from all_objects;

exec dbms_stats.gather_table_stats(user,'T1');

SQL> select owner, num_rows, blocks from dba_tables where table_name='T1' and owner='SCOTT';

OWNER                  NUM_ROWS     BLOCKS
-------------------- ---------- ----------
SCOTT                     68605       1377

SQL> select column_name, num_distinct, num_nulls from dba_tab_columns
where   owner='SCOTT'
and     table_name='T1'
and     column_name in ('OBJECT_ID','TEMPORARY')
order by 1;

COLUMN_NAME                    NUM_DISTINCT  NUM_NULLS
------------------------------ ------------ ----------
OBJECT_ID                             68605          0
TEMPORARY                                 2          0

SQL> create index t1_ot on t1(object_id, temporary);

Index created.

SQL> create index t1_to on t1(temporary,object_id);

Index created.

SQL> select index_name, blevel, leaf_blocks, clustering_factor from dba_indexes
where   table_name='T1'
order by 1;

INDEX_NAME                         BLEVEL LEAF_BLOCKS CLUSTERING_FACTOR
------------------------------ ---------- ----------- -----------------
T1_OT                                   1         171              1458
T1_TO                                   1         171              1494

So, I have a table with 68605 rows. OBJECT_ID is 100% distinct and TEMPORARY has 2 distinct values. I have 2 indexes on these columns: T1_OT on (OBJECT_ID, TEMPORARY) and T1_TO on (TEMPORARY, OBJECT_ID). The naming convention uses the first letter of each column in the order it appears in the index – so for T1_OT, O -> OBJECT_ID and T -> TEMPORARY.

The index statistics show that the two indexes are almost the same in terms of height (BLEVEL) and number of leaf blocks, with only a minor difference in the clustering_factor.

I then executed the queries to demonstrate that the I/O done by the query is exactly the same with either of the 2 indexes. For this, I ran the query once letting it use index T1_OT, and then hinted it to use index T1_TO. The cost and I/O for both executions are exactly the same, which leads to the conclusion that cardinality does not matter when designing an indexing strategy – index columns should be based on the application queries and the columns they reference. Each query was executed twice, to make sure the consistent gets we compare are from executions after the hard parse.


## IO's for the Query using an Index T1_OT

select owner, object_name from t1
where      object_id = 58
and        temporary='N';

OWNER                OBJECT_NAME
-------------------- ------------------------------
SYS                  I_CCOL2

set autot trace
select owner, object_name from t1
where      object_id = 58
and        temporary='N';

Execution Plan
----------------------------------------------------------
Plan hash value: 3109227855

---------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |     1 |    48 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |     1 |    48 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | T1_OT |     1 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OBJECT_ID"=58 AND "TEMPORARY"='N')

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          4  consistent gets 
          0  physical reads
          0  redo size
...
...
          1  rows processed

## IO's with Index T1_TO

SQL> select /*+ index(t1,t1_to) */ owner, object_name from t1
     where      object_id = 58
     and        temporary='N';

OWNER                OBJECT_NAME
-------------------- ------------------------------
SYS                  I_CCOL2

Elapsed: 00:00:00.00
SQL> pause;

set autot trace
select /*+ index(t1,t1_to) */ owner, object_name from t1
where      object_id = 58
and        temporary='N';


Execution Plan
----------------------------------------------------------
Plan hash value: 1129381402

---------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |     1 |    48 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |     1 |    48 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | T1_TO |     1 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("TEMPORARY"='N' AND "OBJECT_ID"=58)


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          4  consistent gets
          0  physical reads
          0  redo size
...
...
          1  rows processed

While I have 2 indexes on the same columns, with only the order changed, the optimizer chose the index on (OBJECT_ID, TEMPORARY). As my sessions are interactive (which sometimes means they take more time than allotted :)), I asked the participants the reason behind this optimizer decision and got a lot of guesses. Anju Garg came out with the correct one; I will disclose it later.

At this point, the question raised was: why are there 4 consistent gets? The assumption was that it should be 3 (BLEVEL + leaf block access + table block).

Next, I dropped one of the indexes and re-created it as a unique index. Remember, OBJECT_ID is 100% distinct, so I dropped and re-created T1_OT.

SQL> drop index t1_ot;

Index dropped.

Elapsed: 00:00:00.04
SQL> create unique index t1_ot_uq on t1(object_id, temporary);

Index created.

SQL> select index_name, blevel, leaf_blocks, clustering_factor, uniqueness from dba_indexes
where   table_name='T1'
order by 1;
INDEX_NAME                         BLEVEL LEAF_BLOCKS CLUSTERING_FACTOR UNIQUENES
------------------------------ ---------- ----------- ----------------- ---------
T1_OT_UQ                                1         162              1457 UNIQUE
T1_TO                                   1         171              1493 NONUNIQUE

I then executed the query again to check the consistent gets. This time, the consistent gets for the execution plan with the unique index is 3 (as against 4 for the same index when it was non-unique).

select owner, object_name from t1
where      object_id = 58
and        temporary='N';

Execution Plan
----------------------------------------------------------
Plan hash value: 1959391432

----------------------------------------------------------------------------------------
| Id  | Operation                   | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |          |     1 |    48 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1       |     1 |    48 |     2   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN         | T1_OT_UQ |     1 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OBJECT_ID"=58 AND "TEMPORARY"='N')


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads

There is a difference in the consistent gets between the unique and the non-unique index, even though the height of both indexes is exactly the same. This difference is the key to the original question. So, I generated a 10046 trace for the 2 queries (with the unique scan and without), and the relevant portions from the traces are below; they explain the reason behind the 4 consistent gets.
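
For completeness, one common way to capture such a trace for a single statement is sketched below (level 8 includes wait events; the trace file lands in the directory reported by V$DIAG_INFO as "Diag Trace"):

alter session set tracefile_identifier = 'cg_test';
alter session set events '10046 trace name context forever, level 8';

select owner, object_name from t1
where      object_id = 58
and        temporary='N';

alter session set events '10046 trace name context off';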

## 10046 portion for the Non-Unique index. Note the extra cr=1 on the second FETCH#18446604434610142176.
## In this case, once the blocks are fetched from the Index (cr=2) and the Table (cr=1), total cr=3, there is an extra cr immediately after
## 'SQL*Net message from client'. So, the total cr = 2 + 1 + 1 (extra) = 4.

PARSE #18446604434610142176:c=118,e=119,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1259244381,tim=7093069466
EXEC #18446604434610142176:c=88,e=93,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1259244381,tim=7093070117
WAIT #18446604434610142176: nam='SQL*Net message to client' ela= 8 driver id=1413697536 #bytes=1 p3=0 obj#=0 tim=7093070421
WAIT #18446604434610142176: nam='db file scattered read' ela= 2292 file#=12 block#=275312 blocks=8 obj#=77545 tim=7093073241
WAIT #18446604434610142176: nam='db file sequential read' ela= 49 file#=12 block#=255875 blocks=1 obj#=77543 tim=7093073719
FETCH #18446604434610142176:c=2088,e=3043,p=9,cr=3,cu=0,mis=0,r=1,dep=0,og=1,plh=1259244381,tim=7093073891
WAIT #18446604434610142176: nam='SQL*Net message from client' ela= 624 driver id=1413697536 #bytes=1 p3=0 obj#=77543 tim=7093074878
FETCH #18446604434610142176:c=104,e=104,p=0,cr=1,cu=0,mis=0,r=0,dep=0,og=1,plh=1259244381,tim=7093075164
STAT #18446604434610142176 id=1 cnt=1 pid=0 pos=1 obj=77543 op='TABLE ACCESS BY INDEX ROWID T1 (cr=4 pr=9 pw=0 str=1 time=3109 us cost=2 size=48 card=1)'
STAT #18446604434610142176 id=2 cnt=1 pid=1 pos=1 obj=77545 op='INDEX RANGE SCAN T1_TO (cr=3 pr=8 pw=0 str=1 time=2835 us cost=1 size=0 card=1)'
WAIT #18446604434610142176: nam='SQL*Net message to client' ela= 6 driver id=1413697536 #bytes=1 p3=0 obj#=77543 tim=7093076331

## 10046 portion for Unique index. In this case, once the Blocks are fetched from an Index (cr=2) and Table (cr=1), there is no extra cr.

PARSE #18446604434610123376:c=134,e=135,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1959391432,tim=7120639467
EXEC #18446604434610123376:c=89,e=90,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1959391432,tim=7120640009
WAIT #18446604434610123376: nam='SQL*Net message to client' ela= 8 driver id=1413697536 #bytes=1 p3=0 obj#=0 tim=7120640248
WAIT #18446604434610123376: nam='db file scattered read' ela= 5875 file#=12 block#=260496 blocks=8 obj#=77546 tim=7120646397
WAIT #18446604434610123376: nam='db file sequential read' ela= 64 file#=12 block#=255875 blocks=1 obj#=77543 tim=7120646786
FETCH #18446604434610123376:c=1407,e=6569,p=9,cr=3,cu=0,mis=0,r=1,dep=0,og=1,plh=1959391432,tim=7120646947
STAT #18446604434610123376 id=1 cnt=1 pid=0 pos=1 obj=77543 op='TABLE ACCESS BY INDEX ROWID T1 (cr=3 pr=9 pw=0 str=1 time=6549 us cost=2 size=48 card=1)'
STAT #18446604434610123376 id=2 cnt=1 pid=1 pos=1 obj=77546 op='INDEX UNIQUE SCAN T1_OT_UQ (cr=2 pr=8 pw=0 str=1 time=6240 us cost=1 size=0 card=1)'
WAIT #18446604434610123376: nam='SQL*Net message from client' ela= 583 driver id=1413697536 #bytes=1 p3=0 obj#=77543 tim=7120652705
FETCH #18446604434610123376:c=15,e=15,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=1959391432,tim=7120652925
WAIT #18446604434610123376: nam='SQL*Net message to client' ela= 5 driver id=1413697536 #bytes=1 p3=0 obj#=77543 tim=7120653074

From this, I can infer that the steps carried out for the Non-Unique index (as it is an INDEX RANGE SCAN) are as under:

-	Read the Root Block to get the address of the Leaf Block (IO = 1). 
- 	Read the Leaf Block to check for the matching Object_ID and Temporary Column. Get the ROWID for the table block. (IO = 2).
-	Go to the Table Block to read the other required columns listed in the SELECT list. (IO=3).
-	Go back to the Index Block to check for any other Object_ID and Temporary Values. (IO=4).
-	It is here that it gets to know that there are no more rows.

Bullet point 4 is not required for a Unique Scan, as Oracle knows it is a unique index and no additional read is needed. To confirm this further, I executed a query on OBJECT_ID alone using the unique index. Remember, while OBJECT_ID is 100% unique, the unique index is on 2 columns, and since I am not referencing the 2nd column in the query, the plan changes from UNIQUE SCAN to RANGE SCAN.

select owner, object_name from t1
where      object_id = 58;

Execution Plan
----------------------------------------------------------
Plan hash value: 1834913555

------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |          |     1 |    46 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1       |     1 |    46 |     3   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | T1_OT_UQ |     1 |       |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OBJECT_ID"=58)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          4  consistent gets 

So, this answers the question raised during the session on the rationale behind the 4 consistent reads. Unique and non-unique indexes are internally the same, with a difference only in the way they are accessed: with an INDEX RANGE SCAN, the behaviour of both indexes is exactly the same.

Now, for the other question: why did the optimizer use T1_OT (by default) and not T1_TO? The reason was a TIE between the 2 indexes, which is usually very rare with cost-based optimization, and on a tie the optimizer prefers the index that comes first in alphabetical order – T1_OT comes before T1_TO. To demonstrate this, I dropped and recreated T1_TO as T1_1TO, and the optimizer started using T1_1TO by default.
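
A sketch of that experiment, assuming the original non-unique T1_OT and T1_TO are in place (the recreation changes only the index name, nothing else):

drop index t1_to;
create index t1_1to on t1(temporary, object_id);

set autotrace traceonly explain
-- with the costs still tied, the plan should now report INDEX RANGE SCAN | T1_1TO
select owner, object_name from t1
where      object_id = 58
and        temporary='N';
set autotrace off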

Cloud Day : North India User Group Chapter

I am presenting multiple sessions during the Cloud Day scheduled in Gurgaon on 11th November 2017. I will be speaking on "Oracle IaaS and PaaS", "Oracle In-Memory Database" and "Indexes : Myths and Misconceptions". For registration, click on:

https://www.meraevents.com/event/aioug-nic-cloud-day

Performance Tuning Day – Mumbai Chapter

I am presenting a full-day session on Performance Optimisation for the Mumbai Chapter of the All India Oracle User Group on 5th August 2017. For registration, click on this Performance Tuning Day link. I have revised the content of this session to incorporate the latest optimiser behaviour and changes.

See you all there…

Mumbai Chapter AIOUG Event – 21st Jan 2017

The Mumbai Chapter of the All India Oracle User Group has organized a Tech Day on 21st January 2017. I am presenting too, this time on a new topic, "Basics of Big Data". Please do join us and motivate the Mumbai Chapter team to hold more such events in the future. For registration and the detailed agenda, please click on the following link:

Mumbai Chapter 21st Jan 2017

Sangam 2016

Sangam 2016, the annual event organized by the All India Oracle User Group, is just 35 days from now. It is scheduled for 11th and 12th November in Bangalore – a chance to meet and listen to renowned speakers from all over the world. Visit Sangam 2016 for more details about the event, the speakers, their sessions etc.

I am presenting 2 back-to-back sessions on the Cost Based Optimizer on 12th November 2016. Hope to see you all in large numbers :). I will also be hosting an In-Memory demo booth, along with the Product Management team from the US.


Optimizer – Part III (Frequency & Height Balanced)

Finally, I got some time to write the third post of this series. The Optimizer – Part I and Optimizer – Part II are the best reference before reading this post. From Part II, we inferred that:

  • TIME_ID – Assumptions v/s Actuals √
  • AMOUNT_SOLD – Assumptions v/s Actuals Χ
  • PROMO_ID – Assumptions v/s Actuals Χ

However, with a minor change, which was on a copy of TEST_SELECTIVITY table, the equation changed to:

  • TIME_ID – Assumptions v/s Actuals √
  • TIME_ID – Assumptions v/s Actuals (minor change) Χ
  • AMOUNT_SOLD – Assumptions v/s Actuals Χ
  • PROMO_ID – Assumptions v/s Actuals Χ

A small change triggered a mismatch in the cardinality calculation for the TIME_ID column, which was otherwise nearly accurate. For query performance, an optimal execution plan is critical, and for an optimal execution plan, it is very important that the optimizer comes out with an accurate cardinality. As we have seen in the previous posts, SELECTIVITY is another significant factor and is the starting point for the optimizer. While the cardinality is calculated by the optimizer, the selectivity is (in most cases) stored in the data dictionary, by way of statistics gathered using dbms_stats (or any other method provided by some application vendors).

The optimizer is a piece of code. Its default behaviour (at least for a newly created table) is to consider the data distribution as UNIFORM. For example, in our case (before the minor change), the data in the TIME_ID column was uniform and therefore the optimizer's calculation was nearly accurate. For the other two columns (AMOUNT_SOLD & PROMO_ID), the data was non-uniform and therefore the optimizer's assumptions were way off the actual data distribution. After the table creation, the statistics were gathered automatically (a new 12c feature); in 11g or earlier versions you would have to gather the statistics manually, and you should see the same results. The initial statistics were fed to the optimizer as if the data were uniform. See below:

COLUMN_NAME        NUM_DISTINCT  NUM_NULLS    DENSITY
------------------ ------------ ---------- ----------
AMOUNT_SOLD                 636          0 .001572327
CUST_ID                    7056          0 .000141723
PROD_ID                      72          0 .013888889
PROMO_ID                      4          0        .25
QUANTITY_SOLD                 1          0          1
TIME_ID                    1096          0 .000912409

How do we fix the problem of mis-estimates? In this case, the DENSITY column was used as the SELECTIVITY, and for each of the columns it was calculated as if the data were uniform. This resulted in an erroneous optimizer calculation. As mentioned, the optimizer is a piece of code and has to come out with its calculation based on the input provided. In the absence of additional or accurate statistics, the optimizer assumes a UNIFORM distribution and mis-calculates the SELECTIVITY and the CARDINALITY, as we have seen with our test cases. We have to provide accurate inputs for the optimizer to come up with nearly accurate estimates, and one way to provide these additional statistics is Histograms.
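
To see the scale of the problem, a quick back-of-the-envelope check (using the 800,000-row TEST_SELECTIVITY table from Part II and the PROMO_ID density of .25 shown above) gives the cardinality the uniform assumption predicts for every PROMO_ID value:

-- CARDINALITY = SELECTIVITY x NUM_ROWS = DENSITY x NUM_ROWS
select 0.25 * 800000 uniform_cardinality from dual;
-- i.e. 200000 rows per value, whereas the actual counts (shown later in this post)
-- range from 2074 to 777703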

Let us regather statistics on the table again and check the change in the DENSITY value for each of the columns.

exec dbms_stats.gather_table_stats(user,'TEST_SELECTIVITY', method_opt=>'for all columns size auto', estimate_percent=>100);

The resultant output is as below:

select column_id, column_name, num_distinct, num_nulls,
	density, histogram
from	dba_tab_columns
where	owner='SCOTT'
and	table_name='TEST_SELECTIVITY'
order by 1;

 COLUMN_ID COLUMN_NAME       NUM_DISTINCT  NUM_NULLS    DENSITY HISTOGRAM
---------- ----------------- ------------ ---------- ---------- --------------------
         1 PROD_ID                     72          0 .013888889 NONE
         2 CUST_ID                   7056          0 .000141723 NONE
         3 TIME_ID                   1096          0 .000912409 NONE
         4 PROMO_ID                     4          0 .000000625 FREQUENCY
         5 QUANTITY_SOLD                1          0          1 NONE
         6 AMOUNT_SOLD                636          0   .0018217 HEIGHT BALANCED

The density for two of the three columns has changed, and the HISTOGRAM column tells us that we now have additional statistics on those two columns.

There are 2 questions here

  • Why did the subsequent statistics gathering collect additional statistics (HISTOGRAMS)?
  • Why are there no additional statistics (HISTOGRAMS) on the other columns?

The answer to the first question is that queries against each table and column are tracked in SYS.COL_USAGE$. The subsequent stats gathering job refers to this table to determine the columns on which additional statistics are required. See below:

exec dbms_stats.flush_database_monitoring_info();

select intcol#, column_name, equality_preds, RANGE_PREDS
from	sys.col_usage$ cu, dba_tab_columns tc
where	obj# = (select data_object_id from dba_objects
		where object_name='TEST_SELECTIVITY')
and	cu.intcol# = tc.column_id
and	tc.table_name='TEST_SELECTIVITY';

   INTCOL# COLUMN_NAME       EQUALITY_PREDS RANGE_PREDS
---------- ----------------- -------------- -----------
         6 AMOUNT_SOLD                    1           1
         4 PROMO_ID                       1           0
         3 TIME_ID                        1           1
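
As an aside, DBMS_STATS.REPORT_COL_USAGE (available from 11.2.0.2 onwards) gives a friendlier report of the same usage information; a small sketch:

set long 100000 longchunksize 100000
select dbms_stats.report_col_usage('SCOTT','TEST_SELECTIVITY') from dual;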

The answer to the second question is that for the other columns (except TIME_ID) no queries were executed, so no information was collected in COL_USAGE$. For TIME_ID there is no histogram even though we executed a few queries (and COL_USAGE$ has an entry): the data in this column is UNIFORM, and this is the additional check made internally at the time of gathering statistics. During statistics generation, sample data for each of the columns is computed and validated; if the data is found to be uniform, no histogram is generated, as it is a resource-intensive process and would not be worth the effort.

If you recollect from Part II, the minor change on the TIME_ID column was made on another table, TEST_SELECTIVITY_M, which was an exact replica of TEST_SELECTIVITY. Let's see the results if we gather statistics on TEST_SELECTIVITY_M.

select column_id, column_name, num_distinct, num_nulls,
	density, histogram
from	dba_tab_columns
where	owner='SCOTT'
and	table_name='TEST_SELECTIVITY_M'
order by 1;

 COLUMN_ID COLUMN_NAME          NUM_DISTINCT  NUM_NULLS    DENSITY HISTOGRAM
---------- -------------------- ------------ ---------- ---------- ---------------
         1 PROD_ID                        72          0 .013888889 NONE
         2 CUST_ID                      7056          0 .000141723 NONE
         3 TIME_ID                      1097          0 .000914025 HEIGHT BALANCED
         4 PROMO_ID                        4          0        .25 NONE
         5 QUANTITY_SOLD                   1          0          1 NONE
         6 AMOUNT_SOLD                   636          0 .001572327 NONE

On this table, the only queries executed were on the TIME_ID column, and therefore the additional statistics are on the TIME_ID column.

Coming back to TEST_SELECTIVITY: we now have a Frequency histogram on the PROMO_ID column and a Height Balanced histogram on the AMOUNT_SOLD column. Until 11g, these were the only 2 types of histograms. 12c introduced Top Frequency and Hybrid histograms, which I will cover in the last part of this series. I am on 12c, so to generate Frequency and Height Balanced histograms I had to use estimate_percent => 100 (more on this in the next blog).

Frequency histograms are generated when the number of distinct values does not exceed the number of buckets. The number of buckets, if not specified during statistics gathering, defaults to 254. PROMO_ID has 4 distinct values, whereas AMOUNT_SOLD has 636, which is more than 254 – hence the Height Balanced histogram. Let's execute our queries on these 2 columns and check the cardinality estimates.

Since we have additional statistics, let's first check the details in DBA_TAB_HISTOGRAMS for the PROMO_ID column.

SQL> select ENDPOINT_NUMBER, ENDPOINT_VALUE
  2  from dba_tab_histograms
  3  where table_name='TEST_SELECTIVITY'
  4  and   column_name='PROMO_ID'
  5  order by 1;

ENDPOINT_NUMBER ENDPOINT_VALUE
--------------- --------------
           2074             33
          20052            350
          22297            351
         800000            999

For a Frequency histogram, the data is stored cumulatively: ENDPOINT_NUMBER stores the cumulative number of rows and ENDPOINT_VALUE stores the actual column value. For example, for PROMO_ID=33 we expect 2074 rows, for PROMO_ID=350 we expect 20052-2074=17978 rows, for PROMO_ID=351 we expect 22297-20052=2245 rows, and so on. Let's first derive the per-value estimates straight from the dictionary, and then run the queries for each of these PROMO_ID values.
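
A small sketch using LAG over the cumulative ENDPOINT_NUMBER values (the column alias EXPECTED_ROWS is mine, for illustration):

select endpoint_value promo_id,
       endpoint_number
         - lag(endpoint_number,1,0) over (order by endpoint_number) expected_rows
from   dba_tab_histograms
where  table_name  = 'TEST_SELECTIVITY'
and    column_name = 'PROMO_ID'
order  by endpoint_number;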

SQL> set autot trace
SQL> select cust_id, amount_sold, promo_id from test_selectivity where promo_id=999;

777703 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 4083831454

--------------------------------------------------------------------------------------
| Id  | Operation         | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                  |   777K|  9873K|   960   (2)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TEST_SELECTIVITY |   777K|  9873K|   960   (2)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("PROMO_ID"=999)

The optimizer's cardinality estimate matches the actual number of rows fetched. For the other values too, the estimates match (see below).

SQL> select cust_id, amount_sold, promo_id from test_selectivity where promo_id=350;

17978 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 4083831454

--------------------------------------------------------------------------------------
| Id  | Operation         | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                  | 17978 |   228K|   958   (2)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TEST_SELECTIVITY | 17978 |   228K|   958   (2)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("PROMO_ID"=350)

SQL> select cust_id, amount_sold, promo_id from test_selectivity where promo_id=33;

2074 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 4083831454

--------------------------------------------------------------------------------------
| Id  | Operation         | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                  |  2074 | 26962 |   958   (2)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TEST_SELECTIVITY |  2074 | 26962 |   958   (2)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("PROMO_ID"=33)

SQL> select cust_id, amount_sold, promo_id from test_selectivity where promo_id=351;

2245 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 4083831454

--------------------------------------------------------------------------------------
| Id  | Operation         | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                  |  2245 | 29185 |   958   (2)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TEST_SELECTIVITY |  2245 | 29185 |   958   (2)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("PROMO_ID"=351)

Perfect. The calculation in this case is very simple: take the values from DBA_TAB_HISTOGRAMS and get an accurate CARDINALITY. However, this holds good only for values that exist and are part of the histogram. What if we run a query against a value that does not exist in the table, or that had no rows when the stats were gathered but has rows by the time the query is executed? Such a value has no cumulative data in DBA_TAB_HISTOGRAMS. In such cases, will the optimizer fall back to CARDINALITY = SELECTIVITY x NUM_ROWS, where SELECTIVITY is the DENSITY? Let's check.

select column_id, column_name, num_distinct, num_nulls,
	density, histogram
from	dba_tab_columns
where	owner='SCOTT'
and	table_name='TEST_SELECTIVITY'
and	column_name='PROMO_ID';

 COLUMN_ID COLUMN_NAME       NUM_DISTINCT  NUM_NULLS    DENSITY HISTOGRAM
---------- ----------------- ------------ ---------- ---------- --------------------
         4 PROMO_ID                     4          0 .000000625 FREQUENCY

SQL> select &&optdensity * 800000 Cardinality from dual;
old   1: select &&optdensity * 800000 Cardinality from dual
new   1: select .000000625 * 800000 Cardinality from dual

CARDINALITY
-----------
         .5

If the density is taken as the selectivity, the expected cardinality would be 1 (ceil of 0.5). I will run a query with PROMO_ID=500, a value which does not exist.

SQL> set autot trace
SQL> select * from test_selectivity where promo_id=500;

no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 4083831454

--------------------------------------------------------------------------------------
| Id  | Operation         | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                  |  1037 | 25925 |   958   (2)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TEST_SELECTIVITY |  1037 | 25925 |   958   (2)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("PROMO_ID"=500)

The actual number of rows is ZERO, the optimizer estimated 1037, and the selectivity (density) based expectation was 1. ZERO v/s 1037 – a huge mis-estimate. We can also see that with histograms in place, the optimizer does not use the DENSITY column. So how do we get to the calculation? Here, the 10053 trace file comes in handy. Let's generate a 10053 trace for the non-existent value and look at the relevant portion that contains the calculation.
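
One way to generate that trace is sketched below; the CBO trace needs a fresh optimization, and EXPLAIN PLAN forces one (the trace file is written to the "Diag Trace" directory reported by V$DIAG_INFO):

alter session set tracefile_identifier = 'promo_500';
alter session set events '10053 trace name context forever, level 1';

explain plan for
select * from test_selectivity where promo_id=500;

alter session set events '10053 trace name context off';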

SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for TEST_SELECTIVITY[A]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
  Column (#4):
    NewDensity:0.001296, OldDensity:0.000001 BktCnt:800000.000000, PopBktCnt:800000.000000, PopValCnt:4, NDV:4
  Column (#4): PROMO_ID(NUMBER)
    AvgLen: 4 NDV: 4 Nulls: 0 Density: 0.001296 Min: 33.000000 Max: 999.000000
    Histogram: Freq  #Bkts: 4  UncompBkts: 800000  EndPtVals: 4  ActualVal: yes
  Table: TEST_SELECTIVITY  Alias: A
    Card: Original: 800000.000000  Rounded: 1037  Computed: 1037.000000  Non Adjusted: 1037.000000

As per the 10053 trace, the rounded and computed cardinality is 1037, and the density used is 0.001296, whereas the density in DBA_TAB_COLUMNS is .000000625. There are two additional statistics here: NewDensity and OldDensity. OldDensity is 0.000001, which is the rounded-off value of the actual density stored in DBA_TAB_COLUMNS, i.e. .000000625. What is NewDensity? It is the value used as the final density to calculate the cardinality, i.e. 0.001296 x 800000 = 1037. So, for a non-existent value, the optimizer computes this NewDensity and uses it as the SELECTIVITY to come out with the expected CARDINALITY.

In the case of a Frequency histogram, NewDensity is calculated as 50% of the lowest frequency in DBA_TAB_HISTOGRAMS divided by NUM_ROWS, i.e. 0.5 x 2074 / 800000 = 0.00129625. NewDensity becomes the SELECTIVITY, and CARDINALITY = SELECTIVITY x NUM_ROWS = 0.00129625 x 800000 = 1037 (see below).

select promo_Id, count(*) from test_selectivity group by promo_id order by 2;

  PROMO_ID   COUNT(*)
---------- ----------
        33       2074

select 0.5*2074/800000 NewDensity from dual;

NEWDENSITY
----------
 .00129625
SQL> select round(&&new_density*800000,0) from dual;
old   1: select round(&&new_density*800000,0) from dual
new   1: select round( .00129625*800000,0) from dual

ROUND(.00129625*800000,0)
-------------------------
                     1037

Before we get into more details, let us check the Height Balanced Histograms. We have a Height Balanced Histogram on Amount_Sold Column.

SQL> select column_id, column_name, num_distinct, num_nulls,
  2                  density, histogram
  3  from       dba_tab_columns
  4  where      owner='SCOTT'
  5  and        table_name='TEST_SELECTIVITY'
  6  and             column_name='AMOUNT_SOLD'
  7  order by 1;

 COLUMN_ID COLUMN_NAME       NUM_DISTINCT  NUM_NULLS    DENSITY HISTOGRAM
---------- ----------------- ------------ ---------- ---------- --------------------
         6 AMOUNT_SOLD                636          0   .0018217 HEIGHT BALANCED

We have 636 distinct values for this column and the maximum number of buckets is 254. The way this histogram is generated is that the rows in the table are divided equally into 254 buckets, the maximum value for each bucket is recorded, and the buckets are then compressed if a value spans more than one bucket. I executed a query similar to the one the optimizer runs during statistics gathering (see below).

SQL> select bucket, count(*), min(amount_sold) min_amt, max(amount_sold) max_amt from (
  2  select amount_sold, ntile(254) over (order by amount_sold) bucket
  3  from       test_selectivity
  4  order by amount_sold)
  5  group by bucket
  6  order by 1;

    BUCKET   COUNT(*)    MIN_AMT    MAX_AMT
---------- ---------- ---------- ----------
         1       3150          6          7
         2       3150          7          7
         3       3150          7          7 <-- Popular Value (3 Buckets)
         4       3150          7          8
         5       3150          8          8
         6       3150          8          8
         7       3150          8          8
         8       3150          8          8
         9       3150          8          8
        10       3150          8          8
        11       3150          8          8 <-- Popular Value (8 Buckets)
        12       3150          8          9
        13       3150          9          9
        14       3150          9          9
        15       3150          9          9
        16       3150          9          9
        17       3150          9          9
        18       3150          9          9
        19       3150          9          9
        20       3150          9          9
        21       3150          9          9 <-- Popular Value (10 Buckets)
        22       3150          9         10
        23       3150         10         10
        24       3150         10         10
        25       3150         10         10
        26       3150         10         10
        27       3150         10         10
        28       3150         10         10
        29       3150         10         10
        30       3150         10         10
        31       3150         10         10
        32       3150         10         10
        33       3150         10         11
        34       3150         11         11
        35       3150         11         11
        36       3150         11         11
        37       3150         11         11
        38       3150         11         11
        39       3150         11         11
        40       3150         11         11
        41       3150         11         11
        42       3150         11         12
        43       3150         12         12
        44       3150         12         12
        45       3150         12         12
        46       3150         12         13
        47       3150         13         13
        48       3150         13         13
        49       3150         13         13
        50       3150         13         13
        51       3150         13         13
        52       3150         13         13
        53       3150         13         13
        54       3150         13         14
        55       3150         14         14
        56       3150         14         14
        57       3150         14         14
        58       3150         14         15 <-- Non-Popular (Only 1 Bucket)
        59       3150         15         16
        60       3150         16         16
        61       3150         16         17
        62       3150         17         17
        63       3150         17         17
        64       3150         17         17
        65       3150         17         18 <-- Non-Popular (1 Bucket)
        66       3150         18         19
        67       3150         19         19
        68       3150         19         19
        69       3150         19         20
        70       3150         20         20
        71       3150         20         21
        72       3150         21         21
        73       3150         21         21
        74       3150         21         21
        75       3150         21         21
        76       3150         21         21
        77       3150         21         22
        78       3150         22         22
        79       3150         22         22
        80       3150         22         22
        81       3150         22         22
        82       3150         22         23
        83       3150         23         23
        84       3150         23         23
        85       3150         23         23
        86       3150         23         23
        87       3150         23         24
        88       3150         24         24
        89       3150         24         24
        90       3150         24         24
        91       3150         24         24
        92       3150         24         25
        93       3150         25         25
        94       3150         25         25
        95       3150         25         25
        96       3150         25         25
        97       3150         25         26
        98       3150         26         26
        99       3150         26         26
       100       3150         26         26
       101       3150         26         27
       102       3150         27         27
       103       3150         27         28
       104       3150         28         28
       105       3150         28         28
       106       3150         28         28
       107       3150         28         29
       108       3150         29         29
       109       3150         29         29
       110       3150         29         30
       111       3150         30         30
       112       3150         30         30
       113       3150         30         30
       114       3150         30         30
       115       3150         30         31
       116       3150         31         31
       117       3150         31         31
       118       3150         31         32
       119       3150         32         32
       120       3150         32         33
       121       3150         33         33
       122       3150         33         33
       123       3150         33         34
       124       3150         34         34
       125       3150         34         34
       126       3150         34         35
       127       3150         35         36
       128       3150         36         36
       129       3150         36         38
       130       3150         38         38
       131       3150         38         38
       132       3150         38         39
       133       3150         39         39
       134       3150         39         39
       135       3150         39         40
       136       3150         40         40
       137       3150         40         41
       138       3150         41         41
       139       3150         41         42
       140       3150         42         42
       141       3150         42         43
       142       3150         43         43
       143       3150         43         45
       144       3150         45         45
       145       3150         45         46
       146       3150         46         46
       147       3150         46         46
       148       3150         46         46
       149       3150         46         46
       150       3150         46         46
       151       3150         46         47
       152       3150         47         47
       153       3150         47         47
       154       3150         47         47
       155       3149         47         47
       156       3149         47         47
       157       3149         47         47
       158       3149         47         47
       159       3149         47         48
       160       3149         48         48
       161       3149         48         48
       162       3149         48         48
       163       3149         48         48
       164       3149         48         48
       165       3149         48         48
       166       3149         48         48
       167       3149         48         49
       168       3149         49         49
       169       3149         49         49
       170       3149         49         49
       171       3149         49         49
       172       3149         49         49
       173       3149         49         49
       174       3149         49         49
       175       3149         49         50
       176       3149         50         50
       177       3149         50         51
       178       3149         51         51
       179       3149         51         51
       180       3149         51         51
       181       3149         51         51
       182       3149         51         52
       183       3149         52         52
       184       3149         52         52
       185       3149         52         53
       186       3149         53         53
       187       3149         53         53
       188       3149         53         54
       189       3149         54         54
       190       3149         54         54
       191       3149         54         55
       192       3149         55         56
       193       3149         56         56
       194       3149         56         57
       195       3149         57         57
       196       3149         57         58
       197       3149         58         58
       198       3149         58         59
       199       3149         59         60
       200       3149         60         60
       201       3149         60         62
       202       3149         62         62
       203       3149         62         63
       204       3149         63         63
       205       3149         63         64
       206       3149         64         64
       207       3149         64         65
       208       3149         65         66
       209       3149         66         70
       210       3149         70         72
       211       3149         72         74
       212       3149         74         79
       213       3149         79         90
       214       3149         90         94
       215       3149         94         97
       216       3149         97        101
       217       3149        101        113
       218       3149        113        115
       219       3149        115        117
       220       3149        117        123
       221       3149        123        125
       222       3149        125        127
       223       3149        127        131
       224       3149        131        136
       225       3149        136        151
       226       3149        151        158
       227       3149        158        163
       228       3149        163        170
       229       3149        170        180
       230       3149        180        199
       231       3149        199        203
       232       3149        203        208
       233       3149        208        211
       234       3149        211        214
       235       3149        214        225
       236       3149        225        302
       237       3149        302        307
       238       3149        307        531
       239       3149        531        552
       240       3149        552        594
       241       3149        594        602
       242       3149        602        629
       243       3149        629        900
       244       3149        900        973
       245       3149        973       1016
       246       3149       1016       1054
       247       3149       1054       1093
       248       3149       1093       1192
       249       3149       1192       1237
       250       3149       1237       1301
       251       3149       1301       1463
       252       3149       1463       1546
       253       3149       1546       1639
       254       3149       1639       1783

254 rows selected.

The total number of rows in the table is 800000, which divided across 254 buckets gives roughly 3149-3150 rows per bucket. From the output above, it can be seen that each bucket holds about 3149 rows and that there are popular and non-popular values. For example, 7, 8 and 9 are popular (there are other popular values as well), while 15 and 18 are non-popular (there are other non-popular values as well). Popular values span 2 or more buckets; non-popular values span at most 1 bucket. Finally, when the histogram is stored, the buckets for popular values are compressed to save dictionary space, and the resulting output from DBA_TAB_HISTOGRAMS is as under.

SQL> select ENDPOINT_NUMBER, ENDPOINT_VALUE
  2  from dba_tab_histograms
  3  where table_name='TEST_SELECTIVITY'
  4  and   column_name='AMOUNT_SOLD'
  5  order by 1;

ENDPOINT_NUMBER ENDPOINT_VALUE
--------------- --------------
              0              6 <-- Minimum Value (Bucket 0)
              3              7 <-- Popular Value (3-0=3 Buckets)
             11              8 <-- Popular Value (11-3=8 Buckets)
             21              9 <-- Popular Value (21-11=10 Buckets)
             32             10
             41             11
             45             12
             53             13
             57             14
             58             15 <-- Non-Popular Value (58-57=1 Bucket)
             60             16
             64             17
             65             18
             68             19
             70             20
             76             21
             81             22
             86             23
             91             24
             96             25
            100             26
            102             27
            106             28
            109             29
            114             30
            117             31
            119             32
            122             33
            125             34
            126             35
            128             36
            131             38
            134             39
            136             40
            138             41
            140             42
            142             43
            144             45
            150             46
            158             47
            166             48
            174             49
            176             50
            181             51
            184             52
            187             53
            190             54
            191             55
            193             56
            195             57
            197             58
            198             59
            200             60
            202             62
            204             63
            206             64
            207             65
            208             66
            209             70
            210             72
            211             74
            212             79
            213             90
            214             94
            215             97
            216            101
            217            113
            218            115
            219            117
            220            123
            221            125
            222            127
            223            131
            224            136
            225            151
            226            158
            227            163
            228            170
            229            180
            230            199
            231            203
            232            208
            233            211
            234            214
            235            225
            236            302
            237            307
            238            531
            239            552
            240            594
            241            602
            242            629
            243            895
            244            973
            245           1016
            246           1054
            247           1093
            248           1192
            249           1237
            250           1301
            251           1463
            252           1546
            253           1639
            254           1783

104 rows selected.
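
The popular and non-popular values can also be identified directly from this compressed histogram: the difference between consecutive ENDPOINT_NUMBERs is the number of buckets a value spans. A small sketch (a span of 2 or more buckets means the value is popular); values 7, 8 and 9 would show spans of 3, 8 and 10 respectively, matching the annotations above.

SQL> select endpoint_value,
  2         endpoint_number
  3         - lag(endpoint_number,1,0) over (order by endpoint_number) bucket_span
  4  from   dba_tab_histograms
  5  where  owner       = 'SCOTT'
  6  and    table_name  = 'TEST_SELECTIVITY'
  7  and    column_name = 'AMOUNT_SOLD'
  8  order by endpoint_number;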

The 254 buckets are compressed into 104 endpoint rows in DBA_TAB_HISTOGRAMS. The CARDINALITY calculation for a popular value is straightforward: it is the number of buckets the value spans divided by the total number of buckets, multiplied by NUM_ROWS (roughly the rows-per-bucket figure of 3149 multiplied by the number of buckets spanned). Let us run the queries and see the results.

## For 2 Buckets

SQL> select * from test_selectivity where amount_sold=56;

6204 rows selected.

Elapsed: 00:00:00.16

Execution Plan
----------------------------------------------------------
Plan hash value: 4083831454

--------------------------------------------------------------------------------------
| Id  | Operation         | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                  |  6299 |   153K|   960   (2)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TEST_SELECTIVITY |  6299 |   153K|   960   (2)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("AMOUNT_SOLD"=56)

## For 10 Buckets

SQL> select * from test_selectivity where amount_sold=9;

31964 rows selected.

Elapsed: 00:00:00.64

Execution Plan
----------------------------------------------------------
Plan hash value: 4083831454

--------------------------------------------------------------------------------------
| Id  | Operation         | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                  | 31496 |   768K|   960   (2)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TEST_SELECTIVITY | 31496 |   768K|   960   (2)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("AMOUNT_SOLD"=9)
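
The estimates in the two plans above can be reproduced directly as (buckets spanned / total buckets) x NUM_ROWS:

SQL> select round( 2/254 * 800000) card_2_buckets,   -- amount_sold=56 spans 2 buckets
  2         round(10/254 * 800000) card_10_buckets   -- amount_sold=9 spans 10 buckets
  3  from   dual;

CARD_2_BUCKETS CARD_10_BUCKETS
-------------- ---------------
          6299           31496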

What about non-popular or non-existent values?
Will it be DENSITY x NUM_ROWS, i.e. 0.0018217 x 800000 = 1457? Let's run a query to check this.

SQL> select column_id, column_name, num_distinct, num_nulls,
  2                  density, histogram
  3  from       dba_tab_columns
  4  where      owner='SCOTT'
  5  and        table_name='TEST_SELECTIVITY'
  6  and             column_name='AMOUNT_SOLD'
  7  order by 1;

 COLUMN_ID COLUMN_NAME       NUM_DISTINCT  NUM_NULLS    DENSITY HISTOGRAM
---------- ----------------- ------------ ---------- ---------- --------------------
         6 AMOUNT_SOLD                636          0   .0018217 HEIGHT BALANCED

SQL> select &&densit*800000 from dual;
old   1: select &&densit*800000 from dual
new   1: select   .0018217*800000 from dual

.0018217*800000
---------------
        1457.36

The cardinality the Optimizer actually estimates for a non-popular value can be seen by executing the query below.

SQL> select * from test_selectivity where amount_sold=55;

3372 rows selected.

Elapsed: 00:00:00.11

Execution Plan
----------------------------------------------------------
Plan hash value: 4083831454

--------------------------------------------------------------------------------------
| Id  | Operation         | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                  |   285 |  7125 |   960   (2)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TEST_SELECTIVITY |   285 |  7125 |   960   (2)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("AMOUNT_SOLD"=55)

Value 55 is a non-popular value. We expected a cardinality of 1457, but the Optimizer estimates 285. Let us generate a 10053 trace for this query and examine it.
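
For reference, a common way to generate such a trace is to set the 10053 event at session level, as sketched below; the statement must be hard parsed while the event is active (for example by slightly changing its text) for the optimizer calculations to be written to the trace file.

SQL> alter session set events '10053 trace name context forever, level 1';
SQL> select * from test_selectivity where amount_sold=55;   -- must be hard parsed while the event is set
SQL> alter session set events '10053 trace name context off';

The relevant portion of the resulting trace is shown below.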

SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for TEST_SELECTIVITY[A]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
  Column (#6):
    NewDensity:0.000356, OldDensity:0.001822 BktCnt:254.000000, PopBktCnt:201.000000, PopValCnt:50, NDV:636
  Column (#6): AMOUNT_SOLD(NUMBER)
    AvgLen: 4 NDV: 636 Nulls: 0 Density: 0.000356 Min: 6.000000 Max: 1783.000000
    Histogram: HtBal  #Bkts: 254  UncompBkts: 254  EndPtVals: 104  ActualVal: yes
  Table: TEST_SELECTIVITY  Alias: A
    Card: Original: 800000.000000  Rounded: 285  Computed: 284.862003  Non Adjusted: 284.862003

We see a similar pattern here. NewDensity is used as the SELECTIVITY to compute the CARDINALITY (0.000356 x 800000 ≈ 285). How is this NewDensity calculated for Height Balanced Histograms? It is computed as:

[(NPBKTCNT)/(BKTCNT * (NDV – POPVALCNT))]

From the 10053 trace, we can read each of these values: BKTCNT (Bucket Count) is 254 and POPBKTCNT (Popular Bucket Count) is 201, which makes NPBKTCNT 254-201=53. NDV (Number of Distinct Values) is 636 and POPVALCNT (Popular Value Count) is 50. Applying these values, we get [53/(254 x (636-50))] = .000356078.

SQL> select (53/(254*(636-50))) newdensity from dual;

NEWDENSITY
----------
.000356078

SQL> select ceil(&&ndensit * 800000) from dual;
old   1: select ceil(&&ndensit * 800000) from dual
new   1: select ceil(.000356078 * 800000) from dual

CEIL(.000356078*800000)
-----------------------
                    285

NewDensity, I assume, was introduced in 11g and backported to 10.2.0.4 as well. It was introduced as a bug fix; however, in our case it is actually causing a mis-estimation. How do we disable this fix? The solution is to disable fix_control 5483301 and to set _optimizer_enable_density_improvements to FALSE. Both need to be set together. We will set these at the session level and check the results for a non-existent value in a Frequency Histogram and a non-popular value in a Height Balanced Histogram.

SQL> alter session set "_fix_control"='5483301:off';
SQL> alter session set "_optimizer_enable_density_improvements"=false;

SQL> set autot trace
SQL> select * from test_selectivity where promo_id=500;
no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 4083831454

--------------------------------------------------------------------------------------
| Id  | Operation         | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                  |     1 |    25 |   958   (2)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TEST_SELECTIVITY |     1 |    25 |   958   (2)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("PROMO_ID"=500)

SQL> select * from test_selectivity where amount_sold=55;

3372 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 4083831454

--------------------------------------------------------------------------------------
| Id  | Operation         | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                  |  1457 | 36425 |   960   (2)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TEST_SELECTIVITY |  1457 | 36425 |   960   (2)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("AMOUNT_SOLD"=55)

With these two settings, the Optimizer falls back to its original DENSITY x NUM_ROWS calculation for the cardinality.
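
A quick arithmetic check of the fallback estimates seen in the two plans above (the Optimizer never estimates fewer than 1 row, hence the GREATEST):

SQL> select round(.0018217 * 800000)               amount_sold_card,  -- amount_sold=55
  2         greatest(round(.000000625 * 800000),1) promo_id_card      -- promo_id=500
  3  from   dual;

AMOUNT_SOLD_CARD PROMO_ID_CARD
---------------- -------------
            1457             1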

This has been a long post, but I felt it was necessary because many people are still unaware of NewDensity. I was myself surprised when, while working on a real-life issue, I came across this mis-estimation; a 10053 trace revealed NewDensity, which was new to me as well. The real-life example is below.

select count(*) from nca.s_p_attributes a1
WHERE   a1.value='olwassenen';

    COUNT(*)
------------
      591168

SQL> select plan_table_output from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
----------------------------------------------------------
SQL_ID	79dfpvydpk710, child number 0
-------------------------------------
select count(*) from nca.s_p_attributes a1 WHERE
a1.value='olwassenen'
----------------------------------------------------------
| Id  | Operation	        | Name 	        | Rows	| 
----------------------------------------------------------
|   0 | SELECT STATEMENT 	|		|	|
|   1 |  SORT AGGREGATE  	|		|      1| 
|*  2 |   INDEX SKIP SCAN	| SP_P_IND3     |      8| 
----------------------------------------------------------

The estimate and the actual are way out: 8 rows versus 591168 rows. At this point, I requested a 10053 trace, which pointed me to the NewDensity value. The issue was resolved by disabling the fix_control and setting _optimizer_enable_density_improvements to FALSE.

Optimizer – Part I

I have just concluded a full-day event on Performance in Chandigarh for the North India Chapter of the “All India Oracle User Group”. As committed in my earlier user group update blog, I thought that, for the benefit of the readers, posting the technical details on the Optimizer, especially histograms, would help. Another reason this is important is that these sessions are attended by the developer community as well, and grasping everything in one session is very difficult; a write-up will help them understand this critical piece.

I usually blog on real-life challenges. The motivation behind this series, too, is an issue I was working on and the fix I applied to resolve it; this will be discussed in the relevant part of the series.

I will publish a four-part series, starting with the basics of the Optimizer and its formulas, followed by a few examples of its various calculations. The series will be divided as follows:

  1. Optimizer Basics & Calculations
  2. Cardinality – Actuals v/s Assumed
  3. Histograms – Frequency & Height Balanced
  4. 12c Enhancements – Hybrid and TopN Frequency Histograms

In the context of the Optimizer, two terms are commonly used: SELECTIVITY and CARDINALITY.

SELECTIVITY: Selectivity is measured as the fraction of rows that would be returned from (or filtered out of) a row set; the selectivity of a predicate therefore indicates how many rows pass the predicate test. Selectivity ranges from 0.0 to 1.0. A selectivity of 0.0 means no rows are selected from the row set, whereas a selectivity of 1.0 means all rows are selected. A predicate becomes more selective as its value approaches 0.0 and less selective as it approaches 1.0. Selectivity drives the access path, for example a table scan versus an index scan. With no histogram, the selectivity of an equality predicate on a column is computed as 1/NUM_DISTINCT, i.e. DENSITY.

CARDINALITY: Cardinality is the estimated number of rows returned by each operation in an execution plan. The Optimizer determines the cardinality of each operation based on a complex set of formulas that use table and column level statistics, or dynamic statistics. Cardinality estimates must be as accurate as possible because they influence all aspects of an execution plan. Cardinality is also important when the Optimizer determines the cost of a join; for example, in a Nested Loop join between the EMP and DEPT tables, the number of rows returned by EMP determines how often DEPT will be probed. Cardinality drives the access order.

Just to simplify, for a Gender column with M & F (2 Distinct Values), the selectivity will be 1/2 or 0.5. If this table has around 100 rows, then the Cardinality will be Selectivity X Num_Rows, which is 0.5 x 100 = 50.

For a 100-row table with two columns having 4 and 2 distinct values respectively, the combined selectivity for an AND of equality predicates on both columns will be 0.25 x 0.5 = 0.125, and the cardinality will be 0.125 x 100 = 12.5, rounded off to 13.
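
Both of these simple examples can be verified with plain arithmetic:

SQL> select (1/2) * 100                gender_card,   -- selectivity 0.5 on a 100-row table
  2         round((1/4) * (1/2) * 100) two_col_card   -- combined selectivity 0.125, rounded
  3  from   dual;

GENDER_CARD TWO_COL_CARD
----------- ------------
         50           13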

Selectivity Calculation

Assume Column C
NDV is the Number of Distinct Values
minv is the Minimum Value for C
maxv is the Maximum Value for C

The formulae for Selectivity Calculation would be as under :

  • SEL(=C) = 1/NDV or DENSITY
  • SEL(<C) = (C-minv)/(maxv-minv)
  • SEL(<=C) = SEL(=C) + SEL(<C)
  • SEL(>C) = (maxv-C)/(maxv-minv)
  • SEL(>=C) = SEL(=C) + SEL(>C)

In the case of a range predicate (<, <=, >, >=), the numerator is referred to as the Required Range and the denominator as the Available Range.
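
As a worked example (using hypothetical values, not the TEST_SELECTIVITY table): for a 100000-row table with a column having minv = 1, maxv = 1001 and NDV = 500, the predicate column < 251 gives a Required Range of 251-1 over an Available Range of 1001-1.

SQL> select (251-1)/(1001-1)                   sel_lt,   -- SEL(<251)
  2         (251-1)/(1001-1) + 1/500           sel_le,   -- SEL(<=251) = SEL(<251) + SEL(=251)
  3         round(((251-1)/(1001-1)) * 100000) card_lt   -- cardinality for column < 251
  4  from   dual;

    SEL_LT     SEL_LE    CARD_LT
---------- ---------- ----------
       .25       .252      25000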

Once the selectivity is derived, the cardinality is selectivity multiplied by the number of rows. When multiple predicates are involved, the selectivity of each predicate is derived using the formulas above and then combined depending on whether the predicates are joined by AND or OR. For example (a worked calculation follows the list):

  • WHERE A=:b1 and B=:b2 = SEL(=A) X SEL(=B)
  • WHERE A=:b1 and B>=:b2 = SEL(=A) X (SEL(=B) + SEL(>B))
  • WHERE A=:b1 or B=:b2 = SEL(=A) + SEL(=B) - (SEL(=A) X SEL(=B))
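
A worked calculation with hypothetical selectivities SEL(=A) = 0.25 and SEL(=B) = 0.5 on a 100000-row table:

SQL> select 0.25 * 0.5                                sel_and,    -- A=:b1 AND B=:b2
  2         0.25 + 0.5 - (0.25 * 0.5)                 sel_or,     -- A=:b1 OR  B=:b2
  3         round((0.25 * 0.5) * 100000)              card_and,
  4         round((0.25 + 0.5 - 0.25 * 0.5) * 100000) card_or
  5  from   dual;

   SEL_AND     SEL_OR   CARD_AND    CARD_OR
---------- ---------- ---------- ----------
      .125       .625      12500      62500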

This was the first part of this series. In the next part, we will create a sample table and run through each of these formulas.
