Cost Based Optimizer! Inefficient Input Yields Inefficient Output…

Ever since my old blog went inaccessible (and it still is), I have been receiving several mails about previous posts that helped some of my readers relate to and resolve their performance issues. They wanted access to those posts, and I have been asking them to let me know which particular posts were of interest so that I could republish them here. There were many requests for the posts on the Cost Based Optimizer, so I thought of publishing some of them. While these posts may not be relevant for current versions of Oracle, there are still production systems on Oracle 8i and 9i, and these posts may be of help in those cases. Wherever a Cost Based Optimization post is relevant only for certain versions of Oracle, I will specifically mention this for the benefit of the readers.

This blog is relevant for Oracle 8i and 9i (without System Statistics)

The Cost Based Optimizer has always been a mystery for most Database Administrators and Developers. The calculation the Optimizer makes to come up with an Execution Plan has been doubted by many Oracle users, especially when (to name a few):

  1. Application performance degrades after a version upgrade
  2. The plan of a query changes after gathering fresh statistics, and
  3. A query with a higher-cost plan performs much better than it does with the least-cost plan

The Cost Based Optimizer has improved a lot over the last few versions, and it is still undergoing enhancements. With these enhancements, Oracle users have accepted the fact that application performance might degrade after a version upgrade, and they leave no room for surprises by rigorously testing the application before the production upgrade. It is facts 2 and 3 that raise doubts over Cost Based Optimization.

The Cost Based Optimizer is a piece of code: you provide inputs, by way of Optimizer Parameters and Statistics, and it computes and generates an output, which is seen as an Execution Plan. But it is not only the Optimizer Parameters and Statistics that are passed as input. There is one more component, which is mostly overlooked: the QUERY. The initial input passed is the query, and based on the way it is written, the optimizer does its cost calculation, taking into account the other two: Parameters and Statistics. In this blog, I will address the last two facts stated above, plus a common misconception that an Index Scan is better than a Full Table Scan, and finally the issue with the critical first component passed as an input to the CBO: a Query.

Recently, I was working on a performance issue related to a critical process. The overall performance of the system was OK, but this critical process had started taking 18-20 hours, failing several times with ORA-01555. The process usually takes 1 hour 30 minutes. So why this sudden degradation? That was the question raised by the customer.

The database version is 8.1.7.4. This critical process is part of a seeded application. It connects as a user dedicated to this process; therefore, when the process is running, it is very easy to get the SID of the session executing it. While monitoring the performance of this process by querying v$session_wait for that SID, I could see one particular query doing an exorbitant amount of Logical Reads. The text of the query and its execution plan are as under:

explain plan for
select CS.CO_ID ,CS.SPCODE ,CS.SNCODE ,CS.TMCODE ,CS.CS_SEQNO ,CS.CS_STAT_CHNG ,
to_char(CS.TRIAL_END_DATE,'YYYYMMDDHH24MISS')
from       CR_SERVICES CS
where      (CS.CO_ID not  in (select CO_ID
                             from CO_ALL
                             where (CO_ID=CS.CO_ID and CUSTOMER_ID=DEALER_ID))
and        CS.SNCODE in (select distinct SNCODE  from MPV
                     where (SNCODE=CS.SNCODE and RATING_IND='Y')));

Explained.

SQL> @?/rdbms/admin/utlxpls

Plan Table
------------------------------------------------------------------------------------------
| Operation                 |  Name    |  Rows | Bytes|  Cost  | Pstart| Pstop |
--------------------------------------------------------------------------------
| SELECT STATEMENT          |                    |     1K|   77K| 141575 |       |       |
|  FILTER                   |                    |       |      |        |       |       |
|   NESTED LOOPS            |                    |     1K|   77K| 141575 |       |       |
|    TABLE ACCESS FULL      |MPV                 |    74 |  444 |     13 |       |       |
|    TABLE ACCESS BY INDEX R|CR_SERVICES         |     3M|  120M|   1913 |       |       |
|     INDEX RANGE SCAN      |I1_CR_SERVICES      |     3M|      |   1616 |       |       |
|   FILTER                  |                    |       |      |        |       |       |
|    INDEX RANGE SCAN       |IDX_CUST_CO_DEALER  |     1 |   14 |      2 |       |       |
--------------------------------------------------------------------------------

There are three tables in the query. The driving table in the explain plan, MPV, is a small table with 220 rows and 80 blocks, while the other two are huge tables with 69 million and 30 million rows. Usually, such a situation crops up due to one of the following reasons:

  1. Change in Optimizer Parameters
  2. Change in Statistics
  3. Creation of New Index

Initial investigation revealed that the statistics of the CR_SERVICES table and its indexes had been gathered the previous night. Once this was confirmed, Cost Based Optimization was at the receiving end. This is quite natural: without any change in the application query, the plan had changed for the worse, apparently due to the CBO. A thorough investigation into the cause of this plan change was therefore required. Looking at the query and the optimizer statistics, a Full Table Scan on CR_SERVICES would have been the better choice, so my investigation centred on this. Let us revisit the relevant portion of the execution plan, the statistics and the query.

## Table Statistics of CR_SERVICES

OWNER                          PAR   NUM_ROWS     BLOCKS 
------------------------------ --- ---------- ---------- 
SM                             NO    69875510    1107733 

## Column Stats of the Columns used on the Query

COLUMN_NAME                    NUM_DISTINCT  NUM_NULLS    DENSITY
------------------------------ ------------ ---------- ----------
CO_ID                               6863377          0 1.4570E-07
SNCODE                                  370          0 .002702703

Plan Table
------------------------------------------------------------------------------------------
| Operation                 |  Name    |  Rows | Bytes|  Cost  | Pstart| Pstop |
--------------------------------------------------------------------------------
|   NESTED LOOPS            |                    |     1K|   77K| 141575 |       |       |
|    TABLE ACCESS FULL      |MPV                 |    74 |  444 |     13 |       |       |
|    TABLE ACCESS BY INDEX R|CR_SERVICES         |     3M|  120M|   1913 |       |       |
|     INDEX RANGE SCAN      |I1_CR_SERVICES      |     3M|      |   1616 |       |       |

The query is using an index on the SNCODE column, which is a very low-cardinality column. Therefore, this was my primary target, and I worked through the optimizer's calculation to find the possible cause of this index scan.

## Optimizer Calculation

The cost calculation of an index scan, and of the table access via that index, is:

Index Cost = blevel + ceil(leaf_blocks * index_selectivity)
Table Scan = ceil(clustering_factor * table_selectivity)

select 3+ceil(596745*1/370) "Index Scan Cost", ceil(40527179*1/370) from dual;
Index Cost = 1616 ## This matches the execution plan
Table Scan = 109533
Final Cost = 1616+109533=111149

select 3+ceil(596745*1/370) "Index Scan Cost", ceil(40527179*1/370*1/370) from dual;
Index Cost = 1616
Table Scan = 297
Final Cost = 1616+297=1913 ## This Matches with the Cost calculated by the Optimizer

This additional 1/370 is due to the fact that the SNCODE column is used twice in the query: once as an index filter, and therefore considered only once for the index cost, but twice for the table cost due to the SNCODE=CS.SNCODE join. The final Nested Loop cost is 141575, calculated as:

NL Cost = Cost of an Outer Table + (Cost of Inner Table * Cardinality of Outer Table)
NL Cost = 13 + (1913*74)
NL Cost = 141575
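As a cross-check, the arithmetic above can be reproduced with a short Python sketch. This mirrors the 8i cost formulas with the statistics shown in this post; it is not the optimizer's actual code:

```python
from math import ceil

# Statistics used in the calculations above
blevel = 3
leaf_blocks = 596745
clustering_factor = 40527179

# Index Cost = blevel + ceil(leaf_blocks * index_selectivity)
index_cost = blevel + ceil(leaf_blocks / 370)        # 1616

# Table access cost with the SNCODE selectivity applied twice
table_cost = ceil(clustering_factor / 370 / 370)     # 297

inner_cost = index_cost + table_cost                 # 1913
# NL Cost = outer table cost + inner cost * outer cardinality
nl_cost = 13 + inner_cost * 74                       # 141575
print(index_cost, table_cost, inner_cost, nl_cost)
```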

Cost of Full Table Scan = Num Blocks / 6.59 (8k block size; 6.59 is the adjusted multiblock read divisor for this configuration)
Cost of Full Table Scan = 1107733 / 6.59
Cost of Full Table Scan = 168093.02
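The same full-scan arithmetic, as a Python sketch (6.59 is taken from the calculation above, not derived here):

```python
blocks = 1107733           # BLOCKS from the CR_SERVICES table statistics above
divisor = 6.59             # adjusted multiblock read divisor for an 8k block size
fts_cost = blocks / divisor
print(round(fts_cost, 2))  # 168093.02
```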

Clearly, the cost of the Full Table Scan is higher than that of the Nested Loop join, and the optimizer opted for the cheaper plan, which is a Nested Loop join, and this requires an index scan of the inner table. When I checked the execution plan of this query with a FULL hint on CR_SERVICES, the FTS cost was 168200. I am yet to figure out the difference between my calculation and the optimizer's, but for investigation purposes I considered the cost calculated by the optimizer. Based on this calculation, I came up with three possible solutions:

  1. Modify the Query (not immediately possible, as it is a seeded query)
  2. Change Optimizer Statistics
  3. Change Optimizer Parameters

The first option was the better and safer approach, but was not possible immediately. The other two required some testing, as a change in optimizer statistics can have a negative impact on other queries using these objects, and a change in optimizer parameters at the database level can have a global impact on overall performance. As mentioned earlier, this process connects as a user dedicated to it; therefore, an ON-LOGON trigger changing the optimizer parameter at session level was thought a better option, without impacting overall application performance. The parameter I could immediately think of was OPTIMIZER_INDEX_COST_ADJ (OICA). This parameter was at its default, i.e. 100. For this query, to make the index-driven plan costlier than the FTS, the calculation is OICA = (FTS Cost/NL Cost)*100 = (168200/141575)*100 = 118.80. Any value above this would make the NL cost higher than the Full Table Scan cost, and the optimizer would opt for an FTS and a Hash Join. Initially, the change was done at session level, and the plan changed.
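The break-even arithmetic for OPTIMIZER_INDEX_COST_ADJ can be sketched as follows, using the optimizer's own costs of 168200 and 141575 from above:

```python
fts_cost = 168200   # optimizer's Full Table Scan cost (FULL hint)
nl_cost = 141575    # optimizer's Nested Loop cost

# OICA value at which the index-driven NL plan starts to cost more than the FTS
threshold = fts_cost / nl_cost * 100        # just under 118.81

# With OICA = 120, the inner (indexed) component of the NL plan is scaled up
scaled_nl = 1913 * 74 * 120 / 100           # 169874.4, now above the FTS cost
print(threshold, scaled_nl > fts_cost)
```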

Plan Table
------------------------------------------------------------------------------------------
| Operation                 |  Name    |  Rows | Bytes|  Cost  | Pstart| Pstop |
--------------------------------------------------------------------------------
| SELECT STATEMENT          |                    |     1K|   77K| 168219 |       |       |
|  FILTER                   |                    |       |      |        |       |       |
|   HASH JOIN               |                    |     1K|   77K| 168219 |       |       |
|    TABLE ACCESS FULL      |MPV                 |    74 |  444 |     13 |       |       |
|    TABLE ACCESS FULL      |CR_SERVICES         |     3M|  120M| 168200 |       |       |
|   FILTER                  |                    |       |      |        |       |       |
|    INDEX RANGE SCAN       |IDX_CUST_CO_DEALER  |     1 |   14 |      2 |       |       |
--------------------------------------------------------------------------------

Finally, an ON-LOGON trigger was created to set this parameter to 120 whenever this user logged on to the database. The process, when re-executed, completed in 1 hour 15 minutes, and the desired performance was achieved.

## Optimizer Calculation with OICA = 120

NL Table Scan Cost = (Current Cost of Inner Table Scan * Outer Table Card) * OICA/100
NL Table Scan Cost = (1913*74)*120/100
NL Table Scan Cost = 169874.4 > 168200 (FTS)

With this setting, the plan changed. Now for some reverse engineering: I wanted to check which index statistic, say the Clustering Factor, would have made the NL join costlier than the Full Table Scan. The cost of the NL join with OICA 120 is 169874.4. The cost of the table access via the index should therefore be round(169874.4/74), which is 2296. The index scan cost is 1616, so the table access component should be 2296-1616 = 680. Based on this, I calculated that if the Clustering Factor were changed to 93092000, then the query would go for a Full Table Scan without any parameter setting.
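The same reverse-engineering arithmetic, as a Python sketch (all numbers taken from the calculations above):

```python
from math import ceil

nl_budget = 1913 * 74 * 120 / 100      # 169874.4: NL cost with OICA at 120
per_probe = round(nl_budget / 74)      # 2296: target cost of one inner-table probe
index_cost = 1616
table_component = per_probe - index_cost    # 680

# Working the table-access formula backwards: a clustering factor of
# 680 * 370 * 370 gives ceil(cf * 1/370 * 1/370) = 680 exactly
new_cf = 680 * 370 * 370               # 93,092,000
print(new_cf, ceil(new_cf / 370 / 370))
```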

SQL> exec dbms_stats.SET_INDEX_STATS('SM','I1_CR_SERVICES',CLSTFCT=>93092000);

PL/SQL procedure successfully completed.

SQL> @index_stats
Enter value for o: SM
old   3: where table_owner='&O'
new   3: where table_owner='SM'
Enter value for t: CR_SERVICES
old   4: and   table_name='&T'
new   4: and   table_name='CR_SERVICES'

INDEX_NAME                       NUM_ROWS     BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR
------------------------------ ---------- ---------- ----------- ------------- -----------------
I1_CR_SERVICES                   70724121          3      596745      11207514          93092000 

12 rows selected.

explain plan for
select CS.CO_ID ,CS.SPCODE ,CS.SNCODE ,CS.TMCODE ,CS.CS_SEQNO ,CS.CS_STAT_CHNG ,
to_char(CS.TRIAL_END_DATE,'YYYYMMDDHH24MISS')
from       CR_SERVICES CS
where      (CS.CO_ID not  in (select CO_ID
                             from CO_ALL
                             where (CO_ID=CS.CO_ID and CUSTOMER_ID=DEALER_ID))
and        CS.SNCODE in (select distinct SNCODE  from MPV
                     where (SNCODE=CS.SNCODE and RATING_IND='Y')));

Explained.

SQL> @?/rdbms/admin/utlxpls

Plan Table
------------------------------------------------------------------------------------------
| Operation                 |  Name    |  Rows | Bytes|  Cost  | Pstart| Pstop |
--------------------------------------------------------------------------------
| SELECT STATEMENT          |                    |     1K|   77K| 168219 |       |       |
|  FILTER                   |                    |       |      |        |       |       |
|   HASH JOIN               |                    |     1K|   77K| 168219 |       |       |
|    TABLE ACCESS FULL      |MPV                 |    74 |  444 |     13 |       |       |
|    TABLE ACCESS FULL      |CR_SERVICES         |     3M|  120M| 168200 |       |       |
|   FILTER                  |                    |       |      |        |       |       |
|    INDEX RANGE SCAN       |IDX_CUST_CO_DEALER  |     1 |   14 |      2 |       |       |
--------------------------------------------------------------------------------

11 rows selected.

Back to the problem: does it look like an optimizer issue? This table has a very high growth rate. While the number of blocks in the table grows fast, the leaf blocks, blevel and clustering factor statistics do not change as significantly. This is enough to blow up the cost of a Full Table Scan. And if you look at the relevant portion of the query, pasted above, the join on SNCODE is really not required, as the subquery is already correlated via an IN clause. This unwanted join forced the optimizer to consider the density of SNCODE twice in the table access cost calculation.

## Relevant portion of the Query and Plan

and        CS.SNCODE in (select distinct SNCODE  from MPV
                     where (SNCODE=CS.SNCODE and RATING_IND='Y')));

## Optimizer Calculation (1/370 is considered twice, giving a table access cost of 297 and thus making the index plan cheaper)
Index Cost = Blevel+ceil(leaf_blocks*ix_selectivity) 
Table Scan = ceil(clustering_factor*table_selectivity) 

select 3+ceil(596745*1/370) "Index Scan Cost", ceil(40527179*1/370*1/370) from dual;
Index Cost = 1616
Table Scan = 297
Final Cost = 1616+297=1913 

If this portion of the query is modified as :


and        CS.SNCODE in (select distinct SNCODE  from MPV
                     where (RATING_IND='Y')));

## Optimizer Calculation, with this change
Index Cost = Blevel+ceil(leaf_blocks*ix_selectivity) 
Table Scan = ceil(clustering_factor*table_selectivity) 

select 3+ceil(596745*1/370) "Index Scan Cost", ceil(40527179*1/370) from dual;
Index Cost = 1616
Table Scan = 109533
Final Cost = 1616+109533= 111149

## With this cost, the NL Join cost would have been

NL Cost = Cost of an Outer Table + (Cost of Inner Table * Cardinality of Outer Table)
NL Cost = 13 + (111149*74)
NL Cost = 8225039 > 168200 (FTS)
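Putting the corrected (single) selectivity through the same arithmetic confirms the flip; a Python sketch:

```python
from math import ceil

index_cost = 3 + ceil(596745 / 370)    # 1616, unchanged
table_cost = ceil(40527179 / 370)      # 109533: SNCODE selectivity applied once now
inner_cost = index_cost + table_cost   # 111149
nl_cost = 13 + inner_cost * 74         # 8225039
fts_cost = 168200
print(nl_cost, nl_cost > fts_cost)     # NL is far costlier, so FTS + Hash Join wins
```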

One unwanted join condition made the Cost Based Optimizer come up with an inefficient plan. Therefore, as mentioned earlier, the initial and crucial input to the optimizer is the query, and if queries are not optimally written, the Cost Based Optimizer is bound to compute a sub-optimal plan. The query, modified to remove the unwanted join, with no parameter or statistics change, gave the desired and better plan (see below).

explain plan for
select CS.CO_ID ,CS.SPCODE ,CS.SNCODE ,CS.TMCODE ,CS.CS_SEQNO ,CS.CS_STAT_CHNG ,
to_char(CS.TRIAL_END_DATE,'YYYYMMDDHH24MISS')
from       CR_SERVICES CS  
where      (CS.CO_ID not  in (select CO_ID
                               from CO_ALL
                               where (CO_ID=CS.CO_ID and CUSTOMER_ID=DEALER_ID))
and        CS.SNCODE in (select distinct SNCODE  from MPV
                       where (RATING_IND='Y')));

Explained.

SQL> @?/rdbms/admin/utlxpls

Plan Table
------------------------------------------------------------------------------------------
| Operation                 |  Name    |  Rows | Bytes|  Cost  | Pstart| Pstop |
--------------------------------------------------------------------------------
| SELECT STATEMENT          |                    |     1K|   77K| 168219 |       |       |
|  FILTER                   |                    |       |      |        |       |       |
|   HASH JOIN               |                    |     1K|   77K| 168219 |       |       |
|    TABLE ACCESS FULL      |MPV                 |    74 |  444 |     13 |       |       |
|    TABLE ACCESS FULL      |CR_SERVICES         |     3M|  120M| 168200 |       |       |
|   FILTER                  |                    |       |      |        |       |       |
|    INDEX RANGE SCAN       |IDX_CUST_CO_DEALER  |     1 |   14 |      2 |       |       |
--------------------------------------------------------------------------------

Many developers are of the opinion that an execution plan with a cheaper cost is better than a plan with a higher cost, and that index scans are better than full table scans. This blog demonstrates that these are just misconceptions, and that it is application design and knowledge that should be taken into account while writing efficient code.

To sum up, it is not just the optimizer statistics and parameters that control the Cost Based Optimizer; queries play an important role too. With well-written code, there should be no reason to doubt Cost Based Optimization.


About Vivek Sharma
I am an Oracle Professional from Mumbai (India). I blog on issues that I feel are interesting for my readers. Some of these are my real-life examples, which I hope you will find interesting. Comments are always welcome. The technical observations and views here are my own and not necessarily those of Oracle or its affiliates. These are purely based on my understanding, learnings and resolutions of various customer issues.
