Cost Based Optimizer! Inefficient Input yields Inefficient Output…

Ever since my old blog went inaccessible (and it still is), I have been receiving several mails about my previous posts, which helped some of my readers relate to and resolve their performance issues. They wanted access to those posts, and I have been asking them to let me know which particular post was of interest, so that I may publish it right here. There were many requests for the posts on the Cost Based Optimizer, and I have therefore decided to publish some of them. While these posts may not be relevant for the current versions of Oracle, there are still production systems running Oracle 8i and 9i, and these posts may be of help for those instances. Wherever a Cost Based Optimization post is relevant only for certain versions of Oracle, I will specifically mention this for the benefit of the readers.

This blog is relevant for Oracle 8i and 9i (without System Statistics)

The Cost Based Optimizer has always been a mystery for most Database Administrators and Developers. The calculation made by the Optimizer to come up with an Execution Plan has been doubted by many Oracle users, especially when (to name a few):

  1. The Application Performance degrades after a Version Upgrade
  2. The Plan of a Query changes after gathering fresh Statistics, and
  3. The Performance of a Query with a higher cost is much better than that of the plan with the least cost

The Cost Based Optimizer has improved a lot over the last few versions, and it is still undergoing enhancements. With these enhancements, Oracle users have accepted the fact that Application Performance might degrade after a version upgrade, and they leave no room for it by rigorously testing the application before the production upgrade. It is facts no. 2 and 3 that raise a doubt over Cost Based Optimization.

The Cost Based Optimizer is a piece of code: you provide inputs, by way of Optimizer Parameters and Statistics, and it computes and generates an output, which is seen as an Execution Plan. However, it is not only the Optimizer Parameters and Statistics that are passed as input. There is one more component, which is mostly overlooked: a QUERY. The initial input passed is the Query, and based on the way it is written, the optimizer does its cost calculation, taking into account the other two: Parameters and Statistics. In this blog, I will address the last two facts stated above, plus a common misconception that an Index Scan is better than a Full Table Scan, and finally, the issue with the critical first component passed as an input to the CBO: a Query.

Recently, I was working on a performance issue related to a critical process. The overall performance of the system was OK, but one critical process had started taking 18-20 hours, failing several times with ORA-01555. This process usually takes 1 hour 30 minutes. Therefore, the question raised by the customer was: why this sudden degradation?

The database version is 8.1.7.4. This is a critical process and part of a seeded application. It connects to a user dedicated to this process; therefore, when the process is running, it is very easy to get the SID of the session executing it. While monitoring the performance of this process, by querying v$session_wait for that SID, I could see one particular query doing an exorbitant amount of Logical Reads.
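The monitoring was along these lines (a sketch; the SID bind is illustrative):

select sid, event, p1, p2, p3, seconds_in_wait
from   v$session_wait
where  sid = :sid;

The text of the query and its execution plan are as under: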

explain plan for
select CS.CO_ID ,CS.SPCODE ,CS.SNCODE ,CS.TMCODE ,CS.CS_SEQNO ,CS.CS_STAT_CHNG ,
to_char(CS.TRIAL_END_DATE,'YYYYMMDDHH24MISS')
from       CR_SERVICES CS
where      (CS.CO_ID not  in (select CO_ID
                             from CO_ALL
                             where (CO_ID=CS.CO_ID and CUSTOMER_ID=DEALER_ID))
and        CS.SNCODE in (select distinct SNCODE  from MPV
                     where (SNCODE=CS.SNCODE and RATING_IND='Y')));

Explained.

SQL> @?/rdbms/admin/utlxpls

Plan Table
------------------------------------------------------------------------------------------
| Operation                 |  Name              |  Rows | Bytes|  Cost  | Pstart| Pstop |
------------------------------------------------------------------------------------------
| SELECT STATEMENT          |                    |     1K|   77K| 141575 |       |       |
|  FILTER                   |                    |       |      |        |       |       |
|   NESTED LOOPS            |                    |     1K|   77K| 141575 |       |       |
|    TABLE ACCESS FULL      |MPV                 |    74 |  444 |     13 |       |       |
|    TABLE ACCESS BY INDEX R|CR_SERVICES         |     3M|  120M|   1913 |       |       |
|     INDEX RANGE SCAN      |I1_CR_SERVICES      |     3M|      |   1616 |       |       |
|   FILTER                  |                    |       |      |        |       |       |
|    INDEX RANGE SCAN       |IDX_CUST_CO_DEALER  |     1 |   14 |      2 |       |       |
------------------------------------------------------------------------------------------

There are three tables in the query. The Driving Table from the explain plan, MPV, is a small table with 220 rows and 80 Blocks, while the other two are huge tables with 69 Million and 30 Million rows. Usually, such a situation can crop up due to one of the following reasons:

  1. Change in Optimizer Parameters
  2. Change in Statistics
  3. Creation of New Index

Initial investigation revealed that the statistics of the CR_SERVICES table and its indexes had been gathered the previous night. Once this was confirmed, Cost Based Optimization was at the receiving end. This is quite natural: without any change in the application query, the plan changed for the worse, due to the CBO. Therefore, a thorough investigation was required into the cause of this plan change. Looking at the query and the optimizer statistics, a Full Table Scan on CR_SERVICES would have been a better choice, and therefore my investigation was around this. Let us revisit the relevant portion of the Execution Plan, the statistics and the Query.

## Table Statistics of CR_SERVICES
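
These were pulled from the dictionary with a query along these lines (a sketch; the actual script may differ):

select owner, partitioned par, num_rows, blocks
from   dba_tables
where  owner='SM'
and    table_name='CR_SERVICES';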

OWNER                          PAR   NUM_ROWS     BLOCKS 
------------------------------ --- ---------- ---------- 
SM                             NO    69875510    1107733 

## Column Stats of the Columns used on the Query

COLUMN_NAME                    NUM_DISTINCT  NUM_NULLS    DENSITY
------------------------------ ------------ ---------- ----------
CO_ID                               6863377          0 1.4570E-07
SNCODE                                  370          0 .002702703

## Relevant portion of the Execution Plan

|   NESTED LOOPS            |                    |     1K|   77K| 141575 |       |       |
|    TABLE ACCESS FULL      |MPV                 |    74 |  444 |     13 |       |       |
|    TABLE ACCESS BY INDEX R|CR_SERVICES         |     3M|  120M|   1913 |       |       |
|     INDEX RANGE SCAN      |I1_CR_SERVICES      |     3M|      |   1616 |       |       |

The query is using an Index on the SNCODE column, which is a very low cardinality column. Therefore, this was my primary target, and I worked through the Optimizer Calculation to get the possible cause of this Index Scan.

## Optimizer Calculation

The Cost Calculation of an Index Scan and a Table Scan via Index is:

Index Cost = Blevel+ceil(leaf_blocks*ix_selectivity) 
Table Scan = ceil(clustering_factor*table_selectivity) 

select 3+ceil(596745*1/370) "Index Scan Cost", ceil(40527179*1/370) from dual;
Index Cost = 1616 ## This matches the execution plan
Table Scan = 109533
Final Cost = 1616+109533=111149

select 3+ceil(596745*1/370) "Index Scan Cost", ceil(40527179*1/370*1/370) from dual;
Index Cost = 1616
Table Scan = 297
Final Cost = 1616+297=1913 ## This Matches with the Cost calculated by the Optimizer

This additional 1/370 was due to the fact that the SNCODE column is used twice in the query: once as an index filter, and therefore considered only once in the index cost, but twice in the table cost due to the SNCODE=CS.SNCODE join. The final Nested Loop cost is 141575, and this is calculated as:

NL Cost = Cost of an Outer Table + (Cost of Inner Table * Cardinality of Outer Table)
NL Cost = 13 + (1913*74)
NL Cost = 141575

Cost of Full Table Scan = Num Blocks / 6.59 (8k block size; 6.59 appears to be the adjusted multiblock read divisor corresponding to db_file_multiblock_read_count=8 in the 8i costing model)
Cost of Full Table Scan = 1107733 / 6.59
Cost of Full Table Scan = 168093.02

Clearly, the cost of a Full Table Scan is higher than that of the Nested Loop Join, and the optimizer opted for the cheaper plan, which is a Nested Loop Join, and this requires an Index Scan of the Inner Table. When I checked the execution plan of this query with a FULL hint on CR_SERVICES, the FTS cost was 168200. I am yet to figure out the difference between my calculation and the optimizer's calculation, but for investigation purposes, I considered the calculation done by the optimizer. Based on this calculation, I came up with three possible solutions. These are:

  1. Modify the Query (Immediately Not possible, as it is a seeded query)
  2. Change Optimizer Statistics
  3. Change Optimizer Parameters

The first option was the better and safer approach, but was not possible immediately. The other two required some testing, as a change in Optimizer Statistics can have a negative impact on other queries using these objects, and a change in optimizer parameters at the database level can have a global impact on overall performance. As mentioned earlier, this process connects to a user dedicated to this process; therefore, an ON-LOGON trigger with the Optimizer Parameter change at session level was thought of as a better option, without impacting overall application performance. The parameter I could immediately think of was OPTIMIZER_INDEX_COST_ADJ. This parameter was at its default, i.e. 100. For this query, to force the Index Cost to be costlier than the FTS, the calculation is OICA = (FTS Cost/NL Cost)*100 = (168200/141575)*100 = 118.80. Any value above this would make the NL cost costlier than the Full Table Scan cost, and the optimizer would opt for an FTS and a Hash Join. Initially, the change was done at session level and the plan changed.
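The session-level test was simply (using 120, the value eventually chosen for the trigger):

alter session set optimizer_index_cost_adj = 120;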

Plan Table
------------------------------------------------------------------------------------------
| Operation                 |  Name              |  Rows | Bytes|  Cost  | Pstart| Pstop |
------------------------------------------------------------------------------------------
| SELECT STATEMENT          |                    |     1K|   77K| 168219 |       |       |
|  FILTER                   |                    |       |      |        |       |       |
|   HASH JOIN               |                    |     1K|   77K| 168219 |       |       |
|    TABLE ACCESS FULL      |MPV                 |    74 |  444 |     13 |       |       |
|    TABLE ACCESS FULL      |CR_SERVICES         |     3M|  120M| 168200 |       |       |
|   FILTER                  |                    |       |      |        |       |       |
|    INDEX RANGE SCAN       |IDX_CUST_CO_DEALER  |     1 |   14 |      2 |       |       |
------------------------------------------------------------------------------------------

Finally, an ON-LOGON trigger was created to set this parameter to 120 whenever this user logged on to the database. The process, when re-executed, completed in 1 hour 15 minutes, and the desired performance was achieved.
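For reference, a minimal sketch of such a logon trigger is below (the trigger name and BATCH_USER are illustrative; BATCH_USER stands in for the dedicated process user):

create or replace trigger trg_oica_logon
after logon on database
begin
  if user = 'BATCH_USER' then  -- dedicated process user (name assumed)
    execute immediate 'alter session set optimizer_index_cost_adj = 120';
  end if;
end;
/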

## Optimizer Calculation with OICA as 120

NL Table Scan Cost = (Current Cost of Inner Table Scan * Outer Table Card)*OICA/100
NL Table Scan Cost = (1913*74)*120/100
NL Table Scan Cost = 169874.4 > 168200 (FTS)

With this setting, the plan changed. Now for the reverse engineering: I wanted to work out the value of an index statistic, say the Clustering Factor, at which the NL Join would have been costed higher than the Full Table Scan. The cost of the NL Join with OICA is 169874.4. The cost of the Table Scan via the Index should then be round(169874.4/74), which is 2296. The Index Scan cost is 1616, so the Table Scan cost component should be 2296-1616, i.e. 680. Based on this, I calculated that if the Clustering Factor is changed to 93092000, then without any parameter setting, the query would go for a Full Table Scan.
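The arithmetic can be verified as below: 680*370*370 = 93092000, so this clustering factor pushes the table scan component to 680 and the total index access cost to 2296.

select ceil(93092000*1/370*1/370) "Table Scan Component",
       3+ceil(596745*1/370)+ceil(93092000*1/370*1/370) "Index Access Cost"
from dual;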

SQL> exec dbms_stats.SET_INDEX_STATS('SM','I1_CR_SERVICES',CLSTFCT=>93092000);

PL/SQL procedure successfully completed.
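
The index_stats script used below is just a small dictionary query along these lines (a sketch; the actual script may differ):

select index_name, num_rows, blevel, leaf_blocks, distinct_keys, clustering_factor
from   dba_indexes
where  table_owner='&O'
and    table_name='&T';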

SQL> @index_stats
Enter value for o: SM
old   3: where table_owner='&O'
new   3: where table_owner='SM'
Enter value for t: CR_SERVICES
old   4: and   table_name='&T'
new   4: and   table_name='CR_SERVICES'

INDEX_NAME                       NUM_ROWS     BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR
------------------------------ ---------- ---------- ----------- ------------- -----------------
I1_CR_SERVICES                   70724121          3      596745      11207514          93092000 

12 rows selected.

explain plan for
select CS.CO_ID ,CS.SPCODE ,CS.SNCODE ,CS.TMCODE ,CS.CS_SEQNO ,CS.CS_STAT_CHNG ,
to_char(CS.TRIAL_END_DATE,'YYYYMMDDHH24MISS')
from       CR_SERVICES CS
where      (CS.CO_ID not  in (select CO_ID
                             from CO_ALL
                             where (CO_ID=CS.CO_ID and CUSTOMER_ID=DEALER_ID))
and        CS.SNCODE in (select distinct SNCODE  from MPV
                     where (SNCODE=CS.SNCODE and RATING_IND='Y')));

Explained.

SQL> @?/rdbms/admin/utlxpls

Plan Table
------------------------------------------------------------------------------------------
| Operation                 |  Name              |  Rows | Bytes|  Cost  | Pstart| Pstop |
------------------------------------------------------------------------------------------
| SELECT STATEMENT          |                    |     1K|   77K| 168219 |       |       |
|  FILTER                   |                    |       |      |        |       |       |
|   HASH JOIN               |                    |     1K|   77K| 168219 |       |       |
|    TABLE ACCESS FULL      |MPV                 |    74 |  444 |     13 |       |       |
|    TABLE ACCESS FULL      |CR_SERVICES         |     3M|  120M| 168200 |       |       |
|   FILTER                  |                    |       |      |        |       |       |
|    INDEX RANGE SCAN       |IDX_CUST_CO_DEALER  |     1 |   14 |      2 |       |       |
------------------------------------------------------------------------------------------

11 rows selected.

Back to the problem: does it look like an Optimizer issue? This table has a very high growth rate. While the num_blocks value of the table grows fast, the Leaf Blocks, BLevel and Clustering Factor statistics do not change significantly. This is enough to blow up the cost of a Full Table Scan. If you look at the query and its relevant portion, pasted above, the join on SNCODE is really not required: the correlated predicate SNCODE=CS.SNCODE inside the subquery is redundant, because the IN clause already performs that join. This unwanted join forced the optimizer to consider the Density of SNCODE twice in the Table Scan cost calculation.

## Relevant portion of the Query and Plan

and        CS.SNCODE in (select distinct SNCODE  from MPV
                     where (SNCODE=CS.SNCODE and RATING_IND='Y')));

## Optimizer Calculation (1/370 is considered twice to get Table Scan cost of 297, thus making it cheaper).
Index Cost = Blevel+ceil(leaf_blocks*ix_selectivity) 
Table Scan = ceil(clustering_factor*table_selectivity) 

select 3+ceil(596745*1/370) "Index Scan Cost", ceil(40527179*1/370*1/370) from dual;
Index Cost = 1616
Table Scan = 297
Final Cost = 1616+297=1913 

If this portion of the query is modified as:

and        CS.SNCODE in (select distinct SNCODE  from MPV
                     where (RATING_IND='Y')));

## Optimizer Calculation, with this change
Index Cost = Blevel+ceil(leaf_blocks*ix_selectivity) 
Table Scan = ceil(clustering_factor*table_selectivity) 

select 3+ceil(596745*1/370) "Index Scan Cost", ceil(40527179*1/370) from dual;
Index Cost = 1616
Table Scan = 109533
Final Cost = 1616+109533= 111149

## With this Cost, the cost of NL Join would have been

NL Cost = Cost of an Outer Table + (Cost of Inner Table * Cardinality of Outer Table)
NL Cost = 13 + (111149*74)
NL Cost = 8225039 > 168200 (FTS)

One unwanted Join Condition made the Cost Based Optimizer come up with an inefficient plan. Therefore, as mentioned earlier, the initial and crucial input to the optimizer is a Query, and if queries are not optimally written, the Cost Based Optimizer is bound to compute a sub-optimal plan. The query, modified by removing the unwanted join, with no parameter or statistics change, gave the desired and better plan (see below).

explain plan for
select CS.CO_ID ,CS.SPCODE ,CS.SNCODE ,CS.TMCODE ,CS.CS_SEQNO ,CS.CS_STAT_CHNG ,
to_char(CS.TRIAL_END_DATE,'YYYYMMDDHH24MISS')
from       CR_SERVICES CS  
where      (CS.CO_ID not  in (select CO_ID
                               from CO_ALL
                               where (CO_ID=CS.CO_ID and CUSTOMER_ID=DEALER_ID))
and        CS.SNCODE in (select distinct SNCODE  from MPV
                       where (RATING_IND='Y')));

Explained.

SQL> @?/rdbms/admin/utlxpls

Plan Table
------------------------------------------------------------------------------------------
| Operation                 |  Name              |  Rows | Bytes|  Cost  | Pstart| Pstop |
------------------------------------------------------------------------------------------
| SELECT STATEMENT          |                    |     1K|   77K| 168219 |       |       |
|  FILTER                   |                    |       |      |        |       |       |
|   HASH JOIN               |                    |     1K|   77K| 168219 |       |       |
|    TABLE ACCESS FULL      |MPV                 |    74 |  444 |     13 |       |       |
|    TABLE ACCESS FULL      |CR_SERVICES         |     3M|  120M| 168200 |       |       |
|   FILTER                  |                    |       |      |        |       |       |
|    INDEX RANGE SCAN       |IDX_CUST_CO_DEALER  |     1 |   14 |      2 |       |       |
------------------------------------------------------------------------------------------

Many developers are of the opinion that an Execution Plan with a cheaper cost is better than a plan with a higher cost, and that Index Scans are better than Full Table Scans. This blog demonstrates that these are just misconceptions, and that it is application design and knowledge that should be taken into account while writing efficient code.

To sum up, it is not just the Optimizer Statistics and Parameters that control the Cost Based Optimizer; Queries play an important role too. Well-written code should never have to doubt Cost Based Optimization.


PL/SQL Parallelism made easier in 11g

Today was the first day of the two-day AIOUG event, Sangam 2010. Jonathan Lewis was the key speaker this year, and I presented a session on “Developing a Scalable Application”. One of my slides mentioned PL/SQL Parallelism. This is in continuation of my previous blog, where I posted about Manual Parallelism, which takes the ROWIDs of a table and distributes them across multiple chunks. The limitation with Manual Parallelism is that even though a pl/sql block needs to process only 5% to 10% of the rows from a huge table, Manual Parallelism will select and split the ROWIDs of the entire table. This means many of the chunks may process very little data, or almost none at all.

Oracle Database 11g Release 2 introduced a package, dbms_parallel_execute, that makes it easier to split a table into multiple chunks, based on ROWIDs, on a SQL statement, or on a number column. During my AIOUG session, I demonstrated the use of this package to split a table into multiple chunks and schedule a batch process in parallel. The best part of this package is that the chunks can be split based on a condition, and only the chunks that contain to-be-processed data will be created. Thus the rows to be processed will be uniformly distributed across the chunks.

As mentioned in my earlier blog, I will reiterate the benefit of pl/sql parallelism, which is optimal use of available resources. In this blog, I will post the test case that I executed during the AIOUG session, and this should help you reproduce it in your environment.

In this example, consider a nightly batch job, NIGHTLY_BATCH, which processes the rows from the VIVEK_TEST table using an approach widely implemented by developers and DBAs: a single job processing all the required rows, which is very slow and inefficient. There are 7200 rows in this table to be processed, and assuming the business logic in the pl/sql takes at least 1 second per row before moving on to the next one, the entire process will take at least 7200 seconds (2 hours). This is the code of the batch process:

create table vivek_test pctfree 50 pctused 50 as
select a.*, cast(null as varchar2(1)) processed_flag
from all_objects a
where rownum<=7200;

create or replace procedure nightly_batch is
l_count		number:=0;
l_limit		number:=0;
cursor c_main is
select rowid, owner, object_name, object_type from vivek_test
where nvl(processed_flag,'N')='N';
begin
  for i in c_main
  loop
  l_count:=l_count+1;
  l_limit:=l_limit+1;
  dbms_application_info.set_module('PL_SQL',l_count||' out of 7200');
  dbms_lock.sleep(1); /* Assuming Processing of a row takes 1 second */
  update vivek_test set processed_flag='Y' where rowid = i.rowid;
  if l_limit>=1000 then
    commit;
    l_limit:=0;
  end if;
  end loop;
  commit;
end;
/
exec nightly_batch;

Once the job is executed, you can monitor the progress of this batch process from another session. V$SESSION will report the progress in the MODULE and ACTION columns, as posted by the dbms_application_info.set_module procedure.
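For example (a sketch):

select sid, module, action
from   v$session
where  module = 'PL_SQL';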

The changes required to parallelize the process are pasted below. To monitor the progress and maintain a log of each of the CHUNKS, I have created a process_status table. One thing worth noticing is the PCTFREE and PCTUSED setting, which is set to 50. The reason I have this setting is that the number of chunks to be created is dependent on the number of extents in the table, and I wanted multiple extents for my table. In a production environment, the number of extents should not be an issue.

create table process_status (
process_name		varchar2(30),
job_id			number,
s_rowid			rowid,
e_rowid			rowid,
start_time		date,
end_time		date,
processed_rows		number,
completed_yn		varchar2(1),
status			varchar2(1000));

create or replace procedure parallel_job(chks in number) is
l_statement	varchar2(1000);
l_jobno		number;
l_retry		number;
l_status	   number;
l_chunkid	   number;
l_srowid	   rowid;
l_erowid	   rowid;
l_anyrows   	boolean;
l_errcode	   pls_integer;
begin

  execute immediate 'truncate table process_status';
  /* CREATE A PARALLEL TASK */
  dbms_parallel_execute.create_task('NIGHT_JOB');

  dbms_parallel_execute.create_chunks_by_rowid('NIGHT_JOB','VIVEK','VIVEK_TEST',true, chks);
 /* INSERT THE CHUNKS INTO PROCESS_STATUS TO LOG THE STATUS OF EACH JOB */
  insert into process_status(process_name, job_id, s_rowid, e_rowid)
  select 'NIGHT_JOB', chunk_id, start_rowid, end_rowid from user_parallel_execute_chunks
  where task_name='NIGHT_JOB' order by 1;
  commit;
  /* Run the Procedure here */
  LOOP
      -- Get a chunk to process; if there is nothing to process, then exit the 
      -- loop;
      DBMS_PARALLEL_EXECUTE.GET_ROWID_CHUNK('NIGHT_JOB',l_chunkid, l_srowid, l_erowid,l_anyrows);
      IF (l_anyrows = false) THEN EXIT; END IF;
      BEGIN
	l_statement:='nightly_batch('||l_chunkid||','||''''||l_srowid||''''||','||''''||l_erowid||''''||');';
	dbms_job.submit(l_jobno,l_statement);
	update process_status set completed_yn='P', start_time=sysdate where job_id=l_chunkid;
	commit;
        DBMS_PARALLEL_EXECUTE.SET_CHUNK_STATUS('NIGHT_JOB',l_chunkid,DBMS_PARALLEL_EXECUTE.PROCESSED);
      EXCEPTION WHEN OTHERS THEN
        DBMS_PARALLEL_EXECUTE.SET_CHUNK_STATUS('NIGHT_JOB', l_chunkid,
          DBMS_PARALLEL_EXECUTE.PROCESSED_WITH_ERROR, SQLCODE, SQLERRM);
	  l_errcode:=sqlcode;
  	update process_status set completed_yn='F', end_time=sysdate, status='FAILED WITH ERROR '||l_errcode||'.' where job_id=l_chunkid;
      END;
  commit;
  END LOOP;
  dbms_parallel_execute.drop_task('NIGHT_JOB');
end;
/
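Note that since the chunks are executed through dbms_job, the effective degree of parallelism is capped by the job_queue_processes parameter; something along these lines may be needed before the run (the value 40 is illustrative):

alter system set job_queue_processes=40;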

The only change required in our NIGHTLY_BATCH is the introduction of input parameters and an update of the process_status table at the completion or failure of a chunk.

create or replace procedure nightly_batch(jobid number, low_rowid rowid, high_rowid rowid) is
l_count		number:=0;
l_module	    varchar2(30);
l_limit		number:=0;
l_errcode	    pls_integer;
cursor c_main is
select rowid, owner, object_name, object_type from vivek_test
where nvl(processed_flag,'N')='N'
and	rowid between low_rowid and high_rowid;
begin
  for i in c_main
  loop
  l_count:=l_count+1;
  l_limit:=l_limit+1;
  l_module:=l_count||' out of 7200/40=180';
  dbms_application_info.set_module('PL_SQL',l_module);
  dbms_lock.sleep(1); /* Assuming Processing of a row takes 1 second */
  update vivek_test set processed_flag='Y' where rowid = i.rowid;
  if l_limit>=1000 then
    commit;
    l_limit:=0;
  end if;
  end loop;
  update process_status set completed_yn='Y', end_time=sysdate, processed_rows=l_count 
  where job_id = jobid;
  commit;
  exception when others then
  l_errcode:=sqlcode;
  update process_status set completed_yn='F', status='FAILED WITH '||l_errcode||'.' where job_id = jobid;
  commit;
end;
/
exec parallel_job(100);

Monitor the progress of this job and you will see a significant improvement, as the job completes in less than 5 minutes, as against 2 hours, and utilizes the hardware resources optimally.
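For example, chunk completion can be tracked with a query like this against the process_status table created above (a sketch):

select completed_yn, count(*) chunks, sum(processed_rows) total_rows
from   process_status
group by completed_yn;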
