Rowid based Parallelism (Manual) vs Oracle Parallelism!

This post was one of my most appreciated write-ups, with a good number of hits. Since my previous blog URL is inaccessible and I want my readers to have access to this post, I thought of re-posting it. Another reason to re-post it is to make DBAs, and more importantly developers, aware of a very effective and failure-free strategy when it comes to accessing a huge table.

A background on a recent Purging Activity (the reason behind re-posting my previous blog)
On 17th August, I wrote on “Efficient Purging”, which discussed purging data from a 60 GB table. That purge was achieved in a very efficient and timely manner during my visit to a customer in July 2009. Recently, I came across a similar purging activity on a partitioned table of 75 GB in size. The strategy I used in this case was Approach 1, as mentioned in my previous blog, because rows from all the partitions had to be accessed and CTAS therefore seemed the better approach. Since I worked on two purging activities, blogs on these two topics were worth posting.

The recent purging activity was for one of my customers. August 15th is a National Holiday in India, and this year it coincided with a long weekend, so the customer took a downtime of 24 hours to purge the data from a set of tables. Changing the strategy at the very last moment seemed non-feasible, therefore the customer went ahead with the pre-decided strategy of a NORMAL delete. While 90% of the tables were small in size, 8-9 tables were huge, especially one partitioned table that was 75 GB in size. The biggest challenge in this activity was maintaining data consistency: once the activity starts and data from some of the tables has been purged and committed, the activity has to be either completed 100% or rolled back. Therefore, a backup of the entire database was taken before the activity.

Once you know the strategy the application team used to purge the data, you will see the link between my previous blog and this one.

Strategy used and planned by the Application Team
The strategy planned by the Application Team for the huge tables was:

  1. Populate 5 Tables with the ROWID of the rows to be Purged.
  2. Drop all the Indexes from the to-be-purged table.
  3. Delete from the Table from 5 different sessions (a hypothetical sketch follows this list).
  4. Re-Org the Table and Create all required Indexes.
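
To make the discussion concrete, here is a hypothetical sketch of Steps 1 and 3. The table and column names below are illustrative only and are not those of the customer's system.

-- Step 1 (done before the downtime): each of the 5 driver tables receives
-- roughly one fifth of the rowids that qualify for purging.
create table purge_rowids_1 (rid rowid);

insert /*+ append */ into purge_rowids_1
select t.rowid
from   big_table t
where  t.status = 'TO_BE_PURGED';     -- hypothetical purge condition
commit;

-- Step 3 (during the downtime): one DELETE per session, each driven by its
-- own purge_rowids_N table.
delete from big_table
where  rowid in (select rid from purge_rowids_1);
commit;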

Each of the 5 tables, as mentioned in Step 1 above, was populated with “Total Rows to be Purged” / 5 rows, with the aim of achieving parallelism. These tables were populated before the downtime. Once they were populated, the DELETE statement was executed from 5 different sessions. Clearly, this logic is very resource intensive, time consuming and an inefficient way of achieving the goal, the reasons being:

  1. The to-be-purged table, a huge table, is scanned five times.
  2. Each of these scans was a Full Table Scan.
  3. A DELETE generates heavy redo and is not an efficient way to delete a large number of rows.
  4. Table re-org and Index Creation will be required after the activity.

The most inefficient step was the Full Table Scan of a huge table, and this is where the activity reached a stage at which it was on the verge of being rolled back. The problem occurred for one table of 75 GB, and at this stage Approach 1, i.e. creating a new table with the data to be retained, was adopted. The work that the DELETE statements from 5 different sessions could not finish in almost 4-5 hours completed in less than 2 hours using the CTAS approach.
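
For completeness, a hedged sketch of Approach 1 follows; the object names are illustrative, and a PARTITION clause would be needed if the new table must remain partitioned.

-- Create a new table holding only the rows to be retained.
create table big_table_keep
nologging parallel 8
tablespace users            -- illustrative tablespace
as
select * from big_table
where  status != 'TO_BE_PURGED';

-- After validating the row counts, swap the tables and rebuild the
-- required indexes, constraints and grants on the new table.
alter table big_table rename to big_table_old;
alter table big_table_keep rename to big_table;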

Once the activity was over, the customer requested a “learnings” meeting to discuss why and where the activity went wrong. While the conceptual issues with a NORMAL delete, as mentioned in my previous blog and in this one, were explained, the step that took most of the time, and which seems to be one of the reasons for this failure, was the way parallelism was adopted by the Application Team. It was during this review discussion that ROWID parallelism was explained to the Application Team as an efficient way of spawning multiple sessions. The benefit of ROWID parallelism is that it splits a huge table into smaller chunks and each session works on its own chunk. This would also have been a better choice, but it needs to be adopted based on the resources available. If I am short of resources, then instead of using Oracle parallelism, manual parallelism would be a better choice as it eliminates the chances of any failure. This is well explained in my ROWID based Parallelism blog posted on my previous blog URL.

For the benefit of my readers, I am posting it again on this site; it explains the way I efficiently used this technique on a huge table and on a low-capacity server.

The Extract from my BLOG

Recently, I came across an interesting challenge of purging data from a 500 GB table. This was not a plain purge but was based on a join and filtering on two other tables. The reason I term it a challenge is that the database was on a low-end server with 12 CPUs and 52 GB RAM. The original query used for purging is as under:

create table wia_values_new_1 nologging parallel 24
tablespace A_TS_IDX as
select a.*
from   wia_values a,
       (select a.it_type, a.it_key, b.name
        from   wi a, wia b
        where  a.begin_date > '01-NOV-2007'
        and    a.it_type = b.it_type) vv
where  a.it_type = vv.it_type
and    a.it_key  = vv.it_key
and    a.name    = vv.name;

To accomplish this task, the obvious option was to run the query with parallel processes. On a 12 CPU machine, the Nested Loop join took more than 10-12 hours and had to be aborted as this much time was unacceptable. With a Hash join, the query used to fail with “unable to extend temporary segment” after 3-4 hours of execution, which meant restarting the entire process.

From Oracle 9i onwards, one can use resumable space allocation (DBMS_RESUMABLE) to take care of these errors. Once the space issue is fixed, the process continues from the point where it was suspended.
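
A sketch of how this can be enabled at the session level follows; the timeout and name are only examples.

alter session enable resumable timeout 7200 name 'wia_values purge';

-- A suspended statement can be watched from DBA_RESUMABLE while the DBA adds
-- temporary space; once space is available, the statement resumes on its own.
select name, status, error_msg from dba_resumable;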

Finally, I recollected a very good methodology, called ROWID Parallelism, introduced by Thomas Kyte. I had read about it long back in his book “Effective Oracle by Design” (one of my favourites), but used and implemented it practically for the first time here. Believe me, with this parallelism I got a tremendous performance benefit and could accomplish this challenging task in less than 2 hours with great ease.

Unfortunately, I could not use this on production because, given the challenge involved, the customer provided us with additional CPUs and memory, and a Nested Loop query did the purge in 3 hours. But, based on the comparison, ROWID parallelism could have achieved the task in less than an hour and a half.

The steps that I performed were as under :

  1. Split the table into multiple chunks, based on the degree of rowid parallelism to be used. In my case, the extents of the 500 GB table were scattered across 208 datafiles, hence I split the table into 200 chunks.
  2. Create a Job_Table, to store the low and high rowids of each chunk. This table will store some additional data that can be used later.
  3. Spawn and Schedule parallel jobs using dbms_job. Each job will process its own chunk.
  4. Merge the data into one single table.

The advantages of using this parallelism are:

  1. Easier to split the table into multiple chunks.
  2. Easier to manage, based on the CPU Utilization.
  3. If any one of the scripts or jobs fails, it is easier to restart that single job rather than restarting the entire process (see the sketch after this list).
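
For example, a failed chunk can be picked up from the job table and resubmitted on its own (a sketch that uses the JOB_TABLE and P_JOB procedure defined later in this post; the job_id below is only an example).

select job_id from job_table where completed_yn = 'F';

declare
  l_jobno number;
begin
  dbms_job.submit(l_jobno, 'p_job(137);');   -- resubmit only the failed chunk
  commit;
end;
/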

These are the queries that were used to accomplish this purging task using ROWID Parallelism.

Job_Table

create table job_table(
job_id		int primary key,
lo_rid		rowid,
hi_rid		rowid,
completed_yn	varchar2(1),
rows_processed	number(20),
start_time	date,
end_time	date);

Splitting the table into multiple chunks and storing their boundaries in job_table

insert into job_table(job_id, lo_rid, hi_rid)
select  grp, dbms_rowid.rowid_create(1,data_object_id, lo_fno, lo_block,0) min_rid, 
	dbms_rowid.rowid_create(1,data_object_id, hi_fno, hi_block,0) max_rid
from	(select  distinct grp, first_value(relative_fno) over (partition by grp order by relative_fno, 
	block_id rows  between unbounded preceding and unbounded following) lo_fno, 
	first_value(block_id) over (partition by grp order by relative_fno, 
	block_id rows  between unbounded preceding and unbounded following) lo_block, 
	last_value(relative_fno) over (partition by grp order by relative_fno, 
	block_id rows  between unbounded preceding and unbounded following) hi_fno, 
	last_value(block_id) over (partition by grp order by relative_fno, 
	block_id rows  between unbounded preceding and unbounded following) hi_block, 
	sum(blocks) over (partition by grp) sum_blocks 
	from	(select  relative_fno, block_id, blocks, sum(blocks) over (order by relative_fno, block_id) cum_blocks, 
		sum(blocks) over () tot_blocks, trunc((sum(blocks) over (order by relative_fno, block_id)-0.01) / (sum(blocks) over ()/200)) grp
		from	dba_extents
		where	segment_name='WIA_VALUES'
		and	owner=user
		order by relative_fno, block_id)),
		(select data_object_id from dba_objects
		where	owner=user
		and	object_name='WIA_VALUES');
commit;

Now I have 200 rows in this table with low and high rowids that take care of the entire table.

GRP MIN_RID            MAX_RID
0   AABDksAALAAAAAJAAA AABDksAB2AAB9f5AAA         
1   AABDksAB2AAB9gZAAA AABDksACkAACSFJAAA         
2   AABDksACkAACSFZAAA AABDksADSAAAsxJAAA         
3   AABDksADSAAAsx5AAA AABDksADzAACT6pAAA         
4   AABDksADzAACT7JAAA AABDksAEkAAAADJAAA         
5   AABDksAEkAAAADpAAA AABDksAFMAAAKFZAAA         
6   AABDksAFMAAAKFpAAA AABDksAGmAAAYVJAAA         
7   AABDksAGmAAAYV5AAA AABDksAHTAAAnsJAAA         
8   AABDksAHTAAAnsZAAA AABDksAH6AACS0pAAA         
9   AABDksAH6AACS1JAAA AABDksAIeAABo65AAA        
10  AABDksAIeAABo+5AAA AABDksAJRAAAwmZAAA        
11  AABDksAJRAAAwnZAAA AABDksAJ1AACDppAAA
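
Before scheduling all 200 jobs, a single chunk can be checked with a rowid range scan; the rowids below are taken from the first line of the sample output above.

select count(*)
from   wia_values
where  rowid between chartorowid('AABDksAALAAAAAJAAA')
                 and chartorowid('AABDksAB2AAB9f5AAA');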

I created 200 tables, into which the parallel jobs will insert the required rows.

declare
l_statement varchar2(200);
begin
  for i in 0..199
  loop
    l_statement:='create table wia_new_'||i||' tablespace A_TS_IDX as select * from wia_values where 1=2';
    execute immediate l_statement;
  end loop;
end;
/

Then I created a database procedure that is executed in parallel, with the job_id as its input value. Based on this input job_id, the insert statement picks up the low and high rowids and does a rowid range scan of the table. Based on the status (completion or failure), the job_table is updated.

create or replace procedure p_job(p_job in number) as
l_job_table  job_table%rowtype;
l_statement  varchar2(2000);
l_rows       number(20):=0;
l_date       date;
begin
  -- pick up the chunk, but only if it is not already in progress ('P') or done ('Y')
  select * into l_job_table from job_table
  where job_id=p_job
  and   nvl(completed_yn,'N')!='P'
  and   (completed_yn!='Y' or completed_yn is null);
  l_rows:=sql%rowcount;
  if l_rows>0 then
    execute immediate 'alter session set db_file_multiblock_read_count=128';
    select sysdate into l_date from dual;
    -- mark the chunk as in progress
    update job_table set completed_yn='P', start_time=l_date where job_id=p_job;
    commit;
    l_statement:='insert /*+ append */ into wia_new_'||p_job||
                 ' select a.* from wia_values a, '||
                 ' (select a.it_type, a.it_key, b.name from wi a, wia b '||
                 '  where a.begin_date > ''01-NOV-2007'' and a.it_type=b.it_type) vv '||
                 ' where a.it_type=vv.it_type '||
                 ' and   a.it_key=vv.it_key '||
                 ' and   a.name=vv.name '||
                 ' and   a.rowid between '''||l_job_table.lo_rid||''' and '''||l_job_table.hi_rid||'''';
    begin
       execute immediate l_statement;
       l_rows:=sql%rowcount;
       select sysdate into l_date from dual;
       -- mark the chunk as completed and record the rows processed
       update job_table set completed_yn='Y', rows_processed=l_rows, end_time=l_date where job_id=p_job;
       commit;
    exception when others then
       -- mark the chunk as failed so that this single job can be resubmitted later
       update job_table set completed_yn='F', start_time=null where job_id=p_job;
       commit;
    end;
  end if;
end;
/
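
The procedure can also be run for a single chunk in the current session, which is a convenient way to test it before scheduling the full set of jobs.

begin
  p_job(0);          -- process chunk 0 only
end;
/

select job_id, completed_yn, rows_processed, start_time, end_time
from   job_table
where  job_id = 0;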

DBMS_JOB is used to schedule the jobs at regular intervals, based on the CPU utilization.

declare
l_jobno number;
BEGIN
  for i in 0..29
  loop
    DBMS_JOB.SUBMIT(l_jobno,'p_job('||i||');', trunc(sysdate)+16/24+00/(24*60));
  end loop;
  COMMIT;
END;
/

Based on the CPU utilization, I spawned 30 parallel jobs. Each job took 5-7 minutes to complete a rowid range scan of wia_values, which I observed from v$session_longops. Once the rowid range scan was complete, the hash join and insertion took another 10 minutes, so one job took almost 20-25 minutes. Hence, I scheduled another set of 30 jobs at a 10-minute interval. The overall start and end times, as an example, were:
10:00 – 10:25 30 Jobs,
10:10 – 10:35 30 Jobs,
10:35 – 11:00 30 Jobs (After 100% Completion of Previous 60 Jobs).
10:45 – 11:10 30 Jobs,
11:10 – 11:35 30 Jobs (After 100% Completion of Previous 60 Jobs)
11:20 – 11:45 30 Jobs
11:45 – 12:10 20 Jobs (Remaining)
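
The progress of the individual jobs can be tracked from v$session_longops and from job_table itself; something along these lines (a sketch).

select sid, opname, target, sofar, totalwork, time_remaining
from   v$session_longops
where  sofar < totalwork;

select completed_yn, count(*)
from   job_table
group  by completed_yn;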

A Sample output after completion of all the scripts…

JOB_ID	LO_RID             HI_RID             C	PROCESSED	START_TIME		END_TIME
0	AABDksAALAAAAAJAAA AABDksAB2AAB9f5AAA Y	29429239	02-03-2008 16:00:06	02-03-2008 16:19:26         
1	AABDksAB2AAB9gZAAA AABDksACkAACSFJAAA Y	208119		02-03-2008 16:00:06	02-03-2008 16:15:51         
2	AABDksACkAACSFZAAA AABDksADSAAAsxJAAA Y	199466		02-03-2008 16:00:06	02-03-2008 16:13:23         
3	AABDksADSAAAsx5AAA AABDksADzAACT6pAAA Y	10776162	02-03-2008 16:00:06	02-03-2008 16:18:48         
4	AABDksADzAACT7JAAA AABDksAEkAAAADJAAA Y	18897287	02-03-2008 16:00:06	02-03-2008 16:19:03
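
The final step, merging the per-chunk tables back into one table (Step 4 above), is not shown here. A hedged sketch along these lines would work, assuming an empty target table wia_values_new has been created with the same structure as wia_values.

declare
l_statement varchar2(200);
begin
  for i in 0..199
  loop
    l_statement:='insert /*+ append */ into wia_values_new select * from wia_new_'||i;
    execute immediate l_statement;
    commit;   -- a direct-path (APPEND) insert must be committed before the
              -- target table can be used again in the next iteration
  end loop;
end;
/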

Conclusion: Oracle parallelism uses the same rowid range mechanism to accomplish a task, but manual parallelism works well when you are short of resources, as in my case CPUs and temporary space. Though it requires some manual work, it makes the task easier and faster.

END OF THE BLOG


About Vivek Sharma
I am an Oracle Professional from Mumbai (India). I blog on issues that I feel are interesting for my readers. Some of these are my real-life examples, which I hope you will find interesting. Comments are always welcome. The technical observations and views here are my own and not necessarily those of Oracle or its affiliates. These are purely based on my understanding, learning and resolution of various customer issues.

12 Responses to Rowid based Parallelism (Manual) vs Oracle Parallelism!

  1. Ash says:

    The complete statement is missing from your post, as shown below

    l_statement:=’insert /*+ append */ into wia_new_’p_job’ nologging select a.* from wia_values a, (select a.it_type ,a.i

    Is it possible for you to add it as well?


  2. Hey, I read a lot of blogs on a daily basis and for the most part people lack substance, but I just wanted to make a quick comment to say GREAT blog! I'll be checking in regularly now. Keep up the good work! :)

    I'm Out! :)


  3. KTK says:

    Hi,

    I have to delete a few tables having more than 50M rows and have planned to use the following method with 10 parallel sessions (maybe via jobs) for each table.

    But suddenly have got one doubt.

    Will the rowid change when the parallel sessions start committing data after the delete, so that the base query of the cursor, using MOD(DBMS_ROWID.ROWID_ROW_NUMBER(ROWID),9), may miss some rows that should be deleted?

    The following code is put in one procedure to pass p_seq;

    DECLARE
    CURSOR l_cur IS
    SELECT rowid
    FROM tab1
    WHERE MOD(DBMS_ROWID.ROWID_ROW_NUMBER(ROWID),9)=p_seq;
    TYPE l_type IS TABLE OF ROWID INDEX BY BINARY_INTEGER;
    l_rec l_type;
    BEGIN
    EXECUTE IMMEDIATE 'ALTER TABLE tab1 DISABLE ALL TRIGGERS';
    OPEN l_cur;
    LOOP
    FETCH l_cur
    BULK COLLECT INTO l_rec LIMIT 5000;
    EXIT WHEN l_rec.COUNT = 0;
    FORALL i IN l_rec.first..l_rec.last
    DELETE FROM tab1
    WHERE rowid = l_rec(i);
    COMMIT;
    END LOOP;
    CLOSE l_cur;
    COMMIT;
    EXECUTE IMMEDIATE 'ALTER TABLE tab1 ENABLE ALL TRIGGERS';
    EXCEPTION
    WHEN OTHERS THEN
    EXECUTE IMMEDIATE 'ALTER TABLE tab1 ENABLE ALL TRIGGERS';
    ROLLBACK;
    END;

    I have to do it tomorrow and it would be great if you could enlighten it.


  4. KTK says:

    Sorry missed something..There is one more condition in following query

    SELECT rowid
    FROM tab1
    WHERE NOT EXISTS (select 1 from tab2 where tab2.partial_pk = tab1.pk)
    AND MOD(DBMS_ROWID.ROWID_ROW_NUMBER(ROWID),9)=p_seq;


  5. KTK says:

    It failed with contention.


    • Vivek says:

      Hi KTK,

      The way you have implemented parallelism, it is bound to fail with block-level contention. Why not split the table based on blocks, so that a block, once assigned to session 1, will not be accessed by any other session? This is what I demonstrated in my blog. Do let me know if you need any help from me on this.

      Regards
      Vivek


  6. Pingback: Manual Parallelism – Rowid Based for Partitioned / Sub Partitioned Tables « Real Life Database / SQL Experiences : An Oracle Blog from Vivek Sharma

  7. Rob says:

    I am trying something similar to what you have done above on 10g using dbms_job.submit and Tom's job param table, but I keep running into ORA-01422 when I use execute immediate CTAS or execute immediate insert-select via dbms_job. The procedure runs fine if I call it outside of dbms_job. I am catching the error and it always points to the line with execute immediate. Have you run into that issue before? If not, do you have time to take a look?


    • Vivek says:

      Hi Rob,

      ORA-01422 – exact fetch returns more than requested number of rows. It is very difficult to comment without seeing the code. Can you also go through my recent posts on this optimization? Maybe they will help you relate to your PL/SQL block and the way it should be coded.

      Regards
      Vivek


