dbms_parallel_execute – still an underused feature of Oracle Database 11g

I still find that many developers are unaware of dbms_parallel_execute, a far more efficient way of distributing data across multiple chunks for parallel processing. Recently, I came across one such implementation, where the developers had introduced their own way of creating multiple chunks using the ROWNUM pseudo-column and then distributing these records into the required number of streams (64, in this case). Further, they created an index on some of the columns in order to update the records that had been processed. The drawbacks of this implementation were:

  1. Block-level contention on the table, as each stream could request and process blocks being worked on by other streams.
  2. Index block contention, as multiple streams would update the same index blocks. A better way would be to select the ROWID and UPDATE based on ROWID.

This implementation required around 50 million rows to be processed in 64 parallel streams. Due to the issues mentioned above, the process took an abnormally long time and its movement to production was put on hold. Our advice was sought on the optimization, and we could see that the developers were still unaware of a fantastic, ready-made approach called dbms_parallel_execute. I had written one of my previous blogs on the same topic and wanted them to go through it for some examples. We also explained to them the benefits of this approach over their current implementation. They readily accepted this and decided to give it a try before moving to production.

I was expecting some queries from the developers, as this was an entirely new concept for them, and yes, the first concern raised was over the CHUNK_SIZE=>64 value in the dbms_parallel_execute.create_chunks_by_rowid procedure. The developers expected this procedure to create 64 chunks covering the entire table, which could then be processed by 64 parallel jobs. In this case, since the table was huge, it created around 5000+ chunks, which was a bit surprising for them. Further, with that many chunks, the challenge was to build additional logic to run only 64 streams at any given point in time. That logic would also have to detect finished jobs and spawn new ones automatically.

First, let us address and explain the CHUNK_SIZE argument of the create_chunks_by_rowid procedure. This value works together with the BY_ROW argument. If BY_ROW is TRUE, the chunks are created based on an approximate number of rows: with CHUNK_SIZE=>64 and BY_ROW=>TRUE, each chunk will contain approximately 64 rows or fewer. With CHUNK_SIZE=>64 and BY_ROW=>FALSE, the division is based on blocks instead, with each chunk containing approximately 64 blocks or fewer. Further, a chunk will not span multiple extents, and therefore, if a table has many small extents, you may see a high count of chunks created (such as the 5000+ in the example mentioned above).
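As a rough sketch of the two modes (the task names and the SCOTT.EMP_BIG table here are hypothetical, purely for illustration):

```sql
-- Chunk by rows: each chunk covers roughly 10,000 rows
begin
  dbms_parallel_execute.create_task(task_name => 'MY_TASK_ROWS');
  dbms_parallel_execute.create_chunks_by_rowid(
    task_name   => 'MY_TASK_ROWS',
    table_owner => 'SCOTT',
    table_name  => 'EMP_BIG',
    by_row      => true,
    chunk_size  => 10000);
end;
/

-- Chunk by blocks: each chunk covers roughly 64 blocks
-- (and never spans an extent, hence possibly many chunks)
begin
  dbms_parallel_execute.create_task(task_name => 'MY_TASK_BLKS');
  dbms_parallel_execute.create_chunks_by_rowid(
    task_name   => 'MY_TASK_BLKS',
    table_owner => 'SCOTT',
    table_name  => 'EMP_BIG',
    by_row      => false,
    chunk_size  => 64);
end;
/
```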

The start and end ROWID of each of the chunks are stored in the USER_PARALLEL_EXECUTE_CHUNKS view. Since this is a SYS-owned object, we do not have much control over it, so I usually create my own table and insert the data from USER_PARALLEL_EXECUTE_CHUNKS into it. Once the data is in our own table, it is easier to update it with the additional information required for analysis. Let us walk through a simple example.

## Create our own table to track the status of each job.

create table process_status (
process_name            varchar2(30),
job_id                  number,
s_rowid                 rowid,
e_rowid                 rowid,
start_time              date,
end_time                date,
processed_rows          number,
completed_yn            varchar2(1),
status                  varchar2(1000));

## Create a Parallel Job Task
exec  dbms_parallel_execute.create_task('VIVEK_JOB');

## Create Chunks, with the approximate Blocks / Rows (if by_row=>FALSE, it will distribute on blocks, else on rows)
exec  dbms_parallel_execute.create_chunks_by_rowid('VIVEK_JOB','FLEX','CH_OD_AGREEMENT',false, 100);

## Insert the Chunks in our process_status table
insert into process_status(process_name, job_id, s_rowid, e_rowid)
select 'VIVEK_JOB', chunk_id, start_rowid, end_rowid from user_parallel_execute_chunks
where task_name='VIVEK_JOB' order by 1;
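With the chunks copied into our own table, progress can be monitored with a simple query (a sketch; the status columns are populated by the VIVEK_PROC procedure as each chunk runs):

```sql
-- How many chunks are in each state, and rows processed so far
select status, count(*) chunks, sum(processed_rows) rows_done
  from process_status
 where process_name = 'VIVEK_JOB'
 group by status;
```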

## Our Procedure that will run in parallel. We need to pass start_rowid and end_rowid as the Input parameters

create or replace procedure FLEX.vivek_proc (st_rowid in rowid, ed_rowid in rowid) as
l_cnt      number;
begin
  update FLEX.process_status set start_time=sysdate, status='RUNNING'
   where s_rowid=st_rowid;
  -- main business logic goes here; for this example we simply count the rows in the chunk
  select count(*) into l_cnt from FLEX.CH_OD_AGREEMENT
   where rowid between st_rowid and ed_rowid;
  update FLEX.process_status set status='COMPLETED', end_time=sysdate, processed_rows=l_cnt
   where s_rowid=st_rowid;
  commit;
end;
/

VIVEK_PROC is the procedure that is required to run in parallel streams, with the start and end ROWID as its input parameters. I have also modified the procedure to update the PROCESS_STATUS table with the start_time and status at the beginning, and with the end_time, processed_rows and status on completion of the main business logic. Further, this data can be maintained as history (by inserting into a history table before the next run). It is for this flexibility that I chose to create the PROCESS_STATUS table.
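The history step mentioned above could be sketched as follows (PROCESS_STATUS_HIST is a hypothetical table with the same columns as PROCESS_STATUS plus a run date):

```sql
-- Archive the previous run before repopulating process_status
insert into process_status_hist
select p.*, sysdate run_date
  from process_status p
 where process_name = 'VIVEK_JOB';

delete from process_status where process_name = 'VIVEK_JOB';
commit;
```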

The total number of chunks created by dbms_parallel_execute for my test table was around 3500, and I wanted to spawn only 64 streams. Many developers would restrict job_queue_processes to 64 and schedule all 3500 jobs together. Restricting job_queue_processes to 64 does ensure that only 64 jobs run, but it is a very crude way of doing so. Further, it only works if no other jobs are scheduled during this time, otherwise those would be queued as well. The best option is, again, dbms_parallel_execute: its RUN_TASK procedure has a PARALLEL_LEVEL argument that limits the number of parallel jobs running at any given point in time. No additional logic is required to check for the completion of a job and to schedule a new one. Oracle will do it for us.

## Run the Task, with number of parallel job queue processes. This is controlled using PARALLEL_LEVEL

begin
  dbms_parallel_execute.run_task
      ( task_name      => 'VIVEK_JOB',
        sql_stmt       => 'begin FLEX.vivek_proc( :start_id, :end_id ); end;',
        language_flag  => DBMS_SQL.NATIVE,
        parallel_level => 64 );
end;
/
## Once the Jobs are done, drop the Job Task
exec dbms_parallel_execute.drop_task('VIVEK_JOB');

DROP_TASK will remove the task and its entries from the USER_PARALLEL_EXECUTE_CHUNKS view.
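Before dropping the task, it is worth confirming that it actually finished, and resuming it if some chunks failed. A minimal sketch using the package's own status constants:

```sql
declare
  l_status number;
begin
  l_status := dbms_parallel_execute.task_status('VIVEK_JOB');
  -- Re-run only the failed chunks, if any
  if l_status = dbms_parallel_execute.finished_with_error then
    dbms_parallel_execute.resume_task('VIVEK_JOB');
  end if;
end;
/
```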


About Vivek Sharma
I am an Oracle Professional from Mumbai (India). I blog on issues that I feel are interesting to my readers. Some of these are my real-life examples, which I hope you will find interesting. Comments are always welcome. The technical observations and views here are my own and not necessarily those of Oracle or its affiliates. They are purely based on my understanding, learnings and resolutions of various customer issues.

13 Responses to dbms_parallel_execute – still an underused feature of Oracle Database 11g

  1. yogendra mehta says:

    Good article, very useful


  2. OraKle says:

    SQL> begin
    dbms_parallel_execute.create_chunks_by_rowid( task_name => 'DACHIN_JOB', table_owner => 'EMS_NEW', table_name => 'DAC', by_row => true, chunk_size => 50000 );

    PL/SQL procedure successfully completed.

    SQL> select START_ROWID, END_ROWID,(select count(*) from DAC where ROWID between START_ROWID and END_ROWID) cnt from user_parallel_execute_chunks where rownum<10;

    —————— —————— ———-

    Why no data in the chunks?


  3. OraKle says:

    dbms_parallel_execute.create_chunks_by_rowid( task_name => 'DACHIN_JOB', table_owner => 'TFS_NEW', table_name => 'DAC', by_row => false, chunk_size => 5000 );

    With the chunk size set that high, I see the chunk size is 3k records. This table doesn't contain big large data. It is a copy of all_tables (bloated).


    • Vivek says:

      Hi Orakle,

      Sorry for the delayed response. Can you send me your table creation script and other details, like number of rows, blocks in the table. Will check on this and revert.



  4. Shailesh Dhus says:

    Vivek , Nice article ,helped me to rewrite one procedure in parallel. Thanks , keep posting !!!


  5. eva lona says:

    Good post. Really useful.


  6. Farooq says:

    What is the minimum chunk_size value (for row and block)?


  7. I'm trying to use this in my application, but when I try to execute dbms_parallel_execute for a procedure it won't work; if I use it for a normal update statement it works. It's been in unassigned state only.


  8. Ivan says:

    Hi. Using dbms_parallel_execute for a few years. Currently focused on problem with use of parallel execute run by ranges of primary key ID.
    Example: 1 batch consists of 500 bulk payments each having 1 to 300 payments. Bulk payments are parsed sequentially and for each parsed bulk payment parallel execution is executed. Thus within one run there are sequentially started parallel tasks processing those 1 to 300 individual payments (divided into 25payments chunks).
    Problem I faced is, that each started parallel execution left unclosed cursor (even though task was dropped before starting new one). From open_cursors view is visible, that unclosed cursor is implicit from dbms parallel:

    "select min("PKID"), max("PKID") from "HP…"

    Therefore in case of larger batch (of 300+ bulk payments) we hit max open cursors limit and fail.

    Could anyone help, how to finalize successfully finished parallel execution including close open cursor and other “cookies”???

    I have tried DBMS_PARALLEL_EXECUTE.stop_task, DBMS_PARALLEL_EXECUTE.drop_task, but no success at all …


  9. Thomas Perl says:

    Hi Vivek

    Thanks for providing this excellent article, it's very helpful. I have a 100 million row table out of which I delete 1.5 million rows on a daily basis and add almost the same number of rows every day. I am not sure how to split the chunks, as I want to delete only rows between two dates and I do not have any number column. Can this table still be used with dbms_parallel_execute? If so, can you please tell me how? This table does not have a primary key, but there is a unique key on a char column.



  10. Ravi says:

    Is it possible to add new chunks(startid and endid) to an existing task which is already in progress ?

