dbms_parallel_execute: a still-underused feature of Oracle Database 11g

I still find many developers unaware of dbms_parallel_execute, which is a better and more efficient way of distributing data across multiple chunks for parallel processing. Recently, I came across one such implementation, where the developers had introduced their own way of creating multiple chunks using the ROWNUM pseudo column and then distributing these records into the required number of streams. In this case, the developers intended to spawn 64 streams. Further, they created an index on some of the columns in order to update the records that had been processed. The drawbacks of this implementation were:

  1. Block-level contention on the table, as each stream could request and process blocks also being worked on by other streams.
  2. Index block contention, as multiple streams would update the same index blocks. A better way is to select the ROWIDs and update based on ROWID (a sketch follows below).
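
To illustrate the second point, here is a minimal sketch of ROWID-based processing, assuming a hypothetical table T with a PROCESSED flag column. The ROWIDs are collected once, and the updates then go straight to the rows without revisiting the index:

declare
  type t_rowids is table of rowid index by pls_integer;
  l_rowids t_rowids;
begin
  -- Collect one batch of ROWIDs for this stream.
  select rowid bulk collect into l_rowids
    from T
   where processed = 'N'
     and rownum <= 10000;
  -- Update by ROWID; no index lookup is needed to locate the rows.
  forall i in 1 .. l_rowids.count
    update T set processed = 'Y'
     where rowid = l_rowids(i);
  commit;
end;
/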

This implementation required around 50 million rows to be processed in 64 parallel streams. Due to the issues mentioned above, the process took an abnormally long time, and the production movement was kept on hold. Our advice was sought on the optimization, and we could see that the developers were still unaware of a fantastic, ready-made approach called dbms_parallel_execute. I had written a previous blog post on the same topic and asked them to go through it for some examples. We also explained to them the benefits of this approach over their current implementation. They readily accepted it and decided to give it a try before moving to production. I was expecting some queries from the developers, as this was an entirely new concept for them, and yes, the first concern raised was over the CHUNK_SIZE=>64 value in the dbms_parallel_execute.create_chunks_by_rowid procedure. The developers expected this procedure to create 64 chunks over the entire table, which could then be processed by 64 parallel jobs. In this case, since the table was huge, it created around 5000+ chunks, which was a bit of a surprise for them. Further, even with this many chunks, the challenge was building the additional logic to run only 64 streams at any given point in time, while also taking care of finished jobs and spawning new ones automatically.

First, let us address and explain the CHUNK_SIZE argument of the create_chunks_by_rowid procedure. This value works together with the BY_ROW argument. If BY_ROW is TRUE, the chunks are created based on an approximate number of rows: with CHUNK_SIZE=>64 and BY_ROW=>TRUE, each chunk will contain approximately 64 or fewer rows. With CHUNK_SIZE=>64 and BY_ROW=>FALSE, the division is based on blocks, with each chunk containing approximately 64 or fewer blocks. Further, a chunk will not span multiple extents, so if a table has many small extents, you may see a high count of chunks created (5000+ in the case of the example mentioned above).
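
To see the two modes side by side, reusing the task and table names from the walkthrough below (and assuming the task has already been created):

-- By rows: each chunk covers approximately 100 rows.
exec dbms_parallel_execute.create_chunks_by_rowid('VIVEK_JOB', 'FLEX', 'CH_OD_AGREEMENT', true, 100);

-- By blocks: each chunk covers approximately 100 blocks, never spanning an extent.
exec dbms_parallel_execute.create_chunks_by_rowid('VIVEK_JOB', 'FLEX', 'CH_OD_AGREEMENT', false, 100);

-- In either case, the resulting chunk count can be checked immediately.
select count(*) from user_parallel_execute_chunks where task_name = 'VIVEK_JOB';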

The start and end ROWID of each chunk are stored in the USER_PARALLEL_EXECUTE_CHUNKS view. Since this is a SYS-owned object, we do not have much control over it, so I usually create my own table and insert the data from USER_PARALLEL_EXECUTE_CHUNKS into it. Once the data is in our own table, it is easier to update it with the additional information required for analysis. Let us walk through a simple example.

## Create our own table to track the status of each job.

create table process_status (
process_name            varchar2(30),
job_id                  number,
s_rowid                 rowid,
e_rowid                 rowid,
start_time              date,
end_time                date,
processed_rows          number,
completed_yn            varchar2(1),
status                  varchar2(1000));

## Create a Parallel Job Task
exec  dbms_parallel_execute.create_task('VIVEK_JOB');

## Create the chunks, divided by approximate blocks / rows (if BY_ROW=>FALSE, it distributes by blocks, else by rows)
exec  dbms_parallel_execute.create_chunks_by_rowid('VIVEK_JOB','FLEX','CH_OD_AGREEMENT',false, 100);

## Insert the chunks into our process_status table
insert into process_status(process_name, job_id, s_rowid, e_rowid)
select 'VIVEK_JOB', chunk_id, start_rowid, end_rowid from user_parallel_execute_chunks
where task_name='VIVEK_JOB' order by 1;
commit;
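
## A quick sanity check: the row count in our table should match the chunk count reported for the task

select count(*) from process_status where process_name = 'VIVEK_JOB';
select count(*) from user_parallel_execute_chunks where task_name = 'VIVEK_JOB';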

## Our procedure that will run in parallel; we need to pass start_rowid and end_rowid as the input parameters

create or replace procedure FLEX.vivek_proc (st_rowid in rowid, ed_rowid in rowid) as
  l_cnt  number;
begin
  -- Mark this chunk as running in our tracking table.
  update FLEX.process_status set start_time = sysdate, status = 'RUNNING'
   where s_rowid = st_rowid;
  commit;

  -- The actual business logic goes here; a simple count stands in for it.
  select count(*) into l_cnt from FLEX.CH_OD_AGREEMENT
   where rowid between st_rowid and ed_rowid;

  -- Mark the chunk as completed, recording the number of rows processed.
  update FLEX.process_status set status = 'COMPLETED', end_time = sysdate, processed_rows = l_cnt
   where s_rowid = st_rowid;
  commit;
end;
/

VIVEK_PROC is the procedure that is to be run in parallel streams; its input parameters are the start and end ROWID. I have also made the procedure update the PROCESS_STATUS table with the start_time and status at the beginning, and with the end_time, processed_rows and status on completion of the main business logic. Further, this data can be maintained as history (by inserting it into a history table before the next run). It is for this flexibility that I chose to create the PROCESS_STATUS table.
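
As a minimal sketch of that history step, assuming a hypothetical PROCESS_STATUS_HIST table, which is simply PROCESS_STATUS with an added RUN_DATE column:

-- One-time setup: an empty copy of PROCESS_STATUS with a RUN_DATE column.
create table process_status_hist as
  select sysdate run_date, p.* from process_status p where 1 = 2;

-- Before the next run, archive the previous run's statistics.
insert into process_status_hist
  select sysdate, p.* from process_status p;

delete from process_status where process_name = 'VIVEK_JOB';
commit;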

The total number of chunks created by dbms_parallel_execute for my test table was around 3500, and I wanted to spawn only 64 streams. Many developers would restrict job_queue_processes to 64 and schedule all 3500 jobs together; restricting job_queue_processes to 64 does ensure that only 64 jobs run at a time. This is achievable, but it is a very crude way of doing so. Further, it only works if no other jobs are scheduled during this window, otherwise those will be queued as well. The best option is, again, dbms_parallel_execute: its RUN_TASK procedure has a PARALLEL_LEVEL argument that limits the number of parallel jobs running at any given point in time. No additional logic is required to check for the completion of a job and to schedule a new one; Oracle does it for us.

## Run the task; the number of concurrent job queue processes is controlled using PARALLEL_LEVEL

begin
      dbms_parallel_execute.run_task
      ( task_name      => 'VIVEK_JOB',
        sql_stmt       => 'begin FLEX.vivek_proc( :start_id, :end_id ); end;',
        language_flag  => DBMS_SQL.NATIVE,
        parallel_level => 64 );
      commit;
end;
/
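
Before dropping the task, it is worth confirming that every chunk completed; if some chunks failed, dbms_parallel_execute.resume_task re-runs just those chunks:

-- FINISHED means all chunks were processed successfully.
select status from user_parallel_execute_tasks where task_name = 'VIVEK_JOB';

-- Per-chunk breakdown; look out for PROCESSED_WITH_ERROR.
select status, count(*)
  from user_parallel_execute_chunks
 where task_name = 'VIVEK_JOB'
 group by status;

-- Re-run only the failed chunks, if any.
exec dbms_parallel_execute.resume_task('VIVEK_JOB');
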
## Once the jobs are done, drop the task
exec dbms_parallel_execute.drop_task('VIVEK_JOB');

DROP_TASK removes the task's entries from the USER_PARALLEL_EXECUTE_CHUNKS view.
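
Since the chunk details were copied into our own PROCESS_STATUS table, the runtime statistics survive the DROP_TASK; for example, to list the slowest chunks:

select job_id, processed_rows,
       round((end_time - start_time) * 86400) elapsed_secs
  from process_status
 where process_name = 'VIVEK_JOB'
 order by elapsed_secs desc;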



6 Responses to dbms_parallel_execute: a still-underused feature of Oracle Database 11g

  1. yogendra mehta says:

    Good article, very useful

  2. OraKle says:

    SQL> begin
      dbms_parallel_execute.drop_task('DACHIN_JOB');
      dbms_parallel_execute.create_task('DACHIN_JOB');
      dbms_parallel_execute.create_chunks_by_rowid( task_name => 'DACHIN_JOB', table_owner => 'EMS_NEW', table_name => 'DAC', by_row => true, chunk_size => 50000 );
    end;
    /

    PL/SQL procedure successfully completed.

    SQL> select START_ROWID, END_ROWID,(select count(*) from DAC where ROWID between START_ROWID and END_ROWID) cnt from user_parallel_execute_chunks where rownum<10;

    START_ROWID        END_ROWID                 CNT
    ------------------ ------------------ ----------
    AAEf6VAAIAADAGAAAA AAEf6VAAIAADAGxCcP          0
    AAEf6VAAIAADAGyAAA AAEf6VAAIAADAHjCcP          0
    AAEf6VAAIAADAHkAAA AAEf6VAAIAADAH/CcP          0
    AAEf6VAAIAADAIAAAA AAEf6VAAIAADAIxCcP          0
    AAEf6VAAIAADAIyAAA AAEf6VAAIAADAJjCcP          0
    AAEf6VAAIAADAJkAAA AAEf6VAAIAADAJ/CcP          0
    AAEf6VAAIAADAKAAAA AAEf6VAAIAADAKxCcP          0
    AAEf6VAAIAADAKyAAA AAEf6VAAIAADALjCcP          0
    AAEf6VAAIAADALkAAA AAEf6VAAIAADAL/CcP          0

    Why no data in the chunks?

  3. OraKle says:

    begin
      dbms_parallel_execute.drop_task('DACHIN_JOB');
      dbms_parallel_execute.create_task('DACHIN_JOB');
      dbms_parallel_execute.create_chunks_by_rowid( task_name => 'DACHIN_JOB', table_owner => 'TFS_NEW', table_name => 'DAC', by_row => false, chunk_size => 5000 );
    end;
    /

    With the block count per chunk set this high, I see chunks of only about 3K records each. This table doesn't contain large data; it is a copy of all_tables (bloated).

    • Vivek says:

      Hi Orakle,

      Sorry for the delayed response. Can you send me your table creation script and other details, like the number of rows and blocks in the table? I will check on this and revert.

      Regards
      Vivek

  4. Shailesh Dhus says:

    Vivek, nice article; it helped me to rewrite one procedure in parallel. Thanks, keep posting!!!
