dbms_parallel_execute ! A still-underused feature of Oracle Database 11g
March 4, 2013 4 Comments
I still find many developers unaware of dbms_parallel_execute, which is a simpler and more efficient way of distributing data across multiple chunks. Recently, I came across one such implementation, where the Developers had introduced their own way of creating multiple chunks using the ROWNUM pseudo column and then distributing these records into the required streams. In this case, the Developers intended to spawn 64 Streams. Further, they created an Index on some of the columns in order to update the records that were processed. The drawbacks of this implementation were:
- Block Level Contention for the Table, as each Stream could request and process Blocks belonging to other Streams
- Index Block Contention, as multiple Streams would update the same Index Block. A better way would be to select ROWID and UPDATE based on ROWID.
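To illustrate the ROWID-based alternative mentioned above, a stream could collect the ROWIDs of its slice once and then update strictly by ROWID, avoiding the index probe on each update. This is only a sketch; the table name SCOTT.BIG_TAB, the processed_flag column and the 10000-row slice are hypothetical:

## Hypothetical sketch: select ROWID first, then UPDATE by ROWID
declare
  type t_rid is table of rowid;
  l_rids t_rid;
begin
  -- Collect the ROWIDs of the rows this stream should process
  select rowid bulk collect into l_rids
    from SCOTT.BIG_TAB
   where processed_flag = 'N'
     and rownum <= 10000;   -- this stream's slice (hypothetical split logic)

  -- Update by ROWID: direct access to the row, no index range scan per row
  forall i in 1 .. l_rids.count
    update SCOTT.BIG_TAB
       set processed_flag = 'Y'
     where rowid = l_rids(i);
  commit;
end;
/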
This implementation required around 50 Million rows to be processed in 64 parallel streams. Due to the issues mentioned above, the process took an abnormally long time and was kept on hold for production movement. Our advice was sought on the optimization, and we could see that the Developers were still unaware of a fantastic, ready-made approach called dbms_parallel_execute. One of my previous Blogs covered the same topic, and I wanted them to go through it for some examples. We also explained to them the benefit of this approach over their current implementation. They readily accepted this and thought of giving it a try before moving it to production. I was expecting some queries from the Developers, as this was an entirely new concept for them, and yes, the first concern raised was over the CHUNK_SIZE=>64 value in the dbms_parallel_execute.create_chunks_by_rowid procedure. The Developers expected this procedure to create 64 Chunks of the entire table, which could then be processed by 64 parallel jobs. In this case, since the table was huge, it created around 5000+ Chunks. This was a bit surprising for them. Further, even though it created a larger number of chunks, the challenge here was building additional logic to run only 64 Streams at any given point in time. The logic should also take care of finished jobs and spawn new ones automatically.
First, let us address and explain the CHUNK_SIZE argument of the create_chunks_by_rowid procedure. This value works together with the BY_ROW argument. If BY_ROW is specified as TRUE, the chunks are created based on an approximate number of rows per chunk. For example, with CHUNK_SIZE=>64 and BY_ROW=>TRUE, each chunk will contain approximately 64 or fewer rows. With CHUNK_SIZE=>64 and BY_ROW=>FALSE, the division is based on BLOCKS, with each chunk covering approximately 64 or fewer blocks. Further, a chunk will not span multiple extents; therefore, if a table has small extents, you may see a high count of CHUNKS created (as with the 5000+ chunks in the above-mentioned example).
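As a quick illustration of the two modes, the calls below chunk a hypothetical table SCOTT.BIG_TAB under a hypothetical task DEMO_TASK, once by rows and once by blocks (you would use one or the other for a given task), and then check how many chunks were actually created:

## Chunk by row count: each chunk covers approximately 10000 or fewer rows
exec dbms_parallel_execute.create_chunks_by_rowid('DEMO_TASK','SCOTT','BIG_TAB', true, 10000);

## Chunk by blocks: each chunk covers approximately 100 or fewer blocks,
## and no chunk spans an extent boundary
exec dbms_parallel_execute.create_chunks_by_rowid('DEMO_TASK','SCOTT','BIG_TAB', false, 100);

## Inspect how many chunks were actually created
select count(*) from user_parallel_execute_chunks where task_name='DEMO_TASK';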
The Start & End ROWID of each of the Chunks are stored in the USER_PARALLEL_EXECUTE_CHUNKS view. Since this is a SYS-owned object, we do not have much control over it, so I usually create my own table and insert the data from USER_PARALLEL_EXECUTE_CHUNKS into it. Once the data is in our own table, it is easier to update it with any additional information required for analysis. Let us walk through a simple example.
## Create our own table to track the status of each job.
create table process_status (
  process_name   varchar2(30),
  job_id         number,
  s_rowid        rowid,
  e_rowid        rowid,
  start_time     date,
  end_time       date,
  processed_rows number,
  completed_yn   varchar2(1),
  status         varchar2(1000));

## Create a Parallel Job Task
exec dbms_parallel_execute.create_task('VIVEK_JOB');

## Create Chunks, with the approximate Blocks / Rows
## (if BY_ROW=>FALSE, it will distribute on Blocks, else on Rows)
exec dbms_parallel_execute.create_chunks_by_rowid('VIVEK_JOB','FLEX','CH_OD_AGREEMENT', false, 100);

## Insert the Chunks into our process_status table
insert into process_status(process_name, job_id, s_rowid, e_rowid)
select 'VIVEK_JOB', chunk_id, start_rowid, end_rowid
  from user_parallel_execute_chunks
 where task_name='VIVEK_JOB'
 order by 1;
commit;

## Our Procedure that will run in parallel.
## start_rowid and end_rowid are passed as the Input parameters.
create or replace procedure FLEX.vivek_proc (st_rowid in rowid, ed_rowid in rowid) as
  l_cnt number;
begin
  update FLEX.process_status
     set start_time=sysdate, status='RUNNING'
   where s_rowid=st_rowid;
  commit;

  select count(*) into l_cnt
    from FLEX.CH_OD_AGREEMENT
   where rowid between st_rowid and ed_rowid;

  update FLEX.process_status
     set status='COMPLETED', end_time=sysdate, processed_rows=l_cnt
   where s_rowid=st_rowid;
  commit;
end;
/
VIVEK_PROC is the procedure that is required to run in parallel streams. The Input Parameters are the Start & End ROWID. I have also modified the procedure to update the PROCESS_STATUS table with the start_time and status, and, at the completion of the main business logic, the end_time, processed_rows and status. Further, this data can be maintained as history (by inserting it into a history table before the next run). It is for this flexibility that I chose to create the PROCESS_STATUS table.
The total number of chunks created by dbms_parallel_execute for my test table was around 3500, and I wanted to spawn only 64 streams. Many Developers would choose to restrict job_queue_processes to 64 and schedule all 3500 jobs together. Restricting job_queue_processes to 64 does ensure that only 64 jobs run, but it is a very crude way of doing so. Further, this would only work if no other jobs are scheduled during this time; otherwise those, too, would be queued. The best option is, again, dbms_parallel_execute: the RUN_TASK procedure takes a PARALLEL_LEVEL argument that limits the number of parallel jobs running at any given point in time. No additional logic is required to check for the completion of a job and schedule a new one. Oracle will do it for us.
## Run the Task, with the number of parallel job queue processes.
## This is controlled using PARALLEL_LEVEL.
begin
  dbms_parallel_execute.run_task (
    task_name      => 'VIVEK_JOB',
    sql_stmt       => 'begin FLEX.vivek_proc( :start_id, :end_id ); end;',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 64 );
  commit;
end;
/

## Once the Jobs are done, drop the Job Task
exec dbms_parallel_execute.drop_task('VIVEK_JOB');
DROP_TASK will remove the task's entries from the USER_PARALLEL_EXECUTE_CHUNKS view.
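Before dropping the task, it is worth confirming that it finished cleanly. The task status is exposed in USER_PARALLEL_EXECUTE_TASKS, and if some chunks failed (status FINISHED_WITH_ERROR), dbms_parallel_execute.resume_task can re-run just the failed chunks. A brief sketch:

## Confirm the task finished cleanly (FINISHED vs FINISHED_WITH_ERROR)
select task_name, status
  from user_parallel_execute_tasks
 where task_name='VIVEK_JOB';

## Re-run only the failed chunks, if any
exec dbms_parallel_execute.resume_task('VIVEK_JOB');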