oracle - Is this how non-bulk-binding PL/SQL code should be translated to bulk-binding code, and is there ever a reason to forgo bulk binding? -


(This is Oracle 10g):

create or replace function bar(...)
    v_first varchar2(100);
    v_second varchar2(100);

    cursor cur is select a, b from source_table;
    v_a int;
    v_b char;
begin
    open cur;
    <<l_next>> -- 10g doesn't have a continue statement.
    loop
        fetch cur into v_a, v_b;
        exit when cur%notfound;

        -- ignore record case: ignore the record entirely
        if v_a = -1 then
            goto l_next; -- 10g doesn't have a continue statement.
        else
            v_first := 'success';
        end if;

        -- transform case:
        if v_b = 'z' then
            v_second := 'something';
        else
            v_second := 'something else';
        end if;

        insert into report_table values (v_first, v_second);
    end loop;
    close cur;
exception
    ...
end;

I'm at my first job out of college. I'm looking through legacy code that follows the general framework above (except it's several hundred lines long and does more complicated processing; no set-based solution is possible).

It's pulling lots of rows from one table into a cursor, looping through the cursor, making transformations to each record, and inserting the results into a report table. The loop does not insert every record -- if there is something wrong with a record, or if it doesn't fit for whatever reason, it skips the record without inserting it (see the goto statements).

Problem 1: the inserts occur one by one inside the loop, instead of a single forall at the end, outside of the loop.

Problem 2: the cursor does not use bulk collect.

On top of this, there is a stored procedure with a cursor that again doesn't use bulk collect, and it calls this function while looping through the records in its cursor. One commit is issued at the end of each record that's been looped over. There are no commits in the function I'm writing here.

I want to rewrite the code like this:

create or replace function bar(...)
    cursor cur is select a, b from source_table;

    type t_source is table of cur%rowtype index by pls_integer;
    type t_report is table of destination_table%rowtype index by pls_integer;
    v_sources t_source;
    v_reports t_report;
    v_report_inx int := 0; -- prevent sparse collection
begin
    open cur;
    <<l_next>> -- 10g doesn't have a continue statement.
    loop
        fetch cur bulk collect into v_sources limit 100;
        exit when v_sources.count = 0;

        for i in 1 .. v_sources.count loop
            -- ignore record case: ignore the record entirely
            if v_sources(i).a = -1 then
                goto l_next; -- 10g doesn't have a continue statement.
            else
                v_reports(v_report_inx).first := 'success';
            end if;

            -- transform case:
            if v_sources(i).b = 'z' then
                v_reports(v_report_inx).second := 'something';
            else
                v_reports(v_report_inx).second := 'something else';
            end if;

            v_report_inx := v_report_inx + 1;
        end loop;

    end loop;

    forall i in 1 .. v_reports.count
        insert into report_table (first, second)
        values (v_reports(i).first, v_reports(i).second);

    close cur;
exception
    ...
end;

The material changes are 1) using bulk collect into an associative array, and 2) doing the forall insert out of a different associative array.

I have two questions:

1) Based on the framework provided in the first snippet, are my alterations the superior way to do it? Is there a better way?

2) Is there a reason I'm not thinking of to not use bulk collect and forall? Perhaps some complicated processing I haven't yet noticed in the legacy code? The code was written in 2002 (so I assume 8i or 9i) and has not been updated since then. 9i had bulk binding. 8i also had bulk binding. Both had associative arrays. I feel there must be a reason why they hadn't been using bulk binding.

There is nothing wrong with the general idea of migrating to bulk collect. Bulk operations minimize the number of context switches and database round trips.

There is one generic thing wrong with the code. The limit clause prevents bulk operations from overusing memory, so it's the proper decision to use it with bulk collect. But v_reports grows uncontrolled. Therefore, move the bulk insert inside the loop and clear v_reports afterwards.
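In skeleton form, that reshaped loop looks like the sketch below (same placeholder table and collection names as in the question; the per-record transformation body is elided):

```sql
open cur;
loop
    fetch cur bulk collect into v_sources limit 100;

    for i in 1 .. v_sources.count loop
        null; -- transform v_sources(i) and fill v_reports here
    end loop;

    -- flush this batch immediately, so v_reports never holds
    -- more than roughly one batch worth of rows
    forall i in indices of v_reports
        insert into report_table values v_reports(i);

    v_reports.delete;

    exit when v_sources.count < 100;
end loop;
close cur;
```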

There are some inaccuracies in the altered code. Please review the code fragment below; the comments in /* */ style are mine.

create or replace function bar(...)
    cursor cur is select a, b from source_table;

    type t_source is table of cur%rowtype index by pls_integer;
    type t_report is table of destination_table%rowtype index by pls_integer;
    v_sources t_source;
    v_reports t_report;

    /* 1. the correct type is the same as the type of the index
       2. there is nothing wrong with sparse collections, but then a
          separate counter, incremented continuously, is needed for t_report.
    */
    v_report_inx pls_integer := 0; -- prevent sparse collection

begin
    open cur;

    loop
        fetch cur bulk collect into v_sources limit 100;

        /* on the last step v_sources.count < 100, not 0.
           if there are no elements then no processing is done anyway,
           so check at the end of the loop instead.
        exit when v_sources.count = 0;
        */

        /* the correct way is to loop from 1 to count
           (.last and .first are not usable because both are null for an empty array)
        */
        for i in 1 .. v_sources.count loop

            v_report_inx := v_report_inx + 1;

            -- ignore record case: ignore the record entirely
            if v_sources(i).a = -1 then
                goto l_next; -- 10g doesn't have a continue statement.
            end if;

            /* no need for else here, execution just continues */
            v_reports(v_report_inx).first := 'success';

            -- transform case:
            if v_sources(i).b = 'z' then
                v_reports(v_report_inx).second := 'something';
            else
                v_reports(v_report_inx).second := 'something else';
            end if;

            <<l_next>>
            null;
        end loop;

        /* use the "indices of" construct to deal with sparse collections */
        forall i in indices of v_reports
            /* t_report is declared as %rowtype,
               so insert the entire row, which works faster */
            insert into report_table values v_reports(i);

        /* cleanup after insert */
        v_reports.delete;

        /* if the number of selected records is less than the limit,
           the last row was reached. */
        exit when v_sources.count < 100;

    end loop;

    close cur;
exception
    ...
end;
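As a standalone illustration of why "indices of" matters here: with a sparse associative array, a plain 1 .. count range would reference indexes that do not exist. A minimal sketch (the table demo_t is hypothetical, not part of the question's schema):

```sql
declare
  type t_tab is table of number index by pls_integer;
  v t_tab;
begin
  -- deliberately sparse: only indexes 1 and 5 are populated
  v(1) := 10;
  v(5) := 50;

  -- "forall i in 1 .. v.count" would fail here: v.count is 2,
  -- but v(2) does not exist. "indices of" visits only the
  -- indexes that actually hold elements.
  forall i in indices of v
    insert into demo_t(n) values (v(i));
end;
/
```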

update

Thanks to @jonearles, who encouraged me to test the performance of different approaches to handling cursors in PL/SQL.

Below are the results of a test over 3,000,000 records. They show that it's the migration from a plain explicit cursor to the bulk collect approach that gives the real performance gain.
At the same time, the explicit cursor with bulk collect and a well-chosen limit does outperform the implicit cursor, but the difference between them lies within acceptable bounds.

variant name           | time (sec)
-------------------------------------
bulk_cursor_limit_500  |  1.26
bulk_cursor_limit_100  |  1.52
bulk_unlimited         |  1.75
implicit_cursor        |  1.83
plain_cursor           | 27.20

Below is the code for the test (a limited SQLFiddle example here):

Schema setup

drop table t
/
drop table log_run
/
create table t(a number, b number)
/
insert into t select level, level from dual connect by level <= 3000000
/
create table log_run(id varchar2(30), seconds number)
/
delete from log_run
/

Single test run

declare
  cursor test_cur is
    select a, b from t;

  test_rec test_cur%rowtype;
  counter  number := 0;

  vstart timestamp;
  vend timestamp;
  vtimeformat varchar2(30) := 'sssss.ff9';
begin

  vstart := systimestamp;

  open test_cur;
  loop
    fetch test_cur into test_rec;
    exit when test_cur%notfound;
    counter := counter + 1;
  end loop;
  close test_cur;

  vend := systimestamp;
  insert into log_run(id, seconds)
    values('plain_cursor',
            to_number(to_char(vend, vtimeformat))
            -
            to_number(to_char(vstart, vtimeformat))
          );

end;
/

--implicit cursor
declare
  test_rec t%rowtype;
  counter  number := 0;

  vstart timestamp;
  vend timestamp;
  vtimeformat varchar2(30) := 'sssss.ff9';
begin

  vstart := systimestamp;

  for c_test_rec in (select a, b from t) loop
    test_rec.a := c_test_rec.a;
    test_rec.b := c_test_rec.b;
    counter := counter + 1;
  end loop;

  vend := systimestamp;
  insert into log_run(id, seconds)
    values('implicit_cursor',
            to_number(to_char(vend, vtimeformat))
            -
            to_number(to_char(vstart, vtimeformat))
          );

end;
/

declare
  cursor test_cur is
    select a, b from t;

  type t_test_table is table of t%rowtype;

  test_tab t_test_table;
  counter  number := 0;

  vstart timestamp;
  vend timestamp;
  vtimeformat varchar2(30) := 'sssss.ff9';
begin

  vstart := systimestamp;

  open test_cur;
  loop
    fetch test_cur bulk collect into test_tab limit 100;
    for i in 1 .. test_tab.count loop
      counter := counter + 1;
    end loop;

    exit when test_tab.count < 100;
  end loop;

  close test_cur;

  vend := systimestamp;
  insert into log_run(id, seconds)
    values('bulk_cursor_limit_100',
            to_number(to_char(vend, vtimeformat))
            -
            to_number(to_char(vstart, vtimeformat))
          );

end;
/

declare
  cursor test_cur is
    select a, b from t;

  type t_test_table is table of t%rowtype;

  test_tab t_test_table;
  counter  number := 0;

  vstart timestamp;
  vend timestamp;
  vtimeformat varchar2(30) := 'sssss.ff9';
begin

  vstart := systimestamp;

  open test_cur;
  loop
    fetch test_cur bulk collect into test_tab limit 500;
    for i in 1 .. test_tab.count loop
      counter := counter + 1;
    end loop;

    exit when test_tab.count < 500;
  end loop;

  close test_cur;

  vend := systimestamp;
  insert into log_run(id, seconds)
    values('bulk_cursor_limit_500',
            to_number(to_char(vend, vtimeformat))
            -
            to_number(to_char(vstart, vtimeformat))
          );

end;
/

declare
  type t_test_table is table of t%rowtype;

  test_tab t_test_table;
  counter  number := 0;

  vstart timestamp;
  vend timestamp;
  vtimeformat varchar2(30) := 'sssss.ff9';
begin

  vstart := systimestamp;

  select * bulk collect into test_tab from t;

  for i in 1 .. test_tab.count loop
    counter := counter + 1;
  end loop;

  vend := systimestamp;
  insert into log_run(id, seconds)
    values('bulk_unlimited',
            to_number(to_char(vend, vtimeformat))
            -
            to_number(to_char(vstart, vtimeformat))
          );

end;
/

Select average results

select *
  from (select lr.id, trunc(avg(seconds), 2) seconds
          from log_run lr
         group by lr.id)
 order by seconds
