Hi Tom,
I have a PROC that will be run infrequently. It will be used to delete rules from a table. There are 75,000 rules, and over 14,000 will be deleted in the first run of the proc, with another 1,200 potentially deleted after that.
For each of the 75,000 rules I need to check whether it is used in any of 90 tables; the checks are driven by 90 rows in a TEMP table, sorted from the most used table to the least used one.
The job takes over 7 hours to run whether I use BULK COLLECT or simple cursors. The elapsed time is about the same either way, except that the BULK COLLECT fetch of the 75,000 rules itself runs faster.
Info:
Total rules read = 73,257
Total rules inserted into the TEMP table = 15,268
Total loops = 1,792,194 -- the number of times "EXECUTE IMMEDIATE" is run to determine whether a rule is used
Total loops2 = 6,796 -- the number of times I must blindly scan a 750,000-row table when I do not get a hit above
Is there anything I can do to make it faster?
FYI: After this PROC is run once and 14,000 rules are deleted, it will run faster as I will be eliminating 1,377,000 of the 1,792,194 EXECUTE IMMEDIATE statements.
hList holds the 73,257 rules.
sList holds the 90 rows of the TEMP table, sorted; these describe the tables where a rule can live. This has to be my issue with EXECUTE IMMEDIATE, which is executed 1,792,194 times.
Each of the 90 rows in the TEMP table holds a statement similar to this:
Select count (*) from VIN_RULE where nvl ( VIN_RULE_COND, 0 ) =   <<<< the Rule_ID from hList is appended here

so, after concatenation, for example:

v_select := sList(s).SELECT_STATEMENT2 || hList(h).rule_id;
-- v_select is now: Select count (*) from VIN_RULE where nvl ( VIN_RULE_COND, 0 ) = 1234567
EXECUTE IMMEDIATE v_select INTO v_cnt;
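A side note on that concatenation: because the rule_id goes in as a literal, all 1,792,194 executions are distinct SQL statements, each needing its own parse. If the stored templates could instead end in a bind placeholder (a sketch under that assumption; :rid is a made-up placeholder name, not what is in the table today), the same 90 cursors would be reused across all 75,000 rules:

```sql
-- Hypothetical bind-variable form of the template in SELECT_STATEMENT2:
--   Select count(*) from VIN_RULE where nvl( VIN_RULE_COND, 0 ) = :rid
v_select := sList(s).SELECT_STATEMENT2;
EXECUTE IMMEDIATE v_select INTO v_cnt USING hList(h).rule_id;
```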
mList is the loop used when I do not get a hit in any of the 90 rows. It is executed 6,796 times.
Here is most of the code:
if hList.count = 0 then
  open gList;
  fetch gList bulk collect into hList;
  close gList;
end if;

if hList.count > 0 then
  FOR h IN 1..hList.count LOOP  -- 75,000 rules
    v_select := '';
    open tList;
    fetch tList bulk collect into sList;  -- re-fetches the same 90 rows on every pass
    close tList;
    -- for s in tList loop  -- simple cursor: no noticeable time difference vs BULK COLLECT
    FOR s IN 1..sList.count LOOP  -- TEMP table with 90 rows
      v_loop := v_loop + 1;
      v_select := sList(s).SELECT_STATEMENT2 || hList(h).rule_id;
      BEGIN
        EXECUTE IMMEDIATE v_select INTO v_cnt;
      EXCEPTION
        WHEN OTHERS THEN v_cnt := 0;
      END;
      if v_cnt > 0 then
        EXIT;
      end if;
    end loop;

    if nvl( v_cnt, 0 ) = 0 then
      v_bind_var := '%' || hList(h).rule_id || '%';
      v_cnt_Pseudo := 0;
      v_2nd_rule_id := null;
      for mList in nList loop  -- in pseudo-code now, looking for the Rule ID
        v_loop2 := v_loop2 + 1;
        v_Pseudo_SW := 2;
        v_cnt_Pseudo := v_cnt_Pseudo + 1;
        BEGIN
          select rule_id
            into v_2nd_rule_id
            from rule
           where rule_pseudo_code = mList.TABLE_KEY_ID;
        EXCEPTION
          WHEN OTHERS THEN v_2nd_rule_id := null;
        END;
        if v_2nd_rule_id is not null then
          v_Pseudo_code := mList.TABLE_KEY_ID;
          v_Pseudo_SW := 1;
          EXIT;
        end if;
      end loop;

      if nvl( v_cnt_Pseudo, 0 ) = 0 or v_Pseudo_SW in ( 1, 2 ) then
        BEGIN
          INSERT INTO ...  -- (statement elided)
        END;
        -- ... rest of the code omitted ...
Thanks
There are two potential issues here:
1) each SQL is *reasonably* quick, but 1.3 million of them adds up to too much, or
2) some of the SQLs are slow and blow out the performance of the entire routine.
So we need to see which of these it is. Can you try this:
SQL> exec dbms_monitor.session_trace_enable(waits=>true);
SQL> exec your_proc;
SQL> exec dbms_monitor.session_trace_disable;
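To find where that trace file was written (11g and later), the same session can ask v$diag_info:

```sql
SQL> select value from v$diag_info where name = 'Default Trace File';
```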
Even just running it for (say) 30 mins and not the whole 7 hours should gather enough data to make some decisions here.
Run tkprof on the trace file and work from there. If you get stuck, come back to us with a review and the tkprof file.
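For reference, a typical tkprof invocation (the file names here are placeholders) sorts statements by elapsed execution and fetch time so the expensive ones appear first, and excludes recursive SYS statements:

```shell
$ tkprof mydb_ora_12345.trc tkprof_report.txt sort=exeela,fchela sys=no
```

In the report, a huge execution count on a cheap statement points at issue 1; one or two statements with a large elapsed time point at issue 2.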