Long Running PROC takes over 7 hours


Question and Answer

Connor McDonald

Thanks for the question, Brian.

Asked: February 10, 2020 - 2:25 pm UTC

Answered by: Connor McDonald - Last updated: February 17, 2020 - 1:41 am UTC

Category: SQL Developer - Version: Version 4.1.3.20

Viewed 100+ times

You Asked

Hi Tom,
I have a PROC that will be run infrequently. It is used to delete rules from a table. There are 75,000 rules; over 14,000 will be deleted in the first run, with another 1,200 potentially deleted later.
For each of the 75,000 rules I need to check whether it is referenced by any of the 90 rows in a TEMP table, sorted from the most-used table to the least used.
The job takes over 7 hours to run using Bulk Collects and/or simple cursors. The time is about the same either way, except that the Bulk Collect of the 75,000 rules runs faster.

Info:
Total Rules read = 73,257
Total Rules inserted into TEMP table = 15,268
Total Loops = 1,792,194 - this is how many times I execute "EXECUTE IMMEDIATE" to determine if the rule is used.
Total Loops2 = 6,796 - this is how many times I have to look blindly at a table of 750,000 rows when I do not get a hit above.

Is there anything I can do to make it faster?
FYI: After this PROC is run once and the 14,000 rules are deleted, it will run faster, as roughly 1,377,000 of the 1,792,194 EXECUTE IMMEDIATE executions will be eliminated.

hList is the 73,257 rules.
sList is the 90 rows in the TEMP table, sorted by the tables the rules can live on. This has to be my issue with EXECUTE IMMEDIATE, which is executed 1,792,194 times.
Each of the 90 rows in the TEMP table holds a statement fragment similar to this (the Rule_ID from hList is appended where the value goes):

    Select count (*) from VIN_RULE where nvl ( VIN_RULE_COND, 0 ) =

so that after concatenation:

    v_select := sList(s).SELECT_STATEMENT2 || hList(h).rule_id;
    -- e.g. Select count (*) from VIN_RULE where nvl ( VIN_RULE_COND, 0 ) = 1234567
    EXECUTE IMMEDIATE v_select INTO v_cnt;

mList/nList is used when I do not get a hit in the 90 rows of sList. This is executed 6,796 times.

Here is most of the code:

    if hList.count = 0 then
        open gList;
        fetch gList bulk collect into hList;
        close gList;
        if hList.count > 0 then
          FOR h IN 1..hList.count LOOP    -- 75,000 rules
              v_select := '';
              open tList;
              fetch tList bulk collect into sList;
              close tList;
              -- for sList in tList loop  -- simple cursor: no noticeable time difference vs Bulk Collect
              FOR s IN 1..sList.count LOOP    -- TEMP table with 90 rows
                  v_loop   := v_loop + 1;
                  v_select := sList(s).SELECT_STATEMENT2 || hList(h).rule_id;
                  BEGIN
                     EXECUTE IMMEDIATE v_select INTO v_cnt;
                  EXCEPTION
                     WHEN OTHERS THEN v_cnt := 0;
                  END;
                  if v_cnt > 0 then
                      EXIT;
                  end if;
              end loop;

              if nvl( v_cnt, 0 ) = 0 then
                  v_bind_var    := concat( concat( '%', hList(h).rule_id ), '%' );
                  v_cnt_Pseudo  := 0;
                  v_2nd_rule_id := null;
                  for mList in nList loop   -- in pseudo code, now looking for the Rule ID
                      v_loop2      := v_loop2 + 1;
                      v_Pseudo_SW  := 2;
                      v_cnt_Pseudo := v_cnt_Pseudo + 1;
                      BEGIN
                         select rule_id
                         into   v_2nd_rule_id
                         from   rule
                         where  rule_pseudo_code = mList.TABLE_KEY_ID;
                      EXCEPTION
                         WHEN OTHERS THEN v_2nd_rule_id := null;
                      END;

                      if v_2nd_rule_id is not null then
                          v_Pseudo_code := mList.TABLE_KEY_ID;
                          v_Pseudo_SW   := 1;
                          EXIT;
                      end if;
                  end loop;

                  if nvl( v_cnt_Pseudo, 0 ) = 0 or v_Pseudo_SW in ( 1, 2 ) then
                      BEGIN
                         INSERT INTO TABLE
                      END;
                  -- ... (remainder of the procedure elided)
Thanks

and we said...

There are two potential issues here:

1) each SQL is *reasonably* quick, but 1.3 million of them adds up to too much, or
2) some of the SQLs are slow and blow out the performance of the entire routine

So we need to see which of these it is. Can you try this:

SQL> exec dbms_monitor.session_trace_enable(waits=>true);
SQL> exec your_proc;
SQL> exec dbms_monitor.session_trace_disable;


Even just running it for (say) 30 minutes rather than the whole 7 hours should gather enough data to make some decisions here.

Run tkprof on the trace file and work from there. If you get stuck, come back to us with a review and the tkprof file.
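Before tkprof can be run, the trace file has to be located. One way to do that (assuming 11g or later, where v$diag_info is available) is to ask the tracing session itself:

```sql
-- Find the trace file the current session is writing to;
-- run this in the same session that executed the procedure.
select value
from   v$diag_info
where  name = 'Default Trace File';
```

Then, from the OS prompt, something like `tkprof <tracefile> report.txt sort=exeela,fchela` puts the statements with the largest elapsed execute/fetch times at the top of the report.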

and you rated our response

  (2 ratings)

Reviews

February 11, 2020 - 12:51 pm UTC

Reviewer: A reader

If those rules are basically SQL statements, I'd suggest you create a script which generates a package based on those rules/statements. That way you won't have any dynamic SQL, and performance will be much better. I've done something similar.
Connor McDonald

Followup  

February 17, 2020 - 1:41 am UTC

+1
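The generated package idea could take roughly the following shape. This is only a hypothetical sketch: the table and column names are taken from the question, and the generator script itself is not shown. The point is that each rule-location check becomes a static query, parsed once at compile time instead of 1.7 million times at run time:

```sql
create or replace function rule_is_used( p_rule_id number ) return boolean is
  l_cnt pls_integer;
begin
  -- one static, bindable query per table the rules can live on,
  -- emitted by a generator script that reads the 90 TEMP-table rows
  select count(*)
  into   l_cnt
  from   vin_rule
  where  nvl( vin_rule_cond, 0 ) = p_rule_id
  and    rownum = 1;                 -- stop at the first match
  if l_cnt > 0 then return true; end if;

  -- ... 89 more generated checks, one per row of the TEMP table ...

  return false;
end rule_is_used;
/
```

The outer loop over hList then calls `rule_is_used(hList(h).rule_id)` with no EXECUTE IMMEDIATE at all.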

Suggestions

February 11, 2020 - 1:02 pm UTC

Reviewer: Tubby from Calgary AB

Seeing a proper trace as suggested would be very helpful, but absent that, there are two things I would recommend:

1) Use bind variables in your EXECUTE IMMEDIATE; with that many executions, it is likely you're spending a lot of time parsing.

2) Add a WHERE rownum = 1 (or some similar mechanism) to your dynamic SQLs. Your code only cares whether the count is zero or non-zero, so why examine more rows than you have to?

Cheers,
Connor McDonald

Followup  

February 17, 2020 - 1:41 am UTC

+1
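Taken together, the two suggestions amount to storing a bind placeholder in SELECT_STATEMENT2 instead of concatenating the rule id into the text. A minimal sketch, assuming the stored statement text is changed accordingly (the literal 1234567 stands in for hList(h).rule_id):

```sql
declare
  -- assumed rewrite of one stored statement: a :rule_id bind
  -- placeholder plus "rownum = 1" to stop at the first match
  v_select varchar2(200) :=
    'select count(*) from vin_rule ' ||
    'where nvl( vin_rule_cond, 0 ) = :rule_id and rownum = 1';
  v_cnt pls_integer;
begin
  -- USING supplies the bind value, so all 75,000 rule ids share one
  -- parsed cursor instead of hard-parsing a new literal each time
  execute immediate v_select into v_cnt using 1234567;
end;
/
```

With the bind variable, the 1,792,194 executions reduce to at most 90 distinct statements in the shared pool.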
