  • Problem with BULK COLLECT with million rows


Tom Kyte

Thanks for the question, Haranadh .

Asked: October 09, 2002 - 8:36 pm UTC

Last updated: December 17, 2012 - 3:47 pm UTC

Version: 9.0.1.4

Viewed 100K+ times.

You Asked

Hi,

We have a requirement where we are supposed to load 58 million rows into a FACT table in our DATA WAREHOUSE. We initially planned to use Oracle Warehouse Builder but, due to performance reasons, decided to write custom code. We wrote a custom procedure which opens a simple cursor, reads all 58 million rows from the SOURCE table and, in a loop, processes the rows and inserts the records into a TARGET table. The logic works fine but it took 20 hrs to complete the load.

We then tried to leverage the BULK COLLECT and FORALL and PARALLEL options and modified our PL/SQL code completely to reflect these. Our code looks very simple.
1. We declared PL/SQL BINARY_INDEXed Tables to store the data in memory.
2. We used BULK COLLECT to FETCH the data.
3. We used FORALL statement while inserting the data.
We did not introduce any of our transformation logic yet.

We tried with 600,000 records first and it completed in 1 min and 29 sec with no problems. We then doubled the no. of rows to 1.2 million and the program crashed with the following error:

*
ERROR at line 1:
ORA-04030: out of process memory when trying to allocate 16408 bytes (koh-kghu
call ,pmucalm coll)
ORA-06512: at "VVA.BULKLOAD", line 66
ORA-06512: at line 1

We got the same error even with 1 million rows.

We do have the following configuration:
SGA - 8.2 GB
PGA
- Aggregate Target - 3GB
- Current Allocated - 439444KB (439 MB)
- Maximum allocated - 2695753 KB (2.6 GB)
Temp Table Space - 60.9 GB (Total)
- 20 GB (Available approximately)

I think we do have more than enough memory to process the 1 million rows!!

Also, sometimes the same program results in the following error:
SQL> exec bulkload
BEGIN bulkload; END;

*
ERROR at line 1:
ORA-03113: end-of-file on communication channel

We did not even attempt the full load. Also, we are not using the PARALLEL option yet.

Are we hitting any bug here? Or is PL/SQL not capable of mass loads? I would appreciate any thoughts on this.

Thanks,
Haranadh

Following is the code:
--------------------------------------------------------------------------------
set echo off
set timing on

create or replace procedure bulkload as

-- SOURCE --
TYPE src_cpd_dt IS TABLE OF ima_ama_acct.cpd_dt%TYPE;
TYPE src_acqr_ctry_cd IS TABLE OF ima_ama_acct.acqr_ctry_cd%TYPE;
TYPE src_acqr_pcr_ctry_cd IS TABLE OF ima_ama_acct.acqr_pcr_ctry_cd%TYPE;
TYPE src_issr_bin IS TABLE OF ima_ama_acct.issr_bin%TYPE;
TYPE src_mrch_locn_ref_id IS TABLE OF ima_ama_acct.mrch_locn_ref_id%TYPE;
TYPE src_ntwrk_id IS TABLE OF ima_ama_acct.ntwrk_id%TYPE;
TYPE src_stip_advc_cd IS TABLE OF ima_ama_acct.stip_advc_cd%TYPE;
TYPE src_authn_resp_cd IS TABLE OF ima_ama_acct.authn_resp_cd%TYPE;
TYPE src_authn_actvy_cd IS TABLE OF ima_ama_acct.authn_actvy_cd%TYPE;
TYPE src_resp_tm_id IS TABLE OF ima_ama_acct.resp_tm_id%TYPE;
TYPE src_mrch_ref_id IS TABLE OF ima_ama_acct.mrch_ref_id%TYPE;
TYPE src_issr_pcr IS TABLE OF ima_ama_acct.issr_pcr%TYPE;
TYPE src_issr_ctry_cd IS TABLE OF ima_ama_acct.issr_ctry_cd%TYPE;
TYPE src_acct_num IS TABLE OF ima_ama_acct.acct_num%TYPE;
TYPE src_tran_cnt IS TABLE OF ima_ama_acct.tran_cnt%TYPE;
TYPE src_usd_tran_amt IS TABLE OF ima_ama_acct.usd_tran_amt%TYPE;

src_cpd_dt_array src_cpd_dt;
src_acqr_ctry_cd_array src_acqr_ctry_cd;
src_acqr_pcr_ctry_cd_array src_acqr_pcr_ctry_cd;
src_issr_bin_array src_issr_bin;
src_mrch_locn_ref_id_array src_mrch_locn_ref_id;
src_ntwrk_id_array src_ntwrk_id;
src_stip_advc_cd_array src_stip_advc_cd;
src_authn_resp_cd_array src_authn_resp_cd;
src_authn_actvy_cd_array src_authn_actvy_cd;
src_resp_tm_id_array src_resp_tm_id;
src_mrch_ref_id_array src_mrch_ref_id;
src_issr_pcr_array src_issr_pcr;
src_issr_ctry_cd_array src_issr_ctry_cd;
src_acct_num_array src_acct_num;
src_tran_cnt_array src_tran_cnt;
src_usd_tran_amt_array src_usd_tran_amt;


j number := 1;

CURSOR c1 IS
SELECT
cpd_dt,
acqr_ctry_cd ,
acqr_pcr_ctry_cd,
issr_bin,
mrch_locn_ref_id,
ntwrk_id,
stip_advc_cd,
authn_resp_cd,
authn_actvy_cd,
resp_tm_id,
mrch_ref_id,
issr_pcr,
issr_ctry_cd,
acct_num,
tran_cnt,
usd_tran_amt
FROM ima_ama_acct ima_ama_acct
ORDER BY issr_bin;

BEGIN

OPEN c1;

FETCH c1 bulk collect into
src_cpd_dt_array ,
src_acqr_ctry_cd_array ,
src_acqr_pcr_ctry_cd_array,
src_issr_bin_array ,
src_mrch_locn_ref_id_array,
src_ntwrk_id_array ,
src_stip_advc_cd_array ,
src_authn_resp_cd_array ,
src_authn_actvy_cd_array ,
src_resp_tm_id_array ,
src_mrch_ref_id_array ,
src_issr_pcr_array ,
src_issr_ctry_cd_array ,
src_acct_num_array ,
src_tran_cnt_array ,
src_usd_tran_amt_array ;

CLOSE C1;


FORALL j in 1 .. src_cpd_dt_array.count
INSERT INTO ima_dly_acct (
CPD_DT,
ACQR_CTRY_CD,
ACQR_TIER_CD,
ACQR_PCR_CTRY_CD,
ACQR_PCR_TIER_CD,
ISSR_BIN,
OWNR_BUS_ID,
USER_BUS_ID,
MRCH_LOCN_REF_ID,
NTWRK_ID,
STIP_ADVC_CD,
AUTHN_RESP_CD,
AUTHN_ACTVY_CD,
RESP_TM_ID,
PROD_REF_ID,
MRCH_REF_ID,
ISSR_PCR,
ISSR_CTRY_CD,
ACCT_NUM,
TRAN_CNT,
USD_TRAN_AMT)

VALUES (
src_cpd_dt_array(j),
src_acqr_ctry_cd_array(j),
null,
src_acqr_pcr_ctry_cd_array(j),
null,
src_issr_bin_array(j),
null,
null,
src_mrch_locn_ref_id_array(j),
src_ntwrk_id_array(j),
src_stip_advc_cd_array(j),
src_authn_resp_cd_array(j),
src_authn_actvy_cd_array(j),
src_resp_tm_id_array(j),
null,
src_mrch_ref_id_array(j),
src_issr_pcr_array(j),
src_issr_ctry_cd_array(j),
src_acct_num_array(j),
src_tran_cnt_array(j),
src_usd_tran_amt_array(j));
COMMIT;
END bulkload;
/
SHOW ERRORS
--------------------------------------------

and Tom said...

good gosh -- you aren't serious are you???


Use the LIMIT clause, bulk collect say 100 to 1000 rows -- process them, bulk insert them, get the next 100/1000 rows.


You blew process memory -- not SGA. Your process got bigger than your OS would allow (you hit an OS limit, might be ulimit related or whatever).

do something like this:


open cursor;
loop
    fetch c bulk collect into l_c1, l_c2, ....... LIMIT 1000;
    for i in 1 .. l_c1.count
    loop
        process....
    end loop;
    forall i in 1 .. l_c1.count
        insert into ..... values ( L_c1(i), .... );
    exit when c%notfound;
end loop;

close cursor;

1000 would be extreme, 100 is reasonable.
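For illustration, here is a minimal sketch of how the posted bulkload procedure might be restructured along those lines -- only three of the sixteen columns are shown to keep it short; the remaining arrays would follow the same pattern, and any transformation logic goes in the marked spot:

create or replace procedure bulkload as
    -- one collection per selected column, as in the original code
    type dt_array  is table of ima_ama_acct.cpd_dt%type       index by binary_integer;
    type bin_array is table of ima_ama_acct.issr_bin%type     index by binary_integer;
    type amt_array is table of ima_ama_acct.usd_tran_amt%type index by binary_integer;

    l_cpd_dt       dt_array;
    l_issr_bin     bin_array;
    l_usd_tran_amt amt_array;

    cursor c is
        select cpd_dt, issr_bin, usd_tran_amt
          from ima_ama_acct
         order by issr_bin;
begin
    open c;
    loop
        -- fetch at most 100 rows per round, never all 58 million at once
        fetch c bulk collect into l_cpd_dt, l_issr_bin, l_usd_tran_amt limit 100;

        /* row-by-row transformation logic over the arrays would go here */

        forall i in 1 .. l_cpd_dt.count
            insert into ima_dly_acct ( cpd_dt, issr_bin, usd_tran_amt )
            values ( l_cpd_dt(i), l_issr_bin(i), l_usd_tran_amt(i) );

        exit when c%notfound;
    end loop;
    close c;
    commit;
end bulkload;
/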




Comments

Am I missing something obvious ?

Connor McDonald, October 10, 2002 - 9:14 am UTC

Isn't this just

insert into x
select * from y

with some nologging, parallel, append etc etc thrown in if required?

Tom Kyte
October 11, 2002 - 7:15 pm UTC

they have that unknown "process" in the middle.

If the process can be done in SQL, yes, this should be a single INSERT INTO select * FROM ...
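For completeness, when the transformation really is expressible in SQL, the whole load collapses to one statement -- a sketch only, with made-up source/target names and a parallel degree of 4 picked arbitrarily:

alter session enable parallel dml;

insert /*+ append parallel(t, 4) */ into target_fact t
select /*+ parallel(s, 4) */ s.*
  from source_table s;

commit;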



LIMIT clause : Does it do things right

Shivaji, October 10, 2002 - 9:59 am UTC

LIMIT clause will do one fetch less if the last fetch has less than [limit] rows -- in the above eg., 1000.
not sure if this is version specific.

The correct way would be to
exit when c%rowcount=0;
This was advised by a user on the metalink forums. I am not a programmer. Keen to know your view on it.
FYI, on my site guys use the rownum clause to limit rows when using bulk-binds.


Tom Kyte
October 11, 2002 - 7:28 pm UTC

I sure hope no one follows that advice, else they have just programmed that rascally "infinite loop"

Consider:

ops$tkyte@ORA920.US.ORACLE.COM> declare
  2      type array is table of number index by binary_integer;
  3      l_data array;
  4      cursor c is select empno from emp;
  5  begin
  6      open c;
  7      loop
  8          fetch c bulk collect into l_data limit 5;
  9          /* process data here */
 10
 11          dbms_output.put_line( 'Looping, c%rowcount = ' || c%rowcount );
 12          exit when c%notfound;
 13      end loop;
 14      dbms_output.put_line( 'out of loop, c%rowcount = ' || c%rowcount );
 15      close c;
 16  end;
 17  /
Looping, c%rowcount = 5
Looping, c%rowcount = 10
Looping, c%rowcount = 14
out of loop, c%rowcount = 14

PL/SQL procedure successfully completed.


See, c%rowcount never "goes to zero", it goes up and up and stays at the total number of rows fetched.  Exiting when it equals zero would work ONLY on an empty table!!  all other tables - infinite loop.


Use the c%notfound at the bottom like I do and this won't happen.


Also, I utterly fail to see how you could use rownum to limit rows with bulk binds, it is not possible.


Limit on Bulk Collect !

Rama, October 10, 2002 - 11:16 am UTC

Is 1000 the upper limit? Or is it a lot faster to work with 100, bulk insert, and then grab the next set?

Tom Kyte
October 11, 2002 - 7:34 pm UTC

It is a suggestion (from me).

You want to "stream" data -- get some, process some, write some, get some, process some, write some.

You don't want to GET ALL, process all, WRITE ALL and flood one thing or the other -- do a bit, process a bit, write a bit, start over.

Haranadh, October 10, 2002 - 12:42 pm UTC

Thanks Tom. This is very useful. I will change the code to process 100 at a time.

Thanks,
Haranadh

Larry Liang, October 10, 2002 - 5:43 pm UTC

Excellent explanation. I am rewriting a process to use bulk collect instead of inserting into a temporary table. This is great info.


Sunil Nettyam, November 20, 2002 - 5:52 pm UTC

Hi Tom,

The info is excellent, very useful for us.
Just wanted to know: why, for LIMIT, is
"1000 would be extreme, 100 is reasonable"?

Why can't I say LIMIT 10000?

Thanks
Sunil



Tom Kyte
November 21, 2002 - 12:56 pm UTC

You can say it, you can say whatever you like.

To me -- 1000 rows in an array fetch, based on experience -- past performance -- memory usage -- is 10 times larger than I would recommend.

Perhaps this is what Shivaji refers to

Basil, December 08, 2002 - 8:36 pm UTC

A poster above, Shivaji, indicates that he uses rownum to limit the size of the bulk bind. The Oracle docs on this *sort of* imply that rownum is the way to do it:

http://otn.oracle.com/docs/products/oracle9i/doc_library/release2/appdev.920/a96624/05_colls.htm#28329

(Search the page for "ROWNUM to limit the number of rows".)

Tom Kyte
December 09, 2002 - 7:19 am UTC

rownum would limit the TOTAL NUMBER OF ROWS IN THE ENTIRE RESULT SET.

LIMIT is used with bulk collect in order to fetch N rows at a time from a result set.

So, rownum changes the answer -- limits total number of rows.

LIMIT on a bulk collect lets you get the entire answer, N rows at a time.
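To make the distinction concrete, a small sketch against ALL_OBJECTS:

declare
    type num_array is table of number index by binary_integer;
    l_data num_array;
    cursor c is select object_id from all_objects;
begin
    -- ROWNUM changes the answer: only the first 500 rows of the result set, ever
    select object_id bulk collect into l_data
      from all_objects
     where rownum <= 500;

    -- LIMIT pages through the entire result set, 500 rows per fetch
    open c;
    loop
        fetch c bulk collect into l_data limit 500;
        /* process l_data here */
        exit when c%notfound;
    end loop;
    close c;
end;
/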

Robert, December 09, 2002 - 12:10 pm UTC

>>Also, I utterly fail to see how you could use rownum to >>limit rows with bulk binds, it is not possible.

I think he meant "...using ROWNUM to limit rows returned by the SQL" like:
WHERE ....
AND ROWNUM < LEAST (maxrows, 1000);
'Cause he did say "...on my site....."
He also did say he's no programmer, so let's give'm a break...


BULK Collect with dbms_sql

Edgar, December 21, 2002 - 8:29 am UTC

Can we use bulk collect when we are performing select using dbms_sql package on Oracle9iR2? Thank you.

Tom Kyte
December 21, 2002 - 9:10 am UTC

No, but you can use array fetching. Read the dbms_sql docs in the supplied packages guide for details.

bulk collect = native dynamic sql syntax.

array binds and array fetches = dbms_sql
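A rough sketch of a dbms_sql array fetch (note that column_value appends each fetched batch to the collection, so here the whole result is simply accumulated and counted at the end):

declare
    l_cur   integer := dbms_sql.open_cursor;
    l_ids   dbms_sql.number_table;
    l_dummy integer;
    l_rows  integer;
begin
    dbms_sql.parse( l_cur, 'select object_id from all_objects', dbms_sql.native );
    dbms_sql.define_array( l_cur, 1, l_ids, 100, 1 );   -- fetch 100 rows per call
    l_dummy := dbms_sql.execute( l_cur );
    loop
        l_rows := dbms_sql.fetch_rows( l_cur );
        dbms_sql.column_value( l_cur, 1, l_ids );       -- appends this batch to l_ids
        exit when l_rows < 100;
    end loop;
    dbms_sql.close_cursor( l_cur );
    dbms_output.put_line( l_ids.count || ' rows fetched' );
end;
/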

Periodically Commiting

John Durst, June 12, 2003 - 1:42 pm UTC

I am doing something very similar to this, but would like to commit changes periodically during the bulk fetch. I have commit logic in the loop, but Oracle appears to be ignoring the commits. Is this because (unlike single row cursors) bulk fetch cursors do not allow fetches across commits?

Tom Kyte
June 12, 2003 - 3:47 pm UTC

why do you say that? do you have a test case.

we don't "ignore" things like that.

Never Mind!

John Durst, June 12, 2003 - 1:50 pm UTC

Just a case of fast extraction of existing code and integration with another person's code followed by insufficient review and testing. (In other words, I screwed up by not including the section that included the initialization of my commit counter.) What do you get when you add one to null???

Tom Kyte
June 12, 2003 - 3:50 pm UTC

:)

answer: something different than when you concatenate a non-null string to a null.
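(For the curious, a quick check from SQL*Plus shows the difference:)

select 1 + null    as add_result,    -- NULL: arithmetic with a null yields null
       'x' || null as concat_result  -- 'x' : concatenation treats null as ''
  from dual;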

Planning Something Similar

Brian, June 12, 2003 - 5:35 pm UTC

Currently, I have SQL code that I run when some data warehouse tables are finished updating at the end of each month.

I would like to move my coding into a trigger that writes my results to a table when the data warehouse tables are updated at the end of the month, but to do so I would have to re-write all my code in PL/SQL (I think to use triggers), which I do not want to do because I'm joining multiple tables, and I am using inline views.

Is the answer you gave initially the only way to write results to a new table in PL/SQL when processing in excess of a million rows?

Thanks,

Brian


Tom Kyte
June 12, 2003 - 8:01 pm UTC

don't understand what you are saying/asking at all?

but -- if you have something written in SQL, that would be the most efficient way to do it. A single sql statement is much more efficient than lots of procedural -- row by row -- code. Stick with it.

Long Day Yesterday

Brian, June 13, 2003 - 12:39 pm UTC

Is there any way to automate SQL code to run when tables are updated, created, etc. without using triggers that require PL/SQL?

Brian

Tom Kyte
June 13, 2003 - 1:04 pm UTC

PLSQL runs SQL just fine. I still don't understand why you think that by using a trigger, you would not be able to use SQL?


We Don't "Ignore" Things Like That

John Durst, June 13, 2003 - 4:34 pm UTC

Have you ever put commits in a loop for a cursor that does a SELECT FOR UPDATE?

Tom Kyte
June 14, 2003 - 8:15 am UTC

I don't understand the context of this comment?

SQL in PL/SQL

Brian, June 16, 2003 - 4:01 pm UTC

Is there a way to copy and paste SQL code as is in between BEGIN and END in PL/SQL without having to re-write the SQL code in PL/SQL?

I ask because my query looks like:

select x, y, z, sum(a), sum(b), sum(c)

from (

select x, to_number(substr(y,1,5)), z, sum(a) a, sum(b) b, sum(c) c
where x = 200305
from cl.x = vl.x

group by x, y, z
)

where y = 20336

group by x, y, z
/

Thanks


Tom Kyte
June 17, 2003 - 6:53 am UTC

tell me, what would you have to "rewrite"? PL/SQL is a programming language that lets you embed sql in it easily.

(i hope your query doesn't really look like that -- forget for a minute that it isn't valid...)

Sorry About the Typos

Brian, June 17, 2003 - 11:09 am UTC

I have a few typos in the above query. I attempted from memory to duplicate the general look of the query I had written, which is a lot longer, pulls off of 4 different tables, and has multiple joins.

Nevertheless, before I return back to this thread, let me make an attempt to convert it over to PL/SQL.

I'm just having a hard time understanding how to integrate SQL into PL/SQL.

Thanks.

Tom Kyte
June 17, 2003 - 12:52 pm UTC

It is trivial -- maybe you are making it too hard? it is just


for x in ( select .... )
loop
    process.....
end loop;
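Or, if no row-by-row processing is needed at all, the statement can simply be dropped between BEGIN and END unchanged -- a sketch with made-up table names along the lines of the query above:

begin
    insert into monthly_summary ( x, y, z, a_sum, b_sum, c_sum )
    select x, y, z, sum(a), sum(b), sum(c)
      from detail_table
     where x = 200305
     group by x, y, z;
    commit;
end;
/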



Thanks

Brian, June 19, 2003 - 1:49 pm UTC

I got hung up on the coding aspect of PL/SQL.

The CURSOR FOR loop worked great for my dbms_jobs. Though, per your book, I need to use ref cursors for analytic functions unless I have them in a view.

Realizing that the CURSOR FOR loop allowed me to do nightly inserts into tables with query results, when would it be best to use the BULK COLLECT features as the original question was using?

Thanks


Tom Kyte
June 20, 2003 - 3:28 pm UTC

if you are processing more than about 100 rows, BULK collect makes a great deal of sense.

One More Doubt on Bulk Insert "FORALL" clause

H.S.Anand, July 22, 2003 - 12:19 am UTC

Hi Tom,
Thank you for the explanation. I am facing a problem while using Bulk Collect and the FORALL clause!
The error message that I get is: "Implementation Restriction".
What I do is create a record type and then create a PL/SQL "index by binary_integer" table based on that record type. I populate this table using a normal loop. Now when I use forall ... for the insert, I get the Implementation Restriction error. Can you please clarify?
The sample code is as follows:
DECLARE
TYPE rec_emp1 IS RECORD (
empno emp.empno%type,
ename emp.ename%type,
job emp.job%type
);
TYPE tb_emp1 IS TABLE OF rec_emp1 INDEX BY BINARY_INTEGER;
tab_emp1 tb_emp1;
x_ctr binary_integer :=1;
BEGIN
FOR i in (select empno,ename,job from emp) loop
tab_emp1(x_ctr).empno := i.empno;
tab_emp1(x_ctr).ename := i.ename;
tab_emp1(x_ctr).job := i.job;
x_ctr := x_ctr+1;
END LOOP;
FORALL J IN tab_emp1.first..tab_emp1.last
insert into emp2 (empno,ename,job) values (tab_emp1(j).empno,tab_emp1(j).ename,tab_emp1(j).job);

commit;
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line('Errors OCcurred:'||sqlerrm);
END;
/

---------
What is wrong with the above code?

Thanks,


Tom Kyte
July 22, 2003 - 8:13 am UTC

in earlier, pre-9i releases, you could use a RECORD of ARRAYS

you could NOT use an ARRAY of RECORDS.
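A sketch of the pre-9i-friendly shape -- one collection per column (a "record of arrays") rather than a collection of records -- using the same emp/emp2 tables as the example above:

declare
    type num_array is table of emp.empno%type index by binary_integer;
    type nam_array is table of emp.ename%type index by binary_integer;
    type job_array is table of emp.job%type   index by binary_integer;
    l_empnos num_array;
    l_enames nam_array;
    l_jobs   job_array;
begin
    select empno, ename, job
      bulk collect into l_empnos, l_enames, l_jobs
      from emp;

    forall i in 1 .. l_empnos.count
        insert into emp2 ( empno, ename, job )
        values ( l_empnos(i), l_enames(i), l_jobs(i) );
    commit;
end;
/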


Thanks

Brian, July 22, 2003 - 3:40 pm UTC

I found the article 'Forall update - updating mulitple columns' to be quite insightful as to how to implement the bulk collect feature when processing more than 100 rows.

I just need to insert into instead of update.

Thanks!

What is better bulk collect or simple for loop

A reader, September 02, 2003 - 7:46 pm UTC

I have a temp table with appropriate indexes with 200,000 rows.
Which is more efficient and faster?
for i in ( select * from temp_table order by id ) loop
conditionally insert in 3 master detail tables .
end loop;

Or
Use Bulk collect to load temp_table in plsql table with limit clause and
then use
for i in 1..plsqltable.count loop
conditionally insert in 3 Master detail tables .
end loop
( Can't use forall here as it involves 3 tables .. correct me if I am wrong please )
Thanx Sir.


Tom Kyte
September 02, 2003 - 9:21 pm UTC



wrong answer on both counts


insert into select <with conditions>

is better in all cases.


don't know why you think you cannot use "forall", you'll have to explain that a bit.

Explanation:What is better bulk collect or simple for loop

A reader, September 02, 2003 - 10:24 pm UTC

Thanks for your feedback, Sir.
I can't do an insert into select because I need to get the generated PK value from the first table and use it as an FK in the second insert.
Then use the PK generated by the second insert and
use both (table 1 and table 2 PKs) as FKs in the third table insert.
Q. Given this case, how can you use the insert-select method?

As I am inserting into 3 tables in a specific order, that's why I think forall won't work here,
as forall is tied to a single dml statement. Correct?

By the way, what is the best method here? If not insert-select, then which of the above 2 would you recommend, or is it still possible to use the insert-select method?

Tom Kyte
September 03, 2003 - 6:50 am UTC

bulk collect and forall inserts then.

so what if forall is tied to a single dml??? so is "insert a row at a time".
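One possible shape for that -- a sketch only, with made-up tables temp_table/t1/t2/t3 and sequences t1_seq/t2_seq; the per-row nextval selects are the price of needing the generated keys as foreign keys:

declare
    type num_array is table of number index by binary_integer;
    l_ids num_array;   -- the driving data from temp_table
    l_pk1 num_array;   -- keys generated for the first table
    l_pk2 num_array;   -- keys generated for the second table
    cursor c is select id from temp_table order by id;
begin
    open c;
    loop
        fetch c bulk collect into l_ids limit 100;

        for i in 1 .. l_ids.count
        loop
            select t1_seq.nextval into l_pk1(i) from dual;
            select t2_seq.nextval into l_pk2(i) from dual;
        end loop;

        forall i in 1 .. l_ids.count
            insert into t1 ( pk, id )           values ( l_pk1(i), l_ids(i) );
        forall i in 1 .. l_ids.count
            insert into t2 ( pk, t1_fk, id )    values ( l_pk2(i), l_pk1(i), l_ids(i) );
        forall i in 1 .. l_ids.count
            insert into t3 ( t1_fk, t2_fk, id ) values ( l_pk1(i), l_pk2(i), l_ids(i) );

        exit when c%notfound;
    end loop;
    close c;
    commit;
end;
/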



db version is 8.1.7

A reader, September 02, 2003 - 11:18 pm UTC


What is better bulk collect or simple for loop

Mohan, September 03, 2003 - 4:21 am UTC

You can do insert into <table_name> select

You may have to do it three times. Create a trigger to generate PK for the first table and insert into the table. Create a trigger to generate PK for the second table. Insert into this table using a select statement which joins the first table and temp_table. Do the same procedure for the third table. Indexes should be present for optimizing joins.

Mohan

BULK Collect and BULK insert

A reader, September 04, 2003 - 12:30 am UTC

Hi Tom,

Do global temporary tables work as fast as collection objects for inserting a set of rows? Inserting into a GTT using a single SQL statement vs. a BULK fetch into the collection object.

Another option to transfer rows from collection object to a table other than bulk insert is

insert into <tablename> select * from TABLE(select cast(ntp as numtype) from dual);



Tom Kyte
September 04, 2003 - 8:57 am UTC

you cannot compare the two (gtts and collections) they are totally different beasts.

for every case where you say "collections are better", i'll give you a use case where a gtt is.

and vice versa.

they are both tools, nothing more, nothing less. they both can be used in different circumstances.
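For comparison, the global temporary table flavour might look like this (a sketch; gtt_stage and target_t are made-up names):

create global temporary table gtt_stage
( object_id   number,
  object_name varchar2(128)
) on commit delete rows;

insert into gtt_stage
select object_id, object_name from all_objects;

-- the "processing" is then one plain SQL statement against the staging table
insert into target_t ( id, name )
select object_id, lower(object_name) from gtt_stage;

commit;   -- on commit delete rows empties gtt_stage automatically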

bulk collect explain plan

Bogy, January 08, 2004 - 5:15 am UTC

Hi Tom,

Short question.
Does the bulk collect clause in PL/SQL code make any difference to the execution plan?
Example:
select a
bulk collect
into v_a
from table_abcd t
where t.col_1 = 'aaa'
and t.col_2 = 5;

Let us say that we have index on a t.col_1 and t.col_2. Does this select statement perform index scan (which should do) or do FTS which is wrong?

Thanks,

Bogy

Tom Kyte
January 08, 2004 - 1:28 pm UTC

bulk collect won't make a difference. however, your statement:

<quote>
Let us say that we have index on a t.col_1 and t.col_2. Does this select
statement perform index scan (which should do) or do FTS which is wrong?
</quote>

is *wrong*.


create table table_abcd ( col_1 varchar2(3), col_2 int, data char(80) );

insert into table_abcd select 'aaa', 5, object_name from all_objects;
insert into table_abcd select * from table_abcd;
insert into table_abcd select * from table_abcd;
insert into table_abcd select * from table_abcd;
insert into table_abcd select * from table_abcd;

insert into table_abcd values ( 'xyz', 6, 'hello world' );


Now, your query had better full scan, but my query:


select a bulk collect into v_a from table_abcd t where t.col_1 = 'xyz' and t.col_2 = 6;

better index range scan.



Is element order in collections guaranteed?

John, February 11, 2004 - 6:46 pm UTC

Hi Tom,

If I run this statement twice into 2 collection variables x and y

select table1.column2 bulk collect into x from table1, table2 where table1.column1=table2.column1;
select table1.column2 bulk collect into y from table1, table2 where table1.column1=table2.column1;

will x(i)=y(i) always?
My test seems to say yes. But I cannot say for sure if this is always true. And I recall reading somewhere that 'the order of collections is not guaranteed'. Can you comment on this? Thanks!

John

Tom Kyte
February 11, 2004 - 6:55 pm UTC

no.

unless you have an ORDER BY, the order of results can be different.

this has to do with the order of rows from a query -- not anything with collections really.

Andrew McCallum, February 11, 2004 - 7:42 pm UTC

Tom, In your sample:

> open cursor;
> loop
>     fetch c bulk collect into l_c1, l_c2, ....... LIMIT 1000;
>     for i in 1 .. l_c1.count
>     loop
>         process....
>     end loop;
>     forall i in 1 .. l_c1.count
>         insert into ..... values ( L_c1(i), .... );
>     exit when c%notfound;
> end loop;
> close cursor;

I don't understand why you put the exit at the bottom of the loop. I usually put it immediately after the fetch. Won't your loop process the forall statements with an empty array on the last time through the loop?

Tom Kyte
February 12, 2004 - 8:28 am UTC

consider:


ops$tkyte@ORA9IR2> declare
  2      type array is table of number index by binary_integer;
  3      l_data array;
  4
  5      cursor c is select empno from emp;
  6  begin
  7      open c;
  8      loop
  9          fetch c bulk collect into l_data limit 10;
 10
 11          if ( c%notfound )
 12          then
 13              dbms_output.put_line
 14              ( 'Cursor returned NOT FOUND but array has ' || l_data.count
 15                 || ' left to process' );
 16          else
 17              dbms_output.put_line
 18              ( 'We have ' || l_data.count
 19                 || ' to process' );
 20          end if;
 21
 22          exit when c%notfound;
 23      end loop;
 24      close c;
 25  end;
 26  /
We have 10 to process
Cursor returned NOT FOUND but array has 4 left to process
 
PL/SQL procedure successfully completed.


if you exit when you get notfound, you end up leaving unprocessed records in the array.  On the second bulk fetch from this 14 row table -- we get 4 records AND get notified that we hit a notfound condition.  

If we left the loop right then, you would have 4 records left over.....

 

I have the exit at the base of a loop in some of my production code!

Neil, February 12, 2004 - 6:24 am UTC

But Andrew's got me thinking so I did a quick test:

DECLARE
-- Rows retrieved exactly divisible by LIMIT
-- EXIT WHEN c%NOTFOUND; at foot of loop
-- Seems to process the last chunk twice
TYPE ARRAY IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
l_data ARRAY;
CURSOR c IS SELECT object_id FROM ALL_OBJECTS WHERE ROWNUM < 101;
BEGIN
OPEN c;
LOOP
FETCH c BULK COLLECT INTO l_data LIMIT 10;
/* process data here */
dbms_output.put_line( 'Looping, c%rowcount = ' || c%rowcount );
EXIT WHEN c%NOTFOUND;
END LOOP;
dbms_output.put_line( 'out of loop, c%rowcount = ' || c%rowcount );
CLOSE c;
END;
/
Looping, c%rowcount = 10
Looping, c%rowcount = 20
Looping, c%rowcount = 30
Looping, c%rowcount = 40
Looping, c%rowcount = 50
Looping, c%rowcount = 60
Looping, c%rowcount = 70
Looping, c%rowcount = 80
Looping, c%rowcount = 90
Looping, c%rowcount = 100
Looping, c%rowcount = 100 <===
out of loop, c%rowcount = 100

DECLARE
-- Rows retrieved NOT divisible by LIMIT
-- EXIT WHEN c%NOTFOUND; at foot of loop
-- behavior ok
TYPE ARRAY IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
l_data ARRAY;
CURSOR c IS SELECT object_id FROM ALL_OBJECTS WHERE ROWNUM < 100;
BEGIN
OPEN c;
LOOP
FETCH c BULK COLLECT INTO l_data LIMIT 10;
/* process data here */
dbms_output.put_line( 'Looping, c%rowcount = ' || c%rowcount );
EXIT WHEN c%NOTFOUND;
END LOOP;
dbms_output.put_line( 'out of loop, c%rowcount = ' || c%rowcount );
CLOSE c;
END;
/
Looping, c%rowcount = 10
Looping, c%rowcount = 20
Looping, c%rowcount = 30
Looping, c%rowcount = 40
Looping, c%rowcount = 50
Looping, c%rowcount = 60
Looping, c%rowcount = 70
Looping, c%rowcount = 80
Looping, c%rowcount = 90
Looping, c%rowcount = 99
out of loop, c%rowcount = 99

DECLARE
-- Rows retrieved exactly divisible by LIMIT
-- EXIT WHEN c%NOTFOUND; after fetch
-- behaviour ok
TYPE ARRAY IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
l_data ARRAY;
CURSOR c IS SELECT object_id FROM ALL_OBJECTS WHERE ROWNUM < 101;
BEGIN
OPEN c;
LOOP
FETCH c BULK COLLECT INTO l_data LIMIT 10;
EXIT WHEN c%NOTFOUND;
/* process data here */
dbms_output.put_line( 'Looping, c%rowcount = ' || c%rowcount );
END LOOP;
dbms_output.put_line( 'out of loop, c%rowcount = ' || c%rowcount );
CLOSE c;
END;
/
Looping, c%rowcount = 10
Looping, c%rowcount = 20
Looping, c%rowcount = 30
Looping, c%rowcount = 40
Looping, c%rowcount = 50
Looping, c%rowcount = 60
Looping, c%rowcount = 70
Looping, c%rowcount = 80
Looping, c%rowcount = 90
Looping, c%rowcount = 100
out of loop, c%rowcount = 100
DECLARE
-- Rows retrieved NOT divisible by LIMIT
-- EXIT WHEN c%NOTFOUND; after fetch
-- Last 9 records not dealt with !
TYPE ARRAY IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
l_data ARRAY;
CURSOR c IS SELECT object_id FROM ALL_OBJECTS WHERE ROWNUM < 100;
BEGIN
OPEN c;
LOOP
FETCH c BULK COLLECT INTO l_data LIMIT 10;
EXIT WHEN c%NOTFOUND;
/* process data here */
dbms_output.put_line( 'Looping, c%rowcount = ' || c%rowcount );
END LOOP;
dbms_output.put_line( 'out of loop, c%rowcount = ' || c%rowcount );
CLOSE c;
END;
/
Looping, c%rowcount = 10
Looping, c%rowcount = 20
Looping, c%rowcount = 30
Looping, c%rowcount = 40
Looping, c%rowcount = 50
Looping, c%rowcount = 60
Looping, c%rowcount = 70
Looping, c%rowcount = 80
Looping, c%rowcount = 90
out of loop, c%rowcount = 99

So it appears that an exit immediately after the fetch will not deal with the last chunk of rows if the number of rows is NOT divisible by LIMIT.
It also appears that an exit at the foot of the loop will deal with the last chunk twice if the amount of records is divisible by the limit!
What's the official line?

Tom Kyte
February 12, 2004 - 8:53 am UTC

see above -- the last fetch in general will

a) signal the notdatafound
b) return some more rows to be processed

by definition.

To Neil

Kim Berg Hansen, February 12, 2004 - 9:39 am UTC

Hi

Last chunk is not processed twice - you get an empty array :


  1  DECLARE
  2    -- Rows retrieved exactly divisible by LIMIT
  3    -- EXIT WHEN c%NOTFOUND; at foot of loop
  4    -- Seems to process the last chunk twice
  5    TYPE ARRAY IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
  6    l_data ARRAY;
  7    CURSOR c IS SELECT object_id FROM ALL_OBJECTS WHERE ROWNUM < 101;
  8  BEGIN
  9    OPEN c;
 10    LOOP
 11      FETCH c BULK COLLECT INTO l_data LIMIT 10;
 12      /* process data here */
 13      dbms_output.put_line( 'Looping, c%rowcount = ' || c%rowcount || ' - array count = ' || l_data.count);
 14      EXIT WHEN c%NOTFOUND;
 15    END LOOP;
 16      dbms_output.put_line( 'out of loop, c%rowcount = ' || c%rowcount );
 17    CLOSE c;
 18* END;
SQL> /
Looping, c%rowcount = 10 - array count = 10
Looping, c%rowcount = 20 - array count = 10
Looping, c%rowcount = 30 - array count = 10
Looping, c%rowcount = 40 - array count = 10
Looping, c%rowcount = 50 - array count = 10
Looping, c%rowcount = 60 - array count = 10
Looping, c%rowcount = 70 - array count = 10
Looping, c%rowcount = 80 - array count = 10
Looping, c%rowcount = 90 - array count = 10
Looping, c%rowcount = 100 - array count = 10
Looping, c%rowcount = 100 - array count = 0
out of loop, c%rowcount = 100

PL/SQL procedure successfully completed.


If you feel you don't want to "waste" time processing the empty array then you might do :


DECLARE
  -- Rows retrieved exactly divisible by LIMIT
  -- EXIT WHEN c%NOTFOUND; at foot of loop
  TYPE ARRAY IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
  l_data ARRAY;
  CURSOR c IS SELECT object_id FROM ALL_OBJECTS WHERE ROWNUM < 101;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_data LIMIT 10;
    IF l_data.count > 0 THEN
      /* process data here */
    END IF;
    EXIT WHEN c%NOTFOUND;
  END LOOP;
  CLOSE c;
END;


Tom? I guess Neil's concern is whether this forall :
>       forall i in 1 .. l_c1.count
>            insert into ..... values ( L_c1(i), .... );
>       end loop;
is "bad" with an empty array, or what ?

Is it "worth it" to enclose the logic in "IF l_data.count > 0 THEN" or is it allright to just let the forall statement decide that it's not going to do anything at all?

My quess is, that it's probable not worth worrying about, as you almost always will use some construct that references l_data.count anyway to process your array.

 

Correction to last post

Kim Berg Hansen, February 12, 2004 - 9:59 am UTC

Sorry - made a typo

Please erase line :

"Tom? I guess Neil's concern is whether this forall :"

and replace with :

"Tom? I guess Andrew's concern is whether this forall :"


Sorry ;-)

Tom Kyte
February 12, 2004 - 12:00 pm UTC

you could code:


loop
fetch c bulk collect into l_data limit N;
exit when l_data.count = 0;
process...
exit when c%notfound;
end loop;

to catch "both" conditions ASAP


It can be done at either the head or the foot

Neil, February 12, 2004 - 10:23 am UTC

But the test has to change

"the last fetch in general will

a) signal the notdatafound
b) return some more rows to be processed

by definition."

Each fetch must therefore do a read-ahead to determine whether there are any more rows to process (Of course it does, I remember reading it in 1 on 1 now!). So, if the exit condition is tested at the head of the loop directly after the fetch, one risks leaving the last n rows unprocessed. Putting the same test at the foot of the loop will work correctly, because the condition is not tested until after the last rows are fetched and processed.

However, if I wanted to see the exit condition at the head of the loop, I could code the following which seems to work:

DECLARE
-- EXIT WHEN l_data.COUNT = 0; after fetch
-- behaviour ok
TYPE ARRAY IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
l_data ARRAY;
CURSOR c IS SELECT object_id FROM ALL_OBJECTS WHERE ROWNUM < 100;
BEGIN
OPEN c;
LOOP
FETCH c BULK COLLECT INTO l_data LIMIT 10;
EXIT WHEN l_data.COUNT = 0;
/* process data here */
dbms_output.put_line( 'Looping, c%rowcount = ' || c%rowcount );
END LOOP;
dbms_output.put_line( 'out of loop, c%rowcount = ' || c%rowcount );
CLOSE c;
END;

The array must automatically be cleared before a fetch is made.


Tom Kyte
February 12, 2004 - 12:00 pm UTC

see above -- you would "over fetch" technically -- we can skip all of the work by using "two" exits.

Behaviour of %FOUND and %NOTFOUND with BULK COLLECT

Kim Berg Hansen, February 12, 2004 - 12:05 pm UTC

Just got to test it a bit further.

I don't know if this surprises some :


  1  CREATE OR REPLACE PROCEDURE do_bulk(do_rows IN NUMBER)
  2  IS
  3    TYPE ARRAY IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
  4    l_data ARRAY;
  5    CURSOR c IS SELECT object_id FROM ALL_OBJECTS WHERE ROWNUM <= do_rows;
  6  BEGIN
  7    dbms_output.put_line( 'Before loop, do_rows: ' || do_rows );
  8    OPEN c;
  9    LOOP
 10      FETCH c BULK COLLECT INTO l_data LIMIT 10;
 11      dbms_output.put( '  Looping, rowcount: ' || c%rowcount || ' arraycount: ' || l_data.count );
 12      IF c%FOUND THEN
 13        dbms_output.put( ' found=true' );
 14      ELSE
 15        dbms_output.put( ' found=false' );
 16      END IF;
 17      IF c%NOTFOUND THEN
 18        dbms_output.put( ' notfound=true' );
 19      ELSE
 20        dbms_output.put( ' notfound=false' );
 21      END IF;
 22      dbms_output.new_line;
 23      EXIT WHEN c%NOTFOUND;
 24    END LOOP;
 25    dbms_output.put_line( 'Out of loop, rowcount: ' || c%rowcount || ' arraycount: ' || l_data.count );
 26    CLOSE c;
 27* END;
SQL> /

Procedure created.


SQL> call do_bulk(19);
Before loop, do_rows: 19
Looping, rowcount: 10 arraycount: 10 found=true notfound=false
Looping, rowcount: 19 arraycount: 9 found=false notfound=true
Out of loop, rowcount: 19 arraycount: 9

Call completed.

SQL> call do_bulk(20);
Before loop, do_rows: 20
Looping, rowcount: 10 arraycount: 10 found=true notfound=false
Looping, rowcount: 20 arraycount: 10 found=true notfound=false
Looping, rowcount: 20 arraycount: 0 found=false notfound=true
Out of loop, rowcount: 20 arraycount: 0

Call completed.

SQL> call do_bulk(21);
Before loop, do_rows: 21
Looping, rowcount: 10 arraycount: 10 found=true notfound=false
Looping, rowcount: 20 arraycount: 10 found=true notfound=false
Looping, rowcount: 21 arraycount: 1 found=false notfound=true
Out of loop, rowcount: 21 arraycount: 1

Call completed.



So c%FOUND is only true if the array has been filled !

So you WILL have errors if you test for %NOTFOUND at the top of the loop.

If your body of the loop in some way uses either a FOR or a FORALL that uses the array.count - then you can safely test for %NOTFOUND at the BOTTOM of the loop.
Even when you hit a number of rows that's exactly divisible with the LIMIT, then your array.count will just be zero and your FOR/FORALL will just do nothing.

Or you can just always test for array.count.
 

Belt and Braces

Neil, February 12, 2004 - 1:39 pm UTC

Woah! Those replies were fast! I guess we overlapped a bit there, guys!
Two tests - to be sure to be sure!
-_*



Removing collection elements and using forall

Chris MillHill, February 20, 2004 - 3:59 am UTC

Tom,

How can I best use

* bulk collect into a collection
* loop and do some processing where I need to remove some of the collection elements
* forall insert of the collection

Like:

loop
fetch c bulk collect into l_c1, l_c2, ....... LIMIT 1000;
for i in 1 .. l_c1.count
loop
process....
if some_condidtion then
l_c1.delete(i);
l_c2.delete(i);
end if;
end loop;
forall i in 1 .. l_c1.count
insert into ..... values ( L_c1(i), .... );
end loop;
exit when c%notfound;
end loop;

Understandably, this runs into an ORA-22160 since some of the elements have been removed. But how can I easily bypass the above problem? During the processing I could build up a new collection of elements that should be inserted, instead of removing some from the existing one, but since only approx 1% of the elements will be deleted, it doesn't feel like a good solution.

Thanks,

Tom Kyte
February 20, 2004 - 9:30 am UTC

before I answer in 9i and before terms, lets look at what 10g will let us do.  Basically 2 new things:

o "forall i in INDICES of collection_variable" -- iterate over a "sparse array" with holes in it (caused by .delete(i))

o "forall i in values of some_collection_filled_with_subscripts" -- iterate over the values in some collection -- using the values you find in that array/collection as the subscripts.  sort of like saying "forall i in ( 1, 3, 5, 6, 9, 10, 11 )"  -- if you had those values in some collection, it would do that.


Here are the examples:

ops$ora10g@ORA10G> create table emp as select * from scott.emp;
Table created.
 
ops$ora10g@ORA10G> declare
  2      type array is table of number;
  3      l_empnos   array;
  4      l_sals     array;
  5  begin
  6      select empno, sal
  7        bulk collect into l_empnos, l_sals
  8        from emp;
  9      for i in 1 .. l_empnos.count
 10      loop
 11          if ( l_sals(i) < 1000 )
 12          then
 13              l_empnos.delete(i);
 14          end if;
 15      end loop;<b>
 16      forall i in INDICES OF l_empnos</b>
 17          delete from emp where empno = l_empnos(i);
 18  end;
 19  /
 
PL/SQL procedure successfully completed.
 
<b>there we forall'ed over all "filled in" elements in the l_empnos collection -- deleting 12 of 14 rows:</b>
 
ops$ora10g@ORA10G> select empno, ename, sal from emp;
 
     EMPNO ENAME             SAL
---------- ---------- ----------
      7369 SMITH             800
      7900 JAMES             950
 
ops$ora10g@ORA10G> delete from emp;
2 rows deleted.
 
ops$ora10g@ORA10G> insert into emp select * from scott.emp;
14 rows created.
 
ops$ora10g@ORA10G> commit;
Commit complete.

<b>Here we'll do the same logic sort of.  we still want to delete those guys, but we want to update the other two.  we'll setup a pair of "subscript arrays" in this case, one full of subscripts into the l_empnos collection to be deleted, the other to be updated:</b>
 
ops$ora10g@ORA10G> declare
  2      type array is table of number;
  3      type idx_array is table of binary_integer index by binary_integer;
  4      l_empnos   array;
  5      l_sals     array;
  6      l_update   idx_array;
  7      l_delete   idx_array;
  8  begin
  9      select empno, sal
 10        bulk collect into l_empnos, l_sals
 11        from emp;
 12
 13      for i in 1 .. l_empnos.count
 14      loop
 15          if ( l_sals(i) < 1000 )
 16          then
 17              l_update(l_update.count) := i;
 18          else
 19              l_delete(l_delete.count) := i;
 20          end if;
 21      end loop;
 22<b>
 23      forall i in values of l_update
 24          update emp set sal = sal * 2 where empno = l_empnos(i);
 25
 26      forall i in values of l_delete
 27          delete from emp where empno = l_empnos(i);</b>
 28  end;
 29  /
 
PL/SQL procedure successfully completed.
 
ops$ora10g@ORA10G> select empno, ename, sal from emp;
 
     EMPNO ENAME             SAL
---------- ---------- ----------
      7369 SMITH            1600
      7900 JAMES            1900
 

<b>Ok, but what about 9i, how to achieve your goal without lots of duplication.  Well, we sort of have to keep the array "dense" in 9i and before.  that is fairly easy with this technique:</b>

ops$tkyte@ORA920PC> create table emp as select empno, ename from scott.emp where 1=0;
 
Table created.
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> declare
  2      type array is table of varchar2(30) index by binary_integer;
  3
  4      l_empnos array;
  5      l_enames array;
  6      l_offset number := 0;
  7  begin
  8      select empno, ename
  9        bulk collect
 10        into l_empnos, l_enames
 11        from scott.emp;
 12
 13      for i in 1 .. l_empnos.count
 14      loop
 15          if ( l_enames(i) like '%A%' )
 16          then
 17              l_offset := l_offset+1;
 18          elsif ( l_offset > 0 )
 19          then
 20              l_empnos(i-l_offset) := l_empnos(i);
 21              l_enames(i-l_offset) := l_enames(i);
 22          end if;
 23      end loop;
 24
 25      forall i in 1 .. l_empnos.count-l_offset
 26          insert into emp (empno,ename)
 27          values (l_empnos(i), l_enames(i) );
 28  end;
 29  /
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> select * from emp;
 
     EMPNO ENAME
---------- ----------
      7369 SMITH
      7566 JONES
      7788 SCOTT
      7839 KING
      7844 TURNER
      7902 FORD
      7934 MILLER
 
7 rows selected.


<b>Just slide the "end of the array" over the deleted entries.  Alternatively, we could use object types and skip forall:</b>


ops$tkyte@ORA920PC> create or replace type myScalarType as object
  2  (EMPNO       NUMBER(4),
  3   ENAME       VARCHAR2(10),
  4   JOB         VARCHAR2(9),
  5   MGR         NUMBER(4),
  6   HIREDATE    DATE,
  7   SAL         NUMBER(7,2),
  8   COMM        NUMBER(7,2),
  9   DEPTNO      NUMBER(2)
 10  )
 11  /

Type created.
 
 
ops$tkyte@ORA920PC> create or replace type myTableType as table of myScalarType
  2  /
 
Type created.
 
ops$tkyte@ORA920PC> declare
  2      l_data   myTableType;
  3  begin
  4      select myScalarType( empno, ename, job, mgr, hiredate, sal, comm, deptno )
  5        bulk collect
  6        into l_data
  7        from scott.emp;
  8
  9      for i in 1 .. l_data.count
 10      loop
 11          if ( l_data(i).ename like '%A%' )
 12          then
 13              l_data.delete(i);
 14          end if;
 15      end loop;
 16
 17      insert into emp
 18      select *
 19        from TABLE( cast( l_data as myTableType ) );
 20  end;
 21  /
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> select empno, ename from emp;
 
     EMPNO ENAME
---------- ----------
      7369 SMITH
      7566 JONES
      7788 SCOTT
      7839 KING
      7844 TURNER
      7902 FORD
      7934 MILLER
 
7 rows selected.
 




 

Which is better

A reader, February 22, 2004 - 9:52 pm UTC

Tom,

In one of your earlier discussions you had mentioned the following logic for BULK COLLECT :

Case 1 (Your logic)
======
loop
fetch c bulk collect into l_data limit N;
exit when l_data.count = 0;
process...
exit when c%notfound;
end loop;

I had a similar logic where I had to bulk fetch few records, process it, get the next set and so on. I used the following logic.

Case 2
======
Declare
....
Begin
TotalKnt := 0;
CurrIterKnt := 0;

OPEN C1;
LOOP
FETCH C1 BULK COLLECT INTO ....LIMIT n;
CurrIterKnt := C1%RowCount - TotalKnt;
TotalKnt := C1%RowCount;
EXIT WHEN C1%NOTFOUND;
/* Rest of process logic goes here */
END LOOP;

/* will happen if records fetched is not exactly divisible by LIMIT size n */

IF (CurrIterKnt > 0)
THEN
/* Rest of process logic (same as above) goes here */
END IF;

CLOSE C1;
End;
Which of the given approaches would you recommend (performance-wise), or do both approaches perform the same? Please clarify so I can change the code.

Thanks


Tom Kyte
February 23, 2004 - 7:34 am UTC

case 1 is easiest. You'll be iterating

for i in 1 .. some_array.count


which, in the (rare) case that the total row count is exactly divisible by LIMIT, simply will not execute on the final, empty fetch -- no worries. When it is not divisible, this loop picks up the remaining rows.

just put the exit at the bottom.

getting ora-04030

Praveen, February 25, 2004 - 11:20 am UTC

hi Tom
I am inserting duplicate records into a new table with this query:
create table dup_xx_tab as select * from xx_tab where
rowid not in (select min(rowid) from xx_tab group by frm_srl_no)
but it gives the following error:
"ora-04030 out of process memory when trying to allocate 1036288 bytes"

I have changed the sort area size from 50M to 500M but I got the same error.

For your information there are approximately 1,200,000 records and 60,000 of them are duplicates.

kindly suggest the best solution at the earliest


Tom Kyte
February 25, 2004 - 11:47 am UTC

umm, you are sort of going the wrong way with sort area size there aren't you?

you ran out of ram so you increased the amount you are willing to allocate by an order of magnitude???


You want to talk to your dba/sa to find out why 50m is "too big" on your system (perhaps they have unreasonable ulimits set) or decrease the sort size, not increase it!

re

praveen, February 25, 2004 - 12:38 pm UTC

our ulimit is 4194302. what should be optimal,
and what sort area size should be set?

Tom Kyte
February 25, 2004 - 12:44 pm UTC

4194302 what ? bytes, kbytes?

if that is 4meg, that is "really small", you have limited processes to 4m of memory.

I run unlimited, but really it is up to you, your DBA and your SA as to what is truly appropriate for you in your circumstance with your hardware.

ora 04030

Praveen, February 26, 2004 - 10:39 pm UTC

hi tom
now I have changed ulimit to unlimited and sort area size to 10M, but I am still getting the same problem.
is there any patch required at the OS level? we are using 9i (RAC) on AIX 5L.




Tom Kyte
February 27, 2004 - 7:24 am UTC

not that I am aware of but then again, I don't run AIX myself -sooo, that would be a question for support.

if you use a smaller SAS (i'd prefer pga_aggregate_target mostly myself), what then?

A reader, March 28, 2004 - 9:44 am UTC


How to handle exception if FORALL is used?

Tony, March 31, 2004 - 4:08 am UTC

Tom, I would like to use FORALL to speed up inserts, but I don't know how to handle exceptions for bulk inserts. I've given the code below. Is there any way to optimize the code? Waiting for your valuable suggestions. Thanks in advance.


DECLARE

TYPE ARRAY IS TABLE OF CIRTACC@EC_TO_CIR%ROWTYPE;
stgtbl ARRAY;
g_MsgString VARCHAR2(1000);
m_CIR_ID NUMBER;
g_PgmId VARCHAR2(4):= '23';

CURSOR c_InsCust IS
SELECT acct_id, cir_id, sec_lbl,
acct_cat, acct_cls, acct_create_ts
FROM acc_tab SUBPARTITION(SP0402_2);

BEGIN
OPEN c_InsCust;

LOOP

FETCH c_InsCust BULK COLLECT INTO stgtbl LIMIT 2000;
EXIT WHEN stgtbl.COUNT = 0;

FOR i IN stgtbl.FIRST..stgtbl.LAST
LOOP

BEGIN
INSERT INTO rt_acc@db_lin_acc
VALUES stgtbl(i) ;
EXCEPTION
WHEN DUP_VAL_ON_INDEX THEN

g_MsgString := ' Oracle Error Code is: '
||SQLCODE|| '-Oracle Error Msg is: '
||SUBSTR(SQLERRM,1, 200);
dbms_output.put_line(g_MsgString);

BEGIN
SELECT CIR_ID INTO m_CIR_ID
FROM rt_acc@db_lin_acc
WHERE ACCT_ID = stgtbl(i).ACCT_ID
AND ACCT_CAT = stgtbl(i).ACCT_CAT
AND CIR_ID <> stgtbl(i).CIR_ID
AND stgtbl(i).ACCT_CAT IN ('CC','DL','DB');

INSERT INTO ap_tab@db_lin_acc
VALUES (m_CIR_ID, stgtbl(i).SEC_LBL,
stgtbl(i).ACCT_ID, stgtbl(i).CIR_ID,
stgtbl(i).ACCT_CAT) ;

EXCEPTION
WHEN DUP_VAL_ON_INDEX THEN

g_MsgString := ' Oracle Error Code is: '
||SQLCODE|| '-Oracle Error Msg is: '
||SUBSTR(SQLERRM,1, 200);
dbms_output.put_line(g_MsgString);

WHEN OTHERS THEN
g_MsgString := ' Oracle Error Code is: '
||SQLCODE|| '-Oracle Error Msg is: '
||SUBSTR(SQLERRM,1, 200);
dbms_output.put_line(g_MsgString);
END;

WHEN OTHERS THEN

g_MsgString := ' Oracle Error Code is: '
||SQLCODE|| '-Oracle Error Msg is: '
||SUBSTR(SQLERRM,1, 200);
dbms_output.put_line(g_MsgString);
END;

END LOOP;

END LOOP;

CLOSE c_InsCust;

END;


Tom Kyte
March 31, 2004 - 8:45 am UTC

search this site for

"save exceptions"


in quotes like that. those articles show how to do forall DML processing in bulk and catch the exceptions

not applicable

sagi, June 16, 2004 - 8:51 pm UTC

Hi,
I need to update a table with millions of records. I have a new column that has to be calculated from 2 other columns (e.g. if date < 1/1/2004 then sal=sal+3, if date > 1/1/2004 then sal=sal*2). I also need to commit every 100,000 records or the rbs will burst into flames.
I tried to use your example but it did not work -- neither the syntax nor the logic;
the "forall i in values of array" is not working.
Please help,
I need to use bulk since it is much faster

Tom Kyte
June 17, 2004 - 8:05 am UTC

you do not want to commit every 100,000 or N records, else you'll be doing it procedurally.

if you want to do it procedurally, more power to you, have fun writing that code -- and don't forget to make it restartable for when it crashes with ora-1555, you'll need to pick up where you left off.

people always make the assumption that "gotta save on rbs is my #1 priority" and by doing so do things in a manner that

a) fails (is fragile)
b) is slow as slow can be
c) is really complex
d) generates more undo and redo (not less, more)

sorry you could not take a working example that bulk fetched N rows from a table and forall updated them. seems "straightforward"? but since you didn't share your attempt nor describe what your issues were, that's all I can say....
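For reference, the single-statement version of that requirement would be something along these lines -- a sketch with a made-up table big_t and column hire_dt (what happens on exactly 1/1/2004 falls into the else branch here, since the original spec leaves it open):

update big_t
   set sal = case when hire_dt < to_date('01/01/2004','dd/mm/yyyy') then sal + 3
                  else sal * 2
             end;

commit;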

Why does it say 2 Total errors instead of 6

Venkat, July 05, 2004 - 8:12 pm UTC

I am following the instructions from your Sep/Oct 2003 Oracle mag article on BULK COLLECT.

pl is a shortcut for DBMS_OUTPUT.PUT_LINE.

drop table t;

create table t as
select * from all_objects
order by object_name ;

drop table t2 ;

create table t2 as
select * from all_objects
where 1=0;

alter table t2 ADD constraint check_object_id
check( object_id <= 33900) ;

DECLARE
TYPE array is TABLE OF t%ROWTYPE
INDEX BY BINARY_INTEGER ;
data array;
errors NUMBER ;
l_cnt NUMBER := 0;
dml_errors EXCEPTION;

PRAGMA EXCEPTION_INIT (dml_errors, -24381) ;

CURSOR c is SELECT * FROM t;
BEGIN
OPEN c;
LOOP
FETCH c BULK COLLECT INTO data LIMIT 100;
BEGIN
FORALL i IN 1..data.COUNT
SAVE EXCEPTIONS
INSERT INTO t2 values data(i) ;
EXCEPTION
WHEN dml_errors THEN
errors := SQL%BULK_EXCEPTIONS.COUNT ;
l_cnt := l_cnt + 1 ;
FOR i IN 1..errors LOOP
pl('Error occurred during iteration ' ||
SQL%BULK_EXCEPTIONS(i).ERROR_INDEX ||
'. Oracle error is ' ||
SQL%BULK_EXCEPTIONS(i).ERROR_CODE );
END LOOP;
END;
EXIT WHEN c%NOTFOUND ;
END LOOP;
CLOSE c;
pl(l_cnt || ' Total Errors.');
END ;
/

vk@mug> DECLARE
2 TYPE array is TABLE OF t%ROWTYPE
3 INDEX BY BINARY_INTEGER ;
4 data array;
5 errors NUMBER ;
6 l_cnt NUMBER := 0;
7 dml_errors EXCEPTION;
8
9 PRAGMA EXCEPTION_INIT (dml_errors, -24381) ;
10
11 CURSOR c is SELECT * FROM t;
12 BEGIN
13 OPEN c;
14 LOOP
15 FETCH c BULK COLLECT INTO data LIMIT 100;
16 BEGIN
17 FORALL i IN 1..data.COUNT
18 SAVE EXCEPTIONS
19 INSERT INTO t2 values data(i) ;
20 EXCEPTION
21 WHEN dml_errors THEN
22 errors := SQL%BULK_EXCEPTIONS.COUNT ;
23 l_cnt := l_cnt + 1 ;
24 FOR i IN 1..errors LOOP
25 pl('Error occurred during iteration ' ||
26 SQL%BULK_EXCEPTIONS(i).ERROR_INDEX ||
27 '. Oracle error is ' ||
28 SQL%BULK_EXCEPTIONS(i).ERROR_CODE );
29 END LOOP;
30 END;
31 EXIT WHEN c%NOTFOUND ;
32 END LOOP;
33 CLOSE c;
34 pl(l_cnt || ' Total Errors.');
35 END ;
36 /
Error occurred during iteration 78. Oracle error is 2290
Error occurred during iteration 98. Oracle error is 2290
Error occurred during iteration 99. Oracle error is 2290
Error occurred during iteration 39. Oracle error is 2290
Error occurred during iteration 42. Oracle error is 2290
Error occurred during iteration 43. Oracle error is 2290
2 Total Errors.

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.07

Why 2 Total errors? Shouldn't it be 6?

Regards,
Venkat

Tom Kyte
July 05, 2004 - 8:29 pm UTC

each bulk operation (of which you had two) has SOME errors. Looks like each had 3 actually.

So, twice, you had three errors. It does not stop at the first error -- it processes all of the 100 rows it can and reports back a list of the failed ones.

Shalu, July 08, 2004 - 12:15 pm UTC

Tom,

I have one insert stmt like this:

Insert into tab1
Select * from tab2, tab3....;

This select fetches a minimum of 40,000 records at a time and is fast enough, taking 7-8 secs. But the insert statement on top of that slows it down quite a bit because the table I am inserting into has 8 indexes on it and holds a huge amount of data. The total time increases to 1+ min.

Please suggest how I can optimize it.
Can bulk collect help? Because I believe 1 SQL is always faster than procedural code.

Any suggestions would be appreciated!
Shalu

Tom Kyte
July 08, 2004 - 12:30 pm UTC

insert /*+ append */ might have a *strictly marginal* effect here.

maintaining 8 indexes is going to take a couple of seconds.

how long is it taking? 40k records is not too many. do you have a tkprof?

Shalu, July 08, 2004 - 1:01 pm UTC

It is taking 1 min 48 secs.
Here is the tkprof, it's huge though:

TKPROF: Release 9.2.0.4.0 - Production on Thu Jul 8 12:54:13 2004

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Trace file: vndbacct_ora_22159.trc
Sort options: prsela exeela fchela
********************************************************************************
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
********************************************************************************

SELECT valuation_sequence.nextval
from dual

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 36532 0.00 1.78 0 0 0 0
Fetch 36532 0.00 42.38 0 109596 36532 36532
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 73064 0.00 44.17 0 109596 36532 36532

Misses in library cache during parse: 0
Optimizer goal: ALL_ROWS
Parsing user id: 47 (VNUSER) (recursive depth: 2)

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: ALL_ROWS
0 SEQUENCE OF 'VALUATION_SEQUENCE'
0 TABLE ACCESS (FULL) OF 'DUAL'

********************************************************************************

INSERT Into vn_result(group_id, instr_id, price, status)
Select :b4,
instr_id,
price,
0
From
(
Select instr_id,
--t2_wac,
case when (price is null) and (prior_price is null) then
next_price
when (price is not null) and (prior_wac = next_wac) then
prior_price
when (price is null) and (prior_wac <> next_wac) then
prior_price * ((t2_wac-prior_wac)/(next_wac-prior_wac)) + next_price * ((next_wac-t2_wac)/(next_wac-prior_wac))
when (price is null) and (next_price is null) then
prior_price
end price
From (
select wac,
instr_id,
t2_wac,
price,
to_number( substr( prior_rows, 10, 15 )) prior_wac,
to_number( substr( prior_rows, 40, 15 )) prior_price,
to_number( substr( next_rows, 10, 15 )) next_wac,
to_number( substr( next_rows, 40, 15 )) next_price
from (
select wac,
instr_id,
t2_wac,
--t2_wam,
t_wam,
price,
max(prior_rows) over (order by wac) prior_rows,
max(next_rows) over (order by wac desc) next_rows
from (
select nvl(t2.wac,t.wac) wac,
t2.instr_id,
t2.wac t2_wac,
--t2.wam t2_wam,
t.wam t_wam,
t.price,
case when t.wac is not null
then to_char( row_number() over (order by nvl(t2.wac,t.wac), t.wam, price), 'fm000000000') ||
to_char(t.wac,'fm000.00000000000') ||
to_char(t.wam,'fm000.00000000000') ||
to_char(t.price,'fm000.00000000000') ||
to_char( lag(t.wac) over (order by nvl(t2.wac,t.wac), t.wam, price), 'fm000.00000000000' ) ||
to_char( lag(t.wam) over (order by nvl(t2.wac,t.wac), t.wam, price), 'fm000.00000000000' ) ||
to_char( lag(t.price) over (order by nvl(t2.wac,t.wac), t.wam, price), 'fm000.00000000000' )
end prior_rows,
case when t.wac is not null
then to_char( row_number() over (order by nvl(t2.wac,t.wac) desc, t.wam desc, price), 'fm000000000') ||
to_char(t.wac,'fm000.00000000000') ||
to_char(t.wam,'fm000.00000000000') ||
to_char(t.price,'fm000.00000000000') ||
to_char( lag(t.wac) over (order by nvl(t2.wac,t.wac) desc, t.wam desc, price), 'fm000.00000000000' ) ||
to_char( lag(t.wam) over (order by nvl(t2.wac,t.wac) desc, t.wam desc, price), 'fm000.00000000000' ) ||
to_char( lag(t.price) over (order by nvl(t2.wac,t.wac) desc, t.wam desc, price), 'fm000.00000000000' )
end next_rows
from (Select i.instr_id, wac From vn_port_instr p, vn_instrument i Where i.instr_id=p.instr_id And p.portfolio_id=:b1) t2
--full outer join (Select wac, wam, price From vn_port_instr p, vn_instrument i, vn_result r
-- Where i.instr_id=p.instr_id And r.instr_id=i.instr_id And i.product_type='SF_INTERMEDIATE' And p.portfolio_id=2480 And r.group_id=3192) t
full outer join (Select wac, wam, price From vn_group g, vn_instrument i, vn_result r
Where g.group_id=r.group_id And r.instr_id=i.instr_id And i.product_type=:b3 And r.group_id=:b2) t
on (t2.wac = t.wac)
) x
)
where t2_wac is not null
)
) Group By instr_id,price

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 43.61 449 260360 714086 36532
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 0.00 43.61 449 260360 714086 36532

Misses in library cache during parse: 0
Optimizer goal: ALL_ROWS
Parsing user id: 47 (VNUSER) (recursive depth: 1)

Rows Execution Plan
------- ---------------------------------------------------
0 INSERT STATEMENT GOAL: ALL_ROWS
0 SORT (GROUP BY)
0 VIEW
0 WINDOW (SORT)
0 WINDOW (SORT)
0 VIEW
0 WINDOW (SORT)
0 WINDOW (SORT)
0 VIEW
0 UNION-ALL
0 HASH JOIN (OUTER)
0 NESTED LOOPS
0 INDEX GOAL: ANALYZED (RANGE SCAN) OF
'VPR_PK' (UNIQUE)
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX
ROWID) OF 'VN_INSTRUMENT'
0 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF
'VINST_PK' (UNIQUE)
0 VIEW
0 NESTED LOOPS
0 NESTED LOOPS
0 INDEX GOAL: ANALYZED (UNIQUE SCAN)
OF 'VN_GROUP_PK' (UNIQUE)
0 INDEX GOAL: ANALYZED (RANGE SCAN) OF
'VRT_GRP_INST_PRICE' (NON-UNIQUE)
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX
ROWID) OF 'VN_INSTRUMENT'
0 INDEX GOAL: ANALYZED (UNIQUE SCAN)
OF 'VINST_PK' (UNIQUE)
0 HASH JOIN (ANTI)
0 NESTED LOOPS
0 NESTED LOOPS
0 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF
'VN_GROUP_PK' (UNIQUE)
0 INDEX GOAL: ANALYZED (RANGE SCAN) OF
'VRT_GRP_INST_PRICE' (NON-UNIQUE)
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX
ROWID) OF 'VN_INSTRUMENT'
0 INDEX GOAL: ANALYZED (UNIQUE SCAN) OF
'VINST_PK' (UNIQUE)
0 VIEW OF 'VW_SQ_1'
0 NESTED LOOPS
0 INDEX GOAL: ANALYZED (RANGE SCAN) OF
'VPR_PK' (UNIQUE)
0 TABLE ACCESS GOAL: ANALYZED (BY INDEX
ROWID) OF 'VN_INSTRUMENT'
0 INDEX GOAL: ANALYZED (UNIQUE SCAN)
OF 'VINST_PK' (UNIQUE)

********************************************************************************

update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,
cache=:7,highwater=:8,audit$=:9,flags=:10
where
obj#=:1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 36533 0.00 2.91 0 0 0 0
Execute 36533 0.00 12.87 0 36560 73796 36533
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 73066 0.00 15.78 0 36560 73796 36533

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 2)

Rows Row Source Operation
------- ---------------------------------------------------
1 UPDATE (cr=1 r=0 w=0 time=508 us)
1 INDEX UNIQUE SCAN I_SEQ1 (cr=1 r=0 w=0 time=44 us)(object id 107)

********************************************************************************

SELECT audit_sequence.currval
from
dual


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 36533 0.00 1.58 0 0 0 0
Fetch 36533 0.00 3.17 0 109599 0 36533
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 73066 0.00 4.75 0 109599 0 36533

Misses in library cache during parse: 0
Optimizer goal: ALL_ROWS
Parsing user id: 47 (VNUSER) (recursive depth: 2)

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: ALL_ROWS
0 SEQUENCE OF 'AUDIT_SEQUENCE'
0 TABLE ACCESS (FULL) OF 'DUAL'

********************************************************************************

declare
lgroup_id number;
begin
VN_INTERPOLATION_ALGORITHMS.INTERPOLATE_USING_BENCHMARKS(54, 2962, 'frmInt', 4208, 1, 'Benchmarks', sysdate, lgroup_id);
dbms_output.put_line('Group id:'||lgroup_id);
end;

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.01 0 0 0 0
Execute 1 0.00 0.03 0 0 0 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.04 0 0 0 1

Misses in library cache during parse: 1
Optimizer goal: ALL_ROWS
Parsing user id: 47 (VNUSER)
********************************************************************************

select file#
from
file$ where ts#=:1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 30 0.00 0.00 0 0 0 0
Execute 30 0.00 0.00 0 0 0 0
Fetch 90 0.00 0.00 0 150 0 60
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 150 0.00 0.01 0 150 0 60

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 4)

Rows Row Source Operation
------- ---------------------------------------------------
2 TABLE ACCESS BY INDEX ROWID FILE$ (cr=5 r=0 w=0 time=181 us)
2 INDEX RANGE SCAN I_FILE2 (cr=3 r=0 w=0 time=124 us)(object id 42)

********************************************************************************

BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 0 0 1

Misses in library cache during parse: 1
Optimizer goal: ALL_ROWS
Parsing user id: 47 (VNUSER)
********************************************************************************

COMMIT


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 1 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 0.00 0.00 0 0 1 0

Misses in library cache during parse: 0
Optimizer goal: ALL_ROWS
Parsing user id: 47 (VNUSER) (recursive depth: 1)
********************************************************************************

SELECT group_sequence.nextval
From
dual


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 3 1 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 3 1 1

Misses in library cache during parse: 0
Optimizer goal: ALL_ROWS
Parsing user id: 47 (VNUSER) (recursive depth: 1)

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: ALL_ROWS
0 SEQUENCE OF 'GROUP_SEQUENCE'
0 TABLE ACCESS (FULL) OF 'DUAL'

********************************************************************************

INSERT Into vn_group (group_id, name, asof_date, project_id, user_name, deleted, portfolio_id, algorithm_id)
Values (:b7, :b6||'.'||:b5, :b4, :b3, user, 0, :b2, :b1)

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 19 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 0.00 0.00 0 0 19 1

Misses in library cache during parse: 0
Optimizer goal: ALL_ROWS
Parsing user id: 47 (VNUSER) (recursive depth: 1)

Rows Execution Plan
------- ---------------------------------------------------
0 INSERT STATEMENT GOAL: ALL_ROWS

********************************************************************************

alter session set sql_trace false


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 0 0 0

Misses in library cache during parse: 1
Optimizer goal: ALL_ROWS
Parsing user id: 47 (VNUSER)
********************************************************************************

alter session set sql_trace true


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 0.00 0.00 0 0 0 0

Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: ALL_ROWS
Parsing user id: 47 (VNUSER)



********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 3 0.00 0.01 0 0 0 0
Execute 4 0.00 0.03 0 0 0 2
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 7 0.00 0.05 0 0 0 2

Misses in library cache during parse: 3
Misses in library cache during execute: 1


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 36563 0.00 2.91 0 0 0 0
Execute 109632 0.00 59.86 449 296920 787902 73066
Fetch 73156 0.00 45.57 0 219348 36533 73126
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 219351 0.00 108.35 449 516268 824435 146192

Misses in library cache during parse: 2

10 user SQL statements in session.
36563 internal SQL statements in session.
36573 SQL statements in session.
5 statements EXPLAINed in this session.
********************************************************************************
Trace file: vndbacct_ora_22159.trc
Trace file compatibility: 9.00.01
Sort options: prsela exeela fchela
1 session in tracefile.
10 user SQL statements in trace file.
36563 internal SQL statements in trace file.
36573 SQL statements in trace file.
12 unique SQL statements in trace file.
5 SQL statements EXPLAINed using schema:
VNUSER.prof$plan_table
Default table was used.
Table was created.
Table was dropped.
438874 lines in trace file.




Tom Kyte
July 08, 2004 - 1:20 pm UTC

fix your sequence, looks like you have "nocache" on it. that's a waste, set it to cache a couple thousand.

most of your time is spent *IN SEQUENCES*. look at your logic -- why do you select s.nextval from dual? why is that not just in the insert somewhere -- your logic is what needs to be looked at.

remove as much row by row (slow by slow) processing as you can -- don't select s.nextval from dual -- just insert it! (and if you need the value in your code for something else, use the returning clause).

never use nocache sequences -- they are useless (it is not like you are going to get a gap-free sequence; those just DO NOT exist anywhere)
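
a minimal sketch of what that looks like (the table t, sequence s and column names are made up for illustration, not taken from the poster's schema):

insert into t ( id, payload )
values ( s.nextval, :payload )
returning id into :new_id;     -- grab the generated value if the code needs it later

insert into t ( id, payload )
select s.nextval, payload
  from staging;                -- set based, no row-by-row trips to dual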

Shalu, July 08, 2004 - 1:38 pm UTC

Tom,

The two SQLs against the sequences are actually issued from the database trigger on the table I am inserting into.

So you are suggesting I include those columns in the INSERT statement and not let the db trigger fire to populate them?

And if that is correct, do I still need to change my sequences from 'nocache'?

Tom Kyte
July 08, 2004 - 3:02 pm UTC

absolutely! avoid the triggers altogether.

yes, you never want to use nocache.

nocache is like setting:

slow=most_definitely

in your init.ora

Shalu, July 08, 2004 - 2:00 pm UTC

Tom, I tried using sequence.nextval in the insert and I get this error:

Line: 41 Column: 29 Error: PL/SQL: ORA-02287: sequence number not allowed here

Looks like we can't use it in an INSERT INTO ... SELECT.

Please advise!

Tom Kyte
July 08, 2004 - 3:03 pm UTC

sure you can, most of the time -- give me the SMALLEST example like your query that reproduces the error and I'll see whether it can be squeezed in there.

Excellent!! ORA-02287

Shalu, July 08, 2004 - 4:55 pm UTC

I am able to reproduce it. It fails when you have a group by. e.g.

insert into b (c1, num)
Select c1, seq1.nextval
from (Select c1 from a)
Group By c1;

I guess we can wrap the GROUP BY query in one more sub-query to avoid this error. Please suggest!

One more thing: what is the advantage of cached sequences over nocache -- does it increase processing speed? Can I alter my existing sequence without dropping and re-creating it and setting it back to the same number? Re-creating it will invalidate dependent objects.
CREATE SEQUENCE valuation_sequence
NOMAXVALUE
NOMINVALUE
NOCYCLE
NOCACHE
/

Tom Kyte
July 08, 2004 - 8:51 pm UTC

cache sequences = much less recursive sql and fewer commits you have to wait on.

alter sequence s cache 1000;


ops$tkyte@ORA9IR2> create table t (id int, deptno int, sal int );
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create sequence s;
 
Sequence created.
 
 
ops$tkyte@ORA9IR2> alter sequence s cache 1000;
 
Sequence altered.


ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into t
  2  select s.nextval, deptno, sum(sal) sal
  3    from emp group by deptno
  4  /
select s.nextval, deptno, sum(sal) sal
         *
ERROR at line 2:
ORA-02287: sequence number not allowed here
 
 
ops$tkyte@ORA9IR2> insert into t
  2  select s.nextval, deptno, sal
  3    from (select deptno, sum(sal) sal from emp group by deptno)
  4  /
 
3 rows created.
 


just move it out a layer, sure. 

Shalu, July 09, 2004 - 9:47 am UTC

Great!! Thanks for the great hints!!

Just one last thing on this:

If I am inserting the sequence columns and not letting the db triggers do that. And I have this code in the db trigger:

if :new.valuation_id is null
then
select valuation_sequence.nextval
into :new.valuation_id from dual;
end if;

This will save this 1 select that is getting fired for each record, but it will still be checking this condition for each record even when I am inserting the value. Is this correct ?

Tom Kyte
July 09, 2004 - 10:32 am UTC

correct, the trigger will *still* be firing for each row -- which is painful.  


Let's look:


ops$tkyte@ORA9IR2> create sequence s;
Sequence created.
 
ops$tkyte@ORA9IR2> create table t ( x int );
Table created.
 
ops$tkyte@ORA9IR2> set timing on
ops$tkyte@ORA9IR2> @mystat "CPU used by this session"

<b>this is my "mystat" script, i'll be chopping that out in subsequent runs</b>

ops$tkyte@ORA9IR2> column value new_val V
ops$tkyte@ORA9IR2> define S="&1"
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> set autotrace off
ops$tkyte@ORA9IR2> select a.name, b.value
  2  from v$statname a, v$mystat b
  3  where a.statistic# = b.statistic#
  4  and lower(a.name) like '%' || lower('&S')||'%'
  5  /

<b>that's the end of the script</b>

old   4: and lower(a.name) like '%' || lower('&S')||'%'
new   4: and lower(a.name) like '%' || lower('CPU used by this session')||'%'
 
NAME                                VALUE
------------------------------ ----------
CPU used by this session               45
 
Elapsed: 00:00:00.00
ops$tkyte@ORA9IR2> insert into t select s.nextval from big_table.big_table where rownum <= &rows;
old   1: insert into t select s.nextval from big_table.big_table where rownum <= &rows
new   1: insert into t select s.nextval from big_table.big_table where rownum <= 100000
 
100000 rows created.
 
Elapsed: 00:00:03.23
ops$tkyte@ORA9IR2> @mystat2

<b>my mystat2 script -- prints the difference so we can see CPU time used:</b>

ops$tkyte@ORA9IR2> select a.name, b.value V, b.value-&V diff
  2  from v$statname a, v$mystat b
  3  where a.statistic# = b.statistic#
  4  and lower(a.name) like '%' || lower('&S')||'%'
  5  /
old   1: select a.name, b.value V, b.value-&V diff
new   1: select a.name, b.value V, b.value-        45 diff
old   4: and lower(a.name) like '%' || lower('&S')||'%'
new   4: and lower(a.name) like '%' || lower('CPU used by this session')||'%'
 
NAME                                    V       DIFF
------------------------------ ---------- ----------
CPU used by this session              349        304
 
Elapsed: 00:00:00.01

<b>so, that shows that without a trigger, this is a 3 second operation. Now:</b>


ops$tkyte@ORA9IR2> create or replace trigger t_trigger
  2  before insert on t for each row
  3  begin
  4          select s.nextval into :new.x from dual;
  5  end;
  6  /
 
Trigger created.
 
Elapsed: 00:00:00.12
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> truncate table t;
 
Table truncated.
 
Elapsed: 00:00:00.85
ops$tkyte@ORA9IR2> @mystat "CPU used by this session"
...

ops$tkyte@ORA9IR2> insert into t select null from big_table.big_table where rownum <= &rows;
old   1: insert into t select null from big_table.big_table where rownum <= &rows
new   1: insert into t select null from big_table.big_table where rownum <= 100000
 
100000 rows created.
 
Elapsed: 00:00:17.12
ops$tkyte@ORA9IR2> @mystat2
...
 
NAME                                    V       DIFF
------------------------------ ---------- ----------
CPU used by this session             1898       1544
 
Elapsed: 00:00:00.01

<b>that 3 second operation goes to 15.44 cpu seconds with your trigger. if we short-circuit the logic:</b>


ops$tkyte@ORA9IR2> create or replace trigger t_trigger
  2  before insert on t for each row
  3  begin
  4          if ( :new.x is null )
  5          then
  6                  select s.nextval into :new.x from dual;
  7          end if;
  8  end;
  9  /
 
Trigger created.
 
Elapsed: 00:00:00.10
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> truncate table t;
 
Table truncated.
 
Elapsed: 00:00:00.51
ops$tkyte@ORA9IR2> @mystat "CPU used by this session"
....

ops$tkyte@ORA9IR2> insert into t select s.nextval from big_table.big_table where rownum <= &rows;
old   1: insert into t select s.nextval from big_table.big_table where rownum <= &rows
new   1: insert into t select s.nextval from big_table.big_table where rownum <= 100000
 
100000 rows created.
 
Elapsed: 00:00:06.02
ops$tkyte@ORA9IR2> @mystat2
...
 
NAME                                    V       DIFF
------------------------------ ---------- ----------
CPU used by this session             2430        527
 
Elapsed: 00:00:00.01

<b>it is much better, but we can go one more step and use the WHEN clause:</b>

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace trigger t_trigger
  2  before insert on t for each row<b>
  3  when (new.x is null)</b>
  4  begin
  5          select s.nextval into :new.x from dual;
  6  end;
  7  /
 
Trigger created.
 
Elapsed: 00:00:00.08
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> truncate table t;
 
Table truncated.
 
Elapsed: 00:00:01.46
ops$tkyte@ORA9IR2> @mystat "CPU used by this session"
....

ops$tkyte@ORA9IR2> insert into t select s.nextval from big_table.big_table where rownum <= &rows;
old   1: insert into t select s.nextval from big_table.big_table where rownum <= &rows
new   1: insert into t select s.nextval from big_table.big_table where rownum <= 100000
 
100000 rows created.
 
Elapsed: 00:00:05.34
ops$tkyte@ORA9IR2> @mystat2
...
 
NAME                                    V       DIFF
------------------------------ ---------- ----------
CPU used by this session             2838        403
 
Elapsed: 00:00:00.00
ops$tkyte@ORA9IR2>

<b>so, there, the additional overhead of checking whether we need to fire the trigger at all was nominal (well, 33% in this case, but we are talking about 3 seconds vs 4 seconds here -- your mileage may vary)

</b>

Shalu, July 09, 2004 - 10:49 am UTC

Tom, this was all great to know.

After getting rid of those 2 db triggers, I ran the code again and the performance is still the same. If you don't mind looking at the tkprof again, here it is:


TKPROF: Release 9.2.0.4.0 - Production on Fri Jul 9 10:25:42 2004

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Trace file: vndbacct_ora_4490.trc
Sort options: prsela exeela fchela
********************************************************************************
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
********************************************************************************

INSERT Into vn_result(group_id, instr_id, price, valuation_id, status, audit_id)
Select :b5,
instr_id,
price,
valuation_sequence.nextval,
0,
:b4
From (Select instr_id, price
From ( Select instr_id,
--t2_wac,
case when (price is null) and (prior_price is null) then
next_price
when (price is not null) and (prior_wac = next_wac) then
prior_price
when (price is null) and (prior_wac <> next_wac) then
prior_price * ((t2_wac-prior_wac)/(next_wac-prior_wac)) + next_price * ((next_wac-t2_wac)/(next_wac-prior_wac))
when (price is null) and (next_price is null) then
prior_price
end price
From ( Select wac,
instr_id,
t2_wac,
price,
to_number( substr( prior_rows, 10, 15 )) prior_wac,
to_number( substr( prior_rows, 40, 15 )) prior_price,
to_number( substr( next_rows, 10, 15 )) next_wac,
to_number( substr( next_rows, 40, 15 )) next_price
From ( Select wac,
instr_id,
t2_wac,
--t2_wam,
t_wam,
price,
max(prior_rows) over (order by wac) prior_rows,
max(next_rows) over (order by wac desc) next_rows
From ( Select nvl(t2.wac,t.wac) wac,
t2.instr_id,
t2.wac t2_wac,
--t2.wam t2_wam,
t.wam t_wam,
t.price,
case when t.wac is not null
then to_char( row_number() over (order by nvl(t2.wac,t.wac), t.wam, price), 'fm000000000') ||
to_char(t.wac,'fm000.00000000000') ||
to_char(t.wam,'fm000.00000000000') ||
to_char(t.price,'fm000.00000000000') ||
to_char( lag(t.wac) over (order by nvl(t2.wac,t.wac), t.wam, price), 'fm000.00000000000' ) ||
to_char( lag(t.wam) over (order by nvl(t2.wac,t.wac), t.wam, price), 'fm000.00000000000' ) ||
to_char( lag(t.price) over (order by nvl(t2.wac,t.wac), t.wam, price), 'fm000.00000000000' )
end prior_rows,
case when t.wac is not null
then to_char( row_number() over (order by nvl(t2.wac,t.wac) desc, t.wam desc, price), 'fm000000000') ||
to_char(t.wac,'fm000.00000000000') ||
to_char(t.wam,'fm000.00000000000') ||
to_char(t.price,'fm000.00000000000') ||
to_char( lag(t.wac) over (order by nvl(t2.wac,t.wac) desc, t.wam desc, price), 'fm000.00000000000' ) ||
to_char( lag(t.wam) over (order by nvl(t2.wac,t.wac) desc, t.wam desc, price), 'fm000.00000000000' ) ||
to_char( lag(t.price) over (order by nvl(t2.wac,t.wac) desc, t.wam desc, price), 'fm000.00000000000' )
end next_rows
From (Select i.instr_id, wac From vn_port_instr p, vn_instrument i Where i.instr_id=p.instr_id And p.portfolio_id=:b1) t2
--full outer join (Select wac, wam, price From vn_port_instr p, vn_instrument i, vn_result r
-- Where i.instr_id=p.instr_id And r.instr_id=i.instr_id And i.product_type='SF_INTERMEDIATE' And p.portfolio_id=2480 And r.group_id=3192) t
full outer join (Select wac, wam, price From vn_group g, vn_instrument i, vn_result r
Where g.group_id=r.group_id And r.instr_id=i.instr_id And i.product_type=:b3 And r.group_id=:b2) t
on (t2.wac = t.wac)
) x
) where t2_wac is not null
)) Group By instr_id,price)

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 65.58 559 260748 753796 36532
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 0.00 65.58 559 260748 753796 36532

Misses in library cache during parse: 0
Optimizer goal: ALL_ROWS
Parsing user id: 47 (VNUSER) (recursive depth: 1)

Rows Execution Plan
------- ---------------------------------------------------
0 INSERT STATEMENT GOAL: ALL_ROWS
0 SEQUENCE OF 'VALUATION_SEQUENCE'
0 VIEW
0 SORT (GROUP BY)
0 VIEW
0 WINDOW (SORT)
0 WINDOW (SORT)
0 VIEW
0 WINDOW (SORT)
0 WINDOW (SORT)
0 VIEW
0 UNION-ALL
0 HASH JOIN (OUTER)
0 NESTED LOOPS
0 INDEX GOAL: ANALYZED (RANGE SCAN) OF
'VPR_PK' (UNIQUE)
0 TABLE ACCESS GOAL: ANALYZED (BY
INDEX ROWID) OF 'VN_INSTRUMENT'
0 INDEX GOAL: ANALYZED (UNIQUE SCAN)
OF 'VINST_PK' (UNIQUE)
0 VIEW
0 NESTED LOOPS
0 NESTED LOOPS
0 INDEX GOAL: ANALYZED (UNIQUE
SCAN) OF 'VN_GROUP_PK' (UNIQUE)
0 INDEX GOAL: ANALYZED (RANGE SCAN)
OF 'VRT_GRP_INST_PRICE' (NON-UNIQUE)
0 TABLE ACCESS GOAL: ANALYZED (BY
INDEX ROWID) OF 'VN_INSTRUMENT'
0 INDEX GOAL: ANALYZED (UNIQUE
SCAN) OF 'VINST_PK' (UNIQUE)
0 HASH JOIN (ANTI)
0 NESTED LOOPS
0 NESTED LOOPS
0 INDEX GOAL: ANALYZED (UNIQUE SCAN)
OF 'VN_GROUP_PK' (UNIQUE)
0 INDEX GOAL: ANALYZED (RANGE SCAN)
OF 'VRT_GRP_INST_PRICE' (NON-UNIQUE)
0 TABLE ACCESS GOAL: ANALYZED (BY
INDEX ROWID) OF 'VN_INSTRUMENT'
0 INDEX GOAL: ANALYZED (UNIQUE SCAN)
OF 'VINST_PK' (UNIQUE)
0 VIEW OF 'VW_SQ_1'
0 NESTED LOOPS
0 INDEX GOAL: ANALYZED (RANGE SCAN)
OF 'VPR_PK' (UNIQUE)
0 TABLE ACCESS GOAL: ANALYZED (BY
INDEX ROWID) OF 'VN_INSTRUMENT'
0 INDEX GOAL: ANALYZED (UNIQUE
SCAN) OF 'VINST_PK' (UNIQUE)

********************************************************************************

update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,
cache=:7,highwater=:8,audit$=:9,flags=:10
where
obj#=:1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 36533 0.00 2.79 0 0 0 0
Execute 36533 0.00 13.11 0 36567 73825 36533
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 73066 0.00 15.90 0 36567 73825 36533

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 2)

Rows Row Source Operation
------- ---------------------------------------------------
1 UPDATE (cr=1 r=0 w=0 time=588 us)
1 INDEX UNIQUE SCAN I_SEQ1 (cr=1 r=0 w=0 time=43 us)(object id 107)

********************************************************************************

declare
lgroup_id number;
begin
--VN_INTERPOLATION_ALGORITHMS.INTERPOLATE_USING_BENCHMARKS(38, 2519, 'frmInt', 3238, 1, 'Benchmarks', sysdate, lgroup_id);
--VN_INTERPOLATION_ALGORITHMS.INTERPOLATE_USING_BENCHMARKS(38, 2509, 'frmInt', 3210, 1, 'Benchmarks', sysdate, lgroup_id);
VN_INTERPOLATION_ALGORITHMS.INTERPOLATE_USING_BENCHMARKS(54, 2962, 'frmInt', 4208, 1, 'Benchmarks', sysdate, lgroup_id);
dbms_output.put_line('Group id:'||lgroup_id);
end;

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.01 0 0 0 0
Execute 1 0.00 0.03 0 0 0 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.04 0 0 0 1

Misses in library cache during parse: 1
Optimizer goal: ALL_ROWS
Parsing user id: 47 (VNUSER)
********************************************************************************

select file#
from
file$ where ts#=:1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 38 0.00 0.00 0 0 0 0
Execute 38 0.00 0.00 0 0 0 0
Fetch 114 0.00 0.00 0 190 0 76
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 190 0.00 0.01 0 190 0 76

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 3)

Rows Row Source Operation
------- ---------------------------------------------------
2 TABLE ACCESS BY INDEX ROWID FILE$ (cr=5 r=0 w=0 time=181 us)
2 INDEX RANGE SCAN I_FILE2 (cr=3 r=0 w=0 time=119 us)(object id 42)

********************************************************************************

COMMIT


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 1 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 0.00 0.00 0 0 1 0

Misses in library cache during parse: 0
Optimizer goal: ALL_ROWS
Parsing user id: 47 (VNUSER) (recursive depth: 1)
********************************************************************************

update tsq$ set blocks=:3,maxblocks=:4,grantor#=:5,priv1=:6,priv2=:7,priv3=:8
where
ts#=:1 and user#=:2


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 4 1 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 4 1 1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 2)

Rows Row Source Operation
------- ---------------------------------------------------
0 UPDATE (cr=4 r=0 w=0 time=819 us)
1 TABLE ACCESS CLUSTER TSQ$ (cr=4 r=0 w=0 time=423 us)
1 INDEX UNIQUE SCAN I_USER# (cr=1 r=0 w=0 time=26 us)(object id 11)

********************************************************************************

select o.owner#,o.name,o.namespace,o.remoteowner,o.linkname,o.subname,
o.dataobj#,o.flags
from
obj$ o where o.obj#=:1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 3 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.00 0.00 0 3 0 1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 3)
********************************************************************************

SELECT group_sequence.nextval
From
dual


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 3 1 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 3 1 1

Misses in library cache during parse: 0
Optimizer goal: ALL_ROWS
Parsing user id: 47 (VNUSER) (recursive depth: 1)

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: ALL_ROWS
0 SEQUENCE OF 'GROUP_SEQUENCE'
0 TABLE ACCESS (FULL) OF 'DUAL'

********************************************************************************

BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 0 0 1

Misses in library cache during parse: 0
Optimizer goal: ALL_ROWS
Parsing user id: 47 (VNUSER)
********************************************************************************

alter session set sql_trace=FALSE


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 0 0 0

Misses in library cache during parse: 1
Optimizer goal: ALL_ROWS
Parsing user id: 47 (VNUSER)
********************************************************************************

INSERT Into vn_group (group_id, name, asof_date, project_id, user_name, deleted, portfolio_id, algorithm_id, audit_id)
Values (:b8, :b7||'.'||:b6, :b5, :b4, user, 0, :b3, :b2, :b1)

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 18 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 0.00 0.00 0 0 18 1

Misses in library cache during parse: 0
Optimizer goal: ALL_ROWS
Parsing user id: 47 (VNUSER) (recursive depth: 1)

Rows Execution Plan
------- ---------------------------------------------------
0 INSERT STATEMENT GOAL: ALL_ROWS

********************************************************************************

alter session set sql_trace=TRUE


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 0.00 0.00 0 0 0 0

Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: ALL_ROWS
Parsing user id: 47 (VNUSER)
********************************************************************************

SELECT audit_sequence.currval
from
dual


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 3 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 3 0 1

Misses in library cache during parse: 0
Optimizer goal: ALL_ROWS
Parsing user id: 47 (VNUSER) (recursive depth: 1)

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: ALL_ROWS
0 SEQUENCE OF 'AUDIT_SEQUENCE'
0 TABLE ACCESS (FULL) OF 'DUAL'




********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 3 0.00 0.01 0 0 0 0
Execute 4 0.00 0.03 0 0 0 2
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 7 0.00 0.04 0 0 0 2

Misses in library cache during parse: 2
Misses in library cache during execute: 1


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 36573 0.00 2.79 0 0 0 0
Execute 36578 0.00 78.70 559 297319 827641 73067
Fetch 117 0.00 0.00 0 199 1 79
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 73268 0.00 81.51 559 297518 827642 73146

Misses in library cache during parse: 2

9 user SQL statements in session.
36573 internal SQL statements in session.
36582 SQL statements in session.
4 statements EXPLAINed in this session.
********************************************************************************
Trace file: vndbacct_ora_4490.trc
Trace file compatibility: 9.00.01
Sort options: prsela exeela fchela
1 session in tracefile.
9 user SQL statements in trace file.
36573 internal SQL statements in trace file.
36582 SQL statements in trace file.
13 unique SQL statements in trace file.
4 SQL statements EXPLAINed using schema:
VNUSER.prof$plan_table
Default table was used.
Table was created.
Table was dropped.
292843 lines in trace file.




Tom Kyte
July 09, 2004 - 10:51 am UTC

well, as I read it -- you have just observed a 20% reduction in runtimes (over 100 seconds to 80)

*fix your sequence*

don't you see it still getting incremented each and every time????

update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,
cache=:7,highwater=:8,audit$=:9,flags=:10
where
obj#=:1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 36533 0.00 2.79 0 0 0 0
Execute 36533 0.00 13.11 0 36567 73825 36533
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 73066 0.00 15.90 0 36567 73825 36533
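
the one-line fix would be along these lines (valuation_sequence is the sequence named in the trace; 1000 is just a reasonable starting cache size):

alter sequence valuation_sequence cache 1000;

-- and verify it took:
select sequence_name, cache_size
  from user_sequences
 where sequence_name = 'VALUATION_SEQUENCE';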

Shalu, July 09, 2004 - 10:55 am UTC

You mean altering it to cache 1000!!

Excellent

Shalu, July 09, 2004 - 11:29 am UTC

I altered the sequences, and the performance is greatly improved.

Tom, it is simply great to have learned all this in one day. Thanks so much!!

I am just being greedy -- do you think there is any more scope for optimization?

Tom Kyte
July 09, 2004 - 1:42 pm UTC

other than really analyzing the query itself to make it "better" -- no (it is too big to look at here)

Shalu, July 09, 2004 - 2:27 pm UTC

Ok, that's fine. But thanks, all that helped a lot.

I have a question about sequences:

I am going to alter the sequences at our end to make them cache. Is it OK to set the cache to 1000 for all the sequences that we have, or are there parameters we should consider before we decide on that?


Tom Kyte
July 09, 2004 - 4:09 pm UTC

*I have no problem with that*.


you'd have to ask the person that set them to nocache "why did you do that, what was the thought process behind that"
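
a sketch of doing that across the board, driven off the dictionary (this assumes you really do want to touch every nocache sequence you own -- review the list first):

-- nocache sequences show up with cache_size = 0
select sequence_name
  from user_sequences
 where cache_size = 0;

-- generate the ALTER statements and run the output
select 'alter sequence ' || sequence_name || ' cache 1000;'
  from user_sequences
 where cache_size = 0;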

gtts

dxl, September 16, 2004 - 12:12 pm UTC

Tom

I am trying to move some data from one gtt table to another gtt table by doing :

insert into gtt_a select * from gtt_b;

When I have approx 50,000 rows in gtt_b I am hitting the ORA-04030 "out of process memory" error.

This occurs on my 9.2.1 dev instance but NOT on my 8.1.7.4 dev instance, where the code runs in about 1 second.

If I only put 25,000 rows in gtt_b the insert is successful but takes about 1 minute to run, whereas on the 8i instance it takes about 1 second.

What setting/configuration could be affecting this? Is it to do with the temporary tablespace, or the SGA size?

Is there any more info I can post for you?

Tom Kyte
September 16, 2004 - 1:06 pm UTC

ops$tkyte@ORA9IR2> create table gtt1 as select * from all_objects where 1=0;
 
Table created.
 
ops$tkyte@ORA9IR2> create table gtt2 as select * from all_objects where 1=0;
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> @connect /
ops$tkyte@ORA9IR2> set termout off
ops$tkyte@ORA9IR2> set termout on
ops$tkyte@ORA9IR2> @mystat "session pga memory"
ops$tkyte@ORA9IR2> set echo off
 
NAME                                VALUE
------------------------------ ----------
session pga memory                 294556
session pga memory max             294556
 
ops$tkyte@ORA9IR2> insert into gtt1 select * from all_objects;
 
27902 rows created.
 
ops$tkyte@ORA9IR2> @mystat2
ops$tkyte@ORA9IR2> set echo off
 
NAME                                    V       DIFF
------------------------------ ---------- ----------
session pga memory                 360092      65536
session pga memory max             360092      65536
 
ops$tkyte@ORA9IR2> @mystat "session pga memory"
ops$tkyte@ORA9IR2> set echo off
 
NAME                                VALUE
------------------------------ ----------
session pga memory                 360092
session pga memory max             360092
 
ops$tkyte@ORA9IR2> insert into gtt2 select * from gtt1;
 
27902 rows created.
 
ops$tkyte@ORA9IR2> @mystat2
ops$tkyte@ORA9IR2> set echo off
 
NAME                                    V       DIFF
------------------------------ ---------- ----------
session pga memory                 360092          0
session pga memory max             360092          0



<b>I would need a method to reproduce - i cannot measure anything different</b>


using both manual and auto memory management, i cannot see it bumping up memory.
 

Ok

dxl, September 17, 2004 - 4:32 am UTC

Ok Tom here goes:

The following script will install all you need to reproduce:




CREATE TABLE LOADER_CONFIG (
SYSTEM_NAME VARCHAR2(15),
FILE_TYPE VARCHAR2(15),
LEADING_CHAR VARCHAR2(10),
ELEMENT_NUMBER NUMBER(5),
TEXT_DELIMITER VARCHAR2(15),
FIELD_DELIMITER VARCHAR2(15),
DESCRIPTION VARCHAR2(15),
DESTINATION VARCHAR2(15));


-- insert some data
INSERT INTO loader_config values
('SYSTEM1','UPLOAD','SYS',5, '"',',', 'PATIENT', 'GTT_PATIENT');
INSERT INTO loader_config values
('SYSTEM1','UPLOAD',null,3, '"',',', 'HOSPITAL', 'GTT_TRAUMA');
INSERT INTO loader_config values
('SYSTEM2','UPLOAD','1',4, null,',', 'PATIENT', 'GTT_PATIENT');


CREATE global temporary table gtt_RawData
(App_id Number(38),
Data Varchar2(4000))
ON COMMIT DELETE ROWS;


CREATE global temporary table gtt_BadData
(App_id Number(38),
Data Varchar2(4000))
ON COMMIT DELETE ROWS;


CREATE OR REPLACE CONTEXT LOAD_CTX USING PKG_LOADER;


CREATE OR REPLACE PACKAGE PKG_LOADER AS

----------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------
PROCEDURE P_RUN_LOADER ( P_System IN VARCHAR2,
P_File_Type IN VARCHAR2);


----------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------
PROCEDURE P_LOAD ( P_GTT_STAGE_NAME IN LOADER_CONFIG.Destination%TYPE,
P_Leading_Char IN LOADER_CONFIG.Leading_Char%TYPE,
P_Element_Number IN LOADER_CONFIG.Element_Number%TYPE,
P_Field_Delimiter IN LOADER_CONFIG.Field_Delimiter%TYPE,
P_Text_Delimiter IN LOADER_CONFIG.Text_Delimiter%TYPE);


----------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------
PROCEDURE P_Build_GTT_STAGE ( P_Element_Number IN NUMBER,
P_Destination IN VARCHAR2);


----------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------
PROCEDURE P_RUN_Build_GTT_STAGE ( P_System IN VARCHAR2,
P_File_Type IN VARCHAR2);


----------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------
PROCEDURE P_Build_GTT_BADDATA ( P_Element_Number IN NUMBER,
P_Destination IN VARCHAR2);


END PKG_LOADER;
/


CREATE OR REPLACE PACKAGE BODY PKG_LOADER AS

----------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------
PROCEDURE P_RUN_LOADER ( P_System IN VARCHAR2,
P_File_Type IN VARCHAR2)

AS

V_Query varchar2(32000);
V_Field_Number number(38);
V_Delim_Number number(38);
--V_Delimiter varchar2(10) := P_Text_Delimiter || P_Field_Delimiter || P_Text_Delimiter;
V_Exclude varchar2(1000);
V_sql varchar2(1000);

V_Delimiter varchar2(10);
i integer;

CURSOR c IS SELECT SYSTEM_NAME,
FILE_TYPE,
LEADING_CHAR,
ELEMENT_NUMBER,
TEXT_DELIMITER,
FIELD_DELIMITER,
DESCRIPTION,
DESTINATION
FROM Loader_Config
WHERE System_name = P_System
AND File_Type = P_File_Type;


BEGIN


V_Exclude := '(';

i := 0;

For rec_Cur IN (SELECT Namespace,
Attribute
FROM Session_Context
WHERE Namespace = 'LOAD_CTX'
AND Attribute LIKE 'IN\_LIST\_%' escape '\' ) LOOP

DBMS_SESSION.SET_CONTEXT( rec_Cur.Namespace||'', rec_Cur.Attribute||'', null);
END LOOP;

FOR rec_Cur IN c LOOP

dbms_output.put_line('rec_Cur.Destination = '|| rec_Cur.Destination);
dbms_output.put_line('rec_Cur.Element_Number = '|| rec_Cur.Element_Number);
dbms_output.put_line('rec_Cur.Field_Delimiter = '|| rec_Cur.Field_Delimiter);
dbms_output.put_line('rec_Cur.Text_Delimiter = '|| rec_Cur.Text_Delimiter);

P_LOAD ( rec_Cur.Destination,
rec_Cur.Leading_Char,
rec_Cur.Element_Number,
rec_Cur.Field_Delimiter,
rec_Cur.Text_Delimiter);

V_Delimiter := rec_Cur.Text_Delimiter || rec_Cur.Field_Delimiter || rec_Cur.Text_Delimiter;
-- V_Exclude := V_Exclude || (rec_Cur.Element_Number - 1) || ',' ;

i := i + 1;

-- IF rec_Cur.Leading_Char IS NULL THEN

DBMS_SESSION.SET_CONTEXT( 'LOAD_CTX', 'in_list_' || i, (rec_Cur.Element_Number - 1) );
-- ELSE

-- DBMS_SESSION.SET_CONTEXT( 'LOAD_CTX', 'in_list_' || i, (rec_Cur.Element_Number) );
-- END IF;


END LOOP;

--V_Exclude := RTRIM(V_Exclude, ',') ||')' ;

--dbms_output.put_line('V_Exclude = '|| V_Exclude );

---------------------------------------------------------------------------------------------------------------
-- Log bad records which have wrong number of elements

V_SQL := '
INSERT INTO gtt_BadData
SELECT App_id,
Data
FROM gtt_RawData
WHERE ((length(data) - length(replace(data, :V_Delimiter, '''')) ) /
(length(:V_Delimiter))) NOT IN ( SELECT TO_NUMBER(Value)
FROM Session_Context
WHERE Namespace = ''LOAD_CTX''
AND Attribute LIKE ''IN\_LIST\_%'' escape ''\''
AND VALUE IS NOT NULL)';

--WHERE ((length(data) - length(replace(data, :V_Delimiter, ''~'')) ) / (length(:V_Delimiter) - 1)) NOT IN ' || V_Exclude ;

--pkg_common.p('V_sql delete = '|| V_sql);


--dbms_output.put_line('V_SQL = '|| V_SQL);

EXECUTE IMMEDIATE V_SQL USING V_Delimiter, V_Delimiter;

--dbms_output.put_line('no records = '|| c%rowcount );





END P_RUN_LOADER ;






----------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------
PROCEDURE P_LOAD ( P_GTT_STAGE_NAME IN LOADER_CONFIG.Destination%TYPE,
P_Leading_Char IN LOADER_CONFIG.Leading_Char%TYPE,
P_Element_Number IN LOADER_CONFIG.Element_Number%TYPE,
P_Field_Delimiter IN LOADER_CONFIG.Field_Delimiter%TYPE,
P_Text_Delimiter IN LOADER_CONFIG.Text_Delimiter%TYPE)

AS

V_Query varchar2(32000);
V_Element_Number number(38) := P_Element_Number;
V_Delim_Number number(38);
V_Delimiter varchar2(10) := P_Text_Delimiter || P_Field_Delimiter || P_Text_Delimiter;
V_Text_Delimiter LOADER_CONFIG.Text_Delimiter%TYPE;

BEGIN

IF P_Text_Delimiter IS NULL THEN
V_Text_Delimiter := '"';
ELSE
V_Text_Delimiter := P_Text_Delimiter ;
END IF;



V_Query := 'Insert into ' || P_GTT_STAGE_NAME || ' select 1 AS App_ID';



IF P_Leading_Char IS NULL THEN
V_Query := V_Query || ' ,substr( data, 1, delim1-1 ) c1 ';
--ELSE
-- V_Element_Number := V_Element_Number + 1;
END IF;

V_Delim_Number := V_Element_Number - 1;

FOR i in 1 .. V_Delim_Number LOOP

V_Query := V_Query || '
,substr( data, delim' || i || '+' ||length(V_Delimiter) ||', delim' || (i+1)
|| ' - delim' || i || ' -' ||length(V_Delimiter)|| ' ) c' || (i+1) ;

END LOOP;

V_Query := V_Query || ' from (select ';


FOR i in 1 .. V_Delim_Number LOOP
V_Query := V_Query || ' instr(data,''' || V_Delimiter || ''',1,' || i || ') delim' || i ||',' ;
END LOOP;

V_Query := V_Query || '
length(data)+1 delim' || (V_Delim_Number + 1) || ',
data
from (
select rtrim(ltrim(data, ''' || V_Text_Delimiter || '''), ''' || V_Text_Delimiter || ''') data
from gtt_RawData
where (length(data) - length(replace(data, ''' || V_Delimiter || ''', '''')) ) /
(length(''' || V_Delimiter || ''')) =' || V_Delim_Number || '
)
)';

-- can enable checking of leading char by uncommenting the following lines:

--IF P_Leading_Char IS NOT NULL THEN
-- V_Query := V_Query || ' WHERE substr( data, 1, delim1-1 ) = ''' || P_Leading_Char || '''';
--END IF;

--pkg_common.p(V_Query);

EXECUTE IMMEDIATE V_Query;






END P_LOAD ;




----------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------
PROCEDURE P_Build_GTT_STAGE ( P_Element_Number IN NUMBER,
P_Destination IN VARCHAR2)

AS

V_Sql VARCHAR2(4000);

TABLE_BUILT EXCEPTION;

PRAGMA EXCEPTION_INIT (TABLE_BUILT , -942);

BEGIN

V_Sql := 'CREATE global temporary table ' || P_Destination || ' (App_id Number(38)';

FOR i IN 1 .. P_Element_Number LOOP

V_Sql := V_Sql || ', C' || i || ' VARCHAR2(4000)';

END LOOP;

V_Sql := V_Sql || ') ON COMMIT DELETE ROWS';

BEGIN
EXECUTE IMMEDIATE 'Drop Table ' || P_Destination ;
EXCEPTION WHEN TABLE_BUILT THEN
NULL;
END;

--dbms_output.put_line(v_Sql);

EXECUTE IMMEDIATE V_Sql;


END P_Build_GTT_STAGE ;







----------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------
PROCEDURE P_RUN_Build_GTT_STAGE ( P_System IN VARCHAR2,
P_File_Type IN VARCHAR2)

AS

V_Query varchar2(32000);
V_Delim_Number number(38);
V_Element_Number LOADER_CONFIG.Element_Number%TYPE;


--V_Delimiter varchar2(10) := P_Text_Delimiter || P_Field_Delimiter || P_Text_Delimiter;

CURSOR c IS SELECT SYSTEM_NAME,
FILE_TYPE,
LEADING_CHAR,
ELEMENT_NUMBER,
TEXT_DELIMITER,
FIELD_DELIMITER,
DESCRIPTION,
DESTINATION
FROM Loader_Config
WHERE System_name = P_System
AND File_Type = P_File_Type;


BEGIN


FOR rec_Cur IN c LOOP

IF rec_Cur.Leading_Char IS NOT NULL THEN

V_Element_Number := rec_Cur.Element_Number - 1;

ELSE
V_Element_Number := rec_Cur.Element_Number;

END IF;

P_Build_GTT_STAGE ( V_Element_Number,
rec_Cur.Destination);


END LOOP;






END P_RUN_Build_GTT_STAGE ;






----------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------
PROCEDURE P_Build_GTT_BADDATA ( P_Element_Number IN NUMBER,
P_Destination IN VARCHAR2)

AS

V_Sql VARCHAR2(4000);

TABLE_BUILT EXCEPTION;

PRAGMA EXCEPTION_INIT (TABLE_BUILT , -942);

BEGIN

V_Sql := 'CREATE global temporary table ' || P_Destination || ' (App_id Number(38), Data VARCHAR2(4000)) ON COMMIT DELETE ROWS';

BEGIN
EXECUTE IMMEDIATE 'Drop Table ' || P_Destination ;
EXCEPTION WHEN TABLE_BUILT THEN
NULL;
END;

--dbms_output.put_line(v_Sql);

EXECUTE IMMEDIATE V_Sql;






END P_Build_GTT_BADDATA ;





END PKG_LOADER ;
/






--now to run the code

-- builds the gtts
exec PKG_LOADER.P_RUN_Build_GTT_STAGE ('SYSTEM1','UPLOAD');


-- insert some data

Insert into gtt_rawdata
select 1,
'"'||owner||'","'||object_name||'","'||timestamp||'","'||object_id||'","'||status||'"' data
from all_objects;

Insert into gtt_rawdata
select 1,
'"'||owner||'","'||object_name||'","'||object_id||'"' data
from all_objects;

insert into gtt_rawdata values (1, '"fas,","dfdsa","afdf","fdsafd","ffaadf","asfwfw","');
insert into gtt_rawdata values (1, '"fas,","dfdsa"');


The package is trying to take a CSV data line and split it into separate fields by building a dynamic SQL statement which will look like:

insert into gtt_patient
select substr( data, 1, delim1-1 ) c1,
substr( data, delim1+1, delim2-delim1-1 ) c2,
substr( data, delim2+1, delim3-delim2-1 ) c3,
substr( data, delim3+1, delim4-delim3-1 ) c4
from (
select instr(data,',',1,1) delim1,
instr(data,',',1,2) delim2,
instr(data,',',1,3) delim3,
length(data)+1 delim4,
data
from gtt_rawdata
);

And it is when it runs this SQL that I believe it is getting the error.



That's all you need to set it up. Now, to reproduce the error:

-- run the loader
exec PKG_LOADER.P_RUN_LOADER ('SYSTEM1','UPLOAD');








This is where I get the error:

BEGIN PKG_LOADER.P_RUN_LOADER ('SYSTEM1','UPLOAD'); END;

*
ERROR at line 1:
ORA-04030: out of process memory when trying to allocate 8200 bytes (cursor work he,kzctxup alloc)
ORA-06512: at "TEST.PKG_LOADER", line 104
ORA-06512: at line 1




As I said previously, this worked fine on my 8.1.7.4 instance (really quick, less than 5 seconds), but on my 9i instance I either get the above error or it takes over a minute to run.

Have I got an initialisation parameter set wrongly?

Tom Kyte
September 17, 2004 - 9:14 am UTC

<quote>
I am trying to move some data from one gtt table to another gtt table by doing :

insert into gtt_a select * from gtt_b;
</quote>

this is a little different isn't it?

so, what is line 104 of your very large process.

.

dxl, September 17, 2004 - 9:48 am UTC

I know this is a longer process, but I thought I knew where it was falling over, so I tried to simplify the problem for you previously.

However, it is a bit strange, because when I've run it before I could have sworn that a different line fell over (I can't be sure though). Anyway, line 104 is in the P_RUN_LOADER procedure, towards the end:

"....

V_SQL := '
INSERT INTO gtt_BadData
SELECT App_id,
Data
FROM gtt_RawData
WHERE ((length(data) - length(replace(data, :V_Delimiter, '''')) ) /
(length(:V_Delimiter))) NOT IN ( SELECT TO_NUMBER(Value)
FROM Session_Context
WHERE Namespace = ''LOAD_CTX''
AND Attribute LIKE ''IN\_LIST\_%'' escape ''\''
AND VALUE IS NOT NULL)';

--WHERE ((length(data) - length(replace(data, :V_Delimiter, ''~'')) ) / (length(:V_Delimiter) - 1)) NOT IN ' || V_Exclude ;

pkg_common.p('V_sql delete = '|| V_sql);


--dbms_output.put_line('V_SQL = '|| V_SQL);

EXECUTE IMMEDIATE V_SQL USING V_Delimiter, V_Delimiter; -- this is line 104


..."

It is running the SQL that moves data from one GTT to another. Why would this run out of process memory?

Could it be some general init parameter that isn't set up right, i.e. not specific to this package?

Tom Kyte
September 17, 2004 - 10:19 am UTC

no, it could be a really bad plan.

what if, instead of executing it, you print it out and just run it from sqlplus? (goal = get a test case with 3 or 4 statements in it to reproduce with)

.

dxl, September 17, 2004 - 12:16 pm UTC

OK, I think I'm narrowing it down a bit.
Whilst I have been testing this on our HP box (9i), our unix administrator informed me that the box had run out of memory -- something had consumed all of its RAM so that it was swapping like mad!

So we shut down the 9i instance and restarted it. I then used Enterprise Manager to look at the pga_aggregate_target advice, which is set to 100 MB. Then I ran my procedure and the overflow range on the graph shot up to 400 MB; meanwhile the unix guys said that all the memory had gone again and the box was starting to swap.

So my procedure is eating OS memory for some reason. I'm not too familiar with PGA memory, only the SGA.


What exactly could be causing the PGA memory to increase? Is it something to do with the fact that I'm using GTTs??
I thought that GTTs just existed in the temp tablespace, so why are they causing PGA to increase?

Or could it be something else in my package?

Is PGA memory supposed to just increase beyond control, or is this a leak?


In response to your request, I also ran the SQL it was complaining about (at line 104) in sqlplus, and it completed OK with the following trace:

16:19:59 TEST@DEV9>INSERT INTO gtt_BadData
16:19:59 2 SELECT App_id,
16:19:59 3 Data
16:19:59 4 FROM gtt_RawData
16:19:59 5 WHERE ((length(data) - length(replace(data, '","', '')) ) /
16:19:59 6 (length('","'))) NOT IN ( 4 );

29239 rows created.

Elapsed: 00:00:01.01

Execution Plan
----------------------------------------------------------
0 INSERT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'GTT_RAWDATA'




Statistics
----------------------------------------------------------
16 recursive calls
30497 db block gets
581 consistent gets
0 physical reads
4226840 redo size
446 bytes sent via SQL*Net to client
574 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
29239 rows processed


Please can you help with this.

Thanks

Tom Kyte
September 17, 2004 - 1:18 pm UTC

it could be a leak; that is why we need to get a pared-down example if this still happens.

but -- if you truly did run out of memory on the machine -- had something consuming all of it -- that would definitely cause this to happen easily.
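
a small sketch for watching where the memory goes -- the same v$mystat technique shown earlier, taken as a snapshot before and after the suspect insert (statistic names as they appear in 9i):

select a.name, b.value
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and a.name in ( 'session pga memory', 'session pga memory max' );

-- run the insert, re-run the query, and compare the two snapshots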

.

dxl, September 20, 2004 - 6:11 am UTC

I have experimented with the part which does the insert for the bad rows ie :

V_SQL := '
INSERT INTO gtt_BadData
SELECT App_id,
Data
FROM gtt_RawData
WHERE ((length(data) - length(replace(data, :V_Delimiter, '''')) ) /
(length(:V_Delimiter))) NOT IN ( SELECT TO_NUMBER(Value)
FROM Session_Context
WHERE Namespace = ''LOAD_CTX''
AND Attribute LIKE ''IN\_LIST\_%'' escape ''\''
AND VALUE IS NOT NULL)';


I took this out and the procedure ran in 2 seconds, as it does on the 8i db.

So I then replaced the code above with:

V_SQL := '
INSERT INTO gtt_BadData
SELECT App_id,
Data
FROM gtt_RawData
WHERE ((length(data) - length(replace(data, :V_Delimiter, '''')) ) /
(length(:V_Delimiter))) NOT IN ( 4,2)';


i.e. the hard-coded values, without looking them up in the namespace. This code also ran quickly, in 2 seconds. When I revert back to the original it takes 2 minutes.

Why would looking them up in the namespace with a subquery generate such bad performance??
The only reason I'm using this method, and not just building a dynamic list of IN values, is to speed up the process by utilising bind variables!!
Since this is a loader and will not be run every day, maybe I should (as a workaround) forget about the session contexts and just build the IN list dynamically without bind variables??!



Tom Kyte
September 20, 2004 - 8:48 am UTC

10046 level 12 trace in both systems and compare plans.
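
For reference, a sketch of one way to capture that trace in each session (the same event setting is used later in this thread) -- run the statement between the two ALTER SESSION calls, then feed the resulting trace file to tkprof:

alter session set timed_statistics = true;
alter session set events '10046 trace name context forever, level 12';

-- run the statement to be compared here

alter session set events '10046 trace name context off';

-- then, on the server:  tkprof <your_trace_file>.trc report.txt sys=no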

Location of c%notfound

A reader, September 20, 2004 - 9:56 am UTC

You said:

"Use the c%notfound at the bottom like I do and this won't happen."

..as in..
...
open c;
loop
/* process data */
....
exit when c%notfound;
end loop;

How about..
...
open c;
loop
exit when c%notfound;
/* process data */
end loop;

They're basically the same... aren't they?

Tom Kyte
September 20, 2004 - 10:51 am UTC

not with bulk collections.


open c;
loop
fetch c BULK COLLECT into x limit 100;

if you got 50 rows, the last 50, c%notfound would be true, but you
need to process them HERE

and then:

exit when c%notfound; -- after processing them
end loop;
close c;
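
For illustration, a minimal runnable sketch of that pattern, using ALL_OBJECTS as a stand-in source -- the %notfound test comes after the processing, so a short final batch is not lost:

declare
    type numtab is table of number index by binary_integer;
    l_data numtab;
    cursor c is select object_id from all_objects;
begin
    open c;
    loop
        fetch c bulk collect into l_data limit 100;
        -- process whatever was fetched FIRST (could be fewer than 100 rows)
        for i in 1 .. l_data.count loop
            null;   -- real processing would go here
        end loop;
        -- only then test %notfound, so the short last batch is not skipped
        exit when c%notfound;
    end loop;
    close c;
end;
/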



Bulk Load

Bijay K Pusty, September 20, 2004 - 2:56 pm UTC

cl scr
set serveroutput on size 100000
set echo on
set pause on
set timing on

drop sequence sq
/
create sequence sq
/
select sq.nextval from dual
/
drop table test
/
create table test(
SOL NUMBER,
SOCST VARCHAR2(20))
/
drop table test1
/
create table test1 as select * from test where 1>2
/
alter table test1 add constraint sol_pk primary key(sol)
/
insert into test values(sq.nextval,'A '||sq.nextval)
/
insert into test values(sq.nextval,'A '||sq.nextval)
/
insert into test values(sq.nextval,'A '||sq.nextval)
/
insert into test values(sq.nextval,'A '||sq.nextval)
/
commit;
select * from test
/
select * from test1
/


CREATE OR REPLACE PROCEDURE p1 IS

-- TABLE DEFINITIONS


-- SOURCE --

TYPE src_SOCST_Type IS TABLE OF TEST.SOCST%TYPE ;
TYPE src_SOL_Type IS TABLE OF TEST.SOL%TYPE ;

src_SOCST_Ary src_SOCST_Type ;
src_SOL_Ary src_SOL_Type ;

-- Target --

TYPE Trg_SOCST_Type IS TABLE OF TEST.SOCST%TYPE INDEX BY BINARY_INTEGER ;
TYPE Trg_SOL_Type IS TABLE OF TEST.SOL%TYPE INDEX BY BINARY_INTEGER ;


EMPTY_Trg_SOCST_Ary Trg_SOCST_Type ;
EMPTY_Trg_SOL_Ary Trg_SOL_Type ;

Trg_SOCST_Ary Trg_SOCST_Type default EMPTY_Trg_SOCST_Ary ;
Trg_SOL_Ary Trg_SOL_Type default EMPTY_Trg_SOL_Ary ;

j number :=0;
k number :=0;
error_count number :=0;
Cursor C1 is
Select
SOCST,
SOL
FROM test;

Begin

OPEN C1;
LOOP
FETCH C1 BULK COLLECT INTO
src_SOCST_Ary,
src_SOL_Ary
LIMIT 5;

begin
FORALL k in 1 .. src_SOCST_Ary.count SAVE EXCEPTIONS
INSERT INTO test1(SOCST,SOL) VALUES (Src_SOCST_Ary(k),Src_SOL_Ary (k));
EXCEPTION
WHEN OTHERS THEN
error_count := SQL%BULK_EXCEPTIONS.COUNT;
dbms_output.put_line('No of Errors :' ||error_count);

end;
for indx in 1..error_count LOOP
dbms_output.put_line('Error '||indx || ' in Iteration '||SQL%BULK_EXCEPTIONS(indx).ERROR_INDEX);
dbms_output.put_line('Error Code is :'||(-SQL%BULK_EXCEPTIONS(indx).ERROR_CODE));
dbms_output.put_line('Error Msg is :'||SQLERRM(-SQL%BULK_EXCEPTIONS(indx).ERROR_CODE));
END LOOP;
Dbms_output.put_line('Committed ...!!!');
EXIT WHEN c1%NOTFOUND;


END LOOP;
CLOSE c1;
Dbms_output.put_line('Completed ...!!!');
END p1;
/
show err
--****************************************************************************
--============================================================================


Tom Kyte
September 20, 2004 - 4:20 pm UTC

pretty?

not sure why it is here, but.....

Location of c%notfound

A reader, September 20, 2004 - 3:59 pm UTC

  1  declare
  2        type array is table of number index by binary_integer;
  3        l_data array;
  4        cursor c is select empno from emp;
  5    begin
  6        open c;
  7        loop
  8           exit when c%notfound;
  9            fetch c bulk collect into l_data limit 10;
 10           /* process data here */
 11            dbms_output.put_line( 'Looping, c%rowcount = ' || c%rowcount );
 12        end loop;
 13        dbms_output.put_line( 'out of loop, c%rowcount = ' || c%rowcount );
 14        close c;
 15*   end;
SQL> /
Looping, c%rowcount = 10
Looping, c%rowcount = 14
out of loop, c%rowcount = 14

PL/SQL procedure successfully completed.

I used your example, but it is not reproducing what you are saying. From what you said, I understood that it should have exited the loop once it tried to fetch the next batch of rows -- since the next fetch would return only 4 rows, it would have set %NOTFOUND... but somehow it does not. Any insight? (8.1.7 here)

Tom Kyte
September 20, 2004 - 4:50 pm UTC

ahh, by "top" of loop -- i assumed you meant "at the top, but after the fetch"

Yes, that would work -- looks funny to me, but it works.

.

dxl, September 21, 2004 - 5:51 am UTC

The output from tkprof on the 8i db is:

********************************************************************************

SELECT App_id,
Data
FROM gtt_RawData
WHERE ((length(data) - length(replace(data, '","', '')) ) /
(length('","'))) NOT IN ( SELECT TO_NUMBER(Value)
FROM Session_Context
WHERE Namespace = 'LOAD_CTX'
AND Attribute LIKE 'IN\_LIST\_%' escape '\'
AND VALUE IS NOT NULL)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      1.03       1.03          0        453          4           2
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      1.03       1.03          0        453          4           2

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 34



********************************************************************************



The output from tkprof on the 9i db is:

********************************************************************************

SELECT App_id,
Data
FROM gtt_RawData
WHERE ((length(data) - length(replace(data, '","', '')) ) /
(length('","'))) NOT IN ( SELECT TO_NUMBER(Value)
FROM Session_Context
WHERE Namespace = 'LOAD_CTX'
AND Attribute LIKE 'IN\_LIST\_%' escape '\'
AND VALUE IS NOT NULL)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.02          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      3.40      15.44          0        525          0           2
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      3.41      15.47          0        525          0           2

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 60

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 1 0.02 0.02



********************************************************************************



Whilst this elapsed time is a lot more than on the 8i instance (i.e. 15 seconds), the actual time it took to run was over 2 minutes:

10:07:01 TEST@DEV9>SELECT App_id,
10:10:58 2 Data
10:10:58 3 FROM gtt_RawData
10:10:58 4 WHERE ((length(data) - length(replace(data, '","', '')) ) /
10:10:58 5 (length('","'))) NOT IN ( SELECT TO_NUMBER(Value)
10:10:58 6 FROM Session_Context
10:10:58 7 WHERE Namespace = 'LOAD_CTX'
10:10:58 8 AND Attribute LIKE 'IN\_LIST\_%' escape '\'
10:10:58 9 AND VALUE IS NOT NULL);

APP_ID
----------
DATA
---------------------------------------------------------------------------------
1
"fas,","dfdsa","afdf","fdsafd","ffaadf","asfwfw","

1
"fas,","dfdsa"


2 rows selected.

Elapsed: 00:02:22.00
10:13:20 TEST@DEV9>

1)
How come the trace output elapsed time is only 15 seconds but the actual query run time is over 2 minutes? Is this not strange? How did I lose 2 minutes in the tracing?

2) There also don't appear to be any plans in the output. Is this because they are GTT tables?

Tom Kyte
September 21, 2004 - 7:50 am UTC

are you testing on even remotely the same hardware? is the load the same on the two machines.

In light of "no waits", I can only guess the test 9i machine is currently cpu abused -- cpu bound.


The plans will be there if you exit SQLPlus before running tkprof on the trace files.


<quote>
Whilst this elapsed time is a lot more than the 8i instance ie 15 seconds the
actual time it took to run was over 2minutes :
</quote>

if that is true -- you have a serious configuration issue with your network. Look at the 8i tkprof -- if it took you more than 1 second to get the results -- you have a problem. That it took 15 seconds indicates a "huge problem of excessively large proportions"

solved -- on this machine at least

dxl, September 21, 2004 - 10:33 am UTC

<quote>
are you testing on even remotely the same hardware? is the load the same on the two machines.
</quote>

-- yes the 8i and the 9i instances are installed on the same hp box under 2 separate oracle homes.

<quote>
In light of "no waits", I can only guess the test 9i machine is currently cpu
abused -- cpu bound.
</quote>

-- so you mean i may have run out of cpu?

<quote>
The plans will be there if you exit SQLPlus before running tkprof on the trace
files.
</quote>

-- i didn't know that , thanks for the tip!


<quote>
if that is true -- you have a serious configuration issue with your network.
Look at the 8i tkprof -- if it took you more than 1 second to get the results --
you have a problem. That is took 15 seconds indicates "huge problem of
excessively large proportions"
</quote>


-- Not sure I understand this. The tkprof showed an elapsed time of 15 seconds for the 9i instance, which actually ran in 2 minutes; are you saying that the discrepancy here was in the network time my client PC took in telling me the procedure had finished? Please explain.


I re-ran the tracing to get the plan again using
alter session set events '10046 TRACE NAME CONTEXT FOREVER, LEVEL 12';
I ran this from SQL*Plus on the db server itself, so no network was involved. Here is the relevant tkprof:


********************************************************************************

SELECT App_id,
Data
FROM gtt_RawData
WHERE ((length(data) - length(replace(data, '","', '')) ) /
(length('","'))) NOT IN ( SELECT TO_NUMBER(Value)
FROM Session_Context
WHERE Namespace = 'LOAD_CTX'
AND Attribute LIKE 'IN\_LIST\_%' escape '\'
AND VALUE IS NOT NULL)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      2.53      15.05          0        439          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      2.53      15.06          0        439          0           0

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 60

Rows Row Source Operation
------- ---------------------------------------------------
0 FILTER
46532 TABLE ACCESS FULL OBJ#(30548)
46531 FIXED TABLE FULL X$CONTEXT


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net break/reset to client 2 0.86 0.88
SQL*Net message from client 1 4.54 4.54



********************************************************************************


This was run on the db server, and yet even though the tkprof shows 15 seconds elapsed, I had to wait over 2 minutes for the SQL to finish.
What could be wrong?!

Tom Kyte
September 21, 2004 - 11:11 am UTC

-- so you mean i may have run out of cpu?

in order to get 2.53 seconds of CPU time, you needed 15 seconds on the wall clock. someone else was using CPU for 13 seconds whilst you were trying.



-- not sure i understand this? the tkprof showed an elapsed time of 15 seconds
for the 9i instance which ran in 2 minutes, are you saying that the discrepancy
here was in the network time my client pc took in telling me the procedure had
finished?? please explain.

if tkprof says "i took 15 seconds to run the query" but sqlplus says "I took over 2 minutes to SHOW YOU the results of the work that took 15 seconds" -- you have a 1 minute 45 second plus gap of time you need to account for. It wasn't in the database



do you have the 8i plan and if you:


SELECT App_id,
Data
FROM gtt_RawData,
( SELECT distinct TO_NUMBER(Value) d
FROM Session_Context
WHERE Namespace = 'LOAD_CTX'
AND Attribute LIKE 'IN\_LIST\_%' escape '\'
AND VALUE IS NOT NULL) x
WHERE ((length(data) - length(replace(data, '","', '')) ) / (length('","'))) = x.d(+)
and x.d is null
/



how does that go?

8i result

dxl, September 21, 2004 - 11:43 am UTC

here is the 8i tkprof:

********************************************************************************

SELECT App_id,
Data
FROM gtt_RawData
WHERE ((length(data) - length(replace(data, '","', '')) ) /
(length('","'))) NOT IN ( SELECT TO_NUMBER(Value)
FROM Session_Context
WHERE Namespace = 'LOAD_CTX'
AND Attribute LIKE 'IN\_LIST\_%' escape '\'
AND VALUE IS NOT NULL)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.02          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.90       0.90          0        453          4           2
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.90       0.92          0        453          4           2

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 34

Rows Row Source Operation
------- ---------------------------------------------------
2 FILTER
50073 TABLE ACCESS FULL GTT_RAWDATA
50072 FIXED TABLE FULL X$CONTEXT




********************************************************************************


I don't understand what else is using CPU, because it doesn't occur with the 8i version yet consistently happens on the 9i version. No one else is using this dev server.

Also I am running from SQL*Plus on the db server, so it can't be network time causing the 15-second to 2-minute gap; it must be an OS problem.

My pga_aggregate_target is set to 100M and shoots up to 400M overflow when I run this on the 9i instance. Could the OS be having memory problems? If so it must also be related to a 9i issue, since it doesn't occur on 8i on the same machine.

I am stumped; I don't know what is going on.

Tom Kyte
September 21, 2004 - 11:58 am UTC

did you try the alternate query i gave you? (i'm trying to make you get well soon, in as much as figure out what is wrong -- once we get a viable workaround, we can then send a test case off to support to correct the base problem if any)

Replication or Not

reader, September 22, 2004 - 3:29 am UTC

Hi Tom,
In our case also we are required to fetch around 10 million rows from the source system in a daily extract for the data warehouse. The source system has a load of batch processes which take a few hours to finish, during which there are several updates in the tables, sometimes on the same records. We were considering the option of replication using snapshot logs. We only want the incremental data, hence no complete refresh; only fast refresh using materialized view logs is under consideration. We are worried about the performance degradation on the source side and see no other option than this.
Can you please share your views on the right approach we should be taking?

Tom Kyte
September 22, 2004 - 8:01 am UTC

one can only measure this by benchmarking in your system -- setup a test system and see.

*anything you do* will add additional processing to the source system. *anything*.


If you just need to pull changes, you could have the source system timestamp each and every record.

You could use streams and mine the redo logs (9ir2 -- see the streams documentation)

You could use basic read only replication as you describe.
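
For that last option, a minimal sketch (the table and database-link names here are hypothetical):

-- on the source system: record changes for fast refresh
create materialized view log on master_t with primary key;

-- on the pulling side: a read-only snapshot refreshed incrementally
create materialized view master_t_mv
    refresh fast on demand
    as select * from master_t@source_link;

-- e.g. nightly, pull just the changes recorded in the MV log
begin
    dbms_mview.refresh( list => 'MASTER_T_MV', method => 'F' );
end;
/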


that worked

dxl, September 22, 2004 - 5:16 am UTC

yes the workaround you posted worked great, thanks.

As for the original sql:

I have opened a TAR with support but at the moment they seem to be focusing on the fact that I get the ORA-04030 error when I set the pga_aggregate_target parameter, so they say I need more swap space.
BUT
if I shut down some other instances so that I have more memory available, then I don't get the ORA-04030 error anymore, but it still takes 2 minutes to run -- whereas your workaround takes 2 seconds no matter how much memory is available.

Do you think this really is an issue with PGA and memory? I mean, how can the original SQL consume so much more PGA memory than your workaround? Are the plans really that different and that much less efficient?



Thanks for your solution though; I think I'll use it anyway as, although possibly less intuitive, it seems far more elegant SQL!


replication or not

reader, September 22, 2004 - 1:39 pm UTC

Tom, thanks for your reply.
But the read-only snapshots cannot 'really' provide incremental data, can they? Every time I use a fast refresh, it will pull the changed records into the materialized view, which is then an exact replica of the source. I still have to do 'something' to get the changed/added records from the materialized view. Isn't that true?

regards





Tom Kyte
September 22, 2004 - 2:20 pm UTC

the read only snapshot IS the MV and the MV that is the read only snapshot can be incrementally refreshed.

not sure what you mean?

replication or not

reader, September 23, 2004 - 3:34 am UTC

Hi Tom, sorry I was not clear. What I meant was I cannot use the incremental records for any other purpose after refreshing the MV. We want to upload changed records into our data warehouse daily. We can fetch these records from the source system into our MV, but once they are in the MV we cannot use them, because they lose their identity.
e.g.
1. The source table master contains 1000 records.
2. We set up an MV log on this table, and set up an MV on our side; after refreshing it has 1000 records.
3. There are 100 new records in the master table the next day. The MV log contains 100 records, and master contains 1100 records.
4. We FAST refresh our MV, so our MV now contains 1100 records.
What we want is to push only those 100 incremental records to our data warehouse. We cannot identify which 100 records they are unless we do something like writing triggers on our MV.
Isn't that correct?
So the snapshot will eventually just be used to keep the data at the two sites in sync; there is no way we can use the incremental data directly for any other purpose.

as always your reply is deeply appreciated.

Tom Kyte
September 24, 2004 - 7:55 am UTC

your data warehouse is the thing with the MV?!?

and if it is not, it should be?

I'm very confused -- the MV is in the DW isn't it? (if not, why not? isn't that where it belongs?)


if you are on 9ir2, read about streams, it may provide what you want (given I'm not really following what it is you want, i would think the "mv" is in the "dw" and the incremental changes are going from transactional => dw ...)

array cursor fetch

A reader, September 23, 2004 - 3:41 pm UTC

Hi

say I have this code

for i in (select * from x)
loop
select id
into l_var1
from y
where y_id = i.x_id;
select id
into l_var2
from z
where z_id = i.x_id;
end loop;

I tried to use host arrays for the cursor which works perfectly but it seems that I cannot use host arrays for select into from where... ? Is it how it works? It throws an error saying cannot use arrays in FROM WHERE clause

thx
close cursor




Tom Kyte
September 24, 2004 - 9:29 am UTC

how about instead of giving me "functioning (but not so good -- should be JOINS, a single query) code" -- you show us what you are trying to do?


you say "for the cursor" -- there are three of them here -- sorry, not clear enough.

replication or not

reader, September 24, 2004 - 10:07 am UTC

Tom, I am talking about MVs outside my data warehouse. The MVs inside the data warehouse, which are used as summary tables etc., are just fine. They are working quite OK.
Here I am referring to the other type of Materialized View, the kind which fetches data over replication, i.e. snapshots.
Now when I fetch the incremental data from the source into this MV, I have to process this data before I upload it into the FACT tables. I want to process only the incremental data, which I am not able to do.
Is there any way I can query the snapshot log table and master table to get the changed records? And most importantly, is it supported by Oracle?
Regards


Tom Kyte
September 24, 2004 - 11:28 am UTC

there is only one kind of "MV" - they were formerly known as Snapshots.

You can have MV's of MV's in 9i is that what you are looking for?



replication or not

reader, September 24, 2004 - 11:52 am UTC

No, I am not talking about MVs over MVs. In Expert One-on-One, chap. 13, you said, "Snapshot. this feature was initially designed to support replication, but i would use it to pre-answer large queries." It is that replication usage I am talking about.
Note from metalink 258227.1
--------------------------------------------

2. Usage of Materialized Views
==============================
Materialized views can be used both for
- creating summaries to be utilized in data warehouse environments
- replicating data in distributed environments

---------------------------------------------------

I am not talking about its usage inside the data warehouse but in replication, where I am trying to fetch incremental records from source tables. Yes, Streams is definitely something I am considering. I just wanted to make sure that read-only snapshots with FAST refresh will only be used to refresh the materialized view, and that I cannot use the changed records outside this MV directly. I might be required to query the snapshot log table and the master table. BUT is that supported?

Tom Kyte
September 24, 2004 - 12:33 pm UTC

materialized views <====> snapshots <====> materialized views

they are *the same*. A snapshot is a MV.


You would use MV's of MV's to cascade -- which is what it sure sounds like you want to do.

You would not query the log table; you would perform whatever ETL-type stuff you wanted in the MV itself (which has the power of SQL).

If you cannot do it in MV's of MV's (or in a single MV directly), then streams is something to look at.

Selecting/Inserting 40 million records

Sujit, March 08, 2005 - 11:14 pm UTC

Hi Tom,

I am doing something like this,

Declare
cursor c is select * from emp;
begin
open c;
loop
fetch c bulk collect into l_c1, l_c2, ....... LIMIT 100;
for i in 1 .. l_c1.count
loop
process....
end loop;
forall i in 1 .. l_c1.count
insert into ..... values ( L_c1(i), .... );
commit;
exit when c%notfound;
end loop;

close c;
end;


Now I have 40 million records in my cursor query. My question is: will there be any problem (like out of memory) when I try to OPEN the cursor?


Tom Kyte
March 09, 2005 - 7:34 am UTC

No, if you have access to Effective Oracle by Design - there is a big section on how statements are processed - what happens when you open a cursor. (and if you want to make your process faster, you can read about "do it yourself parallelism" in the same)

RE: Selecting/Inserting 40 million records

Sujit, March 09, 2005 - 1:46 am UTC

I had one more doubt regarding my question posted above. If the data in the underlying table changes (i.e. some 100,000 (1 lakh) records are updated/added every day), will our cursor fail? i.e. will there be any SNAPSHOT TOO OLD error?

Tom Kyte
March 09, 2005 - 7:41 am UTC

only if your rollback segments are sized too small and wrap around in the time it takes to process (enter do it yourself parallelism perhaps to make the query "smaller")

Commit Strategy

atish, May 31, 2005 - 1:16 pm UTC

Tom,
Referring back to Haranadh's original question of inserting 58m records using Bulk load and your solution of using the limit clause in Bulk Collection, what should be the ideal commit strategy in Haranadh's case.



Tom Kyte
June 01, 2005 - 7:48 am UTC

if you ask me, at the end.

say you were incrementally committing and at record 21,312,512 it failed.

Ok, now what? how do you restart? (sure, yes, you can write lots more code to make this restartable... but then you have to debug it, maintain it and so on.)

Processing VLTs

Billy, June 01, 2005 - 8:33 am UTC

atish from boston said:

> Referring back to Haranadh's original question of
> inserting 58m records using Bulk load and your solution of
> using the limit clause in Bulk Collection, what
> should be the ideal commit strategy in Haranadh's case.

I agree with Tom that committing inside the loop due to the sheer volume of rows processed, does not solve the problem.

However, what is needed is a clear problem definition. It is truly asinine to attempt to process tens of millions of rows in a VLT as a single transaction. There is something very wrong with the logic of that decision, or with the design of that table and/or database, if it requires such a single massively large transaction. (It seems like 3rd normal form has become a nice-to-have the way many developers design logical databases.)

So define the actual problem before jumping to the conclusion that it necessarily requires an update of every single row in a VLT, or inserting millions of rows in a single shot.

Granted, once off mass updates could occur (been there myself)- usually because someone screwed up somewhere and now the data needs to be fixed and there's no other way but to update every single row in the billion row table.

However, even in such a case updates can be run on smaller data sets and usually in parallel. And if not possible, then you should grab a lead pipe (a nice and shiny and hard one) and have a serious discussion with the designer of that VLT for not using partitioning.

For inserts... I prefer running a CTAS to create the data set (usually using PQ) and then simply adding that data set (table) as a partition to the VLT (Oracle simply exchanges the empty contents of the partition with the contents of the table) -- see the sketch below.

Not one for making absolute statements, but a VLT should always be partitioned.
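
A rough sketch of that CTAS-plus-exchange approach, with made-up table, partition and column names:

-- build the new month's data set outside the VLT, in parallel, with no redo
create table stage_2005_06
    nologging parallel
as
select /*+ parallel(s) */ *
  from source_data s
 where load_month = date '2005-06-01';

-- create any local indexes/constraints on stage_2005_06 to match the target,
-- then swap it in as a partition of the fact table (a dictionary operation)
alter table fact_vlt
    exchange partition p_2005_06
    with table stage_2005_06
    including indexes
    without validation;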


bulk collect not working (Oracle 9.2.0.1)

A reader, June 27, 2005 - 1:01 pm UTC

SQL> DECLARE
  2    CURSOR c1 IS
  3    SELECT deptno, dname, loc FROM dept WHERE deptno > 10;
  4    TYPE DeptRecTab IS TABLE OF dept%ROWTYPE;
  5    TYPE DeptRecTab1 IS TABLE OF c1%ROWTYPE;
  6    dept_recs DeptRecTab;
  7    dept_recs1 DeptRecTab1;
  8  BEGIN
  9    OPEN c1;
 10    --FETCH c1 BULK COLLECT INTO dept_recs; -- this works
 11    FETCH c1 BULK COLLECT INTO dept_recs1;
 12  END;
 13  /
  FETCH c1 BULK COLLECT INTO dept_recs1;
                             *
ERROR at line 11:
ORA-06550: line 11, column 30: 
PLS-00597: expression 'DEPT_RECS1' in the INTO list is of wrong type 
ORA-06550: line 11, column 3: 
PL/SQL: SQL Statement ignored 

If I run the above with the collection dept_recs,
it works. Is there a way to use a collection of 
cursor record type (c1%rowtype) and do a bulk collect
into it?

Thanx! 

Tom Kyte
June 27, 2005 - 2:10 pm UTC

ops$tkyte@ORA9IR2> DECLARE
  2    CURSOR c1 IS
  3    SELECT deptno, dname, loc FROM dept WHERE deptno > 10;
  4    TYPE DeptRecTab IS TABLE OF dept%ROWTYPE;
  5    TYPE DeptRecTab1 IS TABLE OF c1%ROWTYPE;
  6    dept_recs DeptRecTab;
  7    dept_recs1 DeptRecTab1;
  8  BEGIN
  9    OPEN c1;
 10    --FETCH c1 BULK COLLECT INTO dept_recs; -- this works
 11    FETCH c1 BULK COLLECT INTO dept_recs1;
 12  END;
 13  /
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> select * from v$version;
 
BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.5.0 - Production
PL/SQL Release 9.2.0.5.0 - Production
CORE    9.2.0.6.0       Production
TNS for Linux: Version 9.2.0.5.0 - Production
NLSRTL Version 9.2.0.5.0 - Production
 

(patch up and the answer is yes)
 

Thanx!

A reader, June 27, 2005 - 2:21 pm UTC

So you have an Oracle of all patches also installed
on your laptop?

Tom Kyte
June 27, 2005 - 2:49 pm UTC

well, much of the time I'm not doing things on a laptop (contrary to popular opinion).

I have various machines scattered about. I do not have every patch release, just generally the "more current ones", although I need to patch up this 9205 instance sometime.

Oracle error: ORA-04030: out of process memory..

Jagannath Dalvi, July 08, 2005 - 3:27 pm UTC

Hi Tom,

I am trying to use BULK COLLECT, but somehow process fails after inserting certain number of rows (about 26 million out of 100 plus million).
It gives an error: "Oracle error: ORA-04030: out of process memory when trying to allocate 3224
bytes (callheap,psdalf:RPI allocations)"
I am using the LIMIT clause with 100 rows, but with every round the PGA increases and never comes back down. I have also opened a TAR with Oracle, but if you have any suggestions (most of the time you do!), please help me...

Code sample:
===
rows NATURAL := 100;
TYPE reclist IS TABLE OF source_table%ROWTYPE index by binary_integer;
recordset_n reclist;

dml_errors EXCEPTION;
PRAGMA exception_init(dml_errors, -24381);

CURSOR cursor_n IS...
....

BEGIN

....
OPEN cursor_n;
IF cursor_n%FOUND then
LOOP
BEGIN
FETCH cursor_n BULK COLLECT INTO recordset_n LIMIT rows;

FORALL i IN 1..recordset_n.count SAVE EXCEPTIONS
insert into table_n
values recordset_n(i);
COMMIT;
EXCEPTION
WHEN dml_errors THEN
errors := SQL%BULK_EXCEPTIONS.COUNT;

FOR i IN 1..errors LOOP
v_error_index := SQL%BULK_EXCEPTIONS(i).ERROR_INDEX;
v_error_code := SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE);
INSERT INTO bulk_errors
VALUES ('TABLE',
v_error_index,
v_error_code);
COMMIT;
END LOOP;
COMMIT;
END;
EXIT WHEN cursor_n%NOTFOUND;
END LOOP;
....
===
Version:
Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production
PL/SQL Release 9.2.0.4.0 - Production
CORE 9.2.0.3.0 Production
TNS for IBM/AIX RISC System/6000: Version 9.2.0.4.0 - Production
NLSRTL Version 9.2.0.4.0 - Production
===
oracle:>ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 4194304
memory(kbytes) unlimited
coredump(blocks) unlimited
nofiles(descriptors) 2000
===

Thanks,

Jagan



LIMIT-type operation on INSERT INTO SELECT

Darin, September 01, 2005 - 12:35 pm UTC

Tom,

This is in reference to your "and we said..." response to the original post in the thread, and the first review, which points out that the BULK COLLECT/FORALL INSERT is equivalent to the INSERT INTO SELECT (provided processing is not necessary between the BULK COLLECT & FORALL INSERT).

I have several queries written as INSERT INTO SELECT.
However, we want to process in chunks and commit each chunk b/c of possible large data amounts so I started to rewrite as BULK COLLECT LIMIT/FORALL INSERT.

However, I have several of these queries already written as INSERT INTO SELECT, and I would like to avoid having to convert them all to BULK COLLECT LIMIT/FORALL INSERT if possible.

Is there a syntax to use the INSERT INTO SELECT operation with a LIMIT-type operation?
I know it can be achieved procedurally by writing my own loop and using rownum (wrapping the original select query, with the original query ordered) and tracking which rownums were last processed. This seems like much too much work compared to the BULK COLLECT LIMIT/FORALL INSERT. If there is a LIMIT-type shorthand for the INSERT INTO SELECT to process in chunks, that would be great.

Thanks as always

Tom Kyte
September 01, 2005 - 4:02 pm UTC

I have several queries written as INSERT INTO SELECT.
However, we want to process in chunks and commit each chunk b/c of possible
large data amounts so I started to rewrite as BULK COLLECT LIMIT/FORALL INSERT.


do not forget to make this "restartable" so when it crashes halfway through, you can pick up where you left off :)


you will be writing procedural code, and more than you think, in order to make it restartable after the inevitable failure.

RE:LIMIT-type operation on INSERT INTO SELECT

Darin, September 02, 2005 - 12:36 pm UTC

Tom,
Yes, I understand the issue you are describing w/ the failure halfway through. That was the reason for wondering if there was a way to perform a LIMIT when using the INSERT INTO SELECT syntax.

I preferred the INSERT INTO SELECT syntax over the BULK COLLECT LIMIT/FORALL INSERT b/c [I assume] I could avoid the overhead of the PL/SQL to SQL context switching.

So I gather:
1. From your response I gather that there is NOT a built-in syntax to perform a LIMIT-type operation when using the INSERT INTO SELECT. I would have to write my own procedural code to perform the chunk processing, as well as to handle the inevitable mid-point failures and pick up processing where the failure occurred.

2. By using the BULK COLLECT LIMIT/FORALL SAVE EXCEPTIONS INSERT I should be able to achieve the processing in chunks, and avoid the inevitable mid-point failure. The only real down side to this method is the context switching (PLSQL to SQL).

3. Your recommendation is to use the BULK COLLECT LIMIT/FORALL INSERT SAVE EXCEPTIONS if moving large amounts of data (i.e. we are using an external table as the source to load catalogs of items into our production tables) where we want to (1) process in chunks, and (2) continue processing after exceptions arise.

If my understanding is correct, then the minimal overhead of the context switching sounds very appealing compared to writing all that procedural code. And in the end it should be quicker, since most of the work is being performed via SQL vs procedural PL/SQL.


Regards

Tom Kyte
September 03, 2005 - 7:34 am UTC

but I'm not sure I get it. How would a "limit" type thing work.


You have rownum to limit the number of rows returned from a query, you can use it with a delete for example:


goal remove all data older than 2 weeks:

l_date := sysdate-14;
loop
delete from t where datefield >= l_date and rownum <= 10000;
exit when sql%rowcount = 0;
commit;
end loop;



not sure I would suggest that, as one of two things would happen:

either "delete from t where datefield >= l_date" SHOULD full scan or SHOULD range scan on an index on l_date. I'll assume in most all cases, it SHOULD full scan, it would be unusual to index scan. So assuming full scan WOULD have happened, either

a) the addition of rownum <= 10000 will cause an index range scan (optimizer says "10,000 rows -- index = good". Meaning a delete that should have full scanned will read EVERY ROW TO DELETE via the index by multiple passes of the delete. Ugh.

b) the addition of rownum <= 10000 will NOT cause an index range scan. But each pass of the delete will instead full scan (until it hits 10,000 rows to delete) and then restart (at the top of the table again) and every pass re-reads ALL OF THE blocks every previous pass did!!!! Ugh again.


new for 10gr2:

http://asktom.oracle.com/Misc/how-cool-is-this.html
http://asktom.oracle.com/Misc/how-cool-is-this-part-ii.html
http://asktom.oracle.com/Misc/how-cool-is-this-part-iia.html



Better way to delete

Juan Velez, September 16, 2005 - 4:43 pm UTC

If the use of

delete from t where datefield >= l_date and rownum <= 10000

is not "good", then can you show me a better way?

Please bear in mind that the constraints are "snapshot too old" and undo tablespace.

Thanks

Tom Kyte
September 16, 2005 - 6:10 pm UTC

delete from t where datefield >= l_date;


bigger undo - it is not a real constraint, it is simply a self imposed one. Things you do to "work around it" will be suboptimal

exit at bottom

Rahul, October 14, 2005 - 12:02 pm UTC

Thanks for the pointer on putting the "exit when" at the bottom for bulk collect. I might never have caught it in system test if the number of rows happened to be a multiple of the "limit". It was not immediately clear from the 9i documentation.

http://download-west.oracle.com/docs/cd/B10501_01/appdev.920/a96624/05_colls.htm#28385

<snip>
DECLARE
TYPE NumTab IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
CURSOR c1 IS SELECT empno FROM emp;
empnos NumTab;
rows NATURAL := 10;
BEGIN
OPEN c1;
LOOP
/* The following statement fetches 10 rows (or less). */
FETCH c1 BULK COLLECT INTO empnos LIMIT rows;
EXIT WHEN c1%NOTFOUND;
...
END LOOP;
CLOSE c1;
END;
<snip>

Can't call that downright incorrect, but that can be misleading.

Bulk collect from ref cursor in ORACLE10G

ana, December 17, 2005 - 3:30 pm UTC

Hi, Tom,
Can I do a bulk collect in Oracle 10g
if I have a ref cursor?
Ana

Tom Kyte
December 17, 2005 - 4:33 pm UTC

you can do it in 9i, 8i even - yes.

CONTINUE

ANA, December 17, 2005 - 5:35 pm UTC

If I have: open cur for l_select,
where l_select is a dynamic select,
how can I do a bulk collect?
Show me please.


Tom Kyte
December 18, 2005 - 10:11 am UTC

ops$tkyte@ORA9IR2> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.6.0 - Production
PL/SQL Release 9.2.0.6.0 - Production
CORE    9.2.0.6.0       Production
TNS for Linux: Version 9.2.0.6.0 - Production
NLSRTL Version 9.2.0.6.0 - Production

ops$tkyte@ORA9IR2> @test
ops$tkyte@ORA9IR2> declare
  2          type l_array is table of number;
  3
  4          l_data l_array;
  5          l_cur  sys_refcursor;
  6  begin
  7          open l_cur for 'select object_id from all_objects';
  8          fetch l_cur bulk collect into l_data limit 100;
  9          dbms_output.put_line( l_data.count || ' rows fetched' );
 10          close l_cur;
 11  end;
 12  /
100 rows fetched

PL/SQL procedure successfully completed.
 

Bulk collects are cool

Ivor, January 13, 2006 - 12:48 am UTC

I found using a limit of 100000 for my insert of about 20 million rows seemed to result in faster execution time than a smaller limit.

I also found that when I tried to use a bulk collect to do a delete on a remote database via a database link, it seemed to complete but didn't actually delete the rows! I have a tar in with Oracle for that issue, but I was wondering if anyone else has come across it.

Tom Kyte
January 13, 2006 - 11:11 am UTC

array processing is not supported over dblinks.

in_list_num, bulk collect, forall, merge

James, January 24, 2006 - 7:27 pm UTC

Hi Tom,

I have a table with a string of ids which I need to aggregate into a summary table.

Can you have a glance at this example and see if you would do it differently.

---------------

create type n_tabletype as table of number;

--useful asktom list to n_tableType function
create or replace function in_list_num( p_string in varchar2 ) return n_TableType
as
l_string long default replace(rtrim(p_string,',')||',', ' ');
l_data n_TableType := n_TableType();
n number;
begin

loop

exit when l_string is null;
n := instr( l_string, ',' );
l_data.extend;
l_data(l_data.count) := to_number(ltrim( rtrim( substr( l_string, 1, n-1 ) ) ) );
l_string := substr( l_string, n+1 );

end loop;

return l_data;

end in_list_num;

create table id_list(list varchar2(1000));
insert into id_list values ('4,5,6,7,8,9,10');
insert into id_list values ('5,6,7,8,9,10');
insert into id_list values ('9,10');

create table id_count_t(id number, id_count number);

--now i need to efficiently process id_list into id_count table

/*
mainly a 9i implementation to work around the absence of "forall i in INDICES of..."
*/

declare

id_list n_tabletype;

type num_array is table of number index by binary_integer;

id_count num_array;

dense_id num_array;
dense_count num_array;

offset_n number := 0;

begin

--main loop
for i in (select list from id_list) loop

select a.column_value
bulk collect
into id_list
from the(select cast(csutils.in_list_num(i.list) as n_tabletype) from dual) a;

--list loop
for x in 1 .. id_list.count loop

if id_count.exists(id_list(x)) then
id_count(id_list(x)) := id_count(id_list(x))+1;
else
id_count(id_list(x)) := 1;
offset_n := offset_n + 1;
dense_id(offset_n) := id_list(x);
end if;

end loop;

end loop;

--create dense count array
for x in 1 .. dense_id.count loop

dense_count(x) := id_count(dense_id(x));

end loop;

forall i in 1 .. dense_id.count
merge into id_count_t
using ( select dense_id(i) id,dense_count(i) cnt from dual ) t2
on (id_count_t.id = t2.id)
when matched then update set id_count_t.id_count = id_count_t.id_count + t2.cnt
when not matched then insert (id,id_count) values(t2.id, t2.cnt);


end;

/

select * From id_count_t;

ID ID_COUNT
--------- ----------
4 1
5 2
6 2
7 2
8 2
9 3
10 3


Production volumes:
300,000 lists of ids in the id_list table (with 20 ids each)
25,000 distinct ids - this will be the eventual array size in the forall loop

Ideally I would like to do away with the "list loop" and run the "main loop" through with all ids in each row.

Regards,
James

Tom Kyte
January 24, 2006 - 9:12 pm UTC

looks more like a pipelined function example I would think. search site for

str2tbl pipelined

so what's the difference?

James, January 24, 2006 - 11:17 pm UTC

Thanks for your help but how is:

select t2.column_value id
from id_list, TABLE(str2tbl(id_list.list)) t2
/

different from:

select t2.column_value
from id_list,the(select cast(csutils.in_list_num(id_list.list) as n_tabletype) from dual) t2
/

ie the power of the pipeline is lost on me.

Feel free just to tell me to benchmark and find out :-)

Regards


Tom Kyte
January 25, 2006 - 1:25 pm UTC

the pipeline function does not have to build the entire result set into a collection and return it - it just returns stuff as it produces it. It "streams" or "pipes" the data.

the thing calling the pipelined function can get the first row of data from the function before the function ever gets around to creating the last row of data.


Using just the "fill up collection", "return collection" approach, you have to wait for the last row to be generated to get the first row, AND PL/SQL has to have a data structure allocated in memory to hold the whole thing.
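
For illustration, a minimal pipelined variant of the in_list_num function above (a sketch only; the collection type name is made up):

create or replace type num_tab_t as table of number;
/

create or replace function str2tbl_num( p_string in varchar2 )
    return num_tab_t
    pipelined
as
    l_string long default rtrim(p_string,',') || ',';
    l_n      number;
begin
    loop
        exit when l_string is null;
        l_n := instr( l_string, ',' );
        -- each parsed value is piped back immediately; the caller can start
        -- consuming rows before the whole string has been parsed
        pipe row( to_number( ltrim( rtrim( substr( l_string, 1, l_n-1 ) ) ) ) );
        l_string := substr( l_string, l_n+1 );
    end loop;
    return;
end;
/

-- usage:
-- select t.column_value from id_list, table( str2tbl_num( id_list.list ) ) t;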

Your thoughts

A reader, June 07, 2006 - 12:59 pm UTC

Tom,
Can I use bulk collect when I am calling a function in this way? Please keep in mind that the aud_trans table has over 10M records, hence bulk collect.

FUNCTION is_SIOBT_FUNC( ID_in NUMBER , act_in VARCHAR2 )
RETURN PLS_INTEGER
IS
CURSOR c (v_ID NUMBER, v_action VARCHAR2)
IS
SELECT 1
FROM aud_trans@dblink -----10 000 000 records
WHERE student_id = v_ID
AND action = v_action
AND voided_date IS NULL;
dummy NUMBER := 0;
BEGIN
OPEN c ( ID_IN, act_IN);
FETCH c INTO dummy;
CLOSE c;
RETURN dummy;
END is_SIOBT_FUNC;

Tom Kyte
June 07, 2006 - 3:29 pm UTC

you are fetching a single row though? so what if the table is 10,000,000 records, you only get "one"

looks like you wanted to use select into


begin

select 1 into l_dummy
from table@remote
where student_id = v_id
and action = v_action
and voided_date is null
and rownum = 1;

return l_dummy;

end;



Thanks

A reader, June 07, 2006 - 5:26 pm UTC

Thank you Tom,

I guess the best way would be to get rid of it altogether.


Did I miss a step?

A reader, June 07, 2006 - 5:45 pm UTC

Tom,

I rewrote it like this

FUNCTION is_SIOBT_FUNC( ID_in NUMBER , action_in VARCHAR2 )
RETURN PLS_INTEGER
is
l_dummy number;
begin
select 1 into l_dummy
from aud_trans@dblink
where student_id = id_in
and action = action_in
and voided_date is null
and rownum = 1;
return l_dummy;
exception
when no_data_found then
null;
end;

BUT I am getting an error. Did I miss a step?

ERROR at line 1:
ORA-20006: Error in stud_PKG.district - ORA-20006: Error in
stud_PKG.salary2_PROC - ORA-06503: PL/SQL: Function returned without value
ORA-06512: at "stud_PKG", line 607
ORA-06512: at line 2


Tom Kyte
June 07, 2006 - 6:33 pm UTC

you are getting a no data found, upon which you want to "return 0"

not do "null" :)

One more thing...

A reader, June 10, 2006 - 7:21 pm UTC

Tom,
You said "you are fetching a single row though? so what if the table is 10,000,000 records, you only get one". Can I use a BULK COLLECT in this process? Please advise.





Tom Kyte
June 11, 2006 - 11:51 am UTC

advise on what?

if you are fetching a single record, what possible purpose could a bulk fetch afford you?

I guess I advise not doing a bulk bind, since you are getting a single record.

Error.-

Mariano, June 16, 2006 - 7:28 am UTC

Hi Tom.
Please, see below. I'm getting the ORA-error but not sure why. Any suggestion for getting rid of it?

As usual, thanks a lot.

*****************************
SQL> select * from v$version;

BANNER                                                                          
----------------------------------------------------------------                
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bi                
PL/SQL Release 10.2.0.1.0 - Production                                          
CORE    10.2.0.1.0    Production                                                      
TNS for Linux: Version 10.2.0.1.0 - Production                                  
NLSRTL Version 10.2.0.1.0 - Production                                          

SQL> set serveroutput on size 20000
SQL> create table test_tab (
  2  p number,
  3  q number,
  4  r varchar2(2));

Table created.

SQL> create type test_typ
  2  as object (
  3  p number,
  4  q number,
  5  r varchar2(2));
  6  /

Type created.

SQL> create type test_col is table of test_typ;
  2  /

Type created.

SQL> insert into test_tab values (1,2,'A');

1 row created.

SQL> insert into test_tab values (2,2,'A');

1 row created.

SQL> insert into test_tab values (3,2,'B');

1 row created.

SQL> insert into test_tab values (4,2,'B');

1 row created.

SQL> insert into test_tab values (5,2,'A');

1 row created.

SQL> insert into test_tab values (6,2,'A');

1 row created.

SQL> commit work;

Commit complete.

SQL> declare
  2   v test_col;
  3  begin
  4   select * bulk collect into v from test_tab;
  5   for cur in 1.. v.count loop
  6    dbms_output.put_line(v(cur).p||' '||v(cur).q||' '||v(cur).r);
  7   end loop;
  8  end;
  9  /
 select * bulk collect into v from test_tab;
                              *
ERROR at line 4:
ORA-06550: line 4, column 31: 
PL/SQL: ORA-00947: not enough values 
ORA-06550: line 4, column 2: 
PL/SQL: SQL Statement ignored 






 

Tom Kyte
June 16, 2006 - 7:03 pm UTC

you are fetching 3 columns into a single host variable.

so, convert the 3 columns into a single column of your type:

ops$tkyte@ORA10GR2> declare
  2   v test_col;
  3  begin
  4   select test_typ(p,q,r) bulk collect into v from test_tab;
  5   for cur in 1.. v.count loop
  6    dbms_output.put_line(v(cur).p||' '||v(cur).q||' '||v(cur).r);
  7   end loop;
  8  end;
  9  /
1 2 A
2 2 A
3 2 B
4 2 B
5 2 A
6 2 A

PL/SQL procedure successfully completed.
 

Array Processing

A reader, June 21, 2006 - 8:06 am UTC

I have something along the line of this:

DECLARE
TYPE ARRAY IS TABLE OF activity_access_vw%ROWTYPE;
l_data ARRAY;
CURSOR c_data IS
SELECT a, b, c, d, a few more variables
FROM activity_access_vw
where something static order by something static;
BEGIN
OPEN c_data;
LOOP
FETCH c_data BULK COLLECT INTO l_data LIMIT 100;
exit when c_data%notfound;
end loop;
close c_data;
end;

Gives me "wrong number of values into the fetch statement".

I read many of the posts on this topic but most seem to only pull one item from a table and I saw no view examples at all. All I want to do is have a procedure that pulls a set of records from a view, populates an array that can then be passed to a Java procedure for display in the frontend. Right now I am not concerned with the Java just getting the array populated. So my questions are:

1) Can I not use a view in what I am doing?
2) What am I doing wrong above?
3) Is this the correct method to use to do what I want and, if not, what is a better method?

Thanks in advance.

Tom Kyte
June 22, 2006 - 10:35 am UTC

well, if you want to pass this to java, you won't be using a PLSQL table of records.

And secondly, you defined the table type as "table of TABLE_NAME%rowtype" but you fetch "select a,b, c, d, 'a few more things' ..."

is a, b, c, d 'a few more things' exactly the same as "table_name%rowtype"????  If not, well, there you go, you are fetching square pegs into round arrays.


view, table - not any different in this context at all (a view "is" a table)


but, here be your example with a view.


ops$tkyte@ORA10GR2> declare
  2  TYPE          ARRAY IS TABLE OF all_objects%ROWTYPE;
  3  l_data        ARRAY;
  4  CURSOR c_data IS
  5       SELECT owner, object_type, object_id, object_name
  6       FROM all_objects;
  7  BEGIN
  8    OPEN c_data;
  9    FETCH c_data BULK COLLECT INTO l_data LIMIT 100;
 10  end;
 11  /
  FETCH c_data BULK COLLECT INTO l_data LIMIT 100;
  *
ERROR at line 9:
ORA-06550: line 9, column 3:
PLS-00394: wrong number of values in the INTO list of a FETCH statement
ORA-06550: line 9, column 3:
PL/SQL: SQL Statement ignored


ops$tkyte@ORA10GR2> declare
  2  TYPE          ARRAY IS TABLE OF all_objects%ROWTYPE;
  3  l_data        ARRAY;
  4  CURSOR c_data IS
  5       SELECT *
  6       FROM all_objects;
  7  BEGIN
  8    OPEN c_data;
  9    FETCH c_data BULK COLLECT INTO l_data LIMIT 100;
 10  end;
 11  /

PL/SQL procedure successfully completed.



But, if you want to pass to JAVA, you'll need to use an object type and a collection of that type.


ops$tkyte@ORA10GR2> create or replace type myScalarType as object (
  2   OWNER                                             VARCHAR2(30),
  3   OBJECT_NAME                                       VARCHAR2(30),
  4   SUBOBJECT_NAME                                    VARCHAR2(30),
  5   OBJECT_ID                                         NUMBER,
  6   DATA_OBJECT_ID                                    NUMBER,
  7   OBJECT_TYPE                                       VARCHAR2(19),
  8   CREATED                                           DATE,
  9   LAST_DDL_TIME                                     DATE,
 10   TIMESTAMP                                         VARCHAR2(19),
 11   STATUS                                            VARCHAR2(7),
 12   TEMPORARY                                         VARCHAR2(1),
 13   GENERATED                                         VARCHAR2(1),
 14   SECONDARY                                         VARCHAR2(1)
 15  )
 16  /

Type created.

ops$tkyte@ORA10GR2> create or replace type myTableType as table of myScalarType
  2  /

Type created.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> declare
  2          l_data        myTableType;
  3
  4          CURSOR c_data IS
  5       SELECT myScalarType( OWNER, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID,
  6                   DATA_OBJECT_ID, OBJECT_TYPE, CREATED, LAST_DDL_TIME,
  7                   TIMESTAMP, STATUS, TEMPORARY, GENERATED, SECONDARY )
  8       FROM all_objects;
  9  BEGIN
 10    OPEN c_data;
 11    FETCH c_data BULK COLLECT INTO l_data LIMIT 100;
 12  end;
 13  /

PL/SQL procedure successfully completed.

 

Just great! Absolutely what I needed! Really appreciate it!

A reader, June 27, 2006 - 8:02 am UTC


Is ths correct?

A reader, July 07, 2006 - 9:10 pm UTC

Hi Tom,
Is this correct? I was under the impression that SQL%NOTFOUND applies only to DML statements. Is BULK COLLECT treated as a DML statement?
DECLARE
lv_schema_name VARCHAR2(30)  := 'TEST';
lv_table_name DBMS_SQL.VARCHAR2_TABLE;
BEGIN
SELECT DISTINCT table_name
BULK COLLECT
INTO lv_table_name
FROM dba_tab_partitions
WHERE table_owner =  lv_schema_name;
/********* 
IF SQL%NOTFOUND THEN
    RAISE_APPLICATION_ERROR (-20001,sqlerrm||' Not partitioned table was found in the schema: '||lv_schema_name);
END IF;
*********/
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    RAISE_APPLICATION_ERROR (-20001,sqlerrm||' Not partitioned table was found in the schema: '||lv_schema_name);
END;
SQL> /

PL/SQL procedure successfully completed.

-- If I remove the comments from the IF SQL%NOTFOUND check
-- then I get the right error
DECLARE
*
ERROR at line 1:
ORA-20001: ORA-0000: normal, successful completion Not partitioned table was
found in the schema: TEST
ORA-06512: at line 11

Regards,
Tarun 

Tom Kyte
July 08, 2006 - 8:48 pm UTC

sql%whatever is valid for the last IMPLICIT CURSOR used - regardless of the "type" (select is in fact DML actually.... it manipulates data...)


ops$tkyte@ORA10GR2> declare
  2          l_data dbms_sql.varchar2_table;
  3  begin
  4          select dummy bulk collect into l_data
  5            from dual
  6           where 1=0;
  7  end;
  8  /

PL/SQL procedure successfully completed.

<b>bulk collect doesn't throw no data found...</b>

ops$tkyte@ORA10GR2> declare
  2          l_data dbms_sql.varchar2_table;
  3  begin
  4          select dummy bulk collect into l_data
  5            from dual
  6           where 1=0;
  7
  8          if ( sql%notfound )
  9          then
 10                  dbms_output.put_line( 'well now... No data found' );
 11          end if;
 12  end;
 13  /
well now... No data found

PL/SQL procedure successfully completed.

<b>but you can if you want (me, I would have used "if l_data.count = 0 then...."</b>

ops$tkyte@ORA10GR2> declare
  2          l_data dual.dummy%type;
  3  begin
  4          select dummy into l_data
  5            from dual
  6           where 1=1;
  7
  8          if ( sql%notfound ) then dbms_output.put_line( 'not found' );
  9          elsif ( sql%found ) then dbms_output.put_line( 'found' );
 10          else dbms_output.put_line( 'very confused' );
 11          end if;
 12  end;
 13  /
found

PL/SQL procedure successfully completed.

<b>shows the %found, %notfound, whatever - work here...</b>

ops$tkyte@ORA10GR2> declare
  2          l_data dual.dummy%type;
  3  begin
  4          select dummy into l_data
  5            from dual
  6           where 1=0;
  7
  8  exception when others then
  9          if ( sql%notfound ) then dbms_output.put_line( 'not found' );
 10          elsif ( sql%found ) then dbms_output.put_line( 'found' );
 11          else dbms_output.put_line( 'very confused' );
 12          end if;
 13  end;
 14  /
not found

PL/SQL procedure successfully completed.
<b>and there..</b>

 

Thanks a lot!

A reader, July 08, 2006 - 10:19 pm UTC

Tom,
Thanks for your more detailed reply.
You mentioned that SELECT is a form of DML...
then why do we use
WHEN NO_DATA_FOUND rather than SQL%NOTFOUND
for a SELECT statement?
Regards,

Tom Kyte
July 09, 2006 - 8:48 am UTC

You don't use "no_data_found" in general for a select statement.

Only for the specially "syntactically sugared" SELECT INTO.

That is a language feature. PLSQL throws "no_data_found" for a SELECT INTO that returns NO DATA (and too_many_rows if it returns more than one row), because SELECT INTO is simply shorthand for "fetch me at least and at most ONE row please"


But for a "normal select" - no such exception exists.

And remember - plsql isn't sql. PLSQL is just one of dozens of programming languages you can use to interact with SQL. Other languages do not throw "no_data_found" for example.
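
For illustration, a small block showing the two exceptions that are specific to SELECT INTO:

declare
    l_dummy dual.dummy%type;
begin
    select dummy into l_dummy from dual where 1 = 0;
exception
    when no_data_found then
        dbms_output.put_line( 'select into with zero rows -> NO_DATA_FOUND' );
    when too_many_rows then
        dbms_output.put_line( 'select into with 2+ rows   -> TOO_MANY_ROWS' );
end;
/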







Thanks a lot!

A reader, July 10, 2006 - 4:36 pm UTC


PL/SQL processing problem

A reader, February 13, 2007 - 4:58 am UTC

Hi Tom,
Thanks for your help to the oracle community.

I have an unidentifiable problem. Using an external table, I have to process a flat file which has 4.4 million records.
In the PL/SQL procedure, I am reading the external table using bulk collect with limit 1000 and doing the processing. The processing is simple, just some checks.
If a check fails, I move the record into another PL/SQL (result) table. Finally I insert the result into a table. This final result would have a maximum of 3000 records.
I initially wrote this procedure on 10g and processed a file of 4.4 million records; it took 22 minutes. Then I put the same procedure on 9i and processed the file; it is taking 2 hrs. There is no problem in reading the data from the external table -- that I have checked. Following are the trace files I have taken from both databases.

This from 10G
-------------

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.02          0          0          0           0
Execute      1   1377.48    1351.68          0       3944          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2   1377.49    1351.70          0       3944          0           1

This is from 9i
---------------
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1   7036.29    6880.67          0          0          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2   7036.29    6880.67          0          0          0           1

The SGA on 10g is 5 GB and on 9i it is 600 MB. Could that be the reason for the performance difference? I couldn't find the problem.
Can you please help solve this issue?

Bulk collect

Piyush Sachan, February 14, 2007 - 4:24 am UTC

Hi,

I didn't understand why you are specifying the limit value as 100/1000. What I think is that this should depend on the number of bytes we are fetching into the table/variable, and on the version of Oracle (64 or 32 bit).

Please suggest what would be the optimal limit for bulk collect with regard to the version and bytes.
Tom Kyte
February 14, 2007 - 8:29 am UTC

100 to 1000 rows.

There you go, I'll only ever say it that way.

ORA-04030 during BULK COLLECT

Gogo, March 20, 2007 - 3:42 am UTC

Here is my code:

DECLARE
CURSOR cur2 IS SELECT substr(T1.branch,1,3) F1, T1.acc_num F2, DECODE(T1.acc_status,'A',0,'N',1) F3, DECODE(T1.interest_code,'O',0,'R',0,'N',1) F4, null F5, T1.currency F6, null F7, T1.client_num F8, null F9, to_char((case when T1.currency != '975' then T1.dt_turnover * T2.ind_rate end), 'FM99999999999990.00') F10, to_char((case when T1.currency != '975' then T1.cr_turnover * T2.ind_rate end), 'FM99999999999990.00') F11, to_char((case when T1.currency = '975' then T1.dt_turnover end) , 'FM99999999999990.00') F12, to_char((case when T1.currency = '975' then T1.cr_turnover end) , 'FM99999999999990.00') F13, to_char((case when T1.currency != '975' and T1.balance < 0 then T1.balance * T2.ind_rate end) , 'FM99999999999990.00') F14, to_char((case when T1.currency != '975' and T1.balance >= 0 then T1.balance * T2.ind_rate end) , 'FM99999999999990.00') F15, to_char((case when T1.currency = '975' and T1.balance < 0 then T1.balance end) , 'FM99999999999990.00') F16, to_char((case when T1.currency = '975' and T1.balance >= 0 then T1.balance end) , 'FM99999999999990.00') F17, null F18, DECODE(T1.acc_side,'D',0,'C',1,'I',2) F19, to_char(T1.interest_date,'yymmdd') F20, rpad('0',10,'0') F21, to_char(T1.open_date,'yymmdd') F22, null F23, substr(T1.branch,4,2) F24, null F25, null F26, null F27, null F28, '0' F29, to_char(T1.last_movm,'yymmdd') F30, to_char(nvl(T1.cl_date, T1.modif_date),'yymmdd') F31, nvl(T1.uid_close, T1."UID") F32, T1.uid_creat F33, null F34 FROM RS.ACCOUNTS T1 LEFT JOIN RS.EXCH_RATES T2 ON (T1.currency = T2.currency) WHERE to_char(T1.DATE_KEY,'yyyy-mm-dd')='2007-02-20'
AND to_char(T2.acc_date,'yyyy-mm-dd')='2007-02-20';
rec2 cur2%ROWTYPE;
TYPE typ_cur IS TABLE OF cur2%ROWTYPE;
rec_table typ_cur;
num_rows NUMBER := 0;
BEGIN
OPEN cur2;
LOOP
FETCH cur2 BULK COLLECT INTO rec_table LIMIT 500;
FOR i IN 1..rec_table.COUNT LOOP
rec2 := rec_table(i); INSERT INTO A_Table@MDB ( BRANCH, A_ACCOUNT, A_ALIVE, A_LIH, A_CHAR, A_CURRENCY, A_REVAL, A_CLINUM, A_POSHIFF, A_NDTVAL, A_NKTVAL, A_NDTLV, A_NKTLV, A_SDTVAL, A_SKTVAL, A_SDTLV, A_SKTLV, A_SALDOLIH, A_SALDOCOD, A_LIHDATE, A_BACCOUNT, A_CREATEDATE, A_DAYS, A_OFFICE, A_LWA_NATURE, A_LWA_TYPE, A_ANNUITY, A_LWA_NUM, A_BLOCKED, A_LACC_DATE, A_UPD_DATE, A_UPD_UID, A_OPEN_UID, A_SALDO_LY) VALUES ( rec2.F1, rec2.F2, rec2.F3, rec2.F4, rec2.F5, rec2.F6, rec2.F7, rec2.F8, rec2.F9, rec2.F10, rec2.F11, rec2.F12, rec2.F13, rec2.F14, rec2.F15, rec2.F16, rec2.F17, rec2.F18, rec2.F19, rec2.F20, rec2.F21, rec2.F22, rec2.F23, rec2.F24, rec2.F25, rec2.F26, rec2.F27, rec2.F28, rec2.F29, rec2.F30, rec2.F31, rec2.F32, rec2.F33, rec2.F34);
num_rows := num_rows + 1;
END LOOP;
COMMIT;
EXIT WHEN cur2%NOTFOUND;
END LOOP;
CLOSE cur2; DT.AffectedRows:=num_rows; END;

I have a source Oracle table with about 1.5 million records. I'm trying to insert all these records into an .MDB file linked via a DB link. After 3 hours I received this error:
ORA-04030: out of process memory when trying to allocate 236 bytes (session heap,)
Then I saw that about 1 million records had been inserted into the .MDB file before the error occurred.

I restarted the database and started the insert again. I noticed the memory increasing constantly.
Why is this happening? I fetch 500 rows, insert them and commit. Why am I running out of memory after some time?
The PGA is 2GB.

What can I do to execute my insert successfully?

Thanks in advance
Tom Kyte
March 20, 2007 - 7:52 am UTC

why is there any procedural code here at all?

why isn't this a single insert into as select?
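The shape being suggested, as a sketch only (the column list and expressions are abbreviated here; the full lists are exactly the ones in the reader's cursor and INSERT above):

INSERT INTO A_Table@MDB ( BRANCH, A_ACCOUNT, A_ALIVE,
                          /* ... the remaining columns from the original INSERT ... */ )
SELECT substr(T1.branch,1,3),
       T1.acc_num,
       DECODE(T1.acc_status,'A',0,'N',1),
       /* ... the remaining expressions F4..F34 from cursor cur2 ... */
  FROM RS.ACCOUNTS T1
  LEFT JOIN RS.EXCH_RATES T2 ON (T1.currency = T2.currency)
 WHERE to_char(T1.DATE_KEY,'yyyy-mm-dd') = '2007-02-20'
   AND to_char(T2.acc_date,'yyyy-mm-dd') = '2007-02-20';

One statement does the read, the transformation and the remote insert, with no arrays held in PL/SQL session memory at all.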

bulk collect

wawan, April 26, 2007 - 5:28 am UTC

Tom,

I copied this from Metalink:

Restriction: You cannot bulk-fetch from a cursor into a collection of 
---------------------------------------------------------------------- 
records, as the 
following example shows: 
DECLARE 
TYPE EmpRecTab IS TABLE OF emp%ROWTYPE; 
emp_recs EmpRecTab; 
CURSOR c1 IS SELECT ename, sal FROM emp WHERE sal > 1000; 
BEGIN 
OPEN c1; 
FETCH c1 BULK COLLECT INTO emp_recs; -- illegal 
... 
END; 



I have code similar to the above, and it works,
so why does Oracle say it's illegal?

The only problem I find is that I cannot use
dbms_output.put_line(emp_recs.ename).
Is this the reason?

regards
Tom Kyte
April 26, 2007 - 12:09 pm UTC

because things change over time.

ops$tkyte@ORA817DEV> DECLARE
  2  TYPE EmpRecTab IS TABLE OF emp%ROWTYPE;
  3  emp_recs EmpRecTab;
  4  CURSOR c1 IS SELECT ename, sal FROM emp WHERE sal > 1000;
  5  BEGIN
  6  OPEN c1;
  7  FETCH c1 BULK COLLECT INTO emp_recs; -- illegal
  8  END;
  9  /
FETCH c1 BULK COLLECT INTO emp_recs; -- illegal
                           *
ERROR at line 7:
ORA-06550: line 7, column 28:
PLS-00597: expression 'EMP_RECS' in the INTO list is of wrong type
ORA-06550: line 7, column 1:
PL/SQL: SQL Statement ignored


but later...

ops$tkyte%ORA10GR2> DECLARE
  2  TYPE EmpRecTab IS TABLE OF emp%ROWTYPE;
  3  emp_recs EmpRecTab;
  4  CURSOR c1 IS SELECT ename, sal FROM emp WHERE sal > 1000;
  5  BEGIN
  6  OPEN c1;
  7  FETCH c1 BULK COLLECT INTO emp_recs; -- illegal
  8  END;
  9  /

PL/SQL procedure successfully completed.



dbms_output.put_line( emp_recs.ename);

would "attempt" to print out an "array" - which is not really "defined" - emp_recs(i).ename - that'll work, but the "array", that is not something that is printable by default.

illegal bulk collect

A reader, April 26, 2007 - 9:54 pm UTC

Tom,

When did the thing change?
Because I got the "illegal" answer from Metalink just
yesterday (26 April 2007).

It is illegal but works well.
Is it safe for production?
Tom Kyte
April 27, 2007 - 10:32 am UTC

you didn't give us the note # or any sort of way to see it in metalink ourselves.

I can only guess that you didn't look at the note date or version the note was written against.

Old note, Old version -> old rules apply.

Old rules do not necessarily apply to New versions.


your book

wawan, April 27, 2007 - 4:47 am UTC

Tom,

one more question, even if off topic :)
regarding your books, where can I buy the ebooks?

They are difficult to find here.
Tom Kyte
April 27, 2007 - 10:58 am UTC

apress.com

Bulk collect issue.

kannan, August 06, 2007 - 11:29 am UTC

Tom,

Please have a look at the following procedure to see the issue.

PROCEDURE Process_Populate_Products
(in_source_system_id IN system_id_table.source_system_id%TYPE
,in_session_id IN s_order.session_id%TYPE
,out_error_code OUT error_log.error_code%TYPE )

IS

CURSOR c_get_temp_data_product
IS
SELECT *
FROM STG_TEMP_PRODUCT_ESP
WHERE product_key_linkage_FLAG = 'N'
AND TIN LIKE 'OPRO%'
AND session_id = in_session_id;
/* Commented Because the linkage rule has changed 08/02/07

CURSOR c_get_products(c_opro_product_key STG_TEMP_PRODUCT_ESP.OPRO_PRODUCT_KEY%TYPE)
IS
select distinct c.tin,c.version,c.product_serial_number,c.node_id
from s_op_cpe_detail_feature a ,s_order_product_pdp_detail b,s_order_product_pdp_detail c
where a.FEAT_PROD_SVC_ID = c_opro_product_key
and a.TIN = b.TIN
and a.version = b.version
and a.product_serial_number =b.PRODUCT_SERIAL_NUMBER
and a.node_id=b.node_id
and c.TIN = b.TIN
and c.version = b.version
and c.PDP_LINE_NUM = b.REL_PDP_LINE_NUMBER;
*/

CURSOR c_get_products(c_opro_product_key STG_TEMP_PRODUCT_ESP.OPRO_PRODUCT_KEY%TYPE)
IS
select distinct c.tin,c.version,c.product_serial_number,c.node_id
from S_OP_DETAIL_PROD_SERV a ,s_order_product_pdp_detail b,s_order_product_pdp_detail c
where a.PROD_SERVICE_ID = c_opro_product_key
and a.TIN = b.TIN
and a.version = b.version
and a.product_serial_number =b.PRODUCT_SERIAL_NUMBER
and a.node_id=b.node_id
and c.TIN = b.TIN
and c.version = b.version
and c.PDP_LINE_NUM = b.REL_PDP_LINE_NUMBER;


TYPE TIN_TAB_products IS TABLE OF S_OP_DETAIL_PROD_SERV.TIN%TYPE ;
TYPE PSN_TAB_products IS TABLE OF S_OP_DETAIL_PROD_SERV.PRODUCT_SERIAL_NUMBER%TYPE ;
TYPE VERSION_TAB_products IS TABLE OF S_OP_DETAIL_PROD_SERV.VERSION%TYPE ;
TYPE NODE_ID_TAB_products IS TABLE OF S_OP_DETAIL_PROD_SERV.NODE_ID%TYPE ;


R_GET_TEMP_DATA_product c_GET_TEMP_DATA_product%ROWTYPE;

T_TIN_products TIN_TAB_products := TIN_TAB_products() ;
T_NODE_ID_products NODE_ID_TAB_products := NODE_ID_TAB_products() ;
T_PSN_products PSN_TAB_products := PSN_TAB_products() ;
T_VERSION_products VERSION_TAB_products := VERSION_TAB_products() ;

BEGIN
OPEN c_get_temp_data_product;
LOOP

FETCH c_get_temp_data_product INTO r_get_temp_data_product;
EXIT WHEN c_get_temp_data_product%NOTFOUND;

dbms_output.put_line('r_get_temp_data_product.opro_product_key is '||r_get_temp_data_product.opro_product_key);

OPEN c_get_products(r_get_temp_data_product.opro_product_key);
LOOP
FETCH c_get_products BULK COLLECT INTO T_TIN_products,T_VERSION_products,T_PSN_products,T_NODE_ID_products;
dbms_output.put_line('product key linka ge found');
EXIT WHEN c_get_products%NOTFOUND;
dbms_output.put_line('after exit');
FORALL x in T_TIN_products.first..T_TIN_products.last
INSERT INTO STG_ORDER_PRODUCT_ESP
(
SESSION_ID,
ORDER_NUMBER,
ORDER_STATUS,
ORDER_TYPE,
PROJECT_NUMBER,
BILLING_ACCOUNT_NUMBER_SOURCE,
CUSTOMER_REQUESTED_DUE_DATE,
BILLING_OPTION,
BILLING_TYPE,
MARKET_SEGMENT,
CUSTOMER_SITE_ID,
CPE_PROVIDER,
PROCUREMENT_TYPE,
SERVICE_DELIVERY_ORG,
CUSTOMER_NAME,
CUSTOMER_SHORT_NAME,
CUSTOMER_MGMT_CENTER,
CUSTOMER_STREET_ADDRESS,
CUSTOMER_CITY,
CUSTOMER_STATE,
CUSTOMER_ZIP,
CUSTOMER_COUNTRY,
CUSTOMER_PRIMARY_CONTACT_NAME,
CUSTOMER_PRIMARY_CONTACT_PHONE,
CUSTOMER_PRIMARY_CONTACT_EMAIL,
SALES_REP_NAME,
PROJECT_MANAGER,
ORDER_COMPLETED_DATE,
TIN,
VERSION,
PRODUCT_SERIAL_NUMBER,
NODE_ID,
CONTRACT_START_DATE,
CONTRACT_EXPIRATION_DATE,
CANCEL_DATE,
ACTIVATION_DATE,
CONTRACT_TERM,
CONTRACT_TYPE,
LOCATION_BILLING_KEY,
SITE_MANAGEMENT_DATE,
SITE_DISCONNECT_DATE,
AHC,
NASP_ID,
CORP_ID,
DNS_ENTITY_NAME,
FEATURE_FLAG,
ENTITY_BILLING_KEY,
ENTITY_MANAGED_DATE,
ENTITY_DISCONNECT_DATE,
PRODUCT_FAMILY,
PRODUCT_TYPE,
PRODUCT_SERVICE_TYPE,
CLASS_OF_SERVICE,
PRODUCT_BILLING_KEY,
PRODUCT_MANAGED_DATE,
PRODUCT_DISCONNECT_DATE,
netcom_equip_instance_id,
opro_product_key,
READY_TO_LOAD_FLAG,
DOWNSTREAM_ORDER_TYPE,
EPMT_ORDER_TYPE,
EPMT_ORDER_TYPE_SUB_CAT,
EPMT_SOLD_SEGMENT,
EPMT_BOOKED_SEGMENT,
SOLD_SEG_RULE,
ORDER_SOURCE,
LEG_ID,
DOWNSTREAM_NASP_ID
)
VALUES
(
in_session_id,
r_get_temp_data_product.ORDER_NUMBER,
r_get_temp_data_product.ORDER_STATUS,
r_get_temp_data_product.order_type,
r_get_temp_data_product.PROJECT_NUMBER,
r_get_temp_data_product.BILLING_ACCOUNT_NUMBER_SOURCE,
r_get_temp_data_product.CUSTOMER_REQUESTED_DUE_DATE,
r_get_temp_data_product.BILLING_OPTION,
r_get_temp_data_product.BILLING_TYPE,
r_get_temp_data_product.market_segment,
r_get_temp_data_product.CUSTOMER_SITE_ID,
r_get_temp_data_product.CPE_PROVIDER,
r_get_temp_data_product.PROCUREMENT_TYPE,
r_get_temp_data_product.SERVICE_DELIVERY_ORG,
r_get_temp_data_product.CUSTOMER_NAME,
r_get_temp_data_product.CUSTOMER_SHORT_NAME,
r_get_temp_data_product.CUSTOMER_MGMT_CENTER,
r_get_temp_data_product.CUSTOMER_STREET_ADDRESS,
r_get_temp_data_product.CUSTOMER_CITY,
r_get_temp_data_product.CUSTOMER_STATE,
r_get_temp_data_product.CUSTOMER_ZIP,
r_get_temp_data_product.CUSTOMER_COUNTRY,
r_get_temp_data_product.CUSTOMER_PRIMARY_CONTACT_NAME,
r_get_temp_data_product.CUSTOMER_PRIMARY_CONTACT_PHONE,
r_get_temp_data_product.CUSTOMER_PRIMARY_CONTACT_EMAIL,
r_get_temp_data_product.SALES_REP_NAME,
r_get_temp_data_product.PROJECT_MANAGER,
r_get_temp_data_product.ORDER_COMPLETED_DATE,
t_tin_products(x),
t_VERSION_products(x),
t_PSN_products(x),
t_NODE_ID_products(x),
r_get_temp_data_product.CONTRACT_START_DATE,
r_get_temp_data_product.CONTRACT_EXPIRATION_DATE,
r_get_temp_data_product.CANCEL_DATE,
r_get_temp_data_product.ACTIVATION_DATE,
NULL,
r_get_temp_data_product.CONTRACT_TYPE,
r_get_temp_data_product.LOCATION_BILLING_KEY,
r_get_temp_data_product.SITE_MANAGEMENT_DATE,
r_get_temp_data_product.SITE_DISCONNECT_DATE,
r_get_temp_data_product.AHC,
r_get_temp_data_product.NASP_ID,
r_get_temp_data_product.CORP_ID,
r_get_temp_data_product.DNS_ENTITY_NAME,
r_get_temp_data_product.FEATURE_FLAG,
r_get_temp_data_product.ENTITY_BILLING_KEY,
r_get_temp_data_product.ENTITY_MANAGED_DATE,
r_get_temp_data_product.ENTITY_DISCONNECT_DATE,
r_get_temp_data_product.PRODUCT_FAMILY,
r_get_temp_data_product.PRODUCT_TYPE,
r_get_temp_data_product.PRODUCT_SERVICE_TYPE,
r_get_temp_data_product.CLASS_OF_SERVICE,
r_get_temp_data_product.PRODUCT_BILLING_KEY,
r_get_temp_data_product.PRODUCT_MANAGED_DATE,
r_get_temp_data_product.PRODUCT_DISCONNECT_DATE,
r_get_temp_data_product.netcom_equip_instance_id,
r_get_temp_data_product.opro_product_key,
'Y',
r_get_temp_data_product.ORDER_TYPE,
r_get_temp_data_product.epmt_order_type,
r_get_temp_data_product.epmt_order_type_sub_cat,
r_get_temp_data_product.epmt_sold_segment,
r_get_temp_data_product.epmt_booked_segment,
r_get_temp_data_product.sold_seg_rule,
r_get_temp_data_product.order_source,
'S'||r_get_temp_data_product.product_billing_key||r_get_temp_data_product.product_family||r_get_temp_data_product.product_type
||r_get_temp_data_product.product_service_type||r_get_temp_data_product.class_of_service||r_get_temp_data_product.order_type,
r_get_temp_data_product.downstream_nasp_id
) ;
/* The following Key will tell us whether there was a linkage based on product key */

if T_TIN_products.COUNT > 0 then
/* The following Key will tell us whether there was a linkage based on product key */
dbms_output.put_line('update flag to Y');
UPDATE STG_TEMP_PRODUCT_ESP
SET product_key_linkage_flag = 'Y'
WHERE opro_product_key = r_get_temp_data_product.opro_product_key;
end if;
EXIT WHEN c_get_products%NOTFOUND;

END LOOP;
CLOSE c_get_products;

END LOOP;
CLOSE c_get_temp_data_product;

out_error_code := 0 ;



EXCEPTION

WHEN OTHERS THEN
out_error_code := 774740 ;

IF c_get_products%ISOPEN THEN
CLOSE c_get_products ;
END IF ;

IF c_get_temp_data_product%ISOPEN THEN
CLOSE c_get_temp_data_product ;
END IF ;

Write_Error_File
(in_session_id
,in_source_system_id
,'PreProcessing_ESP'
,'Process_Populate_Products'
,774740
,', SQL-'||TO_CHAR(SQLCODE)||', '||SQLERRM
||', When Others Process_Populate_Products' ) ;

END Process_Populate_Products;

and I executed the above procedure using:

set serveroutput on size 1000000
declare
err error_log.error_code%TYPE;
begin
pkg_preprocess_esp.PreProcess_esp(26,108747,err);
dbms_output.put_line('err is '||sqlerrm);
end;

and the output I got is:

r_get_temp_data_product.opro_product_key is 9090
product key linka ge found
r_get_temp_data_product.opro_product_key is 5000
product key linka ge found
err is ORA-0000: normal, successful completion

The problem is that the cursor c_get_products does have a row for opro_product_key 5000, but still it does not go inside the loop. Just wondering if the EXIT statement after the bulk collect is creating any issue.

thanks,
kannan
Tom Kyte
August 06, 2007 - 12:20 pm UTC

too big, not going to debug a routine of that size. reduce the problem to its very foundation (and likely the problem will become obvious - it usually does for me when I try to strip a large problem down to its barest form)

here are the loops

Pasko, August 06, 2007 - 12:47 pm UTC

Hi Tom,

There are two
>>>EXIT WHEN c_get_products%NOTFOUND;

Maybe the first EXIT is causing problems, because the Update statement would not be executed.


OPEN c_get_temp_data_product;
LOOP

FETCH c_get_temp_data_product INTO r_get_temp_data_product;
EXIT WHEN c_get_temp_data_product%NOTFOUND;

OPEN c_get_products(r_get_temp_data_product.opro_product_key);
LOOP
FETCH c_get_products BULK COLLECT INTO
T_TIN_products,T_VERSION_products,T_PSN_products,T_NODE_ID_products;

EXIT WHEN c_get_products%NOTFOUND;

if T_TIN_products.COUNT > 0 then
UPDATE STG_TEMP_PRODUCT_ESP
SET product_key_linkage_flag = 'Y'
WHERE opro_product_key = r_get_temp_data_product.opro_product_key;
end if;

EXIT WHEN c_get_products%NOTFOUND;

END LOOP;
CLOSE c_get_products;

END LOOP;
CLOSE c_get_temp_data_product;

Tom Kyte
August 07, 2007 - 8:47 am UTC

ops$tkyte%ORA9IR2> declare
  2          type array is table of all_users%rowtype;
  3          l_data array;
  4
  5          cursor c is select * from all_users;
  6  begin
  7          open c;
  8          loop
  9                  fetch c bulk collect into l_data;
 10                  dbms_output.put_line( 'Fetched ' || l_data.count || ' rows' );
 11                  exit when c%notfound;
 12                  dbms_output.put_line( 'update...' );
 13                  exit when c%notfound;
 14          end loop;
 15          dbms_output.put_line( 'end of loop' );
 16          close c;
 17  end;
 18  /
Fetched 30 rows
end of loop

PL/SQL procedure successfully completed.


absolutely - the cursor will always return notfound in this example and the update is never executed.

they need to look at variable.count to see if there is data to process.

but, your extract (which is bite sized) shows they are doing the worst of all things - writing procedural code when none is needed.

First, they are doing the heinous "do it yourself nested loop join". Anytime you see:

for x in (select...)
loop
   for y in (select ... where X....)
   loop



you are looking at a bug in the developed code, it should be:


for z in (select <FROM A JOIN of X and Y!!!!!> )
loop



but in this case, they could just update without any code at all of course - SQL being SQL - it can do a lot.
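For instance, a sketch of the pure-SQL version of that flag update (reusing the predicates from c_get_temp_data_product and c_get_products above; :in_session_id stands in for the procedure parameter):

UPDATE stg_temp_product_esp t
   SET t.product_key_linkage_flag = 'Y'
 WHERE t.product_key_linkage_flag = 'N'
   AND t.tin LIKE 'OPRO%'
   AND t.session_id = :in_session_id
   AND EXISTS ( SELECT NULL
                  FROM s_op_detail_prod_serv a,
                       s_order_product_pdp_detail b,
                       s_order_product_pdp_detail c
                 WHERE a.prod_service_id = t.opro_product_key
                   AND a.tin = b.tin
                   AND a.version = b.version
                   AND a.product_serial_number = b.product_serial_number
                   AND a.node_id = b.node_id
                   AND c.tin = b.tin
                   AND c.version = b.version
                   AND c.pdp_line_num = b.rel_pdp_line_number );

The INSERT into STG_ORDER_PRODUCT_ESP can likewise be a single INSERT ... SELECT built from a join of the two cursor queries.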

Bulk collect issue.

kannan, August 07, 2007 - 10:21 am UTC

Pasko, thanks a lot for simplifying the code.

Tom, so in the following code, does the LIMIT tell the SQL to go further?

declare
    type array is table of all_users%rowtype;
    l_data array;

    cursor c is select * from all_users;
begin
    open c;
    loop
        fetch c bulk collect into l_data limit 10;
        dbms_output.put_line( 'Fetched ' || l_data.count || ' rows' );
        exit when c%notfound;
        dbms_output.put_line( 'update...' );
        exit when c%notfound;
    end loop;
    dbms_output.put_line( 'end of loop' );
    close c;
end;

output is :-
Fetched 10 rows
update...
Fetched 10 rows
update...
Fetched 10 rows
update...
Fetched 10 rows
update...
Fetched 10 rows
update...
Fetched 10 rows
update...
Fetched 1 rows
end of loop

Tom Kyte
August 07, 2007 - 2:04 pm UTC

that code has an obvious bug in it: the first "exit when notfound" is erroneous, it does NOT belong....

erase it - and all works as it should.

can also use T_TIN_products.COUNT <= 0

Pasko, August 07, 2007 - 4:18 pm UTC

Hi Kannan,

Tom has advised you to remove the first exit after the bulk fetch...
you can debug this routine further by avoiding the 'exit when cursor%notfound' clauses and just checking whether the arrays contain any data:

like this:
...
exit when ( T_TIN_products.COUNT <= 0 ) ;
...
or
...
if T_TIN_products.COUNT <= 0
then
exit;
end if;
...

Tom Kyte
August 10, 2007 - 2:54 pm UTC

I'd rather use

loop 
  fetch bulk collect
  for i in 1 .. l_array.count
  loop
     process
  end loop
  exit when %notfound
end loop

it covers all of the bases and is most "meaningful" - that is my opinion anyway.

continue Tom's Example using Array Counts

A reader, August 07, 2007 - 4:34 pm UTC

SQL>    Set serveroutput on size 1000000
SQL>    declare
  2              type array is table of all_users%rowtype;
  3              l_data array;
  4              cursor c is select * from all_users;
  5      begin
  6              open c;
  7              loop
  8                 fetch c bulk collect into l_data;
  9                  dbms_output.put_line( 'Fetched ' || l_data.count || ' rows' );
 10                  exit when l_data.count <= 0 ;
 11                  dbms_output.put_line( 'update...' );
 12                  exit when l_data.count <= 0 ;
 13              end loop;
 14              dbms_output.put_line( 'end of loop' );
 15              close c;
 16      end;
 17      /
Fetched 15 rows
update...
Fetched 0 rows
end of loop

PL/SQL procedure successfully completed.


--use LIMIT Clause

SQL> declare
  2              type array is table of all_users%rowtype;
  3              l_data array;
  4              cursor c is select * from all_users;
  5      begin
  6              open c;
  7              loop
  8                 fetch c bulk collect into l_data limit 10;
  9                  dbms_output.put_line( 'Fetched ' || l_data.count || ' rows' );
 10                  exit when l_data.count <= 0 ;
 11                  dbms_output.put_line( 'update...' );
 12                  exit when l_data.count <= 0 ;
 13              end loop;
 14              dbms_output.put_line( 'end of loop' );
 15              close c;
 16      end;
 17      /
Fetched 10 rows
update...
Fetched 5 rows
update...
Fetched 0 rows
end of loop

PL/SQL procedure successfully completed.

for completion, using Single EXIT

Pasko, August 07, 2007 - 4:43 pm UTC

and here Tom's example using only ONE EXIT:

SQL> declare
  2              type array is table of all_users%rowtype;
  3              l_data array;
  4              cursor c is select * from all_users;
  5      begin
  6              open c;
  7              loop
  8                 fetch c bulk collect into l_data;
  9                  dbms_output.put_line( 'Fetched ' || l_data.count || ' rows' );
 10                  --exit when c%notfound;
 11                  dbms_output.put_line( 'update...' );
 12                  exit when c%notfound;
 13              end loop;
 14              dbms_output.put_line( 'end of loop' );
 15              close c;
 16      end;
 17      /
Fetched 15 rows
update...
end of loop

PL/SQL procedure successfully completed.

SQL>

Count takes more time

D, August 08, 2007 - 5:53 am UTC

In Session 1, I am running insert/update on a table test_table which already has some data.
In Session 2, I try to run select count(*) from test_table.

Will the count query take more time as compared to when there is no insert/update happening and why?
Tom Kyte
August 14, 2007 - 9:55 am UTC

probably it would take more time for many reasons

a) if you have a single cpu, you will compete for that resource.
b) even if you have many cpus, you are competing for that resource with other sessions
c) the second session might have to roll back changes to blocks that were modified (eg: do a little more extra work to count) - this is better however than STOPPING AND BLOCKING AND DOING NOTHING - which is what the others would do...
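A minimal sketch of the read-consistency point (hypothetical TEST_TABLE with a LAST_UPDATED column; the two parts run in separate sessions):

-- session 1: modify many rows and do not commit yet
update test_table set last_updated = sysdate;

-- session 2: the count still returns a read-consistent answer without waiting,
-- but it has to apply undo to every modified block it touches - the extra
-- work described in (c) above
select count(*) from test_table;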

Question

Kishore Kumar, August 22, 2007 - 1:33 pm UTC

I am new to PL/SQL types, bulk collect and other advanced concepts.

I have a requirement to use Ref cursor, PL/SQL record type and SQL Type.

I need to process 10 million records. The client says that the code written for 11.5.9 using a global temporary table is giving a performance issue (taking around 5 hours to complete).

So he has asked us to use PL/SQL tables for 11.5.10. We are using a ref cursor since the select statement is created dynamically. We are getting all the data into a PL/SQL record type when this select statement is executed.

We have to update some of the columns in this record based on some conditions. We heard that we need to create a SQL type to bulk update a PL/SQL table, but we are not able to create a SQL type (record type). The program works fine when we select only one column, but gives the error "Inconsistent datatypes: Expecting --- instead of ---". Once the update is done, we insert the data into another table.

Can't we create a SQL record type?

Which is the best approach -- using a global temporary table or PL/SQL tables?

Sorry for giving the question in the review section
Tom Kyte
August 22, 2007 - 2:38 pm UTC

there is only one answer possible here:

it depends.


before you tune something, please - for goodness sake - find out where you are spending your time right now.

for example: why is the temporary table being blamed here? is it the issue?

FIND THE ISSUE
FIX THE ISSUE


do not go down your current path, it is pointless.

Bulk Collect taking a long time

Utpal Dhar, September 26, 2007 - 6:47 pm UTC

Hi Tom,

I am using BULK COLLECT and FORALL in the code below to insert 8 million rows from one table to another, and I have an autonomous transaction at the end of the code which updates a table when all records of a particular pdate have been inserted.
It took 40 minutes to insert approximately 100 pdates. There are a total of 1230 distinct pdates.

Is there something wrong in the code?

Thanks
Utpal
CREATE OR REPLACE PROCEDURE NLYS.BULK_MOVE_PJM_DA_LMP_WEBDATA AS

TYPE src_md_date IS TABLE OF dw_pjm_rt_temp.md_date%TYPE INDEX BY BINARY_INTEGER;
TYPE src_md_desc IS TABLE OF dw_pjm_rt_temp.md_desc%TYPE INDEX BY BINARY_INTEGER;
TYPE src_hour_ending IS TABLE OF dw_pjm_rt_temp.hour_ending%TYPE INDEX BY BINARY_INTEGER;
TYPE src_peak_type IS TABLE OF dw_pjm_rt_temp.peak_type%TYPE INDEX BY BINARY_INTEGER;
TYPE src_lmp_value IS TABLE OF dw_pjm_rt_temp.lmp_value%TYPE INDEX BY BINARY_INTEGER;
TYPE src_nodename IS TABLE OF dw_pjm_rt_temp.md_desc_1%TYPE INDEX BY BINARY_INTEGER;
TYPE src_voltage IS TABLE OF dw_pjm_rt_temp.md_desc_2%TYPE INDEX BY BINARY_INTEGER;
TYPE src_equipment IS TABLE OF dw_pjm_rt_temp.md_desc_3%TYPE INDEX BY BINARY_INTEGER;
TYPE src_zone IS TABLE OF dw_pjm_rt_temp.md_desc_4%TYPE INDEX BY BINARY_INTEGER;

src_md_date_array src_md_date;
src_md_desc_array src_md_desc;
src_hour_ending_array src_hour_ending;
src_peak_type_array src_peak_type;
src_lmp_value_array src_lmp_value;
src_nodename_array src_nodename;
src_voltage_array src_voltage;
src_equipment_array src_equipment;
src_zone_array src_zone;

i integer := 1;
j integer := 1;
dml_errors EXCEPTION;
PRAGMA exception_init(dml_errors, -24381);
errors NUMBER;
err_msg varchar2(64);

CURSOR C1 IS
SELECT DISTINCT pdate
FROM webdata.e_pjm_d_da_lmp;

CURSOR C2(p_pdate date) IS
SELECT webdata.e_pjm_d_da_lmp.pdate md_date,
pnodeid md_desc,
r hour_ending,
DECODE(pt.peaktype,'OnPeak',0, 'OffPeak',1) peak_type,
DECODE(r, 1, hr1,
2, hr2,
3, hr3,
4, hr4,
5, hr5,
6, hr6,
7, hr7,
8, hr8,
9, hr9,
10, hr10,
11, hr11,
12, hr12,
13, hr13,
14, hr14,
15, hr15,
16, hr16,
17, hr17,
18, hr18,
19, hr19,
20, hr20,
21, hr21,
22, hr22,
23, hr23,
24, hr24) lmp_value,
nname nodename,
voltage,
equip,
pzone
FROM webdata.e_pjm_d_da_lmp,
(SELECT rownum r FROM all_objects WHERE rownum <= 24),
dw_pjm_peak_type pt
WHERE webdata.e_pjm_d_da_lmp.pdate = p_pdate
AND webdata.e_pjm_d_da_lmp.pdate = pt.pdate
AND r = pt.phour;

BEGIN
FOR c1rec in C1 LOOP
OPEN c2(c1rec.pdate);
LOOP
FETCH c2 BULK COLLECT INTO
src_md_date_array,
src_md_desc_array,
src_hour_ending_array,
src_peak_type_array,
src_lmp_value_array,
src_nodename_array,
src_voltage_array,
src_equipment_array,
src_zone_array LIMIT 700;

BEGIN
FORALL i in 1 .. src_md_date_array.count SAVE EXCEPTIONS
INSERT INTO dw_pjm_rt_temp (md_date,
md_desc,
hour_ending,
peak_type,
lmp_value,
md_desc_1,
md_desc_2,
md_desc_3,
md_desc_4)
VALUES (src_md_date_array(i),
src_md_desc_array(i),
src_hour_ending_array(i),
src_peak_type_array(i),
src_lmp_value_array(i),
src_nodename_array(i),
src_voltage_array(i),
src_equipment_array(i),
src_zone_array(i)
);
EXCEPTION
WHEN dml_errors THEN
errors := SQL%BULK_EXCEPTIONS.COUNT;
--DBMS_OUTPUT.PUT_LINE('Number of statements that failed: ' || errors);
FOR j IN 1..errors LOOP
--DBMS_OUTPUT.PUT_LINE('Error #' || j || ' occurred during '|| 'iteration #' || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX);
--DBMS_OUTPUT.PUT_LINE('Error message is ' || SQLERRM(-SQL%BULK_EXCEPTIONS(j).ERROR_CODE));
err_msg := substr(SQLERRM(-SQL%BULK_EXCEPTIONS(j).ERROR_CODE), 1, 64);
INSERT INTO dw_process_log VALUES (seq_process_error_log.nextval, 'BULK INSERT DA PJM WEBDATA', sysdate, 'F', 'PROCESS FAILED',err_msg);
END LOOP;

END;
EXIT WHEN C2%NOTFOUND;
END LOOP;
CLOSE C2;
COMMIT;
AUTONOMOUS_INSERT (c1rec.pdate);
END LOOP;
END;
/
Tom Kyte
September 26, 2007 - 10:19 pm UTC

what is your version

Bulk Collect taking a long time

Utpal Dhar, September 27, 2007 - 9:45 am UTC

The database version is 10.2.0.2.0.

I kept that procedure executing last night and I just saw that it gave me an ORA-04030: out of process memory when trying to allocate 32792 bytes.
Tom Kyte
September 28, 2007 - 3:28 pm UTC

use

insert /*+ append */ into t select * from other_t log errors reject limit unlimited;


erase all of the code.


http://asktom.oracle.com/Misc/how-cool-is-this.html
http://asktom.oracle.com/Misc/how-cool-is-this-part-ii.html
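A minimal sketch of that shape (hypothetical tables T and OTHER_T; the error table is created once with DBMS_ERRLOG and defaults to the name ERR$_T):

exec dbms_errlog.create_error_log( dml_table_name => 'T' )

insert /*+ append */ into t
select * from other_t
log errors into err$_t ('nightly load') reject limit unlimited;

-- rows that fail a constraint land in the error table, tagged, instead of
-- aborting the whole insert
select ora_err_number$, ora_err_mesg$, ora_err_tag$ from err$_t;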

log errors reject limit unlimited;

Utpal Dhar, October 10, 2007 - 6:44 pm UTC

Tom,

Thanks for the log errors reject limit unlimited tip. It is really wonderful. BUT, it does not work if a unique constraint is violated. I guess it only works for NOT NULL and CHECK constraints.

Am I correct?

If I am, and I had to do the insert and take care of dups, what would be your solution?
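One likely explanation and a sketch of a workaround (based on the documented error-logging restrictions, and assuming the T / OTHER_T tables from the example above): a direct-path insert (the /*+ append */ hint) that raises a unique constraint or index violation still fails the whole statement, while a conventional-path insert logs the offending rows.

-- conventional path (no APPEND hint): ORA-00001 rows go to ERR$_T instead of
-- failing the statement
insert into t
select * from other_t
log errors into err$_t ('dup check') reject limit unlimited;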

A reader, February 04, 2008 - 12:29 pm UTC

Tom,
There are 506,067 records in schema1.test_table. I need to insert the same records, with additional columns, into schema2.test_table.
I declared a cursor and did BULK COLLECT INTO test_table_tab LIMIT 50000. I could insert only 500,000 records. I could not insert the remaining 6,067, since the limit I specified is larger than that remainder. How do I insert the 6,067 records that were not inserted due to the LIMIT clause? Every day there will be a different number of records. Can you please suggest what LIMIT should be used?

Or please suggest what the best way to insert the data is.

Tom Kyte
February 04, 2008 - 4:24 pm UTC

hah, the best way would have been:

insert into another_table
select c1, c2, .... cn, 'whatever' .....
  from first_table;



you have a bug in your code, you need to code like this IF you persist in using the entirely inefficient procedural approach

open c;
loop
   fetch c bulk collect into l_var1, l_var2, ... limit 500;<b>
   for i in 1 .. l_var1.count</b>
   loop
       whatever, process that record... the last time thru, count will be 67
   end loop;<b>
   exit when c%notfound;</b>
end loop;
close c;






Using LIMIT to Control the Number of Rows In a BULK COLLECT

Stew, February 13, 2008 - 12:44 pm UTC

Tom,

This is about your response:

Followup October 11, 2002 - 7pm US/Eastern:

I sure hope no one follows that advice, else they have just programmed that rascally "infinite
loop"

After someone questioned your use of:
EXIT WHEN c1%NOTFOUND

to loop through a BULK COLLECT cursor that uses LIMIT.

I'm concerned because the method you said could generate infinite loops is one I've been using for a long time. As far as I know, this method has worked successfully, so your suggestion that it's doomed to fail has me worried.

Someone even pointed out a reference to the Oracle manuals that suggested your method could fail, though the link is long out of date. So I went digging in the current manuals and found the same reference here:

http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14261/tuning.htm#sthref2218

If you search for the title: "Using LIMIT to Control the Number of Rows In a BULK COLLECT", you'll find their example code. Look at the line they commented out.

Or you can just check it here (the link was because I didn't want you to think I was making this up!):

DECLARE
   TYPE numtab IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
   CURSOR c1 IS SELECT employee_id FROM employees WHERE department_id = 80;
   empids    numtab;
   rows      PLS_INTEGER := 10;
BEGIN
  OPEN c1;
  LOOP -- the following statement fetches 10 rows or less in each iteration
    FETCH c1 BULK COLLECT INTO empids LIMIT rows;
    EXIT WHEN empids.COUNT = 0;
--  EXIT WHEN c1%NOTFOUND; -- incorrect, can omit some data
    DBMS_OUTPUT.PUT_LINE('------- Results from Each Bulk Fetch --------');
    FOR i IN 1..empids.COUNT LOOP
      DBMS_OUTPUT.PUT_LINE( 'Employee Id: ' || empids(i));
    END LOOP;
  END LOOP;
  CLOSE c1;
END;
/
Could you please help a poor, confused programmer who just wants to do it right??? :-/

Thanks,

Stew
Tom Kyte
February 13, 2008 - 1:18 pm UTC

that is NOT c%rowcount is it.

apples and toaster ovens here........


you are looking at how many rows the last fetch fetched into the array - ARRAY.COUNT

that is NOT cursor%rowcount at all.


your approach works, but my preference is:


loop
   fetch C bulk collect limit ....
   for i in 1 .. array.count 
   loop
      process...
   end loop;
   exit when c%notfound;
end loop;


bulk bind collections with addition of new column

Bonnie, March 06, 2008 - 4:10 pm UTC

Hi Tom,

Thank you for your explanations of bulk collect and FORALL.

I am wondering if there is a way to add additional columns that don't exist in the collection, and carry that forward to the Forall insert.

I've modified one of your examples to show you what I'm trying to do:

open cursor;
loop
fetch c bulk collect into l_c1, l_c2, ... LIMIT 1000;
for i in 1 .. l_c1.count
loop
s_newVariable := 'assign new data element to column not in collection' ;
end loop;

forall i in 1 .. l_c1.count
insert into T1(col1, col2,...,
newCol )
values ( L_c1(i).col1, L_c1(i).col2...,
s_newVariable);
end loop;
exit when c%notfound;
end loop;

Is this something that must be done with another data update or can it be done on data insert?

Your help is greatly appreciated as always,
Bonnie

Tom Kyte
March 07, 2008 - 5:40 pm UTC

not following you - in all of the examples, l_c1, l_c2, ... were all collections of scalars... single elements, there would be no "l_c1(i).col1" - just l_c1(i)


so, to "add" to the end is simple, you don't, you just have your own array of scalars.


My example was this:

   open cursor;
   loop
       fetch c bulk collect into l_c1, l_c2, ....... LIMIT 1000;
       for i in 1 .. l_c1.count
       loop
            process....
       end loop;
       forall i in 1 .. l_c1.count
            insert into ..... values ( L_c1(i), .... );
       exit when c%notfound;
   end loop;


You would just:

   open cursor;
   loop
       fetch c bulk collect into l_c1, l_c2, ....... LIMIT 1000;
       for i in 1 .. l_c1.count
       loop
            process....<b>filling in your_array(i)</b>
       end loop;
       forall i in 1 .. l_c1.count
            insert into ..... values ( L_c1(i), ...., <b>your_array(i)</b> );
       exit when c%notfound;
   end loop;
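A concrete sketch of that pattern (hypothetical target table T_AUDIT(username, created, load_tag); the derived value here is just an upper-cased tag):

declare
    cursor c is select username, created from all_users;

    type vc_array   is table of all_users.username%type index by pls_integer;
    type date_array is table of all_users.created%type  index by pls_integer;
    type tag_array  is table of varchar2(30)            index by pls_integer;

    l_username vc_array;
    l_created  date_array;
    l_tag      tag_array;    -- the "extra" column, filled in procedurally
begin
    open c;
    loop
        fetch c bulk collect into l_username, l_created limit 100;
        for i in 1 .. l_username.count
        loop
            l_tag(i) := upper( substr( l_username(i), 1, 3 ) ) || '-LOAD';
        end loop;
        forall i in 1 .. l_username.count
            insert into t_audit ( username, created, load_tag )
            values ( l_username(i), l_created(i), l_tag(i) );
        exit when c%notfound;
    end loop;
    close c;
end;
/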

A reader, March 07, 2008 - 7:04 pm UTC

Thank you Tom! Your answer was right on the spot even though I didn't express it correctly.

Bonnie

PLS-00436: implementation restriction: cannot reference fields of

bonnie, March 07, 2008 - 7:26 pm UTC

Hi Tom,

I have another problem: I am trying to bulk collect using a cursor, but I keep getting the following error when I run it:

PLS-00436: implementation restriction: cannot reference fields of BULK In-BIND table of records
PLS-00382: expression is of wrong type

What can I do to get around this using FORALL or something similar? I need to insert millions of rows of data. I tried creating a view and using a %ROWTYPE to define the type, but I get the same error.

Your help is greatly appreciated.

create or replace package pk1 as

TYPE t_emp IS RECORD(
empId emp.EMPID%TYPE,
firstName emp.firstName%TYPE,
lastName emp.lastName%TYPE,
address emp.address%TYPE,
phone emp.phone%TYPE,
pt_dob emp.dob%TYPE,
deptId dept.deptId%TYPE,
deptName dept.DEPTNAME%TYPE );

end;
/


create or replace package body pk1 as

TYPE tc_emp IS REF CURSOR RETURN t_emp;

c_emp tc_emp;

TYPE nstEmp IS TABLE OF c_emp%ROWTYPE index by pls_integer;

r_emp nstEmp;

cursor c is
select empId, firstName, lastName,
address, phone, dob, depId,deptName
from emp, dept
where emp.deptId = dept.deptid;

begin
open c;
loop
fetch c bulk collect into r_emp limit 100;
begin

forall i in 1..r_emp.count save exceptions
insert into t (EMPID,FIRSTNAME,LASTNAME,ADDRESS,PHONE,DOB,
DEPTID,DEPTNAME)
values (r_emp(i).EMPID,r_emp(i).FIRSTNAME,r_emp(i).LASTNAME,r_emp(i).ADDRESS,r_emp(i).PHONE,r_emp(i).DOB,
r_emp(i).DEPTID,r_emp(i).DEPTNAME);

end;
exit when c%notfound;
end loop;
close c;

end;
/

Thank you.
Bonnie

Tom Kyte
March 10, 2008 - 11:19 am UTC

ops$tkyte%ORA10GR2> drop table t;

Table dropped.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create table t as select username x, created y, user_id z from all_users;

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> declare
  2          type array is table of t%rowtype;
  3          l_data array;
  4  begin
  5          select username, created, user_id
  6            bulk collect into l_data
  7            from all_users;
  8
  9          forall i in 1 .. l_data.count
 10                  insert into t values L_DATA(i);
 11  end;
 12  /

PL/SQL procedure successfully completed.

To Bonnie

A reader, March 10, 2008 - 1:46 am UTC

try changing the package body like this to use bulk collect and test if it works for you.

create or replace package body pk1 as

TYPE tc_emp IS REF CURSOR RETURN t_emp;

c_emp tc_emp;

TYPE nstEmp IS TABLE OF c_emp%ROWTYPE index by pls_integer;

r_emp nstEmp;

type t_empId is table of emp.empid%type index by pls_integer;
type t_firstName is table of emp.firstName%type index by pls_integer;
type t_lastName is table of emp.lastName%type index by pls_integer;
type t_address is table of emp.address%type index by pls_integer;
type t_phone is table of emp.phone%type index by pls_integer;
type t_dob is table of emp.dob%type index by pls_integer;
type t_deptId is table of emp.deptId%type index by pls_integer;
type t_deptName is table of dept.deptName%type index by pls_integer;

r_empid t_empid;
r_firstName t_firstName;
r_lastName t_lastName;
r_address t_address;
r_phone t_phone;
r_dob t_dob;
r_deptId t_deptId;
r_deptName t_deptName;

cursor c is
select empId, firstName, lastName,
address, phone, dob, depId,deptName
from emp, dept
where emp.deptId = dept.deptid;

begin
open c;
loop
fetch c bulk collect into r_empid, r_firstname,r_lastname,r_address,r_phone,r_dob,r_deptid, r_deptname limit 100;
begin

forall i in 1..r_empid.count save exceptions
insert into t (EMPID,FIRSTNAME,LASTNAME,ADDRESS,PHONE,DOB,
DEPTID,DEPTNAME)
values
(r_empid(i), r_firstname(i),r_lastname(i),r_address(i),r_phone(i),r_dob(i),r_deptid(i), r_deptname(i) );

end;
exit when c%notfound;
end loop;
close c;

end;

Having said the above, why don't you simply have an insert into ... select in your package body, i.e.

insert into t (EMPID,FIRSTNAME,LASTNAME,ADDRESS,PHONE,DOB,
DEPTID,DEPTNAME)
select empId, firstName, lastName,
address, phone, dob, emp.deptId,deptName
from emp, dept
where emp.deptId = dept.deptid;

Hope this helps

For Bonnie

Anwar, March 10, 2008 - 2:14 am UTC

Change the syntax to:

forall i in 1..r_emp.count save exceptions
insert into (SELECT EMPID,FIRSTNAME,LASTNAME,ADDRESS,PHONE,DOB,DEPTID,DEPTNAME
FROM t)
values r_emp(i);

Bonnie, March 10, 2008 - 11:59 am UTC

Both of the answers help tremendously. Thank you! Thank you!

The reason why I didn't do a simple insert from a select is because I need to manipulate some of the data elements in the result set, and derive others to insert into the new table. Where possible I will use the insert into example!

Thank you again.

Bonnie

RE: manipulate some of the data elements in the result set, and derive others...

Duke Ganote, March 10, 2008 - 12:36 pm UTC

What manipulations and derivations require PL/SQL? (I'm truly curious; I've rarely seen anything that couldn't be done better in SQL.)

Bonnie, March 10, 2008 - 4:55 pm UTC

You're right. There isn't much, now that you can use CASE statements inside PL/SQL. We just upgraded to 10g from 8i, so I'm more used to doing things the old way. Thank you for the reminder. I'll keep that in mind.


RE: to 10g from 8i, so FORGET BULK BINDS

Duke Ganote, March 12, 2008 - 12:06 pm UTC

Shazam, that's a leap! With 10g, forget bulk binds and jump right into DML error logging!
http://tkyte.blogspot.com/2005/07/how-cool-is-this.html

But I'm a known SQL Snob -- our ETL programmers are oft aghast at how much can be done in SQL, thanks to analytic functions, external tables, MERGE, and DML error logging. And how modular it can be when subquery factoring (WITH clauses) are used.

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:47458244605081#424656900346384289

A reader, March 13, 2008 - 2:33 pm UTC

Thank you for bringing DML error logging to my attention. Funny, I took a 10g upgrade course, but this feature was not even mentioned.

I have a question for you SQL gurus.

I'm trying to bring data from the data source into 3 normalized tables.

The main table can be easily done with an
insert into t select * from src_customer.

For the other normalized tables, I want to create a sequence
for each record in the child tables with respect to the src_customer table. For example:
each customer can have 1 to many products. I want to number the products in order of occurrence from 1 to x for each customer. Can this be done in SQL?

I tried using a bulk collect into,
then a for loop to assign the sequence,
then a forall to insert the data into the tables. But since I'm re-assigning the sequence on each new customer, I get the following:

ORA-06531: Reference to uninitialized collection
ORA-06512: at "QSHI_SRC.LOADED", line 463
ORA-06512: at line 2

Any advice would be good.

Thank you.
Bonnie









Tom Kyte
March 15, 2008 - 9:13 am UTC

well, is there some natural key in the src_customer that can be referred to during the insert into the child. It would normally look like this:


ops$tkyte%ORA10GR2> create sequence s;

Sequence created.

ops$tkyte%ORA10GR2> create table p ( x number primary key, natural_key varchar2(30), data varchar2(30) );

Table created.

ops$tkyte%ORA10GR2> create table c ( y number primary key, x references p, data varchar2(30) );

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> insert into p (x, natural_key, data )
  2  select s.nextval, username, '......'
  3    from all_users;

39 rows created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> insert into c (y, x, data)
  2  select s.nextval, P.X, all_objects.object_name
  3    from all_objects, p
  4   where all_objects.owner = p.natural_key
  5  /

29704 rows created.
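And for the "1 to x within each customer" numbering asked about above, a sketch (an analytic-function alternative, using the same p/c tables):

select c.x                                                as parent_key,
       row_number() over (partition by c.x order by c.y) as occurrence_num,
       c.y,
       c.data
  from c;

row_number() restarts at 1 for every parent, so the per-customer sequence can be produced inside the INSERT ... SELECT itself, with no PL/SQL collections involved.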

RE: 10g, DML error logging, and child tables

Duke Ganote, March 14, 2008 - 10:29 am UTC

DML Error logging is a new feature of release 2 of 10g, but I'm assuming you upgraded to R2.
http://www.oracle-base.com/articles/10g/DmlErrorLogging_10gR2.php
As for child tables, are you asking about something like this?
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:259215259051#60077396628694

bulk insert with loop still slow

Sean, March 24, 2008 - 4:14 pm UTC

Hi Tom,

I used the bulk collect sample you mentioned in this thread to do the insert, but it is still slow. Someone mentioned that I should not use a loop and should join in the insert statement; instead, for each bulk collect, I should do the insert at once without a loop. I could not figure out the syntax to do that. Thanks so much for your help.

-- Sean


create table t1(c1 number);
create table t2(c1 varchar2(10), c2 number);
create table t3(c1 varchar2(10), c2 number);

insert into t1(c1) values(1);
insert into t1(c1) values(2);

insert into t2(c1, c2) values('A', 1);
insert into t2(c1, c2) values('B', 2);
insert into t2(c1, c2) values('C', 3);



create procedure p1
is
cursor c1 is
select c1 from t1;

type c1_type is table of number(12);
c1_arr c1_type;

begin
open c1;
loop

fetch c1 bulk collect into c1_arr limit 10;
forall i in c1_arr.first..c1_arr.last
insert into t3(c1)
select c1 from t2
where t2.c2 =c1_arr(i);

commit;
exit when c1%notfound;
end loop;
close c1;

end;


Tom Kyte
March 24, 2008 - 7:38 pm UTC

simple, and I've written it at least a million times by now:

insert into t3(c1)
select c1
from t2, t1
where t1.c1 = t2.c2;

just join, just insert, no code, just SQL.

Look at your "logic" here:

create procedure p1
is
cursor c1 is select c1 from t1;
begin
open c1;
loop
fetch c1 bulk collect into c1_arr limit 10;
forall i in c1_arr.first..c1_arr.last
insert into t3(c1)
select c1 from t2
where t2.c2 =c1_arr(i);

commit; -- WOW, is that a bug or what, do you even know that????
exit when c1%notfound;
end loop;
close c1;
end;

join T1 to T2 by t2.c2 = t1.c1
insert into T3

how simple is that?


stop thinking procedurally.

forget about writing code

when you are forced to write code - kick yourself and say "I'm going to read this sql manual for 30 minutes to find a way to do it in SQL before giving up"

do that for a couple of years... (yes, it sometimes takes years to become proficient in something because you have to *LET GO* of your old ways)

then you'll know SQL like you know <procedural language goes here> and you'll code (in sql) circles around anyone.

bulk collect with forall

gireeshkumar.K, May 12, 2008 - 3:53 am UTC

Hi Tom,
I made an enhancement with BULK COLLECT and FORALL on a procedure which was developed with a FOR-loop cursor.

When I executed the enhanced procedure it took more time compared with the previous one: the old (FOR-loop) procedure takes 48 mins and the new enhanced (BULK COLLECT) procedure takes 1 hr 24 mins. Please help me to resolve the problem.
Thanks,
Gireesh.
Tom Kyte
May 12, 2008 - 1:38 pm UTC

my car won't start.

why not?

we are even as far as level of detail goes...

I do not know why your refactored code is slower than your original, no one could.

bulk data retreival

pradikan, May 17, 2008 - 11:29 am UTC

Hello Tom,

Just a quick question... We have a big DW database and we have a few aggregations. We are fetching data from two or more tables having millions of records. The problem now is that fetching the data takes more than 8 hours...

Are there any hints that we can use to improve our response time and collect / retrieve data from those tables in a faster way?

please let me know

thanks

pradikan
Tom Kyte
May 19, 2008 - 3:53 pm UTC

... Are there any hints that we can use to improve our response time and collect / retrieve data from those tables in a faster way? ...

query less data - that would be a way.

seriously, you ask the vaguest of vague questions - you cannot expect anything truly useful can you?

where to start - parallel query, your memory settings, do you have more than one disk, what are your IO capabilities, how big is big, millions of records could be megabytes or terabytes - record counts are virtually useless.... where to start...

bulk collect performance test results

Oxnard, June 06, 2008 - 9:30 am UTC

I've seen a few writings about collections and the limit size to use, so I made my own test. The results were basically what I expected, except when the size of the limit was small (limit < 10000). My question is: why is the performance on small limit sizes so poor? I repeated the test various times on both Windows (my laptop) and an 8 CPU Unix system; the results were basically the same. Would you recommend a test like this to come up with the best limit size?

Here are the results
TEST_NAME     LIMIT#   AVG_TIME   MAX_TIME   MIN_TIME   MEDIAN_TIME
coll 100         100    1191.46    3070.91       3.94       1352.82
coll 200         200     613.16    1611.68       3.76        701.19
coll 400         400     316.97     859.64       3.58        368.76
coll 800         800     171.61     474.48       3.87        200.58
coll 1600       1600      99.75     277.17       3.82        117.28
coll 3200       3200      64.58     194.43       4.35         77.12
coll 6400       6400      47.78     124.59       4.63         57.61
coll 12800     12800      37.90      99.21       4.75         46.47
coll 25600     25600      33.40      89.42       5.07         40.95
coll 50000     50000      30.92      82.82       4.99         37.99
coll 100000   100000      29.98      81.31       4.81         37.06
rowXrow                   85.83     281.97      15.65         97.57
sql_only                  14.73      42.52       1.82         18.25


I'll put in as much of the test as possible; basically I put the procedures in a loop and ran them many times:
 procedure do_rowxrow is
 begin
  v_start_time := systimestamp;
  for rec in (select * from t1 left join t2 using (num)) loop
    rec.col1 := upper(rec.col1); -- some work 
    rec.col2 := upper(rec.col2); -- more work
    insert into t3 values(
     rec.num
     ,rec.col1
     ,rec.col2
     ,rec.col3
     ,rec.col4
     ,rec.col5
     ,rec.col6
     ,rec.col7
     ,rec.col8
     ,rec.col9
     ,rec.col10 
     ,rec.col11
     ,rec.col12
     ,rec.col13
     ,rec.col14
     ,rec.col15
     ,rec.col16
     ,rec.col17
     ,rec.col18
     ,rec.col19
     ,rec.col20);
  end loop;
  v_interval := systimestamp - v_start_time;
  insert into results values ('rowXrow','T',v_with_indx,v_start_time,systimestamp,v_interval,os,row#,null
                              ,(extract (day from v_interval) * 86400 +
                                extract (hour from v_interval) * 3600 +
                                extract (minute from v_interval) * 60 +
                                extract (second from v_interval))
                              );
  execute immediate tt3;
  commit;
 end do_rowxrow;

-- the ref cursor is the same SQL 
 procedure coll_method is
 begin
  v_start_time := systimestamp;
  open rc for select * from t1 left join t2 using (num);
  loop
   fetch rc bulk collect into c limit v_limit;
   exit when c.count = 0;
    for i in c.first..c.last loop
     c(i).col1 := upper(c(i).col1); -- manipulate the elements of a collection
     c(i).col2 := upper(c(i).col2);
    end loop;
   forall i in c.first..c.last insert into t3 values c(i);
  end loop;
  v_interval := systimestamp - v_start_time;
  insert into results values ('coll','T',v_with_indx,v_start_time,systimestamp,systimestamp - v_start_time,os,row#,v_limit
                              ,(extract (day from v_interval) * 86400 +
                                extract (hour from v_interval) * 3600 +
                                extract (minute from v_interval) * 60 +
                                extract (second from v_interval))
                              );
  execute immediate tt3;
  commit;
  close rc;
 end coll_method;

 procedure sql_only is
 begin
  v_start_time := systimestamp;
  insert into t3
   select
    num
    ,upper(col1)
    ,upper(col2)
    ,col3
    ,col4
    ,col5
    ,col6
    ,col7
    ,col8
    ,col9
    ,col10
    ,col11
    ,col12
    ,col13
    ,col14
    ,col15
    ,col16
    ,col17
    ,col18
    ,col19
    ,col20
   from t1 left join t2 using (num);
  v_interval := systimestamp - v_start_time;
  insert into results values ('sql_only','T',v_with_indx,v_start_time,systimestamp,systimestamp - v_start_time,os,row#,null
                              ,(extract (day from v_interval) * 86400 +
                                extract (hour from v_interval) * 3600 +
                                extract (minute from v_interval) * 60 +
                                extract (second from v_interval))
                              );
  execute immediate tt3;
  commit;
 end sql_only;



Here's my test tables:

create table t1(
 num int not null,
 col1 varchar2(100) not null,
 col2 varchar2(100) not null,
 col3 varchar2(100) not null,
 col4 varchar2(100) not null,
 col5 varchar2(100) not null,
 col6 varchar2(100) not null,
 col7 varchar2(100) not null,
 col8 varchar2(100) not null,
 col9 varchar2(100) not null,
 col10 varchar2(100) not null);
 
create table t2(
 num int not null, 
 col11 varchar2(100) not null,
 col12 varchar2(100) not null,
 col13 varchar2(100) not null,
 col14 varchar2(100) not null,
 col15 varchar2(100) not null,
 col16 varchar2(100) not null,
 col17 varchar2(100) not null,
 col18 varchar2(100) not null,
 col19 varchar2(100) not null,
 col20 varchar2(100) not null);


create table t3(
 num number(38,0)
 ,col1    varchar2(100) not null
 ,col2    varchar2(100) not null
 ,col3    varchar2(100) not null
 ,col4    varchar2(100) not null
 ,col5    varchar2(100) not null
 ,col6    varchar2(100) not null
 ,col7    varchar2(100) not null
 ,col8    varchar2(100) not null
 ,col9    varchar2(100) not null
 ,col10    varchar2(100) not null
 ,col11    varchar2(100)
 ,col12    varchar2(100)
 ,col13   varchar2(100)
 ,col14   varchar2(100)
 ,col15   varchar2(100)
 ,col16   varchar2(100)
 ,col17   varchar2(100)
 ,col18   varchar2(100)
 ,col19   varchar2(100)
 ,col20    varchar2(100)
) ;

create table results(
 test_name         varchar2(50) not null,
 test_completed    varchar2(1) not null constraint results_chk01 check (test_completed in ('T','F')),
 with_indx         varchar2(1) not null constraint results_chk02 check (with_indx in ('T','F')),
 start_time        timestamp not null,
 end_time          timestamp not null,
 test_interval     interval day to second not null,
 os                varchar2(10) not null,
 test_rows         number(10,0) not null,
 limit#            int,
 sec#              number not null
);



Tom Kyte
June 09, 2008 - 1:33 pm UTC

well - I could not run your code - very incomplete test case.

and your results don't make sense to me.

and your rowxrow is really 100 row pre-fetch (assuming 10.2 - plsql does an implicit array fetch of 100 rows)

so, I rewrote your example as follows:



create table t1(
 num int not null,
 col1 varchar2(100) default rpad('x',100,'x') not null,
 col2 varchar2(100) default rpad('x',100,'x') not null,
 col3 varchar2(100) default rpad('x',100,'x') not null,
 col4 varchar2(100) default rpad('x',100,'x') not null,
 col5 varchar2(100) default rpad('x',100,'x') not null,
 col6 varchar2(100) default rpad('x',100,'x') not null,
 col7 varchar2(100) default rpad('x',100,'x') not null,
 col8 varchar2(100) default rpad('x',100,'x') not null,
 col9 varchar2(100) default rpad('x',100,'x') not null,
 col10 varchar2(100) default rpad('x',100,'x') not null);

create table t2(
 num int not null,
 col11 varchar2(100) default rpad('x',100,'x') not null,
 col12 varchar2(100) default rpad('x',100,'x') not null,
 col13 varchar2(100) default rpad('x',100,'x') not null,
 col14 varchar2(100) default rpad('x',100,'x') not null,
 col15 varchar2(100) default rpad('x',100,'x') not null,
 col16 varchar2(100) default rpad('x',100,'x') not null,
 col17 varchar2(100) default rpad('x',100,'x') not null,
 col18 varchar2(100) default rpad('x',100,'x') not null,
 col19 varchar2(100) default rpad('x',100,'x') not null,
 col20 varchar2(100) default rpad('x',100,'x') not null);


create table t3 as select * from t1 left join t2 using (num);

create table results
( name    varchar2(30),
  start_time timestamp,
  end_time   timestamp,
  cpu_time   number
)
/

insert into t1(num) select rownum from big_table.big_table where rownum <= 100000;
insert into t2(num) select rownum from big_table.big_table where rownum <= 100000;
commit;

create or replace procedure do_rowxrow
is
    cursor c is select * from t1 left join t2 using (num);
    l_cpu   number := dbms_utility.get_cpu_time;
    l_rowid rowid;
 begin
    insert into results( name, start_time ) values ( 'rowxrow', systimestamp ) returning rowid into l_rowid;
    for rec in c
    loop
        rec.col1 := upper(rec.col1); -- some work
        rec.col2 := upper(rec.col2); -- more work
        insert into t3 values rec;
    end loop;
    commit;
    l_cpu := dbms_utility.get_cpu_time-l_cpu;
    update results set end_time = systimestamp, cpu_time = l_cpu where rowid = l_rowid;
end;
/
create or replace procedure do_bulk( p_limit in number )
is
    cursor c is select * from t1 left join t2 using (num);
    type array is table of c%rowtype index by binary_integer;

    l_data  array;
    l_cpu   number := dbms_utility.get_cpu_time;
    l_rowid rowid;
 begin
    insert into results( name, start_time ) values ( 'bulk ' || p_limit, systimestamp ) returning rowid into l_rowid;
    open c;
    loop
        fetch c bulk collect into l_data limit p_limit;
        for i in 1 .. l_data.count
        loop
            l_data(i).col1 := upper(l_data(i).col1); -- some work
            l_data(i).col2 := upper(l_data(i).col2); -- more work
        end loop;
        forall i in 1 .. l_data.count
            insert into t3 values l_data(i);
        exit when c%notfound;
    end loop;
    close c;
    commit;
    l_cpu := dbms_utility.get_cpu_time-l_cpu;
    update results set end_time = systimestamp, cpu_time = l_cpu where rowid = l_rowid;
end;
/
create or replace procedure do_it_correctly
is
    l_cpu   number := dbms_utility.get_cpu_time;
    l_rowid rowid;
begin
    insert into results( name, start_time ) values ( 'the right way', systimestamp ) returning rowid into l_rowid;
    insert into t3 select * from t1 left join t2 using (num);
    commit;
    l_cpu := dbms_utility.get_cpu_time-l_cpu;
    update results set end_time = systimestamp, cpu_time = l_cpu where rowid = l_rowid;
end;
/

declare
    type array is table of number;
    l_data array := array( 100, 500, 1000, 50000, 100000 );
begin
    for i in 1 .. 3
    loop
        do_rowxrow;
        execute immediate 'truncate table t3';
        for j in 1 .. l_data.count
        loop
            do_bulk( l_data(j) );
            execute immediate 'truncate table t3';
        end loop;
        do_it_correctly;
        execute immediate 'truncate table t3';
    end loop;
end;
/

set linesize 1000
select name, count(*),
       avg(cpu_time) acpu, min(cpu_time) mincpu, max(cpu_time) maxcpu,
       avg(elap_time) aela, min(elap_time) minela, max(elap_time) maxela
  from (
select name, cpu_time, extract (day from end_time-start_time) * 86400 +
                                extract (hour from end_time-start_time) * 3600 +
                                extract (minute from end_time-start_time) * 60 +
                                extract (second from end_time-start_time) elap_time
 from results
       )
 group by name
 order by case when name like 'bulk %' then to_number( substr( name, 5 ) ) else 0 end, name
/






and what I saw looked like this (dell poweredge, red hat linux)

NAME             COUNT(*)      ACPU    MINCPU    MAXCPU      AELA    MINELA    MAXELA
--------------- --------- --------- --------- --------- --------- --------- ---------
rowxrow              3.00  1,515.33  1,508.00  1,529.00    121.80     77.24    184.52
the right way        3.00    688.67    683.00    697.00     74.87     73.87     75.51
bulk 100             3.00    800.00    791.00    808.00     74.23     71.84     75.46
bulk 500             3.00    776.67    774.00    779.00     76.92     76.03     77.97
bulk 1000            3.00    760.67    756.00    767.00     75.99     75.92     76.05
bulk 50000           3.00    779.33    773.00    788.00     78.83     76.87     80.05
bulk 100000          3.00    816.67    807.00    825.00    102.77     92.92    119.86

7 rows selected.



or looking at it sorted by cpu:

ops$tkyte%ORA10GR2> select name, count(*),
  2         avg(cpu_time) acpu, min(cpu_time) mincpu, max(cpu_time) maxcpu,
  3         avg(elap_time) aela, min(elap_time) minela, max(elap_time)maxela
  4    from (
  5  select name, cpu_time, extract (day from end_time-start_time) * 86400 +
  6                                  extract (hour from end_time-start_time) * 3600 +
                                     extract (minute from end_time-start_time) * 60 +
                                     extract (second from end_time-start_time) elap_time
  7   from results
  8         )
  9   group by name
 10   order by acpu
 11  /

NAME             COUNT(*)      ACPU    MINCPU    MAXCPU      AELA    MINELA    MAXELA
--------------- --------- --------- --------- --------- --------- --------- ---------
the right way        3.00    688.67    683.00    697.00     74.87     73.87     75.51
bulk 1000            3.00    760.67    756.00    767.00     75.99     75.92     76.05
bulk 500             3.00    776.67    774.00    779.00     76.92     76.03     77.97
bulk 50000           3.00    779.33    773.00    788.00     78.83     76.87     80.05
bulk 100             3.00    800.00    791.00    808.00     74.23     71.84     75.46
bulk 100000          3.00    816.67    807.00    825.00    102.77     92.92    119.86
rowxrow              3.00  1,515.33  1,508.00  1,529.00    121.80     77.24    184.52

7 rows selected.



so, can you use this trimmed down example? It gets better information (cpu time is very relevant - if cpu < ela by a large margin, you got stuck waiting for something; that would be a red flag)

Update Millions Data

ANUPAM, June 09, 2008 - 4:20 am UTC

I need to update a 706 million row table by reading a 105 million row table, using an Oracle PL/SQL block/stored procedure:

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> set time on
06:28:46 SQL> set timing on
06:28:51 SQL> set serveroutput on
06:29:00 SQL>
06:29:00 SQL> DECLARE
06:30:38 2
06:30:38 3 CURSOR c1 IS
06:30:39 4 SELECT DELINQ_DIM_KEY,ACCT_DIM_NB,EFF_START_DT,EFF_END_DT FROM STG_ODS_DELQ_DIM;
06:30:39 5
06:30:39 6 TYPE c1_type IS TABLE OF c1%ROWTYPE INDEX BY PLS_INTEGER;
06:30:39 7 c1rec c1_type;
06:30:40 8 --cnt number(38):=0;
06:30:40 9
06:30:40 10 BEGIN
06:30:40 11 OPEN c1;
06:30:40 12 LOOP
06:30:41 13
06:30:41 14 FETCH c1 BULK COLLECT INTO c1rec LIMIT 500;
06:30:41 15 EXIT WHEN c1%NOTFOUND;
06:30:41 16
06:30:41 17 FOR i IN 1 .. c1rec.COUNT LOOP
06:30:42 18 UPDATE STG_ODS_DELQ_FCT SET DELINQ_DIM_KEY=c1rec(i).DELINQ_DIM_KEY
06:30:42 19 WHERE ACCT_DIM_NB=c1rec(i).ACCT_DIM_NB AND DATA_AS_OF_DT BETWEEN c1rec(i).EFF_START_DT AND c1rec(i).EFF_END_DT;
06:30:42 20 --cnt := cnt+1;
06:30:42 21
06:30:43 22 END LOOP;
06:30:43 23 COMMIT;
06:30:43 24 END LOOP;
06:30:43 25 CLOSE c1;
06:30:43 26 COMMIT;
06:30:43 27 --DBMS_OUTPUT.PUT_LINE(cnt);
06:30:44 28 EXCEPTION
06:30:44 29 WHEN OTHERS THEN
06:30:44 30 ROLLBACK;
06:30:44 31 DBMS_OUTPUT.PUT_LINE('1'||SQLERRM);
06:30:44 32
06:30:45 33 END;
06:30:45 34 /

I have seen it updating 50K rows per minute, which means 706m will take 235 hrs.

Could you please help me out of this problem and make it faster?

Best Regards,
Anupam
Tom Kyte
June 09, 2008 - 11:58 am UTC

ugh, what the heck sort of logic prompted this:

06:30:44 28 EXCEPTION
06:30:44 29  WHEN OTHERS THEN
06:30:44 30  ROLLBACK;
06:30:44 31  DBMS_OUTPUT.PUT_LINE('1'||SQLERRM);
06:30:44 32
06:30:45 33 END;
06:30:45 34 / 



why, WHY, what are the possible thoughts behind that?????

did you know that if you deleted those lines of code, this is what would happen:

a) the database would have rolled back for you automagically - we do this thing called statement level atomicity - a statement either entirely happens or not - if you just let the error ESCAPE from the database - it will automatically rollback (this is the way it is SUPPOSED TO WORK)

b) you would get the line number the actual error happened on !!!! wouldn't that be neat!!!

c) you'd see the error message on the screen - all nicely formatted and everything, it would be very obvious that it failed.

d) you could have sqlplus exit with a return code - so you would not have to grep '^1' (i assume that is why you have 1 in front of the error message)


I'll never ever get it in a billion years. why do people do this.
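Point (a) is easy to see for yourself - a minimal sketch with a throwaway table (names purely illustrative):

create table t ( x number primary key );

begin
    insert into t values ( 1 );
    insert into t values ( 1 );   -- raises ORA-00001
end;
/
-- the unhandled ORA-00001 escapes the block, so the database rolls back
-- everything the block did - no WHEN OTHERS needed - and the real error
-- message and line number land on the screen
select count(*) from t;          -- comes back 0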


why did you write code here at all?

merge into STG_ODS_DELQ_FCT a
using STG_ODS_DELQ_DIM b
on ( a.ACCT_DIM_NB = b.ACCT_DIM_NB and a.data_as_of_dt between b.eff_start_dt and b.eff_end_dt )
when matched then update set DELINQ_DIM_KEY = b.DELINQ_DIM_KEY;

will retry

Oxnard, June 09, 2008 - 3:12 pm UTC

could not put the whole code block into the review. will implement your changes and retry.

Thanks for the great suggestion

Anupam Liz, August 18, 2008 - 4:17 am UTC

Really Helpful comments!! Thanks a lot!!

Bulk Collect Within PL/SQL Procedure

Eugene, September 23, 2008 - 6:01 pm UTC

I'm trying to do something similar to what Sean of New Jersey mentioned on 3/24/08. If I could use an INSERT / SELECT I would, but there are several bits of conditional processing needed. There's also summation which must be done prior to the INSERT action.

The SELECT query, a cursor, joins 7 tables, half of them having 10s of millions of rows. It then conditionally selects data from an 8th table (not mentioned in the notes below).

Is there a better approach to performing the SELECTs and then aggregating the needed subtotals in an associative array? The performance of the current arrangement I'm trying to picture here is suffering greatly.

Thanks - Eugene

/* KEY SECTIONS FROM MY PROC */

CURSOR v_rev_tran_cursor(c_rev_month DATE, c_next_month DATE) IS
SELECT * FROM
(SELECT /*+ PARALLEL (ft,2, 4) */
ft.ft_id, ft.sibling_id, ft.ft_type_flg, ft.gl_division,
ft.cis_division, bc.rs_cd, bc.header_seq, bc.effdt,
cl.seqno, cl.dst_id, cl.calc_amt, cl.bill_sq, cl.rc_seq,
NVL(bch.char_val, '9999') SVC_Prod_cd,
(SELECT descr FROM cisadm.ci_char_val_l
WHERE char_type_cd = 'GLCMP_SC'
AND char_val = bch.char_val) descr
FROM CISADM.ci_ft ft, CISADM.ci_sa sa, CISADM.ci_sa_type t,
CISADM.ci_bseg_calc bc, CISADM.ci_bseg_calc_ln cl,
(SELECT dx.dst_id, dx.effdt, dx.char_val,
MAX(dx.effdt) OVER (PARTITION BY dx.dst_id) max_date
FROM cisadm.ci_dst_cd_char dx
WHERE dx.char_type_cd = 'CI_GLTY '
AND dx.char_val IN ('R', 'RM')
AND dx.effdt <= c_rev_month) dc,
CISADM.ci_bseg_cl_char bch
WHERE ft.sa_id = sa.sa_id
AND sa.cis_division = t.cis_division
AND sa.sa_type_cd = t.sa_type_cd
AND bc.bseg_id = ft.sibling_id
AND cl.bseg_id = bc.bseg_id
AND cl.header_seq = bc.header_seq
AND cl.dst_id = dc.dst_id
AND bch.bseg_id(+) = cl.bseg_id
AND bch.header_seq(+) = cl.header_seq
AND bch.seqno(+) = cl.seqno
AND ft.accounting_dt BETWEEN c_rev_month AND c_next_month
AND ft.gl_distrib_status = 'D'
AND ft.ft_type_flg IN ('BS', 'BX')
AND t.svc_type_cd IN ('E ', 'G ', 'C ')
AND t.sa_type_cd NOT IN ('EX-CO ', 'GX-CO ', 'CONS-TEP')
AND dc.effdt = dc.max_date
AND bch.char_type_cd(+) = 'GLCMP_SC')
ORDER BY ft_id, dst_id, header_seq, seqno;

/* v_rev_tran_row v_rev_tran_cursor%ROWTYPE; */

TYPE rev_tran_array IS TABLE OF v_rev_tran_cursor%ROWTYPE
INDEX BY PLS_INTEGER;
v_rev_tran_row rev_tran_array;

BEGIN
(The values for V_REV_MONTH and V_NEXT_MONTH are
supplied as IN parameters to this proc. The value
I've used for the LIMIT on the BULK COLLECT was
10,000....I hadn't read this thread before setting
that value.)

OPEN v_rev_tran_cursor(v_rev_month, v_next_month);
LOOP
BEGIN

FETCH v_rev_tran_cursor
BULK COLLECT INTO v_rev_tran_row LIMIT v_bulk_col_limit;

/* DEBUGGING - testing use only */
-- v_debug_cur_row_count := v_rev_tran_row.COUNT;

EXCEPTION
WHEN others THEN
v_sql := SQLCODE;
v_text:= SQLERRM || ' ' || v_procname;
INSERT INTO (a log table for error tracking)
VALUES
(sysdate, v_sql, v_text, 'CurFtch', v_debug_ft_id, NULL, NULL);
GOTO errorend;
END;


/* Handle each row found for each and every FT_ID*/
FOR i IN 1 .. v_rev_tran_row.COUNT
LOOP

(approx. 300 lines of code follow here).

(Each cursor row is matched with those of an associative array, based on 8 "column" values.
If the values match, three other values will be accumulated in the associative array row. If
there is no match, a new row will be created in the array.)

(Later in the procedure the associative array's rows are inserted into a table with the same
layout (i.e. datatypes of columns) as the array.)

END LOOP; /* FOR i IN 1 .. v_bulk_col_limit */


<<nexttran>>

EXIT WHEN v_rev_tran_row.COUNT < v_bulk_col_limit;

END LOOP; /* FOR v_rev_tran_row IN v_rev_tran_.. */

CLOSE v_rev_tran_cursor;


Tom Kyte
September 24, 2008 - 6:45 pm UTC

why can't you just join to the 8th table? tell us in pseudo code/just text why you cannot do this in a single sql?

why do YOU NEED to aggregate, why cannot SQL do it?

Duke, September 24, 2008 - 5:57 am UTC

"To me -- 1000 rows in an array fetch, based on experience -- past performance -- memory usage -- it
is 10 times larger than I would recommend."

With regard to the above, you suggest a limit of 1000 in a bulk fetch based on your experience. I can imagine that this is dependent on the platform, the CPU and many other things, and I am not as experienced as you, so how can we make an educated guess at this 1000 limit? I mean, over time, when hardware changes, this must change a bit. How can we calculate the 1000 limit number?
Tom Kyte
September 24, 2008 - 7:24 pm UTC

parameterize it (don't hard code it)

measure it (your program response times) with different sizes

do that from time to time over time so you can change it as the "best value" changes.

somewhere between

a) the page size displayed to the end user or
b) 100 to 500 for a batch oriented process (a stored procedure call that must return to client AFTER the last row is fetched)

Anything else is just a waste of memory - think about it, you are fetching into an array of data - bigger array = more to allocate, more to manage, more work that needs to be done on either end of the pipe before switching back and forth...
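A minimal sketch of that measuring loop, assuming the bulk-collect routine has already been parameterized as my_batch_job(p_limit) (a hypothetical name):

declare
    l_cpu number;
begin
    for l in ( select column_value lim
                 from table( sys.odcinumberlist( 100, 500, 1000, 5000 ) ) )
    loop
        l_cpu := dbms_utility.get_cpu_time;
        my_batch_job( l.lim );                         -- hypothetical: your routine
        dbms_output.put_line( 'limit=' || l.lim || '  cpu hsecs=' ||
                              ( dbms_utility.get_cpu_time - l_cpu ) );
    end loop;
end;
/

Re-run it from time to time and keep whichever limit wins on your hardware.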

Should we use exit when cursor%NOTFOUND

Hemant K Chitale, October 22, 2008 - 10:40 pm UTC

Tom,

In your comment :
October 11, 2002 - 7pm US/Eastern: Use the c%notfound at the bottom like I do

you recommend the "exit when c%notfound"

however, Steven Feuerstein recommends not using "%notfound" in his article at https://asktom.oracle.com/Misc/oramag/oracle-database-11g-redux.html

Are the two recommendations inconsistent or in different contexts ? What is the difference ?
Tom Kyte
October 23, 2008 - 9:19 am UTC

I disagree with his approach, it is a matter of 'form' I guess.

I code:

loop
    fetch C bulk collect into ... LIMIT N;
    .... PROCESS FOR I in 1..array.count HERE
    exit when c%notfound;
end loop;



He says to code:

loop
    fetch c bulk collect into .... LIMIT N;
    exit when array.count = 0;
    ..... PROCESS FOR I in 1..array.count HERE
end loop;


we can both argue till the cows come home that our approach is "best" - it is six of one, half a dozen of the other.

I do not agree with his closing statement: "Ignore the values returned by the cursor attributes, especially %NOTFOUND."



Re: exit when c%notfound

Stew Ashton, October 23, 2008 - 6:27 pm UTC


Bryn Llewellyn prefers this approach, which avoids a potential extra fetch :
loop
    fetch Cur bulk collect into Results limit Batchsize;
    -- The for loop doesn't run when Results.Count() = 0
    for j in 1..Results.Count() loop
        Process_One_Record(Results(j));
    end loop;
    exit when Results.Count() < Batchsize;
end loop;
http://www.oracle.com/technology/tech/pl_sql/pdf/doing_sql_from_plsql.pdf
It seems to me your approach is more compact than Bryn's and avoids the extra fetch as well.

Forall insert not saving time

Sudip Bhowmik, October 30, 2008 - 10:05 am UTC

Hi Tom,
I've written a piece of code like this. Previously it was a normal fetch, and with a normal for loop over one million records were inserted into a table. I've modified it with bulk collect and forall (please see the code below), but it has not saved any time. Like the previous normal for loop, it took around 2 hours. Could you please tell me why it is taking the same time? Am I missing anything? Can't we write forall like 'forall index in 1..rec.count insert into ... select a, rec(index) from tab a'? If we can't write forall that way, then how can we tune this piece of code?

CREATE OR REPLACE PROCEDURE ANSA_EXPLODE_CONTRACTS
IS
strerrordesc VARCHAR2(2000);
lngerrorcd NUMBER(10);
p_seq VARCHAR2(200);

TYPE agent_id IS TABLE OF ansa_agents.agent_id%TYPE;
TYPE stat_comp_code IS TABLE OF ansa_agents.stat_comp_code%TYPE;
TYPE business_unit_id IS TABLE OF ansa_agents.business_unit_id%TYPE;

TYPE contract_type IS TABLE OF ansa_contracts.contract_type%TYPE;
TYPE START_DATE IS TABLE OF ansa_contracts.START_DATE%TYPE;
TYPE end_date IS TABLE OF ansa_contracts.end_date%TYPE;
TYPE effective_date IS TABLE OF ansa_contracts.effective_date%TYPE;
TYPE elig_product IS TABLE OF ansa_contracts.elig_product%TYPE;
TYPE ROW_ID IS TABLE OF VARCHAR2(200);

rec_agent_id agent_id;
rec_stat_comp_code stat_comp_code;
rec_business_unit_id business_unit_id;
rec_contract_type contract_type;
rec_START_DATE START_DATE;
rec_end_date end_date;
rec_effective_date effective_date;
rec_elig_product elig_product;
rec_ROW_ID ROW_ID;


CURSOR getall
IS
SELECT /*+ RULE */
a.agent_id, a.stat_comp_code, a.business_unit_id, ac.contract_type, ac.START_DATE, ac.end_date, ac.effective_date, ac.elig_product,
ac.ROWID
FROM ansa_contracts ac, ansa_agents a
WHERE <Big where clause>;


BEGIN
en_table_mods.executesql('truncate table ansa_contracts_exploded');
en_table_mods.dropindex('XAK1ANSA_CONTRACTS_EXPLODED');
en_table_mods.dropindex('XIE1ANSA_CONTRACTS_EXPLODED');
en_table_mods.dropindex('XIE2ANSA_CONTRACTS_EXPLODED');


OPEN getall;
LOOP

FETCH getall
BULK COLLECT INTO
rec_agent_id ,
rec_stat_comp_code ,
rec_business_unit_id ,
rec_contract_type ,
rec_START_DATE ,
rec_end_date ,
rec_effective_date ,
rec_elig_product ,
rec_ROW_ID
LIMIT 100;




FORALL indx IN 1 .. rec_agent_id.COUNT
INSERT INTO ansa_contracts_exploded
(writing_agent_id, writing_agent_stat_comp_code, superior_agent_id, superior_agent_level,
contract_start_date, contract_end_date, contract_modified_date, contract_effective_date,
contract_level, contract_type, contract_agent_id, business_unit_id, hierarchy_start_date,
hierarchy_contract_type,elig_product, hierarchy_end_date, broken_contract)
SELECT
DISTINCT rec_agent_id(indx), c.stat_comp_code, c.superior_agent_id, c.superior_agent_level, c.start_date,
c.end_date, c.modified_date, rec_effective_date(indx), c.contract_level, c.contract_type, c.agent_id,
c.business_unit_id, rec_START_DATE(indx), rec_contract_type(indx),
c.elig_product, rec_end_date(indx), 0
FROM ansa_contracts c
WHERE
c.agent_id NOT IN ('99999','MKTNG')
AND c.START_DATE <= rec_end_date(indx)
AND c.end_date >= rec_start_date(indx)
AND c.effective_date =
(SELECT MAX(effective_date)
FROM ansa_contracts
WHERE agent_id = c.agent_id
AND contract_level = c.contract_level
AND stat_comp_code = c.stat_comp_code
AND contract_type = c.contract_type
AND business_unit_id = c.business_unit_id
AND elig_product = c.elig_product
AND start_date = c.start_date
AND effective_date <= rec_effective_date(indx))
AND c.modified_date =
(SELECT MAX(modified_date)
FROM ansa_contracts
WHERE agent_id = c.agent_id
AND contract_level = c.contract_level
AND stat_comp_code = c.stat_comp_code
AND contract_type = c.contract_type
AND business_unit_id = c.business_unit_id
AND elig_product = c.elig_product
AND start_date = c.start_date
AND effective_date = c.effective_date)

START WITH (rowid = rec_ROW_ID(indx)
)
CONNECT BY PRIOR superior_agent_id = c.agent_id
AND PRIOR superior_agent_level = c.contract_level
AND PRIOR stat_comp_code = c.stat_comp_code
AND PRIOR contract_type = c.contract_type
AND PRIOR START_DATE <= rec_end_date(indx)
AND PRIOR end_date >= rec_START_DATE(indx)
AND PRIOR effective_date <= rec_effective_date(indx)
AND PRIOR business_unit_id = business_unit_id
AND PRIOR elig_product = elig_product
;


EXIT WHEN
rec_agent_id.COUNT = 0;
END LOOP;
CLOSE getall;

COMMIT;

--Remaining code


Waiting for your valuable response.
Tom Kyte
November 02, 2008 - 1:52 pm UTC

because it is doubtful that any meaningful percentage of the runtime was spent doing the actual "insert", it looks like most of the time would have been spent doing the 'select'ing of the data to insert.


to insert 1,000,000 records - should take seconds, maybe - in extreme circumstances, a minute or so...

the following times are from my 2.5 year old laptop running a virtual machine! (eg: I am running windows which is running linux which is doing the insert - while I'm typing this on a laptop)

ops$tkyte%ORA10GR2> @big_table 1000000
ops$tkyte%ORA10GR2> create table big_table
  2  as
  3  select rownum id,
  4                 OWNER, OBJECT_NAME, SUBOBJECT_NAME,
  5                 OBJECT_ID, DATA_OBJECT_ID,
  6                 OBJECT_TYPE, CREATED, LAST_DDL_TIME,
  7                 TIMESTAMP, STATUS, TEMPORARY,
  8                 GENERATED, SECONDARY
  9    from all_objects a
 10   where 1=0
 11  /

Table created.

Elapsed: 00:00:03.26
ops$tkyte%ORA10GR2> alter table big_table nologging;

Table altered.

Elapsed: 00:00:00.71
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> declare
  2      l_cnt number;
  3      l_rows number := &1;
  4  begin
  5      insert /*+ append */
  6      into big_table
  7      select rownum,
  8                 OWNER, OBJECT_NAME, SUBOBJECT_NAME,
  9                 OBJECT_ID, DATA_OBJECT_ID,
 10                 OBJECT_TYPE, CREATED, LAST_DDL_TIME,
 11                 TIMESTAMP, STATUS, TEMPORARY,
 12                 GENERATED, SECONDARY
 13        from all_objects a
 14           where rownum <= &1;
 15
 16      l_cnt := sql%rowcount;
 17
 18      commit;
 19
 20      while (l_cnt < l_rows)
 21      loop
 22          insert /*+ APPEND */ into big_table
 23          select rownum+l_cnt,
 24                 OWNER, OBJECT_NAME, SUBOBJECT_NAME,
 25                 OBJECT_ID, DATA_OBJECT_ID,
 26                 OBJECT_TYPE, CREATED, LAST_DDL_TIME,
 27                 TIMESTAMP, STATUS, TEMPORARY,
 28                 GENERATED, SECONDARY
 29            from big_table
 30           where rownum <= l_rows-l_cnt;
 31          l_cnt := l_cnt + sql%rowcount;
 32          commit;
 33      end loop;
 34  end;
 35  /
old   3:     l_rows number := &1;
new   3:     l_rows number := 1000000;
old  14:         where rownum <= &1;
new  14:         where rownum <= 1000000;

PL/SQL procedure successfully completed.

Elapsed: 00:01:23.45



So, your 'sql' must be taking the other 1.999999999 hours of run time. Best to find out where you spend your time before tuning - in your case the red flag should have been "1,000,000 rows in the year 2008 is teeny tiny, we are not spending the bulk of 2 hours doing the insert here, something else is consuming the time"

Forall is not working

Sudip Bhowmik, November 06, 2008 - 7:13 am UTC

Hi Tom,

But I've checked the select, and an individual select doesn't take more than a second. It seems forall is not working and it is acting as a normal for loop. That's why for a million records it took more than 2 hours. Also I've checked that for the first 800000 records it took one hour, but for the remaining 300000 records it took another hour. That means as more records were inserted into the table it got slower and slower. I'm really clueless about what to do. Why is the forall not working? Your help will be highly appreciated.
Tom Kyte
November 11, 2008 - 2:18 pm UTC

a million records, as demonstrated, would not take 2 hours. unless you are doing something else - something complex - to generate those records.

forall is working - but if you do a lot of things that TAKE A LONG TIME, batching them up will not affect your runtime significantly.

your select is slow.

prove me wrong. using tkprof - PROVE me wrong.


you'll see that the insert spends almost all of its time.....selecting.

bulk load

A reader, November 10, 2008 - 12:00 pm UTC

Greetings thomas,

and thanks like always,

I have a 400G table in one database that I want to move to another.

Which is better:
1)create table .. as select
or
2)use bulk load.

Thanks
Tom Kyte
November 11, 2008 - 3:48 pm UTC

option 3 of course

transport it, just copy the datafiles instead of reloading and taking the time to format blocks and everything.


bulk load

Areader, November 12, 2008 - 8:45 am UTC

Greetings Thomas,

and thanks like always

--transport it, just copy the datafiles instead of reloading and taking the time to format blocks and everything

source database is 9.2.0.6
source tablespace blocksize 16k

target database is 10.2.0.1
target tablespace blocksize 8k

both databases have same platform.

do you think "option 3" is still applicable?

Thankx
Tom Kyte
November 12, 2008 - 10:44 am UTC

it would be nice if both were the same blocksize - but you can

a) configure a 16k cache on the 10g instance
b) attach the datafiles
c) use ALTER TABLE MOVE and ALTER INDEX REBUILD

to do it - to get the data into a nice 8k blocksize tablespace on 10g so you can drop the 16k tablespace and get rid of the 16k cache.

the alters can be done in parallel, nologging - skip undo and redo generation.
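In outline, that might look like this (a sketch only - object, tablespace and cache sizes are illustrative):

alter system set db_16k_cache_size = 64m;      -- (a) a cache for the 16k datafiles

-- (b) plug the transported tablespace in (imp/impdp with the transportable option)

-- (c) relocate each segment into a native 8k tablespace, in parallel, nologging
alter table big_tab move tablespace users_8k nologging parallel 8;
alter index big_tab_pk rebuild tablespace users_8k nologging parallel 8;

-- then drop the 16k tablespace and give the 16k cache back
drop tablespace ts_16k including contents and datafiles;
alter system set db_16k_cache_size = 0;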

transportable tablespace

A reader, April 27, 2009 - 10:42 am UTC

greetings thomas,

and thanks like always.

Commenting on the above post: I have found that loading the table using a parallel "DIY" bulk load is faster than copying the datafiles using ftp.

note: I opened 7 ftp sessions to transfer the datafiles.

please advise.
Tom Kyte
April 27, 2009 - 2:29 pm UTC

please give some details

you know, like how you move the dumped file, how you produce the dumped file, how big the dumped file is, how long it takes from start to finish

and then, well, the same for the transport.

Update statement with bulk collect performing very slow

Keyuri, April 29, 2009 - 5:47 am UTC

Hi Tom,

Thanks a lot for the information.
Could you please throw some light on the below situation.

I need to update 4 million records in a table.
I am just trying to set one flag.

Below is  the query i am using 

DECLARE 
   CURSOR C1 IS   
          SELECT  /*+ parallel(A,5)  */
         A.ACCOUNT_ROW_ID
      from 
         DUMMY.CMC_ACCOUNT_POST_PROCESSING A
         inner join CMC_ACCOUNT_PRE_PROCESSING E on E.ACCOUNT_ROW_ID = A.ACCOUNT_ROW_ID
         inner join SIEBELPRD.S_ORG_EXT B on A.ACCOUNT_ROW_ID = B.ROW_ID and E.ACCOUNT_LAST_UPD = B.LAST_UPD and E.ACCOUNT_MODIFICATION_NUM = B.MODIFICATION_NUM
         inner join SIEBELPRD.S_ADDR_ORG C on A.ADDRESS_ROW_ID = C.ROW_ID and E.ADDRESS_LAST_UPD = C.LAST_UPD and E.ADDRESS_MODIFICATION_NUM = C.MODIFICATION_NUM
   inner join  SIEBELPRD.S_LST_OF_VAL D ON D.LOW =A.COUNTRY_CODE AND D.TYPE = 'COUNTRY' AND D.X_TRILLIUM_FLG = 'Y' AND D.ACTIVE_FLG = 'Y' AND D.LANG_ID = 'ENU' 
   where E.DISA_CLEANSE_FLG = 'N' ;

     TYPE T_ACCOUNT_ROW_ID IS TABLE OF DUMMY.CMC_ACCOUNT_POST_PROCESSING.ACCOUNT_ROW_ID%TYPE;
     
     V_ACCOUNT_ROW_ID T_ACCOUNT_ROW_ID;

   BEGIN 
                  OPEN C1;
                  LOOP
                  FETCH C1 BULK COLLECT INTO 
   V_ACCOUNT_ROW_ID LIMIT 10000;
                  FORALL I IN 1..V_ACCOUNT_ROW_ID.COUNT 
                              UPDATE DUMMY.CMC_ACCOUNT_POST_PROCESSING SET 
                              CHANGED_FLG='Y'
                            WHERE 
                                      ACCOUNT_ROW_ID = V_ACCOUNT_ROW_ID(I);

                          COMMIT;
                
                          EXIT WHEN C1%NOTFOUND;      
                END LOOP;
                CLOSE C1;   
 End;

but the update is happening very slowly. 
Could you please let me know if there is some other way, or if I am missing something.

Indexes are present on almost all the columns in where clause.

No Index on CHANGED_FLG.

Also below is the plan for select statement.

SQL> explain plan for SELECT  /*+ parallel(A,5)  */
  2           A.ACCOUNT_ROW_ID
  3        from
  4           DUMMY.CMC_ACCOUNT_POST_PROCESSING A
  5           inner join CMC_ACCOUNT_PRE_PROCESSING E on E.ACCOUNT_ROW_ID = A.ACCOUNT_ROW_ID
  6           inner join SIEBELPRD.S_ORG_EXT B on A.ACCOUNT_ROW_ID = B.ROW_ID and E.ACCOUNT_LAST_UPD = B.LAST_UPD and E.ACCOUNT_MODIFICATION_NUM = B.MODIFICATION_NUM
  7           inner join SIEBELPRD.S_ADDR_ORG C on A.ADDRESS_ROW_ID = C.ROW_ID and E.ADDRESS_LAST_UPD = C.LAST_UPD and E.ADDRESS_MODIFICATION_NUM = C.MODIFICATION_NUM
  8     inner join  SIEBELPRD.S_LST_OF_VAL D ON D.LOW =A.COUNTRY_CODE AND D.TYPE = 'COUNTRY' AND D.X_TRILLIUM_FLG = 'Y' AND D.ACTIVE_FLG = 'Y' AND D.LANG_ID = 'ENU'
  9     where E.DISA_CLEANSE_FLG = 'N' ;
 
Explained
 
SQL> SELECT *FROM TABLE(DBMS_XPLAN.DISPLAY);
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 678555948
--------------------------------------------------------------------------------
| Id  | Operation                       | Name                           | Rows
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |                                |     1
|   1 |  NESTED LOOPS                   |                                |     1
|   2 |   NESTED LOOPS                  |                                |     1
|   3 |    NESTED LOOPS                 |                                |     9
|   4 |     NESTED LOOPS                |                                |   461
|*  5 |      TABLE ACCESS BY INDEX ROWID| S_LST_OF_VAL                   |     1
|*  6 |       INDEX RANGE SCAN          | S_LST_OF_VAL_X1_X              |   148
|   7 |      TABLE ACCESS BY INDEX ROWID| CMC_ACCOUNT_POST_PROCESSING    | 26901
|*  8 |       INDEX RANGE SCAN          | CMC_ACCOUNT_POST_PROCESSING_P6 | 10760
|*  9 |     TABLE ACCESS BY INDEX ROWID | CMC_ACCOUNT_PRE_PROCESSING     |     1
|* 10 |      INDEX RANGE SCAN           | CMC_ACCOUNT_PRE_PROCESSING_P1  |     2
|* 11 |    INDEX RANGE SCAN             | S_ADDR_ORG_W1                  |     1
|* 12 |   INDEX RANGE SCAN              | S_ORG_EXT_W1                   |     1
--------------------------------------------------------------------------------
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   5 - filter("D"."X_TRILLIUM_FLG"='Y' AND "D"."ACTIVE_FLG"='Y')
   6 - access("D"."TYPE"='COUNTRY' AND "D"."LANG_ID"='ENU')
   8 - access("A"."COUNTRY_CODE"=SYS_OP_C2C("D"."LOW"))
   9 - filter("E"."DISA_CLEANSE_FLG"='N')
  10 - access("E"."ACCOUNT_ROW_ID"="A"."ACCOUNT_ROW_ID")
  11 - access("E"."ADDRESS_LAST_UPD"="C"."LAST_UPD" AND
              "E"."ADDRESS_MODIFICATION_NUM"="C"."MODIFICATION_NUM")
       filter("A"."ADDRESS_ROW_ID"=SYS_OP_C2C("C"."ROW_ID") AND
              "E"."ADDRESS_MODIFICATION_NUM"="C"."MODIFICATION_NUM")
  12 - access("E"."ACCOUNT_LAST_UPD"="B"."LAST_UPD" AND
              "E"."ACCOUNT_MODIFICATION_NUM"="B"."MODIFICATION_NUM")
       filter("A"."ACCOUNT_ROW_ID"=SYS_OP_C2C("B"."ROW_ID") AND
              "E"."ACCOUNT_MODIFICATION_NUM"="B"."MODIFICATION_NUM")
 
36 rows selected

Please let me know how should i go ahead with this.

Right now the query is taking more than 5 days to run and I have to run it in a few hours ... :-(

Thanks & Regards,
Keyuri

Tom Kyte
April 29, 2009 - 9:16 am UTC

this obviously should be a single update - nothing more, nothing less.

erase all of the code
generate a SINGLE update statement.
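For example, a sketch reusing your own join as the driving subquery (untested against the real schema - verify the subquery returns exactly the ACCOUNT_ROW_IDs you mean to flag):

update DUMMY.CMC_ACCOUNT_POST_PROCESSING p
   set p.CHANGED_FLG = 'Y'
 where p.ACCOUNT_ROW_ID in
       ( select A.ACCOUNT_ROW_ID
           from DUMMY.CMC_ACCOUNT_POST_PROCESSING A
                inner join CMC_ACCOUNT_PRE_PROCESSING E
                        on E.ACCOUNT_ROW_ID = A.ACCOUNT_ROW_ID
                inner join SIEBELPRD.S_ORG_EXT B
                        on A.ACCOUNT_ROW_ID = B.ROW_ID
                       and E.ACCOUNT_LAST_UPD = B.LAST_UPD
                       and E.ACCOUNT_MODIFICATION_NUM = B.MODIFICATION_NUM
                inner join SIEBELPRD.S_ADDR_ORG C
                        on A.ADDRESS_ROW_ID = C.ROW_ID
                       and E.ADDRESS_LAST_UPD = C.LAST_UPD
                       and E.ADDRESS_MODIFICATION_NUM = C.MODIFICATION_NUM
                inner join SIEBELPRD.S_LST_OF_VAL D
                        on D.LOW = A.COUNTRY_CODE
                       and D.TYPE = 'COUNTRY' and D.X_TRILLIUM_FLG = 'Y'
                       and D.ACTIVE_FLG = 'Y' and D.LANG_ID = 'ENU'
          where E.DISA_CLEANSE_FLG = 'N' );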

transportable tablespace

A reader, April 30, 2009 - 3:22 am UTC

greetings Thomas,

and thanks like always.
-----------------------------------
please give some details
you know, like how you move the dumped file, how you produce the dumped file, how big the dumped file is, how long it takes from start to finish
and then, well, the same for the transport.
------------------------------------
- i moved the dumped file: using ftp.
- how you produce the dumped file: using exp
- how big the dumped file is: a few KB
- how long it takes from start to finish:
1) using bulk load around 11 HOURS.
2) using transportable tablespace.
ftp: around 7:30 Hours
import: around 15M and it failed because the table did not get created, because the tablespace is read only????

Please advise
Tom Kyte
April 30, 2009 - 10:17 am UTC

... import: around 15M and it failed because the table did not created because
the tablespace is read only????
....

that doesn't make sense, the tablespace HAS to be read only if you use exp/imp. So, you did something *wrong*

Everything you describe above seems to indicate that transporting would take many hours LESS TIME - if you fixed what you did wrong in reattaching the datafiles - you are almost there, you just did something "not correct" at the end.


when I say file - I mean the entire set of files you are moving.

Tell us bit by bit what is GOING ON here.

describe in painstaking detail your "DIY" operation (that is the dumped file I was actually talking about).

describe in painstaking detail about the transport.

For example, expecting something like this:

------------- this is our transport ----------------------------------
we use exp to create the transport set.
The size of the datafiles and expdat.dmp file is X terabytes.
This process took X minutes.

describe here how you manage to ftp in seven sessions.
describe here what the network looks like
This process took Y minutes

Now the files are on the target system, we used imp to attach the files
This process took Z minutes

That is a total of X+Y+Z minutes.
----------------------------------------------------------------------


Now, since I have no clue what your DIY process is or does, describe that *completely*

BULK COLLECT and ORA-1555

Chris Gould, June 17, 2009 - 11:35 am UTC

Tom - I'm looking at some code along the following lines
SELECT .. 
BULK COLLECT INTO collection ;

for i in 1 .. collection.count 
loop
   process_row(i);
   COMMIT;
end loop;


The resultset from the query is typically ~60,000 rows, but we frequently get ORA-1555 errors from the query (we are using "old-style" rollback segments not auto-UNDO, even though it's 10gR2)

Would it be less likely get ORA-1555 if we used a LIMIT clause (say 100) on the bulk collect and also only committed once at the end of the job (it doesn't really need to commit after every row), or is this unlikely to have any effect on the ORA-1555 at all?

I have already suggested that we should be using auto UNDO,not the old-style rollback, but that's unlikely to change anytime soon.




Tom Kyte
June 17, 2009 - 3:16 pm UTC

you do not say what is getting the 1555, it cannot be the select bulk collect - else you would never even have gotten to the commit.

so, not sure what to say - insufficient data to comment

I got the same problem

Carlos Robles, August 27, 2009 - 11:12 pm UTC

Hi Tom, thanks for your help and advice, but I have the same problem as the original question: I blow out the memory of the server, and in my case it freezes the server completely; I have to restart the server to fix it.

So basically I have RedHat Release 3 Update 4, and my database is Oracle 10g version 10.2.0.4.

I have to load the Oracle database via heterogeneous services. I tested all the configuration for that and everything works fine, so I will show you my code:

CREATE OR REPLACE PROCEDURE LOAD_OFSA_TABLES
IS
TYPE TABLE_ARRAY IS TABLE OF FEM_CHECKING%ROWTYPE INDEX BY PLS_INTEGER;
V_DATA TABLE_ARRAY;
V_DATE VARCHAR2(20);
N_LIMIT NUMBER := 100;
ERRORES NUMBER;
ERR NUMBER;
DMLERROR_MSG EXCEPTION;
SQLERROR_MSG VARCHAR2(120):= NULL;
ERROR_COUNTER NUMBER := 0;
PRAGMA EXCEPTION_INIT(DMLERROR_MSG, -24381);
CURSOR C_TBL_SELECT IS SELECT * FROM OFSA_FEM_CHECKING@MSHS;
BEGIN
----CREATING TABLES TO SAVE THE BAD RECORDS
EXECUTE IMMEDIATE 'DROP TABLE TARGET_ERR';
EXECUTE IMMEDIATE 'CREATE TABLE TARGET_ERR AS (SELECT * FROM FEM_CHECKING WHERE ROWNUM = 0)';
EXECUTE IMMEDIATE 'COMMIT';
OPEN C_TBL_SELECT;
LOOP
FETCH C_TBL_SELECT BULK COLLECT INTO V_DATA LIMIT N_LIMIT;
BEGIN
FORALL i IN V_DATA.FIRST..V_DATA.LAST
SAVE EXCEPTIONS
INSERT INTO FEM_CHECKING VALUES V_DATA(i);
EXECUTE IMMEDIATE 'COMMIT';
EXCEPTION
WHEN DMLERROR_MSG THEN
ERRORES := SQL%BULK_EXCEPTIONS.COUNT;
ERROR_COUNTER := ERROR_COUNTER + ERRORES;
FOR i IN 1..ERRORES LOOP
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE ('The following errors were found: '||SQL%BULK_EXCEPTIONS(i).ERROR_INDEX||':'||SQL%BULK_EXCEPTIONS(i).ERROR_CODE);
ERR:= SQL%BULK_EXCEPTIONS(I).ERROR_INDEX;
INSERT INTO TARGET_ERR VALUES V_DATA(ERR);
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE(SQL%BULK_EXCEPTIONS(i).ERROR_INDEX || ', ' ||SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
END LOOP;
WHEN OTHERS THEN
SQLERROR_MSG := SQLERRM;
DBMS_OUTPUT.PUT_LINE ('The following errors were found '||CHR(13)||SQLERROR_MSG);
DBMS_OUTPUT.PUT_LINE('');
END;
EXIT WHEN C_TBL_SELECT%NOTFOUND;
END LOOP;
COMMIT;
CLOSE C_TBL_SELECT;
V_DATE := TO_CHAR(SYSDATE,'DD/MM/YYYY HH24:MI:SS');
DBMS_OUTPUT.PUT_LINE ('Load process complete at: '||CHR(9)||V_DATE);
DBMS_OUTPUT.PUT_LINE ('Total errors'||ERROR_COUNTER);
END LOAD_OFSA_TABLES;
SPOOL OFF;


So as you can see, I made a SELECT * FROM the source table, but this SELECT is the one that blows out the memory. I applied the limit of 100 you advise; the source table has 174 columns, the same as the destination table. I don't know what to do - I have tested all the ways, even using ROWNUM < 25 for the cursor query, and it blew out my server, so I will appreciate your advice.

Thanks a lot.
Tom Kyte
August 28, 2009 - 5:15 pm UTC

I hate your code and refuse to consider looking at it, it has a huge bug

        WHEN OTHERS THEN
          SQLERROR_MSG := SQLERRM;
          DBMS_OUTPUT.PUT_LINE ('The following errors were found '||CHR(13)||SQLERROR_MSG);
          DBMS_OUTPUT.PUT_LINE('');
        END; 



http://asktom.oracle.com/pls/ask/search?p_string=%22i+hate+your+code%22


doing DDL in your code!!!

I really hate it:

  EXECUTE IMMEDIATE 'DROP TABLE TARGET_ERR';
    EXECUTE IMMEDIATE 'CREATE TABLE TARGET_ERR AS (SELECT * FROM FEM_CHECKING WHERE ROWNUM = 0)';
    EXECUTE IMMEDIATE 'COMMIT'; 



why isn't that a truncate table at MOST???????

and why execute immediate 'commit';

a) you could just say
commit;
b) but I'd still say "why did you commit, you know DDL commits"



how many other people see the flaw in this code?


        BEGIN
          FORALL i IN V_DATA.FIRST..V_DATA.LAST
            SAVE EXCEPTIONS
                INSERT INTO FEM_CHECKING VALUES V_DATA(i);
                EXECUTE IMMEDIATE 'COMMIT';
        EXCEPTION
        WHEN DMLERROR_MSG THEN
            ERRORES := SQL%BULK_EXCEPTIONS.COUNT;
            ERROR_COUNTER := ERROR_COUNTER + ERRORES;
                FOR i IN 1..ERRORES LOOP
                  DBMS_OUTPUT.PUT_LINE('');
                  DBMS_OUTPUT.PUT_LINE ('The following errors were found: '||SQL%BULK_EXCEPTIONS(i).ERROR_INDEX||':'||SQL%BULK_EXCEPTIONS(i).ERROR_CODE);
                  ERR:= SQL%BULK_EXCEPTIONS(I).ERROR_INDEX;
                  INSERT INTO TARGET_ERR VALUES V_DATA(ERR);
                  DBMS_OUTPUT.PUT_LINE('');
                  DBMS_OUTPUT.PUT_LINE(SQL%BULK_EXCEPTIONS(i).ERROR_INDEX || ', ' ||SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
                END LOOP; 




ouch.

anyway, read:

http://asktom.oracle.com/Misc/how-cool-is-this.html

and erase all of your code and just use dml error logging.

Ok, i made the corrections

Carlos Robles, August 28, 2009 - 7:20 pm UTC

OK, I made some corrections; this is the code:
create or replace procedure load_ofsa_tables is 
  
  type table_array is table of fem_checking%rowtype index by pls_integer; 
  v_data table_array; 
  v_date varchar2(20); 
  n_limit number := 100;  
  errores number; 
  err  number; 
  dmlerror_msg exception; 
  error_counter number := 0; 
  
  pragma exception_init(dmlerror_msg, -24381); 
  cursor c_tbl_select is select * from ofsa_fem_checking@mshs; 
  
begin 

open c_tbl_select; 
  loop 
    fetch c_tbl_select bulk collect into v_data limit n_limit; 
        
  begin 
          
    forall i in v_data.first..v_data.last 
            save exceptions 
                insert into fem_checking values v_data(i); 
                execute immediate 'commit'; 
    
        exception 
        when dmlerror_msg then 
            errores := sql%bulk_exceptions.count; 
            error_counter := error_counter + errores; 
                
    for i in 1..errores loop 
                  dbms_output.put_line(''); 
                  dbms_output.put_line ('errors were found:'||sql%bulk_exceptions(i).error_index||':'||sql%bulk_exceptions(i).error_code); 
                  err:= sql%bulk_exceptions(i).error_index; 
                  insert into target_err values v_data(err); 
      dbms_output.put_line(''); 
                  dbms_output.put_line(sql%bulk_exceptions(i).error_index || ', ' ||sqlerrm(-sql%bulk_exceptions(i).error_code)); 
                end loop;
  end;

    exit when c_tbl_select%notfound;
  end loop;
  
  close c_tbl_select; 
  
  v_date := to_char(sysdate,'dd/mm/yyyy hh24:mi:ss'); 
  dbms_output.put_line ('load process complete at: '||chr(9)||v_date); 
  dbms_output.put_line ('total errors'||error_counter); 
  
end load_ofsa_tables; 


So, thanks for your help; I think maybe you should be a little more friendly.

I made another code with pure SQL like this:


INSERT INTO OFSA_FEM_CHECKING 
SELECT * FROM OFSA_FEM_CHECKING@MSHS
LOG ERRORS INTO ERROR_LOAD_LOG REJECT LIMIT 1000;



It takes 18 min to load 2 million rows. If you have a better idea I will appreciate your help.

Thanks,

Carlos.
Tom Kyte
August 29, 2009 - 7:05 pm UTC

... So, thanks for your help, i think you maybe you should be a little more friendly. ...


I'm absolutely aghast at the when others null I see in production code, I refuse to sit by anymore and just say "that's bad", I'm going to rip it to *shreds*. When I don't, people don't believe me that it is the worst programming thing you can possibly do. You know now exactly how bad it is.


How long does it take to transfer the 2 million rows (of unknown width to me, 2 million rows might be 2mb or 2tb - no idea). Forget Oracle for a minute, what can the network transfer?

How many indexes on the existing table?

If you "insert into local_scratch_table select * from ofsa_fem_checking" - to reinsert the rows into a local table without the dblink - what is the response time then?

How to rewrite?

Pete, September 04, 2009 - 1:54 pm UTC

What would be the most efficient way of rewriting the below piece of PL/SQL code?

It would have been good if the other tables referenced the trans_tbl table and
used the on delete cascade constraint option, but this is not an option now.



declare

cnt number := 0;

begin

for i in (select tran_num from trans_tbl where mod(tran_num,4) = 3) loop
delete from tab2 where tran_num = i.tran_num;
delete from tab3 where tran_num = i.tran_num;
delete from tab4 where tran_num = i.tran_num;
delete from tab5 where tran_num = i.tran_num;

/* couple more delete statements for other tables looking like the
above delete statements */

commit;
cnt := cnt + 1;
end loop;
dbms_output.put_line(cnt || ' Transactions Purged ');

end;
/

Tom Kyte
September 04, 2009 - 4:00 pm UTC

delete from tab2 where tran_num in (select tran_num from trans_tbl where mod(tran_num,4) = 3);
...
delete from tabN where tran_num in (select tran_num from trans_tbl where mod(tran_num,4) = 3);
commit;


if running the subquery is deemed "expensive" or if trans_tbl is being modified during the process, put the rows into a global temporary table and use it instead.
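That GTT variant might look like this (a sketch; the table name purge_keys is illustrative):

create global temporary table purge_keys ( tran_num number )
on commit delete rows;

insert into purge_keys
select tran_num from trans_tbl where mod(tran_num,4) = 3;

delete from tab2 where tran_num in ( select tran_num from purge_keys );
delete from tab3 where tran_num in ( select tran_num from purge_keys );
-- ... same for tab4, tab5 and the remaining tables ...
commit;   -- the commit also empties the on-commit-delete-rows GTT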

bulk

sam, September 05, 2009 - 1:26 pm UTC

tom:

In an OLTP system I never use BULK COLLECT. Is this more of a data warehousing thing, or do you basically use it whenever a SQL query returns more than 100 rows?

<<if you are processing more then about 100 rows, BULK collect makes a great deal of sense. >>

I always use an implicit cursor and the performance seems good:

for x in (sql...)
loop
process data
end loop;

Tom Kyte
September 06, 2009 - 9:33 am UTC

and in 10g, you are doing a bulk collect there. and you didn't even know it.

turn on tracing, use tkprof, see how many fetches

for x in (select * from all_objects)
loop
null;
end loop;

does in 9i and before, 10g and later.
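For example (a sketch - the trace file name and location depend on your configuration):

alter session set sql_trace = true;
begin
    for x in ( select * from all_objects )
    loop
        null;
    end loop;
end;
/
alter session set sql_trace = false;

-- tkprof the resulting trace file; in 10g the implicit cursor for loop shows
-- roughly one FETCH call per 100 rows, in 9i and before one fetch per row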

BULK COLLECT: ORA-06531: Reference to uninitialized collection

Sita, November 11, 2009 - 10:28 am UTC

Tom:

I encountered this error while running the code which is implemented using bulk collect :

ORA-06531: Reference to uninitialized collection
ORA-06512: at line 69

and then i initialized the array as below:

l_hold_array array := array();
l_dedup_array array := array();
l_rowid_array rowid_array ;
l_update idx_array ;

now i am getting this error :

ERROR at line 1:
ORA-06533: Subscript beyond count
ORA-06512: at line 69

What am I doing wrong?


=== code ===

set echo on
set lines 200
set pages 50000
set timing on
set serveroutput on
set timing on feed on serverout on
Whenever sqlerror exit sql.sqlcode;
ALTER SESSION DISABLE PARALLEL DML;
DECLARE
CURSOR cur_dedup
IS
SELECT S.ROWID, submission_no, PRODUCT_ABBR, POLICY_EFCTV_DT, POLICY_NO,
R.status_rank,wip_id, product_cd
FROM BALA.SUBMISSION_DBG S,ETL_SDM.int_status_rank R
WHERE (submission_no, PRODUCT_ABBR, POLICY_EFCTV_DT) in
(
select submission_no,product_abbr, policy_efctv_dt from BALA.SUBMISSION_DBG
where duplicate_in = 'N'
group by submission_no,product_abbr, policy_efctv_dt
having count(*) > 1)
AND DUPLICATE_IN = 'N'
AND S.status_cd = r.status_code(+)
ORDER BY submission_no,
PRODUCT_ABBR,
POLICY_EFCTV_DT,
POLICY_NO,
R.status_rank,
wip_id DESC,
product_cd DESC;

v_dedup CHAR;
v_updt_count NUMBER (10);
v_total_count NUMBER (10);

type array is table of cur_dedup%ROWTYPE ;
type rowid_array is table of ROWID INDEX BY BINARY_INTEGER;
type idx_array is table of binary_integer index by binary_integer;

l_hold_array array := array();
l_dedup_array array := array();
l_rowid_array rowid_array ;
l_update idx_array ;

BEGIN
v_dedup := 0;
v_updt_count := 0;
v_total_count := 0;

OPEN cur_dedup;

LOOP



FETCH cur_dedup bulk collect into l_dedup_array LIMIT 1000 ;

for i in 1..l_dedup_array.count
loop


IF l_dedup_array(i).submission_no = l_hold_array(i).submission_no
AND l_hold_array(i).PRODUCT_ABBR= l_dedup_array(i).PRODUCT_ABBR
AND l_hold_array(i).POLICY_EFCTV_DT= l_dedup_array(i).POLICY_EFCTV_DT
THEN
IF l_hold_array(i).POLICY_NO= l_dedup_array(i).POLICY_NO
OR ( l_hold_array(i).POLICY_NO IS NULL
AND l_dedup_array(i).POLICY_NO IS NULL
AND v_dedup = 0
)
THEN
IF l_hold_array(i).status_rank = l_dedup_array(i).status_rank
AND
l_hold_array(i).wip_id = l_dedup_array(i).wip_id
AND l_hold_array(i).product_cd = l_dedup_array(i).product_cd
THEN
v_dedup := 0;
ELSE
v_dedup := 1;
END IF;
ELSIF ( l_hold_array(i).POLICY_NO IS NULL
AND l_dedup_array(i).POLICY_NO IS NULL
AND v_dedup = 1
)
OR (l_hold_array(i).POLICY_NO IS NOT NULL AND l_dedup_array(i).POLICY_NO IS NULL)
THEN
v_dedup := 1;
ELSE
v_dedup := 0;
END IF;

IF v_dedup = 1
THEN

l_rowid_array(i) := l_dedup_array(i).ROWID ;

END IF;
ELSE
v_dedup := 0;
END IF;

/* SET THE VARS FOR THE HOLDING RECORD. */

l_hold_array(i) := l_dedup_array(i);
END LOOP;


forall i in 1 .. l_rowid_array.count
UPDATE BALA.SUBMISSION_DBG
SET DUPLICATE_IN = 'Y'
WHERE ROWID = l_rowid_array(i);

EXIT WHEN cur_dedup%NOTFOUND;

dbms_output.put_line('Total Rows Updated: '||v_total_count);

end loop;

CLOSE cur_dedup;
END;
/

=== END ==

Tom Kyte
November 15, 2009 - 12:21 pm UTC

i am not a compiler.

i cannot simply "run your code"

therefore, I am not going to look at it.


if you want someone to look at something for you and diagnose what you are doing wrong, you need to provide a FULL TEST CASE, something that individual can run and observe what you saw.

then and only then can they explain to you what you've done wrong.


It is probably centered around this:

         IF v_dedup = 1
         THEN

            l_rowid_array(i) := l_dedup_array(i).ROWID ;

         END IF;
      ELSE
         v_dedup := 0;
      END IF;

/* SET THE VARS FOR THE HOLDING RECORD. */
      
      l_hold_array(i) := l_dedup_array(i);
   END LOOP;

    
    forall i in 1 .. l_rowid_array.count
            UPDATE BALA.SUBMISSION_DBG
               SET DUPLICATE_IN = 'Y'
             WHERE ROWID = l_rowid_array(i);


You set l_rowid_array(i) - but sparsely.

say you set 1, 5, 10 in that array.

l_rowid_array.count is 3

forall i in 1 .. 3

how will that work? your subscripts are 1,5,10, not 1,2,3.


maybe you meant to assign to l_rowid_array(l_rowid_array.count+1), instead of i, so as to fill in 1,2,3 - not 1,5,10
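A small, self-contained illustration of the dense-fill approach (toy table t, not the poster's schema):

create table t ( x number, flg varchar2(1) );
insert into t select rownum, 'N' from dual connect by level <= 10;

declare
    type rid_tab is table of rowid index by binary_integer;
    l_rids rid_tab;
begin
    for r in ( select rowid rid, x from t )
    loop
        if mod( r.x, 3 ) = 0                        -- stand-in for "v_dedup = 1"
        then
            l_rids( l_rids.count + 1 ) := r.rid;    -- subscripts stay 1,2,3,...
        end if;
    end loop;

    forall i in 1 .. l_rids.count
        update t set flg = 'Y' where rowid = l_rids(i);

    dbms_output.put_line( sql%rowcount || ' rows flagged' );
    commit;
end;
/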


BULK COLLECT: ORA-06531: Reference to uninitialized collection

Sita, November 15, 2009 - 3:35 pm UTC

Thanks Tom.
<< maybe you meant to assign to l_rowid_array(l_rowid_array.count+1), instead of i, so as to fill in 1,2,3 - not 1,5,10

Yes, that was the issue and it's working fine now.

Subscript beyond count related the previous post

Sita, November 15, 2009 - 5:15 pm UTC

Tom:

I am populating a SQL collection to store rowids in varchar2 format, to be used later for one single SQL update.
I do extend the l_rowid_array collection before assigning the value, but I am not able to figure out why I am getting the "Subscript beyond count" error. The code is in the post right above; only "l_rowid_array" was changed from a PL/SQL table type to a SQL collection type.


DECLARE
*
ERROR at line 1:
ORA-06533: Subscript beyond count
ORA-06512: at line 96


drop type rowidtab
/
create or replace type rowidtabtyp as table of VARCHAR2(18)
/

declare
l_rowid_array rowidtabtyp := rowidtabtyp() ;

....

IF v_dedup = 1
THEN

dbms_output.put_line('l_rowid_array.count : '||l_rowid_array.count );
l_rowid_array.extend;
l_rowid_array(l_rowid_array.count+1) := l_dedup_array(i).ROWID ;

dbms_output.put_line('l_rowid_array(l_rowid_array.count+1) : '||l_rowid_array(l_rowid_array.count+1) );
v_total_count :=v_total_count + 1;

END IF;

Tom Kyte
November 23, 2009 - 9:02 am UTC

think about it.....


before the extend, the count is what?
zero

after the extend, the count would be?
one

you try to use array(2) after that.....


My logic, which I described, worked with the PLSQL index by array. You completely changed the datatype to a collection, instead of what you had before. So, you need to understand how these different things work.

If you would have stuck with the original type, then array( array.count+1 ) would be the way to do it - no EXTEND needed.

If you do it your way now, with a collection, you would extend and then assign to array(array.count)
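Side by side, a minimal sketch of the two cases:

declare
    type ibt is table of number index by binary_integer;   -- index-by (PL/SQL) table
    type nt  is table of number;                            -- nested table collection
    a ibt;
    b nt := nt();
begin
    a( a.count + 1 ) := 42;     -- index-by: just assign, no extend needed
    b.extend;                   -- nested table: make room first...
    b( b.count ) := 42;         -- ...then assign to the slot you just created
    dbms_output.put_line( a.count || ' and ' || b.count );
end;
/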



SQL collection

Sita, November 24, 2009 - 1:54 pm UTC

Thanks tom.

it works fine.

the reason I had to change to a SQL collection is that I want to perform a single SQL update using TABLE(CAST()) with the collection type.

I cannot do TABLE(CAST()) with a PL/SQL array.


Tom Kyte
November 24, 2009 - 2:32 pm UTC

here would be another valid approach in 10g

http://www.toadworld.com/BLOGS/tabid/67/EntryID/477/Default.aspx

as well.

Bulk collect comparision across two databases

San, June 09, 2010 - 4:50 am UTC

Hi Tom,

I am a newbie to Oracle; I tried searching your site and read almost all the related articles, but I still cannot settle on an approach.

I have a requirement to compare two tables (each with millions of records) from two different databases.

There is a data migration project using the DataStage tool. My PL/SQL procedure would be doing a column-level validation of this migrated data.

I can try the logic you have given at the link below to compare two tables and log the odd one out.
https://asktom.oracle.com/Misc/oramag/on-injecting-and-comparing.html

But the result sets I compare are not taken directly from the tables; I need to apply many rules/filters/joins to build the first result set and then compare it with the second result set.

So I decided to create 2 GTTs, load the result sets, and compare using UNION as the above link states.

The other approach I had in mind was to use cursors with BULK COLLECT LIMIT and compare the cursors over a DB link, but I read somewhere in your posts that array processing is not supported over a dblink.

Please suggest the best approach for this.

Thanks









Tom Kyte
June 10, 2010 - 11:42 am UTC

best approach would be to encapsulate these "so many rules/filters/joins" in 100% sql if possible (and it almost always is)

if not, if you truly feel you absolutely need procedural code - what you are doing with the global temporary table is as good as anything else. A procedural compare of the two tables slow by slow (even with a bulk fetch) is too painful to even comprehend.
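Once both result sets are in GTTs, the compare itself can stay a single query - a sketch along the lines of the linked article (gtt_source, gtt_target and the column list are illustrative):

select col1, col2, col3,
       count( case when src = 'S' then 1 end ) cnt_source,
       count( case when src = 'T' then 1 end ) cnt_target
  from ( select 'S' src, t.* from gtt_source t
         union all
         select 'T' src, t.* from gtt_target t )
 group by col1, col2, col3
having count( case when src = 'S' then 1 end )
    <> count( case when src = 'T' then 1 end );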

Problem with using EXCEPTION in BULKCOLLECT

Sid, June 09, 2010 - 7:22 am UTC

Hi Tom,
I am not able to use EXCEPTION with BULK COLLECT; please suggest. Below is my code.

CREATE OR REPLACE Procedure Cursor_Test is
cursor cur_a is SELECT
COL1,COL2 FROM TABLE;
TYPE cur_a_type IS TABLE OF cur_a%ROWTYPE;
rec1 cur_a_type;
recl cur_a%ROWTYPE;

Begin
Open cur_a;
Loop
Fetch cur_a BULK COLLECT into rec1;
For i in 1..rec1.count
Loop
INSERT INTO TABLE1
(
COL1,COL2
)
SELECT
*FROM TABLE;
commit;
End Loop;
EXIT WHEN cur_a%notfound;
END LOOP;
exception
when dup_val_on_index then

Update table1 set col1=recl.col1 where col2=recl.col2 ;
end;


Thanks in advance

EXCEPTION with BULKCOLLECT not able to get data from cursor in exception

Sid, June 09, 2010 - 7:34 am UTC

Hi Tom,
Now I am able to use EXCEPTION with BULK COLLECT and it is not throwing any error, but the problem is in the handler after dup_val_on_index:
Update table1 set col1=recl.col1 where col2=recl.col2 ;
end;
Here I am not able to get the data from recl.col1, where recl is declared as recl cur_a%ROWTYPE, whereas I am able to update if I hardcode the data like:
Update table1 set col1='Red' where col2='Color' ;
end;

Please suggest how I can get data from recl (the cursor rowtype record) while using BULK COLLECT.


CREATE OR REPLACE Procedure Cursor_Test is
cursor cur_a is SELECT
COL1,COL2 FROM TABLE;
TYPE cur_a_type IS TABLE OF cur_a%ROWTYPE;
rec1 cur_a_type;
recl cur_a%ROWTYPE;

Begin
Open cur_a;
Loop
Fetch cur_a BULK COLLECT into rec1;
For i in 1..rec1.count
Loop
INSERT INTO TABLE1
(
COL1,COL2
)
SELECT
*FROM TABLE;
commit;
End Loop;
EXIT WHEN cur_a%notfound;
END LOOP;
exception

when dup_val_on_index then

Update table1 set col1=recl.col1 where col2=recl.col2 ;
end;


Thanks in advance

Tom Kyte
June 10, 2010 - 11:51 am UTC

you want merge


merge into table1
using table
on (table1.col2 = table.col2)
when matched then update set col1 = table.col1
when not matched then insert(col1,col2) values (table.col1,table.col2);

No loop
No code
Just merge

EXCEPTION with BULKCOLLECT not able to get data from cursor in exception

Sid, June 11, 2010 - 2:24 am UTC

Hi Tom,
Thanks for your advice. I have tried MERGE and I am getting a UNIQUE CONSTRAINT error.
For this reason I had earlier jumped to a cursor, and there I am able to insert and update, but it is consuming more time.
Then, while trying BULK COLLECT in the cursor, I am able to insert records that do not exist, but I am unable to update the record if it already exists.

The issue is: I need to insert the data from one table to another if the data is not there for a particular column, and I need to update the data if the record exists (is present); there are unique columns in the destination tables.

Please advise
Tom Kyte
June 11, 2010 - 7:32 am UTC

give us data to reproduce with. Show us an example.

Give us a sample schema and sample data to reproduce your issue and we'll talk about the best approach to solving your problem.

I'm confused, I cannot see how you are getting a unique constraint violation so I need more information to give an informed response.

EXCEPTION with BULKCOLLECT not able to get data from cursor in exception -10g

Sid, June 11, 2010 - 8:32 am UTC

Hi Tom, thanks for the quick reply.
Here I am sending the scenario; please advise.
Table 1:TMP_DATA
SNO Uniquekey
ORG
ORDERNUM Uniquekey
ITEM Uniquekey
PLACE Uniquekey
Created_date

Table 2:Prod_data
IDNO Uniquekey
ORG_DATA
OD_NUM Uniquekey
ITEM Uniquekey
LOCATION_ITEM Uniquekey
CREATED_DATE
Entered_date


Now I need to insert or update data from TMP_DATA into Prod_data. There are 20-50 lakhs (2-5 million) of records available in table 1.
Below is the procedure. It is inserting data but not updating data. I have tried using matched and not matched but I am not able to solve it.

CREATE OR REPLACE procedure Insert_Test as

TYPE tb_SNO IS TABLE OF TMP_DATA.SNO%TYPE;
TYPE tb_ORG IS TABLE OF TMP_DATA.ORG%TYPE;
TYPE tb_ORDERNUM IS TABLE OF TMP_DATA.ORDERNUM%TYPE;
TYPE tb_ITEM IS TABLE OF TMP_DATA.ITEM%TYPE;
TYPE tb_PLACE IS TABLE OF TMP_DATA.PLACE%TYPE;
TYPE tb_Created_date IS TABLE OF TMP_DATA.Created_date%TYPE;

tb_SNO_array tb_SNO;
tb_ORG_array tb_ORG;
tb_ORDERNUM_array tb_ORDERNUM;
tb_ITEM_array tb_ITEM;
tb_PLACE_array tb_PLACE;
tb_Created_date_array tb_Created_date;

i number := 1;

CURSOR c1 IS
SELECT
SNO,ORG,ORDERNUM,ITEM,PLACE,Created_date
FROM TMP_DATA;

BEGIN

OPEN c1;
LOOP

FETCH c1 bulk collect into

tb_SNO_array,
tb_ORG_array,
tb_ORDERNUM_array,
tb_ITEM_array,
tb_PLACE_array,
tb_Created_date_array limit 1000;


begin
FORALL j in tb_SNO_array.FIRST .. tb_SNO_array.LAST



INSERT INTO Prod_data
(
IDNO,ORG_DATA,OD_NUM,ITEM,LOCATION_ITEM,CREATED_DATE,Entered_date
)

VALUES (
tb_SNO_array(i),
tb_ORG_array(i),
tb_ORDERNUM_array(i),
tb_ITEM_array(i),
tb_PLACE_array(i),
tb_Created_date_array(i),
sysdate
);
EXCEPTION
-- ---- WHEN OTHERS THEN
WHEN dup_val_on_index THEN
-- -- DBMS_OUTPUT.put_line(SQLERRM);
Update Prod_data set ORG_DATA=tb_ORG_array(i),CREATED_DATE=tb_Created_date_array(i),Entered_date=sysdate
where IDNO= tb_SNO_array(i) and tb_ORDERNUM_array=tb_ORDERNUM_array(i)
and ITEM=tb_ITEM_array(i) and LOCATION_ITEM= tb_PLACE_array(i);

END;

COMMIT;
exit when C1%notfound;
END LOOP;
END;
/

Tom Kyte
June 22, 2010 - 6:40 am UTC

I'm not sure you read what I posted.


give us data to reproduce with. Show us an example.

Give us a sample schema and sample data to reproduce your issue and we'll talk about the best approach to solving your problem.


where is your schema (create tables)
where is your data?

I cannot RUN your example. no one can.

However, your code is really wrong. I don't think you understand how forall works and how exception handling works.


forall is not really a loop - you use it like this:

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:8784259916366

Exceptions DO GET THE DATA with BULK COLLECT

V.Hariharaputhran, June 12, 2010 - 12:34 am UTC

Hey, you will have to use SQL%BULK_EXCEPTIONS and not just normal exception handling. Sample code below (this is taken from this site):

DECLARE
    type array is table of t%rowtype index by binary_integer;
    data       array;
    errors     NUMBER;
    dml_errors EXCEPTION;
    l_cnt      number := 0;
    PRAGMA exception_init(dml_errors, -24381);

    cursor c is select * from t;
BEGIN
    open c;
    loop
        fetch c BULK COLLECT INTO data LIMIT 100;
        begin
            FORALL i IN 1 .. data.count SAVE EXCEPTIONS
                insert into t2 values data(i);
        EXCEPTION
            WHEN dml_errors THEN
                errors := SQL%BULK_EXCEPTIONS.COUNT;
                l_cnt  := l_cnt + errors;
                FOR i IN 1..errors LOOP
                    dbms_output.put_line
                    ('Error occurred during iteration ' ||
                      SQL%BULK_EXCEPTIONS(i).ERROR_INDEX ||
                     ' Oracle error is ' ||
                      SQL%BULK_EXCEPTIONS(i).ERROR_CODE );
                end loop;
        end;
        exit when c%notfound;
    END LOOP;
    close c;
    dbms_output.put_line( l_cnt || ' total errors' );
end;

Why write a separate insert and then an update? FORALL supports MERGE as well, and to make it simpler still you can use DML error logging in Oracle 10g.

Regards
V.Hari

EXCEPTION with BULKCOLLECT not able to get data from cursor in exception -10g

Sid, June 13, 2010 - 9:49 pm UTC

Hi Hari,

Thanks for the update. It is working fine with MERGE.
Can I use joins in MERGE? I need to insert data into a new table from 2 or 3 tables. Here also, if the record exists I need to update it, and if not I need to insert it.

If possible, please advise.

Thanks in Advance.
Tom Kyte
June 22, 2010 - 8:04 am UTC

merge into some_table
using (select whatever from how,many,tables,you,want where ... ) X
on (some_table.key = x.key)
when matched then update
when not matched then insert
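
To make that concrete, a hedged sketch using the SCOTT demo tables feeding a made-up summary table DEPT_SUMMARY (deptno, dname, total_sal); the USING clause can be any join or aggregation you like, and the ON clause decides matched versus not matched:

merge into dept_summary ds
using ( select d.deptno, d.dname, sum(e.sal) total_sal
          from scott.dept d, scott.emp e
         where e.deptno = d.deptno
         group by d.deptno, d.dname ) x
on ( ds.deptno = x.deptno )
when matched then
    update set ds.dname = x.dname, ds.total_sal = x.total_sal
when not matched then
    insert ( deptno, dname, total_sal )
    values ( x.deptno, x.dname, x.total_sal );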

Thanks.

San, June 15, 2010 - 5:24 am UTC

Thanks for the suggestion Tom.That was my first post in AskTom and eagerly waited for your suggestion.

I am going ahead with GTT tables to compare the two tables. As you suggested I am having an SQL applying all rules/filters and inserting the source result set in GTT. Then will compare with target tables.

Thanks.

Bulk Insert Over DBLink

Syed, August 02, 2010 - 5:12 am UTC

Hi Tom,

We have a requirement to load data into a remote table over a database link. I have executed a FORALL statement successfully within the same database, but when I tried it over a DB link it gave PLS-00739: FORALL INSERT/UPDATE/DELETE not supported on remote tables.
Is there any other way to fix it?

Thanks,
Syed
Tom Kyte
August 02, 2010 - 9:20 am UTC

insert into table@remote select * from local;


would be my preference. The SQL*Plus COPY command might also be something to look at, if it works for you (it is a deprecated command in SQL*Plus).
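
For completeness, the SQL*Plus COPY command looks roughly like this (a sketch only: the connect strings and table names are placeholders, the command is deprecated, and it only supports a limited set of datatypes such as CHAR, DATE, LONG, NUMBER and VARCHAR2):

rem COPY commits every COPYCOMMIT batches of ARRAYSIZE rows (here, every 1000 rows)
set arraysize 100
set copycommit 10
copy from local_user/password@local_db -
  to remote_user/password@remote_db -
  insert remote_table -
  using select * from local_table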

Bulk Insert Over DBLink

Syed, August 03, 2010 - 4:45 am UTC

Hi Tom,
Thanks for quick reply.
I am not able to use a direct insert because there are unique and integrity constraints on the target table.
I need to skip the offending record and then continue with the next record.

Please advise.

Thanks
Tom Kyte
August 03, 2010 - 8:42 am UTC

read about dml error logging, you've been able to deal with this stuff for three major releases now:

ops$tkyte%ORA11GR2> create table t as select * from all_users where 1=0;

Table created.

ops$tkyte%ORA11GR2> alter table t add constraint t_pk primary key(user_id);

Table altered.

ops$tkyte%ORA11GR2> exec dbms_errlog.create_error_log('T');

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> insert into t select * from all_users@ora11gr1;

53 rows created.

ops$tkyte%ORA11GR2> insert into t select username, decode( rownum, 1, -user_id, 2, user_id ), created
  2  from all_users@ora11gr1 where rownum <= 2;
insert into t select username, decode( rownum, 1, -user_id, 2, user_id ), created
*
ERROR at line 1:
ORA-00001: unique constraint (OPS$TKYTE.T_PK) violated


ops$tkyte%ORA11GR2> insert into t select username, decode( rownum, 1, -user_id, 2, user_id ), created
  2  from all_users@ora11gr1 where rownum <= 2
  3  log errors reject limit unlimited;
<b>
1 row created.
</b>

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select ora_err_number$, ora_err_optyp$, ora_err_mesg$ from err$_t;

ORA_ERR_NUMBER$ OR
--------------- --
ORA_ERR_MESG$
-------------------------------------------------------------------------------
              1 I
ORA-00001: unique constraint (OPS$TKYTE.T_PK) violated

Bulk Insert Over DBLink

Syed, August 04, 2010 - 1:35 am UTC

Thank you Tom.
It's working now, you are amazing.

Thanks a lot.

Bulk insert over dblink

Syed, August 04, 2010 - 9:17 am UTC

Hi Tom,
It is working fine with one table.
When trying to use the same error log table it is throwing 'unique constraint violated', even though I am handling it with a DML error logging table.

I came to know there is a restriction for error logging:

'The target table cannot have any triggers or referential integrity constraints defined on it.'
But my target table, which is the remote table, has referential integrity constraints on it.

Below is the code using MERGE and error logging:
STEP 1: exec dbms_errlog.create_error_log('T');

STEP 2: MERGE INTO DEST_TABLE@DBLINK dest

USING (SELECT ID,USERNAME,DEPT FROM SOURCE_TABLE )
source

on ( dest.ID_IDENTIFIER=source.ID )

WHEN MATCHED THEN UPDATE SET
dest.USERNAME=source.USERNAME,
dest.DEPT =source.DEPT
WHEN NOT MATCHED THEN INSERT
(ID,USERNAME,DEPT)
VALUES
(
source.ID,source.USERNAME,source.DEPT )
LOG ERRORS INTO err$_t ('INSERT') REJECT LIMIT UNLIMITED;

It gives the error: unique constraint violated.
Sometimes: integrity constraint violated.

How can I solve it?
Please advise.

Thanks,
Syed
Tom Kyte
August 04, 2010 - 11:46 am UTC

give example, i cannot reproduce:

ops$tkyte%ORA11GR2> create table t as select * from all_users where 1=0;

Table created.

ops$tkyte%ORA11GR2> alter table t add constraint t_pk primary key(user_id);

Table altered.

ops$tkyte%ORA11GR2> alter table t add constraint uname_unique unique(username);

Table altered.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> exec dbms_errlog.create_error_log('T');

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> merge into t
  2  using( select * from all_users@ora11gr1 ) x
  3  on (t.user_id = x.user_id)
  4  when matched then update set username = x.username, created = x.created
  5  when not matched then insert ( user_id, username, created ) values ( x.user_id, x.username, x.created )
  6  log errors reject limit unlimited;

53 rows merged.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> merge into t
  2  using( select -user_id user_id, username, created from all_users@ora11gr1 ) x
  3  on (t.user_id = x.user_id)
  4  when matched then update set username = x.username, created = x.created
  5  when not matched then insert ( user_id, username, created ) values ( x.user_id, x.username, x.created )
  6  log errors reject limit unlimited;

1 row merged.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select ora_err_number$, ora_err_optyp$, ora_err_mesg$ from err$_t where rownum < 5;

ORA_ERR_NUMBER$ OR
--------------- --
ORA_ERR_MESG$
-------------------------------------------------------------------------------
              1 I
ORA-00001: unique constraint (OPS$TKYTE.UNAME_UNIQUE) violated

              1 I
ORA-00001: unique constraint (OPS$TKYTE.UNAME_UNIQUE) violated

              1 I
ORA-00001: unique constraint (OPS$TKYTE.UNAME_UNIQUE) violated

              1 I
ORA-00001: unique constraint (OPS$TKYTE.UNAME_UNIQUE) violated

Bulk insert over dblink

Syed, August 05, 2010 - 1:58 am UTC

Hi TOM,
Below is a detailed description:

ops$tkyte%ORA11GR2> create table PRIMARY_TABLE as select * from all_users where 1=0;

Table created.

ops$tkyte%ORA11GR2> create table t as select * from all_users where 1=0;

Table created.

ops$tkyte%ORA11GR2> alter table t add constraint t_pk primary key(user_id);

Table altered.

ops$tkyte%ORA11GR2> alter table t add constraint uname_unique unique(username);

Table altered.

ops$tkyte%ORA11GR2>ALTER TABLE t ADD (
CONSTRAINT FK1_t
FOREIGN KEY (user_id,USERNAME)
REFERENCES PRIMARY_TABLE(user_id,USERNAME)
ON DELETE CASCADE);
Table altered.
ops$tkyte%ORA11GR2> exec dbms_errlog.create_error_log('T');

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> merge into t
2 using( select * from all_users@ora11gr1 ) x
3 on (t.user_id = x.user_id)
4 when matched then update set username = x.username, created = x.created
5 when not matched then insert ( user_id, username, created ) values ( x.user_id, x.username,
x.created )
6 log errors reject limit unlimited;


Error:Integrity constraint (FK1_t) violated

The unique constraint is handled using DML error logging, but how can I handle other errors such as 'integrity constraint violated', partition errors, and 'unable to extend tablespace' errors?

All of the above errors could be solved by changing the table structures, but the requirement won't allow me to do so.

In our table structure, a few records will be missing when inserting into PRIMARY_TABLE, which is the parent table for 't'.
So an 'integrity constraint (FK1_t) violated' error will usually occur for a few records. I need to skip the records that give the integrity constraint error and move on to the next record.

I have tried all of your examples in the various threads that use bulk insert, FORALL and MERGE, but I fail to skip the errors. Without the constraints, all of the examples work fine.

Please suggest a solution that will INSERT and UPDATE a remote table from a table in the current database over a DB link, skipping any error that occurs and moving on to the next record.

Appreciate your help on this issue.

Thanks


Tom Kyte
August 05, 2010 - 6:59 am UTC

Syed,

yesterday it was (this is just a cut and paste of your stuff)


Giving error:Unique constraint violated.



So, it is not doing that? This gets very confusing - my time is as valuable as yours - preciseness is important.

It would have been very nice of you to provide a CUT AND PASTE of you reproducing the issue on YOUR SYSTEM verbatim so we could actually see an actual error message so we could actually verify what you are truly doing..... :(

for you see - your test case DOES NOT RUN as coded - which means you never actually tested it :( :( :(


Here once again, is my 100% complete, I really did run it - you can run it too, you'll see what I see test case showing that in general - foreign keys are not an issue:

ops$tkyte%ORA11GR2> create table PRIMARY_TABLE as select * from all_users where 1=0;

Table created.

ops$tkyte%ORA11GR2> alter table primary_table add constraint pt_pk primary key(user_id,username);

Table altered.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> create table t as select * from all_users where 1=0;

Table created.

ops$tkyte%ORA11GR2> alter table t add constraint t_pk primary key(user_id);

Table altered.

ops$tkyte%ORA11GR2> alter table t add constraint uname_unique unique(username);

Table altered.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> ALTER TABLE t ADD (
  2  CONSTRAINT FK1_t
  3  FOREIGN KEY (user_id,USERNAME)
  4  REFERENCES PRIMARY_TABLE(user_id,USERNAME)
  5  ON DELETE CASCADE);

Table altered.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> exec dbms_errlog.create_error_log('T');

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> merge into t
  2  using( select * from all_users@ora11gr1 ) x
  3  on (t.user_id = x.user_id)
  4  when matched then update set username = x.username, created = x.created
  5  when not matched then insert ( user_id, username, created ) values ( x.user_id, x.username, x.created )
  6  log errors reject limit unlimited;

0 rows merged.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select ora_err_number$, ora_err_optyp$, ora_err_mesg$ from err$_t where rownum < 5;

ORA_ERR_NUMBER$ OR
--------------- --
ORA_ERR_MESG$
-------------------------------------------------------------------------------
           2291 I
ORA-02291: integrity constraint (OPS$TKYTE.FK1_T) violated - parent key not fou
nd

           2291 I
ORA-02291: integrity constraint (OPS$TKYTE.FK1_T) violated - parent key not fou
nd

           2291 I
ORA-02291: integrity constraint (OPS$TKYTE.FK1_T) violated - parent key not fou
nd

           2291 I
ORA-02291: integrity constraint (OPS$TKYTE.FK1_T) violated - parent key not fou
nd



so, please - do not post again unless and until you can give us something that ANYONE can reproduce with - you need a test case that is 100% complete - just like I always provide.

I won't read anymore followups unless they have what I need to see what you are seeing. period.

What I am doing wrong here

Mis User, March 09, 2011 - 5:08 pm UTC

Hi Tom,
what I am doing wrong here...
Please help me. Many thanks.

declare
type lemp is table of emp%rowtype;
v_row_rec lemp;

cursor c is
select * from emp where deptno=30;
begin
open c;
loop
fetch c bulk collect
into v_row_rec limit 2;
if v_row_rec.count > 0 then
forall j in v_row_rec.first .. v_row_rec.last
update emp1
set comm=comm+5000
where empno = v_row_rec(j).empno;
commit;
end if;
v_row_rec.delete;

exit when c%notfound;

end loop;
dbms_output.put_line('done1');
close c;
end;
/

Tom Kyte
March 10, 2011 - 10:02 am UTC

given that we have NO CLUE what you are trying to do - I can just say "you are doing nothing wrong" and I'll be right.

But only because I'm assuming you meant to type in what you typed in.

I mean - come on.

My car won't start.

When you tell me what I'm doing wrong - I'll tell you.

I can say your logic is horribly botched - that commit in there is a FATAL ERROR. What if the code works for the first iteration, but fails due to out of space or something on the second iteration? You are left with a mess - get rid of that commit (or make your code restartable - which isn't easy to do but can be done)



ops$tkyte%ORA11GR2> create table emp as select * from scott.emp;

Table created.

ops$tkyte%ORA11GR2> create table emp1 as select * from scott.emp;

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> declare
  2    type lemp is table of emp%rowtype;
  3    v_row_rec lemp;
  4  
  5    cursor c is
  6      select * from emp where deptno=30;
  7  begin
  8    open c;
  9    loop
 10      fetch c bulk collect
 11        into v_row_rec limit 2;
 12      if v_row_rec.count > 0 then
 13        forall j in v_row_rec.first .. v_row_rec.last
 14          update emp1
 15          set comm=comm+5000
 16          where empno = v_row_rec(j).empno;
 17        commit;
 18      end if;
 19      v_row_rec.delete;
 20  
 21      exit when c%notfound;
 22  
 23    end loop;
 24    dbms_output.put_line('done1');
 25    close c;
 26  end;
 27  /
done1

PL/SQL procedure successfully completed.
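
As for the "make your code restartable" option mentioned above, here is one sketch (the JOB_PROGRESS table is made up for illustration, and EMPNO is assumed to be the ordered driving key): persist how far the job got in the same transaction as each batch, and resume after that point on the next run.

create table job_progress ( job_name varchar2(30) primary key, last_empno number );

declare
    type numArray is table of number;
    l_empnos    numArray;
    l_last      number;
    l_batch_max number;
begin
    -- where did the previous (possibly failed) run get to?
    select nvl(max(last_empno),0) into l_last
      from job_progress
     where job_name = 'EMP_COMM_FIX';

    loop
        -- next batch strictly after the last committed key, in key order
        select empno bulk collect into l_empnos
          from ( select empno
                   from emp
                  where deptno = 30
                    and empno > l_last
                  order by empno )
         where rownum <= 100;

        exit when l_empnos.count = 0;

        forall i in 1 .. l_empnos.count
            update emp1 set comm = comm+5000 where empno = l_empnos(i);

        -- record the high-water mark in the SAME transaction as the batch
        l_batch_max := l_empnos(l_empnos.count);

        merge into job_progress jp
        using ( select 'EMP_COMM_FIX' job_name from dual ) x
        on ( jp.job_name = x.job_name )
        when matched then update set jp.last_empno = l_batch_max
        when not matched then insert ( job_name, last_empno )
                              values ( x.job_name, l_batch_max );

        commit;
        l_last := l_batch_max;
    end loop;
end;
/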

What I am doing wrong here

Mis User, March 09, 2011 - 5:10 pm UTC

Sorry, Script for emp1 table is
create table emp1 as select * from emp;


A reader, March 10, 2011 - 12:03 pm UTC

Hi Tom,

You also made the same mistake by putting the commit inside the loop in your recent reply, or am I missing something?

Thanks
Tom Kyte
March 10, 2011 - 1:14 pm UTC

I just ran the code to show it "compiles and runs" - since they said:

"what I am doing wrong here..."

with no context. just supports the fact that I have no idea what they think "wrong" is

What am I doing Wrong

MIS User, March 11, 2011 - 4:52 am UTC

Hi Tom,
Thanks for your response.
But please see the error below, at
line 16: ' where empno = v_row_rec(j).empno; '
I would also appreciate it if you could show the best commit option here.


SQL*Plus: Release 10.1.0.4.2 - Production on Fri Mar 11 10:47:56 2011

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle9i Enterprise Edition Release 9.2.0.7.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.7.0 - Production

BLACKD_DOZER > declare
2 type lemp is table of emp%rowtype;
3 v_row_rec lemp;
4
5 cursor c is
6 select * from emp where deptno=30;
7 begin
8 open c;
9 loop
10 fetch c bulk collect
11 into v_row_rec limit 2;
12 if v_row_rec.count > 0 then
13 forall j in v_row_rec.first .. v_row_rec.last
14 update emp1
15 set comm=comm+5000
16 where empno = v_row_rec(j).empno;
17 commit;
18 end if;
19 v_row_rec.delete;
20
21 exit when c%notfound;
22
23 end loop;
24 dbms_output.put_line('done1');
25 close c;
26 end;
27 /
where empno = v_row_rec(j).empno;
*
ERROR at line 16:
ORA-06550: line 16, column 18:
PLS-00436: implementation restriction: cannot reference fields of BULK In-BIND
table of records
ORA-06550: line 16, column 18:
PLS-00382: expression is of wrong type
ORA-06550: line 16, column 18:
PL/SQL: ORA-22806: not an object or REF
ORA-06550: line 14, column 4:
PL/SQL: SQL Statement ignored

Thank you.

Tom Kyte
March 11, 2011 - 7:36 am UTC

you have an old version, as the error message states, you cannot do that. When you get to some software written in this century, it'll work.


Also appriciate , if you could show the best commit option here.


just delete line 17 and you have done it the best way.


I often wish plsql did not have commit and rollback. Transaction control belongs in the hand of the client - not a low level stored procedure.


here is the right way to write the above logic, followed by the wrong way:

ops$tkyte%ORA11GR2> update emp1
  2     set comm = comm+5000
  3   where empno in (select empno
  4                     from emp
  5                    where deptno=30 );

6 rows updated.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> declare
  2          type numArray is table of number;
  3  
  4          l_empnos  numArray;
  5          cursor c is select empno from emp where deptno = 30;
  6  begin
  7          open c;
  8          loop
  9                  fetch c bulk collect into l_empnos limit 2;
 10                  forall i in 1 .. l_empnos.count
 11                          update emp1 set comm=comm+5000 where empno = l_empnos(i);
 12                  exit when c%notfound;
 13          end loop;
 14          close c;
 15          dbms_output.put_line('done1');
 16  end;
 17  /
done1

PL/SQL procedure successfully completed.


but a slightly better 'wrong way' than your way.

What am I doing Wrong

MIS user, March 11, 2011 - 10:25 am UTC

Thanks Tom!
You are Robin Hood for all those Oracle developers who need real help !!

Cheers
Tom Kyte
March 11, 2011 - 10:36 am UTC

Oh, one thing about the code I wrote above for your example.

I made the (reasonable) assumption that EMPNO is the primary key of EMP. Without that assumption, the code is not the same - comm might be incremented more than once.

If EMPNO were not the primary key in EMP, then

merge into emp1
using ( select empno, count(*) cnt
          from emp
         where deptno = 30
         group by empno ) emp
on ( emp1.empno = emp.empno )
when matched then update set comm = 5000*emp.cnt+comm;


;)

forall update partition with DBMS_SQL

Ernesto Villarruel, May 06, 2011 - 1:19 pm UTC

Good day Tom!

I have the following question:

How can I do the following update in PL/SQL with DBMS_SQL?


create table xyz ( sname varchar2(50),
sotherfield varchar2(50),
ostherfieldext varchar2(50),
sregionid varchar2(3)
)
PARTITION BY RANGE (SREGIONID)
(
PARTITION R01 VALUES LESS THAN ('R02') NOLOGGING,
PARTITION R02 VALUES LESS THAN ('R03') NOLOGGING,
PARTITION R03 VALUES LESS THAN ('R04') NOLOGGING,
PARTITION R04 VALUES LESS THAN ('R05') NOLOGGING,
PARTITION R05 VALUES LESS THAN ('R06') NOLOGGING,
PARTITION R06 VALUES LESS THAN ('R07') NOLOGGING,
PARTITION R07 VALUES LESS THAN ('R08') NOLOGGING,
PARTITION R08 VALUES LESS THAN ('R09') NOLOGGING,
PARTITION R09 VALUES LESS THAN ('R10') NOLOGGING
)
NOCOMPRESS
NOCACHE
PARALLEL ( DEGREE 4 INSTANCES 1 )


----------------------------------------------------------------
The script for update partition with KSH is :
----------------------------------------------------------------
file update.sh
-------------------
#!/bin/ksh

v_partition=${1}

sqlplus -s user/pass@database << EOF
set timing on
declare
v_SQLSentencia long;
TYPE CurRepro is ref cursor;
TYPE ArrayRowid is table of ROWID;
TYPE ArrayString is table of varchar2(250);
l_cursor CurRepro;
l_sname ArrayString;
l_rowid ArrayRowid;
begin
v_SQLSentencia := 'select
sname
,rowid as SROWID
from xyz tmp where sname=''A'' ';
OPEN l_cursor for v_SQLSentencia;
LOOP
FETCH l_cursor
BULK COLLECT INTO l_sname
,l_rowid
LIMIT 1000;
FORALL i IN 1 .. l_rowid.count
UPDATE XYZ PARTITION("${v_partition}")
set sname = l_sname(i)
where rowid = l_rowid(i);
EXIT WHEN l_cursor%NOTFOUND;
END LOOP;
COMMIT;
end;
/
EOF

des: /scripts $ update.sh R01

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00

----------------------------------------------------------------



----------------------------------------------------------------
The script for update with partition in store procedure is :
----------------------------------------------------------------

1 create or replace procedure update_part (partition_name varchar2)
2 as
3 v_SQLSentencia long;
4 TYPE CurRepro is ref cursor;
5 TYPE ArrayRowid is table of ROWID;
6 TYPE ArrayString is table of varchar2(250);
7 l_cursor CurRepro;
8 l_sname ArrayString;
9 l_rowid ArrayRowid;
10 begin
11 v_SQLSentencia := 'select
12 sname
13 ,rowid as SROWID
14 from xyz tmp where sname=''A'' ';
15 OPEN l_cursor for v_SQLSentencia;
16 LOOP
17 FETCH l_cursor
18 BULK COLLECT INTO l_sname
19 ,l_rowid
20 LIMIT 1000;
21 FORALL i IN 1 .. l_rowid.count
22 UPDATE XYZ PARTITION("''' || partition_name || '''")
23 SET SNAME = L_SNAME(I)
24 WHERE ROWID = L_ROWID(I);
25 EXIT WHEN l_cursor%NOTFOUND;
26 END LOOP;
27 COMMIT;
28* end;
ernesto@DWH> /

Procedure created.

Elapsed: 00:00:00.07
ernesto@DWH> exec update_part('R01');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.02
-----------------------------------------------------------------------------------------------------------------------

My question: which is the better option for performance, the stored procedure or the shell script? And how would I do this in PL/SQL with DBMS_SQL?

On page 731 of your book "Expert One-on-One Oracle" you said "I would give the nod to DBMS_SQL as the correct implementation for this type of work.", and in another post your comment was:

bulk collect = native dynamic sql syntax.

array binds and array fetches = dbms_sql


THANKS IN ADVANCE.
Tom Kyte
May 06, 2011 - 2:44 pm UTC

you do not need dynamic sql for this at all

You do not need procedural code for this at all.

Explain what you are trying to really do here, this example doesn't update anything.

Why do you need dynamic sql for lines 11-14?
Why do you need dynamic sql for lines 22-24? You do NOT need the partition name in there, you are updating via rowid, that'll do it just fine.
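
In other words, if the procedural loop is kept at all, a static-SQL version of it needs neither DBMS_SQL nor a PARTITION clause (a sketch against the xyz table defined above; the rowid predicate already identifies the partition):

declare
    type ArrayRowid  is table of rowid;
    type ArrayString is table of varchar2(250);
    l_rowid ArrayRowid;
    l_sname ArrayString;
    cursor c is select sname, rowid from xyz where sname = 'A';
begin
    open c;
    loop
        fetch c bulk collect into l_sname, l_rowid limit 1000;
        forall i in 1 .. l_rowid.count
            -- note: like the posted code, this just sets sname back to its
            -- current value; the real transformation would go here
            update xyz set sname = l_sname(i) where rowid = l_rowid(i);
        exit when c%notfound;
    end loop;
    close c;
end;
/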



forall update partition with DBMS_SQL

Ernesto Villarruel, May 06, 2011 - 4:35 pm UTC

Thanks for your response.

With the aim of improving the times, I am trying to run a stored procedure in parallel with 9 different parameters, where each parameter points to a different partition.

In your book I read that native dynamic SQL parses the statement on every execution, while with DBMS_SQL only one parse is generated.
This is why I want to know whether it is necessary to change the code to use DBMS_SQL.

Total records in the table: 4,000,000,000

3,000,000 rows is about 1% of the records in a partition, therefore, according to your guidelines, I should do an update.
Or would the best option be to create a temporary table and then do an exchange partition?



The procedure:
1.- put index in unusable state for partition
2.- update the information
3.- rebuild indexes



This is my target
Execution in Parallel:

Stored procedure RowsInPartition aprox_RowsforUpdate

update_part('R01'); 444,444,444 3,000,000
update_part('R02'); 444,444,444 3,000,000
update_part('R03'); 444,444,444 3,000,000
update_part('R04'); 444,444,444 3,000,000
update_part('R05'); 444,444,444 3,000,000
update_part('R06'); 444,444,444 3,000,000
update_part('R07'); 444,444,444 3,000,000
update_part('R08'); 444,444,444 3,000,000
update_part('R09'); 444,444,444 3,000,000

and ... very much thanks again
Tom Kyte
May 09, 2011 - 6:59 am UTC

the parameters would be partition keys then, not rowids.


The parsing has changed over time. If you execute the same native dynamic sql in a for loop over and over - it will only parse once. The book you are reading is from 8.1.7 of Oracle and things changed.


You should do the work in a single sql statement, why is there ANY procedural code at all here???

and if you get it to a single sql statement, just use PDML (parallel dml), no code whatsoever.
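
A sketch of that suggestion against the xyz table from the earlier review (the SET expression is only a stand-in, since the posted code merely re-assigns sname to itself): enable parallel DML for the session and issue one statement.

alter session enable parallel dml;

update /*+ parallel(xyz, 8) */ xyz
   set sname = lower(sname)   -- illustrative transformation only
 where sname = 'A';

commit;

alter session disable parallel dml;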

Update Partition

Ernesto Villarruel, May 10, 2011 - 5:05 pm UTC

Tom,
I'm sorry for my doubt, but are you Tom, or are you a member of Tom's team?

I have your three books, and I don't understand your response.

I just did the following, according to your comment:

1.- Set the partitioned B-tree index to unusable
2.- Update in bulk on the entire table
3.- Rebuild the index partitions with the following script


for x in ( select 'alter index ' || index_name ||
' rebuild partition ' || partition_name stmt
from all_ind_partitions
where index_name = 'PK_DWTCA_CLIENTES' and index_owner='VICX886')
loop
dbms_output.put_line( x.stmt );
execute immediate x.stmt;
end loop;

With a total time of 44 minutes and 25 seconds.

And according to my approach:

1.- Set the partitioned B-tree index to unusable
2.- Update in bulk by partition in 8 processes running in parallel, each process rebuilding its index partition

The time for each partition update is "Elapsed: 00:02:18.18"

for a total time of 10 minutes and 10 seconds.
Tom Kyte
May 11, 2011 - 9:22 am UTC

There is only one person that has the password to asktom, and that is me, Tom


what part of what answer did you not understand? This is a huge page, I don't know what you are referring to.

In all of my books I have said clearly:

a) if you can do it in SQL - DO IT, period.
b) if you cannot do it in sql (and you probably can) do it in as little PLSQL as possible
....


where have I conflicted or contradicted anything I've written before? I'm telling you to do this in pure SQL - there doesn't appear to be any need for procedural code to do the update.

The entire thing about the indexes is a non-sequitur. We were not talking about indexes at all???

Update Partition

Ernesto Villarruel, May 10, 2011 - 5:27 pm UTC

Please, Tom (or a member of Tom's team),
here are the scripts, in case you can detect any error in them.

Script 1.- total Time of 44 minutes and 25 seconds
sqlplus -s user/pass@DWHOUSE << FIN
set timing on
select sid from v\$mystat where rownum<=1;
set serveroutput on size 20000

declare

x_sqlcode number;
x_sqlerr varchar2(4000);
v_SQLSentencia varchar2(4000);
TYPE Cursor_c is ref cursor;
TYPE ArrayRow is table of ROWID;
TYPE Arraynum is table of number(9);
TYPE Arrayvachar is table of varchar2(80);
TYPE Arraydate is table of date;
v_cursor Cursor_c;
v_rowid ArrayRow;
v_NCARRIERID Arraynum;
v_STITULO Arrayvachar;
v_SGRUPOVALORID Arrayvachar;
v_SSEXOID Arrayvachar;
v_SFENACIMIENTO Arrayvachar;
v_SNIVELSEID Arrayvachar;
v_SOCUPACIONID Arrayvachar;
v_SSECTORCOMID Arrayvachar;
v_SGRUPOEMPID Arrayvachar;
v_NNUMHIJOS Arraynum;
v_NCUENTANACID Arraynum;
v_SRFC Arrayvachar;
v_SCURP Arrayvachar;
v_SSECTORCOMDS Arrayvachar;
v_SPROFESIONDS Arrayvachar;
v_DINSERREG Arraydate;
v_SEDAD Arrayvachar;
v_SFUENTEID Arrayvachar;
v_cont NUMBER(30):=0;

v_date date;

BEGIN

v_SQLSentencia := 'SELECT
SROWID
,NCARRIERID --INSERT UPDATE
,STITULO --INSERT UPDATE
,SGRUPOVALORID -- UPDATE
,SSEXOID --INSERT UPDATE
,SFENACIMIENTO --INSERT UPDATE
,SNIVELSEID --INSERT UPDATE
,SOCUPACIONID --INSERT UPDATE
,SSECTORCOMID --INSERT UPDATE
,SGRUPOEMPID --INSERT UPDATE
,NNUMHIJOS --INSERT UPDATE
,NCUENTANACID --INSERT UPDATE
,SRFC --INSERT UPDATE
,SCURP --INSERT UPDATE
,SSECTORCOMDS --INSERT UPDATE
,SPROFESIONDS --INSERT UPDATE
,DINSERREG --INSERT UPDATE
,SEDAD --INSERT UPDATE
,SFUENTEID --INSERT UPDATE
FROM vicx886.TTMP_CLI_POS_PASOFINAL PARTITION("CLIENTES_UPDATE") CLI';


EXECUTE IMMEDIATE 'ALTER INDEX VICX886.PK_CLIENTES UNUSABLE';

OPEN v_cursor for v_SQLSentencia;
LOOP
FETCH v_cursor BULK COLLECT
INTO v_rowid
,v_NCARRIERID
,v_STITULO
,v_SGRUPOVALORID
,v_SSEXOID
,v_SFENACIMIENTO
,v_SNIVELSEID
,v_SOCUPACIONID
,v_SSECTORCOMID
,v_SGRUPOEMPID
,v_NNUMHIJOS
,v_NCUENTANACID
,v_SRFC
,v_SCURP
,v_SSECTORCOMDS
,v_SPROFESIONDS
,v_DINSERREG
,v_SEDAD
,v_SFUENTEID
LIMIT 1000;
v_cont:=v_cont + v_rowid.count;
FORALL i IN 1 .. v_rowid.count
UPDATE vicx886.CLIENTES
SET NCARRIERID = v_NCARRIERID (i)
,STITULO = v_STITULO (i)
,SGRUPOVALORID = v_SGRUPOVALORID(i)
,SSEXOID = v_SSEXOID (i)
,SFENACIMIENTO = v_SFENACIMIENTO(i)
,SNIVELSEID = v_SNIVELSEID (i)
,SOCUPACIONID = v_SOCUPACIONID (i)
,SSECTORCOMID = v_SSECTORCOMID (i)
,SGRUPOEMPID = v_SGRUPOEMPID (i)
,NNUMHIJOS = v_NNUMHIJOS (i)
,NCUENTANACID = v_NCUENTANACID (i)
,SRFC = v_SRFC (i)
,SCURP = v_SCURP (i)
,SSECTORCOMDS = v_SSECTORCOMDS (i)
,SPROFESIONDS = v_SPROFESIONDS (i)
,DINSERREG = v_DINSERREG (i)
,SEDAD = v_SEDAD (i)
,SFUENTEID = v_SFUENTEID (i)
where rowid = v_rowid(i);
EXIT WHEN v_cursor%NOTFOUND;

END LOOP;

commit;

dbms_output.put_line('Inicio de rebuild indexes:'||v_date);


for x in ( select 'alter index ' || index_name ||
' rebuild partition ' || partition_name stmt
from all_ind_partitions
where index_name = 'PK_CLIENTES' and index_owner='VICX886')
loop
dbms_output.put_line( x.stmt );
execute immediate x.stmt;
end loop;

end;
/



Script 2.- total Time of 10 minutes and 10 seconds.

lanza_updates.sh
--------------------------------------------------------
#!/bin/ksh

dirwork=/CLIENTES

date

echo inicio

for reg in P1 P2 P3 P4 P5 P6 P7 P8
do

nohup $dirwork/update.sh $reg > update.$reg.log &

done
wait

echo fin

date



update.sh
--------------------------------------------------------
#!/bin/ksh

v_partition=$1

date

sqlplus -s user/pass@DWHOUSE << FIN
select sid from v\$mystat where rownum <=1;
set timing on
set serveroutput on size 20000
declare

x_sqlcode number;
--x_sqlerr varchar2(4000);
v_SQLSentencia varchar2(4000);
TYPE Cursor_c is ref cursor;
TYPE ArrayRow is table of ROWID;
TYPE Arraynum is table of number(9);
TYPE Arrayvachar is table of varchar2(80);
TYPE Arraydate is table of date;
v_cursor Cursor_c;
v_rowid ArrayRow;
v_NCARRIERID Arraynum;
v_STITULO Arrayvachar;
v_SGRUPOVALORID Arrayvachar;
v_SSEXOID Arrayvachar;
v_SFENACIMIENTO Arrayvachar;
v_SNIVELSEID Arrayvachar;
v_SOCUPACIONID Arrayvachar;
v_SSECTORCOMID Arrayvachar;
v_SGRUPOEMPID Arrayvachar;
v_NNUMHIJOS Arraynum;
v_NCUENTANACID Arraynum;
v_SRFC Arrayvachar;
v_SCURP Arrayvachar;
v_SSECTORCOMDS Arrayvachar;
v_SPROFESIONDS Arrayvachar;
v_DINSERREG Arraydate;
v_SEDAD Arrayvachar;
v_SFUENTEID Arrayvachar;
v_cont NUMBER(30):=0;



BEGIN


v_SQLSentencia := 'SELECT
SROWID
,NCARRIERID --INSERT UPDATE
,STITULO --INSERT UPDATE
,SGRUPOVALORID -- UPDATE
,SSEXOID --INSERT UPDATE
,SFENACIMIENTO --INSERT UPDATE
,SNIVELSEID --INSERT UPDATE
,SOCUPACIONID --INSERT UPDATE
,SSECTORCOMID --INSERT UPDATE
,SGRUPOEMPID --INSERT UPDATE
,NNUMHIJOS --INSERT UPDATE
,NCUENTANACID --INSERT UPDATE
,SRFC --INSERT UPDATE
,SCURP --INSERT UPDATE
,SSECTORCOMDS --INSERT UPDATE
,SPROFESIONDS --INSERT UPDATE
,DINSERREG --INSERT UPDATE
,SEDAD --INSERT UPDATE
,SFUENTEID --INSERT UPDATE
FROM VICX886.TTMP_CLI_POS_PASOFINAL3 partition("${v_partition}")';



EXECUTE IMMEDIATE 'ALTER INDEX vicx886.PK_CLIENTES MODIFY PARTITION "${v_partition}" UNUSABLE';

OPEN v_cursor for v_SQLSentencia;
LOOP
FETCH v_cursor BULK COLLECT
INTO v_rowid
,v_NCARRIERID
,v_STITULO
,v_SGRUPOVALORID
,v_SSEXOID
,v_SFENACIMIENTO
,v_SNIVELSEID
,v_SOCUPACIONID
,v_SSECTORCOMID
,v_SGRUPOEMPID
,v_NNUMHIJOS
,v_NCUENTANACID
,v_SRFC
,v_SCURP
,v_SSECTORCOMDS
,v_SPROFESIONDS
,v_DINSERREG
,v_SEDAD
,v_SFUENTEID
LIMIT 1000;

v_cont:=v_cont + v_rowid.count;

FORALL i IN 1 .. v_rowid.count
UPDATE vicx886.CLIENTES partition ("${v_partition}")
SET NCARRIERID = v_NCARRIERID (i)
,STITULO = v_STITULO (i)
,SGRUPOVALORID = v_SGRUPOVALORID(i)
,SSEXOID = v_SSEXOID (i)
,SFENACIMIENTO = v_SFENACIMIENTO(i)
,SNIVELSEID = v_SNIVELSEID (i)
,SOCUPACIONID = v_SOCUPACIONID (i)
,SSECTORCOMID = v_SSECTORCOMID (i)
,SGRUPOEMPID = v_SGRUPOEMPID (i)
,NNUMHIJOS = v_NNUMHIJOS (i)
,NCUENTANACID = v_NCUENTANACID (i)
,SRFC = v_SRFC (i)
,SCURP = v_SCURP (i)
,SSECTORCOMDS = v_SSECTORCOMDS (i)
,SPROFESIONDS = v_SPROFESIONDS (i)
,DINSERREG = v_DINSERREG (i)
,SEDAD = v_SEDAD (i)
,SFUENTEID = v_SFUENTEID (i)
where rowid = v_rowid(i);
EXIT WHEN v_cursor%NOTFOUND;

END LOOP;


dbms_output.put_line('registros :'||v_cont);
COMMIT;

EXECUTE IMMEDIATE 'ALTER INDEX vicx886.PK_CLIENTES rebuild PARTITION "${v_partition}" PARALLEL 2';
end;
/
exit
FIN

date


**********************************************************
The number total of cpu and pga
vicx886@DWHOUSE> show parameter cpu

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 20
parallel_threads_per_cpu integer 1
vicx886@DWHOUSE> show parameter pga

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
pga_aggregate_target big integer 20000M



Thanks in advance.

Update Partition

Ernesto Villarruel, May 10, 2011 - 7:27 pm UTC


and with this script, the time is 8 minutes and 12 seconds

create unique index vicx886.ITTMP_CLI_POS_PASOFINAL3 on vicx886.TTMP_CLI_POS_PASOFINAL3(SROWID) parallel( degree 8)

update
(
SELECT cli.NCARRIERID AS HNCARRIERID ,tmp.NCARRIERID AS TNCARRIERID
,cli.STITULO AS HSTITULO ,tmp.STITULO AS TSTITULO
,cli.SGRUPOVALORID AS HSGRUPOVALORID ,tmp.SGRUPOVALORID AS TSGRUPOVALORID
,cli.SSEXOID AS HSSEXOID ,tmp.SSEXOID AS TSSEXOID
,cli.SFENACIMIENTO AS HSFENACIMIENTO ,tmp.SFENACIMIENTO AS TSFENACIMIENTO
,cli.SNIVELSEID AS HSNIVELSEID ,tmp.SNIVELSEID AS TSNIVELSEID
,cli.SOCUPACIONID AS HSOCUPACIONID ,tmp.SOCUPACIONID AS TSOCUPACIONID
,cli.SSECTORCOMID AS HSSECTORCOMID ,tmp.SSECTORCOMID AS TSSECTORCOMID
,cli.SGRUPOEMPID AS HSGRUPOEMPID ,tmp.SGRUPOEMPID AS TSGRUPOEMPID
,cli.NNUMHIJOS AS HNNUMHIJOS ,tmp.NNUMHIJOS AS TNNUMHIJOS
,cli.NCUENTANACID AS HNCUENTANACID ,tmp.NCUENTANACID AS TNCUENTANACID
,cli.SRFC AS HSRFC ,tmp.SRFC AS TSRFC
,cli.SCURP AS HSCURP ,tmp.SCURP AS TSCURP
,cli.SSECTORCOMDS AS HSSECTORCOMDS ,tmp.SSECTORCOMDS AS TSSECTORCOMDS
,cli.SPROFESIONDS AS HSPROFESIONDS ,tmp.SPROFESIONDS AS TSPROFESIONDS
,cli.DINSERREG AS HDINSERREG ,tmp.DINSERREG AS TDINSERREG
,cli.SEDAD AS HSEDAD ,tmp.SEDAD AS TSEDAD
,cli.SFUENTEID AS HSFUENTEID ,tmp.SFUENTEID AS TSFUENTEID
from vicx886.TTMP_CLI_POS_PASOFINAL3 tmp,
vicx886.CLIENTES cli
where tmp.srowid= cli.rowid
)
SET HNCARRIERID = TNCARRIERID
,HSTITULO = TSTITULO
,HSGRUPOVALORID = TSGRUPOVALORID
,HSSEXOID = TSSEXOID
,HSFENACIMIENTO = TSFENACIMIENTO
,HSNIVELSEID = TSNIVELSEID
,HSOCUPACIONID = TSOCUPACIONID
,HSSECTORCOMID = TSSECTORCOMID
,HSGRUPOEMPID = TSGRUPOEMPID
,HNNUMHIJOS = TNNUMHIJOS
,HNCUENTANACID = TNCUENTANACID
,HSRFC = TSRFC
,HSCURP = TSCURP
,HSSECTORCOMDS = TSSECTORCOMDS
,HSPROFESIONDS = TSPROFESIONDS
,HDINSERREG = TDINSERREG
,HSEDAD = TSEDAD
,HSFUENTEID = TSFUENTEID ;


commit;

is this the best option ?

Thanks in advance.

Prats, October 26, 2011 - 3:11 am UTC

If I have a table with, say, 252 million records and I am updating all of the records using bulk collect, what should my ideal LIMIT value be? Currently I am using 10,000. Should I use 100 as you advised, or is 10,000 OK?
Tom Kyte
October 26, 2011 - 5:22 am UTC

I would be using CREATE TABLE AS SELECT (CTAS) probably - not an update. CTAS will be able to skip redo, undo - can easily be parallelized, uses direct path, will result in a nicely packed table when I'm done.


Short of that, I would be using a single update statement - no slow by slow code, even if done in bulk.

I would at the very least make the limit size a configurable parameter and evaluate various sizes and see what works best for me. Rather than hard coding it.
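
A sketch of the CTAS approach for that kind of whole-table change (BIG_TABLE and its columns are made up; the "update" is simply expressed in the SELECT):

create table big_table_new
parallel 8
nologging
as
select /*+ parallel(t, 8) */
       t.id,
       upper(t.some_col) some_col,   -- the transformation that would have been the UPDATE
       t.other_col
  from big_table t;

-- then swap the tables and re-create indexes, constraints and grants as needed
rename big_table     to big_table_old;
rename big_table_new to big_table;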

FORALL, SAVE EXCEPTIONS, BULK COLLECT

Swarna, November 08, 2011 - 12:22 am UTC

I have gone through your website to learn about FORALL, SAVE EXCEPTIONS and BULK COLLECT. It is the most resourceful website for Oracle.

Thanks for your great work!

Problem with bulk collect

A reader, November 17, 2011 - 12:08 pm UTC

I have declared
L_C1 through L_C55 as SYS.ODCIVARCHAR2LIST,
and I do not have compile errors, but I receive the error ORA-06531: Reference to uninitialized collection. Am I missing something?


FETCH CUSTODIAN_EXTRACT_DETAILS
BULK COLLECT INTO
L_C1, L_C2, L_C3,L_C4,L_C5, L_C6, L_C7,L_C8,L_C9, L_C10,
L_C11,L_C12,L_C13, L_C14,L_C15,L_C16, L_C17, L_C18,L_C19,L_C20 ,
L_C21,L_C22,L_C23, L_C24, L_C25,L_C26,L_C27, L_C28,L_C29,L_C30,
L_C31, L_C32,L_C32,L_C34,L_C35, L_C36, L_C37,L_C38,L_C39, L_C40,
L_C41,L_C42,L_C43, L_C44,L_C45,L_C46, L_C47, L_C48,L_C49,L_C50 ,
L_C51,L_C52,L_C53, L_C54, L_C55 LIMIT L_LIMIT;

for I in 1 .. l_c1.count
LOOP

-- DBMS_OUTPUT.PUT_LINE
-- (L_C1(I) || ',' || L_C2(I) || ',' || L_C3(I)|| ',' || L_C4(I) || ',' || L_C5(I)|| ',' || L_C6(I) || ',' || L_C7(I)|| ',' || L_C8(I) || ',' || L_C9(I) || ',' || L_C10(I) || ',' ||
-- L_C11(I) || ',' || L_C12(I)|| ',' || L_C13(I) || ',' || L_C14(I) || ',' || L_C15(I) || ',' || L_C16(I) || ',' || L_C17(I)|| ',' || L_C18(I) || ',' || L_C19(I)|| ',' || L_C20(I) || ',' ||
-- L_C21(I)|| ',' || L_C22(I) || ',' || L_C23(I) || ',' || L_C24(I)|| ',' || L_C25(I) || ',' || L_C26(I)|| ',' || L_C27(I) || ',' || L_C28(I) || ',' || L_C29(I) || ',' || L_C30(I) || ',' ||
-- L_C31(I)|| ',' || L_C32(I) || ',' || L_C33(I)|| ',' || L_C34(I) || ',' || L_C35(I)|| ',' || L_C36(I) || ',' || L_C37(I) || ',' || L_C38(I) || ',' || L_C39(I) || ',' || L_C40(I)|| ',' ||
-- L_C41(I) || ',' || L_C42(I) || ',' || L_C43(I) || ',' || L_C44(I) || ',' || L_C45(I)|| ',' || L_C46(I) || ',' || L_C47(I)|| ',' || L_C48(I) || ',' || L_C49(I)|| ',' || L_C50(I) || ',' ||
-- L_C51(I) || ',' || L_C52(I)|| ',' || L_C53(I) || ',' || L_C54(I)|| ',' || L_C55(I));


end LOOP;

Pls Ignore my previous Query

A reader, November 17, 2011 - 12:32 pm UTC

I found the issue: I wrote L_C32 twice, so L_C33 was not initialized, which caused the error.

bulk collect

A reader, December 21, 2011 - 12:46 am UTC

Hi Tom,

Following the bulk collect advice in the first review, where you mention limiting the rows to overcome that issue, I wrote a small procedure. My task is to retrieve all columns of a table whose datatype is VARCHAR2, then check each column for a specific character and, where it is found, replace the value with null. Here is my code; as I understand it, it should work?

below is my code

CREATE OR REPLACE PROCEDURE pr_back_bulk (TABLENAME IN VARCHAR2)

AUTHID CURRENT_USER
IS


REC1 DBMS_SQL.VARCHAR2_TABLE;
start_time number;
SQLST VARCHAR2(250);
BEGIN

start_time:=dbms_utility.get_time;



SQLST:='select column_name
FROM all_tab_columns
where table_name= :x
and data_type = ''VARCHAR2''
AND instr(column_name,''_FORM'')>0';

EXECUTE IMMEDIATE SQLST
BULK COLLECT INTO REC1
using tablename;



SQLST:='FORALL INDX IN REC1.FIRST..REC1.LAST
UPDATE '||TABLENAME||' SET rec1(indx) = replace(REC1(INDX),''/'',''''
where instr(rec1(indx),''/'')>0';

execute immediate sqlst;
END pr_back_bulk;
/

Tom Kyte
December 21, 2011 - 7:57 am UTC

why would you use dynamic sql to retrieve the column names? Given that the sql to query all_tab_columns would NEVER change from execution to execution?

Why would you execute an update statement for every column?


the update statement I would want to probably execute would be:

update table
   set col1 = case when col1 like '%/%' then null else col1 end,
       col2 = case when col2 like '%/%' then null else col2 end,
       ....
  where col1 like '%/%' or col2 like '%/%' ...


so, the code would look something like this:

ops$tkyte%ORA11GR2> create table t
  2  ( x varchar2(10),
  3    y_form varchar2(10),
  4    z_form_field varchar2(10),
  5    a_form number
  6  )
  7  /

Table created.

ops$tkyte%ORA11GR2> insert into t (x,y_form,z_form_field,a_form) values ( 'a', 'b', 'c', 2 );

1 row created.

ops$tkyte%ORA11GR2> insert into t (x,y_form,z_form_field,a_form) values ( 'a', 'b/x', 'c', 3 );

1 row created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create or replace procedure p( p_tname in varchar2 )
  2  authid current_user
  3  as
  4      l_update long := 'update ' || p_tname || ' set ';
  5  begin
  6      for x in ( select '"' || column_name || '"' cname
  7                   from user_tab_columns
  8                  where table_name = p_tname
  9                    and data_type = 'VARCHAR2'
 10                    and column_name like '%\_FORM%' escape '\' )
 11      loop
 12          l_update := l_update || x.cname || ' = case when ' || x.cname || ' like ''%/%'' then null else ' || x.cname || ' end,';
 13      end loop;
 14      l_update := rtrim(l_update,',') || ' where ';
 15      for x in ( select '"' || column_name || '"' cname
 16                   from user_tab_columns
 17                  where table_name = p_tname
 18                    and data_type = 'VARCHAR2'
 19                    and column_name like '%\_FORM%' escape '\' )
 20      loop
 21          l_update := l_update || x.cname || ' like ''%/%'' or ';
 22      end loop;
 23      l_update := substr( l_update, 1, length(l_update)-3 );
 24  
 25      dbms_output.put_line( l_update );
 26      execute immediate l_update;
 27  end;
 28  /

Procedure created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> exec p('T');
update T set "Y_FORM" = case when "Y_FORM" like '%/%' then null else "Y_FORM"
end,"Z_FORM_FIELD" = case when "Z_FORM_FIELD" like '%/%' then null else
"Z_FORM_FIELD" end where "Y_FORM" like '%/%' or "Z_FORM_FIELD" like '%/%'

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> select * from t;

X          Y_FORM     Z_FORM_FIE     A_FORM
---------- ---------- ---------- ----------
a          b          c                   2
a                     c                   3




bulk collect

A reader, December 21, 2011 - 3:28 pm UTC

hi Tom,

Thanks for such a helpful response. How can I rewrite this procedure using bulk collect, and would it improve my performance or not?
Tom Kyte
December 21, 2011 - 4:21 pm UTC

if you can do it in a single sql statement - do it that way.


If you can do it in a single sql, there is no reason to add the overhead of bulk processing and procedural code to the mix.

bulk collect or straight sql

George Joseph, January 13, 2012 - 12:13 pm UTC

Hi,
We have designed a data grid in .NET where the user can select each row, alter a textbox associated with each row and click on the UPDATE button.
There are two ways we can think of to implement this:
1. use UPDATE/MERGE by concatenating the changed rows using SELECT .... FROM DUAL UNION ALL
eg:
MERGE INTO emp
USING(
SELECT 1000 as empno, 500 as sal FROM DUAL UNION ALL
..................................................
..................................................
SELECT 1900 as empno, 200 as sal FROM DUAL
)modified
WHEN MATCHED THEN UPDATE SET emp.sal=modified.sal where emp.empno=modified.empno

2. Use bulk update in ODP.NET
ie
setBatchsize(100);
"for selected rows that are to be updated"
loop
UPDATE emp
set sal=:changed_sal
where empno=:empno
end loop;


commit;

Option 1 is more like straight SQL?
Option 2 is like PL/SQL using BULK

Although I haven't yet done any benchmarking, I just wanted to know which one would be the better option in your opinion.

Thanks and Regards
George

Tom Kyte
January 17, 2012 - 9:43 am UTC

neither, you are missing your optimistic concurrency controls here altogether.


what is to prevent user a from reading out the same row as user b and just overwriting each others changes?


your merge is much worse than your update. Your merge doesn't appear to use a single bind variable. As such, it would kill the shared pool and definitely be subject to SQL Injection.


I'd prefer the bulk update, but again, you need to add some optimistic concurrency controls there.
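
A sketch of one simple optimistic control (assuming the grid also reads and carries a version column; the ROW_VERSION column here is made up): the update succeeds only if the row is still in the state that user originally read.

update emp
   set sal         = :new_sal,
       row_version = row_version + 1
 where empno       = :empno
   and row_version = :row_version_as_read;

-- in the calling code: if sql%rowcount (the rows-affected count) is 0, someone else
-- changed the row since it was read; re-query and let the user re-apply the change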

RE:

George Joseph, January 17, 2012 - 10:01 am UTC

Hi,
I forgot to add some more information to the previous review.
It's an admin screen and only one user would ever use it.
However, I am struck by the fact that I am not using binds.
Should a
SELECT 101 from dual
UNION ALL
SELECT 102 from dual
be replaced with
SELECT :var1 from dual
UNION ALL
SELECT :var2 from dual
binding var1 and var2? However, the number of items the user chooses to change is going to vary each time: at 8:00 am he may choose 2 rows that need updating and at 9:00 am he may choose 10 rows that need updating.
In this case, won't there be two parses?

Tom Kyte
January 17, 2012 - 3:31 pm UTC

why would you be unioning here at all? I have no idea what the union is about. It wasn't mentioned in the prior discussion and just doesn't compute.


why would you select 101 and 102?

RE

George Joseph, January 17, 2012 - 9:24 pm UTC

"your merge is much worse than your update. Your merge doesn't appear to use a single bind variable."

I was following what you preach about "doing it in SQL if it is possible" by using MERGE/UPDATE with a join key and SELECT ... UNION ALL to update multiple rows.
By the looks of it, if I have to update 100 modified rows in a database table, which the user has selected from the screen, I can do it in SQL the way I mentioned:
MERGE INTO ... USING (SELECT pkeyvalue1, changed_value1 FROM DUAL UNION ALL SELECT pkeyvalue2, changed_value2 ... and so on) WHEN MATCHED THEN UPDATE, etc.
OR
I can go for a bulk update using .NET/J2EE/any language,
where I batch, say, 100 rows at a time, which to me looked a bit like PL/SQL, with setExecuteBatch(value) being the equivalent of the LIMIT clause or the array used in bulk collect. Please correct me if I got it wrong.

And since, as you mentioned, bind values are not being used in the SELECT-literal UNION ALL, my question was whether it would be a good idea to use SELECT :pkeyvalue1, :changedvalue2 FROM DUAL and bind the changed values and key values, given that the user's selection is going to change (sometimes he chooses just one row to update, sometimes 3-4 rows at a time).

I am thinking perhaps I took your mantra a bit too far. Please correct me if there is a better way to perform this in a single SQL statement. As always, I learn a lot more about Oracle from this site than from any other place.
Tom Kyte
January 18, 2012 - 7:06 am UTC

... I was following what you are trying to preach about "doing
in SQL if it is possible " ...

well, that sure as heck doesn't mean "use binds".


There are ways to do the merge using binds, but it would be a waste of time (you'd fill a collecting and merge using the collection as a set, you would bind the collection).


There is ALWAYS A WAY TO BIND - remember that. You cannot say "I cannot bind", you can only say "I chose not to bind". If you are going down a path and thinking "there is no way to use binds here" - you are wrong, feel free to ask how to do it and I'll gladly show you.


SELECT :pkeyvalue1,
:changedvalue2 FROM DUAL


I don't get the value in that at all - you already KNOW what is going to be returned from that query, you just BUILT that query. You are asking the database to return constants you sent to the database! Why would this accomplish anything (that hasn't already been accomplished??? you *have the values*, you *know* the values, you are telling us the values!)


Here you would use a bulk update (it is a lot like a single sql statement). If you really wanted to use the merge, we'd use a collection and bind the collection in.

sort of like this:


ops$tkyte%ORA11GR2> create table t ( x int, y int );

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create or replace type myScalarType as object
  2  ( x int,
  3    y int
  4  )
  5  /

Type created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create or replace type myTableType as table of myScalarType
  2  /

Type created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert into t values ( 1, null );

1 row created.

ops$tkyte%ORA11GR2> insert into t values ( 3, null );

1 row created.

ops$tkyte%ORA11GR2> declare
  2          l_data myTableType := myTableType( myScalarType( 1, 2 ),
  3                                             myScalarType( 3, 4 ) );
  4  begin
  5          merge into t tgt
  6          using (select * from table(l_data)) src
  7          on (tgt.x = src.x)
  8          when matched then update set y = src.y;
  9  end;
 10  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select * from t;

         X          Y
---------- ----------
         1          2
         3          4




but again, using that collection would be laboriously hard and unnecessary from a language other than plsql - it would be trivial to bind scalars to an update, it would require some amount of code to do the collection for you - so the update wins.


BULK Processing

Ashish Harbhajanka, February 04, 2012 - 2:08 am UTC

Hi,
I am trying to load some transactional data from a flat file (Excel, for example) into Oracle Applications, and am following the process below:

1) Import Excel Data into custom table.
2) BULK COLLECT Data into PL/SQL Table which is having rowtype of my custom table.
3) Validate Data
4) Derive Data
5) Load Error Recs in other pl/sql table which is having rowtype of my error table
6) Inserting Validated Recs to my base table calling API
7) Inserting Error Recs into error table using BULK Insert.

Now in my custom table there are two fields, say record_id (which behaves as the primary key) and record_status (used to identify the status of the record).

I do not have these fields in my flat file (Excel).
Hence I am calling a procedure to populate these two fields:

record_id via a sequence (using SELECT sequence.nextval)
record_status with a constant value (assignment).

My first question is:

1) Can I do a bulk update to populate both these fields?
2) Currently I am doing the processing on a record-by-record basis, meaning after a bulk collect of all records into my PL/SQL table:
For Each Record
a) Populate record id and record status
b) Validate Input Data
c) Derive some columns used for API processing using
SQLs and assigning them to pl/sql table cols.
d) Error Logging into error pl/sql table
e) Calling API to insert validated records
f) Updating Custom Table with derived cols value(stored in pl/sql table)
End ;

BULK INSERT error records into error table.

I want to do it all in one shot. I mean:

a) Update record id and record status for all records in custom table.
b) Validate Input Data for all records
c) Derive cols for all records
d) Error Logging for all records
e) API call for all records
f) Synochronising for all records.

How can I achieve this?








Tom Kyte
February 04, 2012 - 5:26 am UTC

Excel is not a flat file :) Flat files are just plain old files you would open up in notepad or vi.

how do you import this excel data - need to know that before we can answer the questions.

Bulk collect times out (high volume of data 200 million)

Sri, May 09, 2012 - 12:20 pm UTC

Hi Tom,

I am sure this is an easy fix, but I can't find the solution.

I am using bulk collect and it only works if I restrict the cursor's SELECT with ROWNUM; if I don't limit the rows, it times out with the end-of-file-on-communication-channel error ORA-03114. What is the difference between the bulk collect LIMIT and ROWNUM here?

Am I using up all the temp space with my cursor? Can the DBA increase something to make this work?

How can I make this work without using ROWNUM?
Here is my code:

SET SERVEROUTPUT ON ;
SET TIMING ON ;

DECLARE


TYPE TYP_RID_NON_OH_WC is TABLE OF acctdash_feature_adoption_load%ROWTYPE INDEX BY BINARY_INTEGER;

v_data_vi TYP_RID_NON_OH_WC;


cursor CUR_STG1 is
select
/*+ parallel(mcaa,6) full(mcag) parallel(mcag,6) full(mcc) parallel(mcc,6) parallel(mcc2,6) */
mcaa.account_key,
mcaa.mrkt_sid,
mcc.cmpgn_key,
mcc.cmpgn_name,
mcag.ad_grp_key,
mcag.ad_grp_name,
mcag.ad_grp_content_enabled,
mcc.cmpgn_content_enabled,
mcaa.advanced_match_optin_flag,
mcc.cmpgn_advance_enabled,
mcag.ad_grp_advance_enabled,
ad_grp_crtv_opt_enabled,
count(mcc2.crtv_key) num_creatives,
mcc.geoset_sid,
trunc(sysdate) - 1 load_date,
mcaa.company_key
from
YSMCD.dim_com_ad_group_c mcag,
YSMCD.dim_com_campaign_c mcc,
dash_com_advertiser_account mcaa,
YSMCD.dim_com_creative_c mcc2
where
--ROWNUM<60000 /*only this makes it work*/
--AND
mcc.account_key = mcag.account_key
and mcag.account_key=mcaa.account_key
and mcc.cmpgn_key = mcag.cmpgn_key
and mcc.account_key = mcaa.account_key
and mcc2.ad_grp_key = mcag.ad_grp_key
and mcc.cmpgn_status = 1
and mcag.ad_grp_status = 1
and mcc2.crtv_status = 1
group by
mcaa.account_key,
mcaa.company_key,
mcaa.mrkt_sid,
mcc.cmpgn_key,
mcc.cmpgn_name,
mcag.ad_grp_key,
mcag.ad_grp_name,
mcag.ad_grp_content_enabled,
mcc.cmpgn_content_enabled,
mcaa.advanced_match_optin_flag,
mcc.cmpgn_advance_enabled,
mcag.ad_grp_advance_enabled,
ad_grp_crtv_opt_enabled,
mcc.geoset_sid
;


BEGIN


OPEN CUR_STG1;

LOOP
FETCH CUR_STG1 BULK COLLECT INTO v_data_vi LIMIT 1000;
FORALL i IN 1..v_data_vi.COUNT
INSERT INTO acctdash_feature_adoption_load VALUES V_DATA_VI(i);
COMMIT;
--DBMS_OUTPUT.PUT_LINE('100 ROWS INSERTED');
EXIT WHEN CUR_STG1%NOTFOUND;

END LOOP;

CLOSE CUR_STG1;


EXCEPTION
WHEN OTHERS
THEN
BEGIN
DBMS_OUTPUT.PUT_LINE('An error encountered while running the load script - ' || SQLCODE||'-ERROR- '||SQLERRM);

END;

END;
/

Thanks in Advance for your help :)

Sri.
Tom Kyte
May 10, 2012 - 7:56 am UTC

I despise your code.

http://asktom.oracle.com/Misc/pokemon-and-when-others.html

why do you have that when others?

can you justify it on any technical grounds? (NO, you cannot)

All it does is

a) hides the fact there was an error
b) removes crucial debugging information from view
c) makes it so the database is left in an inconsistent state


This code should not exist at all, this should be:

insert /*+ append */ into t select .....;

Nothing more, nothing less.

If your procedure (when your procedure) fails some day and you are left with some of the inserts done and committed and some not - what then?? seriously - what then? how do you recover?????



The 3114 has nothing to do with the bulk collect. It looks more like the parallel query itself is failing. review your trace files and your alert log.

Size of tables i am using

Sri, May 09, 2012 - 5:48 pm UTC

mv_com_ad_group mcag, --330Million
mv_com_campaign mcc, --10Million
dash_com_advertiser_account mcaa, --123k
mv_com_creative mcc2 --754Million
Tom Kyte
May 10, 2012 - 7:57 am UTC

record counts are not useful in general. Size counts, rowcounts - not as much.

but regardless - see above

Thanks a lot Tom, I think the issue is a nested loop

Sri, May 10, 2012 - 12:39 pm UTC

Thanks Tom,

I tried just the insert /*+ append */, and below is the explain plan. I found that the issue is the nested loop: if I remove the COUNT/GROUP BY from my select it works, returning 45 million rows. So I used USE_HASH(mcag,mcc2) and it removed the nested loop from the explain plan, but it is still giving the ORA-03113 end-of-file-on-communication-channel error. What can I do? Could you give me a brief idea of why it is using a nested loop here?


insert /*+ append */ into acctdash_feature_adoption_test
select
/*+ use_hash(mcag,mcc2) parallel(mcaa,4) parallel(mcag,4) parallel(mcc,4) */
mcaa.account_key,
mcaa.mrkt_sid,
mcc.cmpgn_key,
mcc.cmpgn_name,
mcag.ad_grp_key,
mcag.ad_grp_name,
mcag.ad_grp_content_enabled,
mcc.cmpgn_content_enabled,
mcaa.advanced_match_optin_flag,
mcc.cmpgn_advance_enabled,
mcag.ad_grp_advance_enabled,
mcag.ad_grp_crtv_opt_enabled,
count(mcc2.crtv_key)num_creatives,
--100,
mcc.geoset_sid,
trunc(sysdate) - 1 load_date,
mcaa.company_key
from
mv_com_ad_group mcag, --330M
mv_com_campaign mcc, --10M
dash_com_advertiser_account mcaa --123k
,mv_com_creative mcc2 --754M
where
--rownum<2
--and
mcc.account_key = mcag.account_key
and mcag.account_key=mcaa.account_key
and mcc.cmpgn_key = mcag.cmpgn_key
and mcc.account_key = mcaa.account_key
and mcc2.ad_grp_key = mcag.ad_grp_key
and mcc.cmpgn_status = 1
and mcag.ad_grp_status = 1
and mcc2.crtv_status = 1
group by
mcaa.account_key,
mcaa.company_key,
mcaa.mrkt_sid,
mcc.cmpgn_key,
mcc.cmpgn_name,
mcag.ad_grp_key,
mcag.ad_grp_name,
mcag.ad_grp_content_enabled,
mcc.cmpgn_content_enabled,
mcaa.advanced_match_optin_flag,
mcc.cmpgn_advance_enabled,
mcag.ad_grp_advance_enabled,
ad_grp_crtv_opt_enabled,
mcc.geoset_sid
;

COMMIT;

PLAN_TABLE_OUTPUT(/*****WITHOUT USE_HASH HINT*******/)

Plan hash value: 773427431

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 6 | 954 | 180K (1)| 00:54:16 | | | | | |
| 1 | LOAD AS SELECT | ACCTDASH_FEATURE_ADOPTION_TEST | | | | | | | | | |
| 2 | PX COORDINATOR | | | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10003 | 6 | 954 | 180K (1)| 00:54:16 | | | Q1,03 | P->S | QC (RAND) |
| 4 | SORT GROUP BY | | 6 | 954 | 180K (1)| 00:54:16 | | | Q1,03 | PCWP | |
| 5 | PX RECEIVE | | 2 | 38 | 2 (0)| 00:00:01 | | | Q1,03 | PCWP | |
| 6 | PX SEND HASH | :TQ10002 | 2 | 38 | 2 (0)| 00:00:01 | | | Q1,02 | P->P | HASH |
|* 7 | TABLE ACCESS BY LOCAL INDEX ROWID| DIM_COM_CREATIVE_C | 2 | 38 | 2 (0)| 00:00:01 | | | Q1,02 | PCWP | |
| 8 | NESTED LOOPS | | 6 | 954 | 180K (1)| 00:54:16 | | | Q1,02 | PCWP | |
|* 9 | HASH JOIN | | 2 | 280 | 180K (1)| 00:54:16 | | | Q1,02 | PCWP | |
| 10 | PX RECEIVE | | 1067K| 76M| 7024 (1)| 00:02:07 | | | Q1,02 | PCWP | |
| 11 | PX SEND BROADCAST | :TQ10001 | 1067K| 76M| 7024 (1)| 00:02:07 | | | Q1,01 | P->P | BROADCAST |
|* 12 | HASH JOIN | | 1067K| 76M| 7024 (1)| 00:02:07 | | | Q1,01 | PCWP | |
| 13 | PX RECEIVE | | 125K| 2929K| 359 (1)| 00:00:07 | | | Q1,01 | PCWP | |
| 14 | PX SEND BROADCAST | :TQ10000 | 125K| 2929K| 359 (1)| 00:00:07 | | | Q1,00 | P->P | BROADCAST |
| 15 | PX BLOCK ITERATOR | | 125K| 2929K| 359 (1)| 00:00:07 | | | Q1,00 | PCWC | |
| 16 | TABLE ACCESS FULL | DASH_COM_ADVERTISER_ACCOUNT | 125K| 2929K| 359 (1)| 00:00:07 | | | Q1,00 | PCWP | |
| 17 | PX BLOCK ITERATOR | | 4161K| 202M| 6660 (1)| 00:02:00 | 1 | 64 | Q1,01 | PCWC | |
|* 18 | TABLE ACCESS FULL | DIM_COM_CAMPAIGN_C | 4161K| 202M| 6660 (1)| 00:02:00 | 1 | 64 | Q1,01 | PCWP | |
| 19 | PX BLOCK ITERATOR | | 212M| 12G| 173K (1)| 00:52:05 | 1 | 64 | Q1,02 | PCWC | |
|* 20 | TABLE ACCESS FULL | DIM_COM_AD_GROUP_C | 212M| 12G| 173K (1)| 00:52:05 | 1 | 64 | Q1,02 | PCWP | |
| 21 | PARTITION HASH ALL | | 3 | | 2 (0)| 00:00:01 | 1 | 64 | Q1,02 | PCWP | |
|* 22 | INDEX RANGE SCAN | DIM_COM_CREATIVE_C_IDX3 | 3 | | 2 (0)| 00:00:01 | 1 | 64 | Q1,02 | PCWP | |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------


PLAN_TABLE_OUTPUT(/*****USE_HASH HINT*******/)

Plan hash value: 120603416

---------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 4 | 636 | 189K (1)| 00:56:54 | | | | | |
| 1 | LOAD AS SELECT | ACCTDASH_FEATURE_ADOPTION_TEST | | | | | | | | | |
| 2 | PX COORDINATOR | | | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10006 | 4 | 636 | 189K (1)| 00:56:54 | | | Q1,06 | P->S | QC (RAND) |
| 4 | SORT GROUP BY | | 4 | 636 | 189K (1)| 00:56:54 | | | Q1,06 | PCWP | |
|* 5 | HASH JOIN | | 4 | 636 | 189K (1)| 00:56:54 | | | Q1,06 | PCWP | |
| 6 | PX RECEIVE | | 125K| 2929K| 359 (1)| 00:00:07 | | | Q1,06 | PCWP | |
| 7 | PX SEND HASH | :TQ10004 | 125K| 2929K| 359 (1)| 00:00:07 | | | Q1,04 | P->P | HASH |
| 8 | PX BLOCK ITERATOR | | 125K| 2929K| 359 (1)| 00:00:07 | | | Q1,04 | PCWC | |
| 9 | TABLE ACCESS FULL | DASH_COM_ADVERTISER_ACCOUNT | 125K| 2929K| 359 (1)| 00:00:07 | | | Q1,04 | PCWP | |
| 10 | PX RECEIVE | | 5656K| 728M| 189K (1)| 00:56:47 | | | Q1,06 | PCWP | |
| 11 | PX SEND HASH | :TQ10005 | 5656K| 728M| 189K (1)| 00:56:47 | | | Q1,05 | P->P | HASH |
|* 12 | HASH JOIN BUFFERED | | 5656K| 728M| 189K (1)| 00:56:47 | | | Q1,05 | PCWP | |
| 13 | PX RECEIVE | | 4161K| 202M| 6660 (1)| 00:02:00 | | | Q1,05 | PCWP | |
| 14 | PX SEND HASH | :TQ10002 | 4161K| 202M| 6660 (1)| 00:02:00 | | | Q1,02 | P->P | HASH |
| 15 | PX BLOCK ITERATOR | | 4161K| 202M| 6660 (1)| 00:02:00 | 1 | 64 | Q1,02 | PCWC | |
|* 16 | TABLE ACCESS FULL | DIM_COM_CAMPAIGN_C | 4161K| 202M| 6660 (1)| 00:02:00 | 1 | 64 | Q1,02 | PCWP | |
| 17 | PX RECEIVE | | 324M| 25G| 182K (1)| 00:54:48 | | | Q1,05 | PCWP | |
| 18 | PX SEND HASH | :TQ10003 | 324M| 25G| 182K (1)| 00:54:48 | | | Q1,03 | P->P | HASH |
|* 19 | HASH JOIN BUFFERED | | 324M| 25G| 182K (1)| 00:54:48 | | | Q1,03 | PCWP | |
| 20 | PX RECEIVE | | 324M| 5887M| 9010 (1)| 00:02:43 | | | Q1,03 | PCWP | |
| 21 | PX SEND HASH | :TQ10000 | 324M| 5887M| 9010 (1)| 00:02:43 | | | Q1,00 | P->P | HASH |
| 22 | PX BLOCK ITERATOR | | 324M| 5887M| 9010 (1)| 00:02:43 | 1 | 64 | Q1,00 | PCWC | |
|* 23 | TABLE ACCESS FULL| DIM_COM_CREATIVE_C | 324M| 5887M| 9010 (1)| 00:02:43 | 1 | 64 | Q1,00 | PCWP | |
| 24 | PX RECEIVE | | 212M| 12G| 173K (1)| 00:52:05 | | | Q1,03 | PCWP | |
| 25 | PX SEND HASH | :TQ10001 | 212M| 12G| 173K (1)| 00:52:05 | | | Q1,01 | P->P | HASH |
| 26 | PX BLOCK ITERATOR | | 212M| 12G| 173K (1)| 00:52:05 | 1 | 64 | Q1,01 | PCWC | |
|* 27 | TABLE ACCESS FULL| DIM_COM_AD_GROUP_C | 212M| 12G| 173K (1)| 00:52:05 | 1 | 64 | Q1,01 | PCWP | |
---------------------------------------------------------------------------------------------------------------------------------------------------------------



Tom Kyte
May 10, 2012 - 3:26 pm UTC

ora-3113 = please call support, there is a bug somewhere you are hitting - or a big mis-configuration at the very least.

Bulk fetch with left join takes very long

Robert Graf, August 03, 2012 - 4:51 pm UTC

Hi Tom,

I have a cursor that selects from a table (1 partition, 600,000 rows) and bulk fetches everything within a loop using a limit of 1000 rows. The whole job is done within 2 m 37 s.

But when I extended the select with a left join (53 rows), the first bulk fetch of 1000 takes 1-2 seconds, and with each subsequent bulk fetch of 1000 the time grows by another 0-2 seconds or so. The last bulk fetch took 51 seconds. The whole job finishes after 2 h 25 m. (!)

This is always reproducible on our development server and on another test server.

What is happening here, and how can I avoid it?

Execution plan without left join:

Plan
SELECT STATEMENT  ALL_ROWSCost: 33,054  Bytes: 79,288  Cardinality: 1,166    
 2 PARTITION LIST SINGLE  Cost: 33,054  Bytes: 79,288  Cardinality: 1,166  Partition #: 1  Partitions determined by Key Values 
  1 TABLE ACCESS FULL TABLE NEXUS.EW_BASICDATA Cost: 33,054  Bytes: 79,288  Cardinality: 1,166  Partition #: 2  Partitions accessed #1


Execution plan with left join:

Plan
SELECT STATEMENT  ALL_ROWS Cost: 49,533  Bytes: 198,220  Cardinality: 1,166
 9 NESTED LOOPS OUTER  Cost: 49,533  Bytes: 198,220  Cardinality: 1,166
  2 PARTITION LIST SINGLE  Cost: 33,054  Bytes: 113,102  Cardinality: 1,166  Partition #: 2  Partitions determined by Key Values
   1 TABLE ACCESS FULL TABLE NEXUS.EW_BASICDATA Cost: 33,054  Bytes: 113,102  Cardinality: 1,166  Partition #: 3  Partitions accessed #1
  8 VIEW NEXUS. Cost: 14  Bytes: 73  Cardinality: 1
   7 HASH JOIN OUTER  Cost: 14  Bytes: 2,548  Cardinality: 49
    5 HASH JOIN RIGHT SEMI  Cost: 9  Bytes: 1,666  Cardinality: 49
     3 TABLE ACCESS FULL TABLE NEXUS.NX_EXCHANGE_ENTITY Cost: 2  Bytes: 689  Cardinality: 53
     4 TABLE ACCESS FULL TABLE NEXUS.NX_EXCHANGE_2 Cost: 6  Bytes: 10,059  Cardinality: 479
    6 TABLE ACCESS FULL TABLE NEXUS.NX_EXCHANGE_MAPPING Cost: 5  Bytes: 252  Cardinality: 14


The left join only generates 53 records and is used as a mapping table from one code to another.

We are using 10.2.0.5.

Thanks a lot in advance for your help again.


Tom Kyte
August 17, 2012 - 9:58 am UTC

well, you are comparing apples to jupiter here. Seriously - you are comparing a single table full table scan (no temp, no hashing, no nothing - just a trivial full scan)

to a query that does three full scans, two hashes into memory (or temp) and then full scans another table and so on and so on?


really???

that first query is a red herring here, it doesn't even belong.


If you want, get a dbms_monitor trace (with waits) of your second query and a tkprof report of it and we'll look at it, but please - just lose that first query, it means nothing.

It would be like saying "look select * from dual goes fast, why doesn't my five table join do the same?"
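
For reference, a minimal sketch of capturing such a trace (this assumes you can run the query from your own session; the trace file name in the tkprof step is a placeholder):

exec dbms_monitor.session_trace_enable( waits => true, binds => false );

-- run the slow bulk-fetch query here, in the same session, then:

exec dbms_monitor.session_trace_disable();

-- on the database server, format the raw trace file with tkprof, for example:
-- $ tkprof ora_12345_mysess.trc trace_report.txt sys=no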

commit in cursor loop using SELECT FOR UPDATE

Lasse Jenssen, September 21, 2012 - 6:44 am UTC

Above in this thread someone asked: "Have you ever put commits in a loop for a cursor that does a SELECT FOR UPDATE? "
You answered: I don't understand the context of this comment?

Actually I got the same question. Here is my code:
procedure move_errors is
   cursor   err_cur is
      select e.*, nvl2( b.code, 1,0) missing_bank
      from  e left join b on (e.code= b.code)
      where  b.code is null
         or  element is null for update skip locked;
   
   TYPE err_aat IS TABLE OF err_cur%ROWTYPE INDEX BY PLS_INTEGER;
   err err_aat;
begin
   open err_cur;
      
   loop
      fetch err_cur bulk collect into err limit 100;
      exit when err.count=0;
         
      forall i in 1..err.count
        insert into e_err (...,err_desc)
        values  (err(i).id, err(i).code, ...,
         case when err(i).missing_bank = 1 then 'CODE not found in B table'
              else 'ELEMENT has NULL value' end);  
         
      forall i in 1..err.count
         delete from e where id = err(i).id;
      commit;
   end loop;
   close err_cur;
end move_errors;


1.) Will this work, or will the commit release the locks on the first commit (inside the cursor loop)? Or will all the rows stay locked until we close the cursor?
2.) Will rows in the B-table get locked?

By the way ... looking forward to your presentations in San Francisco (OOW) in about a week :-)

Tom Kyte
September 26, 2012 - 12:44 pm UTC

this code is buggy..

1) the commit will release the locks, that is part of the definition of commit. the second fetch will fail with fetch out of sequence




ops$tkyte%ORA11GR2> create table t as select * from all_objects;

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> declare
  2          cursor c is select * from t for update;
  3  
  4          type array is table of t%rowtype;
  5          l_data array;
  6  begin
  7          open c;
  8          loop
  9                  fetch c bulk collect into l_data limit 100;
 10  
 11                  dbms_output.put_line( 'processing ' || l_data.count );
 12                  commit;
 13                  exit when c%notfound;
 14          end loop;
 15          close c;
 16  end;
 17  /
processing 100
declare
*
ERROR at line 1:
ORA-01002: fetch out of sequence
ORA-06512: at line 9




2) you should always specify "for update of <the column in the table you want locked>". That way we only lock rows from the one table you name.



it seems to me you should just do an insert-as-select and a delete in a serializable transaction - no procedural code, no commits inside a loop.

and if this is an error log, no one would be updating the records, so what is the point of the skip locked?
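
For what it's worth, a minimal sketch of that set-based version, reusing the table and column names from the posted cursor (e, b, e_err; the err_desc logic and the column lists are illustrative only, not complete):

set transaction isolation level serializable;

insert into e_err ( id, code, err_desc )
select e.id, e.code,
       case when b.code is null then 'CODE not found in B table'
            else 'ELEMENT has NULL value'
       end
  from e left join b on ( e.code = b.code )
 where b.code is null
    or e.element is null;

delete from e
 where not exists ( select null from b where b.code = e.code )
    or element is null;

commit;

Because everything runs as one serializable transaction with a single commit at the end, the insert and the delete see and move exactly the same set of rows.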

Bulk collecting into a nested table with a "LIMIT" clause

Sangeetha, October 24, 2012 - 5:49 am UTC

Hi Tom,

I am doing a bulk collect into a nested table as below:

declare
   v_tab nt_emp := nt_emp();
begin
   select ty_rec(empid, empno)
   bulk collect into v_tab
   from employees;
end;

where ty_rec is a type defined as:

create or replace type TY_REC is object (EMPNO number, EMPNAME varchar2(50));

and nt_emp is defined as:

create or replace type NT_EMP is table of TY_REC;

If the employees table has more than, say, 10,000 rows, is the best approach with bulk collect to rewrite the single fetch as a fetch in a cursor loop with a limit clause, as mentioned in the answer to the original question here?

Thank you so much for your time
Regards
Sangeetha

Tom Kyte
October 25, 2012 - 8:49 am UTC

use an explicit cursor

fetch bulk collect - limit to 100 to 1000

use forall to put the data back if you are putting the data back.

just like the original answer, yes.
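
Something along these lines, for example (a sketch built on the ty_rec/nt_emp types above, assuming the employees columns match the type attributes; emp_archive is a hypothetical object table of ty_rec, used only to illustrate the FORALL step):

declare
   cursor c is
      select ty_rec( empno, empname )
        from employees;

   l_data nt_emp;
begin
   open c;
   loop
      fetch c bulk collect into l_data limit 500;  -- 100 to 1000 is a sensible range
      exit when l_data.count = 0;

      -- process the batch here ...

      forall i in 1 .. l_data.count
         insert into emp_archive values ( l_data(i) );
   end loop;
   close c;
end;
/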

Index by Table

Sangeetha P, October 31, 2012 - 6:58 am UTC

Hi Tom,

Thank you so much. I have one more scenario where I am confused about choosing the right data structure.

I am doing a calculation process. The basic process is that I need to find the value at position 'i'. This 'i' can have 10752 distinct values, and on each day the 10752 values will be different ('i' is a time-factor/skill combination). Five different sets of tables contribute to this value.

Would associative arrays with a VARCHAR2 index be the ideal answer? With an associative array I have the advantage that in all scenarios I can write Array1(i) := Array1(i) + new_val, whether I pass the array around or declare it globally.

Is 10752 too large a count to handle in memory?

One more issue is that, based on one more attribute, I also need a categorized set of arrays:


if type='E'
EArray (i) :=EArray (i) +new_val;
if type='F'
FArray (i) :=FArray (i) +new_val; etc...

Would it be advantageous to move to a global temporary table (GTT) instead of associative arrays?
Tom Kyte
October 31, 2012 - 5:45 pm UTC

10,000 is reasonable in an array.

instead of using a separate array for each type, you could use a multi-level collection.

here, I'll have an array type_lookup that will convert E, F, ... whatever into 1, 2, 3.... a first level subscript.

then we'll have a two dimensional array - first dimension would be your types, second dimension would be their values

ops$tkyte%ORA11GR2> declare
  2  
  3      type numArray is table of number index by binary_integer;
  4      type twoDimArray is table of numArray index by binary_integer;
  5  
  6      type correlatedArray is table of number index by varchar2(1);
  7  
  8      my_data    twoDimArray;
  9      type_lookup correlatedArray;
 10  begin
 11      type_lookup('E') := 1;
 12      type_lookup('F') := 2;
 13      /* .... */
 14      type_lookup('M') := 42;
 15  
 16  
 17      /* .... */
 18  
 19      for i in 1 .. 10
 20      loop
 21          begin
 22              my_data(type_lookup('E'))(i) := my_data(type_lookup('E'))(i) + 1;
 23          exception
 24              when no_data_found then my_data(type_lookup('E'))(i) := 0;
 25          end;
 26      end loop;
 27  end;
 28  /

PL/SQL procedure successfully completed.

Thanks

Sangeetha P, November 01, 2012 - 12:27 am UTC

Thank you so much Tom. Have a great day ahead.

insert 400 million records into a table

Luong Tran, December 13, 2012 - 4:07 am UTC

Hi Tom,

In our datawarehouse project, we have to migrate a very large table. The table has more than 400 million records, and we need to copy those records (not all the columns in the table) to another table.

To make it clear, table A has about 20 columns and contains more than 400 million records. We created table B, which has about 10 columns that are a subset of the 20 columns in table A. Now we have to move all the records from table A into table B, but we found it very slow even with the bulk insert.

Could you please help?


Tom Kyte
December 17, 2012 - 3:47 pm UTC

drop table b;

create table b as select .... from a;

using parallel, nologging if you like. that'll be as fast as it gets.


or

insert /*+ append */ into b select .... from a;


you do NOT want to use bulk (array) inserts, just use a single CREATE TABLE AS SELECT, or INSERT /*+APPEND*/ (direct path insert).

use parallel if it makes sense.

use nologging if it makes sense.
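
Roughly, the two options look like this (a sketch only - the c1..c10 column names and the degree of 8 are placeholders, not your real ones):

-- option 1: create table B directly from A
create table b
  parallel 8
  nologging
as
select c1, c2, c3, c4, c5, c6, c7, c8, c9, c10
  from a;

-- option 2: B already exists - direct-path, parallel insert
alter session enable parallel dml;

insert /*+ append parallel(b,8) */ into b ( c1, c2, c3, c4, c5, c6, c7, c8, c9, c10 )
select /*+ parallel(a,8) */ c1, c2, c3, c4, c5, c6, c7, c8, c9, c10
  from a;

commit;

If you go the nologging/direct-path route, remember the loaded data is not protected by redo, so take a backup of the affected tablespaces afterwards.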
