
Question and Answer

Tom Kyte

Thanks for the question, Raghu.

Asked: March 24, 2003 - 10:21 am UTC

Last updated: July 16, 2012 - 3:26 pm UTC

Version: Oracle 9.2.0.2


You Asked

Tom,
Currently we use the following structure for our ETL, and it's taking too long:

Cursor for Delta table
Loop
Insert into dimension tables values ( cursor variables )
insert into xref tables
insert into fact tables
end loop

commit;

The reason for this structure is to log all the errors in the process.
Currently it's taking more than 16 hours for 140,000 records.
It completes in about 1 minute for 300 records in the delta table.



1. The insert statements use the cursor variables. This should not take that long?
2. Is there anything fundamentally wrong with the procedure layout?
3. Does committing within the loop (which you strongly oppose) speed up the process?


Thanks in Advance
Raghu,J

and Tom said...

I'll answer this as a LIFO queue:

3) NO -- in fact, it makes it go slower than it should. Committing in a cursor for loop for each insert does a couple of things:

a) goes SLOW
b) generates more REDO
c) generates more UNDO
d) stinks

Those are the cons, for the pros:

NULL

cannot think of any personally...


2) yup, doesn't employ any bulk operations whatsoever.

1) do anything lots of times and it takes time.


The fastest way is single statements all the way. Maybe it would be faster to validate the data AND THEN load it.

At the very very teeny tiny least -- use bulk processing. see
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:3675533064673

Here I'll show a 9iR2-specific example; it uses SAVE EXCEPTIONS / SQL%BULK_EXCEPTIONS and the ability to insert/update from a record. Refer to the above link for how to do this in 8i and before:


ops$tkyte@ORA920> create table t
2 as
3 select *
4 from all_objects
5 order by object_name;

Table created.

Elapsed: 00:00:02.79
ops$tkyte@ORA920>
ops$tkyte@ORA920> create table t2
2 as
3 select *
4 from all_objects
5 where 1=0;

Table created.

Elapsed: 00:00:00.10
ops$tkyte@ORA920>
ops$tkyte@ORA920> alter table t2 add constraint check_object_id
2 check ( object_id <= 31900 );

Table altered.

those are the tables to test with -- I've set it up so that 24 rows will get rejected. Additionally, I sorted the data by object name so that the errors happen "randomly" throughout the load.

ops$tkyte@ORA920> declare
2 l_cnt number := 0;
3 begin
4 for x in ( select * from t )
5 loop
6 begin
7 insert into t2 values X;
8 exception
9 when others then dbms_output.put_line(sqlcode);
10 l_cnt := l_cnt+1;
11 end;
12 commit;
13 end loop;
14 dbms_output.put_line( l_cnt || ' total errors' );
15 end;
16 /
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
24 total errors

PL/SQL procedure successfully completed.

Elapsed: 00:00:58.69

that's you right now -- about 1 minute...


ops$tkyte@ORA920> truncate table t2;

Table truncated.

Elapsed: 00:00:01.48
ops$tkyte@ORA920>
ops$tkyte@ORA920> declare
2 l_cnt number := 0;
3 begin
4 for x in ( select * from t )
5 loop
6 begin
7 insert into t2 values X;
8 exception
9 when others then dbms_output.put_line(sqlcode);
10 l_cnt := l_cnt+1;
11 end;
12 end loop;
13 commit;
14 dbms_output.put_line( l_cnt || ' total errors' );
15 end;
16 /
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
-2290
24 total errors

PL/SQL procedure successfully completed.

Elapsed: 00:00:15.68

That's you with the commit in the proper location


ops$tkyte@ORA920> truncate table t2;

Table truncated.

Elapsed: 00:00:00.10
ops$tkyte@ORA920>
ops$tkyte@ORA920> DECLARE
2 type array is table of t%rowtype index by binary_integer;
3 data array;
4 errors NUMBER;
5 dml_errors EXCEPTION;
6 l_cnt number := 0;
7 PRAGMA exception_init(dml_errors, -24381);
8
9 cursor c is select * from t;
10 BEGIN
11 open c;
12 loop
13 fetch c BULK COLLECT INTO data LIMIT 100;
14 begin
15 FORALL i IN 1 .. data.count SAVE EXCEPTIONS
16 insert into t2 values data(i);
17 EXCEPTION
18 WHEN dml_errors THEN
19 errors := SQL%BULK_EXCEPTIONS.COUNT;
20 l_cnt := l_cnt + errors;
21 FOR i IN 1..errors LOOP
22 dbms_output.put_line
23 ('Error occurred during iteration ' ||
24 SQL%BULK_EXCEPTIONS(i).ERROR_INDEX ||
25 ' Oracle error is ' ||
26 SQL%BULK_EXCEPTIONS(i).ERROR_CODE );
27 end loop;
28 end;
29 exit when c%notfound;
30 END LOOP;
31 close c;
32 dbms_output.put_line( l_cnt || ' total errors' );
33 end;
34 /
Error occurred during iteration 99 Oracle error is 2290
Error occurred during iteration 100 Oracle error is 2290
Error occurred during iteration 1 Oracle error is 2290
Error occurred during iteration 73 Oracle error is 2290
Error occurred during iteration 74 Oracle error is 2290
Error occurred during iteration 75 Oracle error is 2290
Error occurred during iteration 76 Oracle error is 2290
Error occurred during iteration 77 Oracle error is 2290
Error occurred during iteration 78 Oracle error is 2290
Error occurred during iteration 79 Oracle error is 2290
Error occurred during iteration 58 Oracle error is 2290
Error occurred during iteration 9 Oracle error is 2290
Error occurred during iteration 10 Oracle error is 2290
Error occurred during iteration 85 Oracle error is 2290
Error occurred during iteration 54 Oracle error is 2290
Error occurred during iteration 74 Oracle error is 2290
Error occurred during iteration 26 Oracle error is 2290
Error occurred during iteration 27 Oracle error is 2290
Error occurred during iteration 49 Oracle error is 2290
Error occurred during iteration 16 Oracle error is 2290
Error occurred during iteration 17 Oracle error is 2290
Error occurred during iteration 18 Oracle error is 2290
Error occurred during iteration 19 Oracle error is 2290
Error occurred during iteration 67 Oracle error is 2290
24 total errors

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.45

and that is you with BULK processing...

ops$tkyte@ORA920> insert into t2 select * from t where object_id <= 31900;

29895 rows created.

Elapsed: 00:00:00.52

That is where you could be if you can put your ETL validation into SQL itself.


Looks promising for you ;)


Comments

Using multi-table insert

Logan Palanisamy, March 24, 2003 - 2:33 pm UTC

Tom,

How about using a multi-table insert, like:

INSERT ALL
INTO DIMENSION_T1 VALUES (DELTA_T.C1, .., ..)
INTO DIMENSION_T2 VALUES (DELTA_T.Cn, .., ..)
INTO XREF_T VALUES (DELTA_T.Cm, ..,..)
INTO FACT_T VALUES (DELTA_T.Cr, .., ..)
SELECT * FROM DELTA_T;

C1, Cn, Cm, Cr are different columns in the DELTA_T table.
A WHEN clause can optionally be added before each INTO.


Tom Kyte
March 24, 2003 - 3:07 pm UTC

sure, even better.
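
To make the optional WHEN clause concrete, here is a minimal sketch of a conditional multi-table insert -- the table and column names are invented for illustration and are not from the question above:

insert all
  -- load a dimension row only when the delta row actually carries one
  when c1 is not null then
    into dimension_t1 ( dim_key, dim_desc ) values ( c1, c2 )
  -- every delta row contributes a fact row
  when 1 = 1 then
    into fact_t ( dim_key, amount ) values ( c1, c3 )
select c1, c2, c3
  from delta_t;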

Georg, March 24, 2003 - 2:33 pm UTC

Hello Tom,

great example, as usual.

Is there any possibility to print out not the iteration counter, but something like the ID (primary key) of the "source row"? That would be a great aid in the search for the offending input data. Could it be used with an UPSERT (MERGE) statement? Thanks in advance.

Regards,
Georg



Tom Kyte
March 24, 2003 - 3:08 pm UTC

you can do that easily -- consider this code:

12 loop
13 fetch c BULK COLLECT INTO data LIMIT 100;
14 begin
15 FORALL i IN 1 .. data.count SAVE EXCEPTIONS
16 insert into t2 values data(i);
17 EXCEPTION
18 WHEN dml_errors THEN
19 errors := SQL%BULK_EXCEPTIONS.COUNT;
20 l_cnt := l_cnt + errors;
21 FOR i IN 1..errors LOOP
22 dbms_output.put_line
23 ('Error occurred during iteration ' ||
24 SQL%BULK_EXCEPTIONS(i).ERROR_INDEX ||
25 ' Oracle error is ' ||
26 SQL%BULK_EXCEPTIONS(i).ERROR_CODE );
27 end loop;
28 end;

On line 24 I am printing out the SUBSCRIPT into DATA of the offending record.

therefore:

data( SQL%BULK_EXCEPTIONS(i).ERROR_INDEX )

contains the primary key - et.al. - of the "bad record" to do with what you like.
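
For example, inside the exception handler above you could add the following (a small sketch; OBJECT_ID stands in for the primary key of the demo tables):

dbms_output.put_line
( 'bad key = ' || data( sql%bulk_exceptions(i).error_index ).object_id );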


I do not understand the tie in with MERGE?

ETL

Raghu, March 24, 2003 - 4:23 pm UTC

" How about using multi-table insert like

INSERT ALL
INTO DIMENSION_T1 VALUES (DELTA_T.C1, .., ..)
INTO DIMENSION_T2 VALUES (DELTA_T.Cn, .., ..)
INTO XREF_T VALUES (DELTA_T.Cm, ..,..)
INTO FACT_T VALUES (DELTA_T.Cr, .., ..)
SELECT * FROM DELTA_T;

C1, Cn, Cm, Cr are different columns in the DELTA_T table.
WHEN clause can be added optionally before INTO.



Followup:
sure, even better. "


what is DELTA_T ?




Tom Kyte
March 24, 2003 - 5:19 pm UTC

it is the delta table from the original question at the top

DELTA_T

Logan Palanisamy, March 24, 2003 - 5:05 pm UTC

Tom,

DELTA_T is the table in 'Cursor for Delta Table Loop' in the original posting.

Insert All

Raghu, March 24, 2003 - 5:37 pm UTC

So the syntax will be more like:

1. Generate the sequence values before you use INSERT ALL, so you can use them (Temp_id).

2. INSERT ALL
     INTO dim_1 (c1, c2, c3) VALUES (c1, Dc2, Dc3, ..)
     INTO dim_2 (ci1, ci2)   VALUES (ci1, Dci2, ..)
   SELECT t.c1, t.ci1, ...
     FROM Delta D
    WHERE D.Pk_Column = t.Pk_Column

3. In the documentation (9i), sequences cannot be used in the subquery.

Tom Kyte
March 24, 2003 - 8:03 pm UTC

so, they can be used in the values clause?

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:6915127515933
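
One commonly used workaround -- sketched here as an assumption about what the linked thread covers, not a quote from it -- is to wrap the sequence in a PL/SQL function and call that function in the subquery (all names are invented):

create sequence temp_id_seq;

create or replace function next_temp_id return number
as
    l_val number;
begin
    select temp_id_seq.nextval into l_val from dual;
    return l_val;
end;
/

insert all
  into dim_1 ( c1, c2 )   values ( temp_id, c2 )
  into dim_2 ( ci1, ci2 ) values ( temp_id, ci2 )
select next_temp_id() temp_id, d.c2, d.ci2
  from delta d;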

Multi-table insert with sequence

A reader, March 24, 2003 - 5:59 pm UTC

Tom already has a solution for this. Look at

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:6915127515933

How about multi DML in Bulk process?

Chen, March 25, 2003 - 5:20 am UTC

Hi Tom,
Cool solution. But what if I must do some other DML in the bulk process, and I want to do something like the following code? How can I code this with the bulk method?
begin
    for i in somecur loop
        begin
            savepoint a;
            insert into someTable1 (a, b) values (i.a, i.b);
            insert into someTable2 (c, d) values (i.c, i.d);
            insert into someTable3 (e, f)
                select e, f from sourcetable where col1 = i.a and col2 = i.b;
        exception
            when others then
                rollback to a;
                insert into errortable (a, b, c, d) values (i.a, i.b, i.c, i.d);
        end;
    end loop;
    commit;
end;


Another way

Pete Scott, March 25, 2003 - 5:53 am UTC

A way my team uses on our DWH systems is to define our data load table with all of the required business logic (e.g. a product must have a valid product number).
Create an exceptions table.
Then in our batch load:
Disable the constraints.
Populate the load table.
Re-enable the constraints with an 'EXCEPTIONS INTO' clause.
Process the exception rows (delete / report / sideline or whatever - we park ours).

That's it, data load done.
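
As a minimal sketch of the constraint re-enable step described above (table and constraint names are invented; the EXCEPTIONS table itself is normally created with the utlexcpt.sql script shipped with the database):

-- constraints are disabled while the load table is populated
alter table load_table disable constraint load_table_product_chk;

-- ... populate load_table (insert /*+ append */ ... select, SQL*Loader, etc.) ...

-- re-enabling fails if there are violations, but the ROWIDs of the
-- offending rows are written to the EXCEPTIONS table
alter table load_table enable constraint load_table_product_chk
    exceptions into exceptions;

-- "park", report or delete the offending rows
select l.*
  from load_table l, exceptions e
 where l.rowid = e.row_id;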


Pl/SQL ETL - Bulk MERGE

Georg, March 25, 2003 - 9:26 am UTC

Hi Tom,

what I meant with the MERGE: is it possible to use such a bulk mechanism for the INSERT and UPDATE parts of a MERGE command?

Thanks in advance !

Regards,
Georg

Tom Kyte
March 25, 2003 - 9:55 am UTC

there would be no need??

you take source table and merge with target table. done.
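
For reference, a minimal sketch of exactly that (table and column names invented):

merge into target_t t
using ( select id, val from source_t ) s
   on ( t.id = s.id )
 when matched then
   update set t.val = s.val
 when not matched then
   insert ( id, val ) values ( s.id, s.val );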

Bulk processing in DW

Kiro, March 25, 2003 - 10:31 am UTC

Hi Tom
Could you comment on the idea of using bulk processing with the SAVE EXCEPTIONS option to load statistical tables?

T1 (with a PK) will be loaded with data every night (approx. 60,000 to 100,000 records) using bulk processing from PRE_TABLE. T1 is supposed to grow up to 120 GB, and the latest data will be deleted. T1 has 12 partitions (by month).
The OS is Win2000 AS, 4 CPUs, 8 GB.
What do you think about the idea?

I was thinking of insert /*+ APPEND */ ... select ....., but maybe this way is better!?
Thanks
kiro.

Tom Kyte
March 25, 2003 - 12:02 pm UTC

if you can do it in a single SQL statement -- DO IT THAT WAY.

Never write procedural code if you don't have to.

If you can do insert /*+ append */ as select, that would be the way to go.
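
For example, a rough sketch of that single-statement, direct-path approach (names borrowed loosely from the question above):

insert /*+ append */ into t1
select *
  from pre_table;

-- after a direct-path (APPEND) insert the session must commit
-- before it can read the table again
commit;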

Cursor with columns from different tables.

kashif, March 25, 2003 - 11:56 am UTC

Hi Tom,

In your response to the original poster, you explain how to use BULK processing for data uploads. Frequently, though, I come across situations where the cursor I need to use has columns selected from multiple tables. In those cases I am unable to utilize the BULK processing provided, since the collection into which the data is loaded cannot be based on a cursor rowtype variable. Here's an example of what I'm talking about (some of the errors are unrelated, so please ignore them)...

create or replace
Procedure TESTLOAD
IS
cursor c is select * from table1, table2, dual;
type array is table of c%rowtype index by binary_integer;
data array;
errors NUMBER;
dml_errors EXCEPTION;
l_cnt number := 0;
PRAGMA exception_init(dml_errors, -24381);
begin
open c;
loop
fetch c BULK COLLECT INTO data LIMIT 100;
begin
FORALL i IN 1 .. data.count
insert into t2 values data(i);
end;
end loop;
EXCEPTION
WHEN others THEN
null;
END; -- Procedure TESTLOAD
/
sho err

13/7
PL/SQL: SQL Statement ignored

13/33
PLS-00597: expression 'DATA' in the INTO list is of wrong type <<<--- This is the error I am talking about.

15/19
PL/SQL: Statement ignored

16/15
PL/SQL: SQL Statement ignored

16/15
PLS-00435: DML statement without BULK In-BIND cannot be used inside FORALL

16/27
PLS-00201: identifier 'T2' must be declared

If you have any suggestions on workarounds, I would appreciate it. Thanks.

Kashif

Tom Kyte
March 25, 2003 - 12:09 pm UTC

ops$tkyte@ORA920> create or replace view v
  2  as
  3  select * from emp, dual;

View created.

ops$tkyte@ORA920> declare
  2       cursor c is select * from v;
  3       type array is table of v%rowtype index by binary_integer;
  4       data   array;
  5  begin
  6        open c;
  7        fetch c BULK COLLECT INTO data LIMIT 100;
  8  END;
  9  /

PL/SQL procedure successfully completed. 

Restrictions on Bulk Collect

Craig, March 25, 2003 - 12:39 pm UTC

Problem #1 is that I'm having trouble executing your block with the bulk processing above (actually, none of your blocks above will run... odd). I get an ORA-06550/ORA-00906 "missing left parenthesis" on the insert statement.

Problem #2 is that according to the link below, you cannot bulk fetch from a cursor into a collection of records. But isn't that what your example above is doing?
http://download-west.oracle.com/docs/cd/B10501_01/appdev.920/a96624/05_colls.htm#28396

Please lift the veil of my ignorance! What am I missing here?

Tom Kyte
March 25, 2003 - 8:14 pm UTC

Probably you are using 8i -- refer to the 8i link I have there.... I'm using quite a few new 9i features, like insert of a record and BULK_EXCEPTIONS....


#2 where does it say that? in 8i that would be true. In 9i, you can.

Nope, 9i here

Craig, March 26, 2003 - 9:32 am UTC

Sorry about that link, Tom. The docs I was reading were 8i docs, but the link I gave you was for 9i. Clearly, the link I sent you states no such restriction. Chalk it up to sloppy browsing, I guess.

We're running 9.0.1.4.0 here. Are those features you mentioned available on all 9i releases?



Tom Kyte
March 26, 2003 - 9:40 am UTC

The bulk collect into records is new in 9iR2.... sorry about that.

Validation with regards to plsql bulk binds

Errick, March 26, 2003 - 10:22 am UTC

This example is very good when looking at very simple ETL solutions, but I see a case immediately where it may not be so quick. Typically when loading data (such as customers) I frequently examine whether a customer exists in the table I am loading the data into, and determine whether it should be an update or an insert (based on existence). In this case I would create two different PL/SQL tables indexed by binary integers: one for update and one for insert. However, if I have two "customers" that are the same entity (i.e. same SSN), since the object is not yet in the table, I would not be able to determine that one is for update and one is for insert. I could scan through the entire set of records to determine duplicates and order of DML, but there is a much simpler solution: using the bulk bind API from Java, with its HashMap API. Assuming that I load the source into the database rather than the bytecode, this would be much faster for this rather typical ETL example. Just thought I would chime in on what we thought when we read the article.

Tom Kyte
March 26, 2003 - 3:56 pm UTC

well, have you benchmarked any of this (now that would be interesting) and why wouldn't you just

a) use merge whenever possible
b) bulk update followed by bulk insert of leftovers otherwise? (a rough sketch of this appears after the example below)

ops$tkyte@ORA920> create table emp
  2  as
  3  select *
  4    from scott.emp
  5   where mod(empno,2) = 0;

Table created.

ops$tkyte@ORA920> update emp set sal = 0;

10 rows updated.

ops$tkyte@ORA920> commit;

Commit complete.

ops$tkyte@ORA920>
ops$tkyte@ORA920> create type emp_rec as object (
  2   EMPNO     NUMBER(4),
  3   ENAME     VARCHAR2(10),
  4   JOB       VARCHAR2(9),
  5   MGR       NUMBER(4),
  6   HIREDATE  DATE,
  7   SAL       NUMBER(7,2),
  8   COMM      NUMBER(7,2),
  9   DEPTNO    NUMBER(2) )
 10  /

Type created.

ops$tkyte@ORA920> create type emp_array as table of emp_rec
  2  /

Type created.

ops$tkyte@ORA920> select ename, sal from emp;

ENAME             SAL
---------- ----------
JONES               0
MARTIN              0
BLAKE               0
CLARK               0
SCOTT               0
TURNER              0
ADAMS               0
JAMES               0
FORD                0
MILLER              0

10 rows selected.

ops$tkyte@ORA920> declare
  2          type    array is table of emp%rowtype index by binary_integer;
  3          l_rec   array;
  4          l_data  emp_array;
  5  begin
  6          select empno,ename,job,mgr,hiredate,sal,comm,deptno bulk collect
  7            into l_rec
  8            from scott.emp;
  9
 10          l_data := emp_array();
 11          l_data.extend( l_rec.count );
 12          for i in 1 .. l_rec.count
 13          loop
 14                  -- process record
 15                  l_data(i) := emp_rec( l_rec(i).empno, l_rec(i).ename, l_rec(i).job,
 16                                        l_rec(i).mgr, l_rec(i).hiredate, l_rec(i).sal,
 17                                                            l_rec(i).comm, l_rec(i).deptno );
 18          end loop;
 19
 20          merge into EMP e1
 21          using (select * from TABLE(cast(l_data as emp_array)) ) e2
 22          on ( e2.empno = e1.empno )
 23          when matched then
 24          update set e1.sal = e2.sal
 25          when not matched then
 26          insert ( empno, ename, job, mgr, hiredate, sal, comm, deptno )
 27          values ( e2.empno, e2.ename, e2.job, e2.mgr, e2.hiredate, e2.sal, e2.comm, e2.deptno );
 28  end;
 29  /

PL/SQL procedure successfully completed.

ops$tkyte@ORA920> select ename, sal from emp;

ENAME             SAL
---------- ----------
JONES            2975
MARTIN           1250
BLAKE            2850
CLARK            2450
SCOTT            3000
TURNER           1500
ADAMS            1100
JAMES             950
FORD             3000
MILLER           1300
KING             5000
WARD             1250
ALLEN            1600
SMITH             800

14 rows selected.
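
And, for completeness, a rough sketch of option (b) above -- bulk update first, then bulk insert of the leftovers -- restricted to the two columns that matter in this demo. This is an illustration only, not code from the original answer:

declare
    type num_array is table of emp.empno%type index by binary_integer;
    type sal_array is table of emp.sal%type   index by binary_integer;
    l_empno     num_array;
    l_sal       sal_array;
    l_new_empno num_array;
    l_new_sal   sal_array;
begin
    select empno, sal bulk collect into l_empno, l_sal from scott.emp;

    -- bulk update first ...
    forall i in 1 .. l_empno.count
        update emp set sal = l_sal(i) where empno = l_empno(i);

    -- ... gather the rows the update did not touch (the "leftovers") ...
    for i in 1 .. l_empno.count loop
        if sql%bulk_rowcount(i) = 0 then
            l_new_empno( l_new_empno.count + 1 ) := l_empno(i);
            l_new_sal  ( l_new_sal.count   + 1 ) := l_sal(i);
        end if;
    end loop;

    -- ... and bulk insert those leftovers
    if l_new_empno.count > 0 then
        forall i in 1 .. l_new_empno.count
            insert into emp ( empno, sal )
            values ( l_new_empno(i), l_new_sal(i) );
    end if;
end;
/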
 

Why use collection with merge?

John, April 21, 2003 - 6:47 pm UTC

Hi Tom,

Not sure about this emp example. Why would you first load the scott.emp data into a collection of records and then merge that collection with the local emp table, when you can just merge scott.emp into the local emp table directly? A single SQL statement is better than lots of procedural code -- you often say so, no?

Tom Kyte
April 21, 2003 - 10:51 pm UTC

yes, agreed.

but -- the prior thing seemed to be asking for "I need some procedural processing in between"; that is what the procedural loop was for.

-- External File and End of Line --

Oracle User, June 02, 2003 - 6:52 am UTC

Hi Tom ,

I have another problem with ETL.
I have a text file in the format below, and its end of line falls at different positions.

<<TEST.TXT >>

3010401288400273472004 0000000000000000000000000000
3010401288400273477003 0000000000000000000000000000
3010401288400273478001 000009450 TNSM07ZZ0
3010401288400273482003 000015000 TBSM11ZZ0
3010401288400273527005 000011080 RPZZ34BZ4
3010401288400273529001 000029650 XPGC79ZZ0
3010401288400273534001 000016142 TNMG38ZZ0
3010401288400273567001 000008478 TNMG38ZZ0

I have external table SQL like this:

<< EXT. 1 >>

CREATE TABLE Ext_Gain_TEST (
GAIN_ORG varchar2 (3 ),
GAIN_LOGO varchar2 (3 ),
GAIN_ACCT_NMBR varchar2(16 ),
GAIN_INCOME number (10 ),
GAIN_USR_ACCT_NBR VARCHAR2(19),
GAIN_REF_NBR VARCHAR2(10)
)
ORGANIZATION EXTERNAL
(
TYPE oracle_loader
DEFAULT DIRECTORY dwh_data_dir
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE
BADFILE dwh_data_dir:'bad_Gain_ext'
LOGFILE dwh_data_dir:'log_Gain_ext'
FIELDS (
GAIN_ORG position( 1 : 3 ) char ,
GAIN_LOGO position( 4 : 6 ) char ,
GAIN_ACCT_NMBR position( 7 : 22 ) char ,
GAIN_INCOME position( 24 : 32 ) decimal external ,
GAIN_USR_ACCT_NBR position( 33 : 51 ) char NULLIF GAIN_USR_ACCT_NBR = '0000000000000000000',
GAIN_REF_NBR position( 52 : 60 ) char
)
)
LOCATION ('TEST.TXT')
) REJECT LIMIT UNLIMITED;

From this SQL, it'll return only 6 records:

GAI GAI GAIN_ACCT_NMBR GAIN_INCOME GAIN_USR_ACCT_NBR GAIN_REF_N
--- --- ---------------- --------------- ------------------- ----------
301 040 1288400273478001 9450 TNSM07ZZ0
301 040 1288400273482003 15000 TBSM11ZZ0
301 040 1288400273527005 11080 RPZZ34BZ4
301 040 1288400273529001 29650 XPGC79ZZ0
301 040 1288400273534001 16142 TNMG38ZZ0
301 040 1288400273567001 8478 TNMG38ZZ0

6 rows selected.

This shows that I'll lose the 2 records whose lines end right after GAIN_USR_ACCT_NBR.

But if I use this other script:

<< Ext.2 >>

CREATE TABLE Ext_Gain_TEST2 (
GAIN_ORG varchar2 (3 ),
GAIN_LOGO varchar2 (3 ),
GAIN_ACCT_NMBR varchar2(16 ),
GAIN_INCOME number (10 ),
GAIN_USR_ACCT_NBR VARCHAR2(19)
)
ORGANIZATION EXTERNAL
(
TYPE oracle_loader
DEFAULT DIRECTORY dwh_data_dir
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE
BADFILE dwh_data_dir:'bad_Gain_ext'
LOGFILE dwh_data_dir:'log_Gain_ext'
FIELDS (
GAIN_ORG position( 1 : 3 ) char ,
GAIN_LOGO position( 4 : 6 ) char ,
GAIN_ACCT_NMBR position( 7 : 22 ) char ,
GAIN_INCOME position( 24 : 32 ) decimal external ,
GAIN_USR_ACCT_NBR position( 33 : 51 ) char NULLIF GAIN_USR_ACCT_NBR = '0000000000000000000'
)
)
LOCATION ('TEST.TXT')
) REJECT LIMIT UNLIMITED;

It'll return all 8 records, but without GAIN_REF_NBR:

GAI GAI GAIN_ACCT_NMBR GAIN_INCOME GAIN_USR_ACCT_NBR
--- --- ---------------- --------------- -------------------
301 040 1288400273472004 0
301 040 1288400273477003 0
301 040 1288400273478001 9450
301 040 1288400273482003 15000
301 040 1288400273527005 11080
301 040 1288400273529001 29650
301 040 1288400273534001 16142
301 040 1288400273567001 8478

8 rows selected.

How can I solve this problem?

PS. With SQL*Loader previously, there was no problem with the end of line.



Tom Kyte
June 02, 2003 - 8:02 am UTC

see line 17:

ops$tkyte@ORA920> CREATE TABLE Ext_Gain_TEST (
  2  GAIN_ORG                    varchar2(3),
  3  GAIN_LOGO                   varchar2(3),
  4  GAIN_ACCT_NMBR              varchar2(16),
  5  GAIN_INCOME                 number(10),
  6  GAIN_USR_ACCT_NBR           VARCHAR2(19),
  7  GAIN_REF_NBR                VARCHAR2(10)
  8  ) ORGANIZATION EXTERNAL
  9  (
 10     TYPE oracle_loader
 11     DEFAULT DIRECTORY dwh_data_dir
 12     ACCESS PARAMETERS
 13     (
 14        RECORDS DELIMITED BY NEWLINE
 15        BADFILE dwh_data_dir:'bad_Gain_ext'
 16        LOGFILE dwh_data_dir:'log_Gain_ext'
 17        FIELDS <b>missing field values are null (</b>
 18  GAIN_ORG           position( 1: 3) ,
 19  GAIN_LOGO          position( 4: 6) ,
 20  GAIN_ACCT_NMBR     position( 7:22) ,
 21  GAIN_INCOME        position(24:32) ,
 22  GAIN_USR_ACCT_NBR  position(33:51) NULLIF GAIN_USR_ACCT_NBR = '0000000000000000000',
 23  GAIN_REF_NBR       position(52:60)
 24                )
 25      )
 26  LOCATION ('test.txt')
 27  ) REJECT LIMIT UNLIMITED;

Table created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> select * from ext_gain_test;

GAI GAI GAIN_ACCT_NMBR   GAIN_INCOME GAIN_USR_ACCT_NBR   GAIN_REF_N
--- --- ---------------- ----------- ------------------- ----------
301 040 1288400273472004           0
301 040 1288400273477003           0
301 040 1288400273478001        9450                     TNSM07ZZ0
301 040 1288400273482003       15000                     TBSM11ZZ0
301 040 1288400273527005       11080                     RPZZ34BZ4
301 040 1288400273529001       29650                     XPGC79ZZ0
301 040 1288400273534001       16142                     TNMG38ZZ0
301 040 1288400273567001        8478                     TNMG38ZZ0

8 rows selected. 

Thanks so much

Ora, June 02, 2003 - 11:01 pm UTC

for your consultation ...

display data.empno

Arjun, June 26, 2003 - 2:32 pm UTC

hi: tom

is it possible to display data.empno or data.sal during FORALL processing, besides in the exception block?

12 loop
13 fetch c BULK COLLECT INTO data LIMIT 100;
14 begin
15 FORALL i IN 1 .. data.count SAVE EXCEPTIONS
16 insert into t2 values data(i);
### display here / pass to function ###
17 EXCEPTION
18 WHEN dml_errors THEN
19 errors := SQL%BULK_EXCEPTIONS.COUNT;
20 l_cnt := l_cnt + errors;
21 FOR i IN 1..errors LOOP
22 dbms_output.put_line
23 ('Error occurred during iteration ' ||
24 SQL%BULK_EXCEPTIONS(i).ERROR_INDEX ||
25 ' Oracle error is ' ||
26 SQL%BULK_EXCEPTIONS(i).ERROR_CODE );
27 end loop;
28 end;

On line 24 I am printing out the SUBSCRIPT into DATA of the offending record.

I suppose : a record type needs to be declared

type empnum is table of emp.empno%type;
empno_array empnum;

open...
fetch c bulk collect into empno_array
...

Tom Kyte
June 26, 2003 - 2:47 pm UTC

you have data which is a record...

you have the subscript

so.....


data( sql%bulk_exceptions(i).error_index ).empno would be the data no?

22 ('Error occurred during iteration ' ||
23 SQL%BULK_EXCEPTIONS(i).ERROR_INDEX ||
24 ' Oracle error is ' ||
25 SQL%BULK_EXCEPTIONS(i).ERROR_CODE );
26 dbms_output.put_line
27 ('bad data = ' || data( sql%bulk_exceptions(i).error_index ).object_id );
28 end loop;
29 exit;
30 end;
31 exit when c%notfound;
32 END LOOP;
33 close c;
34 dbms_output.put_line( l_cnt || ' total errors' );
35 end;
36 /
Error occurred during iteration 45 Oracle error is 2290
bad data = 33019




data.empno figured out

arjun, June 26, 2003 - 2:36 pm UTC

sorry tom :

this way it works :

for i in 1.. data.count
loop
dbms_output.put_line(to_char(data(i).empno));
...
pass_to_function_or_proc(data(i).empno))
...
end loop;



PLSQL/ ETL

RATAN, September 17, 2003 - 1:51 pm UTC

Hi Tom,
I have the below procedure to do an insert and then updates. I have to do the same for different extractions. Could you please show me an easy and efficient way to do it?
Thank you very much.
procedure get_mtd_sales_amt( p_cust_no dwadmin.tb_sdw000_ge_gls_tmp.alt_customer_number%type,
p_bus_tier1 varchar2,
p_fm number,
p_fy number,
p_run_dt date)
is
begin

get_fiscal_cal_details('C', p_run_dt, v_fw, v_fq, v_fh, v_fm, v_fy); -- current

if v_fq = 1 then
v_fq_s := 1;
v_fq_e := 3;
elsif v_fq = 2 then
v_fq_s := 4;
v_fq_e := 6;
elsif v_fq = 3 then
v_fq_s := 7;
v_fq_e := 9;
elsif v_fq = 4 then
v_fq_s := 10;
v_fq_e := 12;
end if;

if v_fh = 1 then
v_fh_s := 1;
v_fh_e := 6;
else
v_fh_s := 7;
v_fh_e := 12;
end if;

insert into tb_sdw087_rpt_amer_wkly_sales
( sales_regn_cd,
sales_regn_nam,
cust_num, cust_nam,
bus_team_tier1_dsc,
product_line_dsc,
curr_fiscal_wk_num,
curr_fiscal_mth_num,
curr_fiscal_qr_num,
curr_fiscal_hy_num,
curr_fiscal_yr_num,
cy_mtd_sales_amt,
cy_qtd_sales_amt,
cy_htd_sales_amt,
cy_ytd_sales_amt,
py_mtd_sales_amt,
py_qtd_sales_amt,
py_htd_sales_amt,
py_ytd_sales_amt
)
select sales_regn_cd, sales_regn_nam,
alt_customer_number,
alt_customer_name,
p_bus_tier1,
'PLC',
v_fw,
v_fm,
v_fq,
v_fh,
v_fy,
sum(decode(fiscal_year,extract(year from sysdate),nvl(daily_order_net_amount,0),0)),
0,
0,
0,
sum(decode(fiscal_year,extract(year from sysdate),0,nvl(daily_order_net_amount,0),0)),
0,
0,
0
from v_america_sales
where alt_customer_number = p_cust_no
and bus_team_desc_tier1 = p_bus_tier1
and bus_team_desc_tier2 = 'PLC'
and (
bus_team_desc_tier3 = 'Motion'
OR
bus_team_desc_tier3 IS NULL
)
and sales_team_tier2 = 'AMERICA SALES'
and level_elim_flag_tier0 = 'N'
and bus_team_elim_flag = 'N'
and fiscal_month = p_fm
and fiscal_year in (p_fy, (p_fy - 1))
group by alt_customer_number, alt_customer_name,
sales_regn_cd, sales_regn_nam;

update tb_sdw087_rpt_amer_wkly_sales
set (cy_qtd_sales_amt, py_qtd_sales_amt) = (select
sum(decode(fiscal_year,extract(year from sysdate),nvl(daily_order_net_amount,0),0)) cy,
sum(decode(fiscal_year,extract(year from sysdate),0,nvl(daily_order_net_amount,0),0)) py
from v_america_sales
where alt_customer_number = p_cust_no
and bus_team_desc_tier1 = p_bus_tier1
and bus_team_desc_tier2 = 'PLC'
and (
bus_team_desc_tier3 = 'Motion'
OR
bus_team_desc_tier3 IS NULL
)
and sales_team_tier2 = 'AMERICA SALES'
and level_elim_flag_tier0 = 'N'
and bus_team_elim_flag = 'N'
and fiscal_month between v_fq_s and v_fq_e
and fiscal_year in (p_fy, (p_fy - 1))
group by alt_customer_number, alt_customer_name,
sales_regn_cd, sales_regn_nam)
where cust_num = p_cust_no;

update tb_sdw087_rpt_amer_wkly_sales
set (cy_htd_sales_amt, py_htd_sales_amt) = (select
sum(decode(fiscal_year,extract(year from sysdate),nvl(daily_order_net_amount,0),0)) cy,
sum(decode(fiscal_year,extract(year from sysdate),0,nvl(daily_order_net_amount,0),0)) py
from v_america_sales
where alt_customer_number = p_cust_no
and bus_team_desc_tier1 = p_bus_tier1
and bus_team_desc_tier2 = 'PLC'
and (
bus_team_desc_tier3 = 'Motion'
OR
bus_team_desc_tier3 IS NULL
)
and sales_team_tier2 = 'AMERICA SALES'
and level_elim_flag_tier0 = 'N'
and bus_team_elim_flag = 'N'
and fiscal_month between v_fh_s and v_fh_e
and fiscal_year in (p_fy, (p_fy - 1))
group by alt_customer_number, alt_customer_name,
sales_regn_cd, sales_regn_nam)
where cust_num = p_cust_no;

update tb_sdw087_rpt_amer_wkly_sales
set (cy_ytd_sales_amt, py_ytd_sales_amt) = (select
sum(decode(fiscal_year,extract(year from sysdate),nvl(daily_order_net_amount,0),0)) cy,
sum(decode(fiscal_year,extract(year from sysdate),0,nvl(daily_order_net_amount,0),0)) py
from v_america_sales
where alt_customer_number = p_cust_no
and bus_team_desc_tier1 = p_bus_tier1
and bus_team_desc_tier2 = 'PLC'
and (
bus_team_desc_tier3 = 'Motion'
OR
bus_team_desc_tier3 IS NULL
)
and sales_team_tier2 = 'AMERICA SALES'
and level_elim_flag_tier0 = 'N'
and bus_team_elim_flag = 'N'
and fiscal_year in (p_fy, (p_fy - 1))
group by alt_customer_number, alt_customer_name,
sales_regn_cd, sales_regn_nam)
where cust_num = p_cust_no;
end get_mtd_sales_amt;


Tom Kyte
September 17, 2003 - 5:32 pm UTC

please, not the proper place for a new question -- that and "hey, rewrite my code to go faster" isn't in the spirit of what I do here.

ask me "how" to do something. ask me "how" something works.



Bulk processing

santhanam, September 18, 2003 - 2:30 am UTC

Good example

Bulk processing

santhanam, September 18, 2003 - 2:30 am UTC

Good example

Mixed Header Detail loading

Michael McCabe, May 14, 2004 - 12:00 pm UTC

Hi,

I have a file similar to the following:-

Two types of records are contained - header and detail. A header is denoted by the first character being 0, a detail by a first character of 1.

The details below a header are associated with that header.

003310304000J019
1BBH334000000000067484C310304
1BKN580000000000516281C310304
1EFJ940000000000238847C310304
1EFJ955000000000007364C310304
1EFJ990000000000027229C310304
1EFJ992000000000177343C310304
1EFM955000000000116463C310304
1HDP910000000000079386C310304
1HDP922000000000022821C310304
1HDP930000000222476853C310304
1BBH333000000000043688D310304
1BBH335000000000094966D310304
1EFJ941000000000062972D310304
1EFM930000000000012124D310304
1EFM940000000000215912D310304
1EFM941000000000004001D310304
1HDP912000000000081245D310304
1HDP940000000223215163D310304
012311203000K029
1HDP922000000001351670C310304
1HGV524000000001218811C310304
1HGV524000000000069179C310304
1HGV524000000000121643C310304
1HGV524000000000028596C310304
1HGV524000000000462689C310304
1HGV524000000010482884C310304
1HGV524000000000518511C310304
1HGV524000000677674442C310304
1BBH333000000000147903D310304
1BKN580000000000111027D310304
1HDP910000000000170890D310304
1HGV524000000000034997D310304
1HGV524000000000008431D310304
1HGV524000000691455177D310304

Ideally, I would like to load this into a table
which, for each detail record, will contain the header
and detail.

i.e. using the first three lines of sample data

003310304000J019
1BBH334000000000067484C310304
1BKN580000000000516281C310304

will get loaded into table as:-

003310304000J0191BBH334000000000067484C310304
003310304000J0191BKN580000000000516281C310304
etc.

Can anyone suggest the best way - either a SQL*Loader load or a CREATE TABLE ... ORGANIZATION EXTERNAL?

Any help appreciated!

Michael



Tom Kyte
May 15, 2004 - 10:50 am UTC

ops$tkyte@ORA9IR2> create or replace directory data_dir as '/home/tkyte/'
  2  /
 
Directory created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table et
  2  (txt varchar2(80)
  3  )
  4  ORGANIZATION EXTERNAL
  5  ( type oracle_loader
  6    default directory data_dir
  7    access parameters
  8    (
  9       records delimited by newline
 10       fields
 11       missing field values are null
 12       ( txt position(1:4000) ) )
 13    location ('xxx.dat')
 14  )
 15  /
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select c1 || c2
  2    from (
  3  select txt c2,
  4         max( decode(substr(txt,1,1),0, txt ) ) over (order by r)  c1
  5    from (select txt, rownum r from et)
  6         )
  7   where substr(c2,1,1) = 1
  8  /
 
C1||C2
------------------------------------------------------------------------
003310304000J0191BBH334000000000067484C310304
003310304000J0191BKN580000000000516281C310304
003310304000J0191EFJ940000000000238847C310304
003310304000J0191EFJ955000000000007364C310304
003310304000J0191EFJ990000000000027229C310304
003310304000J0191EFJ992000000000177343C310304
003310304000J0191EFM955000000000116463C310304
003310304000J0191HDP910000000000079386C310304
003310304000J0191HDP922000000000022821C310304
003310304000J0191HDP930000000222476853C310304
003310304000J0191BBH333000000000043688D310304
003310304000J0191BBH335000000000094966D310304
003310304000J0191EFJ941000000000062972D310304
003310304000J0191EFM930000000000012124D310304
003310304000J0191EFM940000000000215912D310304
003310304000J0191EFM941000000000004001D310304
003310304000J0191HDP912000000000081245D310304
003310304000J0191HDP940000000223215163D310304
012311203000K0291HDP922000000001351670C310304
012311203000K0291HGV524000000001218811C310304
012311203000K0291HGV524000000000069179C310304
012311203000K0291HGV524000000000121643C310304
012311203000K0291HGV524000000000028596C310304
012311203000K0291HGV524000000000462689C310304
012311203000K0291HGV524000000010482884C310304
012311203000K0291HGV524000000000518511C310304
012311203000K0291HGV524000000677674442C310304
012311203000K0291BBH333000000000147903D310304
012311203000K0291BKN580000000000111027D310304
012311203000K0291HDP910000000000170890D310304
012311203000K0291HGV524000000000034997D310304
012311203000K0291HGV524000000000008431D310304
012311203000K0291HGV524000000691455177D310304
 
33 rows selected.
 

SQL%BULK_EXCEPTIONS(i).ERROR_CODE

A reader, July 15, 2004 - 4:02 pm UTC

The error code contains the Oracle error code, but in some cases, the error code might refer to some %s substitutions that contain specific table/column/constraint names.

For example,

exec dbms_output.put_line(sqlerrm(-2290))
will give me

ORA-02290: check constraint (.) violated

This is not very useful, I need to know the actual constraint name that was violated.

Why doesn't the bulk_exceptions collection store the sqlcode and corresponding sqlerrm also?

Is there a way I can get the actual error message?

Thanks

Tom Kyte
July 15, 2004 - 6:34 pm UTC

no, there is not.

I'm missing something when handling exceptions.

Steve, July 15, 2004 - 11:24 pm UTC

The gist is that we have a table with a VARCHAR2 column that has (mostly) numeric values - we want to move those rows with numeric values in them into another table, and put those rows with non-numeric values in a 3rd table.  Rather than going row by row, I thought I'd speed things up and use bulk operations.

I started by changing 100 rows in OBJ, and that worked - but only if I had an ORDER BY when I defined my cursor.  Remove the ORDER BY from the cursor, and it doesn't work.  Updating the object_id2 column using mod(object_id,5000) doesn't work either, with or without an order by.

Am I missing something here?  The output from DBMS_OUTPUT looks like the procedure is picking up exceptions, but when I query OBJ3, it doesn't have any of the proper rows in it.

Any insight would be greatly appreciated.

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production
With the Partitioning option
JServer Release 9.2.0.4.0 - Production

O/S is AIX 5.2

CREATE OR REPLACE PROCEDURE FIND_INVALID_NUMBERS3(rows_to_fetch_in IN NUMBER := 200)
IS
TYPE ARRAY IS TABLE OF OBJ%ROWTYPE;
data ARRAY;
CURSOR cur IS SELECT * FROM OBJ ORDER BY OBJECT_ID;
bulk_errors EXCEPTION;
PRAGMA EXCEPTION_INIT(bulk_errors, -24381);
BEGIN
   OPEN cur;
   LOOP
      FETCH cur BULK COLLECT INTO data LIMIT rows_to_fetch_in;
      BEGIN
         FORALL indx IN 1..data.COUNT SAVE EXCEPTIONS
            INSERT INTO OBJ2 VALUES data(indx);
         EXCEPTION
            WHEN bulk_errors THEN 
               FOR indx2 IN 1..SQL%BULK_EXCEPTIONS.COUNT
               LOOP
                  DBMS_OUTPUT.PUT_LINE('Invalid record: object_id = '||
                  data(SQL%BULK_EXCEPTIONS(indx2).ERROR_INDEX).object_id||
                  ' object_id2 = '||data(SQL%BULK_EXCEPTIONS(indx2).ERROR_INDEX).object_id2);
                  INSERT INTO OBJ3 VALUES data(indx2);
               END LOOP;
      END;
      EXIT WHEN cur%NOTFOUND;
   END LOOP;
COMMIT;
END FIND_INVALID_NUMBERS3;
/

drop table obj;
drop table obj2;
drop table obj3;
create table obj
nologging as
select * from dba_objects;
alter table obj
add object_id2 varchar2(20);
update obj
set object_id2 = TO_CHAR(object_id);
update obj
set object_id2 = '94271XYZ'
where mod(object_id,5000) = 0;  -- using object_id <= 100 works, but only
                                -- if an ORDER BY is used in the cursor
commit;
create table obj2
as
select * from obj where 1=2;
create table obj3
as
select * from obj where 1=2;
alter table obj2
modify object_id2 number;
set serveroutput on size 999999;
alter procedure FIND_INVALID_NUMBERS3 compile;
exec FIND_INVALID_NUMBERS3;

SQL> exec FIND_INVALID_NUMBERS3;
Invalid record: object_id = 5000 object_id2 = 94271XYZ
Invalid record: object_id = 10000 object_id2 = 94271XYZ
Invalid record: object_id = 15000 object_id2 = 94271XYZ
Invalid record: object_id = 20000 object_id2 = 94271XYZ
Invalid record: object_id = 25000 object_id2 = 94271XYZ
Invalid record: object_id = 35000 object_id2 = 94271XYZ
Invalid record: object_id = 45000 object_id2 = 94271XYZ
Invalid record: object_id = 50000 object_id2 = 94271XYZ

PL/SQL procedure successfully completed.

SQL> select object_id from obj3 order by 1;

OBJECT_ID
---------
     4861
     9904
    14904
    19904
    24907
    34653
    43090
    48802

8 rows selected.

 

Tom Kyte
July 16, 2004 - 10:56 am UTC

not sure what I'm supposed to see or what "not works" means in this context.

I ran your code with and without an order by and it works for me. obj3 was populated.


Really weird

A reader, July 16, 2004 - 10:12 am UTC

In my bulk exception handler, when I do

declare
errors exception;
PRAGMA exception_init(errors, -24381);
l_error_code pls_integer;
begin
forall i in 1..tab.count save exceptions
insert into ...
exception
when errors then
for j in 1..sql%bulk_exceptions.count loop
l_error_code := sql%bulk_exceptions(j).error_code;
insert into error_table values (l_error_code,sqlerrm(l_error_code));
end;

When I run this, I get an error

ERROR at line 1:
ORA-00917: missing comma
ORA-06512: at "...", line 74
ORA-24381: error(s) in array DML
ORA-06512: at line 1

When I change this to
l_error_text := sqlerrm(l_error_code);
insert into error_table (l_error_code,l_error_text);

it works fine.

Questions:

1. Why is this? What's the difference between calling sqlerrm() directly in the values clause vs. storing it in a local variable first?

2. When I look at the error_table, I see

-1400: non-ORACLE exception

This is wrong. The sqlerrm for -1400 is
ORA-01400: cannot insert NULL into ()

What is going on here?

Thanks

Tom Kyte
July 16, 2004 - 1:35 pm UTC

1) sqlerrm is documented as not being callable from SQL.  You have to make the assignment.

2) then you have a bug in your code someplace

ops$tkyte@ORA9IR2> exec dbms_output.put_line( sqlerrm( -1400 ) );
ORA-01400: cannot insert NULL into ()
 
PL/SQL procedure successfully completed.
 


if you assigned that string to a variable -- that is what would be put in there.  give me a "for example" -- a simple test case that I too can run. 

Follow-up

Steve, July 16, 2004 - 11:57 am UTC

The exception routine reports that there were 8 rows that couldn't be inserted into the OBJ2 table - these had OBJECT_ID values of 5000, 10000, 15000, 20000, 25000, 35000, 45000 and 50000. This is the expected result, given that I wanted rows with mod(object_id,5000) = 0 to fail. The exception routine showed that it encountered those rows, via the DBMS_OUTPUT results.

However, when I check the data that was inserted into OBJ3 (which also happens in the exception routine), these object_ids do not match the object_ids displayed via DBMS_OUTPUT - the rows that wound up in the OBJ3 table are not in error; they have valid values (in this case, strictly numeric) for the object_id2 column. The data I expect to show up in OBJ3 should have the same object_ids as those displayed via DBMS_OUTPUT, but that is not what's happening - the rows winding up in OBJ3 should be going into OBJ2, and the rows that should wind up in OBJ3 are not getting inserted. What am I missing?

Tom Kyte
July 16, 2004 - 2:03 pm UTC

you used one index (the right one) in dbms_output and a different (wrong one) in the insert:

24 declare
25 i number := sql%bulk_exceptions(indx2).error_index;
26 begin
27 INSERT INTO OBJ3 VALUES data(i);
28 end;


Really weird

A reader, July 16, 2004 - 1:51 pm UTC

Ah, I figured it out.

sql%bulk_exceptions(i).error_code, for some reason, toggles the sign of the error code i.e. it makes it positive

SQL> exec dbms_output.put_line(sqlerrm(1400))
-1400: non-ORACLE exception

SQL>  exec dbms_output.put_line(sqlerrm(-1400))
ORA-01400: cannot insert NULL into ()

So, I have to do 

l_error_code := sql%bulk_exceptions(i).error_code;
l_error_message := sqlerrm(-l_error_code);

Strange!

 

Does it work in the case of external table

A reader, July 28, 2004 - 1:08 pm UTC

Does it work if delta_t is an external table?


Tom Kyte
July 28, 2004 - 1:38 pm UTC

you can query and insert/merge/delete using an external table, yes.

you cannot insert/merge/delete/update an external table, but you can insert/merge/delete/update using an external table in a subquery.
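
A small sketch to make that concrete (the staging table name is invented; ext_gain_test is the external table defined earlier in this thread):

-- an external table can only be read, but it can feed any DML
insert /*+ append */ into staging_t
select *
  from ext_gain_test;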

Can you get a count

daniel, July 29, 2004 - 9:41 am UTC

Tom,

is there a way to capture how many rows were merged in the PL/SQL? When you run a command from SQL*Plus you get a message saying 1000 rows merged. I'm running the same SQL from a package and want to capture the value and insert it into a table of statistics.
I want the package to look something like this.
1. merge into
2. Capture the number of rows merged.
3. Insert into the stat table.

Thank you.

Tom Kyte
July 29, 2004 - 12:59 pm UTC

rows_processed := sql%rowcount;
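
In context, that might look like the following small sketch (table names invented):

declare
    l_rows number;
begin
    merge into target_t t
    using ( select id, val from source_t ) s
       on ( t.id = s.id )
     when matched then update set t.val = s.val
     when not matched then insert ( id, val ) values ( s.id, s.val );

    l_rows := sql%rowcount;    -- the "n rows merged" figure

    insert into load_stats ( run_date, rows_merged )
    values ( sysdate, l_rows );

    commit;
end;
/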


Clarification

Daniel, July 29, 2004 - 9:55 am UTC

I want to know how many rows were inserted and how many were updated. I know that I can use sql%rowcount to get the total.

Tom Kyte
July 29, 2004 - 12:59 pm UTC

you cannot get those counts, they are not reported.

the rows are merged, plain and simple.

how to ask HTML DB to do bulk fetching

Winston, August 24, 2004 - 7:11 pm UTC

I defined a SQL report in an HTML DB region. When I turned on SQL trace, I found out that it's not using bulk fetching:


********************************************************************************

select s.id,s.name,s.subject
from
twz_file s


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 6 0.00 0.00 0 16 0 5
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 8 0.00 0.00 0 16 0 5

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 65 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
5 TABLE ACCESS FULL DCOS_FILE (cr=16 r=0 w=0 time=458 us)

********************************************************************************

Is there any way I can set something in HTML DB, like arraysize in SQL*Plus?

Tom Kyte
August 25, 2004 - 8:40 am UTC

There is no parameter you can set that would enable bulk fetching in HTML DB by the HTML DB reporting engine.

What will be the best way to do this?

A reader, August 28, 2004 - 7:57 am UTC

Hi Tom,
A table contains around 10 million rows. The table has client, product code, and product count. It is required that each client existing in the table have at least three core products. If a client does not have these core products, then I have to create records for the core products with a product count of 0. The core products are known through a lookup table.
For example:
client product product count
1 P1 1
1 P2 2
2 P1 8
3 P3 5
...................
...................
The o/p required, if P1, P2 and P3 are core products
1 P1 1
1 P2 2
1 P3 0----> new record
2 P1 8
2 P2 0----> new record
2 P3 0----> new record
3 P1 0----> new record
3 P2 0----> new record
3 P3 5
...................
...................
Could you please tell me the best way to achieve this, as the table is quite large.
Thanks

Tom Kyte
August 28, 2004 - 10:02 am UTC

is this to be "a printed report" or do these rows have to just be inserted.

More Info...

A reader, August 28, 2004 - 5:54 pm UTC

Hi Tom,
The rows just need to be inserted.
Thanks

Tom Kyte
August 28, 2004 - 6:08 pm UTC

select client_id, 3-count(*) cnt
  from t
 where product_code is one of the "core" product
 group by client_id
having count(*) < 3


will generate the set of client_id's and the needed number of records.  just join that to the default products you'd like to assign

eg:

ops$tkyte@ORA9IR2> select client_id, 3-count(*) cnt
  2    from t
  3   group by client_id
  4   having count(*) < 3;
 
 CLIENT_ID        CNT
---------- ----------
         1          2
         3          1
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select *
  2    from ( select client_id, 3-count(*) cnt
  3             from t
  4            group by client_id
  5           having count(*) < 3 ),
  6             (select 1 x, 'prod1' from dual
  7               union all
  8                  select 2 x, 'prod2' from dual  )
  9   where cnt <= x;
 
 CLIENT_ID        CNT          X 'PROD
---------- ---------- ---------- -----
         3          1          1 prod1
         1          2          2 prod2
         3          1          2 prod2


then insert. 

Why Can't I come out with these ideas???

A reader, August 28, 2004 - 6:28 pm UTC

Thanks a lot!

catch insert select errors

Marvin, September 08, 2004 - 7:26 am UTC

Hi

I am currently reviewing some Data Mart ETL codes. I see many codes such as

declare a cursor: select * from x
open cursor
insert into table y values (cursor.row)

and so on. The reason I have been told is that the code can catch the error row easily: if there are errors, from the logfile we can see immediately what rows caused them, and a single failure won't abort the whole insert process.

If I change this to INSERT ... SELECT it goes much, much faster (obviously); however, if one single row fails the whole insert fails and there is no way to determine which row caused the error.

You mentioned checking the data before loading it, however I don't see how we can do that if we have duplicated data.

For example, if I am loading table X using data from table Y and, when inserting into X, I encounter duplicate rows, the INSERT ... SELECT fails, the whole process fails, and there is no way to know which row caused the error. If I need to check the data first before the load, I would have to check row by row, which is slow as well.

My question is: how can we detect the row which causes the error in an INSERT ... SELECT statement? And how can someone restart the process, discarding the row which caused the error? These batch processes run over 3 weeks and I am pretty sure it's all because of this row-by-row coding; however, it seems a good way for ETL processes.

Regards

Tom Kyte
September 08, 2004 - 10:07 am UTC

use BULK PROCESSING then with SAVE EXCEPTIONS

search for

"SAVE EXCEPTIONS"

on this site for examples.
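
As a skeleton of that pattern -- tied back to the 9iR2 example at the top of this page, but logging the offending rows to an error table instead of printing them -- X, Y and X_ERRORS here are placeholders, and X_ERRORS is assumed to have the same shape as Y:

declare
    type array is table of y%rowtype index by binary_integer;
    l_data     array;
    dml_errors exception;
    pragma exception_init( dml_errors, -24381 );
    cursor c is select * from y;
begin
    open c;
    loop
        fetch c bulk collect into l_data limit 100;
        begin
            forall i in 1 .. l_data.count save exceptions
                insert into x values l_data(i);
        exception
            when dml_errors then
                for i in 1 .. sql%bulk_exceptions.count loop
                    -- log just the offending row; the rest of the batch went in
                    insert into x_errors
                    values l_data( sql%bulk_exceptions(i).error_index );
                end loop;
        end;
        exit when c%notfound;
    end loop;
    close c;
    commit;
end;
/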

save exceptions in PRO*C

Marvin, September 09, 2004 - 1:48 am UTC

Hi

Is there something similar to save exceptions in Pro*C?

thx

Tom Kyte
September 09, 2004 - 8:18 am UTC

I do not believe so, I don't see anything in there remotely similar.

how to catch errorneous rows in PRO*C

A reader, September 10, 2004 - 5:34 am UTC

Hi again

I have been looking through the Pro*C documentation and can't see any solution to our problem.

All our ETL processes (running on Oracle 8.1.7.4) are based on Pro*C code without HOST ARRAYS. The performance is poor, as you can imagine, since all data is processed row by row; however, using this methodology, during an insert process we can easily detect which rows cannot be inserted, discard those rows to a log file, and treat them later. Since we are using 8.1.7.4 we cannot use SAVE EXCEPTIONS from PL/SQL.... Is there any way to achieve this in Pro*C?

We have been thinking of checking the data integrity first and then simply loading with insert /*+ APPEND */ or UPDATE (we insert if nonexistent, update if it exists), but it's not possible; we need to join huge tables to make this check and it takes ages....

Thx

Tom Kyte
September 10, 2004 - 9:10 am UTC

Pro*C will insert as much as it can - and tell you which row "caused us to stop".

Using that, you can try to insert 100 rows, find that 20 of them go in and the 21st failed. So, you log 21 and pick up at 22 and so on.

For example:




void process()
{
exec sql begin declare section;
   int     x[10];
   int     i;
   int     * array;
   int     n = 10;
exec sql end declare section;
                                                                                                                
    EXEC SQL WHENEVER SQLERROR continue;
    exec sql drop table t;
                                                                                                                
    EXEC SQL WHENEVER SQLERROR DO sqlerror_hard();
    exec sql create table t ( x int check ( x > 0 ) );
                                                                                                                
    for( i = 0; i < 10; i++ ) x[i] = i;
                                                                                                                
    x[5] = -1;
    x[9] = -1;
                                                                                                                
    for( array = x, n = 10; n > 0; )
    {
        EXEC SQL WHENEVER SQLERROR continue;
        exec sql for :n insert into t values ( :array );
        EXEC SQL WHENEVER SQLERROR DO sqlerror_hard();
                                                                                                                
        printf( "sqlca.sqlcode = %d\n", sqlca.sqlcode );
        printf( "sqlca.sqlerrmc = %.*s\n", sqlca.sqlerrm.sqlerrml, sqlca.sqlerrm.sqlerrmc );
        printf( "row %d is in error, value = %d\n", sqlca.sqlerrd[2], array[sqlca.sqlerrd[2]] );
                                                                                                                
        array += (sqlca.sqlerrd[2]+1);
        n -= sqlca.sqlerrd[2]+1;
    }
                                                                                                                
    exec sql declare c cursor for select rownum, x from t;
    exec sql open c;
    for( ;; )
    {
        exec sql whenever notfound do break;
        exec sql fetch c into :i, :n;
        exec sql whenever notfound continue;
        printf( "row %d = %d\n", i, n );
    }
    exec sql close c;
}

produces this output:

Connected to ORACLE as user: /
 
sqlca.sqlcode = -2290
sqlca.sqlerrmc = ORA-02290: check constraint (OPS$TKYTE.SYS_C0014459) violated
 
row 0 is in error, value = 0
sqlca.sqlcode = -2290
sqlca.sqlerrmc = ORA-02290: check constraint (OPS$TKYTE.SYS_C0014459) violated
 
row 4 is in error, value = -1
sqlca.sqlcode = -2290
sqlca.sqlerrmc = ORA-02290: check constraint (OPS$TKYTE.SYS_C0014459) violated
 
row 3 is in error, value = -1
row 1 = 1
row 2 = 2
row 3 = 3
row 4 = 4
row 5 = 6
row 6 = 7
row 7 = 8
 

 

question about your last code

A reader, September 10, 2004 - 2:54 pm UTC

Hi

in your code

for( array = x, n = 10; n > 0; )
{
EXEC SQL WHENEVER SQLERROR continue;
exec sql for :n insert into t values ( :array );
EXEC SQL WHENEVER SQLERROR DO sqlerror_hard();

printf( "sqlca.sqlcode = %d\n", sqlca.sqlcode );
printf( "sqlca.sqlerrmc = %.*s\n", sqlca.sqlerrm.sqlerrml,
sqlca.sqlerrm.sqlerrmc );
printf( "row %d is in error, value = %d\n", sqlca.sqlerrd[2],
array[sqlca.sqlerrd[2]] );

array += (sqlca.sqlerrd[2]+1);
n -= sqlca.sqlerrd[2]+1;
}

I understand we are using host arrays, but I don't understand why you use

n -= sqlca.sqlerrd[2]+1

Shouldn't n be constant?

Tom Kyte
September 10, 2004 - 3:05 pm UTC


my array started with 10 elements.

3 of which will fail.

The first time we insert, we say "here are 10 rows to insert"

Pro*c came back and said "ok, we inserted sqlca.sqlerrd[2] of them -- and the sqlca.sqlcode was 'X'"

if X = 0, then we are done -- sqlca.sqlerrd[2] would be equal to N, N would be less than zero, we exit.

if X <> 0 then one of the rows failed. If sqlca.sqlerrd[2] came back as "5" for example -- that means array[0..4] got inserted and array[5] is "bad" (oops, anyone else just see the bug I have :) I should have an "if sqlca.sqlcode { }" around the printfs!! -- doesn't change the logic though)

So, array[5] is bad, but 0, 1, 2, 3, 4 got inserted. We log array[5] somewhere and point array to array[6]. Now we have 6, 7, 8, 9 (4 rows) to insert. We must bump n from 10 down to 4: n -= 5+1, so n = 4.

Next time through the loop we try to insert 4 rows, starting at array offset "6".

what if you have many elements

A reader, September 11, 2004 - 3:38 am UTC

Hi

Thanks for the detailed explanation.
Further query: what if your number of elements is huge? Would you still use the same technique? Say you have 500,000 elements: 0, 1, 2, 3, 4 got inserted, the next insert would be 4 rows, and the next? What would that be :-?


ThX

Tom Kyte
September 11, 2004 - 10:37 am UTC

Your arraysize should be reasonable (e.g. 500,000 would be extreme -- it would technically work, but an in-memory array of 500,000 is just "not really a good idea (tm)").


The size doesn't matter here; I used 10. If you used 100, 1000, 532, whatever -- the logic is identical. In pseudo code:


loop while more data to insert
    try to insert N records
    discover you inserted M records
    if (m < n)
    then
        print out error message for the Mth record
            (m+1 record really, but C arrays are zero based)
        take away M+1 from N (that is how many records are left to go)
        adjust your pointers to point right past the record that failed
    else
        there isn't any more data to insert from this array, exit
    end if
end loop



many elements

A reader, September 11, 2004 - 11:11 am UTC

Hi again

if, let's say, there are 500,000 elements, wouldn't the code be as below and process 500,000 in one go? Since you are using FOR :n INSERT etc. (host arrays):

for( array = x, n = 500000; n > 0; )
{
EXEC SQL WHENEVER SQLERROR continue;
exec sql for :n insert into t values ( :array );
EXEC SQL WHENEVER SQLERROR DO sqlerror_hard();


printf( "sqlca.sqlcode = %d\n", sqlca.sqlcode );
printf( "sqlca.sqlerrmc = %.*s\n", sqlca.sqlerrm.sqlerrml,
sqlca.sqlerrm.sqlerrmc );
printf( "row %d is in error, value = %d\n", sqlca.sqlerrd[2],
array[sqlca.sqlerrd[2]] );


array += (sqlca.sqlerrd[2]+1);
n -= sqlca.sqlerrd[2]+1;
}


you have declared

int n = 10;

but I am not sure if a default value is required since you are saying

for( array = x, n = 10; n > 0; ) ---> n = 10

Or do the n above and the n inside the for loop have nothing to do with each other?

Tom Kyte
September 11, 2004 - 11:16 am UTC

look -- i have an example. the example was 10.

c/10/100/g

and the example still works.

c/100/500/g

and the example still works.


change it to 500,000 -- and it still works but would be an extremely ill advised path to take -- 500k in an array -- please don't go there (unless you work for MS, then go for it)

the n = 10 is technically not necessary in one or the other places. please accept the code for what it is -- an example showing how to array process the data, in a loop that handles errors gracefully, in Pro*C.




delete and host arrays

A reader, September 18, 2004 - 9:25 am UTC

Hi

From your code above (PRO*C, Host Arrays and Insert) it prints the rows which fail. Can we get the same control with DELETE and UPDATE?

For example

cursor x_cur
select * from emp;

DELETE EMP
where DEPTNO = x_cur.deptno;

x_cur.deptno contains (30, 40, 50) ---> HOST ARRAYS



if let's say deptno 30 fails can we catch this deptno in a logfile with some SQLCA element?



Tom Kyte
September 18, 2004 - 10:39 am UTC

see right below.

the code only works if you delete one row

A reader, September 20, 2004 - 3:59 am UTC

Hi

The PRO*C code you have provided works fine for insert values since we are inserting row by row; however it won't work for deletes and updates -- if for each id you are deleting/updating N rows, array[sqlca.sqlerrd[2]] would not give the proper value!

Tom Kyte
September 20, 2004 - 8:39 am UTC

Ahh, you are correct, i was assuming singleton row deletes (by primary key for example) -- deletes that always deleted exactly one row (not zero!)

the insert processing works due to the fact that each insert values will either insert a row (singleton) or fail.

So, you are correct -- the logic only works if each statement execution affects at least and at most one row (or fails)


thanks for pointing that out, i've changed the text above.

delete and host arrays

A reader, September 20, 2004 - 8:45 am UTC

Hi

Then with DELETE and UPDATE it's not possible to get the faulty IDs.... in PRO*C???

Tom Kyte
September 20, 2004 - 9:01 am UTC

correct, not as far as I can see, the number of elements processed in the array is not conveyed back to us.

but in my opinion, this is OK.

An insert, I can see -- ok "log failed rows over here for later cleanup"

A delete/update -- if delete #5 failed -- what then? doesn't seem like you can let deletes 1..4 work, 6..10 work but just sort of skip "5"

delete and host arrays

A reader, September 20, 2004 - 9:43 am UTC

Hi

What we are trying to do is convert row by row processing to batch processing so if we have 10 elements in an array it is possible that some might fail and those IDs would go to a log file for further processing.

Guess this is not possible and we must do it row by row.

Thanks for your help

Tom Kyte
September 20, 2004 - 10:45 am UTC

or use plsql to process, you'll find plsql to be infinitely easier to program than pro*c to do ETL like work.
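
A minimal sketch of that PL/SQL route, assuming 9i or later (FORALL ... SAVE EXCEPTIONS) and, purely for illustration, some constraint that makes one of the deletes fail -- note that a delete which simply touches zero rows is not an error and will not be caught:

declare
    type numtab is table of number index by binary_integer;
    l_deptnos   numtab;
    bulk_errors exception;
    pragma exception_init( bulk_errors, -24381 );
begin
    l_deptnos(1) := 30;
    l_deptnos(2) := 40;
    l_deptnos(3) := 50;

    forall i in 1 .. l_deptnos.count save exceptions
        delete from emp where deptno = l_deptnos(i);
exception
    when bulk_errors then
        for i in 1 .. sql%bulk_exceptions.count
        loop
            -- error_index is the array subscript, so the faulty deptno can be logged
            dbms_output.put_line( 'deptno ' ||
                l_deptnos( sql%bulk_exceptions(i).error_index ) ||
                ' failed with ORA-' || sql%bulk_exceptions(i).error_code );
        end loop;
end;
/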

not with pl/sql neither

A reader, September 20, 2004 - 5:14 pm UTC

To the reader who is asking about controlling DML exceptions.

Even with PL/SQL you cannot accomplish what you want to do in 8.1.7.4.

PL/SQL might be easier to code, but PRO*C is faster than anything else IMHO and it's worth using rather than PL/SQL, which is horribly slow.

Tom Kyte
September 20, 2004 - 7:14 pm UTC

hah, i'll take your pro*c code on any day to do ETL processing.

This entire site is plsql, it ain't "horribly slow"

Anyone can write slow code in any language. If you take time to learn the language, the features, the best practices -- it can fly.

C has its time and place, but for doing ETL type processing -- nope. If you are dealing with data -- SQL, then PLSQL in that order.

INSERT ALL issue

Budhi, October 26, 2004 - 10:07 am UTC

I have a scenario in which I want to fetch records over a DBLink, insert into the history tables and then insert into the actual staging tables. Staging tables are maintained in such a way that we have a master-detail relationship between the staging tables.
Suppose the structure is as below:-

STAGE_MASTER
------------
stg_id,
load_date,
invnt_id,
invnt_name,
stock_qty

STAGE_DETAIL
--------------
invnt_make_model,
typ_desc1,
typ_desc2,
typ_desc3,
lst_price_incr,
invnt_incharge,
sales_rgn,
sales_mgr,
rsrch_mgr,
stocks_updtr

LOAD_HIST
----------
stg_id,
load_date,
invnt_id,
invnt_name,
stock_qty,
invnt_make_model,
typ_desc1,
typ_desc2,
typ_desc3,
lst_price_incr,
invnt_incharge,
sales_rgn,
sales_mgr,
rsrch_mgr,
stocks_updtr


Now for performance improvement I am using INSERT ALL. Problem here is I have a sequence which generates the stg_id for me. But this is creating a lot of issues like "cannot insert null()", "parent key not found" etc with the following INSERT ALL statement. Can't we use INSERT ALL with a sequence and a master-detail relationship?

INSERT /*+APPEND*/ ALL
INTO STAGE_MASTER
(stg_id,load_date,invnt_id,invnt_name,stock_qty)
VALUES (stg_id, sysdate, invnt_id, invnt_name, stock_qty)
INTO STAGE_DETAIL
(invnt_make_model,typ_desc1,typ_desc2,typ_desc3,lst_price_incr,invnt_inchargesales_rgn,sales_mgr,rsrch_mgr,stocks_updtr)
VALUES (invnt_make_model,typ_desc1,typ_desc2,typ_desc3,lst_price_incr,invnt_inchargesales_rgn,sales_mgr,rsrch_mgr,stocks_updtr)
INTO LOAD_HIST
(stg_id,load_date,invnt_id,invnt_name,stock_qtyinvnt_make_model,typ_desc1,typ_desc2,
typ_desc3,lst_price_incr,invnt_inchargesales_rgn,sales_mgr,rsrch_mgr,stocks_updtr)
VALUES (stg_id,load_date,invnt_id,invnt_name,stock_qtyinvnt_make_model,typ_desc1,typ_desc2,
typ_desc3,lst_price_incr,invnt_inchargesales_rgn,sales_mgr,rsrch_mgr,stocks_updtr)
SELECT ROWNUM stg_id,load_date,invnt_id,invnt_name,stock_qtyinvnt_make_model,typ_desc1,typ_desc2,typ_desc3,lst_price_incr,invnt_inchargesales_rgn,sales_mgr,rsrch_mgr,stocks_updtr)
FROM invnt_detl@oltp_invnt;



Tom Kyte
October 26, 2004 - 10:18 am UTC

you'll need to give me a more concise, to the point -- but complete example to work with. I don't even see a sequence?

create tables..
insert all

short, few columns, no tablespace/storage clauses -- just really really small, like my examples.

A reader, October 26, 2004 - 2:54 pm UTC

Hi tom,

please assume that I am not using sequences; instead I am using rownum to serve as the unique key for my staging table.

Tom Kyte
October 26, 2004 - 3:03 pm UTC

still -- reread above....

Which one?

A reader, October 31, 2004 - 1:24 pm UTC

I have a PL/SQL ETL routine that scrubs staging data from an external vendor. The staging table (external table) is all varchar2, so nothing gets rejected, but I need to validate and load into the main table.

Say I want to load the row only if certain columns are numeric. So I do

create or replace type vc2_tab as table of varchar2(4000);

create or replace function is_num(p_vc2_tab in vc2_tab)
return integer
as
j number;
begin
for i in 1..p_vc2_tab.count loop
begin
select to_number(p_vc2_tab(i)) into j from dual;
exception when others then return 0;
end;
end loop;
return 1;
end;
/

create or replace function is_num2(p_vc2 in varchar2)
return integer
is
j number;
begin
select to_number(p_vc2) into j from dual;
return 1;
exception when others then return 0;
end;
/

Which one would perform better

1. select ... from ext_table
where is_num(col1,col2,col3,col4,...,colN)=1

or

2. select ... from ext_table
where is_num2(col1)=1
and is_num2(col2)=1
...
and is_num2(colN)=1

What comes into play, SQL to PL/SQL context switching, one code "looks better" than the other, any other considerations?

Thanks

Tom Kyte
October 31, 2004 - 1:31 pm UTC

if you are using an external table, just map it as a number, let the reject limit be set high, and the C code will write bad records out to the bad file...

that looks better than mapping everything as varchar2

A reader, October 31, 2004 - 3:41 pm UTC

Well, in some cases I am using external tables and in others I am not; I was just asking for a comparison between the 2 techniques I was using.

[Yes, I agree that I can let the access driver reject bad rows to a bad file, but to report on that, I need it in the database. So again, map the "bad file" to another bad external table, etc, etc. It's just easier to truncate staging, 'insert /*+ append */ into staging select * from ext_table' and then everything is nice and familiar!]

Tom Kyte
October 31, 2004 - 3:43 pm UTC

with sqlldr -- badfile
with external tables -- badfiles

period. etc etc etc etc. it is "easier" -- it does the work for you.


else benchmark.
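
A minimal sketch of letting the external table do that work -- assuming a directory object DATA_DIR and a comma-delimited file emp.dat (all names illustrative). A row whose EMPNO does not convert to a number goes to the badfile, and the reason goes to the logfile:

create or replace directory data_dir as '/tmp';

create table ext_emp
( empno  number,              -- mapped as a number, not varchar2
  ename  varchar2(30),
  sal    number
)
organization external
( type oracle_loader
  default directory data_dir
  access parameters
  ( records delimited by newline
    badfile data_dir:'emp.bad'    -- rejected records land here
    logfile data_dir:'emp.log'    -- the reason for each rejection lands here
    fields terminated by ','
    missing field values are null
  )
  location ( 'emp.dat' )
)
reject limit unlimited;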

Badfiles

A reader, October 31, 2004 - 4:47 pm UTC

The thing about badfiles is... they don't really benefit anyone.

Let's say I have some bad records written (by sqlldr, ext tables, whatever) to a badfile. Now what? How does this benefit my Production support people? They can't "read" a badfile to figure out what is wrong. They need to "join" the badfile with the logfile to see that "Ah, Record 1 is 'bad' because of reasons a, b and c". Now they know what to do. If it is bad because of Reason A, ah hah, bug with development, call up the Development manager. If it is Reason B, bad data from the vendor, call up the Interface manager, etc.

That's why I like to define everything as varchar2 and let everything "succeed" into staging. This way, my developers can write code to handle all possible errors, log everything to a nice error table, generate fancy reports for Production Support to review and take action upon, etc.

I am surprised we disagree on this topic. I thought you were all for "code once, move on", let Prod support handle it. You are not going to be around to support this code/data 5 years from now!

Thanks

Tom Kyte
November 01, 2004 - 3:17 am UTC

why wouldn't production people have access to that which i need???!?

you asked for an opinion. My opinion is that if you are doing a dataload, let the tools do the scrubbing.

Of these two:

1. select ... from ext_table
where is_num(col1,col2,col3,col4,...,colN)=1

or

2. select ... from ext_table
where is_num2(col1)=1
and is_num2(col2)=1
...
and is_num2(colN)=1

#1 will outperform #2 by far - -however, sqlldr/et's will blow them both away.

but you could always

1) external table load
2) post process what should be a very small badfile using your varchar2 approach


and who said we disagreed? You have requirements that until right now were unstated. Add unstated requirements and perhaps the responses you get to questions change.

You asked:

<quote>
I have a PL/SQL ETL routine that scrubs staging data from a external vendor.
Staging table (external table) is all varchar2, so nothing gets rejected, but I
need to validate and load into main table.

Say I want to load the row only if certain columns are numeric.
....
What comes into play, SQL to PL/SQL context switching, one code "looks better"
than the other, any other considerations?
</quote>

given that input "I need to validate and load into main table" - the output you get from me is "badfile".



Badfiles

A reader, November 01, 2004 - 9:50 am UTC

Yes, I realize I changed the goalposts on you a little, but I thought we had changed the discussion to the concept of badfiles instead of validating data, etc.

Your reason for using badfiles was that... "it is "easier" -- it does the work for you". But then, given my requirements of generating reports on what failed, why, etc, the work doesn't stop there. I would need, as you said, a "post process" to read the badfile and do stuff with it.

That kind of defeats the purpose of using badfiles! Yes, if all I need is "thou shalt not put bad data anywhere in the database, even staging", then, by all means, tighten up staging and let the badfile get everything.

But if my goal is to generate formatted error reports, I need all the data in the database. In that case, I might as well define staging as all-varchar2 and let everything get in; that's all I was saying.

Badfiles are great for one thing only. Since they are in the format of the original data file, one can manually edit the badfile to "fix" the bad data and simply make that badfile the "datafile" and load the fixed records. Other than that, I don't see the benefit of badfiles for humans.

Thanks

Tom Kyte
November 01, 2004 - 1:41 pm UTC

i fail to see how that defeats the purpose of anything.

It is what badfiles do -- log bad records, you have not defeated that have you?

the badfile -- as far as i'm concerned -- could be in the database (external table or just load it)

guess we'll have to disagree. I would:

load table from external_table
process bad records -- into a table, into a report, whatever you want.

external tables make badfiles "human usable". I don't see a significant difference between your approach and mine at the end of the day -- just you write more code and I write less.

but it is up to you.


David Aldridge, November 01, 2004 - 11:00 am UTC

SQL*Loader would just flag a single reason for the failure also, wouldn't it? If you had validation failures on two columns then only one reason would be logged.

Tom Kyte
November 02, 2004 - 5:43 am UTC

and using the approach above that was asked about:

1. select ... from ext_table
where is_num(col1,col2,col3,col4,...,colN)=1

or

2. select ... from ext_table
where is_num2(col1)=1
and is_num2(col2)=1
...
and is_num2(colN)=1

how exactly would that change?

A reader, November 01, 2004 - 2:01 pm UTC

"load table from external_table
process bad records -- into a table, into a report, whatever you want"

Yes, we agree on this. The details on the process are what we disagree on.

David just took the words right out of my mouth. I was just going to say: if I need validation on 10 columns in the record, SQL*Loader or an external table would abort after the first bad column. So my post-process wouldn't even know that the others are bad.

Also, there are cases where it is not really "bad" as far as the load part is concerned, but the value has to be in a pre-determined LOV (I expect A, B, C; the vendor sends me D). Yes, for this simple case I can define a check constraint and make sqlldr fail it, but in other cases, where a CHECK constraint can't detect "badness", I would need to get everything into staging as-is and write code to detect bad data, report on it, etc.

Like I said, we are solving 2 different problems. For your stated purpose of just preventing bad data (as defined by declarative constraints) from coming into staging, your approach is fine.

My problem definition is slightly broader and for that I am writing more code, nothing wrong with that?

Tom Kyte
November 02, 2004 - 6:35 am UTC

well, if you used the queries you questioned me on -- YOU WOULD NOT even have retrieved the bad records....... (so maybe I'm totally missing your point, you would have skipped those records ENTIRELY)

here is the way I would do this:

a) load the records generating a bad file
b) process the records in bad generating whatever report you wanted me to produce, either report all errors on the row (validate each and every column) or just report the first error


here is what you would be doing

a) read each record
b) validate each and every field (even for the vast majority of 'good' records) in plsql
c) write good record to real table, write bad record report


so -- at the end of the day, I'm not seeing a material difference here with the EXCEPTION that "I run faster with less code possibly"



A reader, November 05, 2004 - 5:05 am UTC

Hi Tom,
"Mixed Header Detail loading" May 14, 2004 review above ---
what would be the best way to generate the output as in Michal's input file

003310304000J019
1BBH334000000000067484C310304
1BKN580000000000516281C310304
1EFJ940000000000238847C310304
...
012311203000K029
1HDP922000000001351670C310304
1HGV524000000001218811C310304
1HGV524000000000069179C310304
1HGV524000000000121643C310304
...


if "header" records were in one and "detail" records in another table
Thanks

Tom Kyte
November 05, 2004 - 4:24 pm UTC

scott@ORA9IR2> select decode( grouping(c2), 1, c1, c2 ) data
2 from (
3 select dept.deptno,
4 'HDR' || dept.deptno||'|'||dept.dname||'|'||dept.loc c1,
5 'DTL' || emp.deptno ||'|'|| emp.empno || '|'|| emp.ename || '|' || emp.job c2
6 from emp, dept
7 where emp.deptno = dept.deptno
8 )
9 group by grouping sets((deptno,c1),(deptno,c1,c2))
10 order by deptno, grouping(c2) desc
11 /

DATA
---------------------------------------------------------------------------------------------------------
HDR10|ACCOUNTING|NEW YORK
DTL10|7934|MILLER|CLERK
DTL10|7782|CLARK|MANAGER
DTL10|7839|KING|PRESIDENT
HDR20|RESEARCH|DALLAS
DTL20|7369|SMITH|CLERK
DTL20|7876|ADAMS|CLERK
DTL20|7902|FORD|ANALYST
DTL20|7566|JONES|MANAGER
DTL20|7788|SCOTT|ANALYST
HDR30|SALES|CHICAGO
DTL30|7900|JAMES|CLERK
DTL30|7521|WARD|SALESMAN
DTL30|7698|BLAKE|MANAGER
DTL30|7499|ALLEN|SALESMAN
DTL30|7654|MARTIN|SALESMAN
DTL30|7844|TURNER|SALESMAN

17 rows selected.

that would be a way to do it in a single SQL -- though I would probably write a program to do this whereby I just joined emp to dept and when I see that the dept record "changed" i would print it out -- and just always print the EMP information




A reader, November 05, 2004 - 4:31 pm UTC

"that would be a way to do it in a single SQL -- though I would probably write a program to do this ..."

Um, why? Aren't you always in favor of not writing code unless you have to? If a single SQL can do the job, why write a program?

Tom Kyte
November 05, 2004 - 6:00 pm UTC

Well, I'm assuming there is probably a program involved (creating a large formatted file)...

And if so, just doing a full table scan of each table, hashing them and joining and sorting by join key would be easier than:

a) doing that
b) grouping by to create a "duplicate" row once per group <<<=====
c) sorting all of that

so, assuming a program was going to be written, I would probably let the program do the formatting.

If a program were not to be written, the query would be used.

SC, November 08, 2004 - 8:39 am UTC

Hi Tom.
Thanks for your example!

-Can you please detail in case the program is pl/sql:
"when I see that the dept record "changed" i would print it out -- and just always print the EMP information"
How do you capture that the dept record changed?

-What would be approach in case of 3 tables and output in the form:

HDR10|ACCOUNTING|NEW YORK
DTL10|7934|MILLER|CLERK
DET10|7934|MILLER|A1
DET10|7934|MILLER|A2
DET10|7934|MILLER|A3
DTL10|7782|CLARK|MANAGER
DET10|7782|CLARK|B1
DET10|7782|CLARK|B2
...
...


Thanks!


Tom Kyte
November 08, 2004 - 4:50 pm UTC




for x in ( select dept.deptno, ....
from emp, dept
where emp.deptno = dept.deptno
order by dept.deptno )
loop
if ( x.deptno <> last_deptno or last_deptno is null )
then
output DEPTNO record, it just changed (or is the first);
last_deptno := x.deptno;
end if;
output EMP record, every time....
end loop;


3 tables, 50 tables -- does not matter -- you have a "key" (deptno in this case), when key changes -- output HDR record and then remember last key.

In Cobol we used to call this "control break processing" as I recall...
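
A runnable version of that sketch, assuming the usual SCOTT emp/dept tables and dbms_output standing in for the "print":

declare
    last_deptno dept.deptno%type;
begin
    for x in ( select dept.deptno, dept.dname, dept.loc,
                      emp.empno, emp.ename, emp.job
                 from emp, dept
                where emp.deptno = dept.deptno
                order by dept.deptno )
    loop
        if ( last_deptno is null or x.deptno <> last_deptno )
        then
            -- the key changed (or this is the first row): output the HDR record
            dbms_output.put_line( 'HDR' || x.deptno || '|' || x.dname || '|' || x.loc );
            last_deptno := x.deptno;
        end if;
        -- output the DTL record every time
        dbms_output.put_line( 'DTL' || x.deptno || '|' || x.empno || '|' ||
                              x.ename || '|' || x.job );
    end loop;
end;
/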

SC, November 09, 2004 - 5:38 am UTC

Hi Tom.
Thanks for your answer!

a. Is your example for "control break processing" the appropriate approach when
processing a large number of rows?

b. if I want to get the output with row number assigned to the row
HDR001CLARKMANAGER10
DTL0021CLARKMANAGER10
DTL003KINGPRESIDENT10
DTL004millerCLERK10
HDR005SMITHCLERK20
DTL006SMITHCLERK20
I can do it using a variable which is incremented as the data are fetched.
If using analytics and row_number():

declare
last_deptno number(2);
l_i int := 0;
begin
for x in ( select emp.ename, emp.job, dept.deptno, row_number () over (order by dept.deptno) as rownumber
from emp, dept
where emp.deptno = dept.deptno
order by dept.deptno )
loop
if ( x.deptno <> last_deptno or last_deptno is null )
then
--output DEPTNO record, it just changed (or is the first);
dbms_output.put_line('HDR'||lpad(x.rownumber+l_i,3,0)||x.ename||x.job||x.deptno);
last_deptno := x.deptno;
l_i := l_i + 1;
end if;
--output EMP record, every time....
dbms_output.put_line('DTL'||lpad(x.rownumber+l_i,3,0)||x.ename||x.job||x.deptno);
end loop;
end;
/

Maybe you have something better than this.

c. do we need to have order by when using row_number() - it is in "row_number () over (order by dept.deptno)"

Thanks.


Tom Kyte
November 09, 2004 - 8:56 am UTC

a) in 9i -- throw in judicious use of array fetching (fetch c bulk collect into arrays limit 500) -- but in 10g, you can just use the implicit cursor. But yes, this is one of the rare cases where I would probably take a piece of code over a single sql query.


b) since I'm doing procedural code and I can count easily -- I would do it there, in the code.

c) yes, you must have order by with row_number

What is the best way

Saheli, July 04, 2005 - 8:58 am UTC

Hi Tom,
What is the best way to capture changed data from an OLTP database in Oracle 8i, if the tables have no timestamp columns?

Thanks in advance.
Regards,



Tom Kyte
July 04, 2005 - 10:48 am UTC

where is the data going, what is the goal?

and adding new features to an existing system built on non-supported software carries with it a host of issues/dangers.

This is for a data warehouse

Saheli, July 05, 2005 - 6:52 am UTC

The data from the OLTP system is to be loaded into a data warehouse. We will be capturing data changed over a period of time and loading it into the DW tables (ETL process). Capturing this data change will be easy if there is a timestamp column or some other date column in the table. If there are no such columns, what is the best way to capture the changed data? The OLTP system is already in place and any change to it is ruled out.

Regards,
Sunitha

Tom Kyte
July 05, 2005 - 7:45 am UTC

well, unless you change it, you will have to capture the "entire thing".

Unless you add mv logs, you cannot use incremental capture for materialized views.

streams didn't exist way back when.

CDC (change data capture) did not either.
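
In practice, "capturing the entire thing" usually means pulling a full extract each time and diffing it against the previous one. A hedged sketch, assuming two staging copies of the source table (names illustrative):

-- rows that are new or changed since the last extract
select * from stage_today
minus
select * from stage_yesterday;

-- rows that disappeared from the source since the last extract
select * from stage_yesterday
minus
select * from stage_today;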

Which method is better?

Reader, August 08, 2005 - 8:20 am UTC

Hi Tom,

Suppose I have about one million rows to insert into a table, which would be the best way to do it?
1. insert into t1 select .......
2. use pl/sql code with bulk collect.
3. Use limit option in bulk collect.

If option 1 inserts 1 million rows, is it generating 1 million insert statements or is it a single insert statement? How is it different from bulk collect and insert?

Thanks.


Tom Kyte
August 08, 2005 - 8:42 am UTC

#1


it is a single SQL statement. it is different from #2 and #3 in that

a) it is less buggy because you write less code and the less code we all write the better

b) it is easier to implement

c) it uses less resources (redo, undo, cpu, everything)

d) it is the right way to do it.

10gr2 log errors

Andrew, August 08, 2005 - 12:32 pm UTC

Is something up with 10gr2? Ever since your blogs there has been silence about "log errors". I'm still excited about it, I'm still benchmarking it, and dreaming up processes that take advantage of it. How about you?
It makes 'insert into select from...' a much better option here doesn't it?

Tom Kyte
August 08, 2005 - 8:13 pm UTC

:) gotta finish book.

I'm very fond of that feature; it and transparent data encryption are the two big ones of 10gR2
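
For readers on 10gR2 or later: DML error logging lets the single INSERT ... SELECT keep going past bad rows. A minimal sketch, reusing the t/t2 tables from the example at the top of this page:

-- one-time setup: creates the error log table (ERR$_T2 by default)
exec dbms_errlog.create_error_log( 'T2' )

insert into t2
select * from t
log errors into err$_t2 ('nightly load')
reject limit unlimited;

-- the rejected rows, the Oracle error and the tag are all queryable afterwards
select ora_err_number$, ora_err_mesg$, ora_err_tag$
  from err$_t2;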

Insert Records into

mAg, August 08, 2005 - 12:46 pm UTC

Hi Tom

Can you explain how this "insert into t1 select * from t..." will use fewer resources (redo and undo)?

Thanks
mAg

Tom Kyte
August 08, 2005 - 8:26 pm UTC

bulk operations are more efficient.  The database is a big book-keeper of things.

If you ask a book-keeper to record one event, that is a lot less overhead than asking them to record 1,000,000 events.


ops$tkyte@ORA10GR1> create table t1 as select * from big_table.big_table where 1=0;

Table created.

ops$tkyte@ORA10GR1> create table t2 as select * from big_table.big_table where 1=0;

Table created.

ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> exec runstats_pkg.rs_Start

PL/SQL procedure successfully completed.

ops$tkyte@ORA10GR1> begin
  2          for x in ( select * from big_table.big_table )
  3          loop
  4                  insert into t1 values X;
  5          end loop;
  6          commit;
  7  end;
  8  /

PL/SQL procedure successfully completed.

ops$tkyte@ORA10GR1> exec runstats_pkg.rs_middle

PL/SQL procedure successfully completed.

ops$tkyte@ORA10GR1> insert into t2 select * from big_table.big_table;

1000000 rows created.

ops$tkyte@ORA10GR1> commit;

Commit complete.

ops$tkyte@ORA10GR1> exec runstats_pkg.rs_stop
Run1 ran in 14700 hsecs
Run2 ran in 7901 hsecs
run 1 ran in 186.05% of the time


Name                                  Run1        Run2        Diff
LATCH.cache buffers chains       5,591,174     615,981  -4,975,193
STAT...undo change vector size  67,971,056   3,865,996 -64,105,060
STAT...redo size               378,755,320 119,146,628 -259,608,692

Run1 latches total versus runs -- difference and pct
Run1        Run2        Diff       Pct
11,056,088     873,131 -10,182,957  1,266.26%

PL/SQL procedure successfully completed.



Less undo, less redo.  

Tastes great, less filling. 

SQL Loader VS External Table

PK, August 08, 2005 - 1:42 pm UTC

1. With 9i Release 3 is there any one good reason to use SQL Loader?
2. Is there anything that I can do in SQL*Loader and cannot do with an external table?

Thanks for yr help in advance.

Thanks
Koshal


Tom Kyte
August 08, 2005 - 8:24 pm UTC

9ir3? what is that :)

funny, just finished a chapter on data loading/unloading and had this to say:

<quote>

External Tables

External tables were first introduced in Oracle 9i Release 1. Put simply, they allow us to treat an OS file as if it were a read-only database table. They are not intended to be a replacement for a “real” table, or to be used in their place, but rather as a tool to ease the loading and in Oracle 10g – unloading, of data.

When the external tables feature was first unveiled, I often referred to them as "the replacement for SQLLDR". This still holds true – most of the time. Having said this, you might wonder why we just spent so much time looking at SQLLDR. The reason is that SQLLDR has been around for a long time, and there are many, many legacy control files lying around. SQLLDR is still a commonly used tool; it is what many people know and have used. We are still in a period of transition from use of SQLLDR to external tables, so SQLLDR is still very relevant.
What many DBAs don't realize is that their knowledge of SQLLDR control files is very readily transferable to the use of external tables. The syntax is very similar. What you’ll discover, as we work through the examples, is that external tables incorporate much of the SQLLDR syntax and many of the techniques.

There are three situations in which SQLLDR should be chosen over external tables:

* You have to load data over a network – in other words, when the input file is not on the database server itself. One of the restrictions of external tables is that the input file must be accessible on the database server.

* Multiple users must concurrently work with the same external table processing different input files.

* You have to work with LOB types. External tables do not support LOBs.

With those three exceptions in mind, I would in general strongly recommend using external tables for their extended capabilities. SQLLDR is a fairly simple tool that generates an INSERT statement and loads data. Its ability to utilize SQL is limited to calling SQL functions on a row by row basis. External tables open up the entire SQL set of functionality to data loading. Some of the key functionality features that external tables have over SQLLDR in my experience are:

* The ability to use complex WHERE conditions to selectively load data. SQLLDR has a WHEN clause to select rows to load, but you are limited to using only AND expressions and expressions using equality – no ranges (greater than, less than), no OR expressions, no IS NULL and so on.

* The ability to MERGE data. You can take an operating system file full of data and update existing database records from it.

* The ability to perform efficient code lookups. We can join an external table to other database tables as part of our load process.

* Easier multi-table inserts using INSERT. Starting in Oracle 9i, an INSERT statement can insert into 1 or more tables using complex WHEN conditions. While SQLLDR can load into multiple tables, it can be quite complex to formulate the syntax.

* A shallower learning curve for new developers. SQLLDR is “yet another tool” to learn, in addition to the programming language, the development tools, the SQL language and so on. Since they either already know SQL or need to learn SQL to do their job – it removes one more thing to learn about. External tables allow these developers to use knowledge they already must have, without having to learn a new tool (SQLLDR).

So, with that in mind, let's look at how to use external tables.
......
</quote>
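
As a concrete illustration of the MERGE point in that list, a hedged sketch assuming an external table EXT_EMP shaped like EMP:

merge into emp e
using ext_emp x
on ( e.empno = x.empno )
when matched then
    update set e.sal = x.sal, e.comm = x.comm
when not matched then
    insert ( empno, ename, job, sal, comm, deptno )
    values ( x.empno, x.ename, x.job, x.sal, x.comm, x.deptno );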

Nested Table,SQLLDR

Reader, August 16, 2005 - 7:58 am UTC

We have a requirement where an item can belong to multiple groups. The number of rows per month is about 5 million.
20% of the items may belong to multiple groups.
We are thinking of using Nested table .
OR
Generate one more row (or use parent child) for each additional group in the table .

I read your Book (pg 418) --sqlldr can handle this .
Even PLSQL can handle this daily load (250000) very easily .

Now, my question is: which is the better choice?
Nested table or adding one more row in the table for every additional group (20% more rows) .
OR use a master child relationship.

I hope my question is clear .

Thanks



Tom Kyte
August 17, 2005 - 10:34 am UTC

do not use nested tables. I have never seen a real world use of nested tables. You always end up wanting to use the data in the nested tables "standalone", as a table itself, but it is really hard. Use real tables.


Groups have many items.
Items are in many groups.

Classic many to many relationship. You have a groups table, an items table and a GROUPS_ITEMS table to relate them.

I don't understand how this could possibly be "20% more rows"
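
A minimal sketch of that classic design, with illustrative names -- an item in three groups is simply three rows in the association table:

create table groups
( group_id   number        primary key,
  group_name varchar2(40)  not null
);

create table items
( item_id    number        primary key,
  item_name  varchar2(40)  not null
);

create table groups_items
( group_id   number not null references groups,
  item_id    number not null references items,
  constraint groups_items_pk primary key ( group_id, item_id )
);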

SQL%BULK_EXCEPTIONS

Darin, September 07, 2005 - 12:24 pm UTC

Tom,

When looping through the SQL%BULK_EXCEPTIONS in the exception block (for a FORALL loop), if I write the error information to a DB table, won't the INSERT transaction reset the information in the implied SQL cursor such that I will lose the remaining SQL%BULK_EXCEPTION information?

Are these the same implied SQL cursors, or are they different (the implied SQL cursor for the insert statement [i.e. if I say SQL%ROWCOUNT to determine the number of rows written], and the SQL cursor from the FORALL statement [i.e. from which I am getting the SQL%BULK_EXCEPTION information])?

Thanks


Tom Kyte
September 07, 2005 - 1:56 pm UTC

did you try it out to see? it would be considered a best practice to copy it out first, yes.

why risk it? copy it out and don't worry about it ever changing in the future

SQL%BULK_EXCEPTIONS

Darin, September 07, 2005 - 5:07 pm UTC

Tom,

In fact I did try it (after I wrote you above) and it appears to NOT overwrite the SQL%BULK_EXCEPTIONS information, which seemed a bit odd to me.

medecat@prv3> declare
bulk_err exception;
pragma exception_init(bulk_err,-24381);
err_cnt number;
err_index number;
err_code number;
message varchar2(200);
type tary is table of number;
ary tary := tary(1,2,null,4,null,5,null);

begin

forall i in ary.first..ary.last SAVE EXCEPTIONS
insert into tt values(ary(i));

exception
when bulk_err then
dbms_output.put_line('-');
dbms_output.put_line('SQL%BULK_EXCEPTIONS.COUNT='||SQL%BULK_EXCEPTIONS.count);
for i in 1..SQL%BULK_EXCEPTIONS.count loop
dbms_output.put_line('--');
dbms_output.put_line('Before INSERT - SQL%BULK_EXCEPTIONS(i).error_index: '||SQL%BULK_EXCEPTIONS(i).error_index||'; SQL%BULK_EXCEPTIONS(i).error_code:'||SQL%BULK_EXCEPTIONS(i).error_code);
err_index := SQL%BULK_EXCEPTIONS(i).error_index;
err_code := SQL%BULK_EXCEPTIONS(i).error_code;
insert into t_err values('SQL%BULK_EXCEPTIONS(i).error_index:'||err_index||'; SQL%BULK_EXCEPTIONS(i).error_code:'||err_code);
/* 'insert' statement SHOULD have reset the SQL cursor??? */
dbms_output.put_line('After INSERT - SQL%ROWCOUNT:'||SQL%ROWCOUNT);
dbms_output.put_line('After INSERT - SQL%BULK_EXCEPTIONS(i).error_index:'||SQL%BULK_EXCEPTIONS(i).error_index||'; SQL%BULK_EXCEPTIONS(i).error_code:'||SQL%BULK_EXCEPTIONS(i).error_code);
end loop;
end;
/

SQL%BULK_EXCEPTIONS.COUNT=3
--
Before INSERT - SQL%BULK_EXCEPTIONS(i).error_index: 3; SQL%BULK_EXCEPTIONS(i).error_code:1400
After INSERT - SQL%ROWCOUNT:1
After INSERT - SQL%BULK_EXCEPTIONS(i).error_index:3; SQL%BULK_EXCEPTIONS(i).error_code:1400
--
Before INSERT - SQL%BULK_EXCEPTIONS(i).error_index: 5; SQL%BULK_EXCEPTIONS(i).error_code:1400
After INSERT - SQL%ROWCOUNT:1
After INSERT - SQL%BULK_EXCEPTIONS(i).error_index:5; SQL%BULK_EXCEPTIONS(i).error_code:1400
--
Before INSERT - SQL%BULK_EXCEPTIONS(i).error_index: 7; SQL%BULK_EXCEPTIONS(i).error_code:1400
After INSERT - SQL%ROWCOUNT:1
After INSERT - SQL%BULK_EXCEPTIONS(i).error_index:7; SQL%BULK_EXCEPTIONS(i).error_code:1400

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00

The %BULK_EXCEPTION information does not appear to be disturbed by the INSERT statement.

Also to note, If I replace the statement:
insert into t_err values('SQL%BULK_EXCEPTIONS(i).error_index:'||err_index||'; SQL%BULK_EXCEPTIONS(i).error_code:'||err_code);
with:
insert into t_err values('SQL%BULK_EXCEPTIONS(i).error_index:'||SQL%BULK_EXCEPTIONS(i).error_index||'; SQL%BULK_EXCEPTIONS(i).error_code:'||SQL%BULK_EXCEPTIONS(i).error_code);
I get the following error message:
ERROR at line 13:
ORA-06550: line 26, column 71:
PL/SQL: ORA-00911: invalid character
ORA-06550: line 26, column 4:
PL/SQL: SQL Statement ignored

It does not like the "%" in *variable* "SQL%BULK_EXCEPTIONS(i).error_index" in the INSERT statement.

Regarding your suggestion to copy the information from the BULK_EXCEPTIONS attribute before it's used, I attempted to do so, but it would not let me do any *array* copy from the BULK_EXCEPTION attributes, as follows:

declare
bulk_err exception;
pragma exception_init(bulk_err,-24381);
err_cnt number;
err_index number;
err_code number;
message varchar2(200);
type num_ary is table of number;
ary num_ary := num_ary(1,2,null,4,null,5,null);
err_code_ary num_ary;
err_index_ary num_ary;
begin
forall i in ary.first..ary.last SAVE EXCEPTIONS
insert into tt values(ary(i));
exception
when bulk_err then
err_cnt := SQL%BULK_EXCEPTIONS.count;
err_index_ary := SQL%BULK_EXCEPTIONS.ERROR_INDEX;
err_code_ary := SQL%BULK_EXCEPTIONS.ERROR_CODE;
for i in 1..err_cnt loop
dbms_output.put_line('--');
dbms_output.put_line('ERROR_INDEX:'||err_index_ary(i)||'; ERROR_CODE:'||err_code_ary(i));
insert into t_err values('SQL%BULK_EXCEPTIONS(i).error_index:'||err_index||'; SQL%BULK_EXCEPTIONS(i).error_code:'||err_code);
/* 'insert' statement SHOULD have reset the SQL cursor??? */
dbms_output.put_line('After INSERT - SQL%ROWCOUNT:'||SQL%ROWCOUNT);
end loop;
end;
/

err_index_ary := SQL%BULK_EXCEPTIONS.ERROR_INDEX;
*
ERROR at line 18:
ORA-06550: line 18, column 39:
PLS-00302: component 'ERROR_INDEX' must be declared
ORA-06550: line 18, column 2:
PL/SQL: Statement ignored
ORA-06550: line 19, column 39:
PLS-00302: component 'ERROR_CODE' must be declared
ORA-06550: line 19, column 2:
PL/SQL: Statement ignored

Would this require looping through the BULK_EXCEPTION *array* first to get all the information, saving it in an array variable, then looping through my array variable to write the information to a DB table, to ensure that the SQL cursor info won't be lost due to the INSERT transaction?

Thanks as always


Tom Kyte
September 08, 2005 - 7:44 am UTC

I doubt it would overwrite the BULK stuff until you did another BULK operation - but, if you want to be safe.....



yes, you would (*before doing sql*) copy the array to your own array.

Likely, it won't change - but if someone sticks a bulk process in there somewhere, well....
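
Since SQL%BULK_EXCEPTIONS is a pseudo-collection (there is no .ERROR_INDEX collection to assign wholesale, only per-element fields), the copy has to be done element by element. A hedged sketch, reusing the tt and t_err tables from the example above:

declare
    bulk_err exception;
    pragma exception_init( bulk_err, -24381 );
    type num_ary is table of number;
    ary           num_ary := num_ary( 1, 2, null, 4, null, 5, null );
    err_index_ary num_ary := num_ary();
    err_code_ary  num_ary := num_ary();
begin
    forall i in ary.first .. ary.last save exceptions
        insert into tt values ( ary(i) );
exception
    when bulk_err then
        -- step 1: copy the pseudo-collection into our own arrays BEFORE doing any SQL
        for i in 1 .. sql%bulk_exceptions.count loop
            err_index_ary.extend;
            err_code_ary.extend;
            err_index_ary(i) := sql%bulk_exceptions(i).error_index;
            err_code_ary(i)  := sql%bulk_exceptions(i).error_code;
        end loop;
        -- step 2: now it is safe to run other SQL without relying on SQL% attributes
        for i in 1 .. err_index_ary.count loop
            insert into t_err values ( 'error_index:' || err_index_ary(i) ||
                                       '; error_code:' || err_code_ary(i) );
        end loop;
end;
/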

Save Exceptions and autonomous Transactions .

reader, September 25, 2005 - 7:40 am UTC

I have a table bulk1 .
I wish to transfer all the records from this table to bulk_ins .
Bulk_ins has a PK constraint.
I am using save exceptions and want to write all the records erroring out in another table bulk_err .
CREATE TABLE BULK1 as select owner,OBJECT_NAME,OBJECT_ID from all_objects;
create table bulk_ins as select * from bulk1 where 1=2;
create index bulk_idx on bulk_ins(object_id);
alter table bulk_ins add constraint bulk_pk primary key(object_id);
CREATE TABLE BULK_ERR
(
OBJECT_ID NUMBER,
ERR_CD NUMBER,
ERR_MSG VARCHAR2(256 BYTE)
);

Questions:
1. Can I use autonomous transactions with FORALL?
2. Why is my exception routine not catching all the exceptions the next time?
3. How can I populate my error table?


CREATE OR REPLACE PACKAGE DW_test AS
/*PROCEDURE log_error;*/
PROCEDURE load_data;
END ;
/
create or replace PACKAGE BODY DW_test AS
PROCEDURE log_error
(p_obj_id bulk_err.object_id%TYPE,
p_err_cd bulk_err.ERR_CD%TYPE,
p_err_msg bulk_err.ERR_MSG%TYPE)
IS
PRAGMA autonomous_transaction;
BEGIN
INSERT INTO bulk_err (object_id,err_cd,err_msg) VALUES(p_obj_id,p_err_cd,p_err_msg);
COMMIT;
END ;
PROCEDURE LOAD_data
IS
errors number;
type rec_type is table of bulk1%rowtype;
obj_rec rec_type;
CURSOR obj_cursor IS SELECT * from bulk1;
begin
OPEN obj_cursor;
loop
FETCH obj_cursor BULK COLLECT INTO obj_rec limit 500;
begin
FORALL i IN 1..obj_rec.count save EXCEPTIONS
INSERT INTO bulk_ins values obj_rec(i);
eXCEPTION
WHEN OTHERS THEN
errors := SQL%BULK_EXCEPTIONS.COUNT;
for i IN 1..errors LOOP
dbms_output.put_line
('Error occurred during iteration ' ||
SQL%BULK_EXCEPTIONS(i).ERROR_INDEX ||
' Oracle error is ' ||
SQL%BULK_EXCEPTIONS(i).ERROR_CODE );
END LOOP;
end;
exit when obj_cursor%notfound;
end loop;
commit;
dbms_output.put_line('total recs ' || obj_cursor%rowcount);
end;
end;
/



06:31:14 DWRP> exec dw_test.load_data

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.47
06:31:26 DWRP> set serveroutput on size 1000000
06:32:09 DWRP> exec dw_test.load_data
Error occurred during iteration 501 Oracle error is 1
Error occurred during iteration 501 Oracle error is 1
Error occurred during iteration 501 Oracle error is 1
Error occurred during iteration 501 Oracle error is 1
Error occurred during iteration 501 Oracle error is 1

Why is the count 1 ?

Where is my mistake ?

Thanks for your time .

Tom Kyte
September 25, 2005 - 9:41 am UTC

1) you may write a procedure that is an autonomous transaction, it may use forall.

2) don't know what you mean?


the count is not 1, the error is 1 -

[tkyte@localhost tkyte]$ oerr ora 1
00001, 00000, "unique constraint (%s.%s) violated"
// *Cause: An UPDATE or INSERT statement attempted to insert a duplicate key.
// For Trusted Oracle configured in DBMS MAC mode, you may see
// this message if a duplicate entry exists at a different level.
// *Action: Either remove the unique restriction or do not insert the key.


(look at my code way above - it shows how to use this technique, how you have to count the errors and so on...)

why would you want to use an autonomous transaction here??? Why record failed records if the REST of them do not make it?? Doesn't seem right - if the insert fails - but you logged bad records (some bad records - NOT ALL - you don't know if you finished or not!) so what??? Doesn't seem right to me.

Save Exception and Forall

reader, September 25, 2005 - 2:36 pm UTC

1. I wish to insert the good data into the bulk_ins table and all the bad data into the bulk_err table.
2. So when I hit an exception I want to insert into bulk_err using the pragma -- is this possible, or is there any other method you suggest?
3. I used your code and replaced your tables with mine.
Still the same result.

Error occurred during iteration 501 Oracle error is 1
Error occurred during iteration 501 Oracle error is 1

The Oracle error is now clear -- unique constraint.
But at the end it should tell me that the records errored is 42087, which is the total number of records. It only tells the number of times it has looped, i.e. 85.

Please help.


Tom Kyte
September 25, 2005 - 8:01 pm UTC

1&2) I see that - but WHY the pragma????? why do you (i believe mistakenly) believe it is needed???


give me an example from soup to nuts to work with - generate some data, and show me your issue (because I'm not getting it yet?)

Bulk exceptions ...

reader, September 26, 2005 - 6:48 pm UTC

Tom ,

I think I have tried to do better this time to explain my problem .

I have a table bulk1 .
I have another table bulk_ins .
I wish to transfer data from bulk1 to bulk_ins using the following code.

Then I am analyzing the results at the end.


1. CREATE TABLE BULK1 as select owner,OBJECT_NAME,OBJECT_ID from all_objects where rownum < 1200;
create table bulk_ins as select * from bulk1 where 1=2;
create index bulk_idx on bulk_ins(object_id);
alter table bulk_ins add constraint bulk_pk primary key(object_id);

2.CREATE OR REPLACE PACKAGE DW_test AS
/*PROCEDURE log_error;*/
PROCEDURE load_data;
END ;
/
CREATE OR REPLACE PACKAGE BODY DW_test AS
PROCEDURE log_error
(p_obj_id bulk_err.object_id%TYPE,
p_err_cd bulk_err.ERR_CD%TYPE,
p_err_msg bulk_err.ERR_MSG%TYPE)
IS
PRAGMA autonomous_transaction;
BEGIN
INSERT INTO bulk_err (object_id,err_cd,err_msg) VALUES(p_obj_id,p_err_cd,p_err_msg);
COMMIT;
END ;
PROCEDURE LOAD_data
IS
lcnt NUMBER :=0;
errors NUMBER :=0;
TYPE rec_type IS TABLE OF bulk1%ROWTYPE;
obj_rec rec_type;
CURSOR obj_cursor IS SELECT * FROM bulk1;
BEGIN
OPEN obj_cursor;
LOOP
FETCH obj_cursor BULK COLLECT INTO obj_rec LIMIT 500;
BEGIN
FORALL i IN 1..obj_rec.COUNT save EXCEPTIONS
INSERT INTO bulk_ins VALUES obj_rec(i);
EXCEPTION
WHEN OTHERS THEN
errors := SQL%BULK_EXCEPTIONS.COUNT;
lcnt:= lcnt + errors;
FOR i IN 1..errors LOOP
DBMS_OUTPUT.PUT_LINE
('Error occurred during iteration ' || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX ||
' Oracle error is ' || SQL%BULK_EXCEPTIONS(i).ERROR_CODE );
END LOOP;
END;
EXIT WHEN obj_cursor%NOTFOUND;
END LOOP;
COMMIT;
DBMS_OUTPUT.PUT_LINE('total recs read ' || obj_cursor%rowcount);
DBMS_OUTPUT.PUT_LINE('total recs errored ' || lcnt);
END;
END;
/


The results

First run :No problems as no Duplicates .
17:31:12 > exec dw_test.load_data
total recs read 1199
total recs errored 0
Second Run :Assuming someone executed the load again .
Everything should error out .

17:36:00 > exec dw_test.load_data
Error occurred during iteration 501 Oracle error is 1
Error occurred during iteration 501 Oracle error is 1
Error occurred during iteration 200 Oracle error is 1
total recs read 1199
total recs errored 3

Why does my counter show 3? It should show me 1199 records errored.

Thanks for your time .



Tom Kyte
September 27, 2005 - 10:38 am UTC

thanks, that is what I needed - a reproducible test case - which I have whittled down to this:

drop table bulk_ins;                                                                                                                   
create table bulk_ins ( object_id number );
                                                                                                                   
create or replace procedure LOAD_data
IS
    type numArray is table of number;
    l_data numArray := numArray(1,2);
BEGIN
    FORALL i IN 1..l_data.COUNT save EXCEPTIONS
        INSERT INTO bulk_ins VALUES ( l_data(i) );
EXCEPTION
    WHEN OTHERS THEN
         FOR i IN 1..sql%bulk_exceptions.count
         LOOP
             DBMS_OUTPUT.PUT_LINE
             ('Error occurred during iteration ' ||
               SQL%BULK_EXCEPTIONS(i).ERROR_INDEX ||
              ' Oracle error is ' ||
               SQL%BULK_EXCEPTIONS(i).ERROR_CODE );
    END LOOP;
END;
/
                                                                                                                   
alter table bulk_ins add constraint bulk_pk primary key(object_id);
exec load_data
exec load_data
alter table bulk_ins drop constraint bulk_pk;
create index bulk_idx on bulk_ins(object_id);
alter table bulk_ins add constraint bulk_pk primary key(object_id);
exec load_data



It appears to be the NON-unique index here. It is messing it up:

ops$tkyte@ORA9IR2> create table bulk_ins ( object_id number );
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace procedure LOAD_data
  2  IS
  3          type numArray is table of number;
  4          l_data numArray := numArray(1,2);
  5  BEGIN
  6          FORALL i IN 1..l_data.COUNT save EXCEPTIONS
  7                  INSERT INTO bulk_ins VALUES ( l_data(i) );
  8  EXCEPTION
  9          WHEN OTHERS THEN
 10           FOR i IN 1..sql%bulk_exceptions.count
 11           LOOP
 12               DBMS_OUTPUT.PUT_LINE
 13               ('Error occurred during iteration ' ||
 14                 SQL%BULK_EXCEPTIONS(i).ERROR_INDEX ||
 15                ' Oracle error is ' ||
 16                 SQL%BULK_EXCEPTIONS(i).ERROR_CODE );
 17      END LOOP;
 18  END;
 19  /
 
Procedure created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> alter table bulk_ins add constraint bulk_pk primary key(object_id);
 
Table altered.
 
ops$tkyte@ORA9IR2> exec load_data
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec load_data
Error occurred during iteration 1 Oracle error is 1
Error occurred during iteration 2 Oracle error is 1
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> alter table bulk_ins drop constraint bulk_pk;
 
Table altered.
 
ops$tkyte@ORA9IR2> create index bulk_idx on bulk_ins(object_id);
 
Index created.
 
ops$tkyte@ORA9IR2> alter table bulk_ins add constraint bulk_pk primary key(object_id);
 
Table altered.
 
ops$tkyte@ORA9IR2> exec load_data
Error occurred during iteration 3 Oracle error is 1
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2>


If you run the test with a non-unique index, you get one error and the subscript is beyond the end of the array.

If you run the test with a unique index in place - it works as expected.



BUT - only when ALL rows fail!  When I make 2 out of three fail:

ops$tkyte@ORA9IR2> create table bulk_ins ( object_id number );
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace procedure LOAD_data( p_extra in number )
  2  IS
  3          type numArray is table of number;
  4          l_data numArray := numArray(1,2, p_extra );
  5  BEGIN
  6          FORALL i IN 1..l_data.COUNT save EXCEPTIONS
  7                  INSERT INTO bulk_ins VALUES ( l_data(i) );
  8  EXCEPTION
  9          WHEN OTHERS THEN
 10           FOR i IN 1..sql%bulk_exceptions.count
 11           LOOP
 12               DBMS_OUTPUT.PUT_LINE
 13               ('Error occurred during iteration ' ||
 14                 SQL%BULK_EXCEPTIONS(i).ERROR_INDEX ||
 15                ' Oracle error is ' ||
 16                 SQL%BULK_EXCEPTIONS(i).ERROR_CODE );
 17      END LOOP;
 18  END;
 19  /
 
Procedure created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> alter table bulk_ins add constraint bulk_pk primary key(object_id);
 
Table altered.
 
ops$tkyte@ORA9IR2> exec load_data(100)
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec load_data(200)
Error occurred during iteration 1 Oracle error is 1
Error occurred during iteration 2 Oracle error is 1
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> alter table bulk_ins drop constraint bulk_pk;
 
Table altered.
 
ops$tkyte@ORA9IR2> create index bulk_idx on bulk_ins(object_id);
 
Index created.
 
ops$tkyte@ORA9IR2> alter table bulk_ins add constraint bulk_pk primary key(object_id);
 
Table altered.
 
ops$tkyte@ORA9IR2> exec load_data(300)
Error occurred during iteration 1 Oracle error is 1
Error occurred during iteration 2 Oracle error is 1
 
PL/SQL procedure successfully completed.
 


Please - contact support with this test case - it reproduces in 10gR1 and before (but appears to work in 10gR2)
 

Excellent !

Reader, September 28, 2005 - 8:14 am UTC

Thanks a lot !
Very helpful .
One more question -- is it a good practice to create a non-unique index and apply a PK constraint on it? We are doing this a lot, but you mentioned in the previous post that the "non-unique index is messing it up".
Should we stop using this approach?
What problems could be expected?

Thanks again .

Tom Kyte
September 28, 2005 - 10:03 am UTC

there are reasons for using a non-unique index, but first - what was your goal behind doing it? why did you do it? (then i'll comment as to the "good" or "bad" of it)

Situation similar to Chen's question above

Rahul, September 28, 2005 - 12:27 pm UTC

Tom,

We have code similar to what Chen posted above (How about multi DML in Bulk process?). Trying to speed it up using the bulk collect. We are running on 9iR2.

I cannot think of a way to simulate the rollback with bulk collect (without actually writing the DML for putting the data back the way it was).

Is there a way to do this ?

Thanks for your help!

Tom Kyte
September 28, 2005 - 12:40 pm UTC

if you need to do a unit of work as a unit of work - you either do "an array of this unit of work" or not

Meaning, to have the control you seem to be describing, you could not array process, since you have N statements that travel together as a unit of work.

Reasons for non-unique Index ..

Reader, October 01, 2005 - 5:05 pm UTC

"there are reasons for using a non-unique index, but first - what was your goal behind doing it? why did you do it? (then i'll comment as to the "good" or "bad" of it)" ......

1. We have a 5-column index.
We have a PK with 3 columns.
We use the same index for this constraint.


Eagerly waiting for your list of "good" and "bad" ---


Thanks

Tom Kyte
October 01, 2005 - 9:18 pm UTC

that was a good, legitimate reason.

Some more ...

reader, October 01, 2005 - 9:57 pm UTC

You told us that you would give us more good and bad reasons for having a non-unique index.

Thanks ,



Tom Kyte
October 02, 2005 - 10:35 am UTC

I said I'd comment on whether your use of a non-unique index was a "good idea", in this case I said it was valid - yes.

Non-Unique Index

Reader, October 03, 2005 - 8:04 am UTC

Is it a bad idea to place constraints (pk and unique)
on non-unique indexes? Should we use this freely?

Thanks ,

Tom Kyte
October 03, 2005 - 11:10 am UTC

Is it a bad idea to ask the database to use a non-unique index to enforce a unique or primary key?

In general - no, it is not a bad idea, if you have deferrable constraints, it would be mandatory in fact.

It *may* have a subtle, small, tiny impact on the optimizer (no index unique scan - always the chance of duplicates in there).

I would not (I do not) make all constraints deferrable however (especially primary keys - if you do that, make sure to add a redundant NOT NULL constraint!! if the primary key is deferrable - so is the NOT NULL part, and the index can be used in fewer circumstances)

Do it when you have a good reason to do so.
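
A minimal sketch of both cases, with illustrative names -- a primary key piggy-backing on a wider non-unique index, and a deferrable constraint (which is always policed by a non-unique index):

create table demo ( a number, b number, c number );

-- a wider, non-unique index the application needs anyway
create index demo_idx on demo ( a, b, c );

-- the primary key can be enforced by that existing non-unique index
alter table demo add constraint demo_pk
  primary key ( a, b ) using index demo_idx;

-- a deferrable constraint must use a non-unique index
alter table demo add constraint demo_uk
  unique ( c ) deferrable initially immediate;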

Verifying header and trailer

VA, October 04, 2005 - 6:50 am UTC

Suppose I have a external table/file like

HEADER,20051003
pk1,10
pk2,20
pk3,30
TRAILER,60

How can I load this file into a table only if the header date matches a date in the database and the trailer count matches the data records (60=10+20+30)?

Thanks

Tom Kyte
October 04, 2005 - 4:23 pm UTC

well, select * from et where rownum = 1 will give you the first record. select it, verify it.

select sum(case when str <> 'TRAILER' then value end) sum1,
sum(case when str = 'TRAILER' then value end) sum2
from et
where str <> 'HEADER'

compare sum1 and sum2

and then load.








Verifying header and trailer

VA, October 04, 2005 - 4:58 pm UTC

Well, I was hoping to do this using the minimum (1!) number of "passes" of the file since it's a large file.

1. select * from et where rownum=1 - Is this equivalent to sqlldr's load=N option i.e. it only reads 1 record and returns? So it would be instantaneous?

2. Is there a way I can load the file, header and trailer into my real table and do the verification as part of the INSERT .. select ... from et itself? Using sum(trailer.col) over () or something like that?

Thanks

Tom Kyte
October 04, 2005 - 8:32 pm UTC

1) yeah...

2) anything like that is going to be a virtual two pass regardless - need to SUM() and need the details. If you do the running total, it'll have to "sort" the data (temp is the second pass).

it'll either be two pass on file or two pass on file+table...

(of course, since I said that - there will be 100 ways to avoid it by tomorrow ;) challenge time - I don't see a way to avoid it using straight SQL)

BULK_EXCEPTIONS:oracle version 9i release 2

Padmaja, March 01, 2006 - 1:07 pm UTC

Hello Tom,
I am using oracle version 9i release 2.
And like Georg I wanted to get the key record of the table when I try to use bulk exception.
When I use the following code it works fine.

declare
l_cnt number :=0;
type array is table of t%rowtype index by binary_integer;
data array;
errors number;
dml_errors exception;
pragma exception_init(dml_errors, -24381);
cursor c is select * from t;
begin
open c;
loop
fetch c bulk collect into data limit 100;
begin

forall i in 1..data.count save exceptions
insert into t2 values data(i);

exception
when dml_errors then
errors := sql%bulk_exceptions.count;
l_cnt := l_cnt + errors;
for i in 1..errors loop
dbms_output.put_line
('Error occurred during iteration ' ||
sql%bulk_exceptions(i).error_index ||
' Oracle error is ' ||
sql%bulk_exceptions(i).error_code );
end loop;
end;
exit when c%notfound;
end loop;
close c;

dbms_output.put_line( l_cnt || ' total errors' );

end;
/

But when I tried to use data( SQL%BULK_EXCEPTIONS(i).ERROR_INDEX )
in the dbms_output statement I got the following error:
PLS-00306: wrong number or types of arguments in call to '||'
This is how I used it in my modified code.

declare
l_cnt number :=0;
type array is table of t%rowtype index by binary_integer;
data array;
errors number;
dml_errors exception;
pragma exception_init(dml_errors, -24381);
cursor c is select * from t;
begin
open c;
loop
fetch c bulk collect into data limit 100;
begin

forall i in 1..data.count save exceptions
insert into t2 values data(i);

exception
when dml_errors then
errors := sql%bulk_exceptions.count;
l_cnt := l_cnt + errors;
for i in 1..errors loop
dbms_output.put_line
('Error occurred during iteration ' ||
sql%bulk_exceptions(i).error_index ||
' Oracle error is ' ||
sql%bulk_exceptions(i).error_code ||' Error Record is :'||
data( sql%bulk_exceptions(i).error_index);
end loop;
end;
exit when c%notfound;
end loop;
close c;

dbms_output.put_line( l_cnt || ' total errors' );

end;
/
I don't know where I went wrong. I tried in many ways, using just data(i), and still I am not able to get the record in the exception part of the code.
Can you please tell me where I went wrong?

Thanks in Advance
Padmaja


Tom Kyte
March 01, 2006 - 1:54 pm UTC

dbms_output.put_line
('Error occurred during iteration ' ||
sql%bulk_exceptions(i).error_index ||
' Oracle error is ' ||
sql%bulk_exceptions(i).error_code ||' Error Record is
:'||
data( sql%bulk_exceptions(i).error_index);


does seem to be missing a ) doesn't it. at the end

BULK_EXCEPTIONS:oracle version 9i release 2

Padmaja, March 01, 2006 - 5:20 pm UTC

Yes, it does miss a ) but I still got the same error; the missing ) is just a typo here, sorry. In my code it is there and it still gives the same error. The following is the correction. Thank you for the response.
dbms_output.put_line
('Error occurred during iteration ' ||
sql%bulk_exceptions(i).error_index ||
' Oracle error is ' ||
sql%bulk_exceptions(i).error_code ||' Error Record is
:'||
data( sql%bulk_exceptions(i).error_index));

I am able to get the record key, but with a different way of bulk loading; the code is:

DECLARE
TYPE l_tt1 IS TABLE OF t.col1%TYPE INDEX BY BINARY_INTEGER;
TYPE l_tt2 IS TABLE OF t.col2%TYPE INDEX BY BINARY_INTEGER;
TYPE l_tt3 IS TABLE OF t.col3%TYPE INDEX BY BINARY_INTEGER;
TYPE l_tt4 IS TABLE OF t.col4%TYPE INDEX BY BINARY_INTEGER;
TYPE l_tt5 IS TABLE OF t.col5%TYPE INDEX BY BINARY_INTEGER;

l_t1 l_tt1;
l_t2 l_tt2;
l_t3 l_tt3;
l_t4 l_tt4;
l_t5 l_tt5;

BEGIN

SELECT * BULK COLLECT INTO
l_t1,l_t2,l_t3,l_t4,l_t5
FROM t;



FORALL j IN l_t1.FIRST..l_t1.LAST
SAVE EXCEPTIONS
INSERT INTO t2
VALUES(l_t1(j),
l_t2(j),
l_t3(j),
l_t4(j),
l_t5(j)
);

COMMIT;

EXCEPTION

WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE(CHR(10));
DBMS_OUTPUT.PUT_LINE(SQL%BULK_EXCEPTIONS.COUNT);
FOR i IN 1..SQL%BULK_EXCEPTIONS.COUNT LOOP
DBMS_OUTPUT.PUT_LINE('SQL CODE : '||SQL%BULK_EXCEPTIONS(i).ERROR_CODE||' - '||SQLERRM||' '||SQLCODE||' - '||SQL%BULK_EXCEPTIONS(i).ERROR_INDEX);
dbms_output.put_line( l_t1(sql%bulk_exceptions(i).error_index) || ',' || l_t2(sql%bulk_exceptions(i).error_index);
END LOOP;

END;
/
but I have a master table with 207 columns, and it is a pain to declare a separate INDEX BY BINARY_INTEGER collection for every column. I also have several tables with more than 30 columns and would need to modify the code for those too. The code you mentioned is excellent except for the exception handling.

Tom Kyte
March 02, 2006 - 9:07 am UTC

so, give us a small concise yet 100% complete example please. create table and all.

BULK_EXCEPTIONS:oracle version 9i release 2

padmaja, March 01, 2006 - 5:24 pm UTC

I am sorry, again a ) is missing. The correction is:
dbms_output.put_line( l_t1(sql%bulk_exceptions(i).error_index) || ',' ||
l_t2(sql%bulk_exceptions(i).error_index) );


BULK_EXCEPTIONS:oracle version 9i release 2

Padmaja, March 02, 2006 - 12:00 pm UTC

OK, here is the example with the table and code.
create table t ( col1 varchar2(9) not null ,
                 col2 varchar2(20) not null,
                 col3 varchar2(2));
CREATE UNIQUE INDEX pk_t ON t
(col1,col2);

ALTER TABLE t ADD (
  PRIMARY KEY (col1,col2));             

insert into t values ('111111111','2222222222222221','01');
insert into t values ('111111112','2222222222222222','01');
insert into t values ('111111113','2222222222222223','01');
insert into t values ('111111114','2222222222222224','01');
insert into t values ('111111115','2222222222222225','01');
insert into t values ('111111116','2222222222222226','01');
insert into t values ('111111117','2222222222222227','01');
  create table t2 ( col1 varchar2(9) not null ,
                 col2 varchar2(20) not null,
                 col3 varchar2(2));
CREATE UNIQUE INDEX pk_t2 ON t2
(col1,col2);

ALTER TABLE t2 ADD (
  PRIMARY KEY (col1,col2));


SQL> CREATE OR REPLACE PROCEDURE load_t2 IS
  2  -----------------------------------------------------
  3  --Declare all the variables here
  4  ------------------------------------------------------- 
  5            l_cnt number :=0;
  6            type array is table of t%rowtype index by binary_integer;
  7            data array;
  8            errors number;
  9            dml_errors exception;
 10            pragma exception_init(dml_errors, -24381);
 11            cursor c is select * from t where (col1,col2) not in (select col1,col2 from t2) ;
 12  
 13  ----------------------------------------------------------
 14  --MAIN 
 15  ---------------------------------------------------------
 16  begin
 17            open c;
 18            loop
 19                fetch c bulk collect into data limit 100;
 20                begin
 21               
 22                     forall i in 1..data.count save exceptions
 23                       insert into t2 values data(i);
 24                        
 25                exception
 26                when dml_errors then
 27                     errors := sql%bulk_exceptions.count;
 28                     l_cnt := l_cnt + errors;
 29                     for i in 1..errors loop
 30                         dbms_output.put_line('Error occurred during iteration ' ||
 31                           sql%bulk_exceptions(i).error_index ||
 32                          ' Oracle error is ' ||
 33                          sql%bulk_exceptions(i).error_code )
 34                          ;
 35                     end loop;
 36            end;
 37             exit when c%notfound;
 38          end loop;
 39          close c;
 40  
 41        dbms_output.put_line( l_cnt || ' total errors' );
 42  
 43  end load_t2;
 44  /

Procedure created.

The above procedure is without display of the failing key record; it compiles fine. The following code is with the key record.
SQL> CREATE OR REPLACE PROCEDURE load_t2 IS
  2  -----------------------------------------------------
  3  --Declare all the variables here
  4  ------------------------------------------------------- 
  5            l_cnt number :=0;
  6            type array is table of t%rowtype index by binary_integer;
  7            data array;
  8            errors number;
  9            dml_errors exception;
 10            pragma exception_init(dml_errors, -24381);
 11            cursor c is select * from t where (col1,col2) not in (select col1,col2 from t2) ;
 12  
 13  ----------------------------------------------------------
 14  --MAIN 
 15  ---------------------------------------------------------
 16  begin
 17            open c;
 18            loop
 19                fetch c bulk collect into data limit 100;
 20                begin
 21               
 22                     forall i in 1..data.count save exceptions
 23                       insert into t2 values data(i);
 24                        
 25                exception
 26                when dml_errors then
 27                     errors := sql%bulk_exceptions.count;
 28                     l_cnt := l_cnt + errors;
 29                     for i in 1..errors loop
 30                         dbms_output.put_line('Error occurred during iteration ' ||
 31                           sql%bulk_exceptions(i).error_index ||
 32                          ' Oracle error is ' ||
 33                          sql%bulk_exceptions(i).error_code ||' Error Record is:'
 34                          ||
 35                          data(sql%bulk_exceptions(i).error_index))
 36                          ;
 37                     end loop;
 38            end;
 39             exit when c%notfound;
 40          end loop;
 41          close c;
 42  
 43        dbms_output.put_line( l_cnt || ' total errors' );
 44  
 45  end load_t2;
 46  /

Warning: Procedure created with compilation errors.
SQL> show errors
Errors for PROCEDURE LOAD_T2:

LINE/COL ERROR
-------- -----------------------------------------------------------------
30/24    PL/SQL: Statement ignored
30/45    PLS-00306: wrong number or types of arguments in call to '||'


thank you,
Padmaja 

Tom Kyte
March 02, 2006 - 1:16 pm UTC

well, data is a complex "record"


dbms_output.put_line( data(0) );

would have a similar error, there is no implicit "print out a record" method, you have to access data(0).attribute - a simple scalar


You have to access the scalars using ||, it doesn't know what it means to concatenate a string with a " table of t%rowtype"
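
So in the handler you concatenate the individual attributes of the record, not the record itself. A minimal sketch, assuming the columns are named col1 and col2 as in the example tables above:

for i in 1..errors loop
    dbms_output.put_line
    ( 'Error occurred during iteration ' ||
      sql%bulk_exceptions(i).error_index ||
      ' Oracle error is ' ||
      sql%bulk_exceptions(i).error_code ||
      -- a record cannot be concatenated directly; print its scalar attributes
      ' Error record is: ' ||
      data( sql%bulk_exceptions(i).error_index ).col1 || ',' ||
      data( sql%bulk_exceptions(i).error_index ).col2 );
end loop;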

BULK_EXCEPTIONS:oracle version 9i release 2

PAdmaja, March 02, 2006 - 3:13 pm UTC

Thank you for the wonderful answer. The following statement works:
data(sql%bulk_exceptions(i).error_index).col1

Thank you again. The code works great.
Padmaja

Use of bulk insert in 9i and later

Tony, November 02, 2006 - 1:50 pm UTC

Tom,

Up to version 8i, the PL/SQL and SQL engines were different, and thus bulk insert was used to reduce the context switch. In 9i and later versions, the PL/SQL and SQL engines are one and the same. What is the use of bulk insert here? Please explain. I think it's of no use in 9i and 10g.

Tom Kyte
November 02, 2006 - 7:33 pm UTC

till version NNNNN the plsql and sql engines are different, nothing changed there.

plsql uses a COMMON SQL PARSER - so that things like analytic functions are understood by the plsql parsing process.

it is the use of a COMMON PARSER that changed in 9i, nothing about context switching and such changed.
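
In other words, the engines are still separate at run time; each SQL statement executed from PL/SQL is still a context switch into the SQL engine, and FORALL still collapses N of those switches into one. A rough sketch of the difference, reusing the t and t2 tables from the examples above:

declare
    type array is table of t%rowtype index by binary_integer;
    data array;
begin
    select * bulk collect into data from t;

    -- slow-by-slow: one context switch into the SQL engine per row
    -- for i in 1 .. data.count loop
    --     insert into t2 values data(i);
    -- end loop;

    -- bulk bind: the whole array is shipped to the SQL engine in one call
    forall i in 1 .. data.count
        insert into t2 values data(i);

    commit;
end;
/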



Data transfer options

A reader, November 15, 2006 - 9:08 pm UTC

We are looking at options for moving data from one Oracle DB to another on a regular basis (frequency anywhere from an hour to 24 hours). The source system is an OLTP system (ERP application). The destination DB is an operational data store. The options we are considering are
1) Create Materialized views
2) Write custom SQL to query the source data tables based on the last updated time stamp and insert into the destination DB.
3) Create triggers on the source data tables; these triggers would copy the data to shadow tables (tables with a similar structure, but a different table name). Read data from the shadow tables and insert into the destination DB.
4) Use Java messaging to copy data from the shadow tables (option #3) and insert into destination database.

I would like to get your opinion on the above solutions or other alternate solutions. Is Oracle Warehouse Builder the right tool in this situation?

DB version: 8.1.7
Data volume in the tables: about 10000 records every hour.

Thank you



Tom Kyte
November 16, 2006 - 3:06 pm UTC

2) be careful with that axe eugene (references anyone to that quote :)

</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:16998677475837 <code>


here is my option:

5) use software written this century where things like streams and change data capture exist and can do the job for you.

Using SAVE EXCEPTIONS

Kumar, November 21, 2006 - 5:21 pm UTC

Tom,

I was trying to understand the application of SAVE EXCEPTIONS when using BULK COLLECT. Is this clause applicable only when one uses the FORALL clause?
How do I catch the DML errors when using BULK COLLECT with a FOR i IN 1..<VAR>.COUNT LOOP ... construct?

So if I am handling, let's say, 100 rows and each row goes through lots of processing before ending up in tables as updates or inserts, and then 10 rows have DML errors, the remaining 90 should still be processed.

If you could provide a small example, that would help.




Tom Kyte
November 22, 2006 - 4:10 pm UTC

you do not use bulk collect in a for i in 1..count

you do not use bulk collect in a forall i either

you use save exceptions with forall processing to ask oracle to process as many records from your array as it can, and save the exceptions in another array.
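
For the scenario described in the question (heavy per-row processing, then an insert where the 10 bad rows get logged and the other 90 still go in), a skeleton could look like the sketch below. The target table tbl2 and the error table err_log are placeholder names of my own, not objects from the question:

declare
    type array is table of tbl%rowtype index by binary_integer;
    data        array;
    dml_errors  exception;
    pragma exception_init( dml_errors, -24381 );
    cursor c is select * from tbl where flag is null;
begin
    open c;
    loop
        fetch c bulk collect into data limit 100;

        for i in 1 .. data.count loop
            null;   -- per-row processing / transformation of data(i) goes here
        end loop;

        begin
            forall i in 1 .. data.count save exceptions
                insert into tbl2 values data(i);
        exception
            when dml_errors then
                -- the good rows are already inserted; record only the failures
                for i in 1 .. sql%bulk_exceptions.count loop
                    insert into err_log( tbl_pk, err_code )
                    values ( data( sql%bulk_exceptions(i).error_index ).tbl_pk,
                             sql%bulk_exceptions(i).error_code );
                end loop;
        end;

        exit when c%notfound;
    end loop;
    close c;
    commit;
end;
/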

Kumar, November 25, 2006 - 11:37 pm UTC

So what I understand is that you could use:

In the package:
TYPE MY_TYP is table of tbl%rowtype index by binary_integer;
my_coll my_typ;
.....
In the package body:
cursor cursor1 is select * from tbl where flag is null;
open cursor1;
loop
fetch cursor1 bulk collect into my_coll;
for i in 1..my_coll.count
loop
l_flag := my_coll(i).flag;
...
end loop;
update tbl set flag='DONE' where flag=l_flag;
exit when cursor1%notfound;
end loop;
EXCEPTION
when others
then
update tbl set flag='FAIL' where flag=l_flag;
---Call the proc again to process all the records in the --cursor
PKG.PROC

I thought we could do this. Or am I understanding it incorrectly? Please clarify.

If my above statements are OK, then is calling pkg.proc in the exception section a possible solution? If there are any errors, the flag column of that particular record is updated to 'FAIL', so the next time the proc is called it does not pick up that failed record.

Tom Kyte
November 26, 2006 - 9:20 am UTC

not really sure what you are asking.

not at all.

Correction to my above response

Kumar, November 25, 2006 - 11:57 pm UTC

A correction. Please read as under:

So what I understand is that you could use:

In the package:
TYPE MY_TYP is table of tbl%rowtype index by binary_integer;
my_coll my_typ;
.....
In the package body:
cursor cursor1 is select * from tbl where flag is null;
open cursor1;
loop
fetch cursor1 bulk collect into my_coll;
for i in 1..my_coll.count
loop
l_pk := my_coll(i).tbl_pk;
...
end loop;
update tbl set flag='DONE' where tbl_pk=l_pk;
exit when cursor1%notfound;
end loop;
EXCEPTION
when others
then
update tbl set flag='FAIL' where tbl_pk=l_pk;
---Call the proc again to process all the records in the --cursor
PKG.PROC

I thought we could do this. Or am I understanding it incorrectly? Please clarify.

If my above statements are OK, then is calling pkg.proc in the exception section a possible solution? If there are any errors, the flag column of that particular record is updated to 'FAIL', so the next time the proc is called it does not pick up that failed record.






Kumar, November 27, 2006 - 9:36 pm UTC

The clarification that I sought was about:

a) Bulk Collect usage.
b) A solution to handle errors that would take the code to the exception block and then continue processing the remaining records fetched by the cursor. My idea was, in a way, to simulate how FORALL .. SAVE EXCEPTIONS handles exceptions without stopping the process in the code.

Please go through the earlier code and give your suggestions.


Tom Kyte
November 27, 2006 - 9:45 pm UTC

a) makes no sense in the context, anything...

b) if you are using 10gr1 and before, save exceptions would work if you have to load good records and log bad. in 10gr2 and after, "LOG ERRORS" on a single sql statement would be most appropriate instead.
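
For anyone reading this on 10gR2 or later, a minimal sketch of that LOG ERRORS approach (src and dest are placeholder table names, not objects from the question):

-- one-time setup: creates the default ERR$_DEST error logging table
exec dbms_errlog.create_error_log( dml_table_name => 'DEST' );

-- a single statement; rows that fail go to ERR$_DEST instead of failing the insert
insert into dest
select * from src
log errors into err$_dest ('nightly load') reject limit unlimited;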

Version is 9i

Kumar, November 27, 2006 - 10:29 pm UTC

We are using 9i. Why is BULK COLLECT not suitable in this context?

I thought fetching 100 records, processing and fetching again was a viable option. Please comment on this.

Thanks for the quick response.

Tom Kyte
November 28, 2006 - 7:08 am UTC

if you can do it in a single sql statement - do not write procedural code.

simply put, that is it.

if you can do it in a single sql statement - do not write procedural code.


Now, if you cannot do it in a single sql statement, by all means make use of all of the tools available to you such as bulk collect, forall, save exceptions and so on to make the code you are forced to write more efficient.

but

if you can do it in a single sql statement - do not write procedural code.

Comment on Exception Handler

Kumar, November 28, 2006 - 10:56 am UTC

Thanks Tom for the response. But I thought you would comment on calling the package.proc in the exception section to enable processing all the records in the cursor. Please comment, or suggest a better way of handling it.



Tom Kyte
November 28, 2006 - 12:00 pm UTC

i don't know what you mean.

Kumar, November 28, 2006 - 5:33 pm UTC

This is from the dummy code presented earlier:
.....
end loop;
update tbl set flag='DONE' where tbl_pk=l_pk;
exit when cursor1%notfound;
end loop;
EXCEPTION
when others
then
update tbl set flag='FAIL' where tbl_pk=l_pk;
---Call the proc again to process all the records in the --cursor
PKG.PROC
.....

This package "PKG" has a procedure "PROC". If there are any exceptions while processing the cursor (for eg: primary key violation), then the exception section updates the flag to 'FAIL' and I would like to continue processing the other records. Hence I have called the PKG.PROC from within the exception section.

I would like you to comment/suggest on this approach.


Tom Kyte
November 28, 2006 - 8:07 pm UTC

i didn't write the code...

Code Written by me

Kumar from OH, November 28, 2006 - 9:34 pm UTC

Tom,

That was the sample code that I had written. Sorry for confusing you.

Please look at and comment/suggest.

Thanks.

Something strange...

J. Laurindo Chiappa, November 29, 2006 - 10:09 am UTC

Kumar, in this code, if the first update (the one saying update tbl set flag='DONE') fails due to some unavoidable reason (say, a dup key in the record), the flow will go to the EXCEPTION WHEN OTHERS, and there you say:

when others
then
update tbl set flag='FAIL' where tbl_pk=l_pk;

BUT the table tbl IS the same and the arguments ARE the same, SO if the first update failed, the second update listed above WILL probably fail again, the flag column will NOT get anything, and I really don't see the "point" here...

Regards,

Chiappa


Kumar, December 02, 2006 - 6:51 pm UTC

The DONE or FAIL update would not fail, as the update is based on the surrogate key. It is the other tables in the proc that might end up in the exception (though that is not likely, due to the handling of RI in the code itself). But if for some reason the code moves to the exception, then the package.proc is called again after updating the record being processed as FAIL. So the pkg.proc that is called in the exception would not get the FAILed record to process.

Thanks for the comments.

Tom Kyte
December 02, 2006 - 8:03 pm UTC

still doesn't make sense.

think about it - if the "done" update failed, why wouldn't the "fail" update.

this code isn't making any sort of sense.

Rajesh, March 30, 2007 - 6:11 am UTC

Hi Tom,
Please look at this ETL statement.

insert into t_stg
select product,trans_date,sum(product_value)
from prod_data pd,prod_pack pp
where pd.country = :country
and pp.country = pd.country
and pp.prod_code = pd.prod_code
and pd.trans_date between :st_dt and :en_date
group by product,trans_date

This query runs daily. The prod_data and prod_pack tables hold 3 months of data at any time; when the most recent one day of transaction data comes into these tables, the oldest one day of transaction data is removed.

Instead of loading 3 months of data daily, we plan to load the 3 months of data once, and subsequent loads will load only the latest one day of transaction data (with the oldest one day of data later removed through a delete statement).

An extra condition needs to be added which checks: if there are no transaction dates in the t_stg table yet, insert all data (the first load); afterwards, load only the latest one day of data.

and pd.trans_date > (select max(trans_date) from t_stg ...

How should I modify this insert for that?
Your ideas would be welcome.
Thanks Tom.

Tom Kyte
March 30, 2007 - 1:35 pm UTC

you seem to know how to modify it? or I did not follow you clearly....

you add a predicate that filters the rows...
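
Something along those lines, perhaps -- a sketch only, not the definitive statement: the NVL makes the very first run (empty t_stg) pick up everything, and later runs pick up only rows newer than what is already staged. Keep or drop the :st_dt/:en_date bounds depending on whether that daily window is still needed.

insert into t_stg
select product, trans_date, sum(product_value)
  from prod_data pd, prod_pack pp
 where pd.country   = :country
   and pp.country   = pd.country
   and pp.prod_code = pd.prod_code
   and pd.trans_date > ( select nvl( max(trans_date), to_date('01-01-0001','dd-mm-yyyy') )
                           from t_stg )
 group by product, trans_date;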

Little help Please

Alistair, May 03, 2007 - 4:40 am UTC

Tom,

A little help please. I need to do some ETL in PL/SQL but am having problems finding the easiest way. Could you give me a pointer please?

I have a multi-million row base table, but an extract of the data looks something like this:


DIM_1 DIM_2 DIM_3 Year Value

1 2 3 1998 4
1 2 3 1999 23
1 2 4 1998 2
1 2 5 1999 33


I wish to write some procedures that would allow you to add, subtract, or multiply various subsets and create new rows in the table. I was thinking of writing a separate procedure for each.

The user would then pick, say, all rows with DIM_1='1', DIM_2='2', DIM_3='3' to add to the rows with DIM_1='1', DIM_2='2', DIM_3='5' where the years are the same.

The Output would look like this (X= New value for this subset)

DIM_1 DIM_2 DIM_3 Year Value

1 2 X 1998 4
1 2 x 1999 55

What would be the easiest way of doing this in PL/SQL? I can do it simply in pure SQL, but it needs to be in a PL/SQL procedure as it is part of a bigger amount of work.

create table dim_test
(dim_1 varchar2(20),
dim_2 varchar2(20),
dim_3 varchar2(20),
year varchar2(20),
value number)
/
insert into dim_test values ('1','2','3','1998',4);
insert into dim_test values ('1','2','3','1999',23);
insert into dim_test values ('1','2','4','1998',2);
insert into dim_test values ('1','2','5','1999',33);
Tom Kyte
May 03, 2007 - 5:40 pm UTC

... I can do it simply in pure SQL, but it needs to be in a PL/SQL procedure as it is part of a bigger amount of work ...


you have greatly missed the point.

if you can do it in sql, you are done. finished. if you want to put the sql into plsql - cool, go for it.

if you can do it in sql, do not do it in procedural code.
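
For instance, the "add two subsets" case against the dim_test table above might be wrapped up like the sketch below. The procedure name, the parameter list, and the idea of writing the result under a caller-supplied new DIM_3 value are my own assumptions, not part of the original question:

create or replace procedure add_subsets
( p_dim_1     in dim_test.dim_1%type,
  p_dim_2     in dim_test.dim_2%type,
  p_dim_3_a   in dim_test.dim_3%type,
  p_dim_3_b   in dim_test.dim_3%type,
  p_dim_3_new in dim_test.dim_3%type )
as
begin
    -- one insert .. select does all the work; the procedure just parameterises it
    insert into dim_test ( dim_1, dim_2, dim_3, year, value )
    select p_dim_1, p_dim_2, p_dim_3_new, year, sum(value)
      from dim_test
     where dim_1 = p_dim_1
       and dim_2 = p_dim_2
       and dim_3 in ( p_dim_3_a, p_dim_3_b )
     group by year;
end;
/

A call such as exec add_subsets( '1', '2', '3', '5', 'X' ) would then create one new row per year found in either subset; subtract and multiply would follow the same pattern with a different expression (or a self-join) in the select.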

bulk processing

PA, August 15, 2007 - 2:48 pm UTC

Dear Tom,

I love to read the solutions on this blog. They are very useful in real-life situations as well as for learning enhanced and advanced stuff :-)

1)

I am reiterating Chen's question above.

begin
   for i in somecur loop
    begin
       savepoint a;
       insert into someTable1(a,b) values (i.a,i.b);
       insert into someTable2(c,d) values (i.c,i.d);
       insert into someTable3(e,f) select e, f from sourcetable where col1 = i.a and col2 = i.b;
    exception
       when others then
         rollback to a;
         insert into errortable (a,b,c,d) values(i.a,i.b,i.c,i.d);
    end;
  end loop;
  commit;
end;


I wonder whether in Oracle 10g we can do the same in a FORALL loop instead of a FOR loop.

Alternatively, can we do something like this -

begin
   forall i in 1 .. somecur.count 
   loop
         procedure_to_process_multiple_records_here(i.comma_separated_columns);
  end loop;
  commit;
end;


Also, please give some tips on the ETL process -- some dos and don'ts, as I need to devise an ETL process whereby a staging schema will be used to populate master records for some 200 tables in the production schema. Half of the tables will be repeated for each branch, and there could be anywhere between 500 and 1000 branches. The database size is expected to be around 200 GB to 500 GB for the destination schema, and both schemas should be in the same database. (Moreover, the procedure should also repopulate the changed data.) I have decided to use a MERGE statement for that second requirement. How should I control transaction management, i.e. commit and rollback (branch-wise, or branch-wise and table-wise), assuming there will be no integrity issues in the schema data? I think there would be a lot of undo data, so we may have to consider this. I have also decided to make my procedure re-runnable using the message logging system. Please do comment on this.
Should I consider disabling the constraints? What about audit triggers? Business demands auditing.

A checklist from your side for taking the relevant decisions would be very useful, so that I won't miss out on anything.

Thanks and Regards,
Piyush

ETL process

P, August 25, 2007 - 1:40 pm UTC

Dear Tom,
Please do comment on the above question.
Tom Kyte
August 27, 2007 - 4:18 pm UTC

ok, it did not make sense to me, you reiterated someone's question and then changed it hugely into something that doesn't relate to the original question at all.

now what? :)

no, you cannot forall a stored procedure call. well, you could, but it would not be as efficient as just calling it N times.


the only ETL do and don't I know of:

do: read the data warehousing guide
do: understand how oracle works
do: things in SQL, not slow by slow

don't: listen to anyone that says "take all of the data OUT of the database, do stuff to it and put it back in"

How strange, please help

Loïc, February 04, 2008 - 3:52 pm UTC

Hello Tom,
I'm currently facing a strange issue with the following code on a 9.2.0.8 database:

I receive new messages from the new_entries table (for simplicity, the table only contains the functional key of the message) and I must check against the unique_checks table whether these new messages have already been provided to the system, by using a primary key constraint; in that case, the message is rejected (because of an ORA-00001 error).

SET SERVEROUTPUT ON SIZE 1000000

DROP TABLE unique_checks;

CREATE TABLE unique_checks ( my_key VARCHAR2( 1000 ) NOT NULL PRIMARY KEY );

SET FEEDBACK OFF
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393763680802012114');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393763670802012121');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393763710802012121');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393763650802012121');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393763730802012121');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393763660802012121');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393763640802012121');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393763720802012121');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393763700802012121');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393763690802012121');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393925750802012115');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393925730802012121');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393925720802012121');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393925710802012121');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393925690802012121');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393925700802012121');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393925740802012115');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013389318180802012115');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013389318170802012121');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013389318160802012121');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013389318190802012121');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393925760802012115');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394675300802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394675240802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394675250802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394675260802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394675280802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394675290802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394675270802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393800910802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393800920802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393800830802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393800930802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393800860802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393800820802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393800880802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393800850802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013391095460802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013391095450802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013391095440802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013391095430802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393800870802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393800900802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393800840802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393800810802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393800890802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392277590802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392277610802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392277600802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445140802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394444970802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445040802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394444980802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394444960802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445000802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445020802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445060802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445010802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394444990802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445110802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445070802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013391664580802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445100802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394444950802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445050802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445080802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445030802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445160802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445130802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445170802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013391664570802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013391664560802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013391664590802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445120802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445150802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445090802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394042690802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394042670802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394042720802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394042700802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394042730802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394042680802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394042710802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394042660802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394830040802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394830050802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394830070802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394830030802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394830080802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392433980802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392433970802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392433960802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392434910802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392434900802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392434890802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392434010802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392434000802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392433990802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392434020802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392434030802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392434880802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392434920802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394830060802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394222540802012115');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013391320030802012115');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013391320020802012115');
SET FEEDBACK ON
COMMIT;

DROP TABLE new_entries;

CREATE TABLE new_entries ( my_key VARCHAR2( 1000 ) NOT NULL );

SET FEEDBACK OFF
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393763680802012114');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393763670802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393763710802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393763650802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393763730802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393763660802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393763640802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393763720802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393763700802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393763690802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393925750802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393925730802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393925720802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393925710802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393925690802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393925700802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393925740802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013389318180802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013389318170802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013389318160802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013389318190802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393925760802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394675300802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394675240802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394675250802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394675260802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394675280802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394675290802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394675270802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393800910802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393800920802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393800830802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393800930802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393800860802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393800820802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393800880802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393800850802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013391095460802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013391095450802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013391095440802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013391095430802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393800870802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393800900802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393800840802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393800810802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393800890802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392277590802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392277610802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392277600802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394445140802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394444970802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394445040802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394444980802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394444960802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394445000802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394445020802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394445060802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394445010802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394444990802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394445110802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394445070802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013391664580802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013391664570802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394445100802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394444950802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394445050802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394445080802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394445030802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394445160802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394445130802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394445170802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013391664560802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013391664590802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394445120802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394445150802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394445090802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394017910802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394017920802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394017930802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394017900802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394042690802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394042710802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394042730802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394042670802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394042720802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394042700802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394042660802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394042680802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394830040802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392434010802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392434000802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392433990802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392434020802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392434030802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392434880802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392434920802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394830080802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392433980802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392433970802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392433960802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392434910802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392434900802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392434890802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394830070802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394830050802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394830060802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394830030802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392434740802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392434730802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392434710802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392434700802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392434750802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392434720802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394222540802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394222550802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394222530802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013391320030802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013391320020802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013391320010802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013391320040802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394222520802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394222560802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394080970802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394080920802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394080910802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394080950802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394080960802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394080930802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013389996500802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013389996490802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013389996480802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013389996470802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013389995970802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013389995950802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013389995940802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013389995930802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013389996510802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013389995920802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013389995900802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013389995890802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394080940802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013389995960802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013389995880802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013389995910802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394799080802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394799070802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392322980802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392321310802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392322990802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392321330802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392322950802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392322960802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392322940802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392322930802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392321300802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392322920802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394799060802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392322970802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392321280802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013386441650802012114');
SET FEEDBACK ON
COMMIT;

DECLARE
     TYPE refCursorType IS REF CURSOR;
     -- declare a weak ref cursor
     select_cursor refCursorType;

     type data_array_type is table of unique_checks%rowtype index by binary_integer;

     data data_array_type;

     errors NUMBER;
     dml_errors EXCEPTION;
     PRAGMA exception_init(dml_errors, -24381); -- do not change the exception number!

BEGIN
     OPEN select_cursor for 'select my_key from new_entries order by my_key';

     LOOP
          -- bulk fetch
          fetch select_cursor BULK COLLECT INTO data LIMIT 1000;

          BEGIN
             -- try to insert into the unique messages table
             FORALL i IN 1..data.COUNT SAVE EXCEPTIONS
                insert into unique_checks values data(i);
          EXCEPTION
              WHEN dml_errors THEN
                   -- if it fails...
                   errors := SQL%BULK_EXCEPTIONS.COUNT;
                   FOR j IN 1..errors LOOP
                       dbms_output.put_line( j || ': error ' || SQL%BULK_EXCEPTIONS(j).ERROR_CODE || ' for row with key: ' || data(j).my_key );
                       -- and if it is a violation of an unique constraint...
                   end loop;
          END;

          exit when select_cursor%notfound;

     END LOOP;

     CLOSE select_cursor;

end;

/

ROLLBACK;


The testcase outputs all duplicated messages that raise an ORA-00001 error when inserting with the same primary key value. On my database, I get this output:

1: error 1 for row with key: CCP_SETT_CONF00000013386441650802012114     <--- note this
2: error 1 for row with key: CCP_SETT_CONF00000013389318160802012121
3: error 1 for row with key: CCP_SETT_CONF00000013389318170802012121
4: error 1 for row with key: CCP_SETT_CONF00000013389318180802012115
5: error 1 for row with key: CCP_SETT_CONF00000013389318190802012121
6: error 1 for row with key: CCP_SETT_CONF00000013389995880802012115
...


What I don't understand is why I get the ORA-00001 error for the row in the new_entries table with my_key equal to 'CCP_SETT_CONF00000013386441650802012114'?

Indeed this value is inserted only once in the new_entries table and not in the unique_checks table.

What did I miss?

Thanks for your insights!
Loïc

Strange behavior, please help

Loïc, February 04, 2008 - 4:10 pm UTC

Sorry for the previous testcase; it was somewhat too big. I reduced the code size and hopefully the behavior is still the same.

Hello Tom,
I'm currently facing a strange issue with the following code on a 9.2.0.8 database:

I receive new messages from the new_entries table (for simplicity, the table only contains the functional key of the message) and I must check against the unique_checks table whether these new messages have already been provided to the system, by using a primary key constraint; in that case, the message is rejected (because of an ORA-00001 error).

SET SERVEROUTPUT ON SIZE 1000000

DROP TABLE unique_checks;

CREATE TABLE unique_checks ( my_key VARCHAR2( 1000 ) NOT NULL PRIMARY KEY );

SET FEEDBACK OFF
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393763680802012114');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393763670802012121');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393763710802012121');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393763650802012121');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393763730802012121');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393763660802012121');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393763640802012121');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393763720802012121');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393763700802012121');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393800810802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013393800890802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392277590802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392277610802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392277600802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445140802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394444970802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445040802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394444980802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394444960802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445000802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445020802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445060802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445010802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394444990802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445110802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445070802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013391664580802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445100802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394444950802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445050802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394445080802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394830080802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392433980802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392433970802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392433960802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392434910802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392434900802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392434890802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392434010802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392434000802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392433990802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392434020802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392434030802012116');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392434880802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013392434920802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394830060802012122');
INSERT INTO unique_checks(my_key) VALUES ('CCP_SETT_CONF00000013394222540802012115');
SET FEEDBACK ON
COMMIT;

DROP TABLE new_entries;

CREATE TABLE new_entries ( my_key VARCHAR2( 1000 ) NOT NULL );

SET FEEDBACK OFF
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393763680802012114');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393763670802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393763710802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393763650802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393763730802012121');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013393800880802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392277590802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392434920802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394830080802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392433980802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392433970802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392433960802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392434910802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392434900802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392434890802012116');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394830070802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394830050802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392322980802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392321310802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392322990802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392321330802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392322950802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392322960802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392322940802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392322930802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392321300802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392322920802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013394799060802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392322970802012122');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013392321280802012115');
INSERT INTO new_entries(my_key) VALUES ('CCP_SETT_CONF00000013386441650802012114');
SET FEEDBACK ON
COMMIT;

DECLARE
     TYPE refCursorType IS REF CURSOR;
     -- declare a weak ref cursor
     select_cursor refCursorType;

     type data_array_type is table of unique_checks%rowtype index by binary_integer;

     data data_array_type;

     errors NUMBER;
     dml_errors EXCEPTION;
     PRAGMA exception_init(dml_errors, -24381); -- do not change the exception number!

BEGIN
     OPEN select_cursor for 'select my_key from new_entries order by my_key';

     LOOP
          -- bulk fetch
          fetch select_cursor BULK COLLECT INTO data LIMIT 1000;

          BEGIN
             -- try to insert into the unique messages table
             FORALL i IN 1..data.COUNT SAVE EXCEPTIONS
                insert into unique_checks values data(i);
          EXCEPTION
              WHEN dml_errors THEN
                   -- if it fails...
                   errors := SQL%BULK_EXCEPTIONS.COUNT;
                   FOR j IN 1..errors LOOP
                       dbms_output.put_line( j || ': error ' || SQL%BULK_EXCEPTIONS(j).ERROR_CODE || ' for row with key: ' || data(j).my_key );
                       -- and if it is a violation of an unique constraint...
                   end loop;
          END;

          exit when select_cursor%notfound;

     END LOOP;

     CLOSE select_cursor;

end;

/

ROLLBACK;


The testcase outputs all duplicated messages that raise an ORA-00001 error when inserting with the same primary key value. On my database, I get this output:

1: error 1 for row with key: CCP_SETT_CONF00000013386441650802012114   <---- note this
2: error 1 for row with key: CCP_SETT_CONF00000013392277590802012122
3: error 1 for row with key: CCP_SETT_CONF00000013392321280802012115
4: error 1 for row with key: CCP_SETT_CONF00000013392321300802012115
5: error 1 for row with key: CCP_SETT_CONF00000013392321310802012115
6: error 1 for row with key: CCP_SETT_CONF00000013392321330802012115
7: error 1 for row with key: CCP_SETT_CONF00000013392322920802012115
8: error 1 for row with key: CCP_SETT_CONF00000013392322930802012115
9: error 1 for row with key: CCP_SETT_CONF00000013392322940802012115
10: error 1 for row with key: CCP_SETT_CONF00000013392322950802012115
11: error 1 for row with key: CCP_SETT_CONF00000013392322960802012115
12: error 1 for row with key: CCP_SETT_CONF00000013392322970802012122
13: error 1 for row with key: CCP_SETT_CONF00000013392322980802012122
14: error 1 for row with key: CCP_SETT_CONF00000013392322990802012122


What I don't understand is why I get the ORA-00001 error for the row in the new_entries table whose my_key equals 'CCP_SETT_CONF00000013386441650802012114'.

Indeed, this value appears only once in the new_entries table and is not already present in the unique_checks table.

What did I miss?

Thanks for your insight!
Loïc


Tom Kyte
February 04, 2008 - 5:23 pm UTC

the loop counter J indexes the exceptions, not the fetched rows - use SQL%BULK_EXCEPTIONS(j).ERROR_INDEX to pull the key of the row that actually failed:

 24            EXCEPTION
 25                WHEN dml_errors THEN
 26                     -- if it fails...
 27                     errors := SQL%BULK_EXCEPTIONS.COUNT;
 28                     FOR j IN 1..errors LOOP
 29                         dbms_output.put_line
 30                         ( j ||
 31                          ': error ' || SQL%BULK_EXCEPTIONS(j).ERROR_CODE ||
 32                          ' for row with key: ' ||
 33                         data(<b>SQL%BULK_EXCEPTIONS(j).ERROR_INDEX</b>).my_key );
 34                     end loop;
 35            END;
 36
 37            exit when select_cursor%notfound;
 38
 39       END LOOP;
 40
 41       CLOSE select_cursor;
 42
 43  end;
 44
 45  /
1: error 1 for row with key: CCP_SETT_CONF00000013392277590802012122
2: error 1 for row with key: CCP_SETT_CONF00000013392433960802012116
3: error 1 for row with key: CCP_SETT_CONF00000013392433970802012116
4: error 1 for row with key: CCP_SETT_CONF00000013392433980802012116
5: error 1 for row with key: CCP_SETT_CONF00000013392434890802012116
6: error 1 for row with key: CCP_SETT_CONF00000013392434900802012116
7: error 1 for row with key: CCP_SETT_CONF00000013392434910802012116
8: error 1 for row with key: CCP_SETT_CONF00000013392434920802012122
9: error 1 for row with key: CCP_SETT_CONF00000013393763650802012121
10: error 1 for row with key: CCP_SETT_CONF00000013393763670802012121
11: error 1 for row with key: CCP_SETT_CONF00000013393763680802012114
12: error 1 for row with key: CCP_SETT_CONF00000013393763710802012121
13: error 1 for row with key: CCP_SETT_CONF00000013393763730802012121
14: error 1 for row with key: CCP_SETT_CONF00000013394830080802012122

PL/SQL procedure successfully completed.


Size of data inserts

Srisk, December 30, 2008 - 5:43 am UTC

Hi,

This is related to the very first question.
Your solution, showing less time for bulk processing and even less for 'insert into ... select * from ...', was great!

Question:
We are using 9.2.0.4.0 - 64bit on Solaris 8.

I have about 100K records which I *can* directly insert using an 'insert into A .. select * from B .. where A.id = B.id' statement. Considering the amount of data, do you think the direct SQL (insert .. select ..) is still better than using bulk collect? Or should I go for bulk collect?
Would using direct SQL be slower or use more redo, undo, or memory?

Thanks.




Tom Kyte
January 05, 2009 - 9:07 am UTC

regardless of 'size' the rule is:

if you can do it in a single sql statement - do it


so, yes, insert as select is the right approach.

for 100 records, 100,000 records, whatever.
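
For reference, the single-statement version of that load is just an INSERT ... SELECT. A minimal sketch, using the illustrative table names A and B from the question (the APPEND hint is optional and only matters if a direct-path load is acceptable to you; add whatever join/filter conditions you actually need):

insert /*+ append */ into a
select *
  from b;    -- plus your join/filter conditions

commit;

A direct-path (APPEND) insert loads above the high-water mark and can generate less redo and undo than row-by-row processing; even a plain conventional insert-as-select is still a single statement and still beats bulk collect for this case.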

external table with where clause

Nantha, April 22, 2009 - 11:20 am UTC

Tom,

Can't I use a where clause in my select from an external table in PL/SQL? The file is going to have around 100 to 200 records, so I just want to use it as an external table.
But I am getting a meaningless error like this. The symbol keeps changing.

"Other Error-6553-ORA-06553: PLS-103: Encountered the symbol "=" when expecting one of the following"
Thanks,
Nantha
Tom Kyte
April 24, 2009 - 4:21 pm UTC

... Can't I use where clause in my select from External table in PL/SQL? ...

of course you can.

give us a for example and we'll tell you what you typed in wrong.
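
For reference, a where clause against an external table works exactly like one against an ordinary table, inside or outside PL/SQL. A minimal sketch, assuming a hypothetical external table ext_orders with a status column (both names are made up for the example):

declare
    l_cnt number;
begin
    select count(*)
      into l_cnt
      from ext_orders
     where status = 'NEW';

    dbms_output.put_line( l_cnt || ' matching rows in the file' );
end;
/

A PLS-00103 "encountered the symbol ..." error is a PL/SQL parse error, so it points at the surrounding PL/SQL syntax (for example, how parameters are concatenated into the statement) rather than at the external table itself.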

external table with where clause

Nantha, April 22, 2009 - 1:00 pm UTC

Tom,
Sorry, it works. There was some other problem in the parameters which I was passing.
Thanks.
Nantha

Applicability of CDC

Vikas Atrey, June 03, 2009 - 5:18 am UTC

Hi ,
I have the following scenario:

The data (10gR2) is populated in the database from various applications multiple times during the day. It could be insert or update operations.

The business wants to see any record as it existed at some point in the past, and hence we need to store the previous snapshot of the data somewhere before updating it. There are a couple of options, e.g.

1. Maintain effective_from and effective_to columns in the table itself; whenever there is a new, changed version (update) of the record, set the effective_to column of the previous version to today - 1 and insert the new version with effective_from as today and effective_to set to some time far in the future.

2. Update the table records and maintain audit tables (old_column_value and new_column_value) using triggers to store new and old column values. But the queries to reconstruct the entire record as it existed at some point in the past will be quite complex.
If we copy the entire record to audit tables before update using a trigger, then the data in the audit tables would be quite huge.

3. Maintain a timestamp column to store last_update_time and see (through a PL/SQL batch program) which of the records have changed since we picked up the data. But we may lose the change history here if multiple updates happen before we pick it up.

I was just wondering if CDC is a good option in this scenario. I am not sure if CDC will tell me only the last changed value, or all of the changes (with their new and old values) that have happened since I picked up the records from the "change table", so that I can create my entire record history along with the timestamp when each change occurred.
To keep the data volume small and easy to query in this scenario, I guess it would be better to have a record with two timestamp columns (effective_from, effective_to).
Please share your experience.




Tom Kyte
June 04, 2009 - 9:11 am UTC

4) use dbms_wm and enable versioning (search this site for examples, read the application developers guide to workspace manager)

5) upgrade to 11g and use the flashback data archive

6) use streams, capture the changes and using a custom apply build your archive.


All six are viable. I don't care about the size (#2) - disk is cheap, our time and our computing resources to retrieve data are not (and you are worrying about something hypothetical - you have compression and other features to save a bit on space over time - space should not be the consideration)

#5 would be the least amount of human effort.
#4, if the update volume and access patterns allow for it to be viable, would be worth considering
#6 - I would rather do that than a trigger in real time (I would not do #2)
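
If #5 is open to you, the setup is small. A minimal sketch, assuming 11g and purely illustrative names (an archive called hist_fla in the USERS tablespace, a table called ORDERS, and an example retention period):

create flashback archive hist_fla
    tablespace users
    quota 10g
    retention 2 year;

alter table orders flashback archive hist_fla;

-- later, read the row as of any point inside the retention window
select *
  from orders as of timestamp ( systimestamp - interval '1' day )
 where order_id = 42;

The database maintains the history itself - no triggers or audit tables to design and no effective_from/effective_to bookkeeping in your code.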

Dbms_cdc

vikas atrey, June 05, 2009 - 10:12 am UTC

Thank you so much for the quick inputs.
Could you please point me to where I can get further details about dbms_cdc capabilities, with examples?

In particular, I am interested to know if dbms_cdc can be used to see all of the changes (with old and new values) that have occurred on the table records.
And please advise on the options available to save all of these record versions in another table, something like a bitemporal history.

Converting ETL procedural code into single bulk update

Daniel, October 28, 2009 - 4:06 pm UTC

Tom:

We have this dedup logic implemented via a cursor in PL/SQL code.
It is running slow due to the cursor's row-by-row processing.
I am trying to convert this dedup cursor logic into a single UPDATE, if at all possible, to make the code efficient.

Could you help me make it set based via a single update? Bulk collect is my second choice.


=========== 
DECLARE
   CURSOR cur_dedup
   IS
      SELECT   S.ROWID, submission_no, PRODUCT_ABBR, POLICY_EFCTV_DT, POLICY_NO,
               R.status_rank,wip_id, product_cd
          FROM FEI.SUBMISSION_DBG S,ETL_SDM.int_status_rank R
         WHERE (submission_no, PRODUCT_ABBR, POLICY_EFCTV_DT) in
         (
          select submission_no,product_abbr, policy_efctv_dt from FEI.SUBMISSION_DBG
          where duplicate_in = 'N'
          group by submission_no,product_abbr, policy_efctv_dt
          having count(*) > 1)
         AND DUPLICATE_IN = 'N'
         AND S.status_cd = r.status_code(+)
      ORDER BY submission_no,
               PRODUCT_ABBR,
               POLICY_EFCTV_DT,
               POLICY_NO,
               R.status_rank,
               wip_id DESC,
               product_cd DESC;

   rec_dedup      cur_dedup%ROWTYPE;
   rec_hold       cur_dedup%ROWTYPE;
   v_dedup        CHAR;
   v_updt_count   NUMBER (10);
   v_total_count  NUMBER (10);
BEGIN
   v_dedup := 0;
   v_updt_count := 0;
   v_total_count := 0;

   OPEN cur_dedup;

   LOOP
      FETCH cur_dedup
       INTO rec_dedup;

      EXIT WHEN cur_dedup%NOTFOUND;

      IF v_updt_count > 500
      THEN
         COMMIT;
        v_updt_count:=0;
      END IF;

      IF  rec_hold.submission_no = rec_dedup.submission_no
         AND rec_hold.PRODUCT_ABBR= rec_dedup.PRODUCT_ABBR
         AND rec_hold.POLICY_EFCTV_DT= rec_dedup.POLICY_EFCTV_DT
      THEN
         IF    rec_hold.POLICY_NO= rec_dedup.POLICY_NO
            OR (    rec_hold.POLICY_NO IS NULL
                AND rec_dedup.POLICY_NO IS NULL
                AND v_dedup = 0
               )
         THEN
            IF rec_hold.status_rank = rec_dedup.status_rank
               AND rec_hold.wip_id = rec_dedup.wip_id
               AND rec_hold.product_cd = rec_dedup.product_cd
            THEN
               v_dedup := 0;
            ELSE
               v_dedup := 1;
            END IF;
         ELSIF    (    rec_hold.POLICY_NO IS NULL
                   AND rec_dedup.POLICY_NO IS NULL
                   AND v_dedup = 1
                  )
               OR (rec_hold.POLICY_NO IS NOT NULL AND rec_dedup.POLICY_NO IS NULL)
         THEN
            v_dedup := 1;
         ELSE
            v_dedup := 0;
         END IF;

         IF v_dedup = 1
         THEN
            UPDATE FEI.SUBMISSION_DBG
               SET DUPLICATE_IN = 'Y'
             WHERE ROWID = rec_dedup.ROWID;

            v_updt_count := v_updt_count + 1;
         v_total_count :=v_total_count + 1;
         END IF;
      ELSE
         v_dedup := 0;
      END IF;

/* SET THE VARS FOR THE HOLDING RECORD. */
      rec_hold := rec_dedup;
   END LOOP;

   COMMIT;
 dbms_output.put_line('Total Rows Updated: '||v_total_count);
   CLOSE cur_dedup;
END;
/
COMMIT;

=========



counts from both tables:

SQL> select count(*) from FEI.SUBMISSION_DBG ;

  COUNT(*)
----------
   7055889

SQL> select count(*) from  ETL_SDM.int_status_rank ;


  COUNT(*)
----------
         9

SQL> SQL> 

Tom Kyte
October 28, 2009 - 4:41 pm UTC

got spec?

I want a specification like the one that was given to the person who wrote this code.


why should I have to do the reverse engineering when the specification *must* already exist.


and of course, those commits in the middle totally break this logic. Do you see your defining queries and logic use "duplicate_in = 'N'" to identify the working set....

What happens when this code fails halfway through? the defining query will not find records it used to find the first time because the data was changed and committed on it.

procedural code to bulk update

Daniel, October 28, 2009 - 8:33 pm UTC

<< got spec?
I want a specification like that was given to the person that wrote this code.

Unfortunately we have neither the spec nor the person who wrote this code. I am trying to understand the logic and rewrite it into a single update statement to improve the performance, as well as for the reasons you have highlighted (the partial commit issue).


Tom Kyte
October 29, 2009 - 8:23 am UTC

... unfortunately we do not have either the spec...

then you cannot do your job. period.


... I am trying to understand the logic ...

that is your job, not mine, when you get to that point of understanding and you have documented it - feel free to come back and ask "how might I best accomplish this goal"


So, start by reverse engineering, draw up the pseudo code - spec it out - have it (your findings) PEER REVIEWED by your coworkers (preferably ones with many more years of experience than yourself - that is the best way to peer review - by using peers who know more than you do)

and then we can talk.


You might find more than one bug (we already have one) in this 'logic', you need to understand what you need to do in order to do it best.

And taking someone's suspect code and drawing your specification from that is dicey - you are assuming every assumption they made is a fact, you are assuming their code is correct, and so on...

Derived value in external tables

Thanigai, January 03, 2010 - 7:51 am UTC

Hi Tom,
I am trying to use the external tables approach to read from an EBCDIC file and write to multiple tables based on a record type that occurs at the 8th position. There are 10 different possible values for the type, and accordingly each record is routed to one of the 8 tables. All 8 tables have different structures. Until now, this was running great, but there is a new need to populate one more column which is just a flag (Y or N) based on some of the columns being either all NULLs or containing values. For example, in Table 1, if columns 4, 5 and 6 are all null, then the new column should be populated as 'N'. If even one of these 3 is NOT NULL, then the new column will be 'Y'. The logic could be different for Table #2 (which may say that if columns 1 through 6 are null then the new column = 'N'). Can you please let me know if this can be done in a better way than using a before-insert trigger for each of the tables?
The load is the source-to-stage part of an ETL that will take close to 4 hours using nologging, so it is important for me to do it the best way possible.

Regards
Thanigai
Tom Kyte
January 04, 2010 - 11:57 am UTC

you don't tell us how you are loading the tables, but it sounds pretty simple?

Just add a case or decode statement to the list of columns to insert into table T1

case when c1 is null and c2 is null and c3 is null then 'N' else 'Y' end


Actually, if this flag is to be data dependent - e.g. if someone were to UPDATE the table, the returned value should change - then this should be a column in a view or a virtual column in 11g - NOT assigned during load.
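
If you are on 11g, the virtual-column variant is a one-time piece of DDL per table. A minimal sketch, reusing the case expression above and assuming illustrative names (table t1, columns c1, c2, c3):

alter table t1 add (
    load_flag varchar2(1)
    generated always as (
        case when c1 is null and c2 is null and c3 is null then 'N' else 'Y' end
    ) virtual
);

The flag then always reflects the current column values, so later updates are covered automatically; before 11g, a view exposing the same case expression gives the equivalent behaviour.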

Benchmarking CDC.

Rajeshwaran, Jeyabal, April 22, 2010 - 1:41 pm UTC

Hi Tom,

I need your help. I usually follow your technique of benchmarking things. I have set up Asynchronous HotLog CDC in a database, but I don't know how to measure the latency (how quickly the changed data becomes available to the subscriber). Can you please help me measure that?


Thanks,
Rajesh.


Benchmarking CDC.

Rajeshwaran, Jeyabal, April 23, 2010 - 10:58 am UTC

Hi Tom,

Here is my scenario. I have defined Asynchronous HotLog CDC on the hr.employees table.

hr@10G> UPDATE employees
  2  set last_name = upper(last_name)
  3  WHERE employee_id = 200;

1 row updated.

hr@10G> commit;

Commit complete.

hr@10G> delete from employees where employee_id = 175;

1 row deleted.

hr@10G> COMMIT;

Commit complete.

hr@10G> connect cdcsubscriber/cdcsubscriber
Connected.

cdcsubscriber@10G> BEGIN
  2     DBMS_CDC_SUBSCRIBE.extend_window(subscription_name=> 'emp_sub');
  3  END;
  4  /

PL/SQL procedure successfully completed.


cdcsubscriber@10G> SELECT operation$, email, first_name, last_name, employee_id
  2  FROM emp_view;

OP   EMAIL      FIRST_NAME    LAST_NAME    EMPLOYEE_ID
--  ---------   ---------    -----------  -------------
UO  JWHALEN    Jennifer     Whalen       200
UN   JWHALEN    Jennifer     WHALEN       200
D    AHUTTON    Alyssa       Hutton       175


I just want to measure how quickly the data is refreshed in the subscriber view. I don't know how to measure this. Can you please help me?

Thanks,
Rajesh.
Tom Kyte
April 23, 2010 - 2:11 pm UTC

sorry, I don't have any advice on measuring the latency with CDC.

Benchmarking CDC.

Rajeshwaran, Jeyabal, April 26, 2010 - 9:37 am UTC

Tom,

To measure the latency, I could not find useful information in V$STREAMS_CAPTURE, but I did something using the SQL*Plus SET TIME ON option.

19:48:21 hr@ORCL> insert into hr.employees
19:48:27   2  (
19:48:27   3    EMPLOYEE_ID,
19:48:27   4    FIRST_NAME,
19:48:27   5    LAST_NAME,
19:48:27   6    EMAIL,
19:48:27   7    PHONE_NUMBER,
19:48:27   8    HIRE_DATE,
19:48:27   9    JOB_ID,
19:48:27  10    SALARY,
19:48:27  11    COMMISSION_PCT,
19:48:27  12    MANAGER_ID,
19:48:27  13    DEPARTMENT_ID
19:48:27  14  )
19:48:27  15  select rownum as EMPLOYEE_ID,
19:48:27  16       substr(object_name,1,20) as  FIRST_NAME,
19:48:27  17       substr(object_name,1,20) as  LAST_NAME,
19:48:27  18       substr(object_name,1,15)||'@mail.com' as EMAIL,
19:48:27  19       555-7777-839 as PHONE_NUMBER,
19:48:27  20       created  as HIRE_DATE,
19:48:27  21       substr(object_name,1,10) as job_id,
19:48:27  22       object_id as sal,
19:48:27  23       0.2 as COMMISSION_PCT,
19:48:27  24       rownum AS MANAGER_ID,
19:48:27  25       10 as DEPARTMENT_ID
19:48:27  26  from  ALL_OBJECTS;

53780 rows created.

19:49:13 hr@ORCL> commit;

Commit complete.


19:50:32 cdcsubscriber@ORCL> BEGIN
19:50:36   2    DBMS_CDC_SUBSCRIBE.extend_window(subscription_name=> 'emp_sub');
19:50:36   3  END;
19:50:36   4  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.06

19:50:38 cdcsubscriber@ORCL> exec print_table(' select * from emp_view where rownum <= 1 ');
     OPERATION$---------- I
     CSCN$--------------- 2637116
     COMMIT_TIMESTAMP$--- 26-APR-10
     XIDUSN$------------- 9
     XIDSLT$------------- 5
     XIDSEQ$------------- 1866
     DDLDESC$------------
     DDLOPER$------------
     DDLPDOBJN$----------
     RSID$--------------- 56225
     TARGET_COLMAP$------ FEF100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000
     EMAIL--------------- CHNF$_RDESC@mail.com
     EMPLOYEE_ID--------- 2445
     FIRST_NAME---------- CHNF$_RDESC
     LAST_NAME----------- CHNF$_RDESC


PL/SQL procedure successfully completed.

Elapsed: 00:00:00.48


I just extended the window at 19:50:36 and the data changes were available in the subscriber view at 19:50:38, so the latency here is 2 seconds. Is this the right way to measure it, Tom?
Tom Kyte
April 26, 2010 - 9:50 am UTC

it would be a tad cumbersome in real life, wouldn't it.

Benchmarking CDC.

Your Fan, April 26, 2010 - 3:16 pm UTC

Tom,

I am not sure whether this is the right way to ask you a question, but could you please check with your colleagues within Oracle about how to measure the latency in CDC?

Thanks in Advance.
Your Fan.


First question in this post

A reader, December 30, 2010 - 12:22 am UTC

From your reply to the first question in this post:
"is where you could be if you can put your ETL validation into SQL itself"


Looks promising for you ;)

In our case, we also do bulk processing and commit batch-wise whenever we need to copy a large dataset from one db to another (using a bulk procedure or an anonymous PL/SQL block).
For smaller datasets, we use a normal insert .. select * from t@db_link.

Just want to confirm that we are using the right methods for large datasets and that large inserts need to be done batch-wise.






Tom Kyte
December 30, 2010 - 1:25 pm UTC

... need to be done batchwise ...

Never say Never
Never say Always
I always say.......



They do not NEED to be done batchwise - "it depends". We do a demo on stage where we load a terabyte of data in less than 15 minutes. There is NO batching there - just one big load (otherwise it would take too long). We then "ELT" it (on purpose ELT - we extracted, loaded and then transformed it) in about 10 minutes and validate it in under 5. If we did procedural processing - we'd never be able to do that.


The answer: it depends, it might be the right approach (what you are doing), it might not be. Insufficient data to determine. What you think is large, I might think is very small.

this code prints for 2011 and 2010, I need it only for 2011

ziad, April 04, 2011 - 5:07 am UTC

to_char(a.AUDIT_TIMESTAMP + 1 / 24 / 60 * 5, 'MM/DD/YYYY HH24:MI AM') between
to_char(TO_DATE('04/03/2011','MM/DD/YYYY') + 1 / 24 -1 / 60 * 5, 'MM/DD/YYYY HH24:MI AM') and
to_char(TO_DATE('04/03/2011','MM/DD/YYYY') + 1 / 24 / 60 * 5, 'MM/DD/YYYY HH24:MI AM')
Tom Kyte
April 12, 2011 - 12:00 pm UTC

think about this.

Why would you compare strings???? You have dates, why wouldn't you compare dates.


And if you are going to compare strings, you should understand how strings 'compare'. You would sort of want to make the strings sortable wouldn't you?

If you start your string with MM, and then DD and then YYYY - think about it.

06/01/1001

June 1st, year 1001. That would be between:

05/30/2525 (may 30th, 2525)

and

08/02/0001 (aug 2nd, year 1)

because 06 is between 05 and 08, the rest of the string doesn't matter.



To solve your issue, just lose all of the to_char's, I don't know why you used them in the first place


a.AUDIT_TIMESTAMP + 1 / 24 / 60 * 5
between
TO_DATE('04/03/2011','MM/DD/YYYY') + 1 / 24 -1 / 60 * 5
and
TO_DATE('04/03/2011','MM/DD/YYYY') + 1 / 24 / 60 * 5
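
To see the string-comparison trap concretely, a small example using the literals above:

select case
         when '06/01/1001' between '05/30/2525' and '08/02/0001'
         then 'between'
         else 'not between'
       end as string_compare
  from dual;

This returns 'between': the comparison is decided by '05' <= '06' <= '08' in the first two characters, even though the dates are centuries apart. Comparing the DATE values directly, as in the rewritten predicate above, avoids the problem.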

ETL Process with Transaction Table Lookup

Rajeshwaran, Jeyabal, August 28, 2011 - 1:46 pm UTC

Tom:

I have a Transaction table having nearly 300M records like this.

create table t1(clm_id number,dx_id number);
insert into t1 values(1,1);
insert into t1 values(1,2);
insert into t1 values(1,3);
insert into t1 values(2,1);
insert into t1 values(2,2);
insert into t1 values(3,1);
commit;


Now, I need to compare the data in this table against the data present in some Data files (say DAT.TXT).

CREATE TABLE T2
(
  X VARCHAR2(10),
  Y NUMBER
)
ORGANIZATION external 
(
  TYPE oracle_loader
  DEFAULT DIRECTORY PP_DIR
  ACCESS PARAMETERS 
  (
    RECORDS DELIMITED BY NEWLINE CHARACTERSET WE8MSWIN1252
    BADFILE 'DAT.bad'
    LOGFILE 'log.txt_xt'
    READSIZE 1048576
    FIELDS TERMINATED BY "|" LDRTRIM 
    MISSING FIELD VALUES ARE NULL 
    REJECT ROWS WITH ALL NULL FIELDS 
    (
      X CHAR(255) TERMINATED BY "|",
      Y CHAR(255) TERMINATED BY "|"
    )
  )
  location 
  (
    'DAT.TXT'
  )
)REJECT LIMIT UNLIMITED;


and DAT.TXT has the data like this.

CLM|1|
DX|1|
DX|2|
DX|3|
SP||
CLM|2|
DX|3|
SP|1|
SP|2|
CLM|3|
DX|1|
SP|1|


So to do this comparison, I created an ETL function like this.

create or replace type vcarray is table of NUMBER;
/

create or replace type clm_typ is object(
 clm_id  varchar2(10),
 dx_id vcarray,
 spc_id vcarray
); 
/
create or replace type clm_ntt is table of clm_typ;
/
create or replace function clm_etl
return clm_ntt
pipelined 
is
 l_clm_type clm_typ:= clm_typ(null,vcarray(),vcarray());
 l_clm_type_init clm_typ:= clm_typ(null,vcarray(),vcarray());
 l_first_clm boolean := false;
begin 
 for r in (select * from t2)
 loop
  if r.x = 'CLM' then   
   if not l_first_clm then
    l_first_clm := true;
   else
    pipe row(l_clm_type);
   end if;   
   l_clm_type := l_clm_type_init;
   l_clm_type.clm_id := r.y ;   
  elsif r.x = 'DX' and r.y is not null then
   l_clm_type.dx_id.extend;
   l_clm_type.dx_id(l_clm_type.dx_id.last) := r.y;   
  elsif r.x ='SP' and r.y is not null then
   l_clm_type.spc_id.extend;
   l_clm_type.spc_id(l_clm_type.spc_id.last) := r.y;   
  end if;   
 end loop;
 pipe row(l_clm_type);

exception
  when others then
   raise_application_error(-20458,sqlerrm);
end clm_etl;
/


Now when I do the comparison like this, it works fine.

rajesh@ORA10GR2>
rajesh@ORA10GR2> select t2.*
  2  from
  3        ( select clm_id,cast(collect(dx_id) as vcarray) as dx_id
  4          from t1
  5          group by clm_id )t1,
  6        ( select * from table(clm_etl) ) t2
  7  where t1.clm_id = t2.clm_id
  8  and  cardinality ( t2.dx_id multiset except t1.dx_id ) = 0
  9  /

CLM_ID     DX_ID                          SPC_ID
---------- ------------------------------ ------------------------------
1          VCARRAY(1, 2, 3)               VCARRAY()
3          VCARRAY(1)                     VCARRAY(1)

Elapsed: 00:00:00.04
rajesh@ORA10GR2>


But now, I need the values present in the column SPC_ID to be displayed as single values and not as collections. So I did it like this:

rajesh@ORA10GR2> select t2.clm_id,t3.*
  2  from
  3        ( select clm_id,cast(collect(dx_id) as vcarray) as dx_id
  4          from t1
  5          group by clm_id )t1,
  6        ( select * from table(clm_etl) ) t2,
  7        table(t2.spc_id)  t3
  8  where t1.clm_id = t2.clm_id
  9  and  cardinality ( t2.dx_id multiset except t1.dx_id ) = 0
 10  /

CLM_ID     COLUMN_VALUE
---------- ------------
3                     1

Elapsed: 00:00:00.04
rajesh@ORA10GR2>


Question:

1) How can I display CLM_ID=1 if its SPC_ID collection is an empty collection?

2) I am using 10gR2; do you think there is a better way to do this than this approach? (I benchmarked this approach with a 998MB text file and it took only 550 secs.)

3) Each DX and SP segment in the data file is dependent on the CLM segment. Would a parallel pipelined table function break this dependency, or would it help me here?

ETL Process with Transaction Table Lookup

Rajeshwaran, Jeyabal, September 10, 2011 - 11:43 pm UTC

Tom:

Sorry for the lengthy post. Can you please help me with this?

custom pl/sql for ETL approach

bmorrow, October 18, 2011 - 2:23 am UTC

Hi Tom,

I'm currently creating a custom PL/SQL package that needs to run in under 2 hours. There are 2 source tables (sourceDataTable, which has most of the data to be extracted, and sourceIdTable, where only a record id column is needed) and 2 target tables, targetArchiveTable and targetReportTable.

The bottleneck is that before loading to targetReportTable, every record needs to be tagged for access rights on the datawarehouse (company A can only access record 123, company B can only access 456 and so on).

My approach is:

1) select the data from sourceDataTable with the sourceIdTable column data set to '-1' if there's no match from sourceIdTable.
SELECT /*+ PARALLEL */ sourceDataTable.columns,
(select NVL(sourceIdTable.someRequiredId,-1)
...

2) insert /*+ APPEND */ data into targetArchiveTable

3) loop thru records and delete elements tagged with Id = -1 else tag access rights for record

4) insert /*+ APPEND */ data into targetReportTable


I did it this way so that the source tables only need to be read once: the data is loaded into targetArchiveTable, then the row-by-row tagging is done in memory, and then the result is loaded into targetReportTable.

I got this approach from reading about bulk processing, direct path inserts and parallel DML. Can you suggest an approach that I may have overlooked, or something I need to read up on, that may speed up ETL performance? Thanks.
Tom Kyte
October 18, 2011 - 2:28 am UTC

I do not understand step 1. You have not provided sufficient detail in the sql shown. It looks like you are using a scalar subquery - which will be a performance killer for you (executing that query over and over and over). further - you just select it - but so what? you don't tell me what you do with it. Why select it at all???

I do not get step 2 as documented. what is data? where does that come from??


I don't understand why you loop over anything. Why not just NOT INSERT the -1 records in the first place????


I didn't follow this at all, sorry

custom pl/sql for ETL approach - additional information

bmorrow, October 18, 2011 - 3:42 am UTC

Hi Tom,

Thanks for the quick response. Here's some more information on what I've posted.

1) Select the data from sourceDataTable with the sourceIdTable column data set to '-1' if there's
no match in sourceIdTable. I've tagged non-matching records with an id of '-1' to mark them for deletion later on, after all selected records have been loaded into targetArchiveTable; matching records will be loaded into targetReportTable.

CURSOR c_dataLoad ( p_joinColumn ) IS
SELECT /*+ PARALLEL(sdt, 4) */ sdt.columns,
NVL(sit.someRequiredId, -1)
FROM sourceDataTable sdt,
sourceIdTable sit
WHERE sdt.joinColumn = sit.joinColumn (+);

2) Load all records into targetArchiveTable, keep the records in memory for deletion tagging and access tagging later on.

Fetch c_dataLoad BULK COLLECT INTO rColumns,rRequiredId LIMIT 1000;
FORALL i IN 1 .. rColumns.COUNT
INSERT /*+ APPEND */ INTO targetArchiveTable (Columns) VALUES (rColumns(i));

3) Loop through the collection to delete records not needed in targetReportTable and to flag access rights for records that will be loaded into targetReportTable. The loop is needed to tag each row for access rights before it is loaded into targetReportTable; since I was going through it row by row, I deleted the records tagged as '-1'.

FOR i IN 1 .. rColumns.COUNT LOOP
IF rRequiredId(i) = -1 THEN
rColumns.DELETE(i);
ELSE
flag1 := someFunction.accessTag(rColumns(i), 'Company A')
flag2 := someFunction.accessTag(rColumns(i), 'Company B')
flag3 := someFunction.accessTag(rColumns(i), 'Company C')
END IF;
END LOOP;

4) load tagged records to targetReportTable.

FORALL i IN INDICES OF rColumns
INSERT /*+ APPEND */ INTO targetReportTable (Columns, CompanyA_access, CompanyB_access, CompanyC_access) VALUES (rColumns(i), flag1(i), flag2(i), flag3(i));


This is my current approach to read thru the source tables, load it to an archive table, tag the records in memory then load it to another table that is used for reporting.


Is there another approach that you can suggests to speed things up? I'm forced to go thru each record for the tagging of the data before it is loaded to the targetReportTable.

Thanks.
Tom Kyte
October 18, 2011 - 4:22 am UTC

first, this:

INSERT /*+ APPEND */ INTO targetArchiveTable (Columns) VALUES
(rColumns(i));

is identical (thankfully for you!!!) to this:

INSERT /*+ Hello world!!!! How are you? */ INTO targetArchiveTable (Columns) VALUES
(rColumns(i));

insert /*+ append */, a direct path load, works with SETS of data, not single rows. If it did work with single rows - since it loads ABOVE the high water mark and never reuses existing free space - it would cause you to load 1 ROW PER BLOCK. So, thank goodness it DID NOT WORK.


When I see code like this:

CURSOR c_dataLoad ( p_joinColumn ) IS
SELECT /*+ PARALLEL(sdt, 4) */ sdt.columns,
NVL(sit.someRequiredId, -1)
FROM sourceDataTable sdt,
sourceIdTable sit
WHERE sdt.joinColumn = sit.joinColumn (+);

2) Load all records into targetArchiveTable, keep the records in memory for
deletion tagging and access tagging later on.

Fetch c_dataLoad BULK COLLECT INTO rColumns,rRequiredId LIMIT 1000;
FORALL i IN 1 .. rColumns.COUNT
INSERT /*+ APPEND */ INTO targetArchiveTable (Columns) VALUES
(rColumns(i));

I cringe, why isn't that just:

insert into targetArchiveTable
SELECT sdt.columns,
NVL(sit.someRequiredId, -1)
FROM sourceDataTable sdt,
sourceIdTable sit
WHERE sdt.joinColumn = sit.joinColumn (+);


use parallel, if it makes sense, use direct path (append) if it makes sense - but don't fetch/insert/fetch/insert - when you can just INSERT!


but actually, in reading your flow, I am curious why the entire routine is not just:

ops$tkyte%ORA11GR2> create table sourceDataTable( columns varchar2(30), joinColumn number );

Table created.

ops$tkyte%ORA11GR2> insert into sourceDataTable values( 'abc This row goes in both', 1 );

1 row created.

ops$tkyte%ORA11GR2> insert into sourceDataTable values( 'abc This row goes in one', 2 );

1 row created.

ops$tkyte%ORA11GR2> create table sourceIdTable (someRequiredId number, joinColumn number );

Table created.

ops$tkyte%ORA11GR2> insert into sourceIdTable values( 42, 1 );

1 row created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create table targetArchiveTable( columns varchar2(30) );

Table created.

ops$tkyte%ORA11GR2> create table targetReportTable( columns varchar2(30), flag1 varchar2(1), flag2 varchar2(1), flag3 varchar2(1) );

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert all
  2  when (1=1)
  3  then into targetArchiveTable (columns) values (columns)
  4  when (new_someRequiredId <> -1)
  5  then into targetReportTable(columns, flag1, flag2, flag3) values (columns, flag1, flag2, flag3)
  6  SELECT sdt.columns, NVL(sit.someRequiredId, -1) new_somerequiredId,
  7         case when sit.someRequiredId is not null then substr( columns, 1, 1 ) end flag1,
  8         case when sit.someRequiredId is not null then substr( columns, 2, 1 ) end flag2,
  9         case when sit.someRequiredId is not null then substr( columns, 3, 1 ) end flag3
 10    FROM sourceDataTable sdt, sourceIdTable sit
 11   WHERE sdt.joinColumn = sit.joinColumn (+);

3 rows created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select * from targetArchiveTable;

COLUMNS
------------------------------
abc This row goes in both
abc This row goes in one

ops$tkyte%ORA11GR2> select * from targetReportTable;

COLUMNS                        F F F
------------------------------ - - -
abc This row goes in both      a b c



I used substr in place of your function - but the gist is there..

A reader, November 18, 2011 - 1:29 pm UTC

Tom,

We have a project that needs to get data from an Informix database into Oracle every 4 hours or so (this rate could change in the future). We originally had 2 weeks to get this done and ready for production, so we took a plan that we felt we could implement in the given timeframe. Since then, the deadline has been bumped out a couple of months, so I figured it would be a good idea to step back and make sure we are doing this the right way.

So my first question is, what is (are) the “best” way(s) to move the data based on these requirements? It needs to come from Informix to Oracle at least every 4 hours. This could become every hour, but I doubt it would be any more frequent than that. The Oracle tables need to have high availability. The Oracle tables are read-only; no data goes back to Informix. Indexes and constraints have to be maintained on the Oracle tables and they may or may not be the same as the indexes on the Informix tables. We have no budget for ETL tools. The end goal is to have Oracle replace the Informix database and a separate project is already going on to permanently move this data to Oracle, so this is a temporary solution (1-2 year lifespan).

The route we took was to pull data from the Informix database and put them into external tables in Oracle. A process then fires to make a copy of the external table in the source schema, copy all of the indexes/constraints from the existing table in the source schema, gather statistics, then drop the old table and rename this new table to match. So the first question is, is this a reasonable solution given the above requirements or is there a better approach that we are overlooking.


I then have some questions based on our current approach that I would like to get an answer to even if you say we should be doing things differently, just for future reference.

First question, I am using RENAME for the table and to preserve the index names. How fast does rename work and does it cause any issues with queries that are going on at the same time? Does it cause any locking (i.e. when I am altering an index name)? How much downtime is there likely to be with a DROP TABLE X; ALTER TABLE Y RENAME TO X; statement? Is it dependent on the size of X (i.e. if we are dropping a large table in X, does the rename have to wait a significant amount of time)? I have an option to keep a backup of the destination table in my code, in which case I rename X to X_TIMESTAMP then rename Y to X. Is that fast?

Second question, I have not put together my permissions check on this procedure yet. I am making it generic so that anyone can send it a source table and a destination table with some parameters, and it will handle the copy, backing up and renaming etc. So, since my procedure will have to live somewhere with permissions to read from schema X and drop tables and create tables in schema Y, I believe I will have to validate the users’ permissions myself. I am thinking of checking the data dictionary to see what permissions the current user has granted to them and through roles. My first thought is to verify that the user has rights to SELECT from the source table. Then check to see if the current user owns the destination table, and if so, they have the permissions to drop and rebuild the object. If they don’t own the destination table, I need to verify that they should be able to perform this sort of operation. Does verifying they have DELETE rights to the table make sense? I realize this is probably one of those, “only you can answer that question” kinds of questions, but I am wondering what kind of privileges you think would make sense to check for to make sure a user can do this sort of operation.

My last question is about rolling back on an exception. This process does most of its work in the background on a temporary table. It creates the table, indexes it, generates stats, then as the final step, swaps it with the “live” table. Before the swap, all that the procedure does to the actual live table is change the names of its indexes and constraints. In these sorts of situations, do you feel that there is a benefit to rolling things back on an exception, or just stopping what it is doing and alerting a DBA to see what happened and decide if they should rollback or fix the error and continue manually? I am personally leaning towards just stopping and sending alerts, partially because I am having trouble wrapping my head around reverting the index names back to what they were without interrupting service. It also seems like it would be easier to track down the exact problem if everything stayed the way it was, rather than having to try and recreate it through debug mode (even though I have a very nice logging object). What is your typical stance on rolling back operations like this, automate or manual?

Thank you.
Tom Kyte
November 21, 2011 - 1:29 pm UTC

how big are these tables.
do they need to be refreshed consistently (together, as a single transaction)

do you have partitioning available as a tool to be used?

A reader, November 21, 2011 - 2:54 pm UTC

They are not very big at all. No more than a million rows in any of the tables and the biggest table is less than 20MB.

They do not need to be refreshed consistently. It would probably be better if we did it that way, but the business has approved each table being refreshed individually.

We do not have Enterprise licenses.

We are in a RAC environment if that changes the approach in any way. I believe the DBA will be setting up 4 instances for this database.
Tom Kyte
November 21, 2011 - 6:02 pm UTC

I would just have a pair of tables - T1 and T2 - for each table you needed. they would have the same indexes, the same constraints, the same grants, etc - they would be consistent with each other that way.

Then load the "old one" (you have to keep track of that - keep the name of a table you are loading in yet another table and update it right before you commit so you know what table to load next time).

When done loading, just "create or replace synonym T for T1 (or T2)" to point to the new object.


Leave the indexes in place, leave the constraints in place, leave the grants in place - just direct path load the tables. So you would:

a) figure out which table to load from that little table you'll keep
b) truncate it
c) insert /*+ append */ into it
d) update table to tell you to load the other table next time
e) gather stats
f) create or replace synonym T to point to this table
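
A minimal sketch of that flow for one table pair, with purely illustrative names: the copies are T1 and T2, the synonym is T, the data comes from an external table EXT_STAGE, and a one-row control table LOAD_CONTROL(TABLE_TO_LOAD) remembers which copy to load next:

declare
    l_tab varchar2(30);
begin
    -- a) which copy do we load this time?
    select table_to_load into l_tab from load_control;

    -- b) empty it (truncate is DDL and commits implicitly)
    execute immediate 'truncate table ' || l_tab;

    -- c) direct path load it
    execute immediate
        'insert /*+ append */ into ' || l_tab || ' select * from ext_stage';

    -- d) remember to load the other copy next time
    update load_control
       set table_to_load = case l_tab when 'T1' then 'T2' else 'T1' end;

    -- a direct-path loaded table cannot be queried until we commit
    commit;

    -- e) gather stats on the freshly loaded copy
    dbms_stats.gather_table_stats( user, l_tab );

    -- f) repoint the synonym the application queries
    execute immediate 'create or replace synonym t for ' || l_tab;
end;
/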




A reader, November 22, 2011 - 10:09 am UTC

That sounds good. I was going the route of making something more generic that could be used in other places as well, but really, how often do we want to copy a table from one place to another? This will give us a better solution for this scenario, and make it easier to troubleshoot and fix issues, as well as giving high availability in the case of a failure.

One more quick question, is there any reason that I am not seeing that I would not want to just use the synonym to tell me which table is the "live" table? If I allow each synonym to tell me which table is live, that would allow me to force a refresh of a single table, and not worry about all the tables pointing at the same "version" or having to keep track of each table in that extra tracking table. I could take that extra table out of the equation altogether. Are there any "gotchas" that I am not seeing here?
Tom Kyte
November 22, 2011 - 1:00 pm UTC

One more quick question, is there any reason that I am not seeing that I would
not want to just use the Synonym to tell me which table is the "live" table?,


good point, querying the dictionary will work. The query to retrieve the metadata would be a little more time consuming than using your own little simple table - but since you'll be doing it once in a while - not a million times per second - it would be perfectly acceptable. Good idea.
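
For reference, that dictionary lookup is a one-liner; a minimal example, assuming the synonym is named T and is owned by the loading schema:

select table_name
  from user_synonyms
 where synonym_name = 'T';

Whichever of the two copies it returns is the "live" table, so the refresh routine simply targets the other one.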

data load

A reader, May 22, 2012 - 8:50 pm UTC

Hi,
We have application version x and the database is running on 10g.
We plan to upgrade the application to version y and the database to 11g.

But the complexity in the application upgrade is that there are many schema changes in version y of the schema,
so the data load into the new schema won't be straightforward.

What would be the best strategy to load the data into the new application schema?
Many thanks
Tom Kyte
May 23, 2012 - 8:08 am UTC

there is no "best way" - if there was, there would only be one way.

I don't know all of your requirements but....


a) golden gate can replicate and transform, this could be especially useful if you wanted to migrate "online" - minimizing downtime.

b) streams can replicate and transform, albeit not as easily as golden gate (more programming here)

c) good old-fashioned "dump and reload" exists

d) database links with create tables as select work

e) if the new version uses new schema names (that would be handy) then simple create tables as select in a single instance work


upgrade DB and Application

A Reader, May 23, 2012 - 5:07 am UTC

Tom,
I am continuing my previous question and detailing it here.

task : to upgrade database ( to 11g) and application ( version y).


Database RDBMS version : current - 10g, target - 11g
Database size : 2TB
Application : current version x, target version y
-- the application upgrade would result in schema changes.
e.g. in version x tables t1, t2, t3 were used, but in the new version these tables are replaced by table t, with some change in application flow/code.

Downtime : as little as possible (5 9's availability)
Target system : a new physical server where the new database would reside.

Questions
1) Would it be wise to upgrade the source to 11g first and then go for the application schema change?

2) Can we somehow cut down the total steps involved and, using Data Guard (physical to logical) and CDC, achieve both the Oracle 11g upgrade and the schema change to version y in a minimum number of hops?
3) Any other comments?


regards

Tom Kyte
May 23, 2012 - 10:00 am UTC

see above

I don't think data guard would fit in here at all if you want to transform the data.

upgrade DB and Application..

A Reader, May 23, 2012 - 11:43 am UTC

Many thanks Tom.

upgrade DB and Application..contd

A Reader, May 23, 2012 - 12:13 pm UTC

Tom,
I was referring to Data Guard because:
we have new hardware (DB host),
we need to upgrade the db to 11g from the current 10g,
and we need the application upgrade to version y from the current x.

So one option might be:
- using Data Guard (physical standby with 10g, then converting to a logical standby for the upgrade to 11g), switch to the new 11g database on the new host (minimum downtime) while keeping the application at version x -- this would act as the staging DB
- next, create a new database on the same host, transform the data, and load it into the target DB (using GoldenGate/Streams...)
- in this case we would need ~2x the disk space to house both the stage and target DBs on the new host.


The other option might be:
- upgrade the existing db to 11g (this would require more outage), and then use GoldenGate/Streams to copy, transform and load the data into the target.

The database size is 2TB.

Your comments please.

regards



Tom Kyte
May 23, 2012 - 12:22 pm UTC

but that would do nothing for your application outage - to convert the data.


Golden gate would be used to both a) upgrade the database and b) alter the schema in an online fashion. Data guard wouldn't be able to help you with (b) at all.

upgrade DB and Application..contd

A Reader, May 23, 2012 - 1:19 pm UTC

yes Tom that is true , application outage would be there.
will plan to use golden gate for the same.


regards

Flat file

A reader, July 12, 2012 - 9:25 am UTC

Say I have a flat file with the following structure

1. File header record
Multiple sections of the form
2. Section header record
3. Variable number of detail records for the section
4. Section trailer
5. File trailer record

A typical COBOL-style file, where the context for each record is simply its position in the file, instead of each record being made independent by putting common identifiers in every record.

What is the best way to process and load this type of file using an external table? The goal would be to load the data into a single "transaction" table with a "section id" column to link data that came from a section. The file header/trailer records can be discarded.

Any ideas appreciated. Thanks
Tom Kyte
July 12, 2012 - 6:20 pm UTC

do this for me

create a table with an SEQNO column and FILETEXT column.

generate inserts that insert a line of your file into this giving me at least two sections

so I can see what this file would look like.

Flat file

A reader, July 13, 2012 - 8:14 am UTC

Sorry about that, here you go. The first character of each record identifies its type: file header, file trailer, section header, section trailer, data. The goal is to collect all the sections, tag them with a common identifier (ideally linked to something in the file header/trailer) and put them in a single table, i.e. normalize the flat file into a relational structure. Thanks

SQL> create table flatfile(seqno int,filetext varchar2(1000));

Table created.

SQL> insert into flatfile values(1,'1File header');

1 row created.

SQL> insert into flatfile values (2,'2Section 1 header');

1 row created.

SQL> insert into flatfile values (3,'3Section 1 data line 1');

1 row created.

SQL> insert into flatfile values (4,'3Section 1 data line 2');

1 row created.

SQL> insert into flatfile values (5,'3Section 1 data line 3');

1 row created.

SQL>  insert into flatfile values (6,'4Section 1 trailer');

1 row created.

SQL>  insert into flatfile values (7,'2Section 2 header');

1 row created.

SQL>  insert into flatfile values (8,'3Section 2 data line 1');

1 row created.

SQL>  insert into flatfile values (9,'3Section 2 data line 2');

1 row created.

SQL>  insert into flatfile values (10,'4Section 2 trailer');

1 row created.

SQL>  insert into flatfile values (11,'5File trailer');

1 row created.


Tom Kyte
July 16, 2012 - 3:26 pm UTC

ops$tkyte%ORA11GR2> select * from flatfile;

     SEQNO FILETEXT
---------- ----------------------------------------
         1 1File header
         2 2Section 1 header
         3 3Section 1 data line 1
         4 3Section 1 data line 2
         5 3Section 1 data line 3
         6 4Section 1 trailer
         7 2Section 2 header
         8 3Section 2 data line 1
         9 3Section 2 data line 2
        10 4Section 2 trailer
        11 5File trailer

11 rows selected.

ops$tkyte%ORA11GR2> select *
  2    from (
  3  select r, rec_type,
  4         count( case when rec_type = '2' then rec_type end ) over (order by r) section,
  5             last_value( section_ft ignore nulls ) over (order by r) section_ft,
  6             ft
  7    from (
  8  select rownum r,
  9         substr(filetext,1,1) rec_type,
 10             case when substr( filetext,1,1) = '2' then substr(filetext,2) end section_ft,
 11             case when substr( filetext,1,1) <> '2' then substr(filetext,2) end ft
 12    from flatfile
 13         )
 14             )
 15   where rec_type = '3'
 16  /

         R R    SECTION SECTION_FT                              FT
---------- - ---------- --------------------------------------- ---------------------------------------
         3 3          1 Section 1 header                        Section 1 data line 1
         4 3          1 Section 1 header                        Section 1 data line 2
         5 3          1 Section 1 header                        Section 1 data line 3
         8 3          2 Section 2 header                        Section 2 data line 1
         9 3          2 Section 2 header                        Section 2 data line 2
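
To load straight from the file into the single "transaction" table, the same pattern can drive an INSERT ... SELECT against an external table. A minimal sketch, assuming a hypothetical external table EXT_FLATFILE that exposes the record number as SEQNO (for example via the ORACLE_LOADER RECNUM directive) and the raw line as FILETEXT, plus a target table TRANSACTIONS(SECTION_ID, SECTION_NAME, LINE_TEXT) - every name here is illustrative:

insert into transactions (section_id, section_name, line_text)
select section, section_ft, ft
  from (
        select seqno, rec_type,
               count( case when rec_type = '2' then rec_type end )
                   over (order by seqno) section,
               last_value( section_ft ignore nulls )
                   over (order by seqno) section_ft,
               ft
          from (
                select seqno,
                       substr(filetext,1,1) rec_type,
                       case when substr(filetext,1,1) = '2'  then substr(filetext,2) end section_ft,
                       case when substr(filetext,1,1) <> '2' then substr(filetext,2) end ft
                  from ext_flatfile
               )
       )
 where rec_type = '3';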
