
Question and Answer

Tom Kyte

Thanks for the question, Michael.

Asked: December 15, 2000 - 8:41 am UTC

Answered by: Tom Kyte - Last updated: June 26, 2020 - 3:33 pm UTC

Category: Database - Version: Oracle8i 8.1.6.0.0

Viewed 10K+ times.

You Asked

Hi Tom,
this question was put to me by a developer/designer, and as I couldn't a) find the answer or b) make up a convincing answer, I thought I would ask you.

Basically it is ....

HOW does Oracle make the changes to a child table when you update a parent table?

i.e. they have a table (the parent) whose code numbers are the foreign key in a large number of other tables. Some of the child tables are very big (4+ million rows), so they are wondering: if they change a code number, does Oracle perform full table scans to carry out whatever action has been specified for the constraint? For example, if the action specified on the constraint is ON DELETE SET NULL, does that mean Oracle will read the whole child table to set all the relevant columns to null, or does it have some sort of internal index it uses?

I hope you can help, but as usual, thanks in advance! And as it's that time of the year, I would like to take this opportunity to wish you a Merry Christmas and a Happy New Year, and to thank you very much for all the help you have already given me this year.

Regards,
Michael Horrocks



and we said...

I believe this will fully answer it for you:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:292016138754

Basically: if you are using an ON DELETE SET NULL | CASCADE action, if you delete from the parent table, or if you ever update the parent's primary key (you don't want to do that anyway), you want to make sure you index the foreign key.

I have scripts there to detect unindexed foreign keys for you. In addition to slowing down delete operations against the parent table, unindexed foreign keys can also cause full table locks on the child tables.
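A check of this kind can be sketched as a data dictionary query. This is not the exact script from the link above, just a common simplified query of the same flavor against the USER_CONSTRAINTS, USER_CONS_COLUMNS and USER_IND_COLUMNS views:

```sql
-- A simplified sketch, not the exact script referenced above: flag
-- referential ('R') constraints whose first foreign key column is not
-- the leading column of any index on the same table. Multi-column
-- foreign keys would need a fuller position-by-position comparison.
select c.table_name, c.constraint_name, cc.column_name
  from user_constraints  c,
       user_cons_columns cc
 where c.constraint_type  = 'R'
   and cc.constraint_name = c.constraint_name
   and cc.position        = 1
   and not exists (
         select null
           from user_ind_columns ic
          where ic.table_name      = cc.table_name
            and ic.column_name     = cc.column_name
            and ic.column_position = 1 );
```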


and you rated our response (106 ratings)

Reviews

Funny behavior of Foreign Key

July 19, 2002 - 4:39 pm UTC

Reviewer: Ravi from Big Apple

Tom, 
I have a situation where a foreign key is not being enforced, as described below. MS SQL Server has the same behavior, which leads me to believe this must be part of the SQL standard. Is that right? (Though MS Access correctly gave me a foreign key violation, so I am not sure.)

Also, is there a way to ensure the constraint is enforced (other than a trigger)?

Thanks

SQL> create table A (col1 integer not null, col2 integer, constraint xx unique(col1, col2));

Table created.

SQL> create table B (col1 integer not null, col2 integer);

Table created.

SQL> alter table B add constraint yy foreign key (col1, col2) references A(col1, col2);

Table altered.

------ THIS STATEMENT SUCCEEDS ! ------
SQL> insert into B values (1, NULL);

1 row created.

SQL> select * from a;

no rows selected

SQL> select * from b;

      COL1       COL2
---------- ----------
         1 

Tom Kyte

Followup  

July 20, 2002 - 10:14 am UTC

It is the expected (ANSI) behavior, and it is what we do.

Go to
http://docs.oracle.com/docs/cd/A87860_01/doc/server.817/a76965/c24integ.htm#2161

and look (Ctrl-F) for "Nulls and Foreign Keys".


RE: Partially NULL foreign keys

July 20, 2002 - 10:24 pm UTC

Reviewer: Brad Peek from Irving, TX

Tom - I wasn't aware of the behavior that Ravi's example demonstrated. I read the Oracle documentation at the link you provided and here is an excerpt:

<Ora Doc>
A composite foreign key can be all null, all non-null, or partially null. The following terms define three alternative matching rules for composite foreign keys:

match full:
Partially null foreign keys are not permitted. Either all components of the foreign key must be null, or the combination of values contained in the foreign key must appear as the primary or unique key value of a single row of the referenced table.

match partial:
Partially null composite foreign keys are permitted. Either all components of the foreign key must be null, or the combination of non-null values contained in the foreign key must appear in the corresponding portion of the primary or unique key value of a single row in the referenced table.

match none:
Partially null composite foreign keys are permitted. If any column of a composite foreign key is null, then the non-null portions of the key do not have to match any corresponding portion of a parent key.

</Ora Doc>

It seems like the documentation is suggesting that the user/developer can influence or choose between the "match partial" and "match none" behaviors when a partially null foreign key is present. However, I didn't see any further comment on it and am assuming that this is not the case.

Can you elaborate on when (if ever) Oracle would use "match partial" instead of "match none" when a partially null FK is present?

Thanks for shining a light into the Oracle cave...

P.S. I am assuming that Ravi's example of a multi-column foreign key with the second column being null is considered a partially null FK and further that this applies to the descriptions given for both "match partial" and "match none".

Tom Kyte

Followup  

July 21, 2002 - 9:23 am UTC

The docs are listing out all of the possible cases (just like in the concurrency control section they discuss general isolation levels that we don't do).

We do the match none and match full.

We cannot directly do a match partial (we would need multiple levels of tables to do that, one table to represent each combination of columns: so if you have C1, C2 in a foreign key, you would need a table with unique(c1), another with unique(c2), and yet a third with unique(c1,c2)).
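If, like Ravi, you want to rule out partially null foreign keys (i.e. get "match full" behavior), one sketch, using the A/B tables from Ravi's example above, is a check constraint. In his exact case, since B.col1 is already NOT NULL, simply declaring B.col2 NOT NULL would also do; the general form is:

```sql
-- A sketch (tables A and B as in Ravi's example above). The check
-- forces "match full" semantics: either every FK column is null, or
-- every FK column is non-null (and then the FK is fully enforced).
alter table B add constraint b_match_full
  check (    (col1 is null     and col2 is null)
          or (col1 is not null and col2 is not null) );
```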

Enqueue Locks

May 06, 2003 - 12:24 pm UTC

Reviewer: Vivek Sharma from Bombay, India

Dear Tom,

I have a question. I have gone through your book "Expert One-on-One", which explains well that a foreign key index needs to be created both for performance reasons and to resolve locking issues. But how do I know that my database has a locking problem due to unindexed foreign keys? I can see many sessions waiting for TM locks with lock mode 6 or 3, and only 1 or 2 sessions with lock mode 4. I think lock mode 4 corresponds to an unindexed foreign key; correct me if I am wrong. I have a session:
select * from v$lock where sid=69;

SID TY ID1 ID2 LMODE REQUEST BLOCK CTIME
---------- -- ---------- ---------- ---------- ---------- ---------- ----------
69 TM 51131 0 4 0 0 11282
69 TM 50884 0 3 0 0 11282
69 TM 51181 0 4 0 0 11282

v$session_event does not show any enqueue wait for this session. I indexed the foreign keys of objects 51131 and 51181, which reference the primary key of object 50884. Even after creating the indexes, this session still shows up in v$lock.

I went through session_event and found the majority of the sessions waiting on log file sync. Is this the reason so many of the sessions are waiting for TM locks? Running UTLLOCKT.sql also does not show anything, which means there are no deadlocks. But why are so many sessions waiting? The output of the query
select event, count(*) from v$session_event group by event
order by 2 desc shows
EVENT COUNT(*)
---------------------------------------------------------------- ----------
file open 79
db file sequential read 73
SQL*Net message from client 72
SQL*Net message to client 72
log file sync 60
db file scattered read 51
latch free 24
SQL*Net more data to client 23
SQL*Net more data from client 19
rdbms ipc message 8
buffer busy waits 7
SQL*Net break/reset to client 6
control file sequential read 4
file identify 4
db file parallel write 4
control file parallel write 3
direct path read 3
enqueue 3
direct path write 3
log file sequential read 2
log file switch completion 2
LGWR wait for redo copy 1
log file single write 1
pmon timer 1
smon timer 1
rdbms ipc reply 1
log file parallel write 1
db file single write 1

Can you please help me identify the problem? Also, please let me know how to identify locking problems due to unindexed foreign keys, and where to find the offending objects.
Thanks and Regards
Vivek



Tom Kyte

Followup  

May 06, 2003 - 12:42 pm UTC

double check your work; make sure to have run the query from page 110.

This shows that with an INDEX, the table lock on the child is not taken (in 9i this script isn't meaningful, as the locking methods changed entirely with regard to foreign keys):



ops$tkyte@ORA817DEV> drop table c;
Table dropped.

ops$tkyte@ORA817DEV> drop table p;
Table dropped.

ops$tkyte@ORA817DEV> create table p ( x int primary key );
Table created.

ops$tkyte@ORA817DEV> create table c ( x references p );
Table created.

ops$tkyte@ORA817DEV> insert into p values ( 1 );
1 row created.

ops$tkyte@ORA817DEV> commit;
Commit complete.

ops$tkyte@ORA817DEV> update p set x = 2;
1 row updated.

ops$tkyte@ORA817DEV> select v$lock.sid, id1,
  2         (select object_name from user_objects where object_id = v$lock.id1) oname,
  3             id2, lmode, request, block, v$lock.type
  4    from v$lock
  5    where v$lock.sid = (select sid from v$mystat where rownum=1)
  6  /

       SID        ID1 ONAME        ID2      LMODE    REQUEST      BLOCK TY
---------- ---------- ----- ---------- ---------- ---------- ---------- --
         8      56386 P              0          3          0          0 TM
         8      56388 C              0          4          0          0 TM
         8     196691             5332          6          0          0 TX

ops$tkyte@ORA817DEV> create index c_idx on c(x);
Index created.

ops$tkyte@ORA817DEV> update p set x = 2;
1 row updated.

ops$tkyte@ORA817DEV> select v$lock.sid, id1,
  2         (select object_name from user_objects where object_id = v$lock.id1) oname,
  3             id2, lmode, request, block, v$lock.type
  4    from v$lock
  5    where v$lock.sid = (select sid from v$mystat where rownum=1)
  6  /

       SID        ID1 ONAME        ID2      LMODE    REQUEST      BLOCK TY
---------- ---------- ----- ---------- ---------- ---------- ---------- --
         8      56386 P              0          3          0          0 TM
         8     327746             5642          6          0          0 TX

ops$tkyte@ORA817DEV> delete from p;

1 row deleted.

ops$tkyte@ORA817DEV> select v$lock.sid, id1,
  2         (select object_name from user_objects where object_id = v$lock.id1) oname,
  3             id2, lmode, request, block, v$lock.type
  4    from v$lock
  5    where v$lock.sid = (select sid from v$mystat where rownum=1)
  6  /

       SID        ID1 ONAME        ID2      LMODE    REQUEST      BLOCK TY
---------- ---------- ----- ---------- ---------- ---------- ---------- --
         8      56390 P              0          3          0          0 TM
         8     327765             5634          6          0          0 TX
 

But is the DB having Locking Due to Unindexed FK

May 07, 2003 - 4:22 pm UTC

Reviewer: Vivek Sharma from Bombay, India

Dear Tom,

Thanks for your reply. But judging from the wait events, does the database have a problem due to unindexed foreign keys, or is the problem the log file sync wait event? And when I created the indexes, why was the session still holding mode 4 locks?

How do I identify that the database is experiencing locks due to unindexed foreign keys (by looking at which wait events)?

Please explain.
Vivek Sharma

Tom Kyte

Followup  

May 08, 2003 - 9:28 am UTC

I didn't see anyone "waiting" -- where do you?



look at it this way -- find the unindexed fkeys then ask yourself

a) do we delete from parent
b) do we update parent primary key

if the answer is YES to either -- index them immediately. They are a problem.

if the answer is NO to both -- ignore them, they are not a problem.

foreign keys and performance

June 09, 2003 - 3:53 pm UTC

Reviewer: A reader from ca

hi tom
Thanx for sharing your knowledge with the community!

I have a situation where a table is part of a framework.
(let us call it table t whose primary key column is called
t_pk).

The t_pk column should become a foreign key in most of the other tables that use this framework. The team that develops this table maintains that they discourage users of the framework from creating foreign keys to their table. Since their table undergoes many inserts, deletes and updates, they say that users should instead register callbacks that get executed from the framework if they want to enforce things like cascade delete.
So, if I have a table my_table which has a column t_pk, I should not create a foreign key to table t. This makes it more complicated to ensure that the rows in my table get deleted when an entry is deleted in table t (I have to register a callback to delete my rows).

I argued that since the call to the "delete" callback would be synchronous, the performance would be the same (in both cases the deletion happens synchronously: in one case automatically, in the other through the callback). But I am not sure whether this could be an issue for inserts and updates.

My questions are:
1. Have you encountered this situation before, and what is your opinion about preferring callbacks over foreign keys for implementing features like cascade delete?
2. How would you verify or dispute their claim?


Thanx!


Tom Kyte

Followup  

June 09, 2003 - 6:49 pm UTC

(gotta be java programmers -- "framework", "callbacks")


ask them to prove that they can enforce integrity 100% -- efficiently, effectively and most importantly *CORRECTLY*.

Q) How can they "lock those rows in the child table that do not exist"????
A) they cannot (but oracle can)

what I mean is: someone decides to delete from t where t_pk = 5. OK, now what? They must query each table in turn (oh, that'll be super efficient, won't it, doing it themselves). Now, the problem is *they cannot see rows that are inserted in the child tables but not yet committed*. Now what?

the database provides tools. you use them, period. foreign keys are not the stuff of science fiction -- pretty much every database of worth out there supports them.

This is, simply put, the height of absurdity. I went out of my way to spend a couple of pages on silliness like this in the book I'm working on. My example comes from here:

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:6091237036109

advice: if you can get off of that project, you would be better off. These guys are not serious about data or data integrity. It is going to be a computer science project (lots of fun to code, lots of pizza late at night, plenty of soda) but it won't be something you want to actually maintain. That is my guess.

June 09, 2003 - 8:27 pm UTC

Reviewer: A reader from ca

Please bear with me as I try to understand the scenario:

so we have a table t with a primary key t_pk. I have another table t1 with a foreign key reference to t_pk (t1's column is also called t_pk).

I also have an index on the t1.t_pk column (if I have a foreign key, I have to index it, as we very well know).

When you delete (or insert or update) an entry in t, all the indexes on table t have to be updated. The index (or the foreign key) on t1.t_pk has nothing to do with it (at least so far), i.e. it has no impact on the performance of the delete, insert or update on t; it is immaterial. Correct?

Now let us say I add a "cascade delete" constraint on the t1.t_pk column.

Now, ONLY in the case of deletes from t will all matching entries in t1 have to be deleted. This has to be done either by Oracle or by the registered delete callback (assuming it is even feasible in a callback, which is a wrong assumption). Either way, I don't see why a callback would be faster (since the delete callback action is synchronous).

Also, for inserts and updates the fk index on t1 has no impact on the insert/updates of the table t, right?
(we are assuming that primary keys are not updated)

The last point is that I could have an index on t1.t_pk even if I don't have a foreign key, simply for performance reasons. So the requirement imposed by the SDK does not make sense and in fact hurts my performance.

There is one point which, if you could explain it more clearly, would be great!
You mentioned that

"
*they cannot see rows that are inserted in the child
tables yet not committed*. "

so I guess what you are saying is:

I am inserting a row into t1 with a t_pk value of 1 but haven't yet committed.
The row in table t with pk=1 gets deleted in another session, triggering all the "callbacks".

Then my session commits, leaving behind an orphan row.
Is this what you meant?

Thanx!:)
PS: I was hoping to just forward your answer's link -
but thought better - given the bashing:)

And "get out of project"? Easier said than done
in today's economy!:) besides I just got in:)

Thanx a lot!:)

Tom Kyte

Followup  

June 09, 2003 - 8:54 pm UTC

yes, you will have orphans.

ask them "why do you want to write a ton of code to maintain, at extra cost to the company -- especially in this economy -- when it is already 100% done for us?"
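The race behind "yes, you will have orphans" can be written out as a timeline. This is a hypothetical two-session sketch, using the t / t1 tables from the discussion above:

```sql
-- Hypothetical two-session timeline; t1.t_pk references t.t_pk only
-- informally (no declared foreign key, integrity left to callbacks).
--
--   Session 1: insert into t1 values (1);    -- child row, uncommitted
--   Session 2: delete from t where t_pk = 1;
--   Session 2: (callback) select * from t1
--              where t_pk = 1;               -- sees nothing: session 1
--                                            -- has not committed yet
--   Session 2: commit;
--   Session 1: commit;                       -- t1 now holds an orphan
--
-- With a declared foreign key, Oracle's constraint machinery serializes
-- the conflicting insert and delete, so the orphan cannot appear.
```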

What was it Kevin Loney quoted?

June 09, 2003 - 9:42 pm UTC

Reviewer: Trevor Welch

Kevin Loney author of several Oracle books
had a quote in the front of his books
something like

"Controlling developers is like herding cats"



well, are there any other reasons you can think of

June 25, 2003 - 9:59 pm UTC

Reviewer: A reader


"yes, you will have orphans."

We have a similar argument, except that it turns out the scenario (which creates child orphans) is not possible in our case, because of the timing of the deletes.

Now the argument is that with delete cascades the user would see a performance hit, since the parent table could potentially have child tables (in a 2-level hierarchy) with hundreds of rows each. The argument is that with purge delete callbacks, the user would just experience the delay associated with deleting the parent entry (in the UI); the remaining entries would get deleted later on, so at least the delete would "look" fast.

So , the question is what impact DOES the cascade
delete have (assuming indexes on foreign keys are in place)?

I am thinking of benchmarking but thought would ping
the guru first!

To set up the benchmark, I thought of the following steps:

1. Time a delete of a parent key with a two-level child hierarchy of maybe 400 rows or so.
2. Time the same delete without any foreign keys or cascade deletes.

Find out the difference between 1 and 2. Should I be thinking of anything else in terms of resources (latches etc.)?

That would be the difference, I guess (assuming that the "background" purge callbacks would not be noticeable).

Of course, we are all aware of the overhead of maintaining additional lines of code; I am trying to prove the case without that argument. This looks like one-handed Oracle foreign keys versus a two-handed purge callback solution. We are on 9i, btw.

Thank you!



Tom Kyte

Followup  

June 26, 2003 - 8:55 am UTC

Your case is "today"
Tomorrow will be different.

Avoid using database RI at your own risk.


IMHO -- there is no "performance impact" as you must do it, it is a requirement, it is not overhead since overhead is something that can be avoided (and that is not true here).


Of course just deleting the parent and orphaning the children is faster.
Conversely, of course just deleting the parent and orphaning the children is wrong; we call this a bug.

If you are worried about performance from the end users perspective, you might go with a "flag delete" on the parent (end date it) and use views that only show you the "non-deleted" rows in that table. You can schedule a job to delete them in the background if you like.
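The flag-delete pattern described above could be sketched as follows (table, view and column names here are hypothetical, not from the original answer):

```sql
-- Hypothetical sketch of the "flag delete" pattern.
alter table parent add (deleted_date date);

-- The application reads only through this view, so flagged rows
-- disappear immediately from the end user's perspective.
create or replace view active_parent as
  select * from parent where deleted_date is null;

-- The UI "delete" becomes a fast single-row update:
update parent set deleted_date = sysdate where pk = :id;

-- A background job (dbms_job in that era) later performs the real
-- delete, letting the indexed ON DELETE CASCADE do its work unseen:
--   delete from parent where deleted_date < sysdate - 1/24;
```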

thanx!

June 26, 2003 - 9:56 am UTC

Reviewer: A reader

That is what I was thinking (doing it in the background), but you clarified the actual approach a little more. I do agree with you that it is a requirement, not overhead!

foreign keys

June 26, 2003 - 2:01 pm UTC

Reviewer: A reader

"If you are worried about performance from the end users perspective, you might
go with a "flag delete" on the parent (end date it) and use views that only show
you the "non-deleted" rows in that table. You can schedule a job to delete them
in the background if you like. "

How about
1. marking the parent row to be deleted in one transaction
that returns immediately (With views that logically
delete for the entire application)
2. doing the actual delete in the background job (with
foreign key constraints)


Any thoughts (ASSUMING performance is a real issue -
which probably is not the case)?


Tom Kyte

Followup  

June 26, 2003 - 2:15 pm UTC

that's what I meant by (1) and (2)

thanx !

June 26, 2003 - 3:07 pm UTC

Reviewer: A reader

nice to get validation from you!

ok - i ran some benchmarks

July 04, 2003 - 6:23 pm UTC

Reviewer: A reader

Hi Tom
As usual, I appreciate the incredible amount of service you are providing to the Oracle community! Thank you so much!

I ran some benchmarks to find out the overhead of foreign key delete cascade constraints versus the callback mechanism. I assumed that the callback mechanism would have to internally select and delete the child records. (Please see the discussion above: the claim was that delete cascade constraints would result in a performance issue, with the proposed solution being callbacks instead.) Although I agree with the FK solution for the reasons given in the threads above (less coding, no "correctness" issues, the database gives it to you for free, etc.), I wanted to get an idea of what kind of performance impact we are talking about here, hence the benchmarks. I would appreciate your comments on the validity of the benchmark results and my conclusions.


Following are the two cases I tested (using your runstats
utility to compare the two cases)

Case 1:
Parent table has one record, child has 400 records.
child has a FK constraint with cascade delete on
parent key column.

Case 2:
Parent table has one record, child has 400 records.
child has no FK constraints.

In both cases, parent has a primary key constraint on the
parent key column.

Following are my results:

Run1 - with foreign key constraints
Run2 - without foreign key constraints

-----results ---
Run1 ran in 7 hsecs
Run2 ran in 8 hsecs
run 1 ran in 87.5% of the time

Name Run1 Run2 Diff
STAT...no buffer to keep pinne 0 349 349
STAT...consistent gets - exami 9 360 351
STAT...session logical reads 1,780 2,172 392
STAT...consistent gets 25 425 400
STAT...table scan rows gotten 1 401 400
LATCH.cache buffers chains 8,186 8,625 439

Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
20,130 20,494 364 98.22%
----------------

The results seem to indicate that FK constraints definitely do not incur any overhead compared to a callback-implemented solution; in fact, in the above case, the FK version seems marginally better.

Following is my run - showing the code that I ran
(the benchmark is run twice though in both runs the
results seemed to be about the same)

--- my code (with results)

test_user@apps> set echo on
test_user@apps> @fkb
test_user@apps>
test_user@apps> set serveroutput on
test_user@apps> --create a parent table
test_user@apps>
test_user@apps> drop table parent_table_with_fkrefs cascade constraints;

Table dropped.

test_user@apps>
test_user@apps> create table parent_table_with_fkrefs
2 as select sys_guid() as parent_key , owner, object_type, object_name
3 from all_objects
4 where rownum = 1
5 and object_type = 'FUNCTION';

Table created.

test_user@apps>
test_user@apps> alter table parent_table_with_fkrefs add constraint parent_table_pk1 primary key ( parent_key);

Table altered.

test_user@apps>
test_user@apps> drop table parent_table_without_fkrefs cascade constraints;

Table dropped.

test_user@apps>
test_user@apps> create table parent_table_without_fkrefs
2 as select sys_guid() as parent_key , owner, object_type, object_name
3 from all_objects
4 where rownum = 1
5 and object_type = 'PROCEDURE';

Table created.

test_user@apps>
test_user@apps> alter table parent_table_without_fkrefs add constraint parent_table_pk2 primary key ( parent_key);

Table altered.

test_user@apps>
test_user@apps> drop table child_table_with_fk;

Table dropped.

test_user@apps> create table child_table_with_fk
2 (
3 parent_key raw(16) not null,
4 child_value varchar2(400) not null,
5 constraint child_pk primary key(parent_key, child_value),
6 constraint child_fk foreign key(parent_key)
7 references parent_table_with_fkrefs(parent_key)
8 on delete cascade
9 );

Table created.

test_user@apps>
test_user@apps> drop table child_table_without_fk;

Table dropped.

test_user@apps> create table child_table_without_fk
2 (
3 parent_key raw(16) not null,
4 child_value varchar2(400) not null,
5 constraint child_pk1 primary key(parent_key, child_value)
6 );

Table created.

test_user@apps>
test_user@apps> declare
2 cursor l_fk_cursor is select * from parent_table_with_fkrefs;
3 l_fk_rec l_fk_cursor%rowtype;
4 cursor l_no_fk_cursor is select * from parent_table_without_fkrefs;
5 l_no_fk_rec l_no_fk_cursor%rowtype;
6 l_num_of_recs integer := 400;
7 begin
8
9 for l_fk_rec in l_fk_cursor loop
10 dbms_output.put_line( 'obj type ' || l_fk_rec.object_type );
11 for i in 1 .. l_num_of_recs loop
12 insert into child_table_with_fk values ( l_fk_rec.parent_key,
13 i || l_fk_rec.object_type);
14 end loop;
15 end loop;
16
17 for l_no_fk_rec in l_no_fk_cursor loop
18 dbms_output.put_line( 'obj type ' || l_no_fk_rec.object_type );
19 for i in 1 .. l_num_of_recs loop
20 insert into child_table_without_fk values ( l_no_fk_rec.parent_key, i ||
21 l_no_fk_rec.object_type);
22 end loop;
23 end loop;
24 end;
25 /
obj type FUNCTION
obj type PROCEDURE

PL/SQL procedure successfully completed.

test_user@apps>
test_user@apps> commit;

Commit complete.

test_user@apps> @run_bm
test_user@apps> create or replace package run_benchmark_pkg as
2 procedure delete_with_fk;
3 procedure delete_without_fk;
4 procedure benchmark;
5 end;
6 /

Package created.

test_user@apps> create or replace package body run_benchmark_pkg as
2 procedure delete_with_fk
3 is
4 begin
5 delete parent_table_with_fkrefs;
6 end;
7
8 procedure delete_without_fk
9 is
10 begin
11 delete child_table_without_fk where parent_key in
12 (select parent_key from parent_table_without_fkrefs);
13
14 delete parent_table_without_fkrefs;
15 end;
16
17 procedure benchmark
18 is
19 begin
20 runstats_pkg.rs_start;
21 run_benchmark_pkg.delete_with_fk;
22 runstats_pkg.rs_middle;
23 run_benchmark_pkg.delete_without_fk;
24 runstats_pkg.rs_stop(200);
25 end;
26
27 end;
28 /

Package body created.

test_user@apps>
test_user@apps> begin
2 --run_benchmark_pkg.benchmark;
3 runstats_pkg.rs_start;
4 run_benchmark_pkg.delete_with_fk;
5 runstats_pkg.rs_middle;
6 run_benchmark_pkg.delete_without_fk;
7 runstats_pkg.rs_stop(200);
8 end;
9 /
Run1 ran in 7 hsecs
Run2 ran in 8 hsecs
run 1 ran in 87.5% of the time

Name Run1 Run2 Diff
STAT...no buffer to keep pinne 0 349 349
STAT...consistent gets - exami 9 360 351
STAT...session logical reads 1,780 2,172 392
STAT...consistent gets 25 425 400
STAT...table scan rows gotten 1 401 400
LATCH.cache buffers chains 8,186 8,625 439

Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
20,130 20,494 364 98.22%

PL/SQL procedure successfully completed.

test_user@apps> @fkb
test_user@apps>
test_user@apps> set serveroutput on
test_user@apps> --create a parent table
test_user@apps>
test_user@apps> drop table parent_table_with_fkrefs cascade constraints;

Table dropped.

test_user@apps>
test_user@apps> create table parent_table_with_fkrefs
2 as select sys_guid() as parent_key , owner, object_type, object_name
3 from all_objects
4 where rownum = 1
5 and object_type = 'FUNCTION';

Table created.

test_user@apps>
test_user@apps> alter table parent_table_with_fkrefs add constraint parent_table_pk1 primary key ( parent_key);

Table altered.

test_user@apps>
test_user@apps> drop table parent_table_without_fkrefs cascade constraints;

Table dropped.

test_user@apps>
test_user@apps> create table parent_table_without_fkrefs
2 as select sys_guid() as parent_key , owner, object_type, object_name
3 from all_objects
4 where rownum = 1
5 and object_type = 'PROCEDURE';

Table created.

test_user@apps>
test_user@apps> alter table parent_table_without_fkrefs add constraint parent_table_pk2 primary key ( parent_key);

Table altered.

test_user@apps>
test_user@apps> drop table child_table_with_fk;

Table dropped.

test_user@apps> create table child_table_with_fk
2 (
3 parent_key raw(16) not null,
4 child_value varchar2(400) not null,
5 constraint child_pk primary key(parent_key, child_value),
6 constraint child_fk foreign key(parent_key)
7 references parent_table_with_fkrefs(parent_key)
8 on delete cascade
9 );

Table created.

test_user@apps>
test_user@apps> drop table child_table_without_fk;

Table dropped.

test_user@apps> create table child_table_without_fk
2 (
3 parent_key raw(16) not null,
4 child_value varchar2(400) not null,
5 constraint child_pk1 primary key(parent_key, child_value)
6 );

Table created.

test_user@apps>
test_user@apps> declare
2 cursor l_fk_cursor is select * from parent_table_with_fkrefs;
3 l_fk_rec l_fk_cursor%rowtype;
4 cursor l_no_fk_cursor is select * from parent_table_without_fkrefs;
5 l_no_fk_rec l_no_fk_cursor%rowtype;
6 l_num_of_recs integer := 400;
7 begin
8
9 for l_fk_rec in l_fk_cursor loop
10 dbms_output.put_line( 'obj type ' || l_fk_rec.object_type );
11 for i in 1 .. l_num_of_recs loop
12 insert into child_table_with_fk values ( l_fk_rec.parent_key,
13 i || l_fk_rec.object_type);
14 end loop;
15 end loop;
16
17 for l_no_fk_rec in l_no_fk_cursor loop
18 dbms_output.put_line( 'obj type ' || l_no_fk_rec.object_type );
19 for i in 1 .. l_num_of_recs loop
20 insert into child_table_without_fk values ( l_no_fk_rec.parent_key, i ||
21 l_no_fk_rec.object_type);
22 end loop;
23 end loop;
24 end;
25 /
obj type FUNCTION
obj type PROCEDURE

PL/SQL procedure successfully completed.

test_user@apps>
test_user@apps> commit;

Commit complete.

test_user@apps> @run_bm
test_user@apps> create or replace package run_benchmark_pkg as
2 procedure delete_with_fk;
3 procedure delete_without_fk;
4 procedure benchmark;
5 end;
6 /

Package created.

test_user@apps> create or replace package body run_benchmark_pkg as
2 procedure delete_with_fk
3 is
4 begin
5 delete parent_table_with_fkrefs;
6 end;
7
8 procedure delete_without_fk
9 is
10 begin
11 delete child_table_without_fk where parent_key in
12 (select parent_key from parent_table_without_fkrefs);
13
14 delete parent_table_without_fkrefs;
15 end;
16
17 procedure benchmark
18 is
19 begin
20 runstats_pkg.rs_start;
21 run_benchmark_pkg.delete_with_fk;
22 runstats_pkg.rs_middle;
23 run_benchmark_pkg.delete_without_fk;
24 runstats_pkg.rs_stop(200);
25 end;
26
27 end;
28 /

Package body created.

test_user@apps>
test_user@apps> begin
2 --run_benchmark_pkg.benchmark;
3 runstats_pkg.rs_start;
4 run_benchmark_pkg.delete_with_fk;
5 runstats_pkg.rs_middle;
6 run_benchmark_pkg.delete_without_fk;
7 runstats_pkg.rs_stop(200);
8 end;
9 /
Run1 ran in 8 hsecs
Run2 ran in 9 hsecs
run 1 ran in 88.89% of the time

Name Run1 Run2 Diff
STAT...consistent gets - exami 10 358 348
STAT...no buffer to keep pinne 0 349 349
STAT...session logical reads 1,755 2,136 381
STAT...consistent gets 20 417 397
STAT...table scan rows gotten 1 401 400
LATCH.cache buffers chains 8,150 8,552 402

Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
20,028 20,310 282 98.61%

PL/SQL procedure successfully completed.

test_user@apps> spool off



Tom Kyte

Followup  

July 05, 2003 - 9:42 am UTC

I used this:

ops$tkyte@ORA920> /*
DOC>
DOC>drop table c1;
DOC>drop table c2;
DOC>drop table p1;
DOC>drop table p2;
DOC>set echo on
DOC>
DOC>
DOC>create table p1 ( x int primary key, data char(80) );
DOC>create table p2 ( x int primary key, data char(80) );
DOC>
DOC>insert into p1 select object_id, 'x' from all_objects;
DOC>insert into p2 select * from p1;
DOC>
DOC>create table c1 ( x references p1 on delete cascade, data char(30) );
DOC>create table c2 ( x int, data char(30) );
DOC>create index c1_idx on c1(x);
DOC>create index c2_idx on c2(x);
DOC>
DOC>insert into c1 select x, 'x' from p1, (select rownum from all_objects where rownum<=5);
DOC>insert into c2 select * from c1;
DOC>
DOC>analyze table p1 compute statistics for table for all indexes for all indexed columns;
DOC>analyze table p2 compute statistics for table for all indexes for all indexed columns;
DOC>analyze table c1 compute statistics for table for all indexes for all indexed columns;
DOC>analyze table c2 compute statistics for table for all indexes for all indexed columns;
DOC>*/
ops$tkyte@ORA920>
ops$tkyte@ORA920> exec runStats_pkg.rs_start;

PL/SQL procedure successfully completed.

ops$tkyte@ORA920> begin
  2          for x in ( select * from p1 )
  3          loop
  4                  delete from p1 where x = x.x;
  5                  commit;
  6          end loop;
  7  end;
  8  /

PL/SQL procedure successfully completed.

ops$tkyte@ORA920> exec runStats_pkg.rs_middle;

PL/SQL procedure successfully completed.

ops$tkyte@ORA920> begin
  2          for x in ( select * from p2 )
  3          loop
  4                  delete from c2 where x = x.x;
  5                  delete from p2 where x = x.x;
  6                  commit;
  7          end loop;
  8  end;
  9  /

PL/SQL procedure successfully completed.

ops$tkyte@ORA920> exec runStats_pkg.rs_stop;
Run1 ran in 7343 hsecs
Run2 ran in 8385 hsecs
run 1 ran in 87.57% of the time


as it better simulates the "real world" -- and does the work often enough in a loop to give measurable timings (the difference between 8 and 9 hsecs is statistically insignificant, to the point where the 9 hsecs could represent the same or even less time than the 8 hsecs).  But we see the ON DELETE CASCADE is in fact faster -- and if we actually did this OUTSIDE THE DATABASE using a Java client, you would find it faster still (the cascade removes the network round trips and the extra Java code).


Name                                  Run1        Run2        Diff
STAT...CPU used by this sessio       5,981       6,350         369
STAT...redo size                99,192,988 105,686,168   6,493,180

Run1 latches total versus runs -- difference and pct
Run1        Run2        Diff       Pct

4,969,270   5,020,724      51,454     98.98%

PL/SQL procedure successfully completed.


It also generated a lot less redo and used less CPU -- those are always good things as well.   

additional comments on benchmark

July 04, 2003 - 6:27 pm UTC

Reviewer: A reader

Well,
looks like the FK solution does very well
in terms of consistent gets as well (as expected).
By the way, what does "STAT...consistent gets - exami" stand for?

Thanks!

Name Run1 Run2 Diff
STAT...consistent gets - exami 10 358 348
STAT...no buffer to keep pinne 0 349 349
STAT...session logical reads 1,755 2,136 381
STAT...consistent gets 20 417 397
STAT...table scan rows gotten 1 401 400
LATCH.cache buffers chains 8,150 8,552 402


thanx Tom for the previous followup

July 07, 2003 - 10:08 am UTC

Reviewer: A reader

on Foreign Key benchmarks. I was already writing the case that
you gave me (though my code was more complicated - I was using dynamic SQL to generate the
schema, as I wanted to run it for 10 parent tables, each
referring to 10 child tables, each of which has around 400 records). One problem was writing non-dynamic code for
the benchmarks (as the number of tables is huge - you have to write lots of repetitive static code for a real-world scenario).

An argument against FKs brought up by a colleague of mine
is that if there are 20 child tables and you delete
one record from the parent table, Oracle will have to go through all 20 child tables, whereas the callback mechanism
does not (based on the type info, it knows which
child tables can contain the record). This is why I am
trying to create a benchmark where I can simulate this scenario. (The above benchmark of mine does not address this
issue, though - I am still working on the new benchmark.)

Overall, I am definitely convinced that FKs are the better
solution. The aforementioned case would be micro-optimization, at best, of a relatively rare scenario
and is not worth all the extra code.





Tom Kyte

Followup  

July 07, 2003 - 10:53 am UTC

Oracle will have to go through as many child tables as your callback mechanism would.

How could it be otherwise?

if a child was not registered for a callback, you would not have any fkeys on it.

Otherwise, you would be comparing two totally different implementations.


Tell your guy to try doing the callback thing, then post the code here. I'll tell you why it won't work. It is very, very hard to lock data that doesn't exist, you see - so basically, your callback implementation would have to prevent all inserts and updates against the child tables -- ALL inserts and ALL updates.

Oracle doesn't (assuming the fkeys are indexed -- if the fkeys are not indexed, we too must resort to full table locks on the child tables).

Ask this guy: "so, what do you think the performance impact of the FULL TABLE locks -- which are the only correct way for you to implement this wacky logic -- will be?"
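A quick way to spot the unindexed foreign keys this discussion keeps coming back to is to compare constraint columns against index columns in the data dictionary. The following is a simplified sketch against the standard USER_CONSTRAINTS, USER_CONS_COLUMNS and USER_IND_COLUMNS views -- it flags FK columns with no index in the matching position; multi-column keys and function-based indexes may need a closer look:

```sql
-- sketch: foreign-key columns with no supporting index (candidates
-- for the child-table locking problem described above)
select c.table_name, c.constraint_name, cc.column_name
  from user_constraints  c,
       user_cons_columns cc
 where c.constraint_type = 'R'          -- 'R' = referential (foreign key)
   and cc.constraint_name = c.constraint_name
   and not exists
       ( select null
           from user_ind_columns ic
          where ic.table_name      = c.table_name
            and ic.column_name     = cc.column_name
            and ic.column_position = cc.position );
```

Run it as the schema owner; each row returned is a child table that is a candidate for the full-table-lock behavior during parent deletes and primary-key updates.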

wow that was fast!!!

July 07, 2003 - 11:21 am UTC

Reviewer: A reader from ca

"Oracle will have to go through as many child tables as your callback mechanism would."

Following is the argument:

In the case of callbacks, we "know" from the data
which child tables hold rows for a parent
record of a given "type".

For example, our code (pseudo code) would be something like:

if parent_type_col = 'A' then
   delete all children rows from table t1;
   delete all children rows from table t2;
else if parent_type_col = 'B' then
   delete all children rows from table t3;
   ...
end if;

The Oracle implementation cannot do the above, as it presumably
goes through all the child tables - it does not know
that a parent record with a value of 'A' in the column
parent_type_col cannot exist in t3, for example.

Regarding the table-level locking, that is an excellent point. However, when I am deleting child records, there cannot be any inserts against the parent whose record is being deleted. Well, in our case, it so happens that the parent record will be marked as "deleted" first. All subsequent inserts and updates will not be able
to see the parent record, and so cannot initiate an insert of a
child record. In short, deletion of a parent record is only allowed when the record is of a certain status, at which time you won't be doing inserts into the child tables (or so the claim goes). Since, when deleting the parent record,
we are sure that there can't be any inserts into the child
records, we don't need to lock the child table (sounds ugly,
but I think that is the response I would get).


Thanx a billion for all the help again!!
Hope you had a great 4th of July!





Tom Kyte

Followup  

July 07, 2003 - 11:40 am UTC

so -- ask them why they want to write code; are they paid by the line?

I don't get it, I will never get it, it will never make sense to me, it will never be the right approach.

how will they prevent future inserts/updates from seeing their parent? oh, more code. And do they realize that in order for THAT code to work -- they must first LOCK some stuff, and yadda yadda yadda.
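To make the locking point concrete: for a hand-rolled delete to be safe, every session has to serialize on something before the children are deleted, or a new child row for this parent could appear mid-delete. One way to sketch it (hypothetical bind name :parent_key, against the p2/c2 tables from the benchmark above) -- and note this is only correct if every child-inserting session takes the same lock first:

```sql
declare
  l_x p2.x%type;
begin
  -- pin the parent row before touching the children; every session
  -- that inserts into c2 must do this same SELECT ... FOR UPDATE
  -- first, or the scheme is broken
  select x into l_x
    from p2
   where x = :parent_key
     for update;

  delete from c2 where x = :parent_key;   -- children first
  delete from p2 where x = :parent_key;   -- then the parent
  commit;
end;
/
```

With a declarative foreign key, the database does this serialization for you; the sketch just shows what "more code" actually has to look like.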


bottom lines here:

a) which will be the fastest to implement (code wise)?
b) which requires the least maintenance?
c) have they benchmarked their approach (which will definitely be slower than using the builtin stuff)?
d) have they justified to anyone why they need to write tons of code -- code you must maintain forever -- that does exactly what the database they already paid for does?


last I'll say on the topic.

thanx Tom!

July 07, 2003 - 11:52 am UTC

Reviewer: A reader from ca

I can see your frustration and understand it - thanks
for all the help so far! I completely agree with your
arguments - I am actually writing up an article,
with benchmarks, that I will send to this team
and that will hopefully convince them.



your benchmark runs slightly slower in my test

July 07, 2003 - 12:19 pm UTC

Reviewer: A reader

Hi Tom,
Surprised that your benchmark ran a little slower
on my database (9i).

Thanks and regards.



SQL> @toms_followup
SQL> drop table c1;

Table dropped.

SQL> drop table c2;

Table dropped.

SQL> drop table p1;

Table dropped.

SQL> drop table p2;

Table dropped.

SQL> set echo on
SQL> 
SQL> create table p1 ( x int primary key, data char(80) );

Table created.

SQL> create table p2 ( x int primary key, data char(80) );

Table created.

SQL> 
SQL> create table c1 ( x references p1 on delete cascade, data char(30) );

Table created.

SQL> create table c2 ( x int, data char(30) );

Table created.

SQL> create index c1_idx on c1(x);

Index created.

SQL> create index c2_idx on c2(x);

Index created.

SQL> 
SQL> analyze table p1 compute statistics for table for all indexes for all
  2  indexed columns;

Table analyzed.

SQL> analyze table p2 compute statistics for table for all indexes for all
  2  indexed columns;

Table analyzed.

SQL> analyze table c1 compute statistics for table for all indexes for all
  2  indexed columns;

Table analyzed.

SQL> analyze table c2 compute statistics for table for all indexes for all
  2  indexed columns;

Table analyzed.

SQL> 
SQL> insert into p1 select object_id, 'x' from all_objects;

33317 rows created.

SQL> insert into p2 select * from p1;

33317 rows created.

SQL> insert into c1 select x, 'x' from p1, (select rownum from all_objects where
  2  rownum<=5);

166585 rows created.

SQL> insert into c2 select * from c1;

166585 rows created.

SQL> exec runStats_pkg.rs_start;

PL/SQL procedure successfully completed.

SQL> 
SQL> begin
  2    for x in ( select * from p1 )
  3    loop
  4       delete from p1 where x = x.x;
  5       commit;
  6       end loop;
  7    end;
  8  /

PL/SQL procedure successfully completed.

SQL> 
SQL> exec runStats_pkg.rs_middle;

PL/SQL procedure successfully completed.

SQL> 
SQL> begin
  2    for x in ( select * from p2 )
  3    loop
  4       delete from c2 where x = x.x;
  5       delete from p2 where x = x.x;
  6       commit;
  7    end loop;
  8  end;
  9  /

PL/SQL procedure successfully completed.

SQL> 
SQL> exec runStats_pkg.rs_stop;
Run1 ran in 21184 hsecs                                                         
Run2 ran in 18593 hsecs                                                         
run 1 ran in 113.94% of the time                                                
                                                                                   
Name                                  Run1        Run2        Diff
LATCH.ncodef allocation latch            4           3          -1
LATCH.session switching                  4           3          -1
LATCH.transaction branch alloc           4           3          -1
STAT...calls to kcmgcs                   3           4           1
STAT...parse time cpu                    3           4           1
STAT...transaction tables cons           0           1           1
STAT...cluster key scans                 1           2           1
STAT...cluster key scan block            1           2           1
STAT...CR blocks created               419         420           1
LATCH.channel handle pool latc           0           2           2
STAT...sorts (rows)                  2,873       2,875           2
STAT...redo synch time                  26          24          -2
STAT...commit cleanout failure           3           5           2
LATCH.latch wait list                    2           0          -2
LATCH.user lock                         44          42          -2
LATCH.library cache load lock            2           4           2
LATCH.process group creation             0           2           2
LATCH.FOB s.o list latch                 4           7           3
LATCH.job_queue_processes para          39          36          -3
STAT...switch current to new b      33,230      33,227          -3
STAT...sorts (memory)                    9          14           5
STAT...prefetched blocks               370         376           6
LATCH.channel operations paren          69          62          -7
STAT...consistent changes              881         889           8
STAT...transaction tables cons           0           8           8
LATCH.session timer                     71          62          -9
LATCH.hash table column usage            0          10          10
STAT...cursor authentications           15           4         -11
LATCH.post/wait queue latch             61          49         -12
LATCH.sort extent pool                  20           4         -16
STAT...parse time elapsed               24           5         -19
STAT...bytes received via SQL*         968         997          29
LATCH.session idle bit              33,514      33,556          42
STAT...redo buffer allocation          574         532         -42
STAT...table fetch by rowid              8          50          42
STAT...consistent gets             170,839     170,884          45
STAT...opened cursors cumulati          21          68          47
LATCH.ktm global data                   49           1         -48
STAT...redo log space wait tim          40          89          49
STAT...shared hash latch upgra      33,422      33,474          52
STAT...consistent gets - exami     102,243     102,190         -53
STAT...parse count (total)              27          80          53
STAT...execute count                99,978     100,033          55
LATCH.transaction allocation       100,519     100,580          61
STAT...commit cleanouts succes     266,624     266,549         -75
STAT...messages sent                   399         310         -89
LATCH.shared pool                    2,861       2,951          90
LATCH.global ctx hash table la         110         204          94
STAT...calls to get snapshot s     133,300     133,400         100
STAT...buffer is not pinned co      33,355      33,482         127
STAT...enqueue releases            533,566     533,697         131
STAT...enqueue requests            533,561     533,697         136
LATCH.list of block allocation      66,903      67,068         165
STAT...commit cleanout failure         168           3        -165
LATCH.active checkpoint queue          350         184        -166
LATCH.dml lock allocation          133,708     133,881         173
LATCH.redo writing                  36,411      36,228        -183
STAT...write clones created in         385         199        -186
STAT...calls to kcmgas              67,352      67,165        -187
STAT...commit cleanouts            266,795     266,557        -238
STAT...CPU used by this sessio       7,222       6,937        -285
STAT...CPU used when call star       7,222       6,937        -285
STAT...no work - consistent re      34,737      34,432        -305
STAT...deferred (CURRENT) bloc     100,513     100,207        -306
STAT...db block changes            944,406     944,741         335
STAT...free buffer requested        42,272      42,629         357
STAT...recursive cpu usage           6,966       6,578        -388
LATCH.resmgr group change latc     433,891     434,325         434
LATCH.resmgr:schema config         433,891     434,325         434
STAT...recursive calls             166,782     167,237         455
STAT...redo entries                504,940     505,397         457
STAT...physical reads                1,391       1,863         472
STAT...immediate (CURRENT) blo      66,531      66,036        -495
LATCH.messages                       5,221       4,722        -499
STAT...hot buffers moved to he         823         297        -526
LATCH.undo global data             112,162     111,541        -621
LATCH.enqueue hash chains        1,069,654   1,070,329         675
LATCH.redo allocation              508,035     508,736         701
STAT...change write time             9,310       8,496        -814
STAT...cleanouts only - consis         105         944         839
STAT...immediate (CR) block cl         105         944         839
LATCH.enqueues                   1,971,834   1,972,839       1,005
LATCH.row cache objects              1,965       3,151       1,186
STAT...dirty buffers inspected       2,038         564      -1,474
LATCH.session allocation            68,658      70,238       1,580
LATCH.multiblock read objects          176       2,374       2,198
STAT...redo size               119,764,720 119,768,928       4,208
LATCH.cache buffers lru chain       72,384      76,939       4,555
STAT...free buffer inspected         7,071         564      -6,507
LATCH.library cache                252,281     259,907       7,626
LATCH.checkpoint queue latch        80,163      71,806      -8,357
STAT...session logical reads       925,023     856,846     -68,177
STAT...db block gets               754,184     685,962     -68,222
LATCH.cache buffers chains       3,954,381   3,767,795    -186,586

Run1 latches total versus runs -- difference and pct
Run1        Run2        Diff       Pct
9,339,536   9,164,060    -175,476  101.91%

PL/SQL procedure successfully completed.


SQL> spool off 

Tom Kyte

Followup  

July 07, 2003 - 12:40 pm UTC

given that all machines are not created equal, I'm not surprised. I can find slower machines to run on as well :)

I meant something else

July 07, 2003 - 2:08 pm UTC

Reviewer: A reader

I meant that run1 (the foreign key cascade delete case) ran slower than run2 (the callback approach). Sorry for the
confusion!

Regards

Tom Kyte

Followup  

July 07, 2003 - 3:43 pm UTC

you had:

STAT...CPU used by this sessio 7,222 6,937


which is only about 4% different (elapsed time can be very much dependent on what else is happening on the machine)


but, I believe it has a lot to do with you analyzing AND THEN loading, rather than LOADING and then analyzing.
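The load-then-analyze ordering matters because statistics gathered on empty tables leave the optimizer planning for zero rows. A hedged sketch of the corrected ordering for the p1/c1 pair from the script above, using DBMS_STATS (the supported alternative to the ANALYZE command used in the transcript):

```sql
-- load first ...
insert into p1 select object_id, 'x' from all_objects;
insert into c1
  select x, 'x'
    from p1, ( select rownum from all_objects where rownum <= 5 );
commit;

-- ... then gather statistics, so row counts reflect the loaded data
exec dbms_stats.gather_table_stats( user, 'P1', cascade => true )
exec dbms_stats.gather_table_stats( user, 'C1', cascade => true )
```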




thanx tom !

July 08, 2003 - 9:31 pm UTC

Reviewer: A reader

but even after analyzing after loading the data, run1 (with FKs) still runs slower than run2 (using direct deletes without
FK cascade deletes). I would have expected that this should
never happen... sorry about the hassle - feel free to ignore
this, as the difference is not too much.

--
SQL> set echo on
SQL> set serveroutput on size 1000000
SQL> @toms_followup
SQL> set echo on;
SQL> set serveroutput on size 1000000
SQL> drop table c1;

Table dropped.

SQL> drop table c2;

Table dropped.

SQL> drop table p1;

Table dropped.

SQL> drop table p2;

Table dropped.

SQL> 
SQL> create table p1 ( x int primary key, data char(80) );

Table created.

SQL> create table p2 ( x int primary key, data char(80) );

Table created.

SQL> 
SQL> create table c1 ( x references p1 on delete cascade, data char(30) );

Table created.

SQL> create table c2 ( x int, data char(30) );

Table created.

SQL> create index c1_idx on c1(x);

Index created.

SQL> create index c2_idx on c2(x);

Index created.

SQL> 
SQL> insert into p1 select object_id, 'x' from all_objects;

33318 rows created.

SQL> insert into p2 select * from p1;

33318 rows created.

SQL> insert into c1 select x, 'x' from p1, (select rownum from all_objects where
  2  rownum<=5);

166590 rows created.

SQL> insert into c2 select * from c1;

166590 rows created.

SQL> 
SQL> analyze table p1 compute statistics for table for all indexes for all
  2  indexed columns;

Table analyzed.

SQL> analyze table p2 compute statistics for table for all indexes for all
  2  indexed columns;

Table analyzed.

SQL> analyze table c1 compute statistics for table for all indexes for all
  2  indexed columns;

Table analyzed.

SQL> analyze table c2 compute statistics for table for all indexes for all
  2  indexed columns;

Table analyzed.

SQL> 
SQL> exec runStats_pkg.rs_start;

PL/SQL procedure successfully completed.

SQL> 
SQL> begin
  2    for x in ( select * from p1 )
  3    loop
  4       delete from p1 where x = x.x;
  5       commit;
  6       end loop;
  7    end;
  8  /

PL/SQL procedure successfully completed.

SQL> 
SQL> exec runStats_pkg.rs_middle;

PL/SQL procedure successfully completed.

SQL> 
SQL> begin
  2    for x in ( select * from p2 )
  3    loop
  4       delete from c2 where x = x.x;
  5       delete from p2 where x = x.x;
  6       commit;
  7    end loop;
  8  end;
  9  /

PL/SQL procedure successfully completed.

SQL> 
SQL> exec runStats_pkg.rs_stop;
Run1 ran in 19833 hsecs                                                         
Run2 ran in 19026 hsecs                                                         
run 1 ran in 104.24% of the time                                                
                                                                                   
Name                                Run1      Run2      Diff                    
LATCH.job_queue_processes para            32            31            -1
LATCH.user lock                           42            43             1
STAT...cluster key scan block              1             2             1
STAT...parse time elapsed                  4             5             1
STAT...redo log space requests             1             2             1
STAT...parse time cpu                      5             4            -1
STAT...cluster key scans                   1             2             1
LATCH.latch wait list                      2             0            -2
LATCH.library cache load lock              2             4             2
LATCH.loader state object free             2             4             2
LATCH.session timer                       66            64            -2
STAT...sorts (rows)                    2,873         2,875             2
LATCH.session idle bit                33,553        33,551            -2
LATCH.channel operations paren            64            61            -3
LATCH.FIB s.o chain latch                  4             8             4
STAT...calls to kcmgcs                     3             7             4
LATCH.post/wait queue latch               54            50            -4
LATCH.FOB s.o list latch                   4             8             4
STAT...commit cleanout failure             6             1            -5
STAT...table fetch by rowid               56            51            -5
STAT...deferred (CURRENT) bloc        98,935        98,940             5
STAT...sorts (memory)                      9            14             5
LATCH.hash table column usage              0             6             6
LATCH.sort extent pool                    10             4            -6
STAT...switch current to new b        33,240        33,234            -6
STAT...parse count (total)                76            82             6
STAT...consistent changes                881           887             6
STAT...calls to get snapshot s       133,401       133,407             6
STAT...enqueue requests              533,723       533,716            -7
STAT...consistent gets - exami       100,391       100,399             8
STAT...enqueue releases              533,724       533,716            -8
STAT...execute count                 100,030       100,038             8
STAT...commit cleanouts              266,557       266,566             9
STAT...cursor authentications              7            17            10
STAT...redo entries                  504,478       504,467           -11
STAT...buffer is not pinned co        33,500        33,486           -14
STAT...commit cleanouts succes       266,547       266,561            14
STAT...prefetched blocks                 359           376            17
STAT...calls to kcmgas                67,090        67,109            19
LATCH.ktm global data                     21             1           -20
STAT...redo buffer allocation            493           517            24
STAT...redo synch time                    30            54            24
STAT...write clones created in           111           136            25
STAT...bytes received via SQL*           968           997            29
STAT...shared hash latch upgra        33,383        33,426            43
STAT...redo log space wait tim            42            87            45
STAT...hot buffers moved to he            50           108            58
STAT...recursive calls               167,193       167,272            79
STAT...consistent gets               168,965       168,878           -87
LATCH.global ctx hash table la           207           108           -99
STAT...no work - consistent re        34,768        34,667          -101
LATCH.list of block allocation        67,058        66,956          -102
STAT...messages sent                     503           387          -116
LATCH.active checkpoint queue            399           265          -134
LATCH.dml lock allocation            133,857       133,722          -135
STAT...db block changes              944,003       943,834          -169
LATCH.transaction allocation         100,615       100,418          -197
STAT...physical reads                  1,634         1,856           222
STAT...CPU used by this sessio         7,183         6,877          -306
STAT...CPU used when call star         7,183         6,877          -306
LATCH.redo writing                    36,599        36,263          -336
LATCH.shared pool                      3,030         2,657          -373
STAT...dirty buffers inspected         1,567         1,191          -376
STAT...free buffer inspected           1,569         1,191          -378
STAT...recursive cpu usage             6,884         6,504          -380
STAT...free buffer requested          42,158        42,582           424
LATCH.resmgr group change latc       434,345       433,894          -451
LATCH.resmgr:schema config           434,345       433,894          -451
STAT...change write time               6,506         7,223           717
LATCH.messages                         5,926         4,993          -933
LATCH.redo allocation                508,289       507,355          -934
LATCH.multiblock read objects          1,132           160          -972
LATCH.undo global data               110,573       109,420        -1,153
LATCH.session allocation              70,292        68,749        -1,543
LATCH.enqueue hash chains          1,070,958     1,069,238        -1,720
LATCH.row cache objects                3,834         1,379        -2,455
LATCH.enqueues                     1,974,058     1,970,959        -3,099
LATCH.checkpoint queue latch          79,841        72,710        -7,131
LATCH.cache buffers lru chain         76,212        68,531        -7,681
LATCH.library cache                  261,633       252,993        -8,640
STAT...redo size                 119,676,360   119,661,884       -14,476
STAT...db block gets                 754,486       685,937       -68,549
STAT...session logical reads         923,451       854,815       -68,636
LATCH.cache buffers chains         4,014,000     3,729,485      -284,515
                                                                                   
Run1 latches total versus runs -- difference and pct                            
Run1      Run2      Diff     Pct                                                
9,421,151         9,098,076          -323,075 103.55%                           

PL/SQL procedure successfully completed.

SQL> 
SQL> spool off 

fk cascade delete

July 14, 2003 - 12:09 pm UTC

Reviewer: A reader

hi tom
I am back (the reader above with cascade delete problem)
Now that it has been shown that cascade delete outperforms the callback delete, the team's argument is that the main reason we use callbacks is that we can control the number of deletes. We have around 200 child tables, each containing around 20,000 rows. If we use cascade delete we run out of rollback space; using callbacks we can control the number of rows we delete (and commit) at a time.

I know your views about committing frequently and about sizing the rollback segment properly. My questions are:
1. Do indexes come into play here? (I think they do.) Is there a way to "disable indexes" on the parent and child tables just during the cascade delete and enable them afterwards?
Would that mitigate the rollback segment (RBS) concerns?
As I write this I realize it may not work, especially if the deletes are frequent, but your views can't hurt anyway.
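The mechanics for question 1 exist, though they probably cut the wrong way. A sketch only (the index name is one of the hypothetical C_FK_n indexes from the test schema later in this thread): marking the FK index unusable removes its maintenance cost during the delete, but the cascade then has to full-scan the child table to find the rows, which is exactly what the index on the foreign key was created to avoid.

```sql
-- sketch: take a child table's FK index offline for a bulk operation, then rebuild it
alter index c_fk_1_idx unusable;
alter session set skip_unusable_indexes = true;  -- allow DML while the index is unusable

-- ... run the cascade delete here; Oracle now full-scans C_FK_1 per parent row ...

alter index c_fk_1_idx rebuild;
```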

2. Also, what hardware costs are we talking about? I am not a DBA; I would like to get a sense of how much we would spend on production disks if, say, I wanted to configure a 100 GB rollback segment (an arbitrary number that may not reflect reality, but it gives a cost sense).

Overall, you don't need to convince me about the disadvantages of committing frequently.

Thanx!

Tom Kyte

Followup  

July 15, 2003 - 12:52 am UTC

I sort of give up -- they are determined to write tons of code and maintain tons of code.

disk is pretty cheap -- I'll bet you could buy more disk than you need for rollback for less than what we got paid during the time you've had these discussions with the people who want to write code.

thanx tom!

July 15, 2003 - 5:31 pm UTC

Reviewer: A reader

Well, I can understand why you would give up. Anyway, I am now trying to benchmark the case where I commit in a loop, using dynamic SQL to delete child and parent records, versus a simple cascade delete. I expected the commit-in-a-loop case to do worse, but surprisingly my results *seem* to show otherwise. I have a feeling I made a mistake, though I can't put my finger on it.

I am creating two tables, p1 and p2. Both have 200 child tables (created dynamically), and each child table has around 20,000 records.
In one case (p1), I just do a simple delete and commit to delete the parent record and all child records. In the second case (p2), I loop, deleting and committing 500 records at a time in each child table until all records are deleted in all 200 child tables; then I delete the parent table record.

Here is the code -- I would appreciate your input on the correctness of this benchmark. It seems to show that committing in a loop is twice as fast and consumes half as many resources! The question is: where am I making a mistake?

Thank you Tom, I (and other readers) owe you big time for all the help, knowledge and insight you always give us!!

---- the benchmark follows

test@apps> @schema
test@apps>
test@apps> set doc off
test@apps> set serveroutput on size 1000000
test@apps>
test@apps> -- package to create schema
test@apps> create or replace package commit_freq_schema_pkg
2 as
3 g_fk_table_name_prefix varchar2(10) := 'C_FK_';
4 g_no_fk_table_name_prefix varchar2(10) := 'C_NO_FK';
5 g_num_of_child_tables integer := 200;
6 g_recs_per_child_table integer := 20000;
7
8 procedure create_schema;
9 end;
10 /

Package created.

test@apps> create or replace package body commit_freq_schema_pkg
2 as
3 procedure create_table ( p_table_name in varchar2,
4 p_create_table_sql in varchar2 )
5 is
6 l_dummy varchar2(30);
7 begin
8 begin
9 select table_name into l_dummy
10 from all_tables
11 where table_name = upper( p_table_name );
12
13 -- drop table if above select finds something;
14 execute immediate 'drop table '|| p_table_name ||
15 ' cascade constraints ';
16 exception
17 when no_data_found then
18 null;
19 end;
20
21 dbms_output.put_line( 'creating table ' ||p_table_name );
22 execute immediate p_create_table_sql;
23 end;
24
25 procedure create_schema
26 is
27 l_table_name varchar2(30);
28 l_create_table_sql varchar2(4000);
29 begin
30 -- drop two parent tables
31 begin
32 execute immediate 'drop table p1 cascade constraints';
33 execute immediate 'drop table p2 cascade constraints';
34 execute immediate ' create table p1 '||
35 '( '||
36 ' parent_key raw(16) default sys_guid() primary key, data char(20) '||
37 ') ';
38
39 execute immediate ' create table p2 '||
40 '( '||
41 ' parent_key raw(16) default sys_guid() primary key, data char(20) '||
42 ')';
43
44 -- create child tables
45 for i in 1 .. g_num_of_child_tables loop
46 l_table_name := g_fk_table_name_prefix||i;
47 l_create_table_sql := 'create table '|| l_table_name ||
48 ' ( parent_key references p1 on delete cascade, data char(20) )';
49
50 create_table ( l_table_name, l_create_table_sql );
51
52 execute immediate 'create index '|| l_table_name || '_idx on ' ||
53 l_table_name || '( parent_key ) ';
54
55
56 l_table_name := g_no_fk_table_name_prefix||i;
57 l_create_table_sql := 'create table '|| l_table_name ||
58 ' ( parent_key raw(16) references p2, data char(20) )';
59
60 create_table ( l_table_name, l_create_table_sql );
61 end loop;
62 end;
63 end;
64
65 end;
66 /

Package body created.

test@apps>
test@apps> show errors;
No errors.
test@apps> --create benchmark schema
test@apps> begin
2 commit_freq_schema_pkg.create_schema;
3 null;
4 end;
5 /

PL/SQL procedure successfully completed.

test@apps> @data
test@apps> set echo on
test@apps> set serveroutput on size 1000000
test@apps>
test@apps> create or replace package commit_freq_data_pkg
2 as
3
4 g_recs_per_child_table integer := 20000;
5 procedure populate_data ;
6 end;
7 /

Package created.

test@apps> create or replace package body commit_freq_data_pkg
2 as
3 procedure populate_data
4 is
5 l_table_name varchar2(30);
6 begin
7 -- parent data
8 insert into p1 values( sys_guid(), 'P1 data');
9 insert into p2 values( sys_guid(), 'P2 data');
10
11 -- child data
12 for i in 1 .. commit_freq_schema_pkg.g_num_of_child_tables loop
13 l_table_name := commit_freq_schema_pkg.g_fk_table_name_prefix||i;
14 execute immediate 'insert into ' || l_table_name ||
15 ' select p1.parent_key , object_type as data'||
16 ' from all_objects, p1 '||
17 ' where rownum < :x '
18 using g_recs_per_child_table ;
19
20 execute immediate 'analyze table ' || l_table_name ||
21 ' compute statistics for table for all indexes for all indexed columns ';
22
23 l_table_name := commit_freq_schema_pkg.g_no_fk_table_name_prefix||i;
24 execute immediate 'insert into ' || l_table_name ||
25 ' select p2.parent_key , object_type as data'||
26 ' from all_objects, p2 '||
27 ' where rownum < :x '
28 using g_recs_per_child_table ;
29
30 execute immediate 'analyze table ' || l_table_name
31 || ' compute statistics for table for all indexes for all indexed columns ';
32 end loop;
33 end;
34
35 end;
36 /

Package body created.

test@apps>
test@apps> show errors;
No errors.
test@apps> --create benchmark data
test@apps> begin
2 commit_freq_data_pkg.populate_data;
3 null;
4 end;
5 /

PL/SQL procedure successfully completed.

test@apps> @test.sql
test@apps> select count(*) from c_fk_1;

19999

test@apps> select count(*) from c_no_fk1;

19999

test@apps> select count(*) from c_fk_200;

19999

test@apps> select count(*) from c_no_fk200;

19999

test@apps> !echo 

test@apps> @bm1
test@apps> set serveroutput on size 100000
test@apps> create or replace package run_benchmark_pkg as
2 g_max_commit integer := 500;
3
4 procedure delete_with_fk;
5 procedure delete_with_no_fk;
6 end;
7 /

Package created.

test@apps> create or replace package body run_benchmark_pkg as
2 procedure delete_with_fk
3 is
4 begin
5 delete p1;
6 commit;
7 dbms_output.put_line( 'FK case done' );
8 null;
9 end;
10
11 procedure delete_with_no_fk
12 is
13 l_delete_stmt varchar2(4000);
14 l_counter number;
15 begin
16 for crec in (select a.table_name
17 from user_tables a
18 where a.table_name like(
19 commit_freq_schema_pkg.g_no_fk_table_name_prefix
20 ||'%') )
21 loop
22 dbms_output.put_line( 'NO FK: table ' || crec.table_name );
23 l_delete_stmt := 'delete from ' || crec.table_name ||
24 ' where rownum <= ' || g_max_commit;
25 loop
26 execute immediate l_delete_stmt;
27 l_counter := SQL%ROWCOUNT;
28 commit;
29 if l_counter < g_max_commit then
30 exit;
31 end if;
32 end loop;
33 end loop;
34
35 commit;
36 delete p2;
37 commit;
38 end;
39
40 end;
41 /

Package body created.

test@apps>
test@apps> begin
2 runstats_pkg.rs_start;
3 run_benchmark_pkg.delete_with_fk;
4 runstats_pkg.rs_middle;
5 run_benchmark_pkg.delete_with_no_fk;
6 runstats_pkg.rs_stop(200);
7 null;
8 end;
9 /

Run1 ran in 349039 hsecs
Run2 ran in 188440 hsecs
run 1 ran in 185.23% of the time

Name Run1 Run2 Diff
STAT...cursor authentications 202 403 201
STAT...opened cursors current 203 -2 -205
STAT...redo ordering marks 231 0 -231
LATCH.hash table column usage 323 89 -234
LATCH.job_queue_processes para 604 369 -235
LATCH.file number translation 244 2 -242
LATCH.user lock 785 423 -362
LATCH.resmgr:actses active lis 806 433 -373
STAT...table fetch by rowid 172 573 401
LATCH.channel operations paren 1,184 617 -567
LATCH.session timer 1,270 684 -586
LATCH.post/wait queue latch 1,282 662 -620
STAT...cluster key scans 0 800 800
STAT...consistent gets - exami 203 1,013 810
STAT...hot buffers moved to he 2,807 1,953 -854
STAT...deferred (CURRENT) bloc 23,086 24,004 918
STAT...redo log space wait tim 1,606 633 -973
LATCH.cache buffer handles 0 1,194 1,194
LATCH.global ctx hash table la 2,627 1,410 -1,217
STAT...immediate (CURRENT) blo 1,567 0 -1,567
LATCH.child cursor hash table 2,248 3,816 1,568
STAT...cluster key scan block 0 1,600 1,600
LATCH.active checkpoint queue 2,817 932 -1,885
STAT...messages sent 2,762 835 -1,927
LATCH.session idle bit 5,686 10,079 4,393
STAT...redo buffer allocation 10,854 6,231 -4,623
STAT...pinned buffers inspecte 22,465 15,998 -6,467
STAT...switch current to new b 24,740 31,803 7,063
STAT...user commits 1 8,001 8,000
STAT...table scans (long table 0 8,200 8,200
STAT...opened cursors cumulati 375 8,579 8,204
STAT...parse count (total) 375 8,579 8,204
STAT...physical reads 75,392 67,180 -8,212
LATCH.session allocation 51,330 42,707 -8,623
LATCH.shared pool 56,268 46,887 -9,381
LATCH.multiblock read objects 12,692 22,258 9,566
LATCH.row cache objects 38,513 28,042 -10,471
LATCH.redo writing 44,064 32,525 -11,539
LATCH.dml lock allocation 9,669 21,630 11,961
LATCH.list of block allocation 5,006 18,576 13,570
STAT...session uga memory 143,776 129,124 -14,652
STAT...calls to kcmgas 25,023 39,879 14,856
STAT...execute count 375 16,578 16,203
LATCH.undo global data 138,061 118,376 -19,685
LATCH.transaction allocation 9,843 29,529 19,686
STAT...calls to get snapshot s 375 24,593 24,218
STAT...recursive calls 2,183 31,405 29,222
LATCH.messages 67,016 37,462 -29,554
STAT...commit cleanouts 168 31,806 31,638
STAT...commit cleanouts succes 0 31,802 31,802
STAT...recursive cpu usage 80,811 45,786 -35,025
STAT...dirty buffers inspected 99,446 55,438 -44,008
LATCH.library cache 397,104 347,630 -49,474
STAT...free buffer requested 227,185 176,249 -50,936
STAT...free buffer inspected 126,715 71,437 -55,278
STAT...prefetched blocks 1 55,945 55,944
STAT...change write time 221,046 126,811 -94,235
LATCH.cache buffers lru chain 437,177 318,085 -119,092
LATCH.checkpoint queue latch 687,195 388,708 -298,487
STAT...no work - consistent re 24,888 536,952 512,064
STAT...consistent gets 25,464 538,340 512,876
STAT...table scan blocks gotte 5 533,605 533,600
STAT...buffer is not pinned co 521 537,718 537,197
STAT...enqueue requests 8,000,807 4,025,225 -3,975,582
STAT...enqueue releases 8,000,808 4,025,225 -3,975,583
LATCH.resmgr group change latc 8,009,876 4,014,672 -3,995,204
LATCH.resmgr:schema config 8,009,876 4,014,672 -3,995,204
STAT...table scan rows gotten 1 3,999,801 3,999,800
STAT...redo entries 8,098,114 4,080,247 -4,017,867
LATCH.redo allocation 8,146,262 4,109,230 -4,037,032
STAT...session logical reads 12,434,833 4,866,493 -7,568,340
LATCH.enqueue hash chains 16,045,376 8,078,815 -7,966,561
STAT...db block changes 16,231,706 8,171,668 -8,060,038
STAT...db block gets 12,409,369 4,328,153 -8,081,216
LATCH.enqueues 32,070,806 16,122,620 -15,948,186
LATCH.cache buffers chains 58,341,726 26,768,151 -31,573,575

Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
132,598,313 64,581,794 -68,016,519 205.32%

PL/SQL procedure successfully completed.


test@apps> spool off



Tom Kyte

Followup  

July 15, 2003 - 5:51 pm UTC

why indexes on one set of child tables but not the other?

looks like the overhead of index maintenance here.

since you only have one parent record, the index is "not a good idea",

eg: your test is not real world and the schemas are not equivalent. the child tables have but one value in them. you made the RI version do many times more work.
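The maintenance-overhead point can be seen in a minimal sketch (hypothetical table names, not part of the benchmark): when every child row carries the same key, the FK index holds one distinct value, so deletes pay full index maintenance while lookups gain no selectivity.

```sql
-- sketch: an FK index over a single-valued column
create table parent_t ( k raw(16) primary key );
create table child_t  ( k raw(16) references parent_t on delete cascade,
                        data char(20) );
create index child_t_idx on child_t ( k );

-- with a single parent row, every child row shares one key; the cascade
-- delete must remove every index entry anyway -- cost without benefit
delete from parent_t;  -- cascades to all of child_t
```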

thanx tom!

July 15, 2003 - 6:12 pm UTC

Reviewer: A reader

"why indexes on one set of child tables but not the other."

The reason is that I wanted to index the foreign keys in the delete-cascade case, for obvious reasons. However, you may have a point here -- I should also test with an index on the child tables in the second case! Thanks for pointing that out!!!

"since you only have one parent record, the index is "not a good idea",

eg: your test is not real world and the schemas are not equivalent. the child
tables have but one value in them. you made the RI version do many times more work.
"
Actually the child tables have 20K records, not one record.

Well, our scenario is that we commit after each parent record is deleted -- you can assume the parent record will be deleted in one transaction. Since the number of child records is large, I wanted to test the case of deleting one parent record that has 200 child tables, each with as many as 20K records. It reflects the scenario I am trying to portray: a parent with lots of children, each with lots of records, and we delete one entry at a time in one transaction.

Thank you!

Tom Kyte

Followup  

July 15, 2003 - 6:17 pm UTC

the PARENT record has one value.

you have 20,000 child records with a constant value,
hence index = bad idea for your test case.

thanks tom

July 15, 2003 - 6:30 pm UTC

Reviewer: A reader

but how would you test this scenario then? At a time I would be deleting only one parent record and committing (in my actual program), and the key in the child records would by necessity be the same (because of RI). I could populate the parent with multiple records but run the test for deleting only one parent record -- would that help?

Also, if I create an index in both cases, the playing field should be level, right?


Tom Kyte

Followup  

July 15, 2003 - 7:03 pm UTC

yes, level the playing field by creating the index in both cases
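A level-field setup along those lines might look like this (a sketch only, with hypothetical row counts, reusing the P1 schema from the benchmark): index the FK columns in both child sets, spread the child rows over many parent keys, and then time the deletion of a single parent row so each key is selective.

```sql
-- sketch: many parent keys instead of one, so an index on the FK is selective
insert into p1 ( parent_key, data )
  select sys_guid(), 'seed' from all_objects where rownum <= 100;

-- populate the child tables spreading their rows across the 100 keys, then:
delete from p1 where parent_key = ( select min(parent_key) from p1 );
commit;
```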

here are test results!

July 15, 2003 - 9:50 pm UTC

Reviewer: A reader

I would have expected a little bit more disparity.
Run1 - FK cascade delete
Run2 - piecemeal delete with intermittent commits.

This test shows that they are "comparable" (if you just compare performance in terms of elapsed time and latches consumed). Of course, if you bring in other factors such as code complexity, increased redo log contention and an overall slower system, then FK cascade delete is the superior solution any day!!

thank you for everything!

test@apps> set echo on
test@apps> set serveroutput on size 100000
test@apps> @schema
test@apps>
test@apps> set doc off
test@apps> set serveroutput on size 1000000
test@apps>
test@apps> -- package to create schema
test@apps> create or replace package commit_freq_schema_pkg
2 as
3 g_fk_table_name_prefix varchar2(10) := 'C_FK_';
4 g_no_fk_table_name_prefix varchar2(10) := 'C_NO_FK';
5 g_num_of_child_tables integer := 200;
6 g_recs_per_child_table integer := 20000;
7
8 procedure create_schema;
9 end;
10 /

Package created.

test@apps> create or replace package body commit_freq_schema_pkg
2 as
3 procedure create_table ( p_table_name in varchar2,
4 p_create_table_sql in varchar2 )
5 is
6 l_dummy varchar2(30);
7 begin
8 begin
9 select table_name into l_dummy
10 from all_tables
11 where table_name = upper( p_table_name );
12
13 -- drop table if above select finds something;
14 execute immediate 'drop table '|| p_table_name ||
15 ' cascade constraints ';
16 exception
17 when no_data_found then
18 null;
19 end;
20
21 dbms_output.put_line( 'creating table ' ||p_table_name );
22 execute immediate p_create_table_sql;
23 end;
24
25 procedure create_schema
26 is
27 l_table_name varchar2(30);
28 l_create_table_sql varchar2(4000);
29 begin
30 -- drop two parent tables
31 begin
32 execute immediate 'drop table p1 cascade constraints';
33 execute immediate 'drop table p2 cascade constraints';
34 execute immediate ' create table p1 '||
35 '( '||
36 ' parent_key raw(16) default sys_guid() primary key, data char(20) '||
37 ') ';
38
39 execute immediate ' create table p2 '||
40 '( '||
41 ' parent_key raw(16) default sys_guid() primary key, data char(20) '||
42 ')';
43
44 -- create child tables
45 for i in 1 .. g_num_of_child_tables loop
46 l_table_name := g_fk_table_name_prefix||i;
47 l_create_table_sql := 'create table '|| l_table_name ||
48 ' ( parent_key references p1 on delete cascade, data char(20) )';
49
50 create_table ( l_table_name, l_create_table_sql );
51
52 execute immediate 'create index '|| l_table_name || '_idx on ' ||
53 l_table_name || '( parent_key ) ';
54
55
56 l_table_name := g_no_fk_table_name_prefix||i;
57 l_create_table_sql := 'create table '|| l_table_name ||
58 ' ( parent_key raw(16) references p2, data char(20) )';
59
60 create_table ( l_table_name, l_create_table_sql );
61 execute immediate 'create index '|| l_table_name || '_idx on ' ||
62 l_table_name || '( parent_key ) ';
63 end loop;
64 end;
65 end;
66
67 end;
68 /

Package body created.

test@apps>
test@apps> show errors;
No errors.
test@apps> --create benchmark schema
test@apps> begin
2 commit_freq_schema_pkg.create_schema;
3 null;
4 end;
5 /

PL/SQL procedure successfully completed.

test@apps> @data
test@apps> set echo on
test@apps> set serveroutput on size 1000000
test@apps>
test@apps> create or replace package commit_freq_data_pkg
2 as
3
4 g_recs_per_child_table integer := 20000;
5 procedure populate_data ;
6 end;
7 /

Package created.

test@apps> create or replace package body commit_freq_data_pkg
2 as
3 procedure populate_data
4 is
5 l_table_name varchar2(30);
6 begin
7 -- parent data
8 insert into p1 values( sys_guid(), 'P1 data');
9 insert into p2 values( sys_guid(), 'P2 data');
10
11 -- child data
12 for i in 1 .. commit_freq_schema_pkg.g_num_of_child_tables loop
13 l_table_name := commit_freq_schema_pkg.g_fk_table_name_prefix||i;
14 execute immediate 'insert into ' || l_table_name ||
15 ' select p1.parent_key , object_type as data'||
16 ' from all_objects, p1 '||
17 ' where rownum < :x '
18 using g_recs_per_child_table ;
19
20 execute immediate 'analyze table ' || l_table_name ||
21 ' compute statistics for table for all indexes for all indexed columns ';
22
23 l_table_name := commit_freq_schema_pkg.g_no_fk_table_name_prefix||i;
24 execute immediate 'insert into ' || l_table_name ||
25 ' select p2.parent_key , object_type as data'||
26 ' from all_objects, p2 '||
27 ' where rownum < :x '
28 using g_recs_per_child_table ;
29
30 execute immediate 'analyze table ' || l_table_name
31 || ' compute statistics for table for all indexes for all indexed columns ';
32 end loop;
33 end;
34
35 end;
36 /

Package body created.

test@apps>
test@apps> show errors;
No errors.
test@apps> --create benchmark data
test@apps> begin
2 commit_freq_data_pkg.populate_data;
3 null;
4 end;
5 /

PL/SQL procedure successfully completed.

test@apps> @test.sql
test@apps> select count(*) from c_fk_1;

19999

test@apps> select count(*) from c_no_fk1;

19999

test@apps> select count(*) from c_fk_200;

19999

test@apps> select count(*) from c_no_fk200;

19999

test@apps> @bm1
test@apps> set serveroutput on size 100000
test@apps> create or replace package run_benchmark_pkg as
2 g_max_commit integer := 500;
3
4 procedure delete_with_fk;
5 procedure delete_with_no_fk;
6 end;
7 /

Package created.

test@apps> create or replace package body run_benchmark_pkg as
2 procedure delete_with_fk
3 is
4 begin
5 delete p1;
6 commit;
7 dbms_output.put_line( 'FK case done' );
8 null;
9 end;
10
11 procedure delete_with_no_fk
12 is
13 l_delete_stmt varchar2(4000);
14 l_counter number;
15 begin
16 for crec in (select a.table_name
17 from user_tables a
18 where a.table_name like(
19 commit_freq_schema_pkg.g_no_fk_table_name_prefix
20 ||'%') )
21 loop
22 dbms_output.put_line( 'NO FK: table ' || crec.table_name );
23 l_delete_stmt := 'delete from ' || crec.table_name ||
24 ' where rownum <= ' || g_max_commit;
25 loop
26 execute immediate l_delete_stmt;
27 l_counter := SQL%ROWCOUNT;
28 commit;
29 if l_counter < g_max_commit then
30 exit;
31 end if;
32 end loop;
33 end loop;
34
35 commit;
36 delete p2;
37 commit;
38 end;
39
40 end;
41 /

Package body created.

test@apps>
test@apps> begin
2 runstats_pkg.rs_start;
3 run_benchmark_pkg.delete_with_fk;
4 runstats_pkg.rs_middle;
5 run_benchmark_pkg.delete_with_no_fk;
6 runstats_pkg.rs_stop(200);
7 null;
8 end;
9 /

Run1 ran in 362632 hsecs
Run2 ran in 345783 hsecs
run 1 ran in 104.87% of the time

Name Run1 Run2 Diff
STAT...redo log space wait tim 1,694 1,433 -261
STAT...no buffer to keep pinne 0 311 311
STAT...shared hash latch upgra 560 216 -344
LATCH.ktm global data 31 403 372
STAT...buffer is pinned count 0 373 373
STAT...table fetch by rowid 168 803 635
STAT...redo ordering marks 655 0 -655
LATCH.child cursor hash table 2,292 1,624 -668
STAT...cluster key scans 0 800 800
LATCH.cache buffer handles 0 884 884
STAT...consistent gets - exami 201 1,325 1,124
STAT...cluster key scan block 0 1,600 1,600
STAT...dirty buffers inspected 104,894 106,991 2,097
LATCH.messages 65,519 67,736 2,217
STAT...recursive cpu usage 76,049 78,267 2,218
STAT...hot buffers moved to he 1,977 4,614 2,637
LATCH.row cache objects 39,501 42,300 2,799
LATCH.shared pool 58,608 62,305 3,697
LATCH.resmgr group change latc 8,011,684 8,018,380 6,696
LATCH.resmgr:schema config 8,011,683 8,018,380 6,697
LATCH.multiblock read objects 11,538 19,192 7,654
STAT...physical reads 74,278 82,040 7,762
STAT...opened cursors cumulati 371 8,209 7,838
STAT...parse count (total) 371 8,209 7,838
LATCH.session idle bit 3,410 11,291 7,881
STAT...switch current to new b 23,814 31,807 7,993
STAT...table scans (long table 0 8,000 8,000
STAT...user commits 1 8,001 8,000
LATCH.redo writing 43,513 51,664 8,151
LATCH.session allocation 52,818 63,168 10,350
STAT...free buffer requested 225,161 237,765 12,604
LATCH.dml lock allocation 9,966 24,850 14,884
STAT...calls to kcmgas 24,522 39,924 15,402
LATCH.list of block allocation 5,154 20,600 15,446
STAT...execute count 371 16,208 15,837
STAT...pinned buffers inspecte 23,295 1,641 -21,654
STAT...change write time 208,045 186,300 -21,745
STAT...enqueue releases 8,000,803 8,024,482 23,679
STAT...enqueue requests 8,000,803 8,024,482 23,679
STAT...recursive calls 2,151 25,837 23,686
STAT...calls to get snapshot s 372 24,217 23,845
LATCH.transaction allocation 10,178 34,167 23,989
STAT...prefetched blocks 0 26,003 26,003
STAT...free buffer inspected 138,607 108,683 -29,924
STAT...immediate (CURRENT) blo 2,964 34,904 31,940
STAT...redo entries 8,098,570 8,138,730 40,160
LATCH.redo allocation 8,148,350 8,188,747 40,397
STAT...db block changes 16,232,119 16,280,040 47,921
LATCH.enqueue hash chains 16,049,718 16,098,137 48,419
LATCH.undo global data 138,106 188,149 50,043
LATCH.cache buffers lru chain 436,763 491,219 54,456
STAT...commit cleanouts 168 63,691 63,523
STAT...commit cleanouts succes 0 63,680 63,680
LATCH.checkpoint queue latch 660,007 724,996 64,989
LATCH.enqueues 32,079,437 32,154,357 74,920
LATCH.library cache 410,336 501,279 90,943
STAT...db block gets 12,408,821 12,540,683 131,862
STAT...session uga memory max 160,040 12,744 -147,296
STAT...session uga memory 171,808 8,492 -163,316
STAT...no work - consistent re 23,954 501,325 477,371
STAT...consistent gets 24,524 502,859 478,335
STAT...table scan blocks gotte 5 498,005 498,000
STAT...buffer is not pinned co 509 502,125 501,616
STAT...session logical reads 12,433,345 13,043,542 610,197
LATCH.cache buffers chains 58,190,408 59,889,518 1,699,110
STAT...table scan rows gotten 1 3,999,801 3,999,800

Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
132,450,811 134,684,654 2,233,843 98.34%

test@apps> spool off


DB Schema

July 26, 2003 - 3:35 am UTC

Reviewer: Rajeev Soni from Hyderabad, India

Hi Tom,
I just wanted to know whether there is any built-in package/procedure in Oracle (I am using Oracle 9iR2) which will give me the complete DB schema info for the current user, such as all tables, all columns, constraints, etc.

I have browsed the complete Oracle DB to get the metadata info regarding the DB schema... these views will give me all the info required: USER_TAB_PRIVS_RECD, ALL_CONSTRAINTS, ALL_CONS_COLUMNS, ALL_TABLES, ALL_VIEWS, ALL_TAB_COMMENTS, ALL_TAB_COLUMNS, ALL_COL_COMMENTS.

But I have two issues w.r.t. these views:
1. For a NOT NULL column on a table, a check constraint is added to ALL_CONSTRAINTS, which we can already get from ALL_TAB_COLUMNS.
2. And is there any way I can process check constraints like <colname> check (length(username) > 10)?

Actually I am required to build a test data generator tool for inserting test data into the selected data source.

I have also looked at SQL Server 2000 for this tool; there I have a package which gives me the complete DB schema. So I just wanted to know if there is any such package or procedure before writing my own code to get the DB schema.

And if you can give me any tips w.r.t. reading the DB schema, it will be very helpful.

Thanks and Regards
Rajeev Soni



Tom Kyte

Followup  

July 26, 2003 - 12:49 pm UTC

check out dbms_metadata

scott@ORA920LAP> select dbms_metadata.get_ddl( 'TABLE', 'EMP' ) from dual;

DBMS_METADATA.GET_DDL('TABLE','EMP')
--------------------------------------------------------------------------------

CREATE TABLE "SCOTT"."EMP"
( "EMPNO" NUMBER(4,0) NOT NULL ENABLE,
"ENAME" VARCHAR2(10),
"JOB" VARCHAR2(9),
"MGR" NUMBER(4,0),
"HIREDATE" DATE,
"SAL" NUMBER(7,2),
"COMM" NUMBER(7,2),
"DEPTNO" NUMBER(2,0)
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS"



You can setup filters to restrict the verbosity and so on. search this site for dbms_metadata for examples and see:

</code> http://docs.oracle.com/docs/cd/B10501_01/appdev.920/a96612/d_metada.htm#1656 <code>


But I don't understand issue #1 in your list -- that seems to be "a statement", not an issue....

For #2, you can use dynamic SQL.
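A rough sketch of that idea -- fetch the check constraint's SEARCH_CONDITION (a LONG column in USER_CONSTRAINTS) and evaluate a candidate value against it with EXECUTE IMMEDIATE. The constraint name and the crude literal substitution below are purely illustrative:

```sql
declare
    l_cond  varchar2(4000);
    l_cnt   number;
begin
    -- search_condition is a LONG; selecting into a varchar2
    -- works for conditions under 4000 characters
    select search_condition into l_cond
      from user_constraints
     where constraint_name = 'CK_USERNAME_LEN';   -- hypothetical name

    -- crude substitution of the column name with a candidate literal,
    -- for illustration only
    execute immediate
        'select count(*) from dual where ' ||
        replace( l_cond, 'USERNAME', '''a_test_value''' )
        into l_cnt;

    dbms_output.put_line(
        case when l_cnt = 1 then 'candidate passes' else 'candidate fails' end );
end;
/
```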






Primary Key Caching

July 26, 2003 - 4:01 am UTC

Reviewer: Sikandar Hayat from Pindi Bhattian - Pakistan

I was trying new features of 9i and found a new feature "Primary Key Caching" but unable to get any detailed info about this feature. I hope you will give some tips of this feature or any link of docs where I can get info about this feature. As I am unable to find any thing.

Tom Kyte

Followup  

July 26, 2003 - 12:49 pm UTC

tell me where you saw this term in context. I myself am not familiar with it.

primary key caching

July 26, 2003 - 6:44 pm UTC

Reviewer: A reader

From a google search, it seems Robert Freeman's book on "Oracle 9i New Features" mentions this. However, I could only get the "contents" :)

It seems it has something to do with FK creation -- possibly the new feature where Oracle does not lock the entire table even if you don't create the FK indexes, but I am not sure about this...

Menon:)

Please see
</code> http://www.osborne.com/products/0072223855/0072223855_toc.pdf <code>

"Chapter 6 Oracle9i SQL, PL/SQL New ....
....

Constraint Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
Locking and Nonindexed Foreign Keys . . . . . . . . . . . . . . . . . . . . 166
Primary Key Caching During FK Creation . . . . . . . . . . . . . . . . . . 1"


I don't have this book

July 27, 2003 - 12:10 pm UTC

Reviewer: Sikandar Hayat from Pindi Bhattian - Pakistan

Yes I was reading RG. Freeman New Features book. I need detail on this as I don't have the book mentioned in the link.

Thanks

Tom Kyte

Followup  

July 27, 2003 - 1:23 pm UTC

you were reading a book you don't have?

You'll have to ask RG Freeman what he meant. I don't have the referenced book and it is not a term I am familiar with.

RE: DB Schema

July 28, 2003 - 1:14 am UTC

Reviewer: Rajeev Soni from Hyderabad, India

Hi Tom,
Well, I did not put my query properly. Actually my requirement is to create an automated tool, a "Test Data Generator", which will dynamically read the DB schema, show users the list of all tables, and then, for the selected table(s), generate the number of rows specified by the user and insert them into the DB.

So here, while inserting data, I should
1> Show all the user's tables and other tables on which he has been given permission....
2> Take care of all the constraints defined on the table/column such as primary key, referential key, etc. W.r.t. this requirement I just got stuck at CHECK constraints; for processing those, I suppose I need to implement some sort of SQL engine...?

I did identify all the views from where I can get the desired details. I wanted to know whether Oracle provides any package that will give me all the information, such as the list of tables with privileges, all columns of tables with constraints, etc., so that I don't re-write queries which are already there.

Thanks and Regards
Rajeev Soni


Tom Kyte

Followup  

July 28, 2003 - 7:04 am UTC

you just query the data dictionary -- it is all there.
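For example, a starting point along those lines -- a sketch against the USER_ views; swap in the ALL_ views plus an OWNER predicate to cover tables the user can merely see in other schemas:

```sql
-- every table owned by the current user, with its columns,
-- datatypes, and NOT NULL flags
select t.table_name, c.column_name, c.data_type, c.data_length, c.nullable
  from user_tables t, user_tab_columns c
 where c.table_name = t.table_name
 order by t.table_name, c.column_id;
```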

re: primary key caching

August 18, 2003 - 9:58 pm UTC

Reviewer: A reader

From the Freeman book, it seems it is the caching of
the primary key (don't know where - SGA perhaps) for
inserting a row into a child table. Since each child-table
insert results in a lookup in the parent table, Oracle
tries to cache the primary key value to save time...



Column Width Mismatch in Primary key and Foreign Key

January 10, 2004 - 4:23 am UTC

Reviewer: Vivek Sharma from Bombay, India

Dear Tom,

I have certain tables which have a primary key and other tables with a foreign key referencing these primary keys. I would like to know whether a data width mismatch can cause a performance problem.

Suppose, I have a primary key

t1.x varchar2(3) primary key which is referenced by t2.x of type varchar2(9)

Similarly

t3.a number(9,2) primary key referenced by t4.a of type number(13,4).

Will this mismatch have performance impact due to data conversion. The Data Types are same only the Width differs.

I have several tables with such mismatches.

Please Suggest.

Regards
Vivek Sharma


Tom Kyte

Followup  

January 10, 2004 - 10:53 am UTC

read
</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:7640334135875 <code>

shows

a) how to discover these and other interesting facts for yourself

b) that no, the "conversion" isn't really a "conversion" at all. A number(x,y) is an edit applied at data creation/assignment time. It does not affect the "compare" at all, since at compare time the values are, in effect, just binary strings compared for equality -- they are not compared "as numbers" but as strings of bytes.
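This can be checked directly with DUMP(): the stored bytes of a NUMBER do not depend on the declared precision/scale, so the join comparison is a plain byte comparison. A minimal sketch (the table and column names here are made up for illustration):

```sql
create table width_demo ( a number(9,2), b number(13,4) );

insert into width_demo values ( 123.45, 123.45 );

-- both columns dump to identical bytes; the declared widths play no part
select dump(a) dump_a, dump(b) dump_b from width_demo;
```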

Is it Applicable for any Datatype?

January 10, 2004 - 11:14 am UTC

Reviewer: Vivek Sharma from Bombay, India

Dear Tom,
Thanks a ton for your response. The link you provided helped me understand that there should be no performance impact if the width differs. But Sir, is it applicable to all datatypes?

i.e. varchar2(10) primary key v/s varchar2(30) foreign key and

char(10) primary key to char(30) foreign key.

Thanks and Regards
Vivek Sharma

Tom Kyte

Followup  

January 10, 2004 - 12:02 pm UTC

char(10) to char(30) won't work, but a varchar2(n) to varchar2(M) can work

ops$tkyte@ORA9IR2> create table p( x char(10) primary key );
 
Table created.
 
ops$tkyte@ORA9IR2> create table c( y char(30) references p );
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into p values ( 'hello' );
 
1 row created.
 
ops$tkyte@ORA9IR2> insert into c values ( 'hello' );
insert into c values ( 'hello' )
*
ERROR at line 1:
ORA-02291: integrity constraint (OPS$TKYTE.SYS_C004049) violated - parent key not found
 
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> drop table c;
 
Table dropped.
 
ops$tkyte@ORA9IR2> drop table p;
 
Table dropped.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table p( x varchar2(10) primary key );
 
Table created.
 
ops$tkyte@ORA9IR2> create table c( y varchar2(30) references p );
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into p values ( 'hello' );
 
1 row created.
 
ops$tkyte@ORA9IR2> insert into c values ( 'hello' );
 
1 row created.
 


<b>but in general, one might consider it "sloppy", something to ask yourself -- why, why did I do that in the first place</b>


Neat trick -- watch this:
ops$tkyte@ORA9IR2> create table p( x varchar2(10) primary key );
 
Table created.
 
ops$tkyte@ORA9IR2> create table c( y references p );
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> desc c
 Name                    Null?    Type
 ----------------------- -------- ----------------
 Y                                VARCHAR2(10)
 


hmm, leave off the datatype and let it pick it up from the parent directly.

 

User defined name for FK

January 14, 2004 - 11:28 am UTC

Reviewer: Sami from NJ,USA

<AskTom>
Neat trick -- watch this:
ops$tkyte@ORA9IR2> create table p( x varchar2(10) primary key );

Table created.

ops$tkyte@ORA9IR2> create table c( y references p );

Table created.
</AskTom>

Dear Tom,
I like this trick. Is there a way to specify the NAME for foreign key constraint using this trick (instead of system generated name)?


Tom Kyte

Followup  

January 14, 2004 - 4:05 pm UTC

ops$tkyte@ORA9IR2> create table p ( x int primary key );
 
Table created.
 
ops$tkyte@ORA9IR2> create table c ( x constraint c_fk_to_p references p );
 
Table created.
 

Number of Foreign Keys

February 02, 2004 - 10:37 am UTC

Reviewer: chet from Gainesville, FL USA

Tom,
I have been tasked with creating a small application in our database (9.2.0.4). In my CREATE TABLE script, I have created a foreign key that references the primary key in another schema. This parent table is the centerpiece of our entire system.

Also, there is a procedure that checks to see if the sample number exists in the parent table; if it doesn't, then the new sample number is inserted. So there would never be a case where the sample number is not in the table.

My boss asked if it was necessary to have a foreign key. I didn't really know how to reply.

Would there be an issue of having probably hundreds of tables referencing the parent table?




Tom Kyte

Followup  

February 02, 2004 - 10:54 am UTC

suppose someone comes along and deletes from the parent table (or updates it, whatever).


If you have no foreign key, you now have an orphan.

If you have a foreign key, you have data integrity.


Of course you need fkeys. If you do not, you just have bits and bytes on disk, garbage.


Data integrity is 'sort of important'. Enforcing data integrity is "pretty darn hard" to do yourself. And you'll make a mistake at some point leading to garbage in your database.



Perfect

February 02, 2004 - 11:53 am UTC

Reviewer: chet from Gainesville, FL USA

Thanks Tom.

Questions about foreign keys

February 02, 2004 - 12:35 pm UTC

Reviewer: A reader

Hi Tom, I have two questions about foreign keys.

1). Say table Child references table Parent and the foreign key column P_Id is not null. Then the following two queries should return exactly the same results:

a: select c.name from child c, parent p where c.p_id = p.p_id;

b: select c.name from child c;

However, the first one will be slower than the second one because of the join. Why doesn't Oracle optimize the first query to run as the second one?

2). Say the Parent table references a lookup table and I denormalize this column into the Child table as well, using triggers. Do you recommend creating foreign key constraints on this column in the Child table?


Thanks.


Tom Kyte

Followup  

February 02, 2004 - 12:58 pm UTC

1) i suppose technically it could, but it just "doesn't".

One might ask "why did you ask for data from two tables when you only wanted one"


2) yes, if you want correct data, yes. but - i would not denormalize unless there was some truly -- really -- utterly compelling reason to do so. I've been known to say "normalization is overrated" -- but it is not "very overrated". it isn't something to do without a serious reason.

Joins are pretty darn efficient in rdbms's -- it is what they were born to do.

re: Questions about foreign keys

February 02, 2004 - 4:00 pm UTC

Reviewer: A reader

Thanks for the answer.

We are building a complex reporting system, in which, for example, one report can have different combinations of parameters and it can end up with thousands of different queries. Sometimes the developer may join a lookup table without using any columns from it (like I illustrated in example a).

I just want to know if there is a way to "tell" the optimizer to bypass the join to the lookup table if it's not necessary for the result.

Thanks anyway.

Output of the script

February 03, 2004 - 4:31 am UTC

Reviewer: A Reader

The script you have written to find the unindexed foreign keys shows four columns:

1)status
2)table_name
3)columns
4)columns

What do the last two headings represent? Please explain.

Regards

Tom Kyte

Followup  

February 03, 2004 - 7:47 am UTC

columns needing to be indexed and the columns that are in fact in the index


ops$tkyte@ORA10G> create table p ( id number primary key );
 
Table created.
 
ops$tkyte@ORA10G> create table c ( id references p NOT NULL, name varchar2(20) );
 
Table created.
 
ops$tkyte@ORA10G> create index c_idx on c(id,name);
 
Index created.
 
ops$tkyte@ORA10G> set echo off
 
STAT TABLE_NAME                     COLUMNS              COLUMNS
---- ------------------------------ -------------------- --------------------
ok   C                              ID                   ID, NAME



that shows the column ID needs to be indexed and is in fact indexed by an index with columns ID, NAME in it. 
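For reference, a simplified sketch of such a check (this is not Tom's original script; it flags a foreign key column only when no index holds it at the same position, so it will miss covering indexes whose leading columns are merely reordered):

```sql
-- foreign key columns with no index column at the matching position
select c.constraint_name, c.table_name, cc.column_name, cc.position
  from user_constraints c, user_cons_columns cc
 where cc.constraint_name = c.constraint_name
   and c.constraint_type  = 'R'
   and not exists (
         select null
           from user_ind_columns ic
          where ic.table_name      = cc.table_name
            and ic.column_name     = cc.column_name
            and ic.column_position = cc.position )
 order by c.constraint_name, cc.position;
```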

Indexed Foreign Keys

February 03, 2004 - 10:41 pm UTC

Reviewer: A Reader from India

Dear Tom,

I have executed the script for checking indexed foreign keys. So I got the following results.

As you can see, for only one table the status is ok; for the other it's not. But as you can see from the table creation scripts, both are similar. What can be the reason for this?

Thanks



STAT TABLE_NAME   COLUMNS              COLUMNS
---- ------------ -------------------- --------------------
ok   F_EQUIP_DTL  JOB_CODE, PO_SL_NO,  JOB_CODE, PO_SL_NO,
                  RFQ_NO               RFQ_NO, ITEM_SL_NO

**** GEN_ITEM_DTL JOB_CODE, PO_SL_NO,
                  RFQ_NO



CREATE TABLE GEN_ITEM_DTL
(
JOB_CODE VARCHAR2(4) NOT NULL,
PO_SL_NO NUMBER(4) NOT NULL,
PO_REV NUMBER(2) NOT NULL,
ITEM_SL_NO VARCHAR2(10) NOT NULL,
ITEM_EDD DATE,
ITEM_COMP_DT DATE,
QTY_COMP NUMBER(15,3),
RAS_DATE DATE,
DESP_DETAIL VARCHAR2(1000),
RFQ_NO NUMBER(4) NOT NULL,
PROGRESS NUMBER(3),
ITEM_CDD DATE,
EFF_PO_REV NUMBER(2),
EFF_PR_REV NUMBER(2),
PR_REV NUMBER(2),
S_ITEM_AVAIL VARCHAR2(3),
S_ITEM_INSP VARCHAR2(3)
);


ALTER TABLE GEN_ITEM_DTL ADD (
CONSTRAINT PK_GEN_ITEM PRIMARY KEY (JOB_CODE, PO_SL_NO, ITEM_SL_NO, RFQ_NO));

ALTER TABLE GEN_ITEM_DTL ADD (
CONSTRAINT FK_GEN_PO FOREIGN KEY (JOB_CODE, PO_SL_NO, RFQ_NO)
REFERENCES PO_SUMMARY (JOB_CODE,PO_SL_NO,RFQ_NO)
ON DELETE CASCADE);



CREATE TABLE F_EQUIP_DTL
(
JOB_CODE VARCHAR2(4) NOT NULL,
PO_SL_NO NUMBER(4) NOT NULL,
PO_REV NUMBER(2) NOT NULL,
ITEM_SL_NO VARCHAR2(10) NOT NULL,
ITEM_EDD DATE,
ITEM_COMP_DT DATE,
MATERIAL VARCHAR2(30),
OD NUMBER(10,2),
THICKNESS NUMBER(10,2),
WEIGHT NUMBER(11,3),
LENGTH NUMBER(10,2),
QTY_COMP NUMBER(15,3),
RAS_DATE DATE,
DESP_DETAIL VARCHAR2(1000),
RFQ_NO NUMBER(4) NOT NULL,
PROGRESS NUMBER(3),
ITEM_CDD DATE,
EFF_PO_REV NUMBER(2),
EFF_PR_REV NUMBER(2),
PR_REV NUMBER(2),
S_ITEM_AVAIL VARCHAR2(3),
S_ITEM_INSP VARCHAR2(3)
);


ALTER TABLE F_EQUIP_DTL ADD (
CONSTRAINT PK_FED_ITEM PRIMARY KEY (JOB_CODE, PO_SL_NO, RFQ_NO, ITEM_SL_NO));


ALTER TABLE F_EQUIP_DTL ADD (
CONSTRAINT FK_FED_PO FOREIGN KEY (JOB_CODE, PO_SL_NO, RFQ_NO)
REFERENCES PO_SUMMARY (JOB_CODE,PO_SL_NO,RFQ_NO)
ON DELETE CASCADE);




Tom Kyte

Followup  

February 04, 2004 - 7:19 am UTC

ALTER TABLE GEN_ITEM_DTL ADD (
CONSTRAINT PK_GEN_ITEM PRIMARY KEY (JOB_CODE, PO_SL_NO, ITEM_SL_NO, RFQ_NO));

that'll add an index on job_code, po_sl_no, item_sl_no, rfq_no in that order

ALTER TABLE GEN_ITEM_DTL ADD (
CONSTRAINT FK_GEN_PO FOREIGN KEY (JOB_CODE, PO_SL_NO, RFQ_NO)
REFERENCES PO_SUMMARY (JOB_CODE,PO_SL_NO,RFQ_NO)
ON DELETE CASCADE);

and since the fkey is on job_code, po_sl_no, rfq_no -- and item_sl_no is in the "middle" of the index -- that index is not good enough

the fkey columns must be on the LEADING EDGE of an index if you want to use it for the fkey.
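One way to fix the GEN_ITEM_DTL case above, then, is a separate index whose leading edge matches the foreign key columns exactly (the index name here is made up):

```sql
create index gen_item_dtl_fk_idx
    on gen_item_dtl ( job_code, po_sl_no, rfq_no );
```

Trailing extra columns after the fkey columns would be fine too; it is only a foreign column in the middle (like ITEM_SL_NO in the primary key index) that disqualifies an index.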

can we add a FK initially disabled?

March 17, 2004 - 4:47 am UTC

Reviewer: A reader

Hi, is it possible to add a foreign key but leave it disabled, because the data is not loaded into the parent table yet?

Tom Kyte

Followup  

March 17, 2004 - 8:10 am UTC

yup.

ops$tkyte@ORA9IR2> create table p ( x int primary key );
 
Table created.
 
ops$tkyte@ORA9IR2> create table c ( x references p disable );
 
Table created.
 
ops$tkyte@ORA9IR2> insert into c values ( 100 );
 
1 row created.
 
ops$tkyte@ORA9IR2> commit;
 
Commit complete.
 

Nice

March 17, 2004 - 8:58 am UTC

Reviewer: Siva from Bangalore,India

Dear Tom,
Do you have a sample table which enforces "self referential integrity"? Please do reply.
Bye!


Tom Kyte

Followup  

March 17, 2004 - 10:58 am UTC

emp

alter table emp add constraint emp_pk primary key(empno);
alter table emp add constraint mgr_fk foreign key(mgr) references emp(empno);

Foreign Key Using DB Link

March 22, 2004 - 10:48 pm UTC

Reviewer: A reader from India

Dear Tom,

We have two Oracle servers. On one server, S1, we have an employee master table. On the second server, S2, we have other employee-related information. We want to create a foreign key from server S2 to the employee table on server S1. Is it possible? If yes, then please tell how.

Thanks

Tom Kyte

Followup  

March 23, 2004 - 6:55 am UTC

it will fortunately NEVER happen.

what you have just described means you really want one single database. to have a fkey from one to the other would only mean that one server's availability will be severely limited and performance would be slow.

think about what would happen if S1 suffered a media failure and needed a bit of point in time recovery. Or what happens to S1 when S2 is down? (S1 is effectively "down" as well). Ditto for S2. the failure of either machine means both machines are dramatically impacted.

You want a *single* server.

OK

March 23, 2004 - 9:39 am UTC

Reviewer: Kumar from Bangalore,KA,India

Dear Tom,
Can Referential Integrity be specified between tables of
different schemas?
From SYS schema WHEN I issue :

Create table emp(deptno references scott.dept)
Is this a valid one?
Bye!


Tom Kyte

Followup  

March 23, 2004 - 9:48 am UTC

do NOT even in a billion years use SYS for anything like this.  

yes, you can use declarative RI across schemas.


ops$tkyte@ORA9IR2> grant create session, create table to a identified by a;
 
Grant succeeded.
 
ops$tkyte@ORA9IR2> grant create session, create table to b identified by b;
 
Grant succeeded.
 
ops$tkyte@ORA9IR2> alter user a default tablespace users quota unlimited on users;
 
User altered.
 
ops$tkyte@ORA9IR2> alter user b default tablespace users quota unlimited on users;
 
User altered.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> @connect a/a
ops$tkyte@ORA9IR2> set termout off
a@ORA9IR2> REM GET afiedt.buf NOLIST
a@ORA9IR2> set termout on
a@ORA9IR2> create table t ( x int primary key );
 
Table created.
 
a@ORA9IR2> grant references on t to b;
 
Grant succeeded.
 
a@ORA9IR2>
a@ORA9IR2> @connect b/b
a@ORA9IR2> set termout off
b@ORA9IR2> REM GET afiedt.buf NOLIST
b@ORA9IR2> set termout on
b@ORA9IR2> create table t ( x references a.t );
 
Table created.
 
 

Please see this

March 26, 2004 - 12:23 am UTC

Reviewer: Jane from NorthCarolina,USA

Dear Tom,
Oracle documentation says "A single column can be part of more than one foreign key". I tried this and it gets created, but when we insert rows only one of the constraints seems to be recognised. Could you please clarify this?
I have shown here the table creation and some sample insert statements.
Please look into that.
Bye!
SQL> create table a(x int primary key);

Table created.

SQL> create table c(x int primary key);

Table created.

SQL> create table b(x int,constraint fk_x foreign key(x) references a,constraint fk2_x foreign
  2  key(x) references c);

Table created.


SQL> insert into a values(1);

1 row created.

SQL> insert into a values(2);

1 row created.

SQL> insert into c values(3);

1 row created.

SQL> insert into b values(0);
insert into b values(0)
*
ERROR at line 1:
ORA-02291: integrity constraint (SCOTT.FK2_X) violated - parent key not found


SQL> insert into b values(1);
insert into b values(1)  /* scott.fk_x not considered
*
ERROR at line 1:
ORA-02291: integrity constraint (SCOTT.FK2_X) violated - parent key not found


SQL>  

Tom Kyte

Followup  

March 26, 2004 - 9:20 am UTC

what did you expect?

you've basically said "b(x) must be in A and C"

the value 1 is in A
the value 1 is not in C

hence, it fails....

OK

June 17, 2004 - 1:45 am UTC

Reviewer: Raju from Hyderabad,AP,India

Dear Tom,
please see below.

SQL> create table x(a number primary key);

Table created.

SQL> create table y(a references x on delete cascade,b varchar2(30) );

Table created.

I would like to modify the foreign key clause to "on delete set null". How to do that?

I tried the following one but got errors.

SQL> alter table y modify constraint(a references x on delete set null);
alter table y modify constraint(a references x on delete set null)
                               *
ERROR at line 1:
ORA-14006: invalid partition name

Could you please help?

Please do reply. 

Tom Kyte

Followup  

June 17, 2004 - 9:35 am UTC

Drop and add. You have to drop it and recreate it.
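Something like the following, assuming the constraint received a system-generated name that has to be looked up first (the name shown is hypothetical):

```sql
-- find the foreign key constraint name on Y
select constraint_name
  from user_constraints
 where table_name = 'Y'
   and constraint_type = 'R';

-- drop it, then recreate it with the desired delete rule
alter table y drop constraint SYS_C004049;   -- hypothetical name from above
alter table y add constraint y_a_fk
  foreign key (a) references x on delete set null;
```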

is this solved in 9i

July 23, 2004 - 4:04 am UTC

Reviewer: Solomon

Is the issue of unindexed foreign keys solved in 9i? Because when I do the above steps of creating P and C and then inserting into and updating P, only P is getting locked in 9i, but both P and C are getting locked in 8i.

Tom can you help?

Tom Kyte

Followup  

July 23, 2004 - 8:37 am UTC

the issue exists in 8i, in 7.0 -- the "issue" is: if you have an unindexed foreign key and you delete from the parent or update the parent key, then:

a) every row in the child table must be scanned and no new rows can be inserted into child table while this is taking place. this should be somewhat obvious since we have no index on the columns in child to search on (so full scan) and nothing external from the table to "lock" (no index entries)


P is not getting locked in ANY release.

In 8i and before, C is locked for the duration of the transaction (from the point in time the user deletes or updates P primary key till it commits/rollsback)

In 9i and after, C is locked for the duration of the update/delete statement only (less locking, but there nonetheless)


The bottom line is -- if you run into the locking issue (if it actually affects you because you actually delete/update the parent's primary key), not having an index materially and MASSIVELY affects the performance of the update/delete -- making it full scan the child for each update/delete of P! Therefore, indexing is sort of "mandatory" in that case.

Any substitute for FKs?

November 22, 2004 - 9:09 pm UTC

Reviewer: JC from Dallas

Tom,

In our environment we have many many FKs and many many indexes for FKs to avoid the locking problem. All these indexes are costing much I/O. Disk space is not a problem, but we don't have enough disk spindles (disk controllers) to share the load. I/O performance is a concern.

I've been thinking about any other alternative to replace FKs. I'd like to investigate modeling the schema using objects, hoping we can build some integrity rules into the object structure instead of FKs. I don't have enough experience yet with tables with objects. I know that the Oracle dictionary is already heavily objectized. Can you provide some advice on this?

Thanks!!

Tom Kyte

Followup  

November 22, 2004 - 9:24 pm UTC

even with objects (nested tables and such) you have, at the end of the day, relational -- flat tables with surrogate keys (extra keys!) and foreign keys that need indexing.

You need to index a fkey if and only if

a) you update the parent tables primary key (shame on you if you do)
b) you DELETE from the parent

and if you do b) and do not have a fkey index -- consider what that means (a FULL SCAN of the child for each and every single parent row)


So, since you should never ever do a), and you know what tables you do b) on -- can you use that information to reduce the number of fkey indexes?

Also -- how have you quantified that these indexes are "costing much in IO" -- read IO or write IO? what did you do to measure this exactly?

thanks...but what if I can't reduce many FK indexes?

November 24, 2004 - 1:11 am UTC

Reviewer: JC from Dallas

Thanks for your tips. I will try to reduce the amount of FK indexes, if possible.

But if I can't reduce enough FKs, I still would like to find a way to revamp the current implementation -- meaning finding any possible ways to maintain referential integrity while reducing some FKs (i.e. FK indexes).

We have a table with 66 columns and 42 FK indexes, and another one with 43 columns and 11 FK indexes. I am not the designer of the current schemas, but need to help optimize the database performance.

I apologize that I can't provide any statistics as of yet, but I am not comfortable with inserting into tables with more than 10 indexes, needless to say 42 FK indexes.

Do I have any choice of implementations for integrity checks but not using FKs? I understand that FK will be the best performer, but I am willing to consider the next alternative candidate(s).

Thanks again for your advice.

Tom Kyte

Followup  

November 24, 2004 - 7:15 am UTC

well, it is like I said.

there are exactly two cases where you need indexes on fkeys

o update parent primary key. You can (and hopefully have) NEVER DO THAT. you didn't use a primary key if you are updating it. it might be unique, but it ain't a primary key.

o delete from parent table. do you actually delete from the parent table? do you delete from the parent table *frequently* enough to care? do you delete from the parent table only in "admin mode" (eg: these are almost exclusively lookup tables - with 42 of them, I certainly hope so, for I cannot imagine an entity actually having 42 relations -- maybe it could be true but sounds extreme)

There are no other ways to have data integrity without a foreign key when it is called for. Anything you try to code yourself (you are worried about performance on one hand "42 indexes scares me" but not on the other "i know fk's will be fastest") will not only be really slow -- but if you post the code you attempt -- I'll break it in about 30 seconds with just 2 or 3 users.




Hierarchical tree of foreign keys

January 31, 2005 - 11:19 am UTC

Reviewer: Clark from Columbus, OH

Tom,

Is it possible to list the child tables in hierarchical order based on the foreign keys? It would come in handy when we want to issue deletes or updates in an OLTP application with lots of dependent tables.

Thanks.

Tom Kyte

Followup  

January 31, 2005 - 11:39 am UTC

create table emp ( empno primary key, mgr references emp);


create table t1 ( x primary key, y int );
create table t2 ( y primary key, x references t1 );
alter table t1 add constraint t1_fk foreign key(y) references t2(y);


Hmmm, gotta be careful, but in 10g with NOCYCLE on a connect by -- I'm sure you could.

But that oltp application -- well, it best WOULD KNOW the data, it would have no need to do this query.


And since you never do update cascades -- you would never have the issue on the updates?
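A sketch of what such a NOCYCLE query might look like against the dictionary (assumptions: all constraints live in the current schema, and 'DEPT' is a hypothetical starting parent table; a table can appear via several paths, hence the MAX):

```sql
-- pair each foreign-key constraint with the table it references
with fk as (
  select r.table_name as child_table,
         p.table_name as parent_table
    from user_constraints r
    join user_constraints p
      on p.constraint_name = r.r_constraint_name
   where r.constraint_type = 'R'
)
-- walk the dependency tree; NOCYCLE survives self/circular references.
-- Delete child tables in descending depth order, the parent last.
select child_table, max(level) as depth
  from fk
 start with parent_table = 'DEPT'
connect by nocycle prior child_table = parent_table
 group by child_table
 order by depth desc;
```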



January 31, 2005 - 1:13 pm UTC

Reviewer: A reader

Tom,

I think I did not phrase my question clearly.

For e.g.
dept - dept_id (pk)
emp - references dept(dept_id)
sal - references emp(emp_id)

Now if I want to delete a dept, I can find all the tables that refer this dept table with dept_id. Then I have to find if the child tables further have any child tables. Instead can we achieve this thru a single query or a pl/sql script so that I know, first I should delete from sal table, then emp table and then dept table.

Thanks again.

Tom Kyte

Followup  

January 31, 2005 - 1:41 pm UTC

Oh no, I understood you completely.

your OLTP application already knows that. It ALREADY better darn well KNOW what tables it needs to affect (that, after all, is the definition of the T in OLTP)

if it doesn't and has to "discover them", well, then there is a big problem here. The only thing I would say is "the tables should therefore have on delete cascade"; the application shouldn't be nosing around saying "hmm, what tables might I have to delete from this week" if it is supposed to be implicit.

My premise is the application ALREADY KNOWS THIS information -- it is a transactional application. It (this delete cascade) would have been implemented via referential integrity constraints if it is always to happen OR the application would specifically know "in this case, I delete from these tables".

January 31, 2005 - 9:53 pm UTC

Reviewer: A reader

Yes, the OLTP application knows how to delete it. But I want to know when I want to delete from the backend.

Tom Kyte

Followup  

February 01, 2005 - 8:38 am UTC

that does not make sense.

the backend IS the oltp application.


What you are proposing "find all tables related to this and delete from them first" is downright *scary*

IF the tables are to be treated that way
THEN
they would have been installed with ON DELETE CASCADE;
ELSE
YOUR program would have INTIMATE knowledge of the right thing to do and would
do it
END IF



Number of foreign keys and performance

May 17, 2005 - 7:15 pm UTC

Reviewer: A reader

Tom,
We have a table with 185 columns. We are looking at adding FKs on about 60 columns, these columns already have indexes. 1) What would be the impact of adding 60 FK's on inserts, updates and deletes?
2) And is 60 FK's on a table too high?
3) 185 columns in a single table - is that bad?

Thank you



Tom Kyte

Followup  

May 18, 2005 - 8:42 am UTC

1) there will be an index probe into the parent for each foreign key.
2) well, 185 columns seems "high" to me :) yes 60 seems high but see #3
3) hard to say, totally a design issue. Not being aware of the design, I cannot say....

Number of foreign keys and performance

May 18, 2005 - 10:27 am UTC

Reviewer: A reader

Tom,
>>1) the index probe into the parent for each foreign key.
a) For what operations does the index probe happen - updates, deletes, inserts (selects also?)? Can you please explain.
Even with the index probe, isn't it worth the overhead/time to have FKs to maintain data integrity?

Thank you


Tom Kyte

Followup  

May 18, 2005 - 11:24 am UTC

when you insert a row, you'll have 60 index unique scans to 60 parents.

yes, data integrity important.
yes, data model VITAL

I can only say "60 seems high", but I cannot say anything else. If that is the only model possible for this data, so be it. The price you will be paying is 60 index probes.

Many optional foreign keys

July 19, 2005 - 5:06 pm UTC

Reviewer: VA from New Jersey, USA

The data modelling "standard" in our environment is that all data elements that fall into a discrete set of values, i.e. have a finite domain, should be put in a generic LOV table (lov_pk, lov_name, display_val, return_val) and all the tables should have a foreign key to this LOV table.

Of course, most of these foreign keys are optional i.e. they are NULLable columns.

So, when writing queries, we find ourselves doing stuff like

from
mytable t,lov l1,lov l2,...
where t.lov1=l1.lov_pk(+)
and t.lov2=l2.lov_pk(+)
and ...

All these outer joins are killing performance when these views are joined with zillion other views and stuff.

So, someone came up with the bright idea to create a UNIQUE constraint on the LOV table as (lov_pk,display_val) and now the data tables contain a pair of columns for each foreign key (lov1_pk,lov1_display_val)

So, the query above doesn't have to join with the LOV table at all to get the lov_display_val. Of course, if I need the lov_return_val or some other information from the LOV table, I am back to where I started.

What do you think about this approach? Is this "over-normalization"? But surely there is no such thing as having too much declarative integrity.


Thanks

Tom Kyte

Followup  

July 19, 2005 - 6:04 pm UTC

ugh, hate it.


there is no such thing as having too much RI, but there are bad designs.

So, a field representing gender (with M, F and null) needs to be factored out?
State code?

That is going "too far", you can have the state code in the table, and a table of state codes to validate with. You don't have to join to pick up the state code, it is just there already.

You CAN use natural keys when they make sense.
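To make that concrete, a sketch of the natural-key approach (STATES and CUSTOMERS are illustrative names, not from the question): the code lives in the data table itself, validated by RI, so no join is needed just to display it.

```sql
-- a table of valid state codes (the natural key), carrying the name
create table states
( state_cd   varchar2(2) primary key,
  state_name varchar2(30) not null
);

-- the code is stored directly in the data table, with RI back to STATES
create table customers
( cust_id  number primary key,
  state_cd varchar2(2) not null references states
);

-- a report shows state_cd with no join at all; it joins (an inner
-- join, since the column is NOT NULL) only when it needs state_name
select c.cust_id, c.state_cd, s.state_name
  from customers c, states s
 where c.state_cd = s.state_cd;
```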

July 19, 2005 - 6:39 pm UTC

Reviewer: VA from New Jersey, USA

Well, for 2 values + a null, we don't factor them out; we store them right in the table as an M/F/null indicator.

State code? Absolutely. Need RI so it goes into the generic LOV table with lov_name=state

"That is going "too far", you can have the state code in the table, and a table of state codes to validate with. You don't have to join to pick up the state code"

Right, that corresponds to my 2nd design option above. You are right, I don't have to join for the state code, but most "reports" want both the state code and the state_name, so I have to (outer) join with the STATE lookup table, right?

So, even in the example you bring up, you agree we need an outer join with STATE. That's just 1 column; what if my table has dozens of such lookup fields and I need to report on them? Clearly, having a dozen outer joins with lookup tables doesn't make sense. What's a good design here?

Thanks

Tom Kyte

Followup  

July 19, 2005 - 9:52 pm UTC

you don't have to outer join -- state should be a NOT NULL attribute I would think, and it has RI to the other table, so why outer join?


having a dozen outer joins is dandy in general -- hash joins are brutally efficient.

Recursive Delete

July 30, 2005 - 1:45 am UTC

Reviewer: Rory from Philippines

Hi Tom,

I saw on your site a way to recursively truncate a table with many children. It showed the steps, like disabling the constraint, then truncating, then enabling the constraint once more. It would do this no matter how deep the children of the parent tables are.

My question is: is there a way to recursively delete from children of children of parent tables? I mean, I'd like to delete records from a parent but it won't work because of dependents on it. The problem is, the option "on delete cascade" was not implemented on the tables or constraints when they were made. Do you have a procedure just like the recursive truncate that will work for deletes? The parent tables have so many children, and when I use your script to look for children, I sometimes see that parents also have themselves as children. Thanks a lot Tom.

Tom Kyte

Followup  

July 30, 2005 - 8:55 am UTC

the same exact logic will work, you are just starting at a different level in the hierarchy.

so, take same logic, modify it.

but if you plan on doing this more than once in a blue moon, you should make the constraints do it.

Recursive Delete

August 01, 2005 - 1:22 am UTC

Reviewer: Rory from Philippines

Thanks a lot Tom.

Is there a way to modify the foreign key constraint for the option "on delete cascade" or the only way is to re-create it?

thanks again Tom.

Tom Kyte

Followup  

August 01, 2005 - 7:30 am UTC

you have to recreate it.
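In other words, something like this (a sketch; the table, column, and constraint names are made up):

```sql
-- there is no ALTER ... MODIFY syntax for a constraint's delete rule;
-- drop the existing foreign key and re-add it with ON DELETE CASCADE
alter table child_t drop constraint child_t_fk;

alter table child_t
  add constraint child_t_fk
  foreign key (parent_id)
  references parent_t (parent_id)
  on delete cascade;
```

The re-add will revalidate existing child rows against the parent, which on a large table can take time, so schedule it accordingly.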

OK

October 28, 2005 - 7:16 am UTC

Reviewer: Raj from Hyderabad,India

Hi Tom,
I would like to get all the foreign keys
for a given primary key.
I used this query but not working.

select constraint_name AS pk_name,r_constraint_name as fk_name
from user_constraints
where constraint_name = 'T_PK'

Could you please help??

Tom Kyte

Followup  

October 28, 2005 - 1:01 pm UTC

where r_constraint_name = 'T_PK';

the fkeys REFERENCE the primary key T_PK.
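Put together, the corrected query might look like this (a sketch; it assumes the primary key constraint lives in your own schema):

```sql
-- find every foreign key that references the primary key T_PK,
-- and the child table each one lives on
select constraint_name as fk_name,
       table_name      as child_table
  from user_constraints
 where constraint_type   = 'R'
   and r_constraint_name = 'T_PK';
```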

Unnecessary foreign keys?

November 28, 2005 - 8:12 am UTC

Reviewer: Peter Turányi from Slovakia, CE

Hello.
Some time ago, when we designed our system, we created ID and NAME columns in each table for the person who inserted or updated the record. Each ID has a foreign key to the persons table to validate the ID.
Now there are 400 tables in the database, and the number of open cursors exceeds the limit.
One solution is to increase the open cursors limit.
But I think we can remove these foreign keys, because the ID is now inserted by a database trigger.
This could improve database performance too, I think.
Is that true?

Thanks

Tom Kyte

Followup  

November 28, 2005 - 1:37 pm UTC

how would removing a foreign key decrease the number of concurrently opened cursors a session has going?

Unnecessary foreign keys?

November 29, 2005 - 4:00 am UTC

Reviewer: Peter Turányi from Slovakia, CE

Hi Tom,
I think that when a session ends, there is no connection to its open cursors.
But when you remove a person's record from the persons table, the database must check all tables via the foreign keys, because every table references this table through the ID of the person who inserted/updated the record -- even when that person never wrote anything to the database.
This is not an everyday problem, but from time to time users insert some persons by mistake and then want to remove them.
In that case the number of open cursors exceeds the limit.

Tom Kyte

Followup  

November 29, 2005 - 10:13 am UTC

they only come into play for DELETE from the parent (to cascade the delete). The other checks don't use sql.

and if you need the on delete cascade - well then hey, YOU NEED THEM (and they are only open for the duration of the on delete cascade).

No, I see no reason to remove foreign keys here, none, zero, zippo. They are not the cause of your problem.


Unnecessary foreign keys?

November 30, 2005 - 6:28 am UTC

Reviewer: Peter Turányi from Slovakia, CE

So you advise leaving the foreign keys and increasing the number of open cursors.
But there is no need for delete cascade, because you cannot delete a person with child records.
When a user deletes a person inserted by mistake, there is no reason to check records via these foreign keys (the insert/update person ID), because this person cannot have inserted/updated any records.
When a user deletes a person who does have insert/update records, that means the person is an employee, and deleting the record is controlled by the foreign key from the employees table.
So I still think that we don't need these foreign keys.

Tom Kyte

Followup  

November 30, 2005 - 2:41 pm UTC

if you want clean, correct data you will use integrity constraints.

If you want garbage in your database, by all means, don't use constraints.


If you have a de-facto foreign key and wish that to be enforced, you will use a declarative constraint - don't even dream you can do this correctly in the application.

I advise you to use declarative integrity constraints.


You need these foreign keys.

Foreign Keys and Peoplesoft

February 15, 2006 - 12:16 pm UTC

Reviewer: Alex from Riverside, CA

Tom

I agree with you for the most part that FKs and constraints are important features of a good database design.

However, it is interesting to note that many highly successful and reliable business applications like PeopleSoft use very minimal DB FKs and constraints (source: the book "PeopleSoft for the Oracle DBA"). What will become of that in the future if Oracle Applications merges with PeopleSoft into ONE product?

Tom Kyte

Followup  

February 15, 2006 - 1:08 pm UTC

they spend a ton of time in the application doing this stuff.

they are an 80% implementation - so much in house customization is done during implementation. They are "not quite done" yet.

And they pay the price for it - in terms of complexity of code, ensuring the code is correct.

I speak mostly to people developing applications for in-house use. Those are not meant to be 80% done; they are to be 100% done.

It is sometimes a different story for packaged applications.


But the packaged apps could use a LOT more than they do. A lot more.

Foreign Keys and Peoplesoft

February 15, 2006 - 12:20 pm UTC

Reviewer: Alex from Riverside, CA

Tom

I forgot to mention that PeopleSoft has its own PS dictionary tables within the Oracle database, and perhaps PS constraints and FKs built in.

I would like to know your opinion on this design.

Tom Kyte

Followup  

February 15, 2006 - 1:09 pm UTC

Sounds like a lot of work, doesn't it? All to be "database independent".

but you - building your database for your company, you don't have to reinvent the wheel.

do you.

FK

March 02, 2006 - 5:37 pm UTC

Reviewer: A reader

Can a foreign key reference the same table?
How?

Tom Kyte

Followup  

March 03, 2006 - 7:58 am UTC

sure, just by asking for it.

 
ops$tkyte@ORA10GR2> create table emp ( empno number primary key, mgr references emp );
 
Table created.
 

Foreign Keys

March 28, 2006 - 1:47 am UTC

Reviewer: Baiju_P from India

Sir,

Is there any way that I can give multiple table column references in one single SQL statement?

Eg: Say I have three master tables

Create table staff (staffid number(3) Primary key, staffname varchar2(20));

Create table student (studentid number(3) primary key, name varchar2(20));

Create table courses (courseid number(3) primary key, course_description varchar2(30));

Now, I want to create a fourth table that references all the PK col's of the three Master tables. How can I give the references to all the three columns in one single statement. i.e

Alter table <fourth table> add constraint fknn foreign key (staffid, courseid,studentid) references (<master table name and col. names>).....



Tom Kyte

Followup  

March 28, 2006 - 7:53 am UTC

I don't know what you mean exactly, but if your goal is to have a 4th table with columns each of which references one of these three existing tables, it could look like the create of T3 or the alter of T4 below:

ops$tkyte@ORA9IR2> create table t1 ( x int primary key );
 
Table created.
 
ops$tkyte@ORA9IR2> create table t2 ( y int primary key );
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table t3 ( a references t1, b references t2 );
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table t4 ( l int, m int );
 
Table created.
 
ops$tkyte@ORA9IR2> alter table t4
  2  add constraint t4_fk1 foreign key(l) references t1
  3  add constraint t4_fk2 foreign key(m) references t2;
 
Table altered.
 

Foreign Keys

March 29, 2006 - 3:51 am UTC

Reviewer: Baiju_P from cochin

Thanks Sir

Updating a foreign key value

May 29, 2006 - 4:09 pm UTC

Reviewer: Anto from Italy

Hi Tom,
i've got this case:

create table t1 ( id number, col_a varchar2(10),col_b varchar2(10) );
create table t2 ( id number, col_c varchar2(10) );

ALTER TABLE t1 ADD (CONSTRAINT PK_t1 PRIMARY KEY (id));
ALTER TABLE t2 ADD (CONSTRAINT PK_t2 PRIMARY KEY (id,col_c));
ALTER TABLE t2 ADD (
CONSTRAINT FK_t2 FOREIGN KEY (col_c)
REFERENCES t1 (col_a)
);


insert into t1 values (1,'default_val','val2');
insert into t1 values (2,'val3','val4');

insert into t2 values (1,'val3');

What i would like to do is to update t2 setting the foreign key to the default value ,'default_val' on t1 (that i will never delete) when i delete any other row from t1;

I tried a before delete trigger:
create or replace trigger tr_bd_t1
before delete on t1
for each row
begin
if :old.col_a <> 'default_val' then
update t2 set col_c='default_val' where col_c=:old.col_a;
else null;
end if;
end;

How can i avoid mutating table error in this case?
Thanks in advance

Tom Kyte

Followup  

May 30, 2006 - 9:16 am UTC

"why", why not use NULL to represent "I don't really have a true parent"

What is the goal here, are you just trying to avoid the use of NULL or what?


(I hate triggers, I would especially hate a trigger of this sort)
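If NULL is an acceptable way to say "no real parent", no trigger is needed at all - the delete rule can be declared. A sketch based on the tables from the question (note two changes it forces: t1.col_a must be unique for the foreign key to exist at all, and col_c cannot remain part of t2's primary key if it is to be set to NULL):

```sql
create table t1
( id    number primary key,
  col_a varchar2(10) unique,
  col_b varchar2(10)
);

create table t2
( id    number primary key,
  -- when the referenced t1 row is deleted, Oracle nulls col_c itself
  col_c varchar2(10)
        constraint fk_t2 references t1(col_a)
        on delete set null
);
```

No mutating-table problem, no trigger maintenance, and the rule is visible in the data dictionary.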

Enforce Integrity

December 11, 2006 - 12:14 am UTC

Reviewer: A reader

I have three tables, say A, B, C. Table A has a field ID that refers to the key of table B or C.

The table may look like this

A (A_ID varchar2(10));
B (B_ID varchar2(10));
C (C_ID varchar2(10));

Can foreign key integrity be applied to table A on A_ID against two tables (B, C) at once?
If that is impossible, should I create a trigger to enforce data integrity on table A?

When I don't apply locking in the correct way (optimistic or pessimistic locking) - refer to your room reservation example (in your Oracle book) - will a trigger that updates the room itself maintain the business rule?

Thanks Tom.

Tom Kyte

Followup  

December 11, 2006 - 8:04 am UTC

yes, it can - but I don't think that is what you mean.


ops$tkyte%ORA10GR2> create table B (B_ID varchar2(10) primary key);

Table created.

ops$tkyte%ORA10GR2> create table C (C_ID varchar2(10) primary key);

Table created.

ops$tkyte%ORA10GR2> create table A
  2  (A_ID varchar2(10),
  3   constraint b_fk foreign key(a_id) references b(b_id),
  4   constraint c_fk foreign key(a_id) references c(c_id)
  5  );

Table created.

I've a feeling you mean "can I have a foreign key from A that points to EITHER b or c (not necessarily both at the same time)"

that answer is "no, your model is inappropriate, wrong - you need two attributes"


So, I believe table A really is:

create table a
( a_id primary key,
  fk_b references b,
  fk_c references c
)


don't even think about trying this in a trigger - if you do, please post the code and we'll tell you why it doesn't work in real life. 
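If the business rule is "exactly one parent at a time" (an exclusive arc), the two-attribute model above can declare that too - a sketch (the ONE_PARENT constraint name is made up):

```sql
create table b ( b_id varchar2(10) primary key );
create table c ( c_id varchar2(10) primary key );

create table a
( a_id varchar2(10) primary key,
  fk_b varchar2(10) references b,
  fk_c varchar2(10) references c,
  -- enforce "exactly one of the two parents is set";
  -- relax to "at most one" if a row may have no parent at all
  constraint one_parent check
  ( (fk_b is not null and fk_c is null) or
    (fk_b is null and fk_c is not null) )
);
```

Everything stays declarative: the foreign keys guarantee the parents exist, the check constraint guarantees the arc, and no trigger is involved.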

More foreign key vs timestamp problems

February 06, 2007 - 4:57 am UTC

Reviewer: Zoe from Leeds, UK

At the moment I am trying to write a piece of script that does:
Cursor
Select (data from 3 tables where status and end date not ended yet) for update (using the rowids)
Update 3 tables ('EL', 'CL', 'ST') to set date end on current one.
Insert new 'ENDED' row into 'ST' table (child of 'CL' table)

The problem is that I either get ORA-06502, ORA-01722, ORA-01830, or the 'no parent found' error, depending on what I try to put in the sub_date field (the bare cursor value, TO_DATE, TO_CHAR, and various mixes of the two).

the id and sub_date are a composite FK from the parent table and the sub_date has been forced to have a date and time in it (just in case someone decides to do >1 'CL' in one day I suppose)
INSERT INTO ST
( SYSID,        -- number
  SUB_DATE,     -- date
  CODE,         -- varchar2
  START_DATE,   -- date
  UPDATE_ID,    -- user
  UPDATE_TIME   -- date
)
VALUES
( rec.sysid,          -- from cursor, same vals as prev row
  rec.SUB_DATE,       -- updated date end, and 'OPEN' ***
  'ENDED',
  ld_inscla_end + 1,  -- constant end date + 1
  'my_user',
  SYSDATE );
*** this is the problem date; I checked by running the insert on a self-created table and there were no problems, as there were no constraints.
(dbms_output.put_line also appears to chop the time part off the dates unless you use a to_char mask to see what it is)
Added info: NLS_DATE_FORMAT is set to a standard of 'DD-MON-RRRR' and I have tried to run the script (with rollbacks so far) in PL/SQL Developer and SQL*Plus 8.0.
I worry that I am being horrifically dense, but my line manager can't figure it out either.

Please ignore the last post

February 12, 2007 - 5:03 am UTC

Reviewer: Zoe from Leeds,UK

It's ok, we figured out it was to do with the constant I think....

FK constraints in Data Mart

May 24, 2007 - 6:24 am UTC

Reviewer: Shiva from India

Hi Tom,

Presently, in most of our solution data marts, I am not creating any FK constraints or NOT NULL constraints. However, I have defined primary key constraints only on the surrogate key columns of all the dimension tables. For the centralized fact table I have created indexes on the FK columns. The assumption I made is that the ETL will validate, cleanse and load the data from the enterprise warehouse/staging area to the solution data mart. Please let me know whether my design consideration is correct, and also let me know if you have any suggestions to further improve the data model.

Thanks in advance.

Regards,
Shiva
Tom Kyte

Followup  

May 26, 2007 - 11:05 am UTC

it is a data warehouse where you want to define and assert as many constraints as humanly possible - ALL OF THEM, AS MANY AS YOU CAN.

why?

the optimizer sort of needs them to do smart stuff. without them, it is deaf dumb and blind.

http://asktom.oracle.com/Misc/stuck-in-rut.html
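When the ETL has already cleansed the data and you do not want to pay for re-checking it at load time, the constraints can still be declared for the optimizer's benefit. A common warehouse pattern (a sketch; the table and constraint names are invented, and NOVALIDATE is only acceptable if you trust the ETL completely):

```sql
-- declared but not enforced or checked against existing rows;
-- RELY tells the optimizer it may trust the relationship, e.g. for
-- join elimination and query rewrite when
-- QUERY_REWRITE_INTEGRITY = TRUSTED.
-- (the referenced dim_customer primary key should carry RELY too)
alter table fact_sales
  add constraint fact_cust_fk
  foreign key (cust_key) references dim_customer (cust_key)
  rely disable novalidate;
```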

Fk

January 03, 2008 - 10:49 pm UTC

Reviewer: A reader

Tom:

Can I create a foreign key from a child table to a parent table where the column is not a primary key? The PK in the parent table is a sequence number (Seq_no). The parent table also has two columns (Book_No, Book_Medium). The child/linking table also has these two columns, and I want to ensure their values exist in the parent.

foreign keys

January 09, 2008 - 7:48 am UTC

Reviewer: A reader

Tom:

I found that Oracle requires that, to create a foreign key from column A to column B, column B has to be either a PK or have a UNIQUE constraint.

What is the reason for that?

I can't have a unique constraint because many records could have the same value. What I am trying to enforce is that the values in A have to exist in B. So if "10" is repeated 5 times in B, it should still be valid to store in A.

Is there a way around this?
Tom Kyte

Followup  

January 09, 2008 - 8:13 am UTC

ANSI said so. It is the way relational databases are specified to function.

If you have the condition you describe, your data model is fundamentally flawed (speaking relationally) - you are missing a table in this schema.

fundamentally flawed?

January 09, 2008 - 9:08 am UTC

Reviewer: Duke Ganote

Or it could be a 'temporal' model where, for example, 'effective date' is part of the physical primary key. Relationally, there should be a 'stateless' table with just the primary key (minus the effective date) for referential integrity with any child tables -- but few implementations actually do that.
Tom Kyte

Followup  

January 09, 2008 - 10:31 am UTC

if it were 'temporal' then it would be really flawed (the design really needs to be thought out; that stateless table doesn't cut it for me) - as the presence of the value as a primary key depends on what day it is. You are saying that logically the row "disappears over time".

You are right that few implementations do that, but that doesn't make it right.

Many implementations do not use bind variables.
Many implementations do not think about securing data until after it goes production.
Many implementations do not think about recovery scenarios until they need to perform one.
Many implementations do not design their systems to perform; they just hope they will.

doesn't make them right in the end...

FK

January 09, 2008 - 10:46 am UTC

Reviewer: A reader

It is an old system.

Let us say you produce "Videos" and sometimes a video has different releases.
You issue a control number for each release production.

Videos
----------
Control_No (PK)
Video_No


For invoicing they are only interested to know Video_no and it has to be a valid video number
from the "Videos" table. It is not by control number.

I need a foreign key from Invoices.video_no to Videos.video_no.

Invoices
-------------
Invoice_no (PK)
Video_no
Tom Kyte

Followup  

January 09, 2008 - 11:48 am UTC

every system is by definition a legacy (eg: old) system the millisecond it goes production.

If it is so old, it would be finished - therefore, it is in maintenance and hence things can and should be fixed. You are changing it; fix it.

Your model is missing a distinct_list_of_videos table - it will have to exist prior to any foreign keys in the relational model being implemented.

If you attempt to do this in your application, make sure to issue LOCK TABLE VIDEOS - you'll have to serialize access (since reads do not block writes, writes do not block reads, and you cannot see others' uncommitted work, enforcing referential integrity in the application requires very COARSE locks).

You need a unique key...

January 09, 2008 - 11:14 am UTC

Reviewer: Peter Clark from Cumbria, England

Stick a unique key constraint on videos.video_no then you can add the foreign key.

alter table videos add constraint uk_videos_video_no unique (video_no);

alter table invoices add constraint fk_invoices_videos foreign key(video_no) references videos (video_no);
Tom Kyte

Followup  

January 09, 2008 - 11:50 am UTC

he is saying that video_no is not unique in that table...

FK

January 09, 2008 - 12:37 pm UTC

Reviewer: A reader

Tom:

The way this thing works now is that Person A creates a record in "Videos" table with only control number and video title. Then Person B comes and assigns a random video number for that record. So there is not really a lookup table for those videos. It is stored in the transaction table.

It seems the only way to do it is either remove the duplicate numbers and add a unique constraint OR create a Lookup table for video numbers and have Foreign keys point to that.
Tom Kyte

Followup  

January 10, 2008 - 1:59 pm UTC

If removing the duplicates is an option - that speaks volumes - it means they shouldn't have been there in the first place, and the unique constraint is missing.

If they should exist, that indicates there should be a master "video" table that constrains this child table (not the child table you are looking at right now, but the one inappropriately named videos), whose key would be the video number (fkey to this new master table) and control number...
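The shape described above might look like this (a sketch; the table and column names are invented from the thread):

```sql
-- new master: one row per distinct video number
create table video_masters
( video_no number primary key );

-- the existing "videos" table becomes a child: one row per release,
-- keyed by control number, constrained to a valid video number
create table video_releases
( control_no number primary key,
  video_no   number not null references video_masters
);

-- invoices can now reference a guaranteed-unique video number
create table invoices
( invoice_no number primary key,
  video_no   number not null references video_masters
);
```

With the master table in place, video_no can repeat freely in the releases table while invoices still get full declarative referential integrity.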

Performance of Foreign Keys

February 14, 2008 - 4:32 pm UTC

Reviewer: Del Benavides

Hi Tom,

We have a data model with approximately 500 tables. Every table has audit columns CREATION_DATE, CREATED_BY, LAST_UPDATE_DATE and LAST_UPDATED_BY. The CREATED_BY and LAST_UPDATED_BY columns store the user_name of the user that created or caused the creation of records in the tables.

There is a proposal to create a foreign key constraint in every table in the data model to the USER_TABLE. My question is: is this a good idea? Will this impact the performance of the database?

The other option is to guarantee that the CREATED_BY and LAST_UPDATED_BY columns contain a valid user at the time of creation or update via a trigger using the USER environment variable.

Thank you in advance for your help and insight.

Del Benavides.
Tom Kyte

Followup  

February 15, 2008 - 7:51 am UTC

... here is a proposal to create a foreign key constraint in every table in the data model to the USER_TABLE. My question is: is this a good idea? Will this impact the performance of the database? ...

if your model is such that a record has to be created by someone "known" and modified by someone "known"

Absolutely - yes, this is not only a good idea, it is the ONLY idea.

As for performance, you will not notice it in all likelihood - and even if you do, it is the price of data integrity, one you must pay.

hah, a trigger, bad idea. So what - the user existed AT THE TIME the record was created. That doesn't mean they'll exist tomorrow, or that they are a registered user of your application or anything. And if performance were a concern, you would not be using any triggers - they are somewhat slower than server-enforced integrity would be.

What if Deleting a User from Parent Table

February 15, 2008 - 3:04 pm UTC

Reviewer: Abd from US

Hi Tom,
Thanks for giving the opportunity to ask this.

We have a similar situation to what Del Benavides is asking above. In some cases we have to delete a user from a parent table which has FK constraints on 400+ child tables, and that takes a lot of time. In such conditions we were thinking of dropping FK constraints from some tables. Is that a correct decision?
Tom Kyte

Followup  

February 17, 2008 - 7:34 am UTC

what kind of data model has a parent table with 400+ child tables ?!?!?!?!

do you

a) want data integrity, clean, useful data
b) a database full of garbage

You choose: drop foreign keys, accept B. Keep them, have A.


I would be seriously looking at a design that has 400 child tables - something is *wrong* there. I think someone took "supertype, subtype" to an illogical extreme here.


Now, that said, the amount of work performed to maintain integrity on 400 tables will obviously be more than 1 table - but it is exactly what you need to do given your data model

I would be interested in hearing the details of something where 400+ child tables was considered the best way to physically implement your schema.

Performance of Foreign Keys

February 15, 2008 - 6:45 pm UTC

Reviewer: Del Benavides from CA USA

.........
if your model is such that a record has to be created by someone "known" and modified by someone "known"

Absolutely - yes, this is not only a good idea, it is the ONLY idea.

As for performance, you will not notice it in all likelihood - and even if you do, it is the price of data integrity, one you must pay.
.........


Tom,

I agree and understand that data integrity/referential integrity can only be guaranteed by enforcing it in the database. I use foreign keys everywhere to guarantee this. However, this case has some other restrictions and presents some other challenges.

Some clarification on the situation I described above. We are using APEX to convert 40+ applications from a Notes environment. The users for our applications will be created and maintained in OID (LDAP). Our policy is to never delete user accounts from LDAP. Instead we expire the accounts for historical purposes.

In order to create foreign keys from each of the 500+ tables in the data model to a USERS_TABLE, we will have to extract the LDAP users periodically and populate the USERS_TABLE. The other concern I have is that each of the tables with foreign keys to the USERS_TABLE will have to have indexes on the foreign keys to prevent full table locks when changes occur in the USERS_TABLE. If we don't implement these indexes, we could essentially lock the entire schema when there is an update in the USERS_TABLE.

Our requirement is that only authorized users create and update records in the applications. To ensure this we are using the following type of trigger in all our tables:

CREATE OR REPLACE TRIGGER BIU_CASES_TR
BEFORE INSERT OR UPDATE
ON CASES 
REFERENCING NEW AS NEW OLD AS OLD
FOR EACH ROW
BEGIN
   
    :new.last_updated_by := v('APP_USER'); 
    :new.last_update_date := sysdate;
   
  IF INSERTING THEN   
    :new.created_by := v('APP_USER');
    :new.creation_date := sysdate;
    select cases_seq.nextval into :new.case_id from dual;
  END IF;
  
END;


This trigger ensures that the created_by and last_updated_by columns are set to the authorized user who is logged in to the application. As you mentioned, and I agree, this doesn't ensure that the user will exist tomorrow. But I am not so concerned with whether the user exists tomorrow. I am more concerned with whether the user existed and was authorized when they created or updated the record. In addition, we can always find out the details on expired users from LDAP.

With this clarification, do you recommend that we still create foreign keys from every table in the data model to the USER_TABLE? I really value your knowledge and opinion and wanted to make sure I asked the question with enough detail.

Again, thank you in advance for your help and insight,

Del Benavides.
Tom Kyte

Followup  

February 17, 2008 - 7:40 am UTC

... If we don't implement these indexes we could essentially lock the entire schema when there is an update in the USERS_TABLE.
...

but if you never

a) delete from the USERS_TABLE
b) update the primary key of the USERS_TABLE

then the unindexed foreign key argument goes *away*. You need not index them.


Never delete from users_table (you said you won't, you are keeping them)
Never update the primary key (you won't, you don't rename user identities)

therefore, you need no indexes on that foreign key.
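
To audit which foreign keys are actually unindexed, a quick check against the USER_ dictionary views can help. This is a sketch, simplified to single-column foreign keys (a production version would compare full column lists and positions):

```sql
-- list FK constraints whose column has no index with that column leading
select c.table_name, c.constraint_name, cc.column_name
  from user_constraints  c
  join user_cons_columns cc
    on cc.constraint_name = c.constraint_name
 where c.constraint_type = 'R'
   and not exists (
         select null
           from user_ind_columns ic
          where ic.table_name      = c.table_name
            and ic.column_name     = cc.column_name
            and ic.column_position = 1 );
```

Any row returned is a candidate for an index, but only if the parent is ever deleted from or has its primary key updated.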


That trigger ENFORCES NOTHING.
That trigger SETS SOME COLUMN VALUES

nothing more, nothing less.


... I am more concerned with if the users existed and was authorized when they created or updated the record. In addition we can always find out the details on expired users from LDAP. ....

and the only way to do that would be to have the foreign keys in place. Who is to say that v('APP_USER') returns a user that a) is authorized, and b) existed when they created or updated the record?



You either

a) want to have this rule enforced, in which case you use the constraint
b) do not care if this rule is enforced, in which case the rule is not a rule and therefore need not be implemented.
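
Concretely, option (a) is just a declarative constraint on each audit column. A sketch, where the table, column, and key names are assumptions based on the trigger shown above:

```sql
-- hypothetical names based on the CASES trigger discussed above
alter table cases
  add constraint cases_created_by_fk
      foreign key (created_by) references users_table (user_name);

alter table cases
  add constraint cases_updated_by_fk
      foreign key (last_updated_by) references users_table (user_name);
```

With these in place, an insert or update naming a user not present in USERS_TABLE fails immediately, regardless of which application performed it.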

Performance of Foreign Keys

February 19, 2008 - 1:17 pm UTC

Reviewer: Del Benavides from CA USA

Tom,

As you mentioned, if we don't delete and/or update from the USER_TABLE, We don't need indexes in all foreign keys referencing the USER_TABLE. I think that is the problem. I need to revisit how users will be kept in the USER_TABLE. And how the USER_TABLE will be populated. As you mentioned the primary key of the USER_TABLE shouldn't be updated. I need to ensure that this is the case in our data model. I also need to revisit what will be used as the primary key for the USER_TABLE.

As far as the trigger goes, you are absolutely correct. I misspoke when I said that we use them to ensure that authorized users create and/or update records. What I should have said is that we use them to set the values of the audit columns when a user creates and/or updates a record in the applications.

With your help, here is what I understand as the best approach:

a) Ensure an appropriate PRIMARY KEY is used in the USER_TABLE. It must be a key that will never need to be updated.
b) Populate the USER_TABLE from LDAP every time a new user is created.
c) Prevent the update and/or delete from the USER_TABLE.
d) Create FOREIGN KEYs from all tables in the data model referencing the USER_TABLE to ensure that only valid users can create and/or update records in the data model.


Again, Thank you for your help,

Del Benavides

Tom Kyte

Followup  

February 20, 2008 - 7:08 am UTC

a) that is true of any entity, sure.

d) yup

Foreign keys

May 29, 2008 - 5:49 am UTC

Reviewer: Shivdeep Modi from Newcastle Upon Tyne, UK

Hi,

This is a continuation of the question I asked at http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:12864646978683#889337200346007716
I thought this was the right thread for my other queries.

I've got a table tsreport_date which contains a date column, as in the link above. Now I want to create a table which will contain the month and year for every month present in the tsreport_date table, in MON-YYYY format, i.e.,
MAY-2008 for all entries of May 2008 as below,
OR
a single date like 01-MAY-2008 for the month of May:
09-MAY-2008 17:52:22
10-MAY-2008 10:00:03
and so on.

What is the best way to establish this relationship?
How do I enforce referential integrity keys on these two tables?

The idea is that, based on the column of the new table, which will be a user selection criterion, I want to probe tsreport_date for all the values of the month.
(tsreport_date is an index organized table)

The user will get to select MON-YYYY and should get all the dates for that month from tsreport_date

Regards,
Shivdeep


Tom Kyte

Followup  

May 29, 2008 - 8:13 am UTC

... Now I want to create table which will contain month and year for every
month of in the tsreport_date table in the format MON-YYYY format ...

no you don't. You use formats to pretty print things for human beings. We use DATES to store DATES


given your statements - I fail to see how Referential integrity fits in there - you are going to have 01-MAY-2008 00:00:00 in one table and

09-MAY-2008 17:52:22
10-MAY-2008 10:00:03


in the other - RI doesn't come into play.
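
In fact, no second table is needed to find all the dates for a chosen month; a range predicate on the DATE column does it. A sketch, assuming the user's selection arrives as a string such as 'MAY-2008':

```sql
select report_date
  from tsreport_date
 where report_date >= to_date(:mon_yyyy, 'MON-YYYY')
   and report_date <  add_months(to_date(:mon_yyyy, 'MON-YYYY'), 1);
```

An open-ended upper bound (less than the first of the next month) avoids any edge cases with time components on the last day of the month.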


Are you just trying to give an end user a picklist? Why not



ops$tkyte%ORA9IR2> select add_months(trunc(sysdate,'mm'),l) from (select -(level-1) l from dual connect by level <= 24);

ADD_MONTHS
----------
01/05/2008
01/04/2008
01/03/2008
01/02/2008
01/01/2008
01/12/2007
01/11/2007
01/10/2007
01/09/2007
01/08/2007
01/07/2007
01/06/2007
01/05/2007
01/04/2007
01/03/2007
01/02/2007
01/01/2007
01/12/2006
01/11/2006
01/10/2006
01/09/2006
01/08/2006
01/07/2006
01/06/2006

24 rows selected.

Referential integrity

May 29, 2008 - 9:23 am UTC

Reviewer: Shivdeep Modi from Newcastle Upon Tyne, UK

Hi,

You are right, referential integrity cannot be enforced here. I was being too expectant.
However, what I've managed to do is store the first day of the month in the new table for all occurrences of any date in that month, i.e. 01-MAY-2008 00:00:00 for all occurrences of any day in the month of MAY-2008.

Yes I am giving the user a select list. The user selects say
MAY-2008 to AUG-2008

This I've translated to:

select 'document.database.edate.options['||rnum||'] = new Option('||''''||report_date||''''||','||''''||report_date||''''||');'
from (select rownum+1 rnum, report_date from
( select report_date from tsreport_date
where report_date >= :begin_date and report_date <= last_day(:end_date) + 1

)
)
/

The user inputs of MON-YYYY are translated to dates in the new table which are then passed on to the bind variables in my processing logic.

Regards,
Shivdeep
Tom Kyte

Followup  

May 29, 2008 - 9:55 am UTC



ops$tkyte%ORA9IR2> select 'document.database.edate.options['||rnum||'] = new Option('||''''||report_date||''''||','||''''||report_date||''''||');'
  2  from (select rownum+1 rnum, report_date from
  3        ( select add_months( to_date(:bd,'MON-YYYY'), level-1 ) report_date
  4             from dual
  5                   connect by level <= (months_between(to_date(:ed,'MON-YYYY'),to_date(:bd,'MON-YYYY'))+1)
  6             )
  7             )
  8  /

'DOCUMENT.DATABASE.EDATE.OPTIONS['||RNUM||']=NEWOPTION('||''''||REPORT_DATE||''
-------------------------------------------------------------------------------
document.database.edate.options[2] = new Option('01-MAY-08','01-MAY-08');
document.database.edate.options[3] = new Option('01-JUN-08','01-JUN-08');
document.database.edate.options[4] = new Option('01-JUL-08','01-JUL-08');
document.database.edate.options[5] = new Option('01-AUG-08','01-AUG-08');


Foreign Keys

May 30, 2008 - 5:59 am UTC

Reviewer: Shivdeep Modi from Newcastle Upon Tyne

Hi,
It is acceptable, but I need to get the dates from the table. The dates may not be consecutive and may have gaps between them.
However, the solution to get a range of dates without looping is helpful. I could use this elsewhere.

Thanks for your help and suggestions.

Regards,
Shivdeep
Tom Kyte

Followup  

May 30, 2008 - 7:42 am UTC

I don't know what you need to be suggested on? You already seem to have a table?

Foreign Keys

June 02, 2008 - 6:35 am UTC

Reviewer: Shivdeep Modi from Newcastle Upon Tyne, UK

Hi,

You are right, I am selecting from a table. As a matter of fact, I am selecting from 3 tables.

My original question was about creating a foreign key between two tables (MM-YYYY vs. DD-MON-YYYY HH24:MI:SS) and, as per your post above, it is clear that such a relationship cannot be enforced.

I am not enforcing any foreign key relationships now at all and have got things working.

Regards,
Shivdeep

Indexing on Foreign keys

April 09, 2009 - 10:00 am UTC

Reviewer: Rajeshwaran, Jeyabal


Hi Tom,

I have read in your book "Expert One-on-One Oracle" (Chapter 7, Indexes) that:

" Un-indexed foreign key is bad in the following cases as well:
. When you have an ON DELETE CASCADE and have not indexed the child table.
For example EMP is child of DEPT. DELETE FROM DEPT WHERE DEPTNO = 10
should cascade to EMP. If DEPTNO in EMP is not indexed, you will get a full table
scan of EMP. This full scan is probably undesirable and if you delete many rows
from the parent table, the child table will be scanned once for each parent row
deleted. "

So I indexed my foreign key in Scenario 3 (created with the ON DELETE CASCADE option).
================================================================================================================================================
SCENARIO 1:

CREATE TABLE P
(
X NUMBER CONSTRAINT P_PK PRIMARY KEY
);

CREATE TABLE C
(
Y NUMBER CONSTRAINT C_FK REFERENCES P(X),
z NUMBER
);

hr@XE> DELETE FROM C WHERE y =1;

1000000 rows deleted.

Elapsed: 00:00:21.04
hr@XE> DELETE FROM P WHERE X =1;

1 row deleted.

Elapsed: 00:00:00.15
================================================================================================================================================
SCENARIO 2:

CREATE TABLE P
(
X NUMBER CONSTRAINT P_PK PRIMARY KEY
);

CREATE TABLE C
(
Y NUMBER CONSTRAINT C_FK REFERENCES P(X) ON DELETE CASCADE,
z NUMBER
);

INSERT INTO P(X) VALUES (1);

INSERT INTO C(y,z)
SELECT 1 ,LEVEL
FROM DUAL
CONNECT BY LEVEL <=1000000;
COMMIT;

hr@XE> DELETE FROM P WHERE X =1;

1 row deleted.

Elapsed: 00:00:22.80
================================================================================================================================================
SCENARIO 3:

CREATE TABLE P
(
X NUMBER CONSTRAINT P_PK PRIMARY KEY
);

CREATE TABLE C
(
Y NUMBER CONSTRAINT C_FK REFERENCES P(X) ON DELETE CASCADE,
z NUMBER
);

INSERT INTO P(X) VALUES (1);

INSERT INTO C(y,z)
SELECT 1 ,LEVEL
FROM DUAL
CONNECT BY LEVEL <=1000000;
COMMIT;

CREATE INDEX C_IND ON C(Y);

BEGIN
DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>'HR',TABNAME=>'P',ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE);
DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>'HR',TABNAME=>'C',ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE);
DBMS_STATS.GATHER_INDEX_STATS(OWNNAME=>'HR',INDNAME=>'P_PK',ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE);
DBMS_STATS.GATHER_INDEX_STATS(OWNNAME=>'HR',INDNAME=>'C_IND',ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE);
END;
/

hr@XE> delete from p where x = 1;

1 row deleted.

Elapsed: 00:00:46.67

================================================================================================================================================

Insert script for all the Three Scenarios.

INSERT INTO P(X) VALUES (1);
INSERT INTO P(X) VALUES (2);
INSERT INTO P(X) VALUES (3);
INSERT INTO P(X) VALUES (4);
INSERT INTO P(X) VALUES (5);

INSERT INTO C(y,z)
SELECT 1 ,LEVEL
FROM DUAL
CONNECT BY LEVEL <=1000000;
INSERT INTO C(y,z)
SELECT 2 ,LEVEL
FROM DUAL
CONNECT BY LEVEL <=1000000;
INSERT INTO C(y,z)
SELECT 3 ,LEVEL
FROM DUAL
CONNECT BY LEVEL <=1000000;
INSERT INTO C(y,z)
SELECT 4 ,LEVEL
FROM DUAL
CONNECT BY LEVEL <=1000000;
INSERT INTO C(y,z)
SELECT 5 ,LEVEL
FROM DUAL
CONNECT BY LEVEL <=1000000;

COMMIT;
================================================================================================================================================

Questions
1) So, as per your statement, Scenario 3 should complete faster than Scenario 2, right? (Please correct me if I am wrong.)
2) Can you please explain why Scenario 3 takes longer to complete than Scenarios 2 and 1 (is there a mistake in my Scenario 3)? If possible, please explain with a sample block of code.

Thanks,
Rajesh.

Tom Kyte

Followup  

April 13, 2009 - 3:49 pm UTC

did you think about this.....


you have 1,000,000 rows in the child table.
they all belong to the same parent.

You have ONE MILLION CHILD ROWS
You have ONE parent record.


Now, when you delete the ONE parent record, you will delete all ONE MILLION child rows.

Ok, so you are going to delete every single child record (they all have the SAME PARENT, every row is going to go away)....

You do it once with:


hr@XE> DELETE FROM P WHERE X =1;

1 row deleted.

Elapsed: 00:00:22.80


and no index in place. That does

a) delete one record in P
b) deletes one million records in C via a full scan, without any indexes in place.



You then:

hr@XE> delete from p where x = 1;

1 row deleted.

Elapsed: 00:00:46.67


that will

a) delete one record in P
b) delete one MILLION records in C (*probably via a full scan*) AND MAINTAIN AN INDEX




Think about this from 'common sense' perspective. When do you want to use an index to access rows in a table? Typically when you want to access a small set - you have 1,000,000 rows in a table that is not much larger than 1,000,000 rows - you don't need the index to find those rows, we'll full scan.


think about the common usage here. You have a parent table with lots of records. Each parent record has a couple (dozens maybe, hundreds maybe - but typically NOT ONE MILLION child records - think about the real world case please) child records.

Now, if you take lots of parent records TIMES say 10 child records on average, your child table is 10 times the size of the parent. You want to find 10 records out of this big table. Index makes sense in that case.



Your case 3 is slower because it had to delete 1 parent record plus 1,000,000 child records PLUS MAINTAIN THE INDEX on the child table.

Case 2 - just deleted 1 parent record and 1,000,000 child records.
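
To see the case where the index does pay off, shrink the child set per parent. A sketch (timings will vary with hardware and version):

```sql
-- many parents, few children each: the indexed FK lets the cascade
-- find ~10 child rows via an index range scan instead of a full scan
create table p ( x number constraint p_pk primary key );
create table c ( y number constraint c_fk references p(x) on delete cascade,
                 z number );

insert into p(x) select level from dual connect by level <= 100000;
insert into c(y,z)
  select mod(level,100000)+1, level
    from dual connect by level <= 1000000;   -- ~10 children per parent
commit;

create index c_ind on c(y);

delete from p where x = 1;   -- cascades to only ~10 rows in C
```

Without C_IND, that single-parent delete would full scan all 1,000,000 rows of C to find the 10 it must remove.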


Try to close your eyes and envision what has to take place for these operations - visualize the mechanics of what is going on.




Deleting Huge amount of Records in Foreign key.

April 14, 2009 - 2:49 am UTC

Reviewer: Rajeshwaran, Jeyabal

Tom,
Thanks for your clarification regarding the indexing on foreign key.

we have a Requirement in our project that
a) Every day we receive data from a flat file (the file name will be inserted into a File_Master table with file_id as PK)
b) The items in the file will be loaded into a File_Item table (for each file_id there will be about 1 million records, with file_id as FK in the File_Item table)
c) Once the file is completely processed by our application rules, that file_id is marked as Status = "Completed" in the File_Master table.
d) Once this is done, we need to delete all the records from the File_Item table (for that particular file_id).

I know that "delete" produces a lot of redo and will take a long time to execute. (Tom, you usually suggest the following steps for this scenario:)
a) Create <new_table> as select * from <old_table> (only the records needed)
b) Drop your <old_table>
c) Rename your <new_table> to <old_table>; create constraints & indexes.

But since this is a production environment, our DBAs don't want to follow the "drop the table & create a new table with only the needed records" approach.

Questions

1) Can you please help me with how to delete this huge number of records in an efficient manner (so the SQL executes very quickly)?

Thanks,
Rajesh.

Tom Kyte

Followup  

April 14, 2009 - 11:18 am UTC

ouch, this will be painful. Why wouldn't you use partitioning so you can drop/truncate the data easily?


Deleting Huge amount of Records in Foreign key.

April 15, 2009 - 1:02 am UTC

Reviewer: Rajeshwaran, Jeyabal

Tom,
I am currently working in Oracle 10gR2.
So if we are to use the partitioning approach for the scenario below:

a) Every day we receive data from a flat file (the file name will be inserted into a File_Master table with file_id as PK)
b) The items in the file will be loaded into a File_Item table (for each file_id there will be about 1 million records, with file_id as FK in the File_Item table)
c) Once the file is completely processed by our application rules, that file_id is marked as Status = "Completed" in the File_Master table.
d) Once this is done, we need to delete all the records from the File_Item table (for that particular file_id).

Question
1) Only the FILE_ITEM table will be partitioned, one partition per entry in the parent table, but not the parent table FILE_MASTER.
2) FILE_ITEM will be partitioned by LIST or by RANGE on the parent key values, so that I can drop the individual partition to delete the rows for each entry in the parent table.

3) I won't be using HASH partitioning on FILE_ITEM, since I can't drop the individual partition for each entry in the parent table.

Please correct me if I am wrong.
Thanks,
Rajesh.
Tom Kyte

Followup  

April 15, 2009 - 8:55 am UTC

1) probably, yes.

2) probably, yes.

3) correct.
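
The approach in (1) and (2) might look like this. A sketch only; the table, column, and partition names are assumptions based on the description above:

```sql
-- hypothetical DDL for the FILE_ITEM table described above
create table file_item (
  file_id  number not null references file_master (file_id),
  item_no  number,
  payload  varchar2(4000)
)
partition by list (file_id) (
  partition p1 values (1),
  partition p2 values (2)
);

-- add a partition as each new file arrives...
alter table file_item add partition p3 values (3);

-- ...and remove a processed file's rows in one dictionary operation
alter table file_item drop partition p1;
```

The drop is a metadata operation: no per-row redo and undo for a million child rows, which is the whole point of the partitioning suggestion.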

Thanks a Ton !!!

April 15, 2009 - 12:00 pm UTC

Reviewer: Rajeshwaran, Jeyabal

Tom,

Thank you so much for your help. I created a table with a list partition and benchmarked "drop partition" against the "delete" statement.
Here is the result.
DELETE FROM CHILD 
WHERE
 X = 1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.03       0.47          0          7          0           0
Execute      1      4.23     194.20     100455     100798     402101      100000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      4.26     194.67     100455     100805     402101      100000

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 57  

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  DELETE  (cr=100798 pr=100455 pw=0 time=194200213 us)
 100000   PARTITION LIST SINGLE PARTITION: 2 2 (cr=100128 pr=99949 pw=0 time=30299730 us)
 100000    TABLE ACCESS FULL CHILD PARTITION: 2 2 (cr=100128 pr=99949 pw=0 time=30199727 us)
===========================================================

ALTER TABLE CHILD DROP PARTITION P2


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.12          0          3          1           0
Execute      1      0.03       0.17          4         19        854           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.04       0.29          4         22        855           0

Finally, the one with partitions wins!

Foreign Keys in Very Large Oracle 10g DB

July 09, 2009 - 5:21 pm UTC

Reviewer: Dipankar Kushari from USA

Hi Tom,

We have a VLDB of size 5.5 TB, Oracle 10g. We need to know the better way to check referential integrity in this database: using FKs in the RDBMS, or at the application level? Is there a very large overhead to using FKs for RI in this situation? The DB mostly experiences inserts; incremental file loads are happening. The max file record count could be around 40-50 million.

regards,
Dipankar.
Tom Kyte

Followup  

July 14, 2009 - 4:54 pm UTC

rewrite "We have a VLDB of size 5.5 TB" as

we have a modestly sized database of about 5.5tb.

the VLDB bit doesn't apply for at least the last 5 or 6 years - it isn't very large anymore :)


... Using FK at the RDBMS
or at the application level? ...

I cannot believe you

a) have read anything ever written by me
b) and asked that question.


Here is an excerpt from Effective Oracle by Design on this very subject (chapter on efficient schema)

<quote>

Let the Database Enforce Data Integrity

We touched on this subject way back in the "It's a Database, Not a Data Dump" section in Chapter 1. There, I relayed the story of the consultant who wanted to remove all referential integrity from the database and do it in the application. I explained that this was a really bad idea, for the following reasons:

o Yours will not be the last application to want to use this data. If the rules governing the data integrity are hidden away in client applications, they will not be enforced uniformly, and they will be very hard to change.

o It is slower to manage data integrity on the client. It requires additional round-trips to the server, and it must be done on the server as well.

o It takes orders of magnitude more code to manage integrity on the client.


I have a theory as to why developers are sometimes inclined to do this on the client: It has to do with the DBA-versus-developer philosophy instead of the DBA-and-developers-working-together approach. If the integrity of the data is managed in the database, the developers feel they have lost control. If the integrity of the data is managed in the application, the developers feel they have regained that control. This is a shortsighted perspective. Neither the developer nor the DBA own the data. The data is the property of someone else: the end users. It is certainly not in the best interests of the end users to obfuscate the business rules and hide them in the client application.

Why You Want to Keep Referential Integrity in the Database

Data-integrity rules must be placed into the database to ensure that they are consistently implemented and enforced. Application-specific rules, rules that apply only to the data in the context of the application's use of it, may be in the application.

Here are the reasons you want data integrity enforced by the database whenever possible:

o Features such as query rewrite are thwarted. These features are rendered less than useful if you do not tell the database about the relationships between objects, unless you declaratively spell out the rules regarding the data. In the section on QUERY_REWRITE_ENABLED in Chapter 6, you saw an example of how a materialized view could not be used until the database was told about the relationship between the tables, the NOT NULL column, and the primary keys.

o The data integrity will be compromised at some point in time. Virtually every developed system that chooses to enforce foreign key constraints outside the database has orphaned child rows (child rows without parents). If you have such a system, just run this query: select foreign_key_columns from child_table MINUS select primary_key_columns from parent. You may be surprised to find some in your own database! Run a couple of checks on your data, and you might find NULL values where none should be and data that does not conform to your business rules (out-of-range data, for example). These arise from an inconsistent application of the business rules.

o Server-enforced integrity is blindingly fast. Is it slower than not using data integrity? Yes, of course it is. It adds an extra step, and it means that more code is executed in the server. Is it faster than you can code it yourself? Yes, it is. Is it applied consistently regardless of the application? Yes, it is.

o The database provides more information. By keeping the integrity rules in the database, your system is infinitely more self-documenting. A simple query tells you what relates to what and how.
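
Using the classic EMP/DEPT pair, the orphan check mentioned above is simply:

```sql
-- child keys with no matching parent; an empty result means no orphans
select deptno from emp
minus
select deptno from dept;
```

Any rows returned are department numbers that exist in the child table but not in the parent, exactly the corruption a declared foreign key makes impossible.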

As a last reason for using server-enforced integrity, let's consider what happens when you try to do it yourself.

The Problem with Do-It-Yourself Integrity Checks


Client-side enforcement of constraints is problematic at best, if you can do it at all. Here, we'll look at an example of the problems that can arise.

Ask Tom
"I hope someone can help me and, therefore, thank you in advance if you can. I am using Oracle9 and the EMP table. I need to ensure that no department is to have more than 8 employees and fewer than 3, except when a transaction reduces the number of employees to 0."


Interestingly, this user posed this question to many individuals in different forums, and the most frequent response was something along the lines of the following trigger:

SQL> create or replace trigger xxx
  2  after delete or update or insert on emp
  3  declare
  4  begin
  5
  6   for irec in (select deptno, count(*) emps
  7             from emp
  8             group by deptno
  9             having count(*) <3
 10                 or count(*) >8)
 11   loop
 12       RAISE_APPLICATION_ERROR(-20000, 'Department '
 13       ||irec.deptno || ' has '||irec.emps||' employees!');
 14   end loop;
 15  end;
 16  /


In various forms, this trigger was implemented. Various optimizations were suggested, such as only checking the DEPTNO values that were inserted/updated/deleted (because the performance of the above trigger would be questionable if EMP were large), but the answers were pretty much uniform: Use a trigger and count.

The problem is that the trigger doesn't work! My solution to this problem was a bit more involved, but it would work in all situations. He needs to serialize access to employee records at the DEPTNO level during insert, delete, and update operations on the DEPTNO column, use a database check constraint, and then use a trigger to maintain the aggregate value that the constraint checks.

End Ask Tom


When you need to enforce a simple rule such as a department in the EMP table must have three to eight employees or zero employees, a trigger-and-count solution may seem like it should work. The trigger could just look for any DEPTNO in EMP that does not have a count of employees between 3 and 8. And indeed, when we perform a single-user test, such a trigger does appear to work:

ops$tkyte@ORA920> select deptno, count(*),
  2         case when count(*) NOT between 3 and 8 then '<<<===='
  3              else null
  4          end
  5    from emp
  6   group by deptno
  7  /

    DEPTNO   COUNT(*) CASEWHE
---------- ---------- -------
        10          3
        20          5
        30          6


Our initial data is valid; all of the aggregate counts fall within our required range. If we attempt to delete three employees from Department 20, our trigger kicks in and saves us:

SQL> delete from emp where empno in ( 7369, 7566, 7788 );
delete from emp where empno in ( 7369, 7566, 7788 )
*
ERROR at line 1:
ORA-20000: Department 20 has 2 employees!


Now, however, we'll simulate two simultaneous sessions modifying the table. We need two SQL*Plus sessions to do this. I opened the first SQL*Plus session and entered:

ops$tkyte@ORA920> delete from emp where empno in ( 7369, 7566 );
2 rows deleted.


Now, in the second session, I executed:
ops$tkyte@ORA920> delete from emp where empno = 7788;
1 row deleted.
ops$tkyte@ORA920> commit;
Commit complete.

ops$tkyte@ORA920> select deptno, count(*),
  2         case when count(*) NOT between 3 and 8 then '<<<===='
  3              else null
  4          end
  5    from emp
  6   group by deptno
  7  /

    DEPTNO   COUNT(*) CASEWHE
---------- ---------- -------
        10          3
        20          2 <<<====
        30          6



The reason the trigger fails to enforce our business logic as we expect is because this solution does not account for the fact that Oracle provides nonblocking reads, consistent queries, and multiversioning. (For details on the Oracle multiversioning model, see the Oracle9i Release 2 Concepts Guide, Chapter 20, "Data Concurrency and Consistency.") Even though our first transaction was still in progress (it was not committed), the second transaction was not blocked from reading rows that were involved in that transaction. Our first transaction deleted two employees, which left three remaining, so all was well. Our second transaction was not aware of the uncommitted changes made by the first transaction, so the trigger saw four rows left in DEPTNO 20, and that was fine. When we committed, the data integrity was lost.

Note that in many other databases, this logic would appear to work. The reason is that the query against EMP in databases that do not provide nonblocking reads (Microsoft SQL Server and DB2, for example) would have blocked on the changed rows. In our example, the result would have been a self-deadlock, because the second delete would have hung on the read of the EMP table. This is yet another reason why database independence is something that is truly hard to achieve generically: The algorithms must be implemented for the database you are using.

To solve the problem, we can use serialization and server-enforced integrity. There are a couple of ways to do this. Here, we'll use the DEPT table, where we'll maintain an aggregate column EMP_COUNT. It looks like this:

ops$tkyte@ORA920> alter table dept
  2  add emp_count number
  3  constraint must_be_between_3_8
  4  check(emp_count between 3 and 8 OR emp_count = 0)
  5  deferrable initially deferred;
Table altered.

ops$tkyte@ORA920> update dept
  2     set emp_count = (select count(*)
  3                        from emp
  4                       where emp.deptno = dept.deptno )
  5  /
4 rows updated.

ops$tkyte@ORA920> alter table dept
  2  modify emp_count NOT NULL;
Table altered.


Now, we have an aggregate column in the DEPT table that we will use to maintain a count of employees. Additionally, this column has a declarative check constraint on it that verifies the count of employees is either 0 or between 3 and 8, as specified in the rule. Lastly, we made this constraint deferrable initially deferred, meaning it will not be validated until we commit by default. The reason for doing that is to permit multistatement transactions to execute without error. For example, we could update a row setting the DEPTNO from 20 to 30, reducing the number of employees in DEPTNO 20 to 2 but immediately follow that with an update of another row from DEPTNO 30 to 20, increasing the number of employees back to 3. If the constraint were validated after each and every statement, this sort of logic would fail.
Now, we need a trigger to maintain that value:

ops$tkyte@ORA920> create trigger emp_dept_cnt_trigger
  2  after insert or update or delete on emp
  3  for each row
  4  begin
  5      if ( updating and :old.deptno = :new.deptno )
  6      then
  7          return; -- no change
  8      end if;
  9      if ( inserting or updating )
 10      then
 11          update dept set emp_count = emp_count+1
 12           where deptno = :new.deptno;
 13      end if;
 14      if ( updating or deleting )
 15      then
 16          update dept set emp_count = emp_count-1
 17           where deptno = :old.deptno;
 18      end if;
 19  end;
 20  /
Trigger created.


Note: For ease of example, the SQL for the trigger is presented in-line. It would be advantageous to place the SQL into a stored procedure, especially in this case. Not only would we benefit from reducing the soft parsing our application performs, but we would also be able to get away with a single UPDATE statement.
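As a hypothetical sketch of what that stored procedure might look like (the name MAINTAIN_EMP_COUNT and its parameter names are invented for illustration; the trigger would pass :old.deptno and :new.deptno), a single UPDATE can adjust both departments at once:

```sql
create or replace procedure maintain_emp_count
( p_old_deptno in number,   -- NULL when inserting
  p_new_deptno in number )  -- NULL when deleting
as
begin
    -- One statement: +1 for the new department, -1 for the old one.
    update dept
       set emp_count = emp_count
                     + case deptno when p_new_deptno then 1 else 0 end
                     - case deptno when p_old_deptno then 1 else 0 end
     where deptno in ( p_old_deptno, p_new_deptno );
end;
/
```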

If we are updating and did not change the DEPTNO column, we just return; no changes need to be made. Otherwise, we update the DEPT table and increment or decrement the EMP_COUNT column as needed. Since we are using foreign keys, we are assured that each UPDATE statement updates at least one and at most one row in DEPT. (You cannot possibly have a child row in EMP with a DEPTNO that does not exist in DEPT, because the database is enforcing that for us.)

The update of the single row in DEPT also causes serialization at the DEPTNO level; that is, only one transaction can insert into DEPTNO=20 at any point in time. All other sessions trying to insert, delete, or transfer where DEPTNO=20 will block on that update. All of the other DEPTNO operations are available for modification; just not this one. We can still update other EMP rows in DEPTNO=20, as long as we do not change their DEPTNO. This serialization is what allows our constraint to be effective.
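That serialization is easy to observe with two or three sessions; a sketch (the EMPNO and ENAME values are made up, and the column list assumes the standard EMP table):

```sql
-- Session 1: the trigger updates (and therefore locks) the
-- DEPTNO=20 row in DEPT. No commit yet.
insert into emp (empno, ename, deptno) values (9001, 'ADAMS', 20);

-- Session 2: blocks on that DEPT row until session 1 commits
-- or rolls back.
insert into emp (empno, ename, deptno) values (9002, 'BAKER', 20);

-- Session 3: proceeds immediately -- it locks a different DEPT row.
insert into emp (empno, ename, deptno) values (9003, 'CLARK', 30);
```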

As another example, consider a primary key/foreign key relationship. The code you would write in order to use client-side enforcement of constraints would need to do the following:

Action/Reaction:

Insert into a child table or update a child table's foreign key value.
Lock the parent row for that foreign key (most people would simply select and look, without locking). Note that this will serialize at the parent table row level.

Update the parent primary key or delete from parent table.
Lock the entire child table. Then look for child rows before you update or delete that parent row. That is your only recourse. You must lock the entire child table to prevent any rows that reference this primary key from being created. Oracle itself can bypass this child table lock and do it more efficiently, but only because it does it internally.


Neither of these approaches is very good for concurrency. Oracle can do much better on its own. You will notice that many sessions can insert into a child table simultaneously if you use declarative integrity, and many sessions can modify the parent table using declarative integrity. If you do it yourself, and you do it correctly, you will be forced to serialize frequently, greatly decreasing concurrency and, hence, scalability in your database.

Don't believe anyone when they say, "Don't use integrity in the database. It is too slow." They either don't enforce data integrity at all or they have been doing it wrong (and, hence, have a false sense of security). If they did it right, their system is running many times slower than it should be! It is all about benchmarking and doing it right.

Does That Mean That the Client Should Not Do Integrity Checking Itself?

It can be very useful for the client to do integrity checking for a variety of reasons. The important thing to remember, however, is that it must ultimately still be performed in the database. Client-side integrity is not a substitute for server-side integrity; it is a complementary addition. Here are some of the salient reasons for client-side integrity:

o Better end-user experience: Users can discover as they are typing that the data they entered doesn't stand a chance of being entered into the database. They do not need to wait until they select Save.
o Reduced resource usage on the server: By preempting bad data on the client, you do not make the server perform work that will ultimately need to be undone.


But, just as there are pros to most everything, there are cons as well. The major disadvantage with client-side integrity is that you now have two places where the rules are kept, and if they change over time, you must ensure they are changed in both locations. It would be frustrating for a client to not be able to input information the database should accept because the application is working from old rules. It would be equally frustrating to work on a set of data and feel confident that it is valid, only to discover as you choose Save that the database will reject it. Good configuration management and software engineering principles will reduce the chances of having this happen.
</quote>

A check on DOUBLE CHECK

November 06, 2009 - 3:59 am UTC

Reviewer: Parthiban Nagarajan from Coimbatore, India

Hi Tom

Please advise on the following:

create table ttt1 (id1 int, x int, primary key (id1));
create table ttt2 (id1 int, id2 int, y int, primary key (id1, id2),
foreign key (id1) references ttt1);
create table ttt3 (id1 int, id2 int, z int, primary key (id1, id2),
foreign key (id1, id2) references ttt2,
foreign key (id1) references ttt1); --> do we really need this foreign key?

What I guess is that the foreign key reference of ttt3 on ttt1 is extra-unnecessary work ...
Tom Kyte

Followup  

November 11, 2009 - 1:27 pm UTC

no, you do not.

In fact, you do not need nor want table ttt3 - it is incorrect to have it.

Just add z to ttt2 as an optional attribute, you are done (and done right)

right now - you are done wrong.
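In other words, the corrected model (a sketch of what the followup describes) would be:

```sql
create table ttt1 (id1 int, x int, primary key (id1));

create table ttt2 (id1 int, id2 int, y int,
                   z int,   -- z moves here as an optional attribute
                   primary key (id1, id2),
                   foreign key (id1) references ttt1);
```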

Foreign Key, Locks and drop table

May 28, 2010 - 5:01 am UTC

Reviewer: martina from vienna, austria

Hello Tom,

We are experiencing strange problems "every once in a while".

1st:
we exchange "old" partitions with tables on a regular basis. All indices, constraints are copied to the new table.
then we drop the empty old partition.
after that we export the table and then attempt to drop it.
sometimes the database seems to "hang".
I tried to force the problem and saw inactive ias (webform) processes picking up on the table to be dropped (in v$access). I could not really force the problem - the table was exported and dropped.

2nd:
A SQL*Plus script creates an intermediate table with quite an anonymous name. To ensure integrity, a foreign key constraint to the article base table is created. After "using" the table, the program tries to drop it and aborts with ORA-00054: resource busy and acquire with NOWAIT specified.

the 2nd happened last evening. this morning i saw processes in v$access related to the table to be dropped. all of them were inactive webforms which could not attempt a real access to that table.

the only connection we see is the foreign key constraint to the article base table ...

thank you in advance for your hints!

best regards
martina
Tom Kyte

Followup  

May 28, 2010 - 8:28 am UTC

1st: DDL is going to conflict heavily with online operations - you might want to postpone doing things like dropping until a maintenance window, when online activity is absent or minimized.

2nd: parent table operations will affect the child tables - yes. This is by design, it works that way.



This DDL that you do in a live system is very shaky - DDL is heavy duty, big time, to be avoided at all costs - especially if you are creating these anonymous transient tables linked by foreign keys to production (the create should block/lock at times too!)

do you really need to do that - in 99.9999999999% of the cases I look at the answer is "no" (in the other cases there are so many bad things, that this is the least of their worries and we ignore it)

i forgot this:

May 28, 2010 - 5:24 am UTC

Reviewer: martina from vienna, austria

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for HPUX: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production


roles

July 28, 2010 - 8:02 am UTC

Reviewer: A reader

greetings thomas,

and thanks like always.


CREATE ROLE MOHTEST_ROLE;

GRANT references on MOHTEST.BIG_TABLE TO MOHTEST_ROLE;

grant MOHTEST_ROLE to MOH2;

select * from role_tab_privs where role = 'MOHTEST_ROLE';

no references will be found.

oracle version 10.2.0.1
Tom Kyte

Followup  

July 28, 2010 - 9:16 am UTC

are you sure you actually did that without error?

ops$tkyte%ORA10GR2> grant references on t to foo;
grant references on t to foo
*
ERROR at line 1:
ORA-01931: cannot grant REFERENCES to a role

ops$tkyte%ORA10GR2> !oerr ora 1931
01931, 00000, "cannot grant %s to a role"
// *Cause:  UNLIMITED TABLESPACE, REFERENCES, INDEX, SYSDBA or SYSOPER
//          privilege cannot be granted to a role.
// *Action: Grant privilege directly to the user.

above post

July 28, 2010 - 8:09 am UTC

Reviewer: A reader

regarding my above post, i know i cannot grant references to a role, but my question:
Is there is any workaround solution.
I have a lot of tables.
Tom Kyte

Followup  

July 28, 2010 - 9:18 am UTC

If you "knew" you could not do what you say you did - why did you post what you posted????

You have very very very few schemas that would need "references" on a given table and most importantly, you want to use least privileges when constructing your physical schema, for security and maintenance reasons. Direct grants are what you want to use for all schema level stuff like this. It will clearly document who has what privilege (and you should further document WHY they have it)



Above post

January 06, 2011 - 10:18 am UTC

Reviewer: A reader

Hello Thomas,

And thanks like always.

----If you "knew" you could not do what you say you did - why did you post what you posted????

I tried it in 10g and 11g databases and they gave me no errors. I thought it would work in these releases using a role, and that is why I asked. (No errors, but the grant takes no effect.)

---You have very very very few schemas that would need "references" on a given table and most importantly, you want to use least privileges when constructing your physical schema, for security and maintenance reasons. Direct grants are what you want to use for all schema level stuff like this. It will clearly document who has what privilege (and you should further document WHY they have it).

The problem here is that about 56 schemas need REFERENCES privileges on hundreds of tables and columns across 7 schemas. Is there any way to manage REFERENCES privileges other than direct grants?
Tom Kyte

Followup  

January 06, 2011 - 10:37 am UTC

... is there any way to manage references
privileges other than direct privileges? ..

no, there is not. As stated -

ops$tkyte%ORA10GR2> !oerr ora 1931
01931, 00000, "cannot grant %s to a role"
// *Cause:  UNLIMITED TABLESPACE, REFERENCES, INDEX, SYSDBA or SYSOPER
//          privilege cannot be granted to a role.
// *Action: Grant privilege directly to the user.



short of granting to public - which I would not be a fan of
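Since the grants must be direct, one common approach is to generate them from the data dictionary; a sketch (the owner and grantee names are taken from the example above -- adapt to your schemas), producing statements to be spooled and run as the table owner:

```sql
select 'grant references on mohtest.' || table_name || ' to moh2;'
  from all_tables
 where owner = 'MOHTEST'
 order by table_name;
```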

Foreign Key on tables of different schema

February 14, 2011 - 7:58 am UTC

Reviewer: A reader

Hi Tom,

Can we make a foreign key on two tables on different schema. Lets say I have Table A in Schema A as:

Create table SchemaA.TableA(eno number, dept number);
INSERT INTO SchemaA.TableA VALUES(1, 10);
INSERT INTO SchemaA.TableA VALUES(2, 10);

Also, I have Table B in Schema B as:

Create table SchemaB.TableB(dept number primary key, dept_name varchar2(10));
INSERT INTO SchemaB.TableB VALUES(10, 'FIN');
INSERT INTO SchemaB.TableB VALUES(20, 'HR');

Now the Dept of SchemaB.TableB is the foreign key for SchemaA.TableA

How can we make foreign key constraints between two schemas?
1. Through DB Links
2. Through Triggers

It will be great if you suggest the Pros and Cons of the idea.




Tom Kyte

Followup  

February 14, 2011 - 8:06 am UTC

... How can we make foreign key constraints between two schemas?
...

3) declaratively

options 1 and 2 are not options. You would never use a dblink for a foreign key constraint - that would make the two databases totally dependent on each other - if either were unavailable BOTH ARE. Therefore, you would combine them so that you can make everything more available. It would be beyond "not smart" to even consider a foreign key constraint over a dblink.

You would never ever code in a trigger that which is simply done via a declarative constraint. If I had my way, you would never ever code in a trigger, since people use them so often for inappropriate things (like trying to enforce integrity).

ops$tkyte%ORA11GR2> create user a identified by a default tablespace users quota unlimited on users;

User created.

ops$tkyte%ORA11GR2> grant create session, create table to a;

Grant succeeded.

ops$tkyte%ORA11GR2> create user b identified by b default tablespace users quota unlimited on users;

User created.

ops$tkyte%ORA11GR2> grant create session, create table to b;

Grant succeeded.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> connect a/a
Connected.
a%ORA11GR2> create table t ( x int primary key );

Table created.

a%ORA11GR2> insert into t values ( 1 );

1 row created.

a%ORA11GR2> commit;

Commit complete.
<b>
a%ORA11GR2> grant references on t to b;

Grant succeeded.
</b>
a%ORA11GR2> 
a%ORA11GR2> connect b/b
Connected.
b%ORA11GR2> create table t ( x references a.t );

Table created.

b%ORA11GR2> insert into t values ( 1 );

1 row created.

b%ORA11GR2> insert into t values ( 2 );
insert into t values ( 2 )
*
ERROR at line 1:
ORA-02291: integrity constraint (B.SYS_C0014437) violated - parent key not
found


What if on other database

February 14, 2011 - 8:44 am UTC

Reviewer: A reader

Hi Tom,

How can we do the same if we have the two tables on different database servers. Lets say I have serverA and server B. Then how can we accomplish the same.


Tom Kyte

Followup  

February 14, 2011 - 9:08 am UTC

I shall reiterate:


You would never use a dblink for a foreign key constraint - that would make the two databases totally dependent on each other - if either were unavailable BOTH ARE. Therefore, you would combine them so that you can make everything more available. It would be beyond "not smart" to even consider a foreign key constraint over a dblink.


You would have a single database of course, it would be "not smart" (replace "not smart" with any other term indicating "not smart" if you like) to even consider doing this.

One more point

February 14, 2011 - 9:21 am UTC

Reviewer: A reader

Let's say I have the transactional data for customers of two different regions. One server holds the US transactional data and the second holds the UK transactions.

Now, if the business rule says: check whether a customer primarily exists on the US server; only then can he do transactions in the UK.

How can we accomplish that? Someone suggested that the data can be fetched in Materialized view with fast refresh and then we can make a FK constraint over that table.

I am not very sure of the concept.

Is it really possible that we can maintain a foreign key kind relationship between two database servers? From any point of view, even if it was not a best practice.
Tom Kyte

Followup  

February 14, 2011 - 9:28 am UTC

You should have a single server then - since if the US server is down, the UK is down as well. And vice versa, one would assume. Therefore - you have LESS availability with two databases than with one (think it through...)

So, obviously - in real life you CANNOT do this in real time. You could use asynchronous replication (via a materialized view for example, they are fully documented). But you would have to bear in mind - they are ASYNCHRONOUS - not real time.


This goes beyond a best practice conversation. There are in life things so "not smart" that they do not deserve time. Having a foreign key between database servers falls well into that category.

Thanks for the explanation

February 14, 2011 - 9:36 am UTC

Reviewer: A reader


Foreign keys on 3 tables.

February 18, 2011 - 3:46 am UTC

Reviewer: Rajeshwaran, Jeyabal

rajesh@10GR2>
rajesh@10GR2> create table t1(x number primary key);

Table created.

Elapsed: 00:00:00.14
rajesh@10GR2> create table t2(x number primary key);

Table created.

Elapsed: 00:00:00.06
rajesh@10GR2> create table t3(x number);

Table created.

Elapsed: 00:00:00.04
rajesh@10GR2>
rajesh@10GR2> alter table t3 add constraint t3_t1 foreign key(x) references t1;

Table altered.

Elapsed: 00:00:00.07
rajesh@10GR2> alter table t3 add constraint t3_t2 foreign key(x) references t2;

Table altered.

Elapsed: 00:00:00.07
rajesh@10GR2>
rajesh@10GR2> insert into t1 values(1);

1 row created.

Elapsed: 00:00:00.04
rajesh@10GR2> insert into t2 values(2);

1 row created.

Elapsed: 00:00:00.03
rajesh@10GR2> commit;

Commit complete.

Elapsed: 00:00:00.04
rajesh@10GR2>
rajesh@10GR2> insert into t3 values(1);
insert into t3 values(1)
*
ERROR at line 1:
ORA-02291: integrity constraint (RAJESH.T3_T2) violated - parent key not found


Elapsed: 00:00:00.04
rajesh@10GR2> insert into t3 values(2);
insert into t3 values(2)
*
ERROR at line 1:
ORA-02291: integrity constraint (RAJESH.T3_T1) violated - parent key not found


Elapsed: 00:00:00.03
rajesh@10GR2>


Tom:

How can I insert a record into table T3, given this kind of data model?

Tom Kyte

Followup  

February 18, 2011 - 9:01 am UTC

given this data model - by definition - you cannot.

You yourself put in place a rule that says "t3.x must exist in BOTH t1 and t2"

Don't know what else you could expect from this??

What is your goal - perhaps we can tell you how to properly model it.

Foreign keys on 3 tables

February 18, 2011 - 6:41 am UTC

Reviewer: Rajeshwaran, Jeyabal

Tom:

Sorry, I failed to mention the requirement: I need to insert records into T3 when the value of column X exists in either table T1 or T2.


Tom Kyte

Followup  

February 18, 2011 - 9:09 am UTC

then you have it backwards.

T3 is the parent table and T1 and T2 will have foreign keys to T3.
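A sketch of that inverted model, using the same table names: every X placed in T1 or T2 must already exist in T3, while T3 itself can be populated freely.

```sql
create table t3 (x number primary key);

create table t1 (x number primary key,
                 constraint t1_fk_t3 foreign key (x) references t3);

create table t2 (x number primary key,
                 constraint t2_fk_t3 foreign key (x) references t3);
```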

Data loading with Foreign key constraints !

February 19, 2011 - 9:33 am UTC

Reviewer: Rajeshwaran, Jeyabal

Tom:

Got into a conversation with co-workers and they told me that loading data with foreign key constraints enabled will slow down the load.

So I did a benchmark and found this.

SQL> create table sales
  2  nologging
  3  as
  4  select * from sales_data
  5  where 1 = 0;

Table created.

Elapsed: 00:00:00.25
SQL>
SQL> insert into sales
  2  select * from sales_data;

11863049 rows created.

Elapsed: 00:03:35.53
SQL>
SQL> commit;

Commit complete.

Elapsed: 00:00:00.01
SQL> alter table sales add constraint sales_fk foreign key(cust_id) references customers;

Table altered.

Elapsed: 00:04:19.73
SQL>

Loading the data without constraints took 3 mins, 35 secs. Enabling the constraint took 4 mins, 20 secs.

SQL> truncate table sales;

Table truncated.

Elapsed: 00:00:57.67
SQL>
SQL> insert into sales
  2  select * from sales_data;

11863049 rows created.

Elapsed: 00:03:32.98
SQL>
SQL> commit;

Commit complete.

Elapsed: 00:00:00.03
SQL>

Now, loading the data with the constraint enabled took 3 mins, 32 secs.

So from these benchmark results, can I tell them that having foreign key constraints enabled during data loading *WILL NOT* slow down the process?
Tom Kyte

Followup  

February 20, 2011 - 12:32 pm UTC

IT DEPENDS...

that is my answer to everything - it depends.

It depends on the data volumes (that of the parent table, we don't know what that is)

I would NOT use sqlplus to time something like this, use sql_trace.


the fact is - loading data without constraints is faster than loading data with constraints (that should be obvious)

however, loading data without constraints PLUS enabling the constraints is OFTEN (not always) longer than loading data with constraints (especially when you are adding a little to a lot)
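The two-step alternative being compared is the disable/load/re-enable pattern; a sketch using the tables from the benchmark above:

```sql
alter table sales modify constraint sales_fk disable;

insert /*+ append */ into sales select * from sales_data;
commit;

-- Re-enabling must verify every existing row against CUSTOMERS;
-- this validation step is what often makes the two-step path
-- slower overall than loading with the constraint in place.
alter table sales modify constraint sales_fk enable;
```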

Foreign Key and CBO

December 13, 2011 - 8:02 am UTC

Reviewer: Parthiban Nagarajan from Coimbatore, India

Hi Tom

create table zodiac_2011 (
sign varchar2(15),
from_dt date,
to_dt date
);
create table births_2011 (
baby_id number primary key,
baby_name varchar2(50),
baby_dob date,
sign varchar2(15)
);

- I can create a Foreign key on births_2011(sign) referencing to zodiac_2011(sign);
- But, I want to create a foreign key on births_2011(sign, baby_dob) referencing to zodiac_2011(sign, from_dt, to_dt);

Just want to have a tight integrity. (1) Could this be possible? (2) Especially, without using Virtual Columns?

Now, let us consider that we have a such setup.
(3) Can the CBO use this FK relation and execute the following query with cardinality estimate to be 0 ?

select * from births_2011
where sign = 'LEO' and baby_dob between to_date('MAR2011', 'MONYYYY')
and to_date('MAY2011', 'MONYYYY');

( Adding "_2011" in the table names helped me to describe this scenario ;)

Thanks and Regards
Tom Kyte

Followup  

December 13, 2011 - 9:56 am UTC

(1) no
(2) even then, no
(3) not relevant, as the answer with your model is "no". However, a SQL profile or dynamic sampling would be useful with/without constraints.



you would have to change your model to use a declarative foreign key.


create table zodiac_2011
( sign varchar2(15), dt (date or maybe a number MMDD), primary key(sign,dt) );


would be what you need to use. then you could use a virtual column from your births table to the zodiac table - converting the dob into MMDD to do the check.
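A sketch of that reworked model (table and column names are illustrative): ZODIAC holds one row per sign per day of its date range, and BIRTHS derives an MMDD virtual column from the date of birth so a declarative foreign key can do the check.

```sql
create table zodiac
( sign  varchar2(15),
  dt    number(4),          -- MMDD, one row per day of the sign's range
  primary key (sign, dt) );

create table births
( baby_id   number primary key,
  baby_name varchar2(50),
  baby_dob  date,
  sign      varchar2(15),
  -- virtual column converting the date of birth into MMDD
  dob_mmdd  number(4)
            generated always as ( to_number(to_char(baby_dob, 'MMDD')) ) virtual,
  constraint births_zodiac_fk
            foreign key (sign, dob_mmdd) references zodiac );
```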

unindexed FK from Expert Oracle DB Architecture 11g

November 26, 2013 - 10:39 am UTC

Reviewer: Rajeshwaran, Jeyabal

Tom,

The below quote is from Expert Oracle Database Architecture. Does the DML in the below context refer to INSERT statements too?

Say I have a child table that gets inserted into (by INSERT INTO ... VALUES statements). Will an unindexed FK slow down these inserts? The size of the child table is around 120GB.

<quote>
So, when do you not need to index a foreign key? In general, when the following conditions are met:

o You do not delete from the parent table

o You do not update the parent table’s unique/primary key value, either purposely or by accident (via a tool).

o You do not join from the parent table to the child table, or more generally, the foreign key columns do not support an important access path to the child table and you do not use them in predicates to select data from this table (such as DEPT to EMP)

If you satisfy all three criteria, feel free to skip the index—it is not needed and will slow down DML on the child table. If you do any of the three, be aware of the consequences
</quote>
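When the criteria are not met, a simplified dictionary query can flag candidates (a sketch using a single-column heuristic: it reports foreign keys whose first column is not the leading column of any index on the child table; multi-column cases need a fuller comparison):

```sql
select c.table_name, c.constraint_name
  from user_constraints c
 where c.constraint_type = 'R'
   and not exists
       ( select null
           from user_ind_columns ic
          where ic.table_name      = c.table_name
            and ic.column_position = 1
            and ic.column_name     = ( select cc.column_name
                                         from user_cons_columns cc
                                        where cc.constraint_name = c.constraint_name
                                          and cc.position = 1 ) );
```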

data loads with FK in place

June 26, 2020 - 4:37 am UTC

Reviewer: Rajeshwaran, Jeyabal

Team:

Below test case has two tables, T1 and T2, with T1 having a foreign key (FK) and T2 not having one.

Could you please help us to understand why the block changes in "current mode" are higher during the load with the FK in place?

drop table cities purge;
create table cities
as
select username as city
from all_users
where rownum <=50;

alter table cities add constraint cities_pk
primary key( city );

drop table t1 purge;
drop table t2 purge;

create table t1( x char(80), city references cities );
create table t2( x char(80), city varchar2(128) );


exec dbms_monitor.session_trace_enable(waits=>true,binds=>true);
declare
 type array is table of cities.city%type 
   index by pls_integer ;
 l_data array ;
begin
 select city bulk collect into l_data
 from cities;
 
 for i in 1..1000
 loop
  for i in 1..l_data.count 
  loop
   insert into t1 values ('x', l_data(i) );
  end loop;
 end loop;
end;
/ 
exit;

conn /

exec dbms_monitor.session_trace_enable(waits=>true,binds=>true);
declare
 type array is table of cities.city%type 
   index by pls_integer ;
 l_data array ;
begin
 select city bulk collect into l_data
 from cities;
 
 for i in 1..1000
 loop
  for i in 1..l_data.count 
  loop
   insert into t2 values ('x', l_data(i) );
  end loop;
 end loop;
end;
/ 
exit;

Tkprof shows this:


SQL ID: 3kd79g7wub3f8 Plan Hash: 0

INSERT INTO T1 
VALUES
 ('x', :B1 )


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  50000     18.54      20.70          1       1088     106458       50000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    50001     18.54      20.70          1       1088     106458       50000

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 109     (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  LOAD TABLE CONVENTIONAL  T1 (cr=5 pr=0 pw=0 time=280 us starts=1)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                         1        0.01          0.01
  latch: shared pool                              2        0.00          0.00



********************************************************************************

SQL ID: gvhqp0gv596q8 Plan Hash: 0

INSERT INTO T2 
VALUES
 ('x', :B1 )


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  50000     20.59      21.13          1       1030      56470       50000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    50001     20.59      21.13          1       1030      56470       50000

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 109     (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  LOAD TABLE CONVENTIONAL  T2 (cr=5 pr=0 pw=0 time=557 us starts=1)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  Disk file operations I/O                        3        0.05          0.09
  db file sequential read                         1        0.01          0.01



********************************************************************************

Chris Saxon

Followup  

June 26, 2020 - 3:33 pm UTC

Remember that when you have an FK, the database needs to check the corresponding PK to ensure the value you're inserting exists. That recursive check against the parent's primary key index is what accounts for the extra current-mode block gets in the T1 load.