How to enforce at least one detail record

Question and Answer
Connor McDonald

Thanks for the question, Tommy.

Asked: June 06, 2005 - 5:06 pm UTC

Last updated: July 25, 2017 - 7:27 am UTC

Version: 9.2.0

Viewed 1000+ times

You Asked

I have a master table with a MASTER_TYPE column. Depending on the value of the MASTER_TYPE column, I want to enforce that at least one detail record exists.

CREATE TABLE master (
master_id NUMBER NOT NULL PRIMARY KEY,
master_type VARCHAR2(4) NOT NULL
CHECK (master_type IN ('one','zero'))
);

CREATE TABLE detail (
detail_id NUMBER NOT NULL PRIMARY KEY,
master_id NUMBER NOT NULL REFERENCES master(master_id)
);

For instance if MASTER_TYPE equals 'one' there must be at least one detail record, but if the value is 'zero' there can be zero or more detail records.

In other words:

INSERT INTO master VALUES (0, 'zero');
COMMIT;

should be legal.

INSERT INTO master VALUES (1, 'one');
COMMIT;

should not be legal. But

INSERT INTO master VALUES (1, 'one');
INSERT INTO detail VALUES (100, 1);
COMMIT;

should.

Please advise.

PS: Why don't you use capital letters for reserved words like the Oracle manual does?

and we said...

This is one approach:

ops$tkyte@ORA9IR2> CREATE TABLE master (
2 master_id NUMBER NOT NULL PRIMARY KEY,
3 master_type VARCHAR2(4) NOT NULL
4 CHECK (master_type IN ('one','zero')),
5 child_cnt number,
6 constraint check_child check( nvl(child_cnt,0) > 0 or master_type = 'zero' ) deferrable initially deferred
7 );

Table created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> CREATE TABLE detail (
2 detail_id NUMBER NOT NULL PRIMARY KEY,
3 master_id NUMBER NOT NULL REFERENCES master(master_id)
4 );

Table created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace trigger detail_trigger
2 after insert or delete or update of master_id on detail
3 for each row
4 begin
5 if (updating or inserting)
6 then
7 update master set child_cnt = nvl(child_cnt,0)+1 where master_id = :new.master_id;
8 end if;
9 if (updating or deleting)
10 then
11 update master set child_cnt = nvl(child_cnt,0)-1 where master_id = :old.master_id;
12 end if;
13 end;
14 /

Trigger created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> INSERT INTO master (master_id,master_type) VALUES (0, 'zero');

1 row created.

ops$tkyte@ORA9IR2> COMMIT;

Commit complete.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> INSERT INTO master (master_id,master_type) VALUES (1, 'one');

1 row created.

ops$tkyte@ORA9IR2> COMMIT;
COMMIT
*
ERROR at line 1:
ORA-02091: transaction rolled back
ORA-02290: check constraint (OPS$TKYTE.CHECK_CHILD) violated


ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> INSERT INTO master (master_id,master_type) VALUES (1, 'one');

1 row created.

ops$tkyte@ORA9IR2> INSERT INTO detail VALUES (100, 1);

1 row created.

ops$tkyte@ORA9IR2> COMMIT;

Commit complete.


as for why I don't use upper case... Well, I always found that to be a coding convention I don't like. (opinion coming...)

I don't find it easier to read
I don't find it more productive to type
I don't find it adds any value whatsoever
I don't like the way it looks :)


I grew up as a C programmer, for a long time. C is case sensitive; it would be impossible in that language to code in uppercase. I find the convention counter to productivity (it is harder to type in upper case), and the resulting code just looks funny to me.

(opinion done)

Rating

  (37 ratings)


Comments

'NOT NULL' with primary and foreign keys

A reader, June 07, 2005 - 4:16 am UTC

Tom,

1. What's the difference between

create table cntry
(
cntry_id number primary key,
cntry_nm varchar2(15)
);

and

create table cntry
(
cntry_id number not null primary key,
cntry_nm varchar2(15)
);

Is it worth specifying 'NOT NULL' on a 'PRIMARY KEY' column?


2. When working with foreign keys, what's the effect of specifying 'NOT NULL' and vice versa?

create table cty
(
cty_id number primary key,
cty_nm varchar2(25),
cntry_id references cntry(cntry_id) not null
);

and

create table cty
(
cty_id number primary key,
cty_nm varchar2(25),
cntry_id references cntry(cntry_id)
);


Tom Kyte
June 07, 2005 - 8:19 am UTC

1) it is redundant and not necessary. A primary key is implicitly a NOT NULL plus UNIQUE constraint.

2) that is a totally different story: the not null makes the relationship MANDATORY, the null makes it OPTIONAL. Only you can tell which is correct for your data.

Is cntry_id NULL or NOT NULL according to your data rules? That is the only question that needs to be answered.
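A quick sketch of that difference in behavior, using the cntry/cty tables from the comment above (the exact text of the ORA-01400 message will name your schema):

```sql
-- With NOT NULL on the foreign key the relationship is mandatory:
-- a city row cannot exist without a country.
insert into cty (cty_id, cty_nm, cntry_id) values (1, 'Oslo', null);
-- ORA-01400: cannot insert NULL into ("...","CTY"."CNTRY_ID")

-- Without the NOT NULL, the relationship is optional: the same insert
-- succeeds, and the row simply has no parent country until one is assigned.
```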

Seems pretty easy to get around this constraint

Alexander, June 07, 2005 - 11:25 am UTC

I could either
INSERT INTO master (master_id,master_type, child_cnt) VALUES (1, 'one', 1);
COMMIT;
or just disable the trigger, insert some parent rows without children, and re-enable it.

Does it make sense?


Tom Kyte
June 07, 2005 - 1:02 pm UTC

and I can just disable a constraint, insert bad data, and enable novalidate it too. So? What is your point? I can put duplicates in a table that violate a primary key. I can put rows into a child table that do not point to a parent. I can put rows into tables that violate check constraints. You can too.

you can lock down that child_cnt column: create the real table MASTER_TABLE (....), create a view MASTER as select <not the child cnt column>, and grant on the VIEW, not the table.

I can get around anything, but that is not the point. The point is that you need privileges to get around things; remove the privileges and you remove the ability to get around things.
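A sketch of that "lock down child_cnt" idea, assuming the application connects as a separate account (the name app_user is illustrative):

```sql
-- The real table, owned by the schema owner, carries the maintained counter:
create table master_table (
    master_id   number not null primary key,
    master_type varchar2(4) not null check (master_type in ('one','zero')),
    child_cnt   number,
    constraint check_child
        check ( nvl(child_cnt,0) > 0 or master_type = 'zero' )
        deferrable initially deferred
);

-- Expose everything except child_cnt through a view:
create view master as
select master_id, master_type from master_table;

-- The application only ever touches the view, so it cannot fake the counter:
grant select, insert, update, delete on master to app_user;
```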

Very interesting

Alexander, June 07, 2005 - 2:11 pm UTC

Tom,

please give us an example of entering rows violating a PK constraint

Tom Kyte
June 07, 2005 - 2:23 pm UTC

ops$tkyte@ORA9IR2> create table t       (x int constraint t_pk primary key);
 
Table created.
 
ops$tkyte@ORA9IR2> insert into t values ( 1 );
 
1 row created.
 
ops$tkyte@ORA9IR2> alter table t disable constraint t_pk;
 
Table altered.
 
ops$tkyte@ORA9IR2> insert into t values ( 1 );
 
1 row created.
 
ops$tkyte@ORA9IR2> create index t_pk on t(x);
 
Index created.
 
ops$tkyte@ORA9IR2> alter table t enable novalidate constraint t_pk ;
 
Table altered.
 
ops$tkyte@ORA9IR2> select * from t;
 
         X
----------
         1
         1
 
ops$tkyte@ORA9IR2> insert into t values ( 1 );
insert into t values ( 1 )
*
ERROR at line 1:
ORA-00001: unique constraint (OPS$TKYTE.T_PK) violated
 
 

why there is a trigger on one table only?

Mikito Harakiri, June 07, 2005 - 4:55 pm UTC

A referential integrity constraint implemented via triggers should have 2 triggers: one on the master table and the other on the detail.

Complex constraints can also be implemented via materialized views.

Tom Kyte
June 07, 2005 - 6:04 pm UTC

because it is not a referential integrity constraint. For that we have

master_id NUMBER NOT NULL REFERENCES master(master_id)

this trigger is doing ONE THING -- checking that when we commit, a child record was added for type <> zero.

Referential integrity takes ZERO triggers always. Because that would be a constraint and any other implementation is flawed. (and unless it contains a lock table -- fatally flawed!)

what if

Mikito Harakiri, June 07, 2005 - 6:14 pm UTC

What if one updates flag in master table? What constraint/trigger prevents it?

Tom Kyte
June 07, 2005 - 6:59 pm UTC

<repeated from right above>
you can lock down that child_cnt column. (create table master_table (....)
create view MASTER as select <not the child cnt column>) Grant on the VIEW not
the table.
</repeat>

I assume by "flag" you really meant "child_cnt"



Oops

Mikito Harakiri, June 07, 2005 - 7:19 pm UTC

master_type, not flag

Tom Kyte
June 07, 2005 - 7:28 pm UTC

try it, see what happens.....

constraint check_child check( nvl(child_cnt,0) > 0 or master_type = 'zero' ) deferrable initially deferred

on commit that'll be verified....
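To see what Tom is hinting at, one can try the update against the tables created in the answer above (a sketch; master_id 0 was inserted as 'zero' with no detail rows, so child_cnt is null):

```sql
update master set master_type = 'one' where master_id = 0;
-- 1 row updated; the deferred constraint is not checked yet...

commit;
-- ...but the commit fails:
-- ORA-02091: transaction rolled back
-- ORA-02290: check constraint (OPS$TKYTE.CHECK_CHILD) violated
```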

Suggestion

Mikito Harakiri, June 07, 2005 - 8:11 pm UTC

Aha -- sloppy reading on my part. Then, instead of the two fields

master_type, and
child_cnt

why don't we introduce a single one

min_number_of_children

Tom Kyte
June 07, 2005 - 8:34 pm UTC

Mikito --

show us how that would WORK.

Ok? give us implementation, not hypothesis.

(answer: because a single field won't work)

Slowly catching on

Mikito Harakiri, June 07, 2005 - 8:24 pm UTC

Ok, you introduced a redundant count, and almost succeeded in maintaining integrity via a combination of constraints and a trigger. (Almost, because I don't buy that privilege thingy as a fully legitimate method of constraint enforcement.) Wouldn't it be simpler to introduce the redundant data in the form of a materialized view, and declare the cardinality constraint on the MV?

Tom Kyte
June 07, 2005 - 8:39 pm UTC

umm, ok, I'll disable your constraints (because you didn't lock anything down with security), then I'll violate them till Sunday, then I'll enable them -- but leave the bad stuff behind.

"so, what is your point"

Please, demonstrate the better solution for us. Please.

(remember, you need an outer join, so, let's see that MV. make it so it performs with 1,000,000 rows in master and 10x as many in the detail)....


If you don't believe it takes a combination of security+logic to enforce integrity under any circumstance, well, I've got this bridge I'm looking to sell, see - are you interested? It is a very nice historic bridge, near New York, lots of interested parties -- better act soon :)


(and explain how the mv would be "less redundant" or "more redundant" too. And let us know whether, if you don't secure your system, it would be 100% bullet proof. thanks!)


Almost got it

Mikito Harakiri, June 07, 2005 - 9:43 pm UTC

create table master
as select level id, mod(level,3) min_no_of_children from dual
connect by level < 100;

create table detail (
parent_id number
);

CREATE MATERIALIZED VIEW LOG ON detail
WITH ROWID INCLUDING NEW VALUES;

ALTER MATERIALIZED VIEW LOG ON detail ADD (parent_id);

CREATE MATERIALIZED VIEW detail_mv
REFRESH FAST ON COMMIT AS
select d.parent_id pid, count(*) cnt
from detail d
group by d.parent_id
;

CREATE MATERIALIZED VIEW LOG ON detail_mv
WITH ROWID INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW master_detail_mv
REFRESH FAST ON COMMIT AS
select 'Constraint Violated' cv
from detail_mv mv, master m
where mv.pid = m.id
and mv.cnt <= m.min_no_of_children;

ALTER TABLE master_detail_mv
ADD CONSTRAINT ck_mv CHECK(cv is null);

The last MV is not fast refreshable, however...

Tom Kyte
June 08, 2005 - 7:28 am UTC

Indeed, so keep trying....

and please -- let us know how this is more "secure", "less easily bypassed", that is a requirement as well.

(and how is this "less redundant"?)


Finally

Mikito Harakiri, June 07, 2005 - 9:50 pm UTC

CREATE MATERIALIZED VIEW LOG ON master
WITH ROWID INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW master_detail_mv
REFRESH FAST ON COMMIT AS
select 'Constraint Violated' cv, mv.rowid mvid, m.rowid mid
from detail_mv mv, master m
where mv.pid = m.id
--and mv.cnt <= m.min_no_of_children
;

ALTER TABLE master_detail_mv
ADD CONSTRAINT ck_mv CHECK(cv is null);

So everything is fast refreshable. The detail_mv contains slightly more redundant info than your count column. The master_detail_mv is normally empty.

Why shouldn't this method scale up to 1M records master/10M detail table?

Tom Kyte
June 08, 2005 - 7:29 am UTC

because it doesn't work, did you TEST it?

when detail has zero rows, how many rows will this table have?

It doesn't work (outer join creeps in ;)

Alberto Dell'Era, June 08, 2005 - 6:00 am UTC

Mikito, I've tried your approach, removing the comment on "master_detail_mv" that was probably a drag&drop mistake:

CREATE MATERIALIZED VIEW master_detail_mv
REFRESH FAST ON COMMIT AS
select 'Constraint Violated' cv, mv.rowid mvid, m.rowid mid
from detail_mv mv, master m
where mv.pid = m.id
and mv.cnt <= m.min_no_of_children;

But unfortunately it doesn't work:

dellera2@ORACLE9I> insert into master (id, min_no_of_children) values (1000, 1);

1 row created.

dellera2@ORACLE9I> commit;

Commit complete.

That's because, when no details are inserted, detail_mv is empty, so no record gets inserted into master_detail_mv regardless of the "configurable check" "mv.cnt <= m.min_no_of_children". That's the "outer join requirement" that Tom was hinting at, I believe ;)
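The outer join being alluded to would have to look something like this (a sketch; it preserves every master row even when it has no details, which is exactly the shape Tom points out is not fast refreshable on commit):

```sql
-- Every master row survives the join, so a parent with zero children
-- shows up with cnt = 0 and can be checked against min_no_of_children:
select m.id, m.min_no_of_children, count(d.parent_id) cnt
from master m, detail d
where m.id = d.parent_id (+)
group by m.id, m.min_no_of_children;
```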

summarize is 2 argument operator

Mikito Harakiri, June 08, 2005 - 1:10 pm UTC

OK, summarize is a 2-argument operator. It should be

select id,
(select count(*) from detail d
where d.parent_id = m.id)
from master m

which is unfortunately not fast refreshable.

Outer join is a kludge that, together with the counting-NULLs "feature", happens to give the same result. It's incrementally refreshable, however, so I buy it ;-)

As far as trigger solution is concerned, would the trigger fire properly if somebody merges data?

Tom Kyte
June 08, 2005 - 1:21 pm UTC

no, the outer join with the aggregate would not be fast refreshable - please, demonstrate (and test) your proposed implementation.

the scalar subquery won't fly in a MV.

outer joins with aggregates are not fast refreshable when the inner table is modified (the details).

this requires an outer join with an aggregate that supports DML on the inner table.

or

a scalar subquery


see
https://www.oracle.com/technetwork/issue-archive/2005/05-mar/o25asktom-089621.html

merge fires the insert/update/delete triggers as needed, yes.



hence - you cannot get there from here using a MV.

I'm open to superior solutions to the trigger, I'd love to have another one. But -- and this is important - you need to provide it not just say "i don't like this one just because, so do it better", one that works (was tested) and scales (rules out complete refreshes).

And if you can make it non-subvertable in all cases
And have no redundant data

I'd love to see it.
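On the MERGE question above: a sketch of a merge into detail that, per Tom's reply, would fire the row trigger's branches as needed (new_details is a hypothetical staging table with the same shape as detail):

```sql
merge into detail d
using new_details n
on (d.detail_id = n.detail_id)
when matched then
    update set d.master_id = n.master_id   -- fires the UPDATE OF master_id branch
when not matched then
    insert (detail_id, master_id)
    values (n.detail_id, n.master_id);     -- fires the INSERT branch
```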

Outer join with aggregate

Mikito Harakiri, June 08, 2005 - 2:47 pm UTC

Making an outer join with an aggregate fast refreshable is easy: make an MV with the OJ first, then make a fast refreshable aggregate MV on top of it. I discovered, however, that Oracle can't really make a fast refreshable MV with a left outer join! Indeed,

CREATE MATERIALIZED VIEW LOG ON dept
WITH ROWID INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW LOG ON emp
WITH ROWID INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW dept_emp_oj_mv
REFRESH FAST ON COMMIT as
select dept.rowid drid, dept.deptno d,
emp.rowid erid, emp.deptno e
from dept, emp
where dept.deptno=emp.deptno(+);

doesn't work! The right outer join works

CREATE MATERIALIZED VIEW dept_emp_oj_mv
REFRESH FAST ON COMMIT as
select dept.rowid drid, dept.deptno d,
emp.rowid erid, emp.deptno e
from dept, emp
where dept.deptno(+)=emp.deptno;

but is hardly useful for anything.

Tom Kyte
June 08, 2005 - 4:57 pm UTC

soooo, did you test it, is it scalable, is it "less redundant", does it work (and, just as important, is it *easier*)?

The rules of outer joins are documented; which table needs a primary key is documented.


this is not a difference between a "Left" and a "Right" at all (not even close). You are not even using the left/right syntax here.


Solution

J L Bravo, June 08, 2005 - 3:11 pm UTC

Try This,

SQL>CREATE TABLE master (
  2      master_id   NUMBER    NOT NULL PRIMARY KEY,
  3      master_type VARCHAR2(4) NOT NULL
  4                  CHECK (master_type IN ('one','zero'))
  5  );

Table created.

SQL>
SQL>CREATE TABLE detail (
  2      detail_id NUMBER       NOT NULL PRIMARY KEY,
  3      master_id NUMBER       NOT NULL REFERENCES master(master_id)
  4  );

Table created.

SQL>
SQL>
SQL>create or replace package pk_master as
  2       vt_count     integer := 0 ;
  3       vt_master_type  master.master_type%type ;
  4  
  5       Type r_master_id is table OF number
  6           INDEX BY BINARY_INTEGER;
  7  
  8       vr_master_id  r_master_id ;
  9  
 10  end ;
 11  /

Package created.

SQL>
SQL>show errors
No errors.
SQL>
SQL>create or replace trigger bi_master
  2  before insert
  3  on master
  4  for each row
  5  declare
  6  v_sqlcode         integer ;
  7  v_sqlerrm         varchar2(255) ;
  8  Begin
  9  
 10  if :new.master_type = 'one' then
 11  
 12      pk_master.vr_master_id ( pk_master.vt_count ) := :new.master_id ;
 13      pk_master.vt_count := pk_master.vt_count + 1 ;
 14  
 15  end if ;
 16  
 17  end;
 18  /

Trigger created.

SQL>
SQL>show errors
No errors.
SQL>
SQL>create or replace trigger s_bi_master
  2  before insert
  3  on master
  4  declare
  5  v_sqlcode         integer ;
  6  v_sqlerrm         varchar2(255) ;
  7  
  8  Begin
  9  
 10  pk_master.vt_count := 0 ;
 11  
 12  end;
 13  /

Trigger created.

SQL>
SQL>show errors
No errors.
SQL>
SQL>create or replace trigger s_bi_master
  2  after insert
  3  on master
  4  declare
  5  v_sqlcode         integer ;
  6  v_sqlerrm         varchar2(255) ;
  7  Begin
  8  
  9  if pk_master.vt_count > 0 then
 10  
 11  For i in 0 .. pk_master.vt_count - 1 Loop
 12  
 13      begin
 14         insert into detail values ( pk_master.vr_master_id ( i ) , pk_master.vr_master_id ( i ) ) ;
 15      end ;
 16  
 17      pk_master.vr_master_id ( i ) := NULL ;
 18  
 19  end Loop ;
 20  
 21  end if ;
 22  
 23  pk_master.vt_count := 0 ;
 24  
 25  end;
 26  /

Trigger created.

SQL>
SQL>show errors
No errors.
SQL>
SQL>INSERT INTO master VALUES (0, 'zero');

1 row created.

SQL>COMMIT;

Commit complete.

SQL>
SQL>select * from master;

       MASTER_ID MAST            
---------------- ----            
               0 zero            

SQL>select * from detail;

no rows selected

SQL>
SQL>INSERT INTO master VALUES (1, 'one');

1 row created.

SQL>COMMIT;

Commit complete.

SQL>
SQL>select * from master;

       MASTER_ID MAST            
---------------- ----            
               0 zero            
               1 one             

SQL>select * from detail;

       DETAIL_ID        MASTER_ID
---------------- ----------------
               1                1

SQL>
SQL>INSERT INTO master VALUES (2, 'one');

1 row created.

SQL>COMMIT;

Commit complete.

SQL>
SQL>INSERT INTO master VALUES (3, 'zero');

1 row created.

SQL>COMMIT;

Commit complete.

SQL>
SQL>INSERT INTO master VALUES (4, 'zero');

1 row created.

SQL>COMMIT;

Commit complete.

SQL>
SQL>INSERT INTO master VALUES (5, 'one');

1 row created.

SQL>COMMIT;

Commit complete.

SQL>
SQL>INSERT INTO master VALUES (6, 'one');

1 row created.

SQL>COMMIT;

Commit complete.

SQL>
SQL>select * from master;

       MASTER_ID MAST            
---------------- ----            
               0 zero            
               1 one             
               2 one             
               3 zero            
               4 zero            
               5 one             
               6 one             

7 rows selected.

SQL>select * from detail;

       DETAIL_ID        MASTER_ID
---------------- ----------------
               1                1
               2                2
               5                5
               6                6

SQL>
SQL>spool off

You may need to alter the trigger to get the right IDs, but this is a starting point. I hope this helps you.

Thanks, 

Tom Kyte
June 08, 2005 - 10:01 pm UTC

I believe they want to make sure at least one child was created by the application, not made up by a trigger (e.g. make sure the data entered by the application is correct).



Another solution that almost work

Mikito Harakiri, June 08, 2005 - 3:53 pm UTC

table detail (
parent_id number,
child_id number -- redundant column
);

Let a trigger fill detail.child_id with sequential numbers (no gaps!) starting with 1. Then the required constraint is the emptiness of the following view:

select id, min_children from master
minus
select parent_id, child_id from detail

I wish MVs with set operators were fast refreshable!

Tom Kyte
June 08, 2005 - 10:05 pm UTC

please give complete examples -- for example, "no gaps!" -- how do you propose to implement that? And how do you propose to do the minus automatically, scalably, and so on? And how is this "better" than the trigger?

I've no issues with "good or better" solutions, but half-thought-out thoughts?

union-all master+detail, than aggregate and check

Alberto Dell'Era, June 08, 2005 - 4:36 pm UTC

This solution *functionally* works; I'm investigating its performance and concurrency characteristics.

We first union-all master and detail, giving a "weight" of 0 to rows from master and of 1 to rows from detail.

Then, we group by id, sum the weights, and check that the sum is >= min_no_of_children.

Should be easily generalizable for many types of cross-table checks ...

dellera2@ORACLE9I> create table master (
2 id int constraint master_pk primary key,
3 min_no_of_children int not null
4 );

Table created.

dellera2@ORACLE9I>
dellera2@ORACLE9I> create table detail (
2 pid int constraint detail_fk references master (id)
3 );

Table created.

dellera2@ORACLE9I>
dellera2@ORACLE9I> create materialized view log on master with rowid;

Materialized view log created.

dellera2@ORACLE9I>
dellera2@ORACLE9I> create materialized view log on detail with rowid;

Materialized view log created.

dellera2@ORACLE9I>
dellera2@ORACLE9I>
dellera2@ORACLE9I> create materialized view union_all_mv
2 build immediate
3 refresh fast on commit
4 as
5 select 'm' marker, rowid as row_id, 0 weight, id , min_no_of_children
6 from master
7 union all
8 select 'd' marker, rowid as row_id, 1 weight, pid as id, 0 as min_no_of_children
9 from detail;

Materialized view created.

dellera2@ORACLE9I>
dellera2@ORACLE9I> create index union_all_mv_rowid_idx on union_all_mv (row_id);

Index created.

dellera2@ORACLE9I>
dellera2@ORACLE9I> create materialized view log on union_all_mv
2 with rowid (id, weight, min_no_of_children) including new values;

Materialized view log created.

dellera2@ORACLE9I>
dellera2@ORACLE9I>
dellera2@ORACLE9I> create materialized view agg_mv
2 build immediate
3 refresh fast on commit
4 as
5 select id,
6 sum(weight) as sum_weight,
7 count(weight),
8 max (min_no_of_children) as min_no_of_children,
9 count(*)
10 from union_all_mv
11 group by id;

Materialized view created.

dellera2@ORACLE9I>
dellera2@ORACLE9I> create index agg_mv_id_idx on agg_mv (id);

Index created.

dellera2@ORACLE9I>
dellera2@ORACLE9I> alter table agg_mv
2 add constraint check_progeny
3 check (sum_weight >= min_no_of_children)
4 deferrable;

Table altered.

dellera2@ORACLE9I>
dellera2@ORACLE9I>
dellera2@ORACLE9I> insert into master (id, min_no_of_children) values (1, 0);

1 row created.

dellera2@ORACLE9I> commit;

Commit complete.

dellera2@ORACLE9I>
dellera2@ORACLE9I> insert into master (id, min_no_of_children) values (2, 1);

1 row created.

dellera2@ORACLE9I> commit;
commit
*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-02290: check constraint (DELLERA2.CHECK_PROGENY) violated


dellera2@ORACLE9I>
dellera2@ORACLE9I> insert into master (id, min_no_of_children) values (3, 1);

1 row created.

dellera2@ORACLE9I> insert into detail (pid) values (3);

1 row created.

dellera2@ORACLE9I> commit;

Commit complete.

dellera2@ORACLE9I>
dellera2@ORACLE9I> delete from detail where pid = 3;

1 row deleted.

dellera2@ORACLE9I> commit;
commit
*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-02290: check constraint (DELLERA2.CHECK_PROGENY) violated


dellera2@ORACLE9I>
dellera2@ORACLE9I> update detail set pid = 1 where pid = 3;

1 row updated.

dellera2@ORACLE9I> commit;
commit
*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-02290: check constraint (DELLERA2.CHECK_PROGENY) violated


Eric, June 08, 2005 - 10:17 pm UTC

Mikito, let it go. You are arguing with someone that has more Oracle experience on his pinky toe than we have in our whole bodies put together. If other tech areas had someone like him, I wouldn't be converting the millions of lines of java to pl/sql right now....it's 8pm at night here....



Tom Kyte
June 08, 2005 - 10:25 pm UTC

No, no - not at all.

I simply want the guy that just says "I don't like X" to pony up and give the *solution*, tested, thought out -- in the release specified. That is all. I'll gladly accept anything from anyone.

Heck -- Mikito suggested this gem:

with as_many_rows_as_you_like
( select level l from dual connect by level <= :bv )
.....


Love it, great stuff. But please -- don't say "and the rest is left for the student to implement".

Solutions, thats all. (tested and tried)

Performance of the mv-implemented constraint above

Alberto Dell'Era, June 10, 2005 - 7:49 am UTC

I've investigated the performance of the mv-implemented constraint above

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:42170695313022#42335743411615

and my conclusion is that it is not appropriate for an OLTP environment, unless it's read-mostly.

But here's the experiment - judge by yourself.

First, let's load the tables (100k rows in master, 10*100k rows in detail):

insert into master (id, min_no_of_children)
select rownum, 10 from
(select null from all_objects where rownum <= 1000),
(select null from all_objects where rownum <= 100)
;

insert into detail (pid)
select 1 + mod (rownum, 100000) from
(select null from all_objects where rownum <= 1000),
(select null from all_objects where rownum <= 100),
(select null from all_objects where rownum <= 10)
;
commit;

Add the index on the fk i forgot to create above:

create index detail_fk_idx on detail (pid);

Now, let's augment the refresh machinery with these indexes, to make the performance as optimal as possible:

-- improve my original union_all_mv_rowid_idx
drop index union_all_mv_rowid_idx;
create index union_all_mv_rowid_marker_idx on union_all_mv (row_id, marker);

create index union_all_mv_map_id_idx on union_all_mv (SYS_OP_MAP_NONNULL (id));

create index log_snap_idx_detail on mlog$_detail (snaptime$$);
create index log_snap_idx_master on mlog$_master (snaptime$$);
create index log_snap_idx_union_all_mv on mlog$_union_all_mv (snaptime$$);

Analyze everything (logs are not analyzed by gather_schema_stats):

exec dbms_stats.gather_table_stats (user, 'mlog$_detail', cascade=>true);
exec dbms_stats.gather_table_stats (user, 'mlog$_master', cascade=>true);
exec dbms_stats.gather_table_stats (user, 'mlog$_union_all_mv', cascade=>true);
exec dbms_stats.gather_schema_stats (user, cascade=>true);

Now, let's clone the master+detail structure without the MVs and logs, and compare the stats (using runstats) of a representative OLTP benchmark, to measure the overhead:

create table naked_master as select * from master;
create table naked_detail as select * from detail;
alter table naked_master add constraint naked_master_pk primary key (id);
alter table naked_detail add constraint naked_detail_fk foreign key (pid) references naked_master (id);

create index naked_detail_fk on naked_detail (pid);

exec dbms_stats.gather_schema_stats (user, cascade=>true);

exec runstats_pkg.rs_start;

insert into naked_master (id, min_no_of_children) values (100, 0);
commit;
update naked_master set min_no_of_children=-1 where id = 100;
commit;
delete from naked_master where id = 100;
commit;
insert into naked_master (id, min_no_of_children) values (100, 1);
insert into naked_detail (pid) values (100);
commit;
update naked_detail set pid = pid where pid = 100;
commit;
delete from naked_detail where pid = 100;
delete from naked_master where id = 100;
commit;

exec runstats_pkg.rs_middle;

insert into master (id, min_no_of_children) values (100, 0);
commit;
update master set min_no_of_children=-1 where id = 100;
commit;
delete from master where id = 100;
commit;
insert into master (id, min_no_of_children) values (100, 1);
insert into detail (pid) values (100);
commit;
update detail set pid = pid where pid = 100;
commit;
delete from detail where pid = 100;
delete from master where id = 100;
commit;

exec runstats_pkg.rs_stop (100);

Run1 ran in 23 hsecs
Run2 ran in 50 hsecs
run 1 ran in 46% of the time

Name Run1 Run2 Diff
STAT...data blocks consistent 0 102 102
STAT...rows fetched via callba 0 110 110
STAT...consistent changes 10 124 114
STAT...table scans (short tabl 0 134 134
LATCH.undo global data 21 162 141
STAT...commit cleanouts 18 159 141
STAT...commit cleanouts succes 14 155 141
LATCH.enqueues 12 180 168
LATCH.cache buffers lru chain 0 170 170
STAT...free buffer requested 0 170 170
STAT...sorts (rows) 2,597 2,778 181
STAT...workarea executions - o 10 198 188
STAT...sorts (memory) 6 200 194
STAT...enqueue releases 18 213 195
STAT...enqueue requests 17 213 196
LATCH.dml lock allocation 24 228 204
STAT...calls to kcmgas 6 219 213
LATCH.child cursor hash table 0 291 291
STAT...cluster key scans 0 326 326
STAT...table scan blocks gotte 0 344 344
LATCH.enqueue hash chains 44 428 384
STAT...shared hash latch upgra 2 505 503
STAT...redo entries 32 574 542
LATCH.redo allocation 44 587 543
STAT...table fetch by rowid 0 566 566
STAT...buffer is pinned count 0 572 572
STAT...index scans kdiixs1 2 577 575
STAT...session cursor cache hi 30 605 575
STAT...cluster key scan block 0 578 578
STAT...index fetch by key 3 593 590
STAT...opened cursors cumulati 30 764 734
STAT...db block gets 53 902 849
STAT...parse count (total) 30 918 888
STAT...execute count 31 925 894
STAT...db block changes 61 1,107 1,046
STAT...consistent gets - exami 12 1,077 1,065
LATCH.simulator hash latch 0 1,196 1,196
STAT...calls to get snapshot s 20 1,316 1,296
STAT...buffer is not pinned co 0 1,792 1,792
LATCH.row cache enqueue latch 0 2,238 2,238
LATCH.row cache objects 0 2,373 2,373
STAT...recursive calls 7 3,119 3,112
LATCH.library cache pin alloca 60 3,705 3,645
LATCH.shared pool 209 4,943 4,734
LATCH.library cache pin 224 6,219 5,995
STAT...table scan rows gotten 0 7,720 7,720
LATCH.library cache 264 9,129 8,865
STAT...no work - consistent re 0 14,590 14,590
STAT...consistent gets 20 16,584 16,564
STAT...session logical reads 73 17,486 17,413
LATCH.cache buffers chains 271 36,231 35,960
STAT...session pga memory 0 65,536 65,536
STAT...redo size 6,800 152,952 146,152

Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
1,442 68,413 66,971 2.11%

So we have
LIO : 16,584 / 20 = 830
redo : 152,952 / 6,800 = 22
latches: 68,413 / 1,442 = 47
elapsed: 50 / 23 = 2.1

A very heavy overhead in general; the latch and LIO increases in particular probably make this solution unscalable in a multi-user environment.

I think it is attractive only in a read-mostly environment, where concurrency is not a concern; in that environment, I would enjoy the generalizability of the solution (just propagate the necessary data down the MV, then make the check constraint as complex as necessary) and the fact that it's a declarative (SQL-only) solution.

Mikito is really cool!!

Edgar, June 14, 2005 - 10:41 am UTC

After a little modification of his gem, I found what I was looking for!
(a sequence of numbers 1..N from nothing)

select level L from
(select 1 d from dual
union
select 2 d from dual)
start with d=1
connect by level <= :N and d = 1
-- order by level
;

Thanx.


Tom Kyte
June 14, 2005 - 1:41 pm UTC

with as_many_as_you_want
as
(select level l
from dual
connect by level <= :n )
select * from as_many_as_you_want

is shorter and does the same....

Ouch, I found a better way of doing "infinite dual" on this site

Edgar, June 14, 2005 - 11:01 am UTC

SQL> select *
  2  from (select level l from dual connect by level < 10)
  3  ; 

Alberto's test

Mikito Harakiri, June 14, 2005 - 5:12 pm UTC

I wonder if anybody performed a similar test comparing materialized view with poor man's MV: a table incrementally maintained by a trigger.

Tom Kyte
June 15, 2005 - 3:14 am UTC

why don't *YOU* do it.

Tom: any idea about SNAPTIME$$ ... what for??

Gabe, June 15, 2005 - 3:50 pm UTC

Alberto:

Regarding your MV tests …

#1. Indexes on SNAPTIME$$ … in the tests I did I noticed Oracle stores ’01-jan-4000’ in this column. I don’t know under what circumstances a _real_ date will be stored in SNAPTIME$$ … one that would make the indexes desirable.

#2. For your tests above can you confirm you used 9.2.0.6? I got …

Oracle9i Enterprise Edition Release 9.2.0.1.0 – Production

flip@FLOP> create materialized view agg_mv
2 build immediate
3 refresh fast on commit
4 as
5 select id,
6 sum(weight) as sum_weight,
7 count(weight),
8 max (min_no_of_children) as min_no_of_children,
9 count(*)
10 from union_all_mv
11 group by id;
from union_all_mv
*
ERROR at line 10:
ORA-12053: this is not a valid nested materialized view


Tom Kyte
June 16, 2005 - 3:30 am UTC

#1 it is used in replication when more than one site pulls from the same snapshot log.

#2 can you give me the entire small, concise, yet complete test case? everything needed. I know, it is all "above somewhere", but put it together into a single place and I'll test it for you.

Gabe's #1

Alberto Dell'Era, June 16, 2005 - 5:55 am UTC

For #1 - snaptime$$ is updated at commit time (from that moment on it contains the commit date - I've verified it by putting a trigger on the mview log) as part of the refresh process:

update "MVTEST2"."MLOG$_T1"
set snaptime$$ = :1
where snaptime$$ > to_date('2100-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS')

snaptime$$ is referenced at the end to delete the rows:

delete from "MVTEST2"."MLOG$_T1" where snaptime$$ <= :1

snaptime$$ is then referenced in many statements during the refresh process, such as:

select dmltype$$, max(snaptime$$)
from "MVTEST2"."MLOG$_T2" where snaptime$$ <= :1
group by dmltype$$;

SELECT 'N', COUNT(*)
FROM "MVTEST2"."MLOG$_T2"
WHERE SNAPTIME$$ > :1 AND SNAPTIME$$ <= :2

And of course it is referenced in the statements that update the mv, which "always" contain subqueries such as

SELECT ...
FROM "MVTEST2"."MLOG$_T2" "MAS$"
WHERE "MAS$".SNAPTIME$$ > :1

The index on snaptime$$ is very useful if the mview log HWM is very high (probably after a mass modification to the base table) and the table contains only a few rows (OLTP -> small transactions); the index gives the CBO the opportunity to avoid a long FTS (which is of course the best plan if the HWM is low).
That's how I discovered it - after a mass modification, the time to refresh went from instantaneous to several seconds (not surprisingly - the log HWM was 1000 or so blocks). Putting an index on snaptime$$ solved the "problem"; here's the test case:

http://asktom.oracle.com/pls/ask/f?p=4950:61:17384291759493672697::::P61_ID:42304815945767#42387644282338

I would be very interested in any comments from you (Gabe, Tom). Thanks for your interest in the meantime :)

Gabe's #2

Alberto Dell'Era, June 16, 2005 - 7:38 am UTC

Here's the test case (confirmed to work in 9.2.0.6):

set echo on;

drop materialized view agg_mv;
drop materialized view union_all_mv;
drop table detail;
drop table master;
drop table naked_detail;
drop table naked_master;

create table master (
id int constraint master_pk primary key,
min_no_of_children int not null
);

create table detail (
pid int constraint detail_fk references master (id)
);

create index detail_fk_idx on detail (pid);

create materialized view log on master with rowid;

create materialized view log on detail with rowid;

create materialized view union_all_mv
build immediate
refresh fast on commit
as
select 'm' marker, rowid as row_id, 0 weight, id , min_no_of_children
from master
union all
select 'd' marker, rowid as row_id, 1 weight, pid as id, 0 as min_no_of_children
from detail;

create index union_all_mv_rowid_idx on union_all_mv (row_id, marker);

create materialized view log on union_all_mv
with rowid (id, weight, min_no_of_children) including new values;

create materialized view agg_mv
build immediate
refresh fast on commit
as
select id,
sum(weight) as sum_weight,
count(weight),
max (min_no_of_children) as min_no_of_children,
count(*)
from union_all_mv
group by id;

create index agg_mv_id_idx on agg_mv (id);

alter table agg_mv
add constraint check_progeny
check (sum_weight >= min_no_of_children)
deferrable;

insert into master (id, min_no_of_children)
select rownum, 10 from
(select null from all_objects where rownum <= 1000),
(select null from all_objects where rownum <= 100)
;

insert into detail (pid)
select 1 + mod (rownum, 100000) from
(select null from all_objects where rownum <= 1000),
(select null from all_objects where rownum <= 100),
(select null from all_objects where rownum <= 10)
;
commit;

create index union_all_mv_map_id_idx on union_all_mv (SYS_OP_MAP_NONNULL (id));
-- already indexed: create index agg_mv_map_id_idx on agg_mv (SYS_OP_MAP_NONNULL (id));

create index log_snap_idx_detail on mlog$_detail (snaptime$$);
create index log_snap_idx_master on mlog$_master (snaptime$$);
create index log_snap_idx_union_all_mv on mlog$_union_all_mv (snaptime$$);

exec dbms_stats.gather_table_stats (user, 'mlog$_detail', cascade=>true);
exec dbms_stats.gather_table_stats (user, 'mlog$_master', cascade=>true);
exec dbms_stats.gather_table_stats (user, 'mlog$_union_all_mv', cascade=>true);

exec dbms_stats.gather_schema_stats (user, cascade=>true);

create table naked_master as select * from master;
create table naked_detail as select * from detail;
alter table naked_master add constraint naked_master_pk primary key (id);
alter table naked_detail add constraint naked_detail_fk foreign key (pid) references naked_master (id);

create index naked_detail_fk on naked_detail (pid);

exec dbms_stats.gather_schema_stats (user, cascade=>true);

exec runstats_pkg.rs_start;

insert into naked_master (id, min_no_of_children) values (-100, 0);
commit;
update naked_master set min_no_of_children=-1 where id = -100;
commit;
delete from naked_master where id = -100;
commit;
insert into naked_master (id, min_no_of_children) values (-100, 1);
insert into naked_detail (pid) values (-100);
commit;
update naked_detail set pid = pid where pid = -100;
commit;
delete from naked_detail where pid = -100;
delete from naked_master where id = -100;
commit;

exec runstats_pkg.rs_middle;

insert into master (id, min_no_of_children) values (-100, 0);
commit;
update master set min_no_of_children=-1 where id = -100;
commit;
delete from master where id = -100;
commit;
insert into master (id, min_no_of_children) values (-100, 1);
insert into detail (pid) values (-100);
commit;
update detail set pid = pid where pid = -100;
commit;
delete from detail where pid = -100;
delete from master where id = -100;
commit;

exec runstats_pkg.rs_stop (100);


Tom Kyte
June 16, 2005 - 9:56 am UTC

Alberto -- thanks, that makes it easy -- i don't even have to run it since you did ;)

Very nice.

Gabe, June 16, 2005 - 3:21 pm UTC

Alberto:

#2: For me (9.2.0.1), that agg_mv fails creation with ORA-12053 … I should patch up really.

#1: Yes, snaptime$$ gets updated upon commit with sysdate. I did also test the mass modification scenario you described and indexes on snaptime$$ did help with subsequent refreshes … but only if stats were gathered on the logs after the big transaction.

One interesting thing about such massive transactions against tables typically used in OLTP mode … during the refresh process the optimizer is doing all that processing with whatever statistics were there in the first place (probably in line with the regular small-size OLTP transaction) … maybe some dynamic sampling does take place though.

Thank you.

PS. All that’s left is to see a repeatable test case with [non-FKs] deferrable constraints on MVs making a difference … but I’ll ask it in another thread.


Very nice for me, too :)

Alberto Dell'Era, June 20, 2005 - 3:31 pm UTC

Gabe:

I was concentrating on the scenario I'm interested in (quite frequent, I think) - an OLTP system with very small, few-row transactions, plus the occasional massive modification (one-off data load, data fixing, etc.) that bumps the HWM. For this, I thought that indexing snaptime$$, and then gathering statistics on the "empty" log, could be useful - the test (thanks for replaying it!) confirmed that.

Much better than, e.g., dropping & recreating the log, which requires downtime (and a complete refresh of the mv), and which you must remember to synchronize with the massive modification (= 99% probability of forgetting about it).

One interesting aside - a sort of "hard-coded dynamic sampling" actually happens, ie (in 9.2.0.6) this statement is issued during the refresh:

SELECT 'N', COUNT(*)
FROM "MVTEST2"."MLOG$_T2"
WHERE SNAPTIME$$ > :1 AND SNAPTIME$$ <= :2

that probably feeds the argument of the CARDINALITY hint to be found in all mlog-selecting subqueries:

SELECT /*+ CARDINALITY( 1) NO_SEMIJOIN */ CHARTOROWID("MAS$"."M_ROW$$") RID$
FROM "MVTEST2"."MLOG$_T2" "MAS$"
WHERE "MAS$".SNAPTIME$$ > :1

I thought about raising optimizer_dynamic_sampling, but I haven't investigated it, mainly because that would be a high-impact modification of the default behaviour, so I prefer the "analyze the empty log" trick.

Mv-based constraints are really fascinating, aren't they ? :)

Anyone claiming the MV-based constraints idea?

Gabe, June 22, 2005 - 4:47 pm UTC

Alberto:

<quote>
Mv-based constraints are really fascinating, aren't they?
</quote>

I’m not sure who first came with the idea of using constraints on MVs to enforce complex constraints … it is certainly a clever twist to their original role. I think it is a relatively new technique (as in ‘years’) but it got to the point where I see people _rushing_ into it. I’m a bit old-fashioned in that sense … give me a PL/SQL layer and I’m happy. Declarative constraints? … wonderful … but for complex ones maybe there are other ways to do it without the MV overhead. And the DEFERRABLE stuff … think of the large apps having multi-layer APIs where the commit is controlled way out in the middle-tier code … I’d argue there are important benefits to getting violations immediately (and hence being able to handle them as close to the offending code as possible) … Java or whatever getting some ORA-????? on commit? … now what? … do they [usually] make provisions for them [beyond, maybe a catch-all exception]?

So, as a study case – sure it is interesting … as a technique - one of the many in the toolbox. I usually go for the simple stuff + getting the model right.
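Gabe's concern about commit-time failures can be made concrete. The following is a hedged sketch, assuming the master/detail schema and the deferred CHECK_CHILD constraint from the original answer at the top of this thread; with a deferrable initially deferred constraint, the violation is raised at COMMIT as ORA-02091 (transaction rolled back), far away from the statement that actually caused it:

```sql
set serveroutput on

begin
   -- violates the business rule: a 'one' master with no detail row
   insert into master (master_id, master_type) values (1, 'one');
   -- the INSERT itself succeeds; the deferred check is not evaluated yet
   commit;  -- fails HERE: ORA-02091 wrapping the CHECK_CHILD violation
exception
   when others then
      -- this is the point Gabe raises: by the time the error arrives,
      -- the offending code is long gone, so handling it meaningfully is hard
      dbms_output.put_line(sqlerrm);
end;
/
```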


Tom Kyte
June 23, 2005 - 1:48 pm UTC

I would go with the MV based ones when a simpler alternative does not exist.

(as with anything :)

simple rules

Alberto Dell'Era, June 22, 2005 - 5:57 pm UTC

Gabe:

> I usually go for the simple stuff + getting the model right.

Absolutely - and hence the fascinating adjective applied to MVs - it's purely declarative: you state the invariant, and off you go. A very very Mathematical, simple and elegant way.
I mean, sure the PL/SQL trigger looks simpler, but that's only because we are experienced in using PL/SQL - once we get the same level of experience using MVs, sure we'll appreciate the MV as simpler (and easily generalizable to more complex constraint types).

I'm not convinced by the deferred argument you put forward - I've yet to find a program that reacts to errors (unless they're not really errors, e.g. an upsert); normally they log the error and say "oops, failed" - for error-reacting code has to be tested and especially *maintained*, with the high risk that seldom-executed code (in the error branch) is buggy beyond belief (i.e. it's not sloppy programming, it's an informed design decision). Sure, it would be better to have more information for debugging ("deferred constraint X failed on row..."), but I don't see the deferred constraint as a problem (generally speaking; sure, YMMV - that's life in Oracle).

Let me play the Easy Prophet - when or if Oracle implements multi-row, cross-table ANSI constraints, it will be with on-commit MVs, automatically maintained (they would look like indexes at that point). And the mv logs will be on-commit-delete-rows GTTs :)

Credits

Mikito Harakiri, June 23, 2005 - 4:11 pm UTC

> I’m not sure who first came with the idea of using constraints on MVs to enforce complex constraints

Tony Andrews' exchange on this forum is where I saw it first. The idea itself is certainly old, though: take any paper on materialized view maintenance and it will mention MVs applied to constraint enforcement.

related with Solaris – Oracle question:

Jo, July 30, 2005 - 2:30 pm UTC

Sorry, I could not find the Ask Tom form. Please, can you add this question?

Hi,

This is related with Solaris – Oracle question:

Table info: CREATE TABLE abcd ( rat  VARCHAR2 (55));

Data in rat column: 2@7

If we run the query with Oracle on NT:
NT-SQL> SELECT rat FROM abcd;
2@7        (the answer is correct).

But if we run the query with Oracle on Sun Solaris:
SUN-UNIX-SQL> SELECT rat FROM abcd;
2?7        (a question mark in place of the special character).

Note: @ here stands for a control character.

I understand that SQL*Loader on Solaris does not support the control (special) character; it is character-based, not GUI-based. So,
how can we solve this problem in Solaris-based Oracle?



Tom Kyte
July 30, 2005 - 3:39 pm UTC

did you find the box on the home page that said:

<quote>
Sorry I have a large backlog right now, please ask a question later

See the "other resources" tab above and to the left for recommendations of places to go to get answers to questions!
</quote>

??


This is more of a character set question, than a solaris question (and sqlldr ?)

You have different NLS_LANG settings on the two clients, Unix will default to US7ASCII.

What should your NLS_LANG be, given your database's character set? Set your NLS_LANG environment variable properly.

And make sure your terminal can display that character set as well (no, I don't know your terminal emulation software, so I cannot tell you how to do that)
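The fix can be sketched as follows. This is a hypothetical example: the character set value shown (WE8ISO8859P1) is an assumption for illustration; substitute whatever `SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET'` reports for your database.

```shell
# On the Solaris client, set NLS_LANG to match the database character set
# before starting SQL*Plus or SQL*Loader. The value below is illustrative.
export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
echo "$NLS_LANG"
```

With the client and database character sets aligned, the `@` control character is passed through unconverted instead of being replaced by `?`.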

similar question

Asim Naveed, October 19, 2005 - 8:50 am UTC

Hi,
Please refer to your first followup in this thread.
My question is the same as the first question in this
thread, except that I want at least one detail record
for each master record regardless of the value of the
MASTER_TYPE field of the master record. And I want
to do this without using child_cnt. Is it possible?

And please, if you can, tell me how to do it at the
application level, i.e. in Forms 6i.

Thanks

Tom Kyte
October 19, 2005 - 8:54 am UTC

refer to first followup ... but question is same as the first question in the thread?

so, I'm not really sure what you want... (and I don't do data things at the application level - data things belong in the database - so whatever I answer will be a database answer)

clarification

Asim Naveed, October 19, 2005 - 1:42 pm UTC

Actually the question is:
"Can we do it without using the child_cnt field?"

I thought that, by ignoring the MASTER_TYPE field,
we might easily be able to achieve the same thing without using child_cnt.

The difference is that the original question has the restriction
of checking the MASTER_TYPE field, and my question does
not have this restriction. So by eliminating this
restriction, I thought it might now be possible to make sure
that each master record has at least one child record
without using the child_cnt field.



Thanks


Tom Kyte
October 19, 2005 - 3:46 pm UTC

how about you phrase YOUR question in YOUR terms?

rather than having me try to "diff your changes from the original to come to your question"


but - in short, if you want to ensure there is at least one child, you sort of need that child_cnt column. the master_type wasn't relevant to the problem of "at least one"; it simply told us when "at least one" had to be true
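Tom's point can be sketched as a variant of the original answer with MASTER_TYPE removed; this is a hedged sketch (names are illustrative), since a check constraint can only see columns of its own row, so the counter column is still what carries the cross-table fact:

```sql
-- Every master must have at least one detail row, unconditionally.
create table master (
  master_id  number primary key,
  child_cnt  number,
  constraint at_least_one_child
    check (nvl(child_cnt, 0) > 0) deferrable initially deferred
);

create table detail (
  detail_id number primary key,
  master_id number not null references master (master_id)
);

-- As in the original answer, a trigger on DETAIL maintains the counter.
create or replace trigger detail_trigger
after insert or delete or update of master_id on detail
for each row
begin
  if inserting or updating then
    update master set child_cnt = nvl(child_cnt, 0) + 1
     where master_id = :new.master_id;
  end if;
  if deleting or updating then
    update master set child_cnt = child_cnt - 1
     where master_id = :old.master_id;
  end if;
end;
/
```

The deferred constraint means the INSERT into MASTER is allowed mid-transaction; the rule is only verified at COMMIT, by which time the detail row must exist.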

How its getting 12 rows

Ravi, November 03, 2005 - 4:45 pm UTC

I don't get it, Tom: why does select * from (select level from dual connect by level<=12) yield 12 rows when "select level from dual connect by level<=12" gives only one row? Maybe there's a simple reason, but I'm not getting it.

Thanks.

Tom Kyte
November 04, 2005 - 2:52 am UTC

because dual is magic.

workaround

A reader, November 04, 2005 - 3:41 am UTC

select level from (select * from dual,(select * from dual)) connect by level<=12

??

Sri, May 24, 2007 - 11:15 am UTC

Select level from dual connect by level<=12

DOES return 12 rows.

For 'Sri from Daily Land'

Muhammad Riaz Shahid, June 07, 2009 - 6:42 am UTC

Yes it does, but only in 10g and later versions:


SQL> Select level from dual connect by level<=12 
  2  ;

     LEVEL
----------
         1

SQL> SELECT * FROM v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.7.0 - Production
PL/SQL Release 9.2.0.7.0 - Production
CORE    9.2.0.7.0       Production
TNS for 32-bit Windows: Version 9.2.0.7.0 - Production
NLSRTL Version 9.2.0.7.0 - Production

SQL> ed
Wrote file afiedt.buf

  1* Select level from dual connect by level<=12
SQL> /

     LEVEL
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9
        10
        11
        12

12 rows selected.

SQL> SELECT * FROM v$version;

BANNER
---------------------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
PL/SQL Release 10.2.0.1.0 - Production
CORE    10.2.0.1.0      Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production


For Oracle 12cR2 is this still the best solution?

A reader, July 24, 2017 - 9:00 pm UTC

Hi Tom,

I'm having the same problem as the main question (i.e. the one-to-many relationship with at least one matchup enforced). I'm wondering: is this 12-year-old approach still best practice, or are there newer methods for doing this?
Connor McDonald
July 25, 2017 - 7:27 am UTC

This is pretty much still the way, because it's basically implementing the concept of an "assertion" (aka a complex table constraint).

We're aiming to make this a native part of the database - vote on that here

https://community.oracle.com/ideas/13028
