Question and Answer

Tom Kyte

Thanks for the question.

Asked: January 04, 2003 - 1:46 pm UTC

Last updated: March 01, 2010 - 6:47 am UTC

Version: 9.2.0

Viewed 1000+ times

You Asked

I hope someone can help me, and thank you in advance if you can. I'm using Oracle 9 and the "EMP" table (see below) found in the "DemoBld" DB. I need to ensure that no department has more than 8 employees or fewer than 3, except when a transaction reduces the number of employees to 0, which is to be reported. How can I do this? PL/SQL, packages, triggers? Fortunately the "EMP" table does have a column called "DeptNo", so only this table needs to be used. I look forward to any comments.
*****************************************************
CREATE TABLE EMP (
EMPNO NUMBER(4)
CONSTRAINT EMPNOGTZERO CHECK(EMPNO > 0),
ENAME VARCHAR2(10)
CONSTRAINT ENAMEUPPER CHECK(ENAME = UPPER(ENAME)),
JOB VARCHAR2(9),
MGR NUMBER(4)
CONSTRAINT EMP_SELF_KEY
REFERENCES EMP(EMPNO),
HIREDATE DATE,
SAL NUMBER(7,2),
COMM NUMBER(7,2),
DEPTNO NUMBER(2) NOT NULL,
CONSTRAINT EMP_FOREIGN_KEY FOREIGN KEY (DEPTNO)
REFERENCES DEPT (DEPTNO),
CONSTRAINT EMP_PRIMARY_KEY PRIMARY KEY (EMPNO)
)
/
*****************************************************
EMPNO ENAME      JOB        MGR HIREDATE   SAL  COMM DEPTNO
----- ---------- --------- ---- --------- ---- ----- ------
 7839 KING       PRESIDENT      17-NOV-81 5000           10
 7698 BLAKE      MANAGER   7839 01-MAY-81 2850           30
 7782 CLARK      MANAGER   7839 09-JUN-81 2450           10
 7566 JONES      MANAGER   7839 02-APR-81 2975           20
 7902 FORD       ANALYST   7566 03-DEC-81 3000           20
 7369 SMITH      CLERK     7902 17-DEC-80  800           20
 7499 ALLEN      SALESMAN  7698 20-FEB-81 1600   300     30
 7521 WARD       SALESMAN  7698 22-FEB-81 1250   500     30
 7654 MARTIN     SALESMAN  7698 28-SEP-81 1250  1400     30
 7788 SCOTT      ANALYST   7566 09-DEC-82 3000           20
 7844 TURNER     SALESMAN  7698 08-SEP-81 1500     0     30
 7876 ADAMS      CLERK     7788 12-JAN-83 1100           20
 7900 JAMES      CLERK     7698 03-DEC-81  950           30
 7934 MILLER     CLERK     7782 23-JAN-82 1300           10

14 rows selected.

and Tom said...

warning: checks like this force serialization at the aggregate level. you will only be able to modify records in the EMP table for a given DEPTNO by one session at a time!

Here is one solution using deferrable constraints:

scott@ORA920> create table count_by_dept
2 ( deptno primary key ,
3 cnt ,
4 constraint cnt_between_3_and_8_or_0
5 check ( cnt between 3 and 8 OR cnt = 0 ) deferrable initially deferred
6 )
7 organization index
8 as
9 select deptno, count(*)
10 from emp
11 group by deptno
12 /

Table created.

scott@ORA920>
scott@ORA920> create or replace trigger dept_trigger
2 after insert or update of deptno or delete on emp for each row
3 begin
4 if (inserting or updating)
5 then
6 update count_by_dept set cnt = cnt+1 where deptno = :new.deptno;
7 if ( sql%rowcount = 0 )
8 then
9 insert into count_by_dept(deptno,cnt) values (:new.deptno,1);
10 end if;
11 end if;
12 if (updating or deleting)
13 then
14 update count_by_dept set cnt = cnt-1 where deptno = :old.deptno;
15 if ( sql%rowcount = 0 )
16 then
17 raise_application_error( -20001, 'Impossible condition' );
18 end if;
19 end if;
20 end;
21 /

Trigger created.

scott@ORA920>
scott@ORA920> delete from emp where deptno = 10 and rownum = 1;

1 row deleted.

scott@ORA920> commit;
commit
*
ERROR at line 1:
ORA-02091: transaction rolled back
ORA-02290: check constraint (SCOTT.CNT_BETWEEN_3_AND_8_OR_0) violated

we cannot have just two in deptno = 10

scott@ORA920>
scott@ORA920> delete from emp where deptno = 10;

3 rows deleted.

scott@ORA920> commit;

Commit complete.

but we can have ZERO

scott@ORA920>
scott@ORA920> insert into emp select * from emp;

11 rows created.

scott@ORA920> commit;
commit
*
ERROR at line 1:
ORA-02091: transaction rolled back
ORA-02290: check constraint (SCOTT.CNT_BETWEEN_3_AND_8_OR_0) violated


but we cannot have more then 8 in deptno=30...




Comments

Restricting number of entries (continued)

Smithy, January 04, 2003 - 7:23 pm UTC

Thanks for your assistance so far. Once I have successfully created the table and trigger without error, I enter the exact commands you entered, but whereas your commands are accepted, mine produce an error (see below).

SQL> delete from emp where deptno = 10 and rownum = 1;
delete from emp where deptno = 10 and rownum = 1
            *
ERROR at line 1:
ORA-04098: trigger 'SCOTT.EMP_TRIGGER' is invalid and failed re-validation

This error happens on any department, or even if I try to delete all entries for a dept (see below).

SQL> delete from emp where deptno = 10;
delete from emp where deptno = 10
            *
ERROR at line 1:
ORA-04098: trigger 'SCOTT.EMP_TRIGGER' is invalid and failed re-validation

Why am I receiving these error messages when you didn't, and how can I resolve the issue?

Thanks in advance. 

Tom Kyte
January 04, 2003 - 7:34 pm UTC

you typed something in wrong

SQL> show error trigger emp_trigger

 

Wow

Remi, January 05, 2003 - 4:00 pm UTC

Hi Tom

Can you believe this, I received this exact (and I mean exact) same question on my newly created forum: http://askremi.ora-0000.com
see:
http://askremi.ora-0000.com/htdocs/question/show/33

And I think your solution is far better, I'll keep learning every day ;-)

Great answer, great forum

Remi Visser

Tom Kyte
January 05, 2003 - 4:54 pm UTC

don't know if you  would like a critique of your answer or not but... here goes:

a) the insert trigger will fail in many cases.

ops$tkyte@ORA920> insert into emp select * from scott.emp where rownum = 1;
insert into emp select * from scott.emp where rownum = 1
            *
ERROR at line 1:
ORA-04091: table OPS$TKYTE.EMP is mutating, trigger/function may not see it
ORA-06512: at "OPS$TKYTE.TRG_EMP_AFT_INS", line 6
ORA-04088: error during execution of trigger 'OPS$TKYTE.TRG_EMP_AFT_INS'



anytime the insertion might be more than one row -- you lose. Mutating table. The for-each-row trigger cannot read the table. Additionally -- it does not handle multi-user databases AT ALL. Consider:

ops$tkyte@ORA920> select deptno, count(*) from emp group by deptno;

    DEPTNO   COUNT(*)
---------- ----------
        10          3
        20          5
        30          6

ops$tkyte@ORA920> insert into emp values ( 1, 'x', 'x', 1, sysdate, 1, 1, 30 );

1 row created.

ops$tkyte@ORA920> insert into emp values ( 1, 'x', 'x', 1, sysdate, 1, 1, 30 );

1 row created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> REM this is another user, another transaction:
ops$tkyte@ORA920> declare
  2          pragma autonomous_transaction;
  3  begin
  4          insert into emp values ( 1, 'x', 'x', 1, sysdate, 1, 1, 30 );
  5          commit;
  6  end;
  7  /

PL/SQL procedure successfully completed.

ops$tkyte@ORA920>
ops$tkyte@ORA920> commit;

Commit complete.

ops$tkyte@ORA920> select deptno, count(*) from emp group by deptno;

    DEPTNO   COUNT(*)
---------- ----------
        10          3
        20          5
        30          9

ops$tkyte@ORA920>

whoops -- how did that get there???



I would say the same about your update and delete triggers -- very "single user", excessively single user -- yet with nothing in them to actually force single-user access.

Multi-user integrity checking is heinously hard when you try to DIY.  Best left to the server, as you can see....

good luck with the site.  You should put the stuff in a database ;)
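A minimal sketch of the kind of serialization a do-it-yourself check needs before it can even start to be safe (this assumes the DEPT parent table from the original question; the trigger name is made up): lock the parent row first, so concurrent sessions queue up per department.

create or replace trigger emp_serialize_dept
before insert or update of deptno or delete on emp
for each row
begin
    -- SELECT ... FOR UPDATE on the parent row is the whole point:
    -- it makes every other session touching the same DEPTNO wait
    for r in ( select deptno
                 from dept
                where deptno in ( :new.deptno, :old.deptno )
                  for update )
    loop
        null;
    end loop;
end;
/

Only once that lock is held can an after-statement trigger count the rows for the department and trust the answer it gets.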

 

It's also on Experts Exchange

Gary, January 06, 2003 - 12:54 am UTC

http://www.experts-exchange.com/Developer/Databases/Oracle/Tools_Development/Q_20444092.html

But so far, Tom's answer looks the best.
I especially like the 'deferred' bit, which takes care of programs that may send a series of single delete/update statements rather than one department-level one.

PS. I see dept_trigger is row-level, but if a statement-level trigger applied to both insert and update, does it get fired twice for a merge or just once? If once, are both 'inserting' and 'updating' true?
Is there a possibility of

Tom Kyte
January 06, 2003 - 7:38 am UTC

Arg -- their solution has the same multi-user bugs as the other ones.

Triggers -- I have a love/hate relationship with them for this reason.  People just DON'T even think about multi-versioning, read consistency, data integrity when they do these things.  The trigger provided in that answer

a) doesn't take into consideration "transfers" (updates)
b) doesn't work at all for deletes (well, since it fails I guess you could say it protects the data integrity)
c) doesn't work for multi-row inserts
d) appears to work but damages data integrity for single row inserts


I just hope this question is homework from a really smart teacher who wants to take the collective set of answers and show just exactly how complex this stuff is -- that some amount of real thought is required for multi-user concurrency issues -- that you HAVE to let the database do it -- that attempting to do it via a trigger alone is not possible.


As for merge:

ops$tkyte@ORA920> create table emp as select * from scott.emp;

Table created.

ops$tkyte@ORA920> create table emp2 as select * from scott.emp;

Table created.

ops$tkyte@ORA920> update emp2 set empno = -empno where rownum <= 7;

7 rows updated.

ops$tkyte@ORA920> update emp2 set ename = lower(ename);

14 rows updated.

ops$tkyte@ORA920>
ops$tkyte@ORA920> create or replace trigger emp_bi
  2  before insert on emp
  3  begin
  4      dbms_output.put_line( 'BEFORE INSERT' );
  5  end;
  6  /

Trigger created.

ops$tkyte@ORA920> create or replace trigger emp_bifer
  2  before insert on emp for each row
  3  begin
  4      dbms_output.put_line( 'BEFORE INSERT for each row ' || :new.empno );
  5  end;
  6  /

Trigger created.

ops$tkyte@ORA920> create or replace trigger emp_aifer
  2  after insert on emp for each row
  3  begin
  4      dbms_output.put_line( 'AFTER INSERT for each row ' || :new.empno );
  5  end;
  6  /

Trigger created.

ops$tkyte@ORA920> create or replace trigger emp_ai
  2  after insert on emp
  3  begin
  4      dbms_output.put_line( 'AFTER INSERT' );
  5  end;
  6  /

Trigger created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> create or replace trigger emp_bu
  2  before update on emp
  3  begin
  4      dbms_output.put_line( 'BEFORE update' );
  5  end;
  6  /

Trigger created.

ops$tkyte@ORA920> create or replace trigger emp_bufer
  2  before update on emp for each row
  3  begin
  4      dbms_output.put_line( 'BEFORE update for each row ' || :new.empno );
  5  end;
  6  /

Trigger created.

ops$tkyte@ORA920> create or replace trigger emp_aufer
  2  after update on emp for each row
  3  begin
  4      dbms_output.put_line( 'AFTER update for each row ' || :new.empno );
  5  end;
  6  /

Trigger created.

ops$tkyte@ORA920> create or replace trigger emp_au
  2  after update on emp
  3  begin
  4      dbms_output.put_line( 'AFTER update' );
  5  end;
  6  /

Trigger created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> merge into EMP
  2  using EMP2
  3  on ( emp.empno = emp2.empno )
  4  when matched then
  5      update set emp.ename = emp2.ename
  6  when not matched then
  7      insert ( empno, ename, job, mgr, hiredate, sal, comm, deptno )
  8      values ( emp2.empno, emp2.ename, emp2.job, emp2.mgr,
  9               emp2.hiredate, emp2.sal, emp2.comm, emp2.deptno )
 10  /
BEFORE INSERT
BEFORE update
BEFORE update for each row 7788
AFTER update for each row 7788
BEFORE update for each row 7839
AFTER update for each row 7839
BEFORE update for each row 7844
AFTER update for each row 7844
BEFORE update for each row 7876
AFTER update for each row 7876
BEFORE update for each row 7900
AFTER update for each row 7900
BEFORE update for each row 7902
AFTER update for each row 7902
BEFORE update for each row 7934
AFTER update for each row 7934
BEFORE INSERT for each row -7566
AFTER INSERT for each row -7566
BEFORE INSERT for each row -7521
AFTER INSERT for each row -7521
BEFORE INSERT for each row -7654
AFTER INSERT for each row -7654
BEFORE INSERT for each row -7369
AFTER INSERT for each row -7369
BEFORE INSERT for each row -7499
AFTER INSERT for each row -7499
BEFORE INSERT for each row -7782
AFTER INSERT for each row -7782
BEFORE INSERT for each row -7698
AFTER INSERT for each row -7698
AFTER update
AFTER INSERT

14 rows merged.

ops$tkyte@ORA920>

does both as needed row by row. 

Thanks for the critique

Remi Visser, January 06, 2003 - 5:35 am UTC

Hi Tom

(this refers to your critique on my answer)

Thanks very much Tom for your review of the question. I will remove my answer and post a hyperlink to your answer.

Remi

Flawed solutions

Tony, January 06, 2003 - 8:15 am UTC

Tom,

I didn't understand all your points in your critique of the Experts forum solution:
b) doesn't work at all for deletes (well, since it fails I guess you could say it protects the data integrity)
- Seems to work for me (multi-user issues aside)
c) doesn't work for multi-row inserts
- Again, appears OK to me (this is a statement level trigger)

And Remi - you acknowledge that your site was inspired by Tom's, but isn't writing "more THEN 8" in your solution rather than "more THAN 8" taking the homage too far! :)

Tom Kyte
January 06, 2003 - 8:49 am UTC

whoops -- sorry about that -- I see I goofed on (B); I thought it was a row trigger, now I see it is an after-statement trigger.  Let me be more thorough (because it is still a bad implementation)

So, allow me to change my response (with proofs this time!!!):

assume:

ops$tkyte@ORA920> drop table emp;

Table dropped.

ops$tkyte@ORA920> create table emp as select * from scott.emp;

Table created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> select deptno, count(*) from emp group by deptno;

    DEPTNO   COUNT(*)
---------- ----------
        10          3
        20          5
        30          6

ops$tkyte@ORA920>
ops$tkyte@ORA920> create or replace trigger xxx
  2  after delete or insert on emp
  3  declare
  4  begin
  5
  6   for irec in (select deptno, count(*) emps
  7             from emp
  8             group by deptno
  9             having count(*) <3
 10                 or count(*) >8)
 11   loop
 12       RAISE_APPLICATION_ERROR(-20000, 'Departement '||irec.deptno||' has '||irec.emps||' employees!');
 13   end loop;
 14
 15  end;
 16  /

Trigger created.


the above setup was re-run before each of the examples below!


<b>a) doesn't take into consideration "transfers" (updates)</b>


ops$tkyte@ORA920> update emp set deptno = 30 where empno in ( 7369, 7566, 7788 );

3 rows updated.

ops$tkyte@ORA920> select deptno, count(*) from emp group by deptno;

    DEPTNO   COUNT(*)
---------- ----------
        10          3
        20          2   <<<=====
        30          9   <<<=====


<b>b) doesn't work for deletes with more than one user</b>

ops$tkyte@ORA920> delete from emp where empno in ( 7369, 7566 );
2 rows deleted.

ops$tkyte@ORA920> declare
  2          pragma autonomous_transaction;
  3  begin
  4          delete from emp where empno = 7788;
  5          commit;
  6  end;
  7  /
PL/SQL procedure successfully completed.

ops$tkyte@ORA920> commit;
Commit complete.

ops$tkyte@ORA920> select deptno, count(*) from emp group by deptno;

    DEPTNO   COUNT(*)
---------- ----------
        10          3
        20          2   <<<========
        30          6


<b>c) doesn't work for inserts with more than one user:</b>

ops$tkyte@ORA920> insert into emp select * from emp where deptno = 30 and rownum <= 2;

2 rows created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> declare
  2          pragma autonomous_transaction;
  3  begin
  4          insert into emp select * from emp where deptno = 30 and rownum <= 2;
  5          commit;
  6  end;
  7  /

PL/SQL procedure successfully completed.

ops$tkyte@ORA920> commit;

Commit complete.

ops$tkyte@ORA920>
ops$tkyte@ORA920> select deptno, count(*) from emp group by deptno;

    DEPTNO   COUNT(*)
---------- ----------
        10          3
        20          5
        30         10   <<<<=======



about the then/than -- ouch ;)

I have been trying to watch that one.  Slows me down typing though; every time I type "then" -- I have to stop and think....

Similar but different

Charlie, June 13, 2003 - 6:00 pm UTC

Hi Tom,

SQL> CREATE TABLE box_type(ID NUMBER(5), NAME VARCHAR2(20), max_position NUMBER(4), is_ordered VARCHAR2(1));

SQL> INSERT INTO box_type VALUES(1, 'big-box', 8, 'N');
SQL> INSERT INTO box_type VALUES(2, 'small-box', 4, 'Y');


SQL> SELECT * FROM box_type;

        ID NAME                 MAX_POSITION IS_ORDERED
---------- -------------------- ------------ ----------
         1 big-box                         8 N
         2 small-box                       4 Y

SQL> CREATE TABLE stuff(ID NUMBER(5) PRIMARY KEY, box_type_id NUMBER(5), box_id NUMBER(5), position NUMBER(4));
SQL> ALTER TABLE stuff ADD(CONSTRAINTS stuff_uq UNIQUE(box_id, position));


SQL> INSERT INTO stuff VALUES(1, 1, 100, 1);
SQL> INSERT INTO stuff VALUES(2, 1, 100, 10);
SQL> INSERT INTO stuff VALUES(3, 2, 200, 1);
SQL> INSERT INTO stuff VALUES(4, 2, 200, 2);

SQL> SELECT * FROM stuff;

        ID BOX_TYPE_ID     BOX_ID   POSITION
---------- ----------- ---------- ----------
         1           1        100          1
         2           1        100         10 
         3           2        200          1
         4           2        200          2

SQL> COMMIT;


I have a requirement which is:
whenever doing an insert or update on table "stuff", I want to make sure,
based on box_type, that if it is ordered (is_ordered='Y'), the input position
must be between 1 and box_type.max_position, taking the input position from the GUI.
If the box_type is unordered (is_ordered='N'), the position will be generated by the
database automatically and the total number of positions must be less than or equal to
box_type.max_position.  For example,

If I insert another box-100, based on box_type, we know its max_position is 8 and it is unordered.
So the database will assign a new position (via max(position)+1 or a sequence), say, 11,
and check the total count, something like (
select count(*) cnt from stuff
where box_id = 100
); the cnt must be less than or equal to 8.


If I insert another box-200, we know its max_position is 4 and it is ordered.
So it will take THE USER'S INPUT position (3) FROM the GUI and make sure THIS INPUT POSITION
is between 1 and 4.

I tried a trigger solution, but it failed with mutating error.

Could you please shed some light on it?

thanks,
 

Tom Kyte
June 14, 2003 - 8:33 am UTC

well, i wouldn't use a trigger -- when the logic behind the trigger gets so complex that you are in the world of "magic" to get it working, well, it isn't a good maintainable solution anymore.

a PROCEDURE to insert/update this table would be called for.

But we already have half of it right -- the count limit part (which requires you to add a column to box_type that is a current count of the child rows)....

doing the other part, based on that (when ordered, make sure position is between 1 and max), should be straightforward (but ugly; I strongly encourage a PROCEDURE).

here is the insert trigger, you can work on the update (which has to deal with an update to BOX_TYPE_ID, so some :new and :old references will be needed)

ops$tkyte@ORA817DEV> CREATE TABLE box_type
  2  (ID NUMBER(5) primary key,
  3   NAME VARCHAR2(20),
  4   max_position NUMBER(4) NOT NULL,
  5   is_ordered VARCHAR2(1) NOT NULL,
  6   cnt number default 0 NOT NULL,
  7   constraint cnt_check check ( is_ordered != 'N' OR cnt between 0 and max_position )
  8  );

Table created.

ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV> INSERT INTO box_type VALUES(1, 'big-box', 8, 'N', 0);

1 row created.

ops$tkyte@ORA817DEV> INSERT INTO box_type VALUES(2, 'small-box', 4, 'Y', 0);

1 row created.

ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV> SELECT * FROM box_type;

        ID NAME                           MAX_POSITION I        CNT
---------- ------------------------------ ------------ - ----------
         1 big-box                                   8 N          0
         2 small-box                                 4 Y          0

ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV> drop TABLE stuff;
drop TABLE stuff
           *
ERROR at line 1:
ORA-00942: table or view does not exist


ops$tkyte@ORA817DEV> CREATE TABLE stuff
  2  (ID NUMBER(5) PRIMARY KEY,
  3   box_type_id references box_type,
  4   box_id NUMBER(5),
  5   position NUMBER(4));

Table created.

ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV> ALTER TABLE stuff ADD(CONSTRAINTS stuff_uq UNIQUE(box_id, position));

Table altered.

ops$tkyte@ORA817DEV> drop sequence s;

Sequence dropped.

ops$tkyte@ORA817DEV> create sequence s;

Sequence created.

ops$tkyte@ORA817DEV> create or replace trigger stuff_trigger
  2  before insert on stuff for each row
  3  declare
  4          l_rec box_type%rowtype;
  5  begin
  6          select * into l_rec from box_type where id = :new.box_type_id;
  7
  8          if (l_rec.is_ordered = 'Y')
  9          then
 10                  if ( NOT ( :new.position between 1 and l_rec.max_position ) )
 11          then
 12                          raise_application_error( -20001, 'not between 1 .. max' );
 13                  end if;
 14          else
 15                  update box_type set cnt = cnt+1 where id = :new.box_type_id;
 16                  select s.nextval into :new.position from dual;
 17          end if;
 18  end;
 19  /

Trigger created.

ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV> INSERT INTO stuff VALUES(1, 1, 100, 1);

1 row created.

ops$tkyte@ORA817DEV> INSERT INTO stuff VALUES(2, 1, 100, 10);

1 row created.

ops$tkyte@ORA817DEV> INSERT INTO stuff VALUES(3, 1, 100, 10);

1 row created.

ops$tkyte@ORA817DEV> INSERT INTO stuff VALUES(4, 1, 100, 10);

1 row created.

ops$tkyte@ORA817DEV> INSERT INTO stuff VALUES(5, 1, 100, 10);

1 row created.

ops$tkyte@ORA817DEV> INSERT INTO stuff VALUES(6, 1, 100, 10);

1 row created.

ops$tkyte@ORA817DEV> INSERT INTO stuff VALUES(7, 1, 100, 10);

1 row created.

ops$tkyte@ORA817DEV> INSERT INTO stuff VALUES(8, 1, 100, 10);

1 row created.

ops$tkyte@ORA817DEV> INSERT INTO stuff VALUES(9, 1, 100, 10);
INSERT INTO stuff VALUES(9, 1, 100, 10)
            *
ERROR at line 1:
ORA-02290: check constraint (OPS$TKYTE.CNT_CHECK) violated
ORA-06512: at "OPS$TKYTE.STUFF_TRIGGER", line 13
ORA-04088: error during execution of trigger 'OPS$TKYTE.STUFF_TRIGGER'


ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV> INSERT INTO stuff VALUES(100, 2, 200, 1);

1 row created.

ops$tkyte@ORA817DEV> INSERT INTO stuff VALUES(101, 2, 200, 2);

1 row created.

ops$tkyte@ORA817DEV> INSERT INTO stuff VALUES(102, 2, 200, 3);

1 row created.

ops$tkyte@ORA817DEV> INSERT INTO stuff VALUES(103, 2, 200, 4);

1 row created.

ops$tkyte@ORA817DEV> INSERT INTO stuff VALUES(104, 2, 200, 5);
INSERT INTO stuff VALUES(104, 2, 200, 5)
            *
ERROR at line 1:
ORA-20001: not between 1 .. max
ORA-06512: at "OPS$TKYTE.STUFF_TRIGGER", line 10
ORA-04088: error during execution of trigger 'OPS$TKYTE.STUFF_TRIGGER'


ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV> select * from box_type;

        ID NAME                           MAX_POSITION I        CNT
---------- ------------------------------ ------------ - ----------
         1 big-box                                   8 N          8
         2 small-box                                 4 Y          0

ops$tkyte@ORA817DEV>
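If you take the PROCEDURE route recommended above instead of the trigger, a minimal sketch might look like the following (it assumes the same box_type, stuff and sequence s from the demo; the procedure name is made up):

create or replace procedure insert_stuff
( p_id          in stuff.id%type,
  p_box_type_id in stuff.box_type_id%type,
  p_box_id      in stuff.box_id%type,
  p_position    in stuff.position%type )
as
    l_rec box_type%rowtype;
begin
    -- lock the parent row: serializes inserts for this box_type
    select * into l_rec
      from box_type
     where id = p_box_type_id
       for update;

    if ( l_rec.is_ordered = 'Y' )
    then
        if ( p_position not between 1 and l_rec.max_position )
        then
            raise_application_error( -20001, 'not between 1 .. max' );
        end if;
        insert into stuff ( id, box_type_id, box_id, position )
        values ( p_id, p_box_type_id, p_box_id, p_position );
    else
        -- unordered: ignore the caller's position, let the database assign it
        update box_type set cnt = cnt+1 where id = p_box_type_id;
        insert into stuff ( id, box_type_id, box_id, position )
        values ( p_id, p_box_type_id, p_box_id, s.nextval );
    end if;
end;
/

The cnt_check constraint on box_type still does the real enforcement; the procedure just keeps all of the logic in one obvious place.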
 

almost there...

Charlie, June 14, 2003 - 5:15 pm UTC

Thanks Tom!!

There is one thing that can defeat this solution.

ops$tkyte@ORA817DEV> INSERT INTO stuff VALUES(8, 1, 100, 10);

1 row created.

Right after this statement, if I inserted another DIFFERENT box, say Box-500, which also belongs to box_type(1), unordered and max_position is 8

INSERT INTO stuff VALUES(9, 1, 500, 50);

It will barf because the current "cnt" in box_type has been updated to 8

        ID NAME                           MAX_POSITION I        CNT
---------- ------------------------------ ------------ - ----------
         1 big-box                                   8 N          8  <==
         2 small-box                                 4 Y          0

However, since they are totally different boxes, I do want to insert this box-500 into table "stuff". How can I do that?

Thanks!





Tom Kyte
June 14, 2003 - 5:31 pm UTC

i didn't follow that at all. I thought you wanted 8 "big-boxes" and no more.

I don't understand your "different box"

but if you are saying -- there can be many collections of 8 boxes, you need yet another table to maintain the count by "collection"

You have the information you need -- the technique is all there. If you understand it, you can apply it to your problem (and if you don't understand it, you really need to before you use it, so... that leads us back to "you have the technique, you can apply it to a broad range of problems").
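If the limit really is "at most N items per box" rather than per box type, the same counting technique simply moves one level down. A hedged sketch (the BOX table and its columns are assumptions, not something from the thread):

create table box
( box_id       number(5) primary key,
  box_type_id  number(5) not null references box_type,
  cnt          number    default 0 not null,
  max_items    number(4) not null,
  constraint box_cnt_check check ( cnt between 0 and max_items )
);

-- the insert trigger (or procedure) then maintains box.cnt instead of
-- box_type.cnt:
--     update box set cnt = cnt + 1 where box_id = :new.box_id;

max_items would be copied from box_type.max_position when the box row is created, so each box carries its own limit.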

stuck

Mark, December 29, 2003 - 11:38 am UTC

I am trying to enforce the business rule that if an employee has one or more addresses, then one and only one address must be "preferred", that is, preferred = 'Y'. All other addresses for the employee must have preferred = 'N'. An address does not have to exist for an employee.

I have created the following for testing.

CREATE TABLE mark_EMP1 (
  EMPNO     NUMBER(4) PRIMARY KEY,
  ENAME     VARCHAR2(10),
  JOB       VARCHAR2(9),
  MGR       NUMBER(4),
  HIREDATE  DATE,
  SAL       NUMBER(7,2),
  COMM      NUMBER(7,2),
  DEPTNO    NUMBER(2));

CREATE TABLE mark_ADDR1 (
  TRANS        NUMBER(4) PRIMARY KEY,
  EMPNO     NUMBER(4),
  ADDR_LINE1  VARCHAR2(40),
  ADDR_LINE2  VARCHAR2(40),
  CITY        VARCHAR2(40),
  STATE       VARCHAR2(2),
  ZIP         VARCHAR2(40),
  PREFERRED   VARCHAR2(1));

ALTER TABLE mark_ADDR1 ADD CONSTRAINT mark_ADDR1_FK1 FOREIGN KEY (EMPNO) REFERENCES mark_EMP1(EMPNO);
ALTER TABLE mark_ADDR1 ADD constraint chk_mark_addr1_preferred check (preferred in ('N','Y'));

INSERT INTO MARK_EMP1 ( EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM,
DEPTNO ) VALUES ( 
7521, 'WARD', 'SALESMAN', 7698,  TO_Date( '02/22/1981 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM')
, 1250, 500, 30); 

commit;
   
CREATE SEQUENCE mark_ADDR1_TRANSNUM INCREMENT BY 1 START WITH 1 MINVALUE 1 NOCYCLE NOCACHE NOORDER;

create table mark_addr_pref_cnt
    (  empno primary key,
       cnt    ,
       constraint mark_addr_pref_cnt_chk_1
       check ( cnt = 1 ) deferrable initially deferred
    )
    organization index
    as
    select empno, count(*)
     from mark_ADDR1
     WHERE preferred = 'Y'
    group by empno;


 create or replace trigger mark_ADDR1_pref_trigger
   after insert or update of PREFERRED or delete on mark_ADDR1
   for each row
   begin
     IF INSERTING 
     OR (UPDATING AND :NEW.preferred = 'Y' AND :OLD.preferred = 'N')
     THEN
      UPDATE mark_addr_pref_cnt
         SET cnt = cnt + DECODE(:NEW.preferred, 'Y', 1, 0)
       WHERE empno = :NEW.empno;

      IF (SQL%ROWCOUNT = 0)
      THEN
         INSERT INTO mark_addr_pref_cnt
                     (empno, cnt)
              VALUES (:NEW.empno, DECODE(:NEW.preferred, 'Y', 1, 0));
      END IF;
     END IF;

     IF    (DELETING AND :OLD.preferred = 'Y')
      OR (UPDATING AND :OLD.preferred = 'Y' AND :NEW.preferred = 'N')
     THEN
      UPDATE mark_addr_pref_cnt
         SET cnt = cnt - 1
       WHERE empno = :OLD.empno;

     END IF;
   end;
/   

I am having two problems.
1) Once an address is added to the employee, how can I delete all addresses from the employee without violating the constraint on mark_addr_pref_cnt? In other words, how can I delete the associated row in mark_addr_pref_cnt when all addresses for an employee are deleted?

2) I am getting a constraint error on mark_addr_pref_cnt at unexpected times:

SQL> insert into mark_ADDR1 (TRANS,EMPNO,ADDR_LINE1,CITY,STATE,ZIP,PREFERRED)
  2    values (mark_ADDR1_TRANSNUM.nextval,7521,'319 Lamarck Dr.','Snyder','NY','14226','N');

1 row created.

SQL> insert into mark_ADDR1 (TRANS,EMPNO,ADDR_LINE1,CITY,STATE,ZIP,PREFERRED)
  2    values (mark_ADDR1_TRANSNUM.nextval,7521,'1410 Main St.','Buffalo','NY','14209','Y');

1 row created.

SQL> select empno, trans, preferred
  2  from mark_addr1
  3  order by empno, preferred desc;

     EMPNO      TRANS P
---------- ---------- -
      7521          6 Y
      7521          5 N

SQL> select * 
  2  from mark_addr_pref_cnt
  3  order by empno;

     EMPNO        CNT
---------- ----------
      7521          1

SQL> commit;
commit
*
ERROR at line 1:
ORA-02091: transaction rolled back
ORA-02290: check constraint (VOLUME.MARK_ADDR_PREF_CNT_CHK_1) violated

Thanks for your help. 

Tom Kyte
December 29, 2003 - 1:02 pm UTC

given the question:

I am trying to enforce the business rule that if an employee has one or more 
addresses, then one and only one address must be "preferred", that is, preferred 
= 'Y'. All other addresses for the employee must have preferred = 'N'.


My answer would be:


ops$tkyte@ORA9IR2> create UNIQUE index emp_addr_index on
  2  mark_addr1( decode( preferred, 'Y', empno ) );
 
Index created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into mark_addr1 values ( 1,   111, 'x', 'y', 'x', 'y', 'z', 'Y' );
 
1 row created.
 
ops$tkyte@ORA9IR2> insert into mark_addr1 values ( 2,   111, 'x', 'y', 'x', 'y', 'z', 'N' );
 
1 row created.
 
ops$tkyte@ORA9IR2> insert into mark_addr1 values ( 3,   111, 'x', 'y', 'x', 'y', 'z', 'N' );
 
1 row created.
 
ops$tkyte@ORA9IR2> insert into mark_addr1 values ( 4,   111, 'x', 'y', 'x', 'y', 'z', 'Y' );
insert into mark_addr1 values ( 4,   111, 'x', 'y', 'x', 'y', 'z', 'Y' )
*
ERROR at line 1:
ORA-00001: unique constraint (OPS$TKYTE.EMP_ADDR_INDEX) violated


 

at least vs. at most

j., December 30, 2003 - 6:14 am UTC

that would just enforce the uniqueness of 'preferred' state per "group".

but is there any non-trigger-solution to force AT LEAST one entry per "group" to be marked as 'preferred'? or does this requirement have to be met by maintaining an additional "preferred count" column the same way as in your count-child-entry-pattern above?

Mark, December 30, 2003 - 8:28 am UTC

J. is right - in that there needs to be at least one preferred address, preferred = 'Y'. That is why I went with the "cnt" table and the trigger. I figured the deferrable initially deferred constraint was necessary because a non-preferred address could be inserted first. Or, the preferred could be deleted before a new preferred was inserted or updated. So, I am back to my original problems.

Thanks again for your help

Tom Kyte
December 30, 2003 - 10:56 am UTC

well, we have it 80% solved then.  you just want to make sure that at least one is Y if there exists at least one row.  we can do that by using a "mutating table" workaround PLUS the unique index (we still need both).

Here is the approach:

ops$tkyte@ORA9IR2> CREATE TABLE mark_ADDR1 (
  2    TRANS        NUMBER(4) PRIMARY KEY,
  3    EMPNO     NUMBER(4),
  4    ADDR_LINE1  VARCHAR2(40),
  5    ADDR_LINE2  VARCHAR2(40),
  6    CITY        VARCHAR2(40),
  7    STATE       VARCHAR2(2),
  8    ZIP         VARCHAR2(40),
  9    PREFERRED   VARCHAR2(1));
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace package state_pkg
  2  as
  3      type array is table of number index by binary_integer;
  4
  5      g_data array;
  6      g_empty array;
  7
  8      procedure doit;
  9  end;
 10  /
 
Package created.
 
<b>we'll maintain the list of modified empnos in these tables...</b>


ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace package body state_pkg
  2  as
  3
  4  procedure doit
  5  is
  6      l_idx  number;
  7      l_cnt1 number;
  8      l_cnt2 number;
  9  begin
 10      l_idx := g_data.first;
 11      while (l_idx is not null)
 12      loop
 13          select count(*), count(decode(preferred,'Y','Y'))
 14            into l_cnt1, l_cnt2
 15            from mark_addr1
 16           where empno = l_idx;
 17
 18          if ( l_cnt1 <> 0 and l_cnt2 <> 1 )
 19          then
 20              raise_application_error( -20001, 'At least one addr must be preferred' );
 21          end if;
 22          l_idx := g_data.next(l_idx);
 23      end loop;
 24  end;
 25
 26  end;
 27  /
 
Package body created.

<b>doit will simply check that either

a) there are ZERO rows for this employee, or
b) if there are not ZERO rows -- at least ONE of them is 'Y'

It is not used for uniqueness -- it won't be able to do that, as we'll see. That's why the error message unambiguously says "at least one" and does not do any sort of duplication check</b>
 
ops$tkyte@ORA9IR2> create or replace trigger t1
  2  before insert or update or delete on mark_addr1
  3  begin
  4      state_pkg.g_data := state_pkg.g_empty;
  5  end;
  6  /
 
Trigger created.

<b>we need to reset before each update/insert/delete...</b>

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace trigger t2
  2  after insert or update or delete on mark_addr1 for each row
  3  begin
  4      if (inserting or updating)
  5      then
  6          state_pkg.g_data( :new.empno ) := 1;
  7      end if;
  8      if (deleting or updating)
  9      then
 10          state_pkg.g_data( :old.empno ) := 1;
 11      end if;
 12  end;
 13  /
 
Trigger created.

<b>simply remember the empnos...</b>

 

ops$tkyte@ORA9IR2> create or replace trigger t3
  2  after insert or update or delete on mark_addr1
  3  begin
  4      state_pkg.doit;
  5  end;
  6  /
 
Trigger created.

<b>and then do the check..</b>
 
ops$tkyte@ORA9IR2> insert into mark_addr1 values ( 2,   111, 'x', 'y', 'x', 'y', 'z', 'N' );
insert into mark_addr1 values ( 2,   111, 'x', 'y', 'x', 'y', 'z', 'N' )
            *
ERROR at line 1:
ORA-20001: At least one addr must be preferred
ORA-06512: at "OPS$TKYTE.STATE_PKG", line 20
ORA-06512: at "OPS$TKYTE.T3", line 2
ORA-04088: error during execution of trigger 'OPS$TKYTE.T3'

<b>so far, so good -- it detected we didn't have a preferred one, but:</b>

ops$tkyte@ORA9IR2> insert into mark_addr1 values ( 1,   111, 'x', 'y', 'x', 'y', 'z', 'Y' );
1 row created.
 
ops$tkyte@ORA9IR2> insert into mark_addr1 values ( 2,   111, 'x', 'y', 'x', 'y', 'z', 'N' );
1 row created.
 
ops$tkyte@ORA9IR2> insert into mark_addr1 values ( 3,   111, 'x', 'y', 'x', 'y', 'z', 'N' );
1 row created.
 
ops$tkyte@ORA9IR2> insert into mark_addr1 values ( 4,   111, 'x', 'y', 'x', 'y', 'z', 'Y' );
insert into mark_addr1 values ( 4,   111, 'x', 'y', 'x', 'y', 'z', 'Y' )
            *
ERROR at line 1:
ORA-20001: At least one addr must be preferred
ORA-06512: at "OPS$TKYTE.STATE_PKG", line 20
ORA-06512: at "OPS$TKYTE.T3", line 2
ORA-04088: error during execution of trigger 'OPS$TKYTE.T3'

<b>and it would "appear" that it does the duplication check -- but it doesn't really:</b>
 
 
ops$tkyte@ORA9IR2> rollback;
 
Rollback complete.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into mark_addr1 values ( 1,   111, 'x', 'y', 'x', 'y', 'z', 'Y' );
 
1 row created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> declare
  2      pragma autonomous_transaction;
  3  begin
  4      insert into mark_addr1 values ( 4,   111, 'x', 'y', 'x', 'y', 'z', 'Y' );
  5      commit;
  6  end;
  7  /
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> commit;
 
Commit complete.
 
<b>since the two sessions cannot see each other's additions.... So -- we create the index:</b>

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create UNIQUE index emp_addr_index on
  2  mark_addr1( decode( preferred, 'Y', empno ) );
mark_addr1( decode( preferred, 'Y', empno ) )
                                    *
ERROR at line 2:
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found

<b>just shows the data is "bad"</b> 
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> delete from mark_addr1;
 
2 rows deleted.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create UNIQUE index emp_addr_index on
  2  mark_addr1( decode( preferred, 'Y', empno ) );
 
Index created.

<b>we create the index and retry:</b>
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into mark_addr1 values ( 1,   111, 'x', 'y', 'x', 'y', 'z', 'Y' );
 
1 row created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> declare
  2      pragma autonomous_transaction;
  3  begin
  4      insert into mark_addr1 values ( 4,   111, 'x', 'y', 'x', 'y', 'z', 'Y' );
  5      commit;
  6  end;
  7  /
declare
*
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource
ORA-06512: at line 4
 
<b>if we were to use two sessions -- we would have seen these sessions serialize instead of deadlock -- but you get the point.</b>
 
ops$tkyte@ORA9IR2> commit;
 
Commit complete.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> delete from mark_addr1;
 
1 row deleted.

<b>and we can delete because we check for ZERO rows (that's OK) or count(Y) != 1.
</b>
 
ops$tkyte@ORA9IR2> commit;
 
Commit complete.
 
ops$tkyte@ORA9IR2>
 

but how to "switch" preferred state between existing entries?

j., December 30, 2003 - 1:44 pm UTC

(without deleting all "group" entries and reinserting them with modified states)

checks are not performed at transaction level but at statement level ...

Tom Kyte
December 30, 2003 - 2:11 pm UTC

ops$tkyte@ORA9IR2> select trans, preferred from mark_addr1;
 
     TRANS P
---------- -
         1 Y
         2 N
         3 N
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> update mark_addr1
  2     set preferred = decode( trans, 1, 'N', 3, 'Y' )
  3   where trans in ( 1, 3 );
 
2 rows updated.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select trans, preferred from mark_addr1;
 
     TRANS P
---------- -
         1 N
         2 N
         3 Y


? it only takes a single update statement to modify as many rows as you want.  a simple toggle is all you need.  More simply, it might be:

ops$tkyte@ORA9IR2> update mark_addr1
  2  set preferred = decode( trans, 1, 'Y', 'N' )
  3  where trans = 1
  4    or ( empno = 111 and preferred = 'Y' );
 
2 rows updated.
 
ops$tkyte@ORA9IR2>  select trans, preferred from mark_addr1;
 
     TRANS P
---------- -
         1 Y
         2 N
         3 N
In that -- you need to know the trans you want to be "preferred" and the empno of the person it belongs to.  that way you don't need to know the "old" and the "new", just the "new". 

my fault

j., December 30, 2003 - 4:43 pm UTC

that just reminds me of the lesson: don't do in "single row procedural mode" what can be achieved with straight descriptive SQL ...


thanks

Mark, December 31, 2003 - 11:20 am UTC

Excellent!! It works perfectly.

Thanks for your help Tom; I really appreciate it.

Thanks for your input J.


hi

A reader, March 25, 2004 - 4:22 pm UTC

I have a before insert/update/delete trigger that inserts :old or :new values into an audit table. I have defined pragma autonomous_transaction so that I can commit inside my trigger.

When a user inserts a record into the regular table, an equivalent record is inserted into the audit table and committed. Later, if the user decides to roll back the record from the regular table, I am not sure how to get rid of the new record in the audit table.
Is there a way to work around this kind of situation?

Thanks

Tom Kyte
March 25, 2004 - 9:28 pm UTC

delete the line that reads "pragma autonomous_transaction" and it'll "just happen"
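For example, a plain audit trigger along these lines (the audit table name and columns are made up) stays inside the user's transaction, so rolling back the insert rolls back the audit row as well:

create or replace trigger emp_audit_trg
after insert or update or delete on emp
for each row
begin
    -- no autonomous transaction, no commit: the audit row lives and dies
    -- with the transaction that fired the trigger
    insert into emp_audit
    ( audited_on, action, empno, old_sal, new_sal )
    values
    ( sysdate,
      case when inserting then 'I' when updating then 'U' else 'D' end,
      nvl( :new.empno, :old.empno ),
      :old.sal, :new.sal );
end;
/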



Inter-row integrity?

A reader, September 16, 2004 - 8:03 pm UTC

I have the following tables: CUST, EMP and a CUST_EMP assoc table with just (cust_id,emp_id). Basically, there is a many-many association between customers and employees.

I need to introduce the concept of a "primary employee" for a customer. How would you model/store this information? Of course, there has to be only 1 primary emp per customer, whether it is mandatory or not is still TBD.

I see the following options:

1. Add a primary_flag to the cust_emp table. But then how can I ensure that only 1 (cust,emp) row has this flag set?

2. Add a primary_emp_id to the cust table. This would work except that the business rule is that any arbitrary emp cannot be the primary emp, it has to be one of the associated emps for that cust (as per the cust_emp table). So then I am back where I started.

Any ideas, thoughts appreciated. Thanks.

Tom Kyte
September 16, 2004 - 8:19 pm UTC

primary employee to me seems a no-brainer:

there is a column in the cust table that is a foreign key to the employee table. If it is mandatory, it is a NOT NULL foreign key column.


You can change the business rule to be "the associated emps for a customer is the set of emps in cust_emp and the primary in cust"


if you go to

http://asktom.oracle.com/~tkyte

and grab:
Slides presented at the IOUG-A in Toronto and at the TCOUG in the Twin Cities, April 2004

the read/write consistency section actually shows how to enforce the integrity if you go with the Y/N flag on the association object.


This enforces uniqueness -- only ONE per customer:

create unique index X on cust_emp( decode(primary,'Y',custno) );


the trigger demonstrated can do the "mandatory" if that becomes an issue.


Inter-row integrity?

A reader, September 16, 2004 - 8:07 pm UTC

Another wrinkle: There is a periodic batch process that recalculates this primary employee for every customer as per some business rules. When the new primary employee is different from the existing one, something needs to happen (insert into another table, etc). How can I do this?

Would you recommend a procedural approach to this batch process or a pure SQL solution? Thanks for any help.

Tom Kyte
September 16, 2004 - 8:20 pm UTC

if you model this as an attribute of CUST, a trigger would work (not a huge fan of triggers with side effects like this) -- or the batch could do it.

if you use the association object, same answer.

Inter-row integrity

A reader, September 16, 2004 - 9:00 pm UTC

"You can change the business rule to be "the associated emps for a customer is the set of emps in cust_emp and the primary in cust"

Unfortunately, I can't change the business rule. This system has to interface with that 6-letter CRM system, and that system has a built-in rule that the primary emp has to be in the association table.

Given that, I guess my only option is the primary=Y flag in the cust_emp table?

I didn't quite understand your suggestion to use a trigger to recalculate all the primary emps in batch? Could you please elaborate? Also, how can I do the 'do something if primary changes' part?

Thanks

Tom Kyte
September 17, 2004 - 8:11 am UTC

Ok, this is the ONLY way to enforce the uniqueness:

create unique index X on cust_emp( decode(primary,'Y',custno) );


you cannot enforce uniqueness in a trigger. as for the "if primary emp changes", a trigger would be looking for ":old.status = 'Y' and :new.status = 'N'" (if you want to perform this extra operation from the context of the emp that USED to be the primary), or ":old.status = 'N' and :new.status = 'Y'" (if you wanted to do this extra stuff from the context of the emp that is going to be the new primary).

else, you do it in the batch -- the batch that reassigns the emps.


I did not suggest using a trigger to recalc; I said a trigger could be used as a side effect of your recalc to "do this other magic thing" (but I don't like that approach, I prefer top down -- not "some magic happens as a side effect of a trigger").
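A minimal sketch of that "primary changed" trigger, written here with the flag column called primary_flag (PRIMARY itself is a reserved word) and a purely hypothetical history table:

create or replace trigger cust_emp_primary_changed
after update of primary_flag on cust_emp
for each row
when ( old.primary_flag = 'Y' and new.primary_flag = 'N' )
begin
    -- fires in the context of the emp that USED to be the primary;
    -- the "extra stuff" (history row, notification, ...) goes here
    insert into primary_emp_history ( cust_id, old_emp_id, changed_on )
    values ( :old.cust_id, :old.emp_id, sysdate );
end;
/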

A reader, September 16, 2004 - 9:51 pm UTC

"This enforces uniqueness -- only ONE per customer:

create unique index X on cust_emp(decode(primary,'Y',custno));
"

1. How can I extend this to 2 columns? Suppose the PK of my cust_emp table were (cust_id,emp_id,type_id). How can I say that a customer can have only 1 primary emp_id per type_id? i.e. for primary=Y, cust_id+type_id have to be unique?

2. Further, how can I say that this doesn't apply to all type_ids but only where type_id in (select type_id from type where type_category='foo')?

3. If I want to implement this primary emp concept as an attribute in the cust table (FK to emp.emp_id), how can I enforce that the emp_id is in the cust_emp association table?

Thanks


Tom Kyte
September 17, 2004 - 8:21 am UTC

1) on cust_emp( decode(primary,'Y',cust_id), decode(primary,'Y',type_id) );


2) you cannot generalize it with subqueries (it would not be deterministic anymore, the contents of TYPE can change -- necessitating a rebuild of the index!)

So, you would say:

on cust_emp( case when primary = 'Y' and type_id in ( 'A', 'B', 'C' ) 
                  then cust_id
              end, 
             case when primary = 'Y' and type_id in ( 'A', 'B', 'C' ) 
                  then type_id
              end )


and if the set of type_id's change -- you drop and recreate the index with the new set (the index must be rebuilt if the set of interesting values changes)

3) You'd need a table that has cust_id, emp_id as the primary key and point a foreign key back to it.  eg: assuming the cust_emp table has a primary key of custno, empno:


ops$tkyte@ORA9IR2> create table cust ( custno int primary key, data int, primary_empno int );
 
Table created.
 
ops$tkyte@ORA9IR2> create table emp ( empno int primary key, data int );
 
Table created.
 
ops$tkyte@ORA9IR2> create table cust_emp( custno references cust, empno references emp, primary key(custno,empno) );
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> alter table cust add constraint cust_fk foreign key(custno,primary_empno) references cust_emp;
 
Table altered.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into cust values ( 1, null, NULL );
 
1 row created.
 
ops$tkyte@ORA9IR2> insert into emp values ( 100, null );
 
1 row created.
 
ops$tkyte@ORA9IR2> insert into emp values ( 101, null );
 
1 row created.
 
ops$tkyte@ORA9IR2> insert into cust_emp values ( 1, 100 );
 
1 row created.
 
ops$tkyte@ORA9IR2> update cust set primary_empno =  101 where custno = 1;
update cust set primary_empno =  101 where custno = 1
*
ERROR at line 1:
ORA-02291: integrity constraint (OPS$TKYTE.CUST_FK) violated - parent key not
found
 
 
ops$tkyte@ORA9IR2> update cust set primary_empno =  100 where custno = 1;
 
1 row updated.
 

A reader, September 18, 2004 - 11:51 pm UTC

OK, let's say I want to implement this primary concept as a flag in the association table.

Every customer has to have at most 1 primary emp, but could have none

create table cust(cust_id int primary key,last_name varchar2(30));
insert into cust select rownum,object_name from all_objects
where rownum<=10000;

create table emp(emp_id int primary key,last_name varchar2(30));
insert into emp select rownum,object_name from all_objects
where rownum<=200;

create table cust_emp(cust_id int,emp_id int,primary varchar2(1));

alter table cust_emp add foreign key(cust_id) references cust;

alter table cust_emp add foreign key(emp_id) references emp;

create unique index i on cust_emp(decode(primary,'Y',cust_id));

I want to populate cust_emp with a random sample of say 30000 rows satisfying the unique index i above. I can't think of the SQL to do this, but you get the idea.

Now, I want to create a view showing all customers, any associated employees and the primary flag if applicable.

The view would be

create view v as
select * from cust c,emp e,cust_emp ce
where c.cust_id=ce.cust_id(+)
and ce.emp_id=e.emp_id(+)

For large volumes I cannot see this view with 2 outer joins performing well?

How can I do this?

Thanks

Tom Kyte
September 19, 2004 - 10:26 am UTC

give me a query to randomly generate 30,000 emp/cust relations and I'll show you how to use

decode( row_number() over (partition by cust_id order by empno ), 1, 'Y', 'N' )


to assign 'Y' to the first cust/emp relation...........



as for the view question -- depends on how you use it.

if you

select * from v where cust_id = ....
where cust_id in (.....)


it'll be really super fast. if you just "select * from v" -- it'll be as long as it takes to join (not any slower than a "normal" join as it should be 3 full scans, 2 hash joins -- a hash join outer with (+) but that won't significantly ping the return time)


If you put a predicate on data in the cust_emp, emp attributes (and the predicate is not "IS NULL") -- this view would be totally inappropriate -- you would want a view without outer joins in that case (since if you did outer join,
those attributes would be NULL and never satisfy the predicate)

Updating

A reader, September 19, 2004 - 9:56 am UTC

1. The unique index essentially makes this like a radio-button toggle for a customer i.e. it can be set only for one emp for a given customer. So, how can I change it? As soon as I update/insert another row with primary='Y', I would get the UK violation.

Do I have to use a deferrable constraint?

create unique index i on cust_emp(decode(primary,'Y',cust_id)) deferrable;

2. Say I need to update the primary on all my customers. I have a table/view that gives the new primary for all customers (new_cust_emp). How can I efficiently merge this table into my existing cust_emp table updating only when necessary. Can I use MERGE? Or do I have to do row-by-row processing?

3. When the primary emp changes, I want to do some procesing. I guess a row-level update trigger on the table is my only choice here.

create or replace trigger t
after update on cust_emp
for each row
when (:new.primary='Y')

If :old.emp_id != :new.emp_id then ...

I want to avoid triggers if possible, can this be done without triggers?

Thanks

Tom Kyte
September 19, 2004 - 10:48 am UTC

1) in order to make emp2 primary, you must be setting emp1 "non-primary"

you would probably have 2 updates -- one to turn off the old one, one to turn on the new one. Yes, this could be done in a single update as well:


update cust_emp
set primary = decode( empno, :empno, 'Y', 'N' )
where cust_id = :cust_id
and (primary = 'Y' or empno = :empno);


that would have the effect of turning off any other empno, and turning on the passed in empno. It would work even of the existing empno was already the primary.

2) seems like a two step to me:

update cust_emp
set primary = 'N'
where primary = 'Y'
and (cust_id,empno) not in ( select cust_id, empno from this_table );

update (select primary
from cust_emp, this_table
where cust_emp.cust_id = this_table.cust_id
and cust_emp.empno = this_table.empno
and cust_emp.primary <> 'Y' )
set primary = 'Y';


that is, set to N all rows in cust_emp that are not to be primary in the future but that currently are primary.

then, update the primary flag for all cust/emp rows that are not currently primary but are designated to be primary

3) without a trigger -- how could you have any additional processing on a row update?

unless you wrap the update in 1 above in a stored procedure and say "to change primary, thou shalt call this routine" -- you'll be using a trigger.

If you use the update in #1 AND after the update sql%rowcount = 2, you changed the primary. if sql%rowcount = 1, you did not (you either a) assigned the first primary which technically isn't 'changing' or b) made the same one a primary again)


Or your stored procedure could query out the current and compare and do the update and perhaps extra processing
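A sketch of that wrapper procedure (names are illustrative; the column names follow the update statement above, with the flag written as primary_flag):

create or replace procedure set_primary_emp
( p_cust_id in cust_emp.cust_id%type,
  p_empno   in cust_emp.empno%type )
as
begin
    update cust_emp
       set primary_flag = decode( empno, p_empno, 'Y', 'N' )
     where cust_id = p_cust_id
       and ( primary_flag = 'Y' or empno = p_empno );

    if ( sql%rowcount = 2 )
    then
        -- two rows touched = an existing primary was turned off and a new
        -- one turned on, i.e. the primary really changed: the extra
        -- processing goes here
        null;
    end if;
end;
/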




A reader, September 19, 2004 - 8:02 pm UTC

"give me a query to randomly generate 30,000 emp/cust relations and I'll show you how to use..."

I did come up with the row_number() over (partition by cust_no order by empno) part myself, but couldn't come up with a way to generate 30000 random cust/emp relations. Just out of academic interest, how can this be done?

Here is what I tried

insert into cust_emp
select cust_id,emp_id,
decode(row_number() over (partition by cust_id order by emp_id),1,'Y','N')
from cust,emp where dbms_random.value(1,100)<=25
and rownum<=30000;

This didn't work very well; all emp_no=1 were always 'Y' and there were duplicates for my UK (decode(primary,'Y',cust_no))

Next I tried

insert into cust_emp
select cust_id,emp_id,
decode(row_number() over (partition by cust_id order by emp_id),1,'Y','N')
from (select * from cust sample(10)),emp
where rownum<=30000;


This didn't work too well either. Any ideas?

Regarding the view, this is similar to the discussion we had at

http://asktom.oracle.com/pls/ask/f?p=4950:61:::::P61_ID:22868016492361

create view v as
select * from cust c,emp e,cust_emp ce
where c.cust_id=ce.cust_id(+)
and ce.emp_id=e.emp_id(+)

Yes, if I write the view like this and user does

select * from v where emp_last_name='foo' it will never return any rows

But if I write the query as

select * from cust c,emp e,cust_emp ce
where c.cust_id=ce.cust_id(+)
and ce.emp_id=e.emp_id(+)
and e.emp_last_name(+)='foo'

it is fine, right?

Since I can't control how the view will be used by the users, it doesn't make sense to write a generic view like this. Might as well write a view called "primary_emps" and have exact joins everywhere. Agree?

On a related note, how about a scalar subquery

select c.*,
(select empno||last_name from cust_emp ce,emp e
where ce.emp_id=e.emp_id and ce.cust_id=c.cust_id
and ce.primary='Y') primary_emp
from cust c;

This would perform fine for cust_no =/in but would tank if i exposed this in a view and user queries

select * from v where primary_emp=...

since that would query the scalar subquery and FTS everything in sight?

Thanks


Tom Kyte
September 19, 2004 - 8:30 pm UTC

lets say you have N cust records.

You want 30,000/N emps associated with each cust (i'm assuming you want each cust in the cust_emp table, I would -- feels right. don't care if every emp is there)


so, you

from
(select cust_id from cust), (select rownum r from emp where rownum <= 30000/N)

that gives you each cust record 30000/N times.

Now, you want to get a 'random' emp associated with that. I'll make the generous assumption that empno is a number between X and Y (you find them, you have them already)

select cust_id, (select empno
from emp
where empno > dbms_random.value(X,Y)
and rownum = 1 ) empno
from (select cust_id from cust),
(select rownum r from emp where rownum <= 30000/N)

call that Q, now we

select distinct cust_id, empno
from (Q)

add in your row_number() and there you go perhaps...
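Pulled together into one statement -- a sketch only, which leans on the fact that the emp_ids created above are a dense 1..200 (so a random emp_id can be generated directly with dbms_random), and which writes the flag column as primary_flag:

insert into cust_emp ( cust_id, emp_id, primary_flag )
select cust_id, emp_id,
       decode( row_number() over (partition by cust_id order by emp_id),
               1, 'Y', 'N' ) primary_flag
  from ( select distinct
                cust_id,
                trunc( dbms_random.value( 1, 201 ) ) emp_id
           from ( select cust_id from cust ),
                ( select rownum r from emp where rownum <= 3 )  -- 30000/10000
       );

The DISTINCT throws away the occasional duplicate cust/emp pair (so the result may land slightly under 30,000 rows), and the row_number() marks exactly one of each customer's rows as the primary.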


Now, here:

<quote>
create view v as
select * from cust c,emp e,cust_emp ce
where c.cust_id=ce.cust_id(+)
and ce.emp_id=e.emp_id(+)

Yes, if I write the view like this and user does

select * from v where emp_last_name='foo' it will never return any rows

But if I write the query as

select * from cust c,emp e,cust_emp ce
where c.cust_id=ce.cust_id(+)
and ce.emp_id=e.emp_id(+)
and e.emp_last_name(+)='foo'
</quote>

you are wrong in that select * from v will not return data -- it will, it'll return rows where the last_name = foo

the other one is a TOTALLY 100% DIFFERENT question. the second query says "show all customers and if they are related to an emp named foo, show that data too"

the first one "show customers related to an emp named foo", the second "show ALL customers and if foo is related - -show that too"

Night and day.


so you would have to first figure out which question you really meant to ask -- was it the

"show ALL customers and if something else exists, show that too....."
or
"show customer such that...."

those are two different queries entirely.

A reader, September 19, 2004 - 9:21 pm UTC

"show ALL customers and if something else exists, show that too....."

This is the query I want. So, I can't really expose this using my view v, right?

So, would the scalar subquery I mentioned earlier be an alternative here? I know that scalar subqueries are great to "avoid" outer-joins, but they are not really designed to handle filters on that scalar column, as in my earlier example?

Thanks

i.e. if I have
create or replace view v
as

Tom Kyte
September 19, 2004 - 9:57 pm UTC

no, you cannot.


the scalar subquery would do the same thing! think about it. as soon as you slap the predicate "and last_name = 'foo'" on there....


A reader, September 19, 2004 - 9:30 pm UTC

update cust_emp
set primary = decode( empno, :empno, 'Y', 'N' )
where cust_id = :cust_id
and (primary = 'Y' or empno = :empno);

Isn't this the same as

update cust_emp
set empno=:empno
where cust_id=:cust_id
and primary='Y'
and empno!=:empno

Because of the unique index there is guaranteed to be only one row for :cust_id/'Y', so just update the emp for that row to the specified emp (if not there already). To me, this seems a little more intuitive than your version?

This would update 0 or 1 row

I could use this to combine the 2 step process you mention into a MERGE operation? Something like

MERGE INTO cust_emp a
using (select * from new_cust_emp b)
on (a.cust_id=b.cust_id and a.primary=b.primary)
when matched then update set a.emp_id=b.emp_id
when not matched then insert values (b.cust_id,b.emp_id,'Y');

And my trigger could do the additional processing if :old.emp_id != :new.emp_id

Am I missing anything?

Thanks

Tom Kyte
September 19, 2004 - 10:02 pm UTC

no, you want to TOGGLE the primary flag.

before the update:

cust_id  empno  primary_flag
      1     42  Y
      1     55  N

you want to get to:

cust_id  empno  primary_flag
      1     42  N
      1     55  Y

update cust_emp
set primary = decode( empno, :empno, 'Y', 'N' )
where cust_id = :cust_id
and (primary = 'Y' or empno = :empno);

will do that -- you say "cust_id = 1, empno = 55" and it will

o update 1,42,Y to 1,42,N (because of the cust_id = :cust_id and primary ='Y')
o update 1,55,N to 1,55,Y (because of the cust_id = :cust_id and empno = :empno)

Your:

update cust_emp
set empno=:empno
where cust_id=:cust_id
and primary='Y'
and empno!=:empno

would result in

cust_id empno primary_flag
1 55 Y
1 55 N

-- removing 42 from the mix and getting a dup val on the primary key (1,55 times two)



Tell you what -- for the merge question -- give it a go (my answer is no, you need to update all existing rows -- they are like "anti" rows there). Set up a test case, try it out -- just like I do.

A reader, September 19, 2004 - 10:18 pm UTC

Oops, thanks for setting me straight there. I missed the toggle part.

I will benchmark the update/merge stuff but I suspect you are right, as usual.

One final question (for now!), the unique index

create unique index i on cust_emp(decode(primary,'Y',cust_id))

This index ensures that there is only one row per customer marked as primary. For data integrity.

Can this index be used for data access into the table? Under what circumstances?

Thanks

[The technique used to generate the 30k random cust/emp rows was very cool, thanks!]

Tom Kyte
September 20, 2004 - 7:56 am UTC

I don't think you can benchmark the merge stuff -- I don't see merge applying (except now, on second thought, the two-step needs to be:


update ....

merge ..... <<<== I assumed the emp would have been related in order to be made
primary, but if you say they might not be in there, merge
would be appropriate.)


think about it -- you have a cust_id = 55 with a primary empno of 1. Your table of "new guys" says cust_id = 55 should have empno= 42.

merge would be able to merge cust_emp with this table T by cust_id and set the primary flag to Y for 42 (either via an update or an INSERT). MERGE WOULD NOT HOWEVER be able to set the primary flag to N for 1 -- unless you put "anti records" in there -- records to set to N and records to set to Y.



As for the index, say you wanted to find the primary emp for cust_id X


You could:

where cust_id = X and primary_flag = 'Y'

and if you had an index on cust_id,primary_flag -- it would find it at once. But, if you queried:

where decode(primary_flag,'Y',cust_id) = X

you would be able to find it just as fast without the extra index.


Index

A reader, September 20, 2004 - 10:19 am UTC

You are right about the merge, thanks for pointing it out. I am not sure yet if the cust/emp row would always be there or might need to be inserted and then marked 'Y'. You have provided solutions for both cases anyway, thanks.

Regarding the index, the table PK is (cust_id,emp_id,another_column) anyway, so queries of the form

where cust_id = X and primary_flag = 'Y'

could always use the PK index.

My question was really why would I use a convoluted query of the form

where decode(primary_flag,'Y',cust_id) = :cust_id

when I can simply use the above form?

Thanks

Tom Kyte
September 20, 2004 - 11:02 am UTC

could always use the pk index -- to range scan it, and then do a table access by index rowid for every cust_emp record of that customer.

Eg: if a customer was related to 10 emps, you would have

3 LIO's to read the index structure (root -> branch -> leaf)
and
10 LIO's to read the table blocks to see if primary flag = 'Y'


using the decode we would have:

3 LIO's to read the index structure
1 LIO to read that row

period. done, we found our record.


you can either have 13 LIO's or 4 -- up to you.
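
To make the two access paths concrete -- just a sketch, reusing the decode-based unique index defined earlier in this thread:

-- the existing function-based index:
--   create unique index i on cust_emp( decode(primary,'Y',cust_id) );

-- this form can use that index directly (at most one entry per customer):
select *
  from cust_emp
 where decode(primary,'Y',cust_id) = :cust_id;

-- this form range scans the primary key index and then visits the table
-- for every cust_emp row of that customer just to test the flag:
select *
  from cust_emp
 where cust_id = :cust_id
   and primary = 'Y';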

A reader, September 20, 2004 - 10:52 pm UTC

before the update:

cust_id empno primary_flag
1 42 Y
1 55 N

update cust_emp
set primary = decode( empno, :empno, 'Y', 'N' )
where cust_id = :cust_id
and (primary = 'Y' or empno = :empno);

If I pass in cust_id=1/empno=42, it will update the 1/42/Y row unnecessarily. Is there a way to make the UPDATE above update 0 rows if I pass in an emp who is already the primary?

Thanks

Tom Kyte
September 21, 2004 - 7:37 am UTC

*think about what you might add* to the predicate to make that happen.

It is just boolean logic here -- the answer is "of course".

Now, just add to the predicate what you need to make it avoid that. What predicate might you "AND" into that to make it skip that update.......

scalar subquery

A reader, September 20, 2004 - 10:57 pm UTC

Not sure I understood your response to my question about the scalar subquery. Suppose I have

select c.*,
(select empno from cust_emp ce,emp e
where ce.emp_id=e.emp_id and ce.cust_id=c.cust_id
and ce.primary='Y') primary_emp
from cust c;

This would perform fine for cust_id = 1 or cust_id in (1,2,...) but would tank if I exposed this in a view and a user queries

select * from v where primary_emp=1

Tom Kyte
September 21, 2004 - 7:39 am UTC

of course. but you are very confusing -- cause we were talking about the difference between

show me ALL customers and if....
vs
show me ALL customer with .....

not performance of a scalar subquery with a predicate on the column that is the scalar subquery.


You want a view that outer joins from CUST -> EMP for queries against cust.

You want a normal (inner) join from CUST -> EMP for any queries that filter on EMP attributes.
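
For illustration, the two shapes side by side -- the view names are made up, the table and column names follow the earlier posts in this thread:

-- "show ALL customers, and if they have a related emp, show that too"
create or replace view cust_all_v as
select c.cust_id, e.emp_last_name
  from cust c, cust_emp ce, emp e
 where c.cust_id = ce.cust_id(+)
   and ce.emp_id = e.emp_id(+);

-- "show customers such that ..." -- the one to use when you will be
-- filtering on EMP attributes such as emp_last_name
create or replace view cust_with_emp_v as
select c.cust_id, e.emp_last_name
  from cust c, cust_emp ce, emp e
 where c.cust_id = ce.cust_id
   and ce.emp_id = e.emp_id;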

A reader, September 21, 2004 - 2:08 pm UTC

"update cust_emp
set primary = decode( empno, :empno, 'Y', 'N' )
where cust_id = :cust_id
and (primary = 'Y' or empno = :empno);

Now, just add to the predicate what you need to make it avoid that. What predicate might you "AND" into that to make it skip that update"

Hm, maybe I am being dense here, but I can't see it. If I add 'and empno != :empno' it would make the toggle not work?


Tom Kyte
September 21, 2004 - 2:18 pm UTC

ops$tkyte@ORA9IR2> variable cust_id number;
ops$tkyte@ORA9IR2> variable empno number;

ops$tkyte@ORA9IR2> exec :cust_id := 1; :empno := 42;

PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select * from t;
 
   CUST_ID      EMPNO P
---------- ---------- -
         1         42 Y
         1         55 N
         1         60 N

Ok, for cust_id = 1, set empno = 42 to be "the one"

 
ops$tkyte@ORA9IR2> update t
  2     set primary = decode( empno, :empno, 'Y', 'N' )
  3   where cust_id = :cust_id
  4     and (primary = 'Y' or empno = :empno)
  5     and NOT (primary = 'Y' and empno = :empno)
  6  /

0 rows updated.

ops$tkyte@ORA9IR2> select * from t;
 
   CUST_ID      EMPNO P
---------- ---------- -
         1         42 Y
         1         55 N
         1         60 N
 
Ok, cust_id = 1, empno = 55, make them the ONE


ops$tkyte@ORA9IR2> exec :empno := 55;
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> update t
  2     set primary = decode( empno, :empno, 'Y', 'N' )
  3   where cust_id = :cust_id
  4     and (primary = 'Y' or empno = :empno)
  5     and NOT (primary = 'Y' and empno = :empno)
  6  /

2 rows updated.

ops$tkyte@ORA9IR2> select * from t;
 
   CUST_ID      EMPNO P
---------- ---------- -
         1         42 N
         1         55 Y
         1         60 N
 
ops$tkyte@ORA9IR2>
 

Scalar subquery

A reader, September 21, 2004 - 2:12 pm UTC

Sorry for the confusion -- this kind of deviates from the topic on this page, but you brought up an interesting point:

"not performance of a scalar subquery with a predicate on the column that is the scalar subquery"

I have many views that use a scalar subquery for various reasons (to avoid outer join, to pull a single piece of information from another table without joining, etc).

As soon as I expose it in a view, there is nothing to stop users, BI-tools, etc from putting a predicate on this 'column that is the scalar subquery'.

Performance tanks.

How can I avoid this? Make predicates on the scalar subquery column faster?

Thanks

Tom Kyte
September 21, 2004 - 2:23 pm UTC

you cannot really -- you have to

a) run the scalar subquery
b) evaluate the predicate

basically, the user is saying "give me all of the rows and one by one we will do this thing"

If they are putting predicates on this, and the goal was "remove outer join", it means "outer join was not necessary" in the first place and this should just be a "join" without the "outer"

Scalar subquery

A reader, September 21, 2004 - 5:54 pm UTC

Yes, I realize that, I guess what I was really asking is there a way to throw an error when a predicate is applied on a certain column in a view? Something like

'Sorry, this column is read-only, I can't let you apply a predicate to it'

? Thanks

Tom Kyte
September 21, 2004 - 7:28 pm UTC

that would violate the rules of SQL.

and it will not kill performance any more than:

select * from view;

will -- that is

select * from view where column_that_is_scalar_subquery = 5;

and

select * from view;

will do roughly the same work (i'd prefer the one with the predicate -- at least it won't return *everything* over the network)

Aggregate in scalar subquery

A reader, September 21, 2004 - 9:20 pm UTC

Suppose I have the following

create or replace view v as
select
cust_id,
(select sum(sales) from cust_sales where cust_id=c.cust_id) sales,
...
from cust c;

This is a valid use of a scalar subquery, right? To present some 1-m information at a summary level.

Of course, since it is exposed in the view, users can apply the predicate to it

select * from v where sales > :n

This kills performance. How can I tune this kind of query?

Thanks

Tom Kyte
September 22, 2004 - 7:32 am UTC

as stated, that kills performance no more, no less than:

select * from v;

does. In fact, I would say "it is better performing than select * from v" -- you should be happy.


If that view was frequently accessed
AND
if that predicate was frequently applied
THEN
a materialized view might be applicable
or
(gasp, dare I say it...) denormalization of the sum(sales) into the cust
table directly... seems every time I say that, someone yells really loud.
end if
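
If the materialized view route fits, a minimal sketch might be the following (assuming cust_sales really is just cust_id/sales as used in the view above; the extra count columns are there only because fast refresh of a SUM needs them):

create materialized view log on cust_sales
  with rowid (cust_id, sales) including new values;

create materialized view cust_sales_mv
  refresh fast on commit
  as
  select cust_id, sum(sales) sum_sales,
         count(sales) cnt_sales, count(*) cnt
    from cust_sales
   group by cust_id;

The view -- or the reports that filter on the total -- could then read sum_sales from the MV instead of running the scalar subquery row by row.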



David Aldridge, September 22, 2004 - 11:04 am UTC

>> (gasp, dare I say it...) denormalization of the sum(sales) into the cust table directly... seems every time I say that, someone yells really loud.

Fortunately I'm not the yelling sort ... but is there a way of avoiding this denormalization being a serializing process on the modification of sales for that customer?

Tom Kyte
September 22, 2004 - 11:49 am UTC

well, the MV will be too -- there is no way to store data at the "level of a customer" without serializing at the "level of a customer"

the MV will serialize upon committing -- only one person at a time can apply the accumulated transactional information to the rolled up MV.

A reader, September 22, 2004 - 12:02 pm UTC

"well, the MV will be too -- there is no way to store data at the "level of a customer" without serializing at the "level of a customer""

Well, that's like saying that an insert into a table is a serial process. It isn't -- Oracle is able to handle multiple concurrent inserts just fine using row-level locking, freelists, etc. Why is this any different?

"the MV will serialize upon committing -- only one person at a time can apply the accumulated transactional information to the rolled up MV"

How does this compare to a row-level trigger on the sales table that applies the accumulated information to the cust table?

Thanks

Tom Kyte
September 22, 2004 - 1:04 pm UTC

"at the level of a customer"

show me inserts into a table -- at the LEVEL OF A CUSTOMER....?

give me a "for example" to show my premise "wrong"




A reader, September 22, 2004 - 2:26 pm UTC

Maybe I misunderstood what you meant by "store data at the level of a customer". I understand that to mean "insert into cust ..."

"insert into cust ..." is not a serial operation, is it? Yes, update cust set col=... where cust_pk=.. is a serial/blocking operation, only one user can do that at a time. In that sense, yes, updating the cust.sum_sales column would be a serializing operation no matter how it is done (on commit MV, row-level trigger, etc)

Thanks

Tom Kyte
September 22, 2004 - 3:02 pm UTC

they want a sum of sales at the customer level.

right now, there are two tables:

cust
cust_sales


create or replace view v as
select
cust_id,
(select sum(sales) from cust_sales where cust_id=c.cust_id) sales,
...
from cust c;


they would like an attribute at the customer level, in the customer table, that is the sum(sales)

the only ways to maintain this customer level information (cust_id -> attribute, attribute is 'at the customer level') will involve serialization at some point.


the MV will do it upon commit.
the "denormalize" will do it at the statement level.

<quote>
Yes, update cust set
col=... where cust_pk=..
</quote>

is exactly what you do when you "store data at the customer level"
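
A sketch of the "denormalize" option being discussed -- assuming a sum_sales column has been added to cust, which is not something from this thread -- just to show where the serialization comes from: every sale for a customer ends up updating that one cust row.

create or replace trigger cust_sales_sum_trigger
after insert or update of cust_id, sales or delete on cust_sales
for each row
begin
   if (inserting or updating)
   then
      -- add the new sale amount to the customer it now belongs to
      update cust set sum_sales = nvl(sum_sales,0) + nvl(:new.sales,0)
       where cust_id = :new.cust_id;
   end if;
   if (updating or deleting)
   then
      -- back the old sale amount out of the customer it used to belong to
      update cust set sum_sales = nvl(sum_sales,0) - nvl(:old.sales,0)
       where cust_id = :old.cust_id;
   end if;
end;
/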


Override

A reader, October 18, 2004 - 1:40 pm UTC

<quote>
2) seems like a two step to me:

update cust_emp
set primary = 'N'
where primary = 'Y'
and (cust_id,empno) not in ( select cust_id, empno from this_table );

update (select primary
from cust_emp, this_table
where cust_emp.cust_id = this_table.cust_id
and cust_emp.empno = this_table.empno
and cust_emp.primary <> 'Y' )
set primary = 'Y';
</quote>

My cust_emp table also has a primary_override flag. If set to Y, this means "don't touch this row". How can I incorporate this into the 2 updates above?

Thanks

Tom Kyte
October 18, 2004 - 2:02 pm UTC

not sure what you mean.

but if that row is "to not be updated in any case", just add "and column <> 'Y'" i suppose

A reader, October 18, 2004 - 2:13 pm UTC

That's what I did, but suppose cust_id=1,emp_id=1 is the primary and it is also marked override='Y'. My new_table contains cust_id=1,emp_id=2 as the new primary.

For the (1,1) row,

update cust_emp
set primary = 'N'
where primary = 'Y'
and (cust_id,empno) not in ( select cust_id, empno from this_table )
and primary_override != 'Y';

This step will NOT update the row since primary_override is marked Y. That's fine.

But the next update

update (select primary
from cust_emp, this_table
where cust_emp.cust_id = this_table.cust_id
and cust_emp.empno = this_table.empno
and cust_emp.primary <> 'Y'
and cust_emp.primary_override != 'Y')
set primary = 'Y';

will try to set the (1,2) row to 'Y' since that row is not marked override=Y and that will violate my unique constraint
create unique index on cust_emp(decode(primary,'Y',cust_id));

I guess what I need, instead of the primary_override != 'Y' predicate, is

and not exists (select null from cust_emp b where a.cust_id=b.cust_id and b.primary_override='Y')

i.e. before updating the row, check if any other row (including the current row) is marked as "don't touch". Basically, update the primary only if the current primary is NOT an override.

Thanks



A reader, October 18, 2004 - 2:13 pm UTC

Oops, sorry, forgot to close the bold tag!

A reader, October 18, 2004 - 6:57 pm UTC

How'd you put in that boldfaced font? I thought Tom allowed only simple ASCII chars here...??

Tom Kyte
October 19, 2004 - 8:52 am UTC

pound-signB text to be in bold pound-signb


replace pound-sign with # of course.

Why not use a materialized view instead

A reader, November 20, 2005 - 4:24 am UTC

Hi Tom,

referring to the original question (count of employees between 3 and 8 for each dept), why not use a materialized view? In a few other similar posts, like https://asktom.oracle.com/Misc/oramag/on-rollups-merges-and-moves.html and http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:21389386132607

it seems you preferred materialized views over triggers, so why not in this case?
Am I missing something here?

Tom Kyte
November 20, 2005 - 8:29 am UTC

There are many ways to crack every nut.

There are dozens of solutions to every problem.

There are many ways to do the same thing.

(if I kept on going... well - I tend to give a solution or two, not every possible solution)
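
For completeness, the MV flavour the reviewer is asking about -- enforcing the 3-to-8 rule from the original question -- would look roughly like this. A sketch only: the rule moves into a check constraint on an on-commit materialized view, and you should test how a failed on-commit refresh is reported on your version before relying on it.

create materialized view log on emp
  with rowid (deptno) including new values;

create materialized view emp_dept_cnt_mv
  refresh fast on commit
  as
  select deptno, count(*) cnt
    from emp
   group by deptno;

-- a department deleted down to zero rows simply disappears from the MV,
-- so only the 3-to-8 band needs to be declared here
alter table emp_dept_cnt_mv
  add constraint dept_cnt_3_to_8
  check ( cnt between 3 and 8 );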


'current' instead of 'preferred' address -- sysdate screws up the index

Aalbert, December 12, 2005 - 10:49 am UTC

Suppose we had an address-validity table like so (simplified example):

CREATE TABLE address_validity (
address_id NUMBER,
valid_from DATE,
valid_to DATE
);

and we define an address to be current if the sysdate falls between valid_from and valid_to, with NULL being the lowest value for the former and the highest value for the latter. In other words, the following function determines whether an address (or anything else) is current:

CREATE FUNCTION isCurrent(valid_from IN DATE, valid_to IN DATE) RETURN NUMBER
IS
now DATE := SYSDATE;
BEGIN
IF
(valid_from IS NULL OR valid_from <= now)
AND
(valid_to IS NULL OR valid_to > now)
THEN
RETURN 1;
ELSE
RETURN NULL;
END IF;
END;
/

We want to allow duplicate address_ids, as long as a maximum of 1 is current at all times, so we could've used this function to create a unique index on address_validity like so:

CREATE UNIQUE INDEX uidx_address
ON address_validity(address_id, isCurrent(valid_from, valid_to));

...if only the function were DETERMINISTIC. Alas, it isn't, since it uses sysdate. We could declare it to be, but in fact it wouldn't be.
I tried passing in 'now' as a third parameter, but that doesn't work either, since it would mean calling the function with SYSDATE as an argument, and I get the complaint "ORA-01743: only pure functions can be indexed" -- which sysdate, of course, isn't.

Tom, is there any way of accomplishing this?

Tom Kyte
December 12, 2005 - 12:33 pm UTC

seems that the valid_to should be NULL for "current" addresses (eg: an address always has a valid_from -- the time of insertion, for example -- but not a valid_to; it is valid "until some unknown time in the future").

Then

create index current_address_index
on t( address_id, valid_to );


and a query

select * from t where address_id = :x and valid_to is null;

will use the index quite nicely. No function based index needed at all - and valid_to being NULL for the current address makes sense here.
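
And if you also want to enforce the original "at most one current address per address_id" rule declaratively under that convention, the same kind of function-based unique index used earlier on this page works -- a sketch:

create unique index one_current_address
  on address_validity( case when valid_to is null then address_id end );

Rows with a non-null valid_to produce an entirely NULL key and so are never indexed; only the "current" rows compete for uniqueness.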

Uh-oh

Aalbert, December 12, 2005 - 11:19 am UTC

I've been reading http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:474221407101 and now am a little more pessimistic about it...

unfortunately...

Aalbert, December 13, 2005 - 3:33 am UTC

"seems that the valid_to should be NULL for "current" addresses (eg: an address always has a valid_from - regardless, the time of insertion for example but not a valid_to - it is valid "until some unknown time in the future")."

This is quite right most of the time, and I've added constraints to that effect -- UNIQUE(address_id, valid_to) and UNIQUE(address_id, valid_from) -- but when someone informs us of a change in address, we know when that will take place, so besides knowing the valid_from for the new address, we now have a valid_to for the current address as well.

But I realised that it is in fact the same problem as the 'no overlapping values' one in the link above, so it's HARD, and there's no constraint or unique index that will enforce this rule.
It is weird though -- it is such a simple rule in plain English (or Dutch), but so difficult to enforce.

Well, thanks for your time anyway, Tom.

Assertions in Oracle

anuradha, February 22, 2010 - 3:45 am UTC

Though Oracle does not support CREATE ASSERTION, there exists a CREATE ASSERTION privilege which can be given to users at the time of user creation. If assertions cannot be created, why should we have such an option?
Tom Kyte
March 01, 2010 - 6:47 am UTC

where is it documented?
