Best way to enforce cross-row constraints?

Connor McDonald

Thanks for the question, Kurt.

Asked: December 20, 2011 - 12:16 pm UTC

Last updated: January 31, 2022 - 2:24 am UTC

Version: 10g

Viewed 10K+ times

You Asked

I use the database to declare (and enforce) as much application logic as I can.
What I'd like to do is to enforce application constraints across related rows, if
possible.

As a contrived example, suppose we have a table of Agreements and a second
table of participants within an agreement, with a column indicating each
participant's percentage.

create table agreements
  (
    agreement_id    number         not null
  , agreement_name  varchar2( 10 ) not null
  , primary key (
                  agreement_id
                )
  ) ;

create table participant
  (
    participant_name  number not null
  , agreement_id      number not null
  , participation_pct number not null
  , primary key (
                  participant_name
                , agreement_id
                )
  ) ;


I create my applications to use PL/SQL packages to insert, update and delete rows in the database (application APIs, not table APIs).

Using these application APIs makes it trivial to programmatically enforce the restriction that total participation within one agreement must sum to 100
percent, but can you think of a way I can declare a database constraint to
enforce cross-row logic such as this?

Thanks in advance!

and Tom said...

You can do it with an on commit fast refresh materialized view - but I'm not necessarily recommending that. It would cause some amount of serialization during the commit phase. You could do better in your API since your API would only have to serialize at the agreement_id level. That is - your API can simply select for update the single row in the agreements table, operate in isolation on the participant rows for that agreement and do its check.
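For illustration, a minimal sketch of that API approach against the tables above (the procedure name, collection types and error code are illustrative assumptions, not from the original):

create or replace procedure set_participation(
    p_agreement_id in number,
    p_names        in sys.odcinumberlist,   -- participant names
    p_pcts         in sys.odcinumberlist )  -- matching percentages
as
    l_locked_id agreements.agreement_id%type;
    l_sum       number;
begin
    -- serialize at the agreement level: only one session at a time
    -- may work on this agreement's participants
    select agreement_id into l_locked_id
      from agreements
     where agreement_id = p_agreement_id
       for update;

    delete from participant where agreement_id = p_agreement_id;

    forall i in 1 .. p_names.count
        insert into participant (participant_name, agreement_id, participation_pct)
        values (p_names(i), p_agreement_id, p_pcts(i));

    -- the check: no other session can touch these rows until we commit
    select sum(participation_pct) into l_sum
      from participant
     where agreement_id = p_agreement_id;

    if l_sum <> 100 then
        raise_application_error( -20001,
            'participation for agreement '||p_agreement_id||
            ' sums to '||l_sum||', not 100' );
    end if;
end;
/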



Here is what the materialized view would look like.

ops$tkyte%ORA11GR2> create table agreements
  2    (
  3      agreement_id    number         not null
  4    , agreement_name  varchar2( 10 ) not null
  5    , primary key (
  6                    agreement_id
  7                  )
  8    ) ;

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create table participant
  2    (
  3      participant_name  number not null
  4    , agreement_id      number not null
  5    , participation_pct number not null
  6    , primary key (
  7                    participant_name
  8                  , agreement_id
  9                  )
 10    , foreign key (agreement_id) references agreements
 11    ) ;

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create materialized view log
  2  on participant with rowid, (participation_pct) including new values;

Materialized view log created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create materialized view participant_pct_is_100
  2  refresh fast
  3  on commit
  4  as
  5  select agreement_id, sum(participation_pct) ppct
  6    from participant
  7   group by agreement_id
  8  /

Materialized view created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> alter materialized view participant_pct_is_100
  2  add constraint check_ppct_is_100
  3  check (ppct = 100)
  4  deferrable;

Materialized view altered.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert into agreements values ( 1, 'x' );

1 row created.

ops$tkyte%ORA11GR2> insert into participant values ( 100, 1, 40 );

1 row created.

ops$tkyte%ORA11GR2> insert into participant values ( 200, 1, 40 );

1 row created.

ops$tkyte%ORA11GR2> insert into participant values ( 300, 1, 20 );

1 row created.

ops$tkyte%ORA11GR2> commit;

Commit complete.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert into agreements values ( 2, 'x' );

1 row created.

ops$tkyte%ORA11GR2> insert into participant values ( 10, 2, 40 );

1 row created.

ops$tkyte%ORA11GR2> insert into participant values ( 20, 2, 40 );

1 row created.

ops$tkyte%ORA11GR2> insert into participant values ( 30, 2, 21 );

1 row created.

ops$tkyte%ORA11GR2> commit;
commit
*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-02290: check constraint (OPS$TKYTE.CHECK_PPCT_IS_100) violated





If you wanted to do this via a trigger - it could actually be done fairly easily (and correctly!). We'll maintain a rollup of the participation percentages in the agreements table. The trigger on participant will update this row. That will do the serialization at the agreement_id level we need. A deferrable constraint will do the checking for us:

ops$tkyte%ORA11GR2> create table agreements
  2    (
  3      agreement_id    number         not null
  4    , agreement_name  varchar2( 10 ) not null
  5    , ppct            number default 0 constraint ppct_must_be_100 check (ppct=100) deferrable initially deferred
  6    , primary key (
  7                    agreement_id
  8                  )
  9    ) ;

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create table participant
  2    (
  3      participant_name  number not null
  4    , agreement_id      number not null
  5    , participation_pct number not null
  6    , primary key (
  7                    participant_name
  8                  , agreement_id
  9                  )
 10    , foreign key (agreement_id) references agreements
 11    ) ;

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create or replace trigger participant_trigger
  2  after insert or update or delete on participant for each row
  3  begin
  4          if (inserting or updating)
  5          then
  6                  update agreements set ppct = nvl(ppct,0)+:new.participation_pct where agreement_id = :new.agreement_id;
  7          end if;
  8          if (updating or deleting)
  9          then
 10                  update agreements set ppct = nvl(ppct,0)-:old.participation_pct where agreement_id = :old.agreement_id;
 11          end if;
 12  end;
 13  /

Trigger created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert into agreements (agreement_id,agreement_name) values ( 1, 'x' );

1 row created.

ops$tkyte%ORA11GR2> insert into participant (participant_name,agreement_id,participation_pct) values ( 100, 1, 40 );

1 row created.

ops$tkyte%ORA11GR2> insert into participant (participant_name,agreement_id,participation_pct) values ( 200, 1, 40 );

1 row created.

ops$tkyte%ORA11GR2> insert into participant (participant_name,agreement_id,participation_pct) values ( 300, 1, 20 );

1 row created.

ops$tkyte%ORA11GR2> select * from agreements;

AGREEMENT_ID AGREEMENT_       PPCT
------------ ---------- ----------
           1 x                 100

ops$tkyte%ORA11GR2> commit;

Commit complete.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert into agreements (agreement_id,agreement_name) values ( 2, 'x' );

1 row created.

ops$tkyte%ORA11GR2> insert into participant (participant_name,agreement_id,participation_pct) values ( 10, 2, 40 );

1 row created.

ops$tkyte%ORA11GR2> insert into participant (participant_name,agreement_id,participation_pct) values ( 20, 2, 40 );

1 row created.

ops$tkyte%ORA11GR2> insert into participant (participant_name,agreement_id,participation_pct) values ( 30, 2, 21 );

1 row created.

ops$tkyte%ORA11GR2> select * from agreements;

AGREEMENT_ID AGREEMENT_       PPCT
------------ ---------- ----------
           1 x                 100
           2 x                 101

ops$tkyte%ORA11GR2> commit;
commit
*
ERROR at line 1:
ORA-02091: transaction rolled back
ORA-02290: check constraint (OPS$TKYTE.PPCT_MUST_BE_100) violated


Comments

Cross-row constraint enforcement

Kurt Arthur, December 21, 2011 - 9:37 am UTC

Wow! Not one, but two ways to do what I want ('course, the MV isn't recommended, but still :-)

Thanks for the suggestions, advice and explanations. I'm going to add this into my bag of techniques.

clear, concise, useful

Mark, December 21, 2011 - 10:13 am UTC

The materialized view method has a "cool" factor that I like, but I agree it's not the best method. (Also I am often programming for Standard Edition, which sadly lacks materialized views.)
Tom Kyte
December 21, 2011 - 11:06 am UTC

Standard edition has materialized views

http://docs.oracle.com/cd/E11882_01/license.112/e10594/editions.htm#sthref134


it does not support query rewrite - but it does support read only materialized views.

Watch out for rounding

Brian Camire, December 21, 2011 - 4:02 pm UTC

One problem with enforcing this kind of constraint is rounding. For example, if you have three participants with an equal 1/3 share, the trigger approach as given (and probably the materialized view approach as well) will fail as follows:
SQL> insert into participant
  2  (participant_name,agreement_id,participation_pct) values ( 100, 1, 100/3 );


1 row created.

SQL> insert into participant
  2  (participant_name,agreement_id,participation_pct) values ( 200, 1, 100/3 );


1 row created.

SQL> insert into participant
  2  (participant_name,agreement_id,participation_pct) values ( 300, 1, 100/3 );


1 row created.

SQL> commit;
commit
*
ERROR at line 1:
ORA-02091: transaction rolled back
ORA-02290: check constraint (TEMP.PPCT_MUST_BE_100) violated

The reason is that 1/3 cannot be expressed exactly as a NUMBER, and the sum of the percentages is slightly less than 100 as a result.

You might work around this by allowing some tolerance around the PPCT_MUST_BE_100 constraint (perhaps with check constraints that require the individual percentages to be greater than this tolerance).

Another way might be to use whole number "shares" instead of percentages.
Tom Kyte
December 21, 2011 - 4:26 pm UTC

Yup, agreed, that is an implementation detail they would have to figure out.
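For instance, a hedged sketch of the tolerance idea applied to the ppct column from the trigger example above (the band width is an arbitrary assumption):

alter table agreements drop constraint ppct_must_be_100;

-- accept anything within 0.00001 of 100 instead of strict equality
alter table agreements add constraint ppct_must_be_100
  check (ppct between 99.99999 and 100.00001)
  deferrable initially deferred;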

Can a virtual column do this?

A reader, December 21, 2011 - 9:39 pm UTC

Hi Tom,

Just curious whether a virtual column can achieve this kind of goal.

My basic idea is a virtual column whose value is current_val + sum_of_all_existing values, plus a check (<= 100) constraint.

Below is a function - very specific, but it gets the sum of a column (pct) for a given table and id:
create or replace function fun(p_id int, tname varchar2) return number deterministic as
  cnt number;
  str varchar2(2000);
begin
  str := 'select sum(pct) from '||tname||' where id='||p_id;
  execute immediate str into cnt;
  return cnt;
end;

Use this function in the virtual column definition:
create table test(id int, name varchar2(10), pct number, cc generated always as (leo(id,'test')+pct) virtual, constraint con1 check(cc<=100));

I can create them successfully, but any query or DML on the table hits the error below:
ERROR at line 1:
ORA-06552: PL/SQL: Statement ignored
ORA-06553: PLS-225: subprogram or cursor 'LEO' reference is out of scope

I know my idea is crazy and almost impossible, but I would still like to hear more from you.

Meanwhile, since a virtual column needs its function to be 'deterministic', I have another question for you:

Why, even when a function is not deterministic, can I define it as 'deterministic' and still get correct results? Is this a way to 'cheat' Oracle?

Below are my test SQLs:
create table test(id int, a int);
create or replace function f1(p_id int) return number deterministic as
  cnt number;
begin
  select sum(a) into cnt from test where id = p_id;
  return cnt;
end;
/
insert into test values(1,10);
select f1(1) from dual; -- got 10 here
insert into test values(1,10);
select f1(1) from dual; -- got 20 here; I think it should be 10, since within the same session the result should be cached due to deterministic

Tom Kyte
December 22, 2011 - 10:01 am UTC

create or replace function fun(p_id int, tname varchar2) return number
deterministic as
  cnt number;
  str varchar2(2000);
begin
  str := 'select sum(pct) from '||tname||' where id='||p_id;
  execute immediate str into cnt;
  return cnt;
end;


how is that function even remotely deterministic? By definition it is not.

and it doesn't use bind variables. two strikes.

and it is subject to sql injection.... you're out.


no, this cannot work.



Why, even when a function is not deterministic, can I define it as
'deterministic' and still get correct results? Is this a way to 'cheat' Oracle?


No, it is a way you used to LIE to Oracle. You told us, you asserted, that this function is deterministic. We made you do that so you would know to make sure it in fact was deterministic. If you lie to us, we'll lie to you. Go ahead - create an index on that function (you can since you defined it deterministic). But good luck using that index to actually find data later!!!


select f1(1) from dual; -- got 20 here; I think it should be 10, since within
the same session the result should be cached due to deterministic



deterministic does not cause the function to be cached (that is the RESULT_CACHE directive - but those functions are known to not be deterministic by default and we invalidate the cache when the underlying tables change).

deterministic allows us to use that function in certain places such as function based indexes.
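To illustrate the difference, a minimal sketch of the reader's f1 using RESULT_CACHE instead of a (false) deterministic assertion; here Oracle tracks the dependency on TEST and invalidates cached results when the table changes:

create or replace function f1(p_id int) return number
  result_cache  -- results cached across calls; invalidated when TEST changes
as
  cnt number;
begin
  select sum(a) into cnt from test where id = p_id;
  return cnt;
end;
/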




think about your logic here for a minute...

You create a table T as:

( id number, ...some columns...
virtual_column generated always as ( f('x',id) ),
constraint x check (virtual_column <= 100)
)

Now, you've lied to us and said that f('x') is deterministic. So when you create a row in t, we would evaluate f('x'), check it - and assume it *will not change*.

Now, you change the data in the 'x' table - we don't know there is a relationship here - we would not re-evaluate anything and the data integrity is gone.


it cannot possibly work.

In the year 2042...

Toon Koppelaars, December 22, 2011 - 2:46 am UTC

In the year 2042 we can do this as follows:

SQL> create or replace assertion sum_pct_is_100 as
  2  check(not exists 
  3         (select 'a violation'
  4          from (select p.agreement_id, sum(p.participation_pct) as sum_pct
  5                from participant p
  6                group by p.agreement_id) x
  7          where x.sum_pct <> 100))
  8  deferrable initially deferred
  9  /


Assertion created.


And Oracle will have an efficient implementation under the covers, which:

- only validates this assertion when either:
  - a new row was inserted into participant, or
  - an existing row was deleted from participant, or
  - in an existing row either agreement_id, or participation_pct, or both have been updated.
- will do an efficient 'delta' validation, i.e. only validate the sum(participation_pct) for agreement_id's that might have been affected.
- will serialize transactions that simultaneously:
  - execute some kind of dml that will trigger (delta) validation of the assertion (see above), and
  - do so only for transactions that involve the same agreement_id.

All of this will be taken care of automagically under the covers, by Oracle, somewhere around the year 2042.

The serialization performed by the trigger implementation shown is actually too strong.
It prevents transactions from updating agreement rows for which some other transaction is currently 'maintaining' participant percentages.
This should not be necessary. Not introducing the redundant column in agreement and clever usage of DBMS_LOCK can fix this.

Tom Kyte
December 22, 2011 - 10:10 am UTC

This should not be necessary. Not introducing the redundant column in agreement
and clever usage of DBMS_LOCK can fix this.


at the expense of having to range scan the participant table to sum up the value every time.

and how do you get it to do the check at commit time - since you'll likely have to allow it to go inconsistent during the transaction?

can you at least outline the dbms_lock idea? I don't envision it as workable at the transaction level - which is pretty much mandatory for something like this - you would never be able to delete, and you would get ONE shot at inserting otherwise.

Martin Rose, December 22, 2011 - 8:34 am UTC

I agree with Toon regarding Tom's trigger code being overly strong on serialisation, but I disagree with his future 'assertion' statement.

Doing that would entail a GROUP BY any time any data change is made -- no better than Tom's materialised view example in that respect, in fact. Although a nice idea, I cannot see how Oracle could perform such a delta optimisation as you suggest (you would have to declare said delta optimisation code yourself along with your table-level code to achieve that, IMO, in which case you might as well write it as triggers from the outset).

But then we are arguing over something that is entirely hypothetical..... :-)

Here is some real code making use of some 'clever' DBMS_LOCK code, as Toon suggests.

It prevents overlapping dates across records for the same unique identifier.
Note, it stops when bad data for one identifier is found, and actually prints out all the bad dates for that identifier with RAISE_APPLICATION_ERROR, to help with debugging the data.


PTF_ID      VALID_FROM_DATE    VALID_TO_DATE

   1          01-JAN-2000       31-JAN-2000
   1          01-FEB-2000       28-FEB-2000
   1          01-MAR-2000       31-MAR-2000
   1          15-MAR-2000       19-MAR-2000  <-- Bad Record
   1          01-APR-2000       30-APR-2000
   1          01-JUN-2000       30-JUN-2000

   2          01-JAN-2000       31-JAN-2000
   2          01-FEB-2000       28-FEB-2000
   2          01-MAR-2000       31-MAR-2000

   3          01-JAN-2000       31-JAN-2000
   3          01-FEB-2000       28-FEB-2000
   3          01-MAR-2000       31-MAR-2000


CREATE OR REPLACE PACKAGE PTREF_TRG AS
  LockHandle VARCHAR2(128);
  --
  TYPE PTREF_TYPE IS TABLE OF ROWID INDEX BY BINARY_INTEGER;
  --
  PTREF_ROWID          PTREF_TYPE;
  PTREF_NULL  CONSTANT PTREF_TYPE := PTREF_ROWID;
END PTREF_TRG;
/

CREATE OR REPLACE PACKAGE BODY PTREF_TRG AS
  PROCEDURE Allocate_Lock_Handle IS
  --
  -- DBMS_LOCK.ALLOCATE_UNIQUE does an implicit COMMIT,
  -- so must be encapsulated within an autonomous transaction
  -- since COMMITs cannot occur inside DB triggers.
  --
  PRAGMA AUTONOMOUS_TRANSACTION;
  --
  BEGIN
    DBMS_LOCK.ALLOCATE_UNIQUE ('PTREF_A_IU_S_TRG', LockHandle);
  END Allocate_Lock_Handle;
  --
BEGIN
  Allocate_Lock_Handle;
END PTREF_TRG;
/

CREATE OR REPLACE TRIGGER PTREF_B_IU_S_TRG
BEFORE INSERT OR UPDATE OF PTF_ID, VALID_FROM_DATE, VALID_TO_DATE ON PTREF
BEGIN
  PTREF_TRG.PTREF_ROWID := PTREF_TRG.PTREF_NULL;
END PTREF_B_IU_S_TRG;
/

CREATE OR REPLACE TRIGGER PTREF_A_IU_R_TRG
AFTER INSERT OR UPDATE OF PTF_ID, VALID_FROM_DATE, VALID_TO_DATE ON PTREF
FOR EACH ROW
BEGIN
  PTREF_TRG.PTREF_ROWID(PTREF_TRG.PTREF_ROWID.COUNT + 1) := :new.ROWID;
END PTREF_A_IU_R_TRG;
/


CREATE OR REPLACE TRIGGER PTREF_A_IU_S_TRG
AFTER INSERT OR UPDATE OF PTF_ID, VALID_FROM_DATE, VALID_TO_DATE ON PTREF
DECLARE
  NO_ERROR EXCEPTION;
  --
BEGIN
  DECLARE
    PTREF_rec PTREF%ROWTYPE;
    --
    Error_Message LONG := NULL;
    --  
    -- Prevent other transactions from running this same trigger code
    -- until this session has committed
    DEADLOCK_DETECTED CONSTANT INTEGER := 2;
    Lock_Status       CONSTANT INTEGER := DBMS_LOCK.REQUEST
                                             (LOCKHANDLE        => PTREF_TRG.LockHandle,
                                              RELEASE_ON_COMMIT => TRUE);
  BEGIN
    IF Lock_Status = DEADLOCK_DETECTED THEN
      RAISE_APPLICATION_ERROR (-20000, 'Internal Error: Deadlock detected while INSERTing/UPDATEing into PTREF.');
    END IF;
    --
    --
    FOR i IN 1..PTREF_TRG.PTREF_ROWID.COUNT LOOP
      BEGIN
        SELECT *
        INTO   PTREF_rec
        FROM   PTREF
        WHERE  ROWID = PTREF_TRG.PTREF_ROWID(i);
        --
      EXCEPTION
        -- In case another session has deleted the record
        -- since recording its ROWID in the row-level trigger
        WHEN NO_DATA_FOUND THEN NULL;
      END;
      --
      --
      FOR Overlapping_Dates_rec IN
           (SELECT TO_CHAR(VALID_FROM_DATE, 'DD-MON-YYYY') ||' to '||
                   TO_CHAR(VALID_TO_DATE,   'DD-MON-YYYY')  AS OVERLAPPING_RECORD
            FROM   PTREF
            WHERE  ROWID           <> PTREF_TRG.PTREF_ROWID(i)
            AND    PTF_ID           = PTREF_rec.PTF_ID
            AND   (VALID_TO_DATE   >= PTREF_rec.VALID_FROM_DATE
            AND    VALID_FROM_DATE <= PTREF_rec.VALID_TO_DATE)
            ORDER BY VALID_FROM_DATE) LOOP
            --
        Error_Message := Error_Message || Overlapping_Dates_rec.OVERLAPPING_RECORD ||', ';
      END LOOP;
      --
      IF Error_Message IS NOT NULL THEN
        RAISE_APPLICATION_ERROR (-20001, 'Error: Overlapping record(s) found for PTF_ID '||
          TO_CHAR(PTREF_rec.PTF_ID) ||' when trying to INSERT/UPDATE dates '||
          TO_CHAR(PTREF_rec.VALID_FROM_DATE, 'DD-MON-YYYY') ||' and '|| 
          TO_CHAR(PTREF_rec.VALID_TO_DATE,   'DD-MON-YYYY') ||
        ' into PTREF.   Overlapping dates are: '|| RTRIM(Error_Message, ', ') ||'.');
      END IF;
    END LOOP;
    --
    --
    RAISE NO_ERROR;
    --
  EXCEPTION
    WHEN OTHERS THEN
      -- Clean up the memory used by the package
      PTREF_TRG.PTREF_ROWID.DELETE;
      --
      RAISE;
  END;
  --
EXCEPTION
  WHEN NO_ERROR THEN NULL;
  WHEN OTHERS   THEN RAISE;
END PTREF_A_IU_S_TRG;
/




Tom Kyte
December 22, 2011 - 10:17 am UTC

I don't see how this dbms_lock example helps in this case.


This constraint MUST be deferrable - it has to be - and triggers are 'immediate'. The check needs to be done either "on demand" (set constraint x immediate) or "on commit"

otherwise you would get one shot to insert (you cannot use insert with the same agreement_id twice - if the first one worked - the second one cannot possibly work). you cannot delete.
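For illustration, a minimal sketch of the 'on demand' variant, reusing the deferred PPCT_MUST_BE_100 constraint from the trigger example above:

insert into participant (participant_name, agreement_id, participation_pct)
  values ( 100, 1, 60 );
insert into participant (participant_name, agreement_id, participation_pct)
  values ( 200, 1, 40 );

-- validate now rather than at commit; raises ORA-02290 here if the
-- rolled-up sum for any pending agreement is not 100
set constraint ppct_must_be_100 immediate;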

You might be able to do something funky with a merge that can insert, update and delete but that would be so far beyond obscure I would not even go there.



ugh: WHEN OTHERS THEN RAISE;


why do that - other than to hide the true error line number when you get an error?

DBMS_LOCK limitations with serializable isolation level

Brian Camire, December 22, 2011 - 10:30 am UTC

DBMS_LOCK cannot protect against certain interactions when one or more of the concurrent transactions is using serializable isolation level.

To illustrate using Martin's example and code...

Start with an empty table.
SQL> TRUNCATE TABLE PTREF;

Table truncated.


In one session (call it session A), start a new transaction using serializable isolation level.
SQL> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

Transaction set.


In another session (call it session B), insert a row and commit.

SQL>  INSERT INTO
  2   PTREF
  3   (
  4       PTF_ID,
  5       VALID_FROM_DATE,
  6       VALID_TO_DATE
  7   )
  8   VALUES
  9   (
 10       1,
 11       TO_DATE('01-MAR-2000','DD-MON-YYYY'),
 12       TO_DATE('31-MAR-2000','DD-MON-YYYY')
 13   );

1 row created.

SQL> COMMIT;

Commit complete.


Now, back in session A, see what's in the table.

SQL> SELECT * FROM PTREF;

no rows selected


It appears empty, because session A can't "see" the row that session B inserted, even though session B already committed it. This is because, with serializable isolation level, session A "sees" the table as of the time its transaction started, which was before the time that session B inserted a row.

Since session B has already committed, it has released the DBMS_LOCK, which means there's nothing stopping session A from inserting a row that overlaps with the one that session B inserted.

INSERT INTO
  2  PTREF
  3  (
  4      PTF_ID,
  5      VALID_FROM_DATE,
  6      VALID_TO_DATE
  7  )
  8  VALUES
  9  (
 10      1,
 11      TO_DATE('15-MAR-2000','DD-MON-YYYY'),
 12      TO_DATE('19-APR-2000','DD-MON-YYYY')
 13  );

1 row created.

SQL> COMMIT;

Commit complete.

SQL> SELECT * FROM PTREF;

    PTF_ID VALID_FRO VALID_TO_
---------- --------- ---------
         1 01-MAR-00 31-MAR-00
         1 15-MAR-00 19-APR-00



For more discussion, see http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:474221407101

I'd be interested to see how one can use DBMS_LOCK to get around this problem.
Tom Kyte
December 22, 2011 - 11:09 am UTC

thank you very much - nice.

Some frameworks by default set the isolation to serializable...

and there is no way to prevent its use in an application.


serializable won't affect the "serialized at the agreement level" example as they would get an ORA-08177 when they tried to modify the rolled-up value.
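A hedged illustration of that, using the rolled-up ppct column from the trigger example above (two sessions; the statements are illustrative):

-- session A
set transaction isolation level serializable;
select * from agreements;  -- snapshot for A's transaction is now established

-- session B (read committed), meanwhile:
insert into participant (participant_name, agreement_id, participation_pct)
  values ( 400, 1, 0 );    -- the trigger updates agreements.ppct for agreement 1
commit;

-- back in session A: its trigger must update the same agreements row, which
-- changed after A's snapshot, so the insert fails with
-- ORA-08177: can't serialize access for this transaction
insert into participant (participant_name, agreement_id, participation_pct)
  values ( 500, 1, 0 );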


this, by the way, is why I avoid triggers in general for things like this. You have to think "sideways" when developing them to enforce a constraint like this. It gets very tricky and it is easy to mess up.

Helena Marková, December 23, 2011 - 2:53 am UTC


Defending the trigger idea

Stew Ashton, December 26, 2011 - 8:52 am UTC


I can think of two areas of improvement to the trigger implementation:

1) As Toon said, minimize serialization;
2) Optimize for batch modifications: the trigger implementation is row by row.

To minimize serialization, why not create a third table (similar to the materialized view) that is dedicated to percentages?

Here is an attempt to optimize for eventual batch modifications, using a compound trigger. I think it avoids excessive serialization. Criticisms welcome...
 > CREATE TABLE AGREEMENTS (
  AGREEMENT_ID NUMBER PRIMARY KEY,
  AGREEMENT_NAME VARCHAR2(63) NOT NULL
)
/
table AGREEMENTS created.

> CREATE TABLE PARTICIPANT (
  PARTICIPANT_NAME NUMBER NOT NULL,
  AGREEMENT_ID NUMBER NOT NULL,
  PARTICIPATION_PCT NUMBER NOT NULL,
  CONSTRAINT PK_PARTICIPANT PRIMARY KEY (PARTICIPANT_NAME, AGREEMENT_ID),
  FOREIGN KEY (AGREEMENT_ID) REFERENCES AGREEMENTS
 )
/
table PARTICIPANT created.

> CREATE TABLE PARTICIPATION_PCTS (
  AGREEMENT_ID NUMBER PRIMARY KEY REFERENCES AGREEMENTS,
  PARTICIPATION_PCT NUMBER
    CONSTRAINT PPCT_MUST_BE_100 CHECK (
      PARTICIPATION_PCT BETWEEN 99.99999 AND 100.00001
      OR PARTICIPATION_PCT BETWEEN -.00001 and .00001
    ) deferrable initially deferred
 )
/
table PARTICIPATION_PCTS created.

> CREATE or replace TYPE T_PARTICIPANT_PCT AS OBJECT (
 AGREEMENT_ID NUMBER,
 PARTICIPATION_PCT number
)
/
TYPE T_PARTICIPANT_PCT compiled

> CREATE TYPE TT_PARTICIPANT_PCT AS TABLE OF T_PARTICIPANT_PCT
/
TYPE TT_PARTICIPANT_PCT compiled

> CREATE OR REPLACE TRIGGER MAINTAIN_PARTICIPATION_PCT
FOR insert or delete or UPDATE OF PARTICIPATION_PCT ON PARTICIPANT
COMPOUND TRIGGER

THRESHHOLD CONSTANT integer := 100;
lt_participant_pct tt_participant_pct := tt_participant_pct();

PROCEDURE FLUSH_ARRAY IS
BEGIN
  MERGE INTO PARTICIPATION_PCTS O
  USING (
    SELECT AGREEMENT_ID, SUM(PARTICIPATION_PCT) PARTICIPATION_PCT
    FROM TABLE(CAST(LT_PARTICIPANT_PCT AS TT_PARTICIPANT_PCT))
    group by agreement_id
  ) N
  ON (O.AGREEMENT_ID = N.AGREEMENT_ID)
  WHEN MATCHED THEN
    UPDATE SET PARTICIPATION_PCT = O.PARTICIPATION_PCT + N.PARTICIPATION_PCT
    delete where (PARTICIPATION_PCT < 0.0001)
  WHEN NOT MATCHED THEN
    INSERT (AGREEMENT_ID, PARTICIPATION_PCT)
    VALUES(N.AGREEMENT_ID, N.PARTICIPATION_PCT);
END flush_array;

AFTER EACH ROW IS
BEGIN
  LT_PARTICIPANT_PCT.EXTEND;
  LT_PARTICIPANT_PCT(LT_PARTICIPANT_PCT.LAST):=
    T_PARTICIPANT_PCT(
      case when inserting then :new.agreement_id else :old.agreement_id end,
      CASE WHEN INSERTING THEN :NEW.PARTICIPATION_PCT
        WHEN DELETING THEN -:OLD.PARTICIPATION_PCT
        WHEN UPDATING THEN :NEW.PARTICIPATION_PCT - :OLD.PARTICIPATION_PCT END
    );
  IF LT_PARTICIPANT_PCT.LAST >= THRESHHOLD THEN
    FLUSH_ARRAY();
    lt_participant_pct.delete;
  END IF;
END AFTER EACH ROW;

AFTER STATEMENT IS
BEGIN
  IF LT_PARTICIPANT_PCT.count > 0 THEN
    FLUSH_ARRAY();
  END IF;
END AFTER STATEMENT;
END MAINTAIN_PARTICIPATION_PCT;
/
TRIGGER MAINTAIN_PARTICIPATION_PCT compiled

> INSERT INTO AGREEMENTS SELECT LEVEL, 'Agreement '||LEVEL FROM DUAL
CONNECT BY LEVEL <= 10001
/
10,001 rows inserted.

> INSERT INTO PARTICIPANT SELECT 1, LEVEL, 100 FROM DUAL
CONNECT BY LEVEL <= 10001
/
10,001 rows inserted.

> COMMIT
/
committed.

> DELETE FROM PARTICIPANT
/
10,001 rows deleted.

> COMMIT
/
committed.

> SELECT EXECUTIONS FROM V$SQL
WHERE sql_text like 'MERGE INTO PARTICIPATION_PCTS%'
/
EXECUTIONS
----------
202

Tom Kyte
December 26, 2011 - 10:57 am UTC

this approach still serializes at the agreement level, just as the one originally presented does.


Toon has agreed to come back with his dbms_lock approach in pseudo code (as it is a lot of generated code). It will involve the same amount of serialization as the materialized view approach I presented above (serialize on commit) - that is something the on commit materialized view imposes.


A trigger approach, without a materialized view, is going to serialize at the agreement level - yours does, mine does.

I concur that this definitely batches up and makes the trigger better for 'batch' operations - perhaps at the penalty of tiny operations (extra work being done).

Level of serialization

Stew Ashton, December 26, 2011 - 1:47 pm UTC


My intention was to serialize, not on every change at the agreement level, but only on changes to the percentages. If John Doe is updating the percentages, Jim Smith can update agreement_name without serializing because the percentage totals are in a separate table.

Is my understanding incorrect?

The third table is simply to answer Toon's objection that there could be less serialization. In real life, it seems like overkill.
Tom Kyte
December 26, 2011 - 1:57 pm UTC

ok, I see now, sure - that would be easy enough to retrofit to the original example as well - instead of:

update agreements set ppct =
nvl(ppct,0)+:new.participation_pct where agreement_id = :new.agreement_id;


use

merge into some_other_table
using (select :new.agreement_id agreement_id, :new.participation_pct participation_pct from dual) d
on (some_other_table.agreement_id = d.agreement_id)
when matched then update set participation_pct = participation_pct+d.participation_pct
when not matched then insert .....

Not a bad idea, a little more complex and would relax the serialization on the agreement main table....

thanks for clarifying

Simulating deferred execution with triggers

Toon Koppelaars, December 27, 2011 - 7:46 am UTC

When implementing multi-row constraints, it is important to first get the specification of the constraint absolutely clear.
The OP specified it as follows "...enforce the restriction that total participation within one agreement must sum to 100 percent.."
There is actually some ambiguity in this specification.

So which is meant - this (what I first thought was the case)?

create or replace assertion sum_pct_is_100 as
  check(not exists 
         (select 'participation not summing up to 100%'
          from (select p.agreement_id, sum(p.participation_pct) as sum_pct
                from participant p
                group by p.agreement_id) x
          where x.sum_pct <> 100));

Or this?

create or replace assertion sum_pct_is_100 as
  check(not exists 
         (select 'an agreement with no 100% participation'
          from agreement a
          where 100 <> (select nvl(sum(p.participation_pct),0)
                        from participant p
                        where p.agreement_id = a.agreement_id)));


In other words is this a single-table rule, only involving PARTICIPANT, or is this a two-table rule involving both tables?
Put simply: do we allow AGREEMENTs with no PARTICIPANTS?

Tom's first response actually has a solution for both (almost...): the MV-solution (the first one) implements the former assertion.
The trigger-solution (the second one) seems to attempt to implement the latter assertion. But it is not complete, as it allows a new agreement with no participation, but does not allow an existing (100% participated) agreement to have its participation removed.

Stew's code also attempts to implement the former assertion (when matched, it performs a DELETE when the percentage is near 0).
It is not complete though, since it disregards updates of the agreement_id FK column (which is involved in the assertion text), which 'switches' a participant's participation to another agreement.
This can easily be fixed in his code.


[side comment]
By the way, if the latter assertion was intended, then my preference would be to introduce two assertions: the former one that checks all existing participation sums up to 100 per agreement (single-table constraint).
And one stating that there must be some participation for each agreement (two-table constraint):

create or replace assertion no_agree_without_participation as
  check(not exists
         (select 'an agreement without participation'
          from agreement a
          where not exists
                 (select 'some participation'
                  from participant p
                  where p.agreement_id = a.agreement_id)));

Although this is not the case for this particular example, in general, implementing an assertion that is actually a logical conjunction of two assertions as two separate assertions benefits serializability.
[end-of side comment]


With regards to my earlier comment:

By 'clever use of dbms_lock', I meant no magic other than that instead of relying on row-level locks, we can acquire an application lock (DBMS_LOCK) to ensure minimal serialization.
In order to "serialize at the agreement level" (the "clever" bit), the application lock's name would just have to involve the AGREEMENT_ID value.
Say we call this constraint SUMPCT100, then we can introduce an application lock named SUMPCT100[1] to serialize for agreement 1, and SUMPCT100[2], for agreement 2, etc.
And yes it then requires a range scan of the participant table as Tom followed up.

Here's an outline of the code:

- An after row trigger maintains which agreement_id's have been affected and require constraint validation.
- As this assertion is vulnerable to inserts, updates and deletes, it does so during execution of all three types of DML-statements.
- For updates only if one of the two involved columns was changed.
- These agreement_id values can be maintained in a (to be created) global temporary table (GTT), or in a persistent packaged array variable.

The after statement trigger then:
- Runs the check for each distinct agreement_id value maintained by the row trigger (it does the range scan and the SUM).
- But before doing so, it acquires the application lock described above.
- Since this rule requires a deferred execution model, when a violation is detected, instead of doing a raise_application_error, the statement trigger saves this violation.
- Violations are saved in another (to be created) GTT, say GTT2.
- As part of validating the constraint, the statement trigger also removes prior saved violations that are 'ok' now.
- This way whenever GTT2 still has contents, the transaction should not be allowed to commit.
- To ensure this, a simple 'select * from GTT2' materialized view is created (on commit refresh)
- So at commit we only do a quick check to see if any violations are still present, i.e. we do not perform any (potentially) complex and time-consuming constraint validation.
- And an after insert trigger on this MV will raise the error(s) when GTT2 is not empty.
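
A compressed, hedged sketch of that skeleton (the table and trigger names, lock-name format and error handling are illustrative assumptions; RuleGen generates the production version):

-- GTT1: agreement_ids touched by the current statement (filled by the row trigger)
create global temporary table affected_agreements
  ( agreement_id number primary key ) on commit delete rows;

-- GTT2: violations still outstanding in this transaction
create global temporary table sumpct100_violations
  ( agreement_id number primary key, actual_sum number ) on commit delete rows;

-- allocate_unique commits, so isolate it (as in Martin's package above)
create or replace function lock_handle( p_name varchar2 ) return varchar2 is
  pragma autonomous_transaction;
  l_handle varchar2(128);
begin
  dbms_lock.allocate_unique( p_name, l_handle );
  return l_handle;
end;
/

create or replace trigger participant_air
after insert or delete or update of agreement_id, participation_pct
on participant for each row
begin
  -- remember which agreements need (re)validation; an update of the FK
  -- affects both the old and the new agreement
  merge into affected_agreements a
  using ( select :new.agreement_id id from dual where :new.agreement_id is not null
          union
          select :old.agreement_id from dual where :old.agreement_id is not null ) d
  on ( a.agreement_id = d.id )
  when not matched then insert (agreement_id) values (d.id);
end;
/

create or replace trigger participant_ais
after insert or delete or update of agreement_id, participation_pct
on participant
declare
  l_status integer;
begin
  for r in ( select agreement_id from affected_agreements ) loop
    -- serialize per agreement: the lock name embeds the agreement_id
    l_status := dbms_lock.request(
                  lockhandle        => lock_handle( 'SUMPCT100['||r.agreement_id||']' ),
                  release_on_commit => true );  -- real code would check l_status
    -- save the violation (or clear a prior one) instead of raising,
    -- because the check is deferred until commit
    delete from sumpct100_violations where agreement_id = r.agreement_id;
    insert into sumpct100_violations (agreement_id, actual_sum)
      select r.agreement_id, nvl(sum(participation_pct),0)
        from participant
       where agreement_id = r.agreement_id
      having nvl(sum(participation_pct),0) <> 100;
  end loop;
  delete from affected_agreements;
end;
/

-- plus, as described above: an on-commit materialized view over
-- sumpct100_violations whose insert trigger raises ORA-20999 when any
-- violation row survives to commit time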

Here's an example of a transaction:

SQL> insert into agreements values(1,'Agree 1');

1 row created.

SQL> insert into agreements values(2,'Agree 2');

1 row created.

SQL> insert into agreements values(3,'Agree 3');

1 row created.

SQL>
SQL>
SQL> insert into participant values(10,1,100);

1 row created.

SQL> --
SQL> -- after-statement trigger acquired application lock sum_pct_is_100[1], validated agreement 1, found it to be ok.
SQL> --
SQL>
SQL> insert into participant values(10,2,90);

1 row created.

SQL> --
SQL> -- after-statement trigger acquired application lock sum_pct_is_100[2], validated agreement 2, detected violation, saved it in a GTT.
SQL> --
SQL>
SQL> insert into participant values(10,3,50);

1 row created.

SQL> --
SQL> -- after-statement trigger acquired application lock sum_pct_is_100[3], validated agreement 3, detected violation, saved it in a GTT.
SQL> --
SQL>
SQL> commit;
commit
*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-20999: Sum of participation for agreement 3 should be 100 (50).
ORA-06512: at "TOON.SP_PRE_COMMIT_CHECK", line 26
ORA-06512: at line 1
ORA-06512: at "TOON.SP_PRE_COMMIT_CHECK", line 23
ORA-20999: Sum of participation for agreement 2 should be 100 (90).
ORA-06512: at "TOON.MV_PRE_COMMIT_TRIGGER", line 3
ORA-04088: error during execution of trigger 'TOON.MV_PRE_COMMIT_TRIGGER'


SQL> --
SQL> -- MV (select * from GTT), gets populated, insert-trigger on MV reports errors.
SQL> -- Transaction gets rolled back, application locks get released.
SQL> --


This (deferred) execution model seems rather elaborate, but I believe it is no different than what takes place 'under-the-covers' when Oracle validates a deferred foreign key constraint.
That is:
- locking is always immediate.
- intermediate violations are temporarily saved and rechecked on subsequent DML-statements inside the transaction.

And of course :-) ... all this code can be generated, the essence of the constraint at hand is:

- It can be validated at the agreement-id level (and hence should be serialized at this level).
- It should be validated on PARTICIPANT inserts, deletes, and updates of agreement_id and participation_pct.
- The way to validate it is to run:

  select 'error'
  from  (select nvl(sum(participation_pct),0) as pct
         from participant
         where agreement_id = :b1)
  where pct <> 100;

With RuleGen (www.rulegen.com) you only input the above essence, and then all code is generated for you.


One final comment:
To minimize row-by-row constraint validation, one should of course try to use multi-row DML statements as much as possible.
So if a user has a UI in which he/she can enter the participants and participation percentages for a given agreement, then ideally all this UI-data would get transferred into the DBMS via a stored-procedure call.
Then inside the stored procedure one should intelligently compose one multi-row insert statement to insert all participations in one go.
That way the statement trigger only fires once.
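
A hedged sketch of that last point (the collection type and procedure name are illustrative):

create or replace type participant_row as object
  ( participant_name number, participation_pct number );
/
create or replace type participant_tab as table of participant_row;
/
create or replace procedure save_participants
  ( p_agreement_id in number, p_rows in participant_tab )
as
begin
  -- one multi-row insert: the after-statement trigger fires once, so the
  -- constraint is validated once per statement instead of once per row
  insert into participant (participant_name, agreement_id, participation_pct)
  select t.participant_name, p_agreement_id, t.participation_pct
    from table(p_rows) t;
end;
/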


Tom Kyte
December 27, 2011 - 8:53 am UTC

Toon -

thanks for the comprehensive update here.


I think it does help point out that - doing this stuff via triggers is fairly complex and requires a bit of thought. (well, more than a bit of thought).

Personally, I'd probably rather rely on the output of your tool - RuleGen - than the output of many trigger coders. You've given it the thought required, whereas many trigger implementations (I would actually say "most") I've seen have not.

re: admirable post from Toon

Stew Ashton, December 27, 2011 - 12:39 pm UTC


I haven't read Toon's post enough times to fully understand it, but thanks already for pointing out the omission in my code.

One of the things I don't understand is how this approach addresses Brian Camire's concern about DBMS_LOCK and serializable transactions.

Best regards, Stew
Tom Kyte
December 27, 2011 - 2:05 pm UTC

it doesn't - when Toon and I emailed, he explained that the generated code can have a switch that throws an error when you are in serializable mode, preventing the transaction.

Serializable still an issue.

A reader, December 27, 2011 - 1:40 pm UTC

Hi Stew,

I should have mentioned that: it doesn't address the serializable isolation level. So RuleGen's code is also vulnerable to this. However you can set a flag if you want RuleGen to detect that a transaction is currently running in serializable isolation mode and get an error if such is the case.

Toon

PS. The 'isolation levels' defined a couple of decades ago have issues, which have been recognized by the scientific research community. See for instance http://dl.acm.org/citation.cfm?id=223785
Personally I do not see a need to run in any other mode than the default read committed isolation level.

Martin Rose, December 29, 2011 - 8:15 am UTC

> Personally I do not see a need to run in any other mode than the default read committed isolation level.

I'm so glad someone said this as I have never worked anywhere or discovered any application that makes use of this mode.

I would like to ask what it is that you can achieve in this mode that is impossible (or v. difficult) to achieve without it, as I have never seen it used ANYWHERE? In fact, I wouldn't even know about it if it wasn't for this forum.


Next, if DBMS_LOCK is incompatible with SET TRANSACTION ISOLATION LEVEL SERIALIZABLE, then Tom, this effectively means you are advising us to NEVER EVER use DBMS_LOCK. Clearly the writers of the DBMS_LOCK package intended it to be used, so is there a bug with the functioning of DBMS_LOCK and/or ISOLATION LEVEL SERIALIZABLE?

Having re-read the chapters on DBMS_LOCK and ISOLATION LEVEL SERIALIZABLE, I would suggest that the fault lies within DBMS_LOCK.
http://docs.oracle.com/cd/E11882_01/appdev.112/e25788/d_lock.htm#ARPLS66763

DBMS_LOCK.  Operational Notes -

"Because a reserved user lock is the same as an Oracle lock, it has all the functionality of an Oracle lock, such as deadlock detection". 

http://docs.oracle.com/cd/E11882_01/appdev.112/e25518/adfns_sqlproc.htm#ADFNS1002.
About 1/3 the way down...,

"Letting Oracle Database Control Table Locking

If you let Oracle Database control table locking, your application needs less programming logic, but also has less control than if you manage the table locks yourself.

Issuing the statement SET TRANSACTION ISOLATION LEVEL SERIALIZABLE or ALTER SESSION ISOLATION LEVEL SERIALIZABLE preserves ANSI serializability without changing the underlying locking protocol". 
The last sentence is the give-away of the original intention..... "without changing the underlying locking protocol".

I also cannot find anything that says DBMS_LOCK is incompatible with ISOLATION LEVEL SERIALIZABLE, which I would have expected to see somewhere if that were the case.


So my points are:

i). What real-world uses does ISOLATION LEVEL SERIALIZABLE mode have?
ii). Is DBMS_LOCK buggy because it doesn't work with ISOLATION LEVEL SERIALIZABLE?
iii). Tom said,
"Personally, I'd probably rather rely on the output of your tool - RuleGen - than the output of many trigger coders. You've given it the thought required, whereas many trigger implementations (I would actually say "most") I've seen have not."
Maybe Tom could make a list of all the things that must be accounted for when using triggers to validate, since this approach will not go away and there are tons of legacy apps out there using it. In the absence of any declarative cross-row constraint mechanism, people are going to carry on using triggers to validate.

iv). Maybe Oracle Corp. could give priority to developing such a declarative cross-row constraint mechanism, so that a viable, easy-to-implement mechanism does exist?
Tom Kyte
December 29, 2011 - 11:45 am UTC

I would like to ask what it is that you can achieve in this mode that is impossible (or v. difficult) to achieve without it, as I have never seen it used ANYWHERE? In fact, I wouldn't even know about it if it wasn't for this forum.


three places I've seen it used

a) the need to generate a read consistent report but the report itself generated some inserts/updates/deletes while executing. A read only transaction wouldn't work - serializable does. As would flashback query (as of scn :my_starting_scn would be added to the from list)

b) the use of some frameworks that start by issuing a serializable transaction. Whoever wrote the framework thought this was academically "the best". Usually a configurable parameter/setting

c) the running of the real world TPC-C benchmark, it is a requirement :)




I also cannot find anything that says DBMS_LOCK is incompatible with ISOLATION LEVEL SERIALIZABLE, which I would have expected to see somewhere if that were the case.

it is NOT incompatible - rather the code as implemented (the code written by US, not dbms_lock) doesn't do what it set out to in that mode.

In serializable - you see the database "as of the point in time your transaction began PLUS any modification YOUR transaction made". So any logic that checks multiple rows - some of which you have not updated or locked yourself - could be flawed. It will not be able to see any changes made by anyone else.



in answer to the points:

i) few and far between

ii) no, not at all. the bug was in the logic implemented by the person using dbms_lock. their code did not work in serializable (the code presented originally would - but only because it technically "over locks" in read committed mode).

iii) there is just one: you must be able to envision all of the ways your table(s) could be attacked in a multi-user situation using your knowledge of how concurrency control in the database you are programming works. In short, you must design it.


People use triggers to validate all the time

and there are data integrity issues in almost all of those applications - including ours :(

iv) I'll vote for that as a project... What we would need would be an enhancement request with a clear, concise, well written, factual business case behind it first...

Martin Rose, December 29, 2011 - 8:51 am UTC

By the way, none of the formatting on this forum works, such as bold, italics and underline. It would also be nice to have a choice of fonts & sizes.
Tom Kyte
December 29, 2011 - 11:48 am UTC

the formatting all works - look below for a review I just put up.

when you get to a review, there is a little formatting link, if you click on that you'll get a series of buttons to use to get the formatting tags - which are just

[i]
[b]
[u]
[code]

where [] is really < and > - and you use a [/i], [/b] and so on to close the tag

Use case for serializable isolation level.

A reader, December 29, 2011 - 11:34 am UTC

If you think of serializable isolation as equal to a 'set transaction read-only' isolation that has writes allowed, then think of when you would use read-only transactions, where you would really like to do a bit of writing...

There is one use-case that I used once or twice in the early days of Oracle7. A large batch reporting program with multiple SELECT statements, all requiring a consistent state of the database, but also a need to 'log' that the report has run.

Nowadays this can probably be done with a read-only transaction that logs its run using an autonomous transaction (haven't tested that though, but I assume it works).
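
A minimal sketch of that idea - untested, as noted (the log table and procedure name are illustrative):

create or replace procedure log_report_run( p_report varchar2 ) is
  pragma autonomous_transaction;  -- commits independently of the caller
begin
  insert into report_log (report_name, run_date) values (p_report, sysdate);
  commit;
end;
/

set transaction read only;   -- every query sees the database as of this point
-- ... run the report's SELECT statements ...
exec log_report_run('month end report')
commit;                      -- ends the read-only transaction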

iv). Maybe Oracle Corp. could give priority to developing such a declarative cross-row constraint mechanism so that a viable easy-to-implement mechanism does exist ?

+1 from me!
And: "a declarative cross-row constraint mechanism" = SQL Assertion support

I'd gladly give up RuleGen for that.

This is a test of the formatting

Thomas Kyte, December 29, 2011 - 11:47 am UTC

This is bold text

This is italic text

This is underlined

this
   is
   code
end this

Some test results

Stew Ashton, December 31, 2011 - 6:42 am UTC


I compared my compound trigger and Tom's row trigger for batch operations: insert, then update, then delete of 100,000 rows.
- the operations without any trigger ran in about 10 seconds
- with the compound trigger they ran in a little over 30 seconds
- with Tom's row trigger they ran in a little less than 50 seconds.

These triggers were both revised to use a separate percentage table and to handle changes in agreement_id.

Frankly, I don't think the difference in run time is sufficient to justify the extra complexity of the compound trigger, especially in an OLTP environment where most statements would modify few rows.

I also ran some tests with the materialized view approach and the operations ran in well over a minute.

I have no idea what the run time would be with Rulegen-generated triggers.

As a heavy user of materialized views for cross-row constraints, I must admit that the row trigger solution has the fewest moving parts and few lines of code.

However, I wouldn't put the percentage total in the AGREEMENTS table. Using Tom's original trigger:
 -- Easy to update the total directly
> INSERT INTO AGREEMENTS VALUES ( 1, 'x', 100 );
1 rows inserted.
> commit
committed.

-- Mutating table error is possible
> INSERT ALL 
INTO AGREEMENTS (AGREEMENT_ID,AGREEMENT_NAME) VALUES ( 2, 'x' )
into participant 
(PARTICIPANT_NAME,AGREEMENT_ID,PARTICIPATION_PCT) VALUES ( 100, 1, 100 )
select null from dual;

Error starting at line 40 in command:
INSERT ALL 
INTO AGREEMENTS (AGREEMENT_ID,AGREEMENT_NAME) VALUES ( 2, 'x' )
into participant 
(PARTICIPANT_NAME,AGREEMENT_ID,PARTICIPATION_PCT) VALUES ( 100, 1, 100 )
select null from dual
Error report:
SQL Error: ORA-04091: table STEW.AGREEMENTS is mutating, trigger/function may not see it
ORA-06512: at "STEW.PARTICIPANT_TRIGGER", line 4
ORA-04088: error during execution of trigger 'STEW.PARTICIPANT_TRIGGER'
04091. 00000 -  "table %s.%s is mutating, trigger/function may not see it"
*Cause:    A trigger (or a user defined plsql function that is referenced in
           this statement) attempted to look at (or modify) a table that was
           in the middle of being modified by the statement which fired it.
*Action:   Rewrite the trigger (or function) so it does not read that table.

Enhancement request proposal: BEFORE COMMIT trigger

Stew Ashton, January 01, 2012 - 9:45 am UTC


How about adding a BEFORE COMMIT timing point: the trigger runs if the table is modified, but it executes just before the COMMIT is finalized. The trigger would not have access to any :new or :old values, and it would only be able to read the data, maybe log errors in an autonomous transaction, and above all raise an application error.
CREATE OR REPLACE TRIGGER PARTICIPANT_TRIGGER
  AFTER INSERT OR UPDATE OR DELETE ON PARTICIPANT BEFORE COMMIT
declare
l_agreement_id participant.agreement_id%type;
begin
  SELECT AGREEMENT_ID INTO L_AGREEMENT_ID
  FROM (
    select agreement_id
    from participant
    GROUP BY AGREEMENT_ID
    HAVING SUM(PARTICIPATION_PCT) <> 100
  ) WHERE ROWNUM <= 1;
RAISE_APPLICATION_ERROR( -20999, 
  'Agreements must have total participation = 100 percent.' 
  || ' Error on agreement_id = ' || l_agreement_id);
exception when no_data_found then null;
END;
/
This is equivalent to Toon's assertion and would fully implement the requirement.

For better performance when few rows have changed, provide to the trigger a list of changed ROWIDs that could be used to check only changed data. This would be the equivalent of "query result change notification (QRCN)", but done within the same transaction.

Tom Kyte
January 01, 2012 - 10:35 am UTC

no it wouldn't - remember read-consistent reads.

You need to add some serialization in there for this to work.

If you and I committed at the same time on a multi-cpu machine, this would not work properly.

If we used serializable transactions this would not work.


I like the on commit trigger, I've mimic-ed them in the past with a small refresh fast on commit materialized view, don't get me wrong on that, but it wouldn't by itself solve the problem.



Point taken, thanks

Stew Ashton, January 01, 2012 - 5:25 pm UTC


Staring at a tree, I forgot the forest.

So the trigger would definitely need a list of rowids of the changed records, in order to SELECT FOR UPDATE just the affected agreement_ids. If that list could go in a GTT, there would be no additional serialization...unless I missed something else.
Tom Kyte
January 01, 2012 - 8:39 pm UTC

well, there would be the additional serialization (equal to the example by Toon above with dbms_lock) at commit time. The commits would have to be serialized.

List of rowids will (in general) not suffice...

A reader, January 01, 2012 - 10:26 pm UTC

Rowids won't cut it.

You need to know what data (involved columns only) got inserted, deleted or updated (and how, in this case) to be able to efficiently check an assertion in general.

For instance:
- One agreement, with 4 participants, each 25%.
- Rowid X (one of the four participants) got removed.
What agreement was that?

On second thought: I guess you could do a flashback query to find out. But to have something like 'rows-inserted' and 'rows-deleted', like Sybase had almost two decades ago, would be very useful.

Martin Rose, January 02, 2012 - 5:00 am UTC

Can an MV log be of use here ?

re:List of rowids won't cut it / serialization on commit

Stew Ashton, January 02, 2012 - 7:36 am UTC


OK, I'm through telling Oracle how to implement assertions.

Agreed, "reader", rowids are not enough, I should have seen that.

Tom, I'm afraid I don't understand the part about needing to serialize the commits. I assume you are talking about solutions that range scan the PARTICIPANT table, since your trigger doesn't serialize on commit. Now, with the range scan method, if I update one agreement_id and you update another, what must we serialize on?

This is the part of Toon's post I didn't understand, why he has the MV in there. Is it just to mimic an "on commit" trigger? In that case, the serialization on commit would be more a "feature" of the fast refresh than a requirement.
Tom Kyte
January 02, 2012 - 8:43 am UTC

you have to serialize on agreement ids, you can only check the sum() for a given agreement id one at a time. If you and I were to commit at about the same time - using consistent reads - we would not see each others work.

So, you need to lock at agreement id levels, check the sum, commit, then let the next one do the same. Assuming you are NOT serializable in which case you have to throw an 8177 type of error if two people modified the details for the same agreement id.


the MV is the on commit trigger. And the serialization offered by it is a requirement.

RG triggers

Toon Koppelaars, January 02, 2012 - 7:40 am UTC


>I have no idea what the run time would be with Rulegen-generated triggers.

You're welcome to give it a try:

http://www.rulegen.com/free-download

They won't be as fast as Tom's first (row) trigger (only) solution. But if your business logic always does multi-row inserts (that is, insert all participants for one agreement in one go) then they'll perform only a bit slower.

Martin Rose, January 02, 2012 - 11:08 am UTC

I think that mining materialized view logs might be helpful. You may need to supplement it with an identifier so you know which rows are yours, but it has all the information you need. It could even be used as an alternative work-around for mutating tables.

ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY';

CREATE TABLE MV_LOG_TEST (A INTEGER PRIMARY KEY,
                          B DATE,
                          C DATE);

CREATE MATERIALIZED VIEW LOG ON MV_LOG_TEST
WITH ROWID, SEQUENCE (B, C), PRIMARY KEY
INCLUDING NEW VALUES;

INSERT INTO MV_LOG_TEST VALUES (1,
                                TO_DATE('01-JAN-2000', 'DD-MON-YYYY'),
                                TO_DATE('31-JAN-2000', 'DD-MON-YYYY'));

UPDATE MV_LOG_TEST
SET    C = TO_DATE('15-JAN-2000', 'DD-MON-YYYY')
WHERE  A = 1;

DELETE MV_LOG_TEST;

COMMIT;

SELECT *
FROM   MLOG$_MV_LOG_TEST;


A  B            C            M_ROW$$             SEQUENCE$$  DMLTYPE$$  OLD_NEW$$  XID$$

1  01-JAN-2000  31-JAN-2000  AAAtZgAAHAAAK+3AAA  30          I          N          9288691411321942
1  01-JAN-2000  31-JAN-2000  AAAtZgAAHAAAK+3AAA  31          U          O          9288691411321942
1  01-JAN-2000  15-JAN-2000  AAAtZgAAHAAAK+3AAA  32          U          N          9288691411321942
1  01-JAN-2000  15-JAN-2000  AAAtZgAAHAAAK+3AAA  33          D          O          9288691411321942

I don't know how to obtain my XID$$ though. I'm assuming it's the transaction ID, but I can't seem to get it with SYS_CONTEXT or DBMS_TRANSACTION. Any clues, Tom?

@Martin Rose on XID

Stew Ashton, January 02, 2012 - 3:02 pm UTC


Hi Martin,

Alberto Dell'Era blogs about MVs and is definitely worth a read. See for example http://www.adellera.it/blog/2009/11/03/11gr2-materialized-view-logs-changes/ You are in 11GR2, right?

Quoting Alberto, "the new column xid$$ (transaction id) ... uniquely identifies the transaction that made the changes to the row. For the curious, the number is a combination of the elements of the triplet (undo segment number, undo slot, undo sequence); it is simply the binary concatenation of the three numbers shifted by (48, 32, 0) bits respectively."

Before the commit, you can get the transaction id from v$transaction. After the commit, you can get the commit SCN based on the transaction id.
> drop table MV_LOG_TEST;
table MV_LOG_TEST dropped.
> CREATE TABLE MV_LOG_TEST (A INTEGER PRIMARY KEY,
                          B DATE,
                          C DATE);
table MV_LOG_TEST created.
> CREATE MATERIALIZED VIEW LOG ON MV_LOG_TEST
WITH ROWID, SEQUENCE (B, C), PRIMARY KEY, commit scn
INCLUDING NEW VALUES;
materialized view log MV_LOG_TEST created.
> INSERT INTO MV_LOG_TEST VALUES (1,
                                TO_DATE('01-JAN-2000', 'DD-MON-YYYY'),
                                TO_DATE('31-JAN-2000', 'DD-MON-YYYY'));
1 rows inserted.
> UPDATE MV_LOG_TEST
SET    C = TO_DATE('15-JAN-2000', 'DD-MM-YYYY')
WHERE  A = 1;
1 rows updated.
> DELETE MV_LOG_TEST;
1 rows deleted.
> SELECT xid$$ FROM MLOG$_MV_LOG_TEST where rownum = 1
union all
SELECT XIDUSN*power(2,48) + XIDSLOT*power(2,32) + XIDSQN 
FROM V$TRANSACTION 
WHERE SES_ADDR = (SELECT SADDR FROM V$SESSION 
where sid = (select sid from v$mystat where rownum = 1));

XID$$                  
---------------------- 
844519419419493        
844519419419493  
      
> select * from all_summap where xid in (select xid$$ from MLOG$_MV_LOG_TEST);

XID                    COMMIT_SCN             
---------------------- ---------------------- 

> COMMIT;
committed.

> select * from all_summap where xid in (select xid$$ from MLOG$_MV_LOG_TEST);

XID                    COMMIT_SCN             
---------------------- ---------------------- 
844519419419493        11942328

Martin Rose, January 02, 2012 - 5:34 pm UTC

Thanks Stew. There's definitely some value in using MV logs to record changes for cross-row validation, if only because it reduces the amount of code we have to write.

Maybe Toon can use it in RuleGen instead of coding his own triggers, as it must be faster to use the built-in MV logging than to write one's own code to do a similar thing - especially when writing to disk. It may not be as fast as using PL/SQL arrays in memory, though.


Test MV-log performance versus BYO trigger

Toon Koppelaars, January 03, 2012 - 4:38 am UTC

> Maybe Toon can use it in RuleGen instead of coding his own triggers, as it must be faster to use
the built-in MV logging than writing ones own code to do a similar thing


Hi Martin,

Here is a test I executed to compare the performance of 'Build Your Own' row-trigger + GTT to maintain what changed, versus using an MV-log table to do so.
We have a simple EMP table, and assume there is some assertion that requires the SAL column to be logged. I'm not including any code in the after-statement trigger to validate such an assertion: I only included the code to maintain the transition effect (= the delta of changes made by the DML statement).
We then test 10,000 single-row inserts and deletes.
I also changed the MV-log table to be a GTT, which is what RuleGen currently uses. For this test it probably doesn't matter at all.


SQL> start byo_versus_mvlog.txt
SQL> connect toon/toon
Connected.

Session altered.

SQL> alter session set nls_language=english;

Session altered.

SQL>
SQL> drop table EMP;

Table dropped.

SQL> create table EMP
  2  (EMPNO       number(3,0)  not null primary key
  3  ,ENAME       varchar2(20) not null
  4  ,SAL         number(4,0)  not null)
  5  /

Table created.

SQL>
SQL> create materialized view log on emp
  2  with rowid(sal)
  3  excluding new values
  4  /

Materialized view log created.

SQL>
SQL> Rem
SQL> Rem Dirty action: turn MLOG$ table into a global temp table.
SQL> Rem
SQL> Rem This will mimic our Transition Effect table, which is now maintained
SQL> Rem by an Oracle-internal row-trigger.
SQL> Rem
SQL>
SQL> drop table mlog$_emp;

Table dropped.

SQL> CREATE global temporary TABLE MLOG$_EMP
  2   (    "SAL" NUMBER(4,0),
  3        "M_ROW$$" VARCHAR2(255 CHAR),
  4        "SNAPTIME$$" DATE,
  5        "DMLTYPE$$" VARCHAR2(1 CHAR),
  6        "OLD_NEW$$" VARCHAR2(1 CHAR),
  7        "CHANGE_VECTOR$$" RAW(255)
  8   )
  9  on commit delete rows
 10  /

Table created.

SQL>
SQL> create or replace trigger emp_aiuds
  2  after insert or update or delete on emp
  3  begin
  4    --
  5    delete mlog$_emp;
  6    --
  7  end;
  8  /

Trigger created.

SQL>
SQL> alter session set sql_trace=true;

Session altered.

SQL> set timing on
SQL> begin
  2    --
  3    for i in 1..10000
  4    loop
  5      --
  6      insert into emp values(104,'xxxxx',9999);
  7      commit;
  8      delete from emp where empno=104;
  9      commit;
 10      --
 11    end loop;
 12    --
 13  end;
 14  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:14.54                                  <<<<<======== 14.5 seconds.
SQL>
SQL> disc
Disconnected from Oracle Database 11g Release 11.1.0.7.0 - 64bit Production
SQL>
SQL> connect toon/toon
Connected.

Session altered.

SQL> alter session set nls_language=english;

Session altered.

SQL>
SQL> drop table EMP;

Table dropped.

SQL> create table EMP
  2  (EMPNO       number(3,0)  not null primary key
  3  ,ENAME       varchar2(20) not null
  4  ,SAL         number(4,0)  not null)
  5  /

Table created.

SQL>
SQL> drop table emp_te;

Table dropped.

SQL> create global temporary table emp_te
  2  (rid          rowid not null
  3  ,sal          number
  4  ,dml          varchar2(1))
  5  on commit delete rows
  6  /

Table created.

SQL>
SQL> create or replace trigger emp_aiudr
  2  after insert or update or delete on emp
  3  for each row
  4  begin
  5    --
  6    if INSERTING then
  7      --
  8      insert into emp_te values(:new.rowid,null,'I');
  9      --
 10    elsif DELETING then
 11      --
 12      insert into emp_te values(:old.rowid,:old.sal,'D');
 13      --
 14    end if;
 15    --
 16  end;
 17  /

Trigger created.

SQL>
SQL> create or replace trigger emp_aiuds
  2  after insert or update or delete on emp
  3  begin
  4    --
  5    delete emp_te;
  6    --
  7  end;
  8  /

Trigger created.

SQL>
SQL> alter session set sql_trace=true;

Session altered.

SQL> set timing on
SQL> begin
  2    --
  3    for i in 1..10000
  4    loop
  5      --
  6      insert into emp values(104,'xxxxx',9999);
  7      commit;
  8      delete from emp where empno=104;
  9      commit;
 10      --
 11    end loop;
 12    --
 13  end;
 14  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:15.06                                  <<<<<<======= 15 seconds.
SQL>
SQL> disc
Disconnected from Oracle Database 11g Release 11.1.0.7.0 - 64bit Production
SQL>


And the traces also show that the difference is negligible:

BYO transition effect table:

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       12      0.00       0.00          0          0          0           0
Execute  80008     11.29      11.52          0     100011     279094       60000
Fetch       24      0.00       0.00          0         40          0          16
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    80044     11.29      11.52          0     100051     279094       60016


MV-LOG table used as transition effect table:

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       16      0.00       0.00          0          0          0           0
Execute  80010     11.14      12.04          0     100011     277863       60000
Fetch       30      0.00       0.00          0         50          0          20
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    80056     11.14      12.04          0     100061     277863       60020



Conclusion:

There is no performance argument for using MV logs as TE-tables instead of building your own row-trigger + GTT to do so.
What this proves to me is that the 'context switches' (SQL to PL/SQL and back) that BYO row-triggers incur are also incurred by the internal trigger that Oracle executes to maintain an MV log.

Building your own has the advantage of never interfering with any 'real' materialized views (+ logs) that the customer might add.

Also, the contents of the transition effect table are only relevant during the current transaction - or rather, only during the current DML statement - which is why I remove them in an after-statement trigger.
By design that is not the purpose of an MV-log table: its contents should remain there until all dependent MVs have been refreshed, be it fast on commit or on demand.

Should have mentioned...

Toon Koppelaars, January 03, 2012 - 4:56 am UTC

I should of course have given you the 'profile' of the trace, which tells where the time was spent; otherwise you might think that over 10 seconds went into the commits, which still would not conclusively tell us whether BYO was better or worse than using the MV log.
In both cases roughly 5 seconds went into maintaining the TE-table / MV-log table, 1.5 into the commits, and the rest into executing the PL/SQL block and its insert and delete statements.
You can test it yourself, of course.


Martin Rose, January 04, 2012 - 11:46 am UTC

Shame Toon. Still, it was worth a try.

Though on the plus side, maybe that means there's room for improvement regarding the performance of MV logs. ;-)

Martin Rose, February 04, 2012 - 6:50 am UTC

Tom, in your answer to the first post on this thread, you created a deferrable constraint. Isn't that redundant, given the MV will only ever refresh upon commit anyway?


ops$tkyte%ORA11GR2> alter materialized view participant_pct_is_100
2 add constraint check_ppct_is_100
3 check (ppct = 100)
4 deferrable;


Which brings me to my next question: can I create a cross-row constraint using this MV method for a single table only (as opposed to 2 or more tables), one that fires immediately (as opposed to being deferred to commit time)?

I don't believe I can, can I?

Which leaves us having to resort to triggers for that?
Tom Kyte
February 04, 2012 - 7:45 pm UTC

No, constraints against materialized views have to be deferrable.

Even though it is "on commit", it will process things 'row by row', not 'statement by statement'.

therefore, during the processing of a materialized view refresh, constraints may momentarily be violated - but they will be OK when the refresh finishes.

the materialized view method will be "on commit", not "on statement" - so if you wanted statement-level checking, you would be forced into a trigger route.
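
To make the row-by-row point concrete, here is a hypothetical sequence against the agreements/participant example from this thread (the values are invented):

-- agreement 1 has two participants at 40 and 60 (sum = 100)
update participant set participation_pct = 60
 where agreement_id = 1 and participant_name = 100;   -- was 40
update participant set participation_pct = 40
 where agreement_id = 1 and participant_name = 200;   -- was 60
commit;
-- during the on-commit fast refresh, the MV row for agreement 1 may pass
-- through sum = 120 (first delta applied) before settling back at 100.
-- a non-deferrable check (ppct = 100) could fire on that intermediate
-- state; a deferrable one is validated only at the end, when it holds again.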

Martin Rose, February 05, 2012 - 8:31 am UTC

Ahh, ok I get it now.

The reason I asked is that I created such an MV constraint for my overlapping-dates problem earlier above, and it worked regardless of whether the constraint was deferred or not, because I wasn't applying updates - only doing INSERTs.

Of course, that is because for my problem it doesn't matter when the constraint gets checked. When set to IMMEDIATE, it gets checked at COMMIT while the MV log's changes are being applied to the MV; when set to DEFERRED, it waits until all those changes have been applied, and is then checked a moment before the actual point of commit.

So for my problem it didn't matter, but in general it does.

Got it now. ;-)

To identify datas with Overlapping dates

Rajeshwaran, Jeyabal, February 29, 2012 - 11:35 pm UTC

Tom:

I have an application where users feed data into table 'T'. How can I restrict them from entering rows with overlapping date ranges?

Is there any constraint that can be defined on the table to achieve this?
drop table t purge;
create table t(
 code varchar2(5),
 from_dt date,
 thru_dt date);

insert into t values('C1',to_date('01-JAN-2012','dd-mon-yyyy'),to_date('31-JAN-2012','dd-mon-yyyy'));
insert into t values('C1',to_date('01-FEB-2012','dd-mon-yyyy'),to_date('29-FEB-2012','dd-mon-yyyy'));
insert into t values('C1',to_date('01-MAR-2012','dd-mon-yyyy'),to_date('10-MAR-2012','dd-mon-yyyy'));
insert into t values('C1',to_date('07-MAR-2012','dd-mon-yyyy'),to_date('20-MAR-2012','dd-mon-yyyy'));
commit;

rajesh@ORA10GR2> select * from t;

CODE  FROM_DT     THRU_DT
----- ----------- -----------
C1    01-jan-2012 31-jan-2012
C1    01-feb-2012 29-feb-2012
C1    01-mar-2012 10-mar-2012
C1    07-mar-2012 20-mar-2012

4 rows selected.

Elapsed: 00:00:00.85
rajesh@ORA10GR2>

I am using the query below to find the rows with overlapping dates.
rajesh@ORA10GR2> select code,from_dt,thru_dt
  2  from (
  3  select t.*,
  4      case
  5        when lag(thru_dt) over(order by from_dt) between from_dt and thru_dt
  6      then 1 end as valid_chk
  7  from t
  8      )
  9  where valid_chk is not null
 10  /

CODE  FROM_DT   THRU_DT
----- --------- ---------
C1    07-MAR-12 20-MAR-12

1 row selected.

Elapsed: 00:00:00.03
rajesh@ORA10GR2>

Tom Kyte
March 01, 2012 - 7:42 am UTC

search this site for

overlap
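
As an aside on the MV technique from this thread: the LAG-based query above compares each row only with its immediate predecessor by FROM_DT (and doesn't partition by CODE), so it can miss overlaps with an earlier, longer range. Below is a hedged sketch of a declarative no-overlap check in the spirit of the original answer; whether this self-join aggregate is fast-refreshable on commit varies by version, so verify with dbms_mview.explain_mview (as demonstrated later in this thread).

create materialized view log on t
with rowid, sequence, (code, from_dt, thru_dt)
including new values;

create materialized view t_no_overlap
refresh fast on commit
as
select count(*) cnt
  from t a, t b
 where a.rowid  <> b.rowid
   and a.code    = b.code
   and a.from_dt <= b.thru_dt
   and b.from_dt <= a.thru_dt;

alter materialized view t_no_overlap
add constraint t_no_overlap_chk
check ( cnt = 0 ) deferrable;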

Cannot use on-commit MVs in distributed transactions

Martin Rose, May 15, 2012 - 3:03 am UTC

Tom, much to my dismay (and yours after reading this), it appears on-commit MVs cannot be used in distributed transactions.

Although DML can be posted, you are prevented from committing the transaction. I get ORA-2050/12048/2051 when trying.

http://docs.oracle.com/cd/E11882_01/server.112/e10706/repmview.htm#i52501

"You cannot execute a distributed transaction on the master table of a refresh-on-commit materialized view. Refresh-on-commit materialized views are those created using the ON COMMIT REFRESH clause in the CREATE MATERIALIZED VIEW statement. You can execute a distributed transaction on the master table of a refresh-on-demand materialized view".


Without saying the word "trigger" ;-), can you suggest a work-around ?
Tom Kyte
May 15, 2012 - 6:05 am UTC

Make it a local transaction only - queue a job (dbms_job - since it does not commit) that will push the other data over to the remote site shortly after you commit.

I don't like doing distributed transactions inside end-user transactions in general - performance-wise, not so good. Availability-wise - not so good.

Can you do it as a loosely coupled (do local, commit, remote gets done right after) transaction?
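
A minimal sketch of that loosely coupled pattern (the table and procedure names are hypothetical). dbms_job.submit does not commit, so the queued job is part of our transaction and only becomes runnable once we commit:

declare
  l_job binary_integer;
begin
  -- local work only; the on-commit MV constraint applies here as usual
  insert into orders_local ( order_id, amt ) values ( 1, 100 );

  -- queue the remote push; dbms_job.submit does NOT commit
  dbms_job.submit( l_job, 'push_orders_to_remote;' );  -- hypothetical procedure doing the remote DML

  commit;  -- local commit; the job then runs and pushes the data remotely
end;
/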

Martin Rose, May 15, 2012 - 6:30 am UTC

Actually, I'm INSERTing locally with data from a remote table.

INSERT INTO <local table>
SELECT ...
FROM   <remote table> JOIN <other local table>

In this particular instance, I can probably take a copy of the remote table with a CTAS (very few rows) and then reference that local copy instead. But it's not nice, and needs commenting.

FYI, the remote DB belongs to a third-party, off-the-shelf app (rather than an in-house one) and so resides on its own machine.
Tom Kyte
May 15, 2012 - 1:50 pm UTC

I'm not worried about "commenting", never.

Sounds like a session based global temporary table would work better.

Insert into it, commit, then do what you need.
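
A sketch of that GTT approach, with hypothetical names. The pull across the db link commits on its own, so the final insert is purely local and an on-commit MV on the local table can refresh normally:

-- session-based GTT: the staged rows must survive the commit
-- that ends the distributed transaction
create global temporary table remote_stage
( id  number
, val varchar2(30)
) on commit preserve rows;

insert into remote_stage
select id, val
  from remote_table@remote_db;       -- hypothetical remote table / db link
commit;                              -- ends the distributed transaction

insert into local_table ( id, val )  -- purely local from here on
select s.id, s.val
  from remote_stage s
  join other_local_table o on o.id = s.id;
commit;                              -- the on-commit MV refresh happens here, locally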

Martin Rose, May 15, 2012 - 6:37 am UTC

PS. If I use an MV log WITH COMMIT SCN, I get the error without even needing any on-commit refresh MV on the base table.

I don't think this is documented.

Absolute amounts not percentages

VA, February 16, 2019 - 4:51 pm UTC

Can this technique be adapted to deal with absolute amounts rather than percentages? In other words, what if sum(detail.amt) should equal master.amt?

Thanks
Connor McDonald
February 18, 2019 - 2:16 am UTC

Generally I'm not a fan of storing both detail- and summary-level totals, but if (say) it's a 3rd-party app and you have no control, then you need a join in your mview:

SQL>
SQL> create table agreements
  2    (
  3      agreement_id    number         not null
  4    , agreement_name  varchar2( 10 ) not null
  5    , tot_pct number
  6    , primary key ( agreement_id )
  7    ) ;

Table created.

SQL>
SQL> create table participant
  2    (
  3      participant_name  number not null
  4    , agreement_id      number not null
  5    , participation_pct number not null
  6    , primary key ( participant_name , agreement_id )
  7    , foreign key (agreement_id) references agreements
  8    ) ;

Table created.

SQL>
SQL> create materialized view log
  2  on agreements with rowid, (tot_pct) including new values;

Materialized view log created.

SQL>
SQL> create materialized view log
  2  on participant with rowid, (participation_pct) including new values;

Materialized view log created.

SQL>
SQL> create materialized view participant_pct_is_100
  2  refresh fast
  3  on commit
  4  as
  5  select a.tot_pct, p.agreement_id, sum(p.participation_pct) ppct, count(*) c1, count(p.participation_pct) c2
  6    from participant p,
  7         agreements a
  8   group by a.tot_pct, p.agreement_id
  9  /

Materialized view created.

SQL>
SQL> alter materialized view participant_pct_is_100
  2  add constraint check_ppct_is_100
  3  check (nvl(ppct,0) = nvl(tot_pct,0))
  4  deferrable;

Materialized view altered.

SQL>
SQL>


Missing joins

Rajeshwaran, Jeyabal, February 18, 2019 - 10:59 am UTC

SQL> create materialized view participant_pct_is_100
  2  refresh fast
  3  on commit
  4  as
  5  select a.tot_pct, p.agreement_id, sum(p.participation_pct) ppct, count(*) c1, count(p.participation_pct) c2
  6    from participant p,
  7         agreements a
  8   group by a.tot_pct, p.agreement_id
  9  /

Materialized view created.


your MV above is missing the WHERE/JOIN clause ( a.agreement_id = p.agreement_id ) in the mview creation
Connor McDonald
February 19, 2019 - 5:50 am UTC

Yes - editing fail on my part.

SQL> create materialized view participant_pct_is_100
  2  refresh fast
  3  on commit
  4  as
  5  select a.tot_pct, p.agreement_id, sum(p.participation_pct) ppct, count(*) c1, count(p.participation_pct) c2
  6    from participant p,
  7         agreements a
  8   where p.agreement_id = a.agreement_id 
  9   group by a.tot_pct, p.agreement_id

Not working?

VA, January 29, 2022 - 9:44 pm UTC

create table pct 
(
 mth varchar2(6) not null
 ,order_id int not null
 ,emp_id int not null,
 pct int not null
 ,constraint pck#uk unique (mth,order_id,emp_id)
);

insert into pct(mth,order_id,emp_id,pct) values ('202201',1,1,50);

insert into pct(mth,order_id,emp_id,pct) values ('202201',1,2,50);

create materialized view log on pct
with rowid, (pct) including new values;

create materialized view pct_mv
refresh fast
on commit
as
select mth, order_id, sum(pct) pct
  from pct
 group by mth, order_id;

alter materialized view pct_mv
add constraint pct_is_100_x
check (pct = 100) deferrable;

update pct set pct = 51 where emp_id = 1;

commit;

select * from pct_mv;



See above. On commit, the MV is not refreshed, so the constraint does not kick in. What am I missing?
Connor McDonald
January 31, 2022 - 2:24 am UTC

You need to see what the mview can and cannot do, and then adjust it accordingly, e.g.

SQL> create table pct
  2  (
  3   mth varchar2(6) not null
  4   ,order_id int not null
  5   ,emp_id int not null,
  6   pct int not null
  7   ,constraint pck#uk unique (mth,order_id,emp_id)
  8  );

Table created.

SQL> insert into pct(mth,order_id,emp_id,pct) values ('202201',1,1,50);

1 row created.

SQL> insert into pct(mth,order_id,emp_id,pct) values ('202201',1,2,50);

1 row created.

SQL> create materialized view log on pct
  2  with rowid, (pct) including new values;

Materialized view log created.

SQL>  create materialized view pct_mv
  2     refresh fast
  3     on commit
  4     as
  5     select mth,order_id, sum(pct) pct
  6    from pct
  7      group by mth,order_id;

Materialized view created.

SQL> exec dbms_mview.explain_mview('pct_mv');

PL/SQL procedure successfully completed.

SQL> select * from  MV_CAPABILITIES_TABLE
  2  @pr
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : PCT
POSSIBLE                      : N
RELATED_TEXT                  :
RELATED_NUM                   :
MSGNO                         :
MSGTXT                        :
SEQ                           : 1
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : REFRESH_COMPLETE
POSSIBLE                      : Y
RELATED_TEXT                  :
RELATED_NUM                   :
MSGNO                         :
MSGTXT                        :
SEQ                           : 1002
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : REFRESH_FAST
POSSIBLE                      : Y
RELATED_TEXT                  :
RELATED_NUM                   :
MSGNO                         :
MSGTXT                        :
SEQ                           : 2003
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : REWRITE
POSSIBLE                      : N
RELATED_TEXT                  :
RELATED_NUM                   :
MSGNO                         :
MSGTXT                        :
SEQ                           : 3004
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : PCT_TABLE
POSSIBLE                      : N
RELATED_TEXT                  : PCT
RELATED_NUM                   : 41
MSGNO                         : 2068
MSGTXT                        : relation is not a partitioned table
SEQ                           : 4005
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : REFRESH_FAST_AFTER_INSERT
POSSIBLE                      : Y
RELATED_TEXT                  :
RELATED_NUM                   :
MSGNO                         :
MSGTXT                        :
SEQ                           : 5006
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : REFRESH_FAST_AFTER_ONETAB_DML
POSSIBLE                      : N
RELATED_TEXT                  : PCT
RELATED_NUM                   : 21
MSGNO                         : 2143
MSGTXT                        : SUM(expr) without COUNT(expr)
SEQ                           : 6007
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : REFRESH_FAST_AFTER_ONETAB_DML
POSSIBLE                      : N
RELATED_TEXT                  :
RELATED_NUM                   :
MSGNO                         : 2142
MSGTXT                        : COUNT(*) is not present in the select list
SEQ                           : 6008
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : REFRESH_FAST_AFTER_ANY_DML
POSSIBLE                      : N
RELATED_TEXT                  : MCDONAC.PCT
RELATED_NUM                   :
MSGNO                         : 2165
MSGTXT                        : mv log does not have sequence #
SEQ                           : 7009
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : REFRESH_FAST_AFTER_ANY_DML
POSSIBLE                      : N
RELATED_TEXT                  :
RELATED_NUM                   :
MSGNO                         : 2161
MSGTXT                        : see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
SEQ                           : 7010
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : REFRESH_FAST_PCT
POSSIBLE                      : N
RELATED_TEXT                  :
RELATED_NUM                   :
MSGNO                         : 2157
MSGTXT                        : PCT is not possible on any of the detail tables in the materialized view
SEQ                           : 8011
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : REWRITE_FULL_TEXT_MATCH
POSSIBLE                      : N
RELATED_TEXT                  :
RELATED_NUM                   :
MSGNO                         : 2159
MSGTXT                        : query rewrite is disabled on the materialized view
SEQ                           : 9012
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : REWRITE_PARTIAL_TEXT_MATCH
POSSIBLE                      : N
RELATED_TEXT                  :
RELATED_NUM                   :
MSGNO                         : 2159
MSGTXT                        : query rewrite is disabled on the materialized view
SEQ                           : 10013
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : REWRITE_GENERAL
POSSIBLE                      : N
RELATED_TEXT                  :
RELATED_NUM                   :
MSGNO                         : 2159
MSGTXT                        : query rewrite is disabled on the materialized view
SEQ                           : 11014
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : REWRITE_PCT
POSSIBLE                      : N
RELATED_TEXT                  :
RELATED_NUM                   :
MSGNO                         : 2158
MSGTXT                        : general rewrite is not possible or PCT is not possible on any of the detail tables
SEQ                           : 12015
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : PCT_TABLE_REWRITE
POSSIBLE                      : N
RELATED_TEXT                  : PCT
RELATED_NUM                   : 41
MSGNO                         : 2068
MSGTXT                        : relation is not a partitioned table
SEQ                           : 13016

PL/SQL procedure successfully completed.


Notice that fast refresh after DML and various other things are not possible, and reasons are given. So we use those reasons to change our mview log and our mview:

SQL> drop materialized view pct_mv;

Materialized view dropped.

SQL>
SQL> drop materialized view log on pct;

Materialized view log dropped.

SQL>
SQL> create materialized view log on pct
  2  with rowid, sequence, (pct) including new values;

Materialized view log created.

SQL>
SQL>
SQL>  create materialized view pct_mv
  2     refresh fast
  3     on commit
  4     as
  5     select mth,order_id, sum(pct) pct, count(pct) c1, count(*) c2
  6    from pct
  7      group by mth,order_id;

Materialized view created.

SQL> delete MV_CAPABILITIES_TABLE;

14 rows deleted.

SQL> exec dbms_mview.explain_mview('pct_mv');

PL/SQL procedure successfully completed.

SQL>
SQL> select * from  MV_CAPABILITIES_TABLE
  2  @pr
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : PCT
POSSIBLE                      : N
RELATED_TEXT                  :
RELATED_NUM                   :
MSGNO                         :
MSGTXT                        :
SEQ                           : 1
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : REFRESH_COMPLETE
POSSIBLE                      : Y
RELATED_TEXT                  :
RELATED_NUM                   :
MSGNO                         :
MSGTXT                        :
SEQ                           : 1002
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : REFRESH_FAST
POSSIBLE                      : Y
RELATED_TEXT                  :
RELATED_NUM                   :
MSGNO                         :
MSGTXT                        :
SEQ                           : 2003
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : REWRITE
POSSIBLE                      : N
RELATED_TEXT                  :
RELATED_NUM                   :
MSGNO                         :
MSGTXT                        :
SEQ                           : 3004
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : PCT_TABLE
POSSIBLE                      : N
RELATED_TEXT                  : PCT
RELATED_NUM                   : 69
MSGNO                         : 2068
MSGTXT                        : relation is not a partitioned table
SEQ                           : 4005
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : REFRESH_FAST_AFTER_INSERT
POSSIBLE                      : Y
RELATED_TEXT                  :
RELATED_NUM                   :
MSGNO                         :
MSGTXT                        :
SEQ                           : 5006
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : REFRESH_FAST_AFTER_ONETAB_DML
POSSIBLE                      : Y
RELATED_TEXT                  :
RELATED_NUM                   :
MSGNO                         :
MSGTXT                        :
SEQ                           : 6007
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : REFRESH_FAST_AFTER_ANY_DML
POSSIBLE                      : Y
RELATED_TEXT                  :
RELATED_NUM                   :
MSGNO                         :
MSGTXT                        :
SEQ                           : 7008
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : REFRESH_FAST_PCT
POSSIBLE                      : N
RELATED_TEXT                  :
RELATED_NUM                   :
MSGNO                         : 2157
MSGTXT                        : PCT is not possible on any of the detail tables in the materialized view
SEQ                           : 8009
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : REWRITE_FULL_TEXT_MATCH
POSSIBLE                      : N
RELATED_TEXT                  :
RELATED_NUM                   :
MSGNO                         : 2159
MSGTXT                        : query rewrite is disabled on the materialized view
SEQ                           : 9010
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : REWRITE_PARTIAL_TEXT_MATCH
POSSIBLE                      : N
RELATED_TEXT                  :
RELATED_NUM                   :
MSGNO                         : 2159
MSGTXT                        : query rewrite is disabled on the materialized view
SEQ                           : 10011
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : REWRITE_GENERAL
POSSIBLE                      : N
RELATED_TEXT                  :
RELATED_NUM                   :
MSGNO                         : 2159
MSGTXT                        : query rewrite is disabled on the materialized view
SEQ                           : 11012
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : REWRITE_PCT
POSSIBLE                      : N
RELATED_TEXT                  :
RELATED_NUM                   :
MSGNO                         : 2158
MSGTXT                        : general rewrite is not possible or PCT is not possible on any of the detail tables
SEQ                           : 12013
==============================
STATEMENT_ID                  :
MVOWNER                       : MCDONAC
MVNAME                        : PCT_MV
CAPABILITY_NAME               : PCT_TABLE_REWRITE
POSSIBLE                      : N
RELATED_TEXT                  : PCT
RELATED_NUM                   : 69
MSGNO                         : 2068
MSGTXT                        : relation is not a partitioned table
SEQ                           : 13014

PL/SQL procedure successfully completed.

SQL>  alter materialized view pct_mv
  2  add constraint pct_is_100_x
  3  check (pct=100) deferrable;

Materialized view altered.

SQL> update pct set pct=51 where emp_Id=1;

1 row updated.

SQL>
SQL> commit;
commit
*
ERROR at line 1:
ORA-12008: error in materialized view or zonemap refresh path
ORA-02290: check constraint (MCDONAC.PCT_IS_100_X) violated


SQL>

