Why commit/rollback or any DDL command not allowed in trigger or function?


Question and Answer

Chris Saxon

Thanks for the question, tushar.

Asked: February 12, 2020 - 7:30 pm UTC

Answered by: Chris Saxon - Last updated: February 14, 2020 - 11:32 am UTC

Category: PL/SQL - Version: 12c


You Asked

Hi Tom,

Theoretically I know that commit/rollback/DDL, or anything else that causes the transaction to end, is not allowed in a trigger, or in a function called from a SQL statement. To use any of those in a trigger/function we can use PRAGMA AUTONOMOUS_TRANSACTION. However, I am more interested in knowing WHY it is not allowed. If you can provide a practical example that would be great.

For example, in a row-level trigger I have an audit table (MYAUDIT) where I insert a record (recording who ran the update and when) whenever an update fires on a table, say EMPLOYEE. Since my DML is on the EMPLOYEE table, why am I not allowed to log a record in the audit table (MYAUDIT), knowing that the audit table won't break data consistency/integrity in the EMPLOYEE table?

Assume a simple function that returns an employee's salary when you pass it an employee number. The function also inserts one record for that employee number, along with a pay status of LOW/MEDIUM/HIGH, into a PAYSCALE table before returning the value. Why can't I commit the insert on the PAYSCALE table (within the function) and call this same function in a SELECT query against the EMPLOYEE table?

NOTE: These are just example scenarios. You may find them contrived, but I am just trying to keep things simple.

I tried Google but could not find a solid answer as to why. Most sources ask "what if the next DML fails?", and my counter-argument to myself was that I still want to log and commit the information about the DMLs that succeeded.

Thanks for looking and your valuable time.

and we said...

It breaks transactional consistency.

A SQL statement should be an atomic unit. Either it processes all the data or none.

Allowing commit/rollback in triggers, or in functions called from SQL, breaks this rule. An update of 100 rows could get part-way through, encounter an error, then undo its work. If a trigger has been committing as the statement goes along, undoing that work is impossible.

my argument to myself was i still care to log & commit succeeded DMLs info

The point is the DML hasn't succeeded until it's updated all 100 rows.

For example:

I create a table of the numbers 1 .. 10. These must be greater than zero.

I then update them all, decreasing each value by 1. So the row with value 1 will raise an exception:

create table t (
  c1 check ( c1 > 0 )
) as
with rws as (
  select level x from dual
  connect by level <= 10
)
  select * from rws
  order  by 1 desc;

create or replace trigger trg 
before update on t
for each row
begin
  dbms_output.put_line ( 
    'Updating ' || :old.c1 || ' to ' || :new.c1 
  );
end;
/

update t
set    c1 = c1 - 1;

Updating 10 to 9
Updating 9 to 8
Updating 8 to 7
Updating 7 to 6
Updating 6 to 5
Updating 5 to 4
Updating 4 to 3
Updating 3 to 2
Updating 2 to 1
Updating 1 to 0

ORA-02290: check constraint (CHRIS.SYS_C0016806) violated

If I could commit in the trigger, all the changes made before updating row 1 would be saved. This breaks a fundamental principle of SQL.

Any writing you do to audit tables or similar is irrelevant. Allowing commits in the trigger breaks the "all or nothing" rule of SQL.

Of course, having a large UPDATE blow up because of one little error is annoying.

There are various tricks to get around this problem. The most widely applicable is the LOG ERRORS clause. This allows the database to record details of failing rows, while still completing the update "successfully".
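As a minimal sketch (reusing the table T from the example above; DBMS_ERRLOG names the log table ERR$_T by default):

```sql
-- Create the error logging table for T (named ERR$_T by default)
exec dbms_errlog.create_error_log ( dml_table_name => 't' );

-- Rows that violate the check constraint are recorded in ERR$_T
-- instead of failing the whole statement
update t
set    c1 = c1 - 1
log errors into err$_t ( 'decrement run' )
reject limit unlimited;

-- The nine valid rows are updated; the one failing row is logged
select ora_err_mesg$ from err$_t;
```

The update itself completes, so the "all or nothing" rule is preserved while you still keep a record of the rows that failed.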

Not convinced yet?

How about a more realistic example modelling bank transactions:

- There's a trigger on the transactions table
- When you insert a transaction, this updates the balances of the accounts involved
- If an account exceeds its credit limit, raise an error

I've used an autonomous transaction to allow committing of half a transaction. The trigger commits and raises an exception half-way through, leading to out-of-sync balances:

create table transactions (
  from_acct int, to_acct int, amount number
);
create table accounts (
  account_id int, balance number
);

insert into accounts values ( 1, 100 );
insert into accounts values ( 2, 100 );

select sum ( balance ) from accounts;

SUM(BALANCE)
         200

create or replace trigger trg 
before insert on transactions
for each row
declare
  pragma autonomous_transaction;
begin
  update accounts 
  set    balance = balance + :new.amount
  where  account_id = :new.to_acct;
  commit;

  -- the credit-limit check fails part-way through the trigger
  raise_application_error (
    -20001,
    :new.from_acct || ' exceeded credit limit'
  );

  update accounts 
  set    balance = balance - :new.amount
  where  account_id = :new.from_acct;
end;
/

insert into transactions values ( 1, 2, 50 );

ORA-20001: 1 exceeded credit limit

select sum ( balance ) from accounts;

SUM(BALANCE)
         250


Yes, the example above is terrible coding. But remember: triggers can fire more triggers, and functions can call functions, leading to all sorts of knock-on effects.

In the general case, allowing commits could lead to all sorts of nasty surprises!

So it's disallowed.
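And if you try a plain COMMIT in a trigger body (without PRAGMA AUTONOMOUS_TRANSACTION), Oracle stops you at runtime. A minimal sketch, reusing the table T from above:

```sql
create or replace trigger trg
before update on t
for each row
begin
  commit;  -- not autonomous, so Oracle rejects this
end;
/

update t
set    c1 = c1 - 1;

ORA-04092: cannot COMMIT in a trigger
```

The trigger compiles fine; the error only appears when the trigger actually fires.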

and you rated our response



Thank you very much

February 14, 2020 - 2:21 am UTC

Reviewer: Tushar Lapani from Pittsburgh

I am 100% convinced now why it's not allowed in a trigger, but I'm still not on board with why it's not allowed in a function. It would be great if you could share a scenario where a commit in a function causes inconsistency.

Due to time crunch you may not get the chance, and I couldn't ask for a bigger favor. This explanation is fair enough, at least for the trigger case.


Chris Saxon


February 14, 2020 - 11:32 am UTC

It's exactly the same problem.

In the update example, I could have a function decrement that subtracts one from its input, and assign the column value to that. e.g.:

update t
set    c1 = decrement ( c1 );

If there's a commit inside the function, it breaks transactional consistency in exactly the same way as the trigger.
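As a sketch of what that looks like (decrement is the hypothetical function named above; the commit inside it is the problem):

```sql
create or replace function decrement ( p_val number )
  return number
as
begin
  commit;  -- Oracle rejects this when the function
           -- is called from a SQL statement
  return p_val - 1;
end;
/

update t
set    c1 = decrement ( c1 );

ORA-14552: cannot perform a DDL, commit or rollback inside a query or DML
```

Without the commit, the function is callable from SQL; with it, every statement that invokes the function fails at runtime.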

More to Explore


The Oracle documentation contains a complete SQL reference.