pragma autonomous_transaction within procedure before creating synonyms

Question and Answer

Tom Kyte

Thanks for the question, Vijay.

Asked: September 17, 2008 - 9:56 am UTC

Answered by: Tom Kyte - Last updated: October 30, 2019 - 4:33 am UTC

Category: Developer - Version: 10.2.0

Viewed 50K+ times!


You Asked

Hi Tom,

I have created a stored procedure within an Oracle package which creates a list of synonyms depending on a change of dblink server name. I need to execute this procedure to create/replace synonyms pointing to another dblink server. My question is regarding the use of pragma autonomous_transaction at the beginning of the procedure which creates or replaces the synonyms. Is it a must to use pragma autonomous_transaction before creating synonyms within a procedure? Please suggest. Thanks.

Vijay

and we said...

no, you do not need autonomous transactions for this at all.
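To illustrate, a minimal sketch of such a procedure (all names here are hypothetical): DDL issued from PL/SQL only needs native dynamic SQL, no pragma involved.

```sql
-- Sketch: repoint every existing synonym at a new dblink. No autonomous
-- transaction is needed; EXECUTE IMMEDIATE handles the DDL.
-- Note that each CREATE SYNONYM, like all DDL, issues an implicit commit,
-- so call this when there is no other in-flight work you need to protect.
create or replace procedure repoint_synonyms( p_dblink in varchar2 )
as
begin
    for s in ( select synonym_name, table_name from user_synonyms )
    loop
        -- consider validating p_dblink (e.g. via dbms_assert) before
        -- concatenating it into the DDL string
        execute immediate
            'create or replace synonym ' || s.synonym_name ||
            ' for ' || s.table_name || '@' || p_dblink;
    end loop;
end repoint_synonyms;
/
```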

In fact, I wish they did not exist, they are so misused.


They (autonomous transactions) are useful in one case

to log an error message


this is "ok"

create or replace procedure log_error( ..... )
as
   pragma autonomous_transaction;
begin
   insert into logging_table (...) values (...);
   commit;
end;



and then in your plsql code you can:


begin
    .... your code
exception
    when others then 
           log_error( .... );
           RAISE;  /* this is crucial, you MUST re-raise the exception */
end;



that is about the only time you ever want to use an autonomous transaction - we want to log the error, but not mess around with the existing transaction state.
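One detail worth spelling out in that pattern: the autonomous routine must COMMIT (or ROLLBACK) before it returns, or Oracle raises ORA-06519. A slightly fleshed-out sketch (logging_table and its columns are hypothetical):

```sql
create or replace procedure log_error( p_code in number, p_msg in varchar2 )
as
    pragma autonomous_transaction;
begin
    insert into logging_table ( logged_at, err_code, err_msg )
    values ( systimestamp, p_code, p_msg );
    commit;  -- mandatory: returning with the autonomous txn still open raises ORA-06519
end log_error;
/

-- typical call site:
--   exception
--       when others then
--           log_error( sqlcode, substr( sqlerrm, 1, 4000 ) );
--           raise;   -- re-raise, so the caller still sees the failure
```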

and you rated our response

  (12 ratings)


Reviews

Use of pragma autonomous_transaction

September 18, 2008 - 10:34 am UTC

Reviewer: Vijay Kodia from NH USA

Thanks for your feedback, appreciated.

another useful use case for this feature

September 18, 2008 - 4:21 pm UTC

Reviewer: A reader

... might be using nextval of a sequence ?

anyone who doesn't like the ability for pl/sql-programmers to open an autonomous transaction is free to downgrade to 8.0 or prior
Tom Kyte

Followup  

September 18, 2008 - 9:52 pm UTC

huh, what you just said makes absolutely no sense whatsoever.


why would using nextval of a sequence be a "useful use case for this feature"

explain yourself - as it stands, it makes no sense.

Why raise the exception again?

September 25, 2008 - 6:57 pm UTC

Reviewer: Carlos from Toledo, Ohio, USA

Tom, you say:

log_error( .... );
RAISE; /* this is crucial, you MUST re-raise the exception */

Is this crucial so that the autonomous transaction can work? Or you are saying this in order to manage the exception somewhere else in the code...

Please clarify.

Thanks

Carlos
Tom Kyte

Followup  

September 26, 2008 - 1:08 pm UTC

when others 
  then log_error;
end;


is for all intents and purposes the same as
when others
  then null;
end;



http://www.google.com/search?q=site%3Atkyte.blogspot.com+when+others+then+null


it hides the fact that an unrecoverable, unexpected, uh-oh - something bad happened error occurred from the invoker of your procedure. They have a right and a need to know.

And they should not catch it - in fact - you should not catch it. At most, the TOP LEVEL plsql block MIGHT BE:

begin
   procedure( :x, :y, ... );
exception
   when others
   then 
        log_error;
        RAISE;
end;


instead of

begin procedure(:x,:y,....); end;



but that is the ONLY bit of code that should have the when others - the top level. Nothing below should have it at all.

That is, when writing your routines, do not catch when others - the client will, and the client might submit a block like I coded above to catch it, log it BUT YET RE-RAISE IT (so the client knows "whoops, we are dead, goodbye")

One other use for autonomous transactions

October 08, 2008 - 5:54 pm UTC

Reviewer: Nick Pierpoint from Holmfirth, UK.

I agree that autonomous transactions are greatly overused and misused, but I have one other use for them as well as "to log an error message" (although it is a closely related use).

I use them to record where I am through a long-transaction so that other processes can poll a table to know what's-what.

For example:

procedure set_process_status (
        p_process_code varchar2,
        p_process_status varchar2)
is

    pragma autonomous_transaction;

begin

    update process_status
    set
        process_status = p_process_status
    where
        process_code = p_process_code;

end set_process_status;


Tom Kyte

Followup  

October 08, 2008 - 10:24 pm UTC

dbms_application_info.set_session_longops

use that instead (please!!!!!)
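For anyone landing here, a sketch of what that looks like (the op_name/units strings are made up):

```sql
declare
    l_rindex    binary_integer := dbms_application_info.set_session_longops_nohint;
    l_slno      binary_integer;
    l_totalwork number := 1000;           -- hypothetical number of work units
begin
    for i in 1 .. l_totalwork
    loop
        -- ... process one unit of work ...
        dbms_application_info.set_session_longops(
            rindex    => l_rindex,
            slno      => l_slno,
            op_name   => 'nightly load',  -- hypothetical label
            sofar     => i,
            totalwork => l_totalwork,
            units     => 'rows' );
    end loop;
end;
/
```

Other sessions can then poll v$session_longops (sofar, totalwork, time_remaining) - no status table, no trigger, no autonomous transaction.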

Probably right.

October 09, 2008 - 11:24 am UTC

Reviewer: Nick Pierpoint from Holmfirth, UK.

You're probably right, but I avoided using set_session_longops as I didn't want the processing information visible to other users. Not a great reason.

Would set_session_longops be doing an autonomous transaction "behind the scenes"?
Tom Kyte

Followup  

October 09, 2008 - 2:09 pm UTC

no, it doesn't need to - v$ tables are for the most part "views of in memory data structures"


using nextval of a sequence ...

November 09, 2008 - 1:35 pm UTC

Reviewer: Sokrates

... is using an autonomous transaction "behind the scenes"
Probably this is what "a reader" meant on Sept 18th.
Tom Kyte

Followup  

November 11, 2008 - 3:07 pm UTC

autonomous transactions are the feature you and I have access to.

the behind the scenes transactions like updating seq$ have been happening since way way way before 8.0

so again, even that would not make sense. 'a reader' said "go back to 8.0".

Ok, 8.0 has no autonomous transactions (back when life was good)
8.0 allows you to do seq.nextval.

therefore, autonomous transactions - the feature you and I have access to - are not necessary for seq.nextval.
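You can see for yourself that the increment lives outside your transaction, without any pragma at all:

```sql
create sequence s;

select s.nextval from dual;   -- returns 1
rollback;
select s.nextval from dual;   -- returns 2: the increment survived the rollback,
                              -- because the recursive update of seq$ was never
                              -- part of your transaction to begin with
drop sequence s;
```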

There are a lot of database internals that should never be exposed to most people, this is one of them.


Flashback query existed since version 4.0
It was first 'exposed' as a read only transaction in 7.0
As serializable transactions in 7.3.4
As flashback query in 9iR1

It was a good thing to expose.


Autonomous transactions - not so...

Other uses of autonomous transaction

March 29, 2012 - 5:26 am UTC

Reviewer: Aalbert from The Netherlands

You said:
> no, you do not need autonomous transactions for this at all.
> In fact, I wish they did not exist, they are so misused.
>
> They (autonomous transactions) are useful in one case
>
> to log an error message

There are other uses.

In our system, we use triggers to log who made the last change to a record. However, sometimes we need to fix stuff without touching that information, so we have to turn the triggers off. If we did that while the system is in use, other users wouldn't be firing the triggers either, so we need to lock the table.
First we lock, then we disable the triggers. But disabling triggers is DDL and thus an automatic commit, which releases the lock.
Autonomous transactions to the rescue!
First we lock the table, then we use an autonomous transaction to disable the triggers, then we do our work, then we use another autonomous transaction to enable the triggers, and then we commit, releasing the lock.

Do you see any flaws in this method, or would you agree that this is legitimate use of autonomous transactions?

Example:
CREATE TABLE ats_test(
 x int NOT NULL PRIMARY KEY,
 last_change_user VARCHAR2(100) NOT NULL
);

CREATE TRIGGER biur_ats
BEFORE INSERT OR UPDATE OF x ON ats_test
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
DECLARE
BEGIN
 SELECT user INTO :new.last_change_user FROM dual;
END;
/

INSERT INTO ats_test(x) VALUES(1);
INSERT INTO ats_test(x) VALUES(2);
INSERT INTO ats_test(x) VALUES(3);
INSERT INTO ats_test(x) VALUES(4);

SELECT * FROM ats_test;

COMMIT;

-- Later...
-- O no! Those are the wrong values!
-- They need to be twice as big.
-- A script is created to correct them,
-- but it is run using another user
-- which we don't want to show in the
-- records.
-- What do we do?

LOCK TABLE ats_test IN EXCLUSIVE MODE;

DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  EXECUTE IMMEDIATE 'ALTER TRIGGER biur_ats DISABLE';
END;
/

UPDATE ats_test
SET x = x * 2
WHERE x BETWEEN 1 AND 4;

SELECT * FROM ats_test;

DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  EXECUTE IMMEDIATE 'ALTER TRIGGER biur_ats ENABLE';
END;
/

COMMIT;


--DROP TRIGGER biur_ats;
--DROP TABLE ats_test

Tom Kyte

Followup  

March 29, 2012 - 7:09 am UTC


I would have just had an admin package:

create or replace package my_admin_package
as
   g_disable_trigger boolean := false;

   ... other useful stuff for admins .....
end;
/



and then your trigger would have as line 1:

if (my_admin_package.g_disable_trigger) then return; end if;


and you could just:

lock table
exec my_admin_package.g_disable_trigger := true;
update
commit;



no ddl, no flushing of tons of sql from the shared pool.
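Put together, the whole approach is only a few lines (a sketch; the trigger shown is a rewrite of the ats_test one from the review above):

```sql
create or replace package my_admin_package
as
    g_disable_trigger boolean := false;
end my_admin_package;
/

create or replace trigger biur_ats
before insert or update of x on ats_test
for each row
begin
    if ( my_admin_package.g_disable_trigger ) then return; end if;
    :new.last_change_user := user;
end;
/

-- in the fixing session (exec is SQL*Plus syntax):
lock table ats_test in exclusive mode;
exec my_admin_package.g_disable_trigger := true
update ats_test set x = x * 2 where x between 1 and 4;
exec my_admin_package.g_disable_trigger := false
commit;   -- releases the lock
```

The package flag is session-private state, so every other session keeps firing the trigger - which is exactly why no DDL (and no table-wide trigger disable) is needed; the lock is only there if you want to keep other changes out while the data is being fixed.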



better yet, get rid of those triggers.
http://www.oracle.com/technetwork/issue-archive/2008/08-sep/o58asktom-101055.html

by the way, don't select into when a simple assignment would do the trick.

:new.last_change_user := user;



Legitimate use of Autonomous Transaction?

April 10, 2012 - 2:57 pm UTC

Reviewer: Matthew McPeak from Secaucus, NJ

Almost afraid to mention this one...

Client had Oracle Configurator installed, which, by design runs on a separate server and requires a database connection. So, we had:

1) Process A (on database server) create a sales order header and lines.
2) Process A -- call Oracle Configurator (via HTTP) to validate configuration
3) Process B (on Oracle Configurator server) -- connect to database and validate configuration and respond to Process A.
4) Process A -- commit or rollback sales order header & lines depending on success of Process B.

Unfortunately, there were Configurator Extensions (the way Oracle lets customers extend the functionality of Oracle Configurator) that needed to see information about the sales order being validated and this information was beyond what is included in the HTTP message to the Configurator server.

So... I had Process A write its uncommitted work to a table via an autonomous transaction so that Process B could see it. It was a little more "robust" than that, but that's the gist.

What do you think: "legitimate" or "I hate your code"?

Thanks, as always!

Matt

Tom Kyte

Followup  

April 11, 2012 - 11:13 am UTC

Ok, that sounds legit.

The data written by A was an input to B and not part of A's transaction really.

I'll up the count to two :) the number of legitimate uses...

#3 maybe?

May 02, 2012 - 10:05 am UTC

Reviewer: Matthew McPeak from Secaucus, NJ

Tom -- this issue just came up...

We have a PL/SQL program that is basically...

CURSOR LOOP...
SAVEPOINT A;
BEGIN
DO STUFF... CALL 3rd PARTY APIS... DO MORE STUFF...
EXCEPTION
WHEN OTHERS THEN
ROLLBACK TO A;
UPDATE TABLE STATUSES TO INDICATE ERROR;
COMMIT; -- STATUS UPDATES
END;
END LOOP;

In other words, on unexpected errors, we want to record the error and process the next record. One record with a data issue should not keep the whole batch from processing (that's an explicit business requirement).

The issue is that one of those 3rd party APIs we are calling (3rd party = Oracle!) is committing. So, when there is an exception, our savepoint has already been lost and our exception handler gets an ORA-01086.

The expedient thing to do is wrap the call to the offending 3rd party API in an autonomous transaction... so when it commits it is not committing our main transaction.

Even as I'm advising the developer to try it, I'm thinking "Tom probably would have some choice words about this".

Is this a bad idea? I can't really see the downside -- the API already has a commit in it so transaction control is out the window, which is usually why I think you don't want to use autonomous transactions willy-nilly.

Any better ideas?

Thanks in advance!

Matt





Tom Kyte

Followup  

May 02, 2012 - 2:10 pm UTC

Now you know why I wish commit and rollback did not exist in PLSQL! I truly do!


Probably you need to change your processing so that you commit in parity with them - so your process is restartable. You would log the success or failure of every record processed and add code that reads this log at the beginning to figure out where to pick up from (or have it remove the record to be processed from the inbound processing queue).


so - more like this:
figure out where to start
cursor for 
loop
   begin
      do something
      call 3rd party api as an autonomous transaction
      do something  
      log_success;
   exception 
   when others
   then
        rollback;
        log_error;
   end;
   commit;
end loop;


It isn't perfect - and cannot be because of the commits in that 3rd party API - which if you ask me should have a bug filed against them as it makes them "not useful" in real life - since they do commit.
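For what it's worth, a concrete sketch of that loop (every name here is hypothetical, and third_party_api stands in for the committing API):

```sql
-- quarantine the committing API inside an autonomous transaction
create or replace procedure call_api_autonomously( p_id in number )
as
    pragma autonomous_transaction;
begin
    third_party_api( p_id );  -- hypothetical: commits internally
    commit;                   -- closes out the autonomous txn; the main txn is untouched
end call_api_autonomously;
/

-- restartable driver: one commit per record, success/failure recorded as we go
begin
    for r in ( select id from inbound_queue where status = 'PENDING' order by id )
    loop
        begin
            -- ... do something ...
            call_api_autonomously( r.id );
            -- ... do something else ...
            update inbound_queue set status = 'DONE' where id = r.id;
        exception
            when others then
                rollback;  -- undoes this record's work only; the previous
                           -- record was already committed
                update inbound_queue set status = 'ERROR' where id = r.id;
        end;
        commit;
    end loop;
end;
/
```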


Thanks!

May 03, 2012 - 9:45 am UTC

Reviewer: Matthew McPeak from Secaucus, NJ

Thanks, Tom! If there is an exception after the 3rd party API call, we'll roll back and leave orphaned the record created by the 3rd party API. Ugly, but I think I can live with it.

In case anyone questions the autonomous transaction, I had the developer put a comment in the code saying "Tom Kyte said this was OK". (kidding).

Thanks again, as always!

Matt



autonomous transaction suggested use #3 / from Andre in Switzerland

October 28, 2019 - 4:03 pm UTC

Reviewer: Andre from Switzerland

Dear Tom,

We have 26 databases with several DB_LINKS primarily for streams replication as well as MV replication

I would like to run by you a suggested use #3 for autonomous transactions

Database A receives data to load and process
Streams replicates the data to database B
Database D (Data-Mart) needs to have its MVs refreshed shortly after
(1) All data in A had been processed and streamed to B
(2) Additional processing in B completes successfully

Additional considerations:
a) data volumes vary from day to day = time window 1-4 hours
b) there are delays in processing that extend to 10-12 hours
c) there are various incidents (on a weekly basis) and these need to get resolved case-by-case

Proposed use of autonomous TXN (tested to be working very well) is as follows:
(I) Procedure X in database A updates the audit record in B
(II) Procedure Y in database B updates the audit record also
Instead of having this audit table in database A or B, it is in database D.
A trigger with PRAGMA Autonomous_Transaction fires a REFRESH procedure in database D, taking data from the MV logs stored in B.

This approach makes the solution 100% automatic.
The previous method was to rely on emails generated at A and at B databases and then submit a job to perform a refresh.

There are dozens of similar cases and automation saves time.

Are you OK with this approach or would offer a better one.

Thanks
Kind regards
Andre (Switzerland)

Connor McDonald

Followup  

October 30, 2019 - 4:33 am UTC

My only concern there is this scenario:

Database A and/or B are up and available, and D is down for maintenance/patching etc.

Currently your applications in A and B will still work fine, because they do not have any dependency on D being up.

Under your proposal, if D is down, then in effect, so are A and B because that trigger will fail to fire.

January 06, 2020 - 2:49 am UTC

Reviewer: Andre

Dear Connor, Tom et al on your very special site:

Let me clarify..:

1) The database "D" is a Data-Mart with an Application + queries + BO reporting etc - ALL accessing MV's
2) The MV's on "D" used to be refreshed regularly with a dedicated scheduler which would run regardless of the status of database "B"
3) If DB "D" is down (which would be indeed very rare) then surely no MV refresh would be done
4) Normally MV's should be refreshed to provide business with data which:
   a) is in sync with database "B" (stage)
   b) also contains ALL data loaded into "A" replicated via STREAMS from "A" to "B"

The solution that I had developed (described in my original report) is IMHO better than the previous one, which would execute the MV refresh at a specific time (once a day) regardless of which processes had finished on "A" and regardless of any possible issues with STREAMS.

The solution that I implemented does not depend on the chance that all processes have completed, including STREAMS replication (which gets stuck 30-50% of the time). Instead, the same procedure checks that ALL work has been done, including replication as well as an additional process on "B", and only then triggers the MV refresh on "D" - in which it is using an AUTONOMOUS TXN.

This works fine.
It does deliver results.
MV's get refreshed no sooner than the entire batch of files gets loaded, and also without any delay and without any intervention.

I had shared this with you only to check if you are OK with
such a use of AUTONOMOUS TXN - as I know how Tom is against it
and ... I do actually AGREE - for very good and valid reasons.

So, hopefully you and Tom might endorse this as one more
use of the AUTONOMOUS_TRANSACTION feature .

Thanks
Best wishes
Andre