
Question and Answer

Tom Kyte

Thanks for the question.

Asked: February 25, 2001 - 5:55 am UTC

Last updated: September 10, 2012 - 6:28 pm UTC

Version: 8.1.6


You Asked

1) Is it required that all the SQL statements in my application (consisting of beans, servlets and JSPs) be followed by a commit statement?

what happens if I do not issue a commit after each SQL statement? do they commit when the session ends?
is there any feature where I can set auto commit to true or false in the database server itself?
how should it actually be done?

2) I have a schema with all my tables (with keys), views, sequences, triggers, etc.

I implement fine grained access control on this, and my policy and context also exist in this schema (security-wise, is that OK or should I keep the security policy in SYSTEM?)

I have different users who use these tables, views, etc.
For which objects should I create private synonyms in those users/schemas, and for which objects should I give grants?

I understand that even if I create a sequence I have to grant on it to the user..
but what about keys and indexes - do I need to do anything for those?

my understanding is I have to create synonyms for dblinks, functions, packages, procedures, sequences, tables, views, triggers and then give grants accordingly...
did I miss anything?

3) I am using multimaster replication for the above scenario, so what else do I need to take care of? e.g. what are the objects
for which I have to create snapshots and groups, etc.?

regards
muralidhar



and Tom said...

1) if you are using JDBC, it unfortunately "auto commits" after each and every statement by default. It doesn't do transactions. I believe the first line of code after connecting in JDBC should always be:

conn.setAutoCommit (false);


Here are the rules for when to commit:

o you should commit as soon as you can
o you should NEVER commit earlier than you should.

You would only commit after each statement if in fact EACH statement was an entire transaction. If you were doing an ATM transfer for example and needed to do the SQL:

update accounts set balance = balance - 1000 where
account_id = 123;
update accounts set balance = balance + 1000 where
account_id = 456;

to transfer $1000 from savings into checking, you would NOT want to auto commit or commit at all between statements (what if the second update fails, you just "lost" $1,000). You would want to commit right after the transfer was complete in order to make the transaction "durable" -- permanent.
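
A rough sketch of that transfer on the JDBC side (conn is assumed to be an open java.sql.Connection; the account ids are just the illustrative ones above):

import java.sql.*;

static void transfer(Connection conn) throws SQLException {
    conn.setAutoCommit(false);                    // take transaction control away from the driver
    try (PreparedStatement ps = conn.prepareStatement(
             "update accounts set balance = balance + ? where account_id = ?")) {
        ps.setInt(1, -1000);  ps.setInt(2, 123);  // debit savings
        ps.executeUpdate();
        ps.setInt(1, 1000);   ps.setInt(2, 456);  // credit checking
        ps.executeUpdate();
        conn.commit();                            // the transfer is complete -> make it durable
    } catch (SQLException e) {
        conn.rollback();                          // neither update survives a failure
        throw e;
    }
}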


You should ALWAYS commit explicitly and you should always commit when your business logic dictates it -- not after every statement, not relying on the database to do it when you log off (if you log off, Oracle will commit your work before it does so, yes)

2) You only need to grant on the objects they will be accessing. If you grant execute on a procedure, they need NO grants on the tables the procedure accesses. If you grant select on a table, they need NO access to the indexes or constraints. You grant only on the top-level object.

Synonyms are just aliases and have no privileges associated with them. You always need to grant on the objects to which they refer for them to work.

You do not have to create synonyms at all, they are optional. You can use "alter session set current_schema=APPLICATION_SCHEMA" instead.

3) I cannot answer that since I didn't write your application ;)

If you are using MM replication -- you don't need to have snapshots and snapshot groups (those are different from Multi-Master)....



Comments

Very helpful, and complete

Harrison Picot, April 05, 2001 - 10:51 am UTC


Control the DB commit at the Java level

Bala, May 20, 2003 - 4:55 pm UTC

Hi Tom,

One question.

This is the process the application is currently doing.

During order creation, Java code populates the user data into result beans, and each result bean is mapped to the database tables. During order checkout, the individual stored procedures are called and the database insert/update is done and committed at each stored procedure level.

But logically the transaction should be committed only after all the stored procedure calls are successful without any errors.

Also the stored procedure (back end) is returning the unique id (sequence no) back to the client at every stored procedure completion.


My questions are :
==================

Say I call 5 stored procedures from Java code. Each stored procedure creates records in some tables. In case the 5th stored procedure call fails (for some reason), I have to roll back all the records up to that point in time.

Can I do this?

I will remove the commit at the stored procedure level. Each stored procedure returns success or failure back to the caller in JDBC.

I want to control the commit at the Java code level.
If all the stored procedure calls are successful then I will commit on the Java side, else roll back all the records since the beginning of the transaction.

Do you see any drawback with this approach?

Please advise.

Thanks

In fact

Tom Kyte
May 21, 2003 - 7:25 am UTC

yes you can do this.

I wish PLSQL didn't support commit/rollback. I firmly believe transaction control MUST be done at the topmost, invoker level. That is the only way you can take these N stored procedures and tie them together in a transaction.
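
A minimal sketch of what that looks like from JDBC (the five procedure names below are placeholders, not anything real):

import java.sql.*;

static void createOrder(Connection conn) throws SQLException {
    conn.setAutoCommit(false);                     // the Java caller owns the transaction
    String[] procs = { "create_header", "create_lines", "reserve_stock",
                       "price_order", "log_order" };   // placeholders for your 5 procedures
    try {
        for (String proc : procs) {
            try (CallableStatement cs = conn.prepareCall("{ call " + proc + "(?) }")) {
                cs.registerOutParameter(1, Types.NUMERIC);  // e.g. the sequence id each one returns
                cs.execute();
            }
        }
        conn.commit();                             // all five succeeded - one atomic unit of work
    } catch (SQLException e) {
        conn.rollback();                           // any failure undoes everything
        throw e;
    }
}

The key point is that none of the five procedures commits; only the caller does.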

Thank you for your confirmation.

A reader, May 21, 2003 - 3:34 pm UTC


further clarification

Vijay Sehgal, May 29, 2003 - 10:00 am UTC

Hi Tom,
Good day. Please advise on this. In the project I am working on, the database procedures are called from servlets using JDBC; auto commit is off and the commit/rollback is done in the procedures on success/failure of the transaction. What is your suggestion - should the commit always be issued from the servlets (or whatever front-end technology is used), or is it OK if it is done in the procedures for the complete transaction? Which is the best way?

Thanks as always.

Regards,
Vijay

Tom Kyte
May 29, 2003 - 10:06 am UTC

well, will you someday want to call procedure "P1" and "P2" as an atomic unit of work? If so, p1 and p2 should not commit or rollback.

Me, I believe the client should be responsible for this.

Commit big amount of data

Danny Wang, May 29, 2003 - 12:17 pm UTC

Hi Tom,

We have a single transaction which will insert 40M+ rows into a table (6 indexes associated with it) and it needs 50GB+ rollback segment. Also, there may be more rows in the future.

Here is my question.

What's your suggestion about the maximum rollback segment usage per atomic transaction? It couldn't be unlimited, right?

Thanks

Tom Kyte
May 29, 2003 - 1:47 pm UTC

I wouldn't have the indexes enabled. I would need about 0 bytes of rollback to do this.

I would disable indexes
I would nologging the table
I would direct path insert
I would rebuild the indexes in parallel/nologging
I would backup the affected tablespaces
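
A rough sketch of that sequence driven from JDBC (the table, index and staging names are invented, and the exact options vary by version):

import java.sql.*;

static void bulkLoad(Connection conn) throws SQLException {
    conn.setAutoCommit(false);
    try (Statement s = conn.createStatement()) {
        s.execute("alter table big_t nologging");
        s.execute("alter index big_t_idx1 unusable");                 // repeat for each index
        s.execute("alter session set skip_unusable_indexes = true");  // non-unique indexes only
        // direct path insert: writes above the high water mark, needs almost no undo
        s.execute("insert /*+ APPEND */ into big_t select * from big_t_staging");
        conn.commit();                                                // required before touching big_t again
        s.execute("alter index big_t_idx1 rebuild parallel nologging");
        s.execute("alter table big_t logging");
        // ...then back up the affected tablespaces, since nologging work is not recoverable
    }
}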

commit issue

Reader, July 30, 2003 - 6:21 pm UTC

I have an issue. My script updates a table and commits every 15 minutes. However, after about 75% of the update was complete, the script failed. When I rerun the script, I don't want to redo the updates that were already committed. How would I do it? I am sure you have some ideas on how I should code this to avoid rerunning the updates that were already committed. Could you please help? Thanks.

Tom Kyte
July 30, 2003 - 7:45 pm UTC

you do that by removing the commit....

or, writing lots of code to make yourself restartable.

is there a generic way to do this? of course not, I've no idea or clue about your UNIQUE process that no one else in the world does...




Commits and Savepoints

Darren, July 31, 2003 - 3:16 am UTC

Tom,

Quick question on commits. I know that commits are synced to the LGWR process and hence are bad not only from an overall process-flow point of view but from a performance point of view too. Do savepoints also sync to the LGWR process and hence suffer performance hits?


Also, in a web application like asktom (mod_plsql), would you rely on mod_plsql doing the commit (it being the client) or would you do it in your topmost procedure?

Regards.


Tom Kyte
July 31, 2003 - 7:18 am UTC

savepoints are logical things only -- they let you group N statements together so as to allow you to rollback to savepoint (but not "commit to savepoint")

so no, they do not sync to disk.


We commit explicitly at the top level.
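
For what it's worth, JDBC exposes the same idea; a small sketch (table names are made up, java.sql imports assumed, conn is an open connection with auto-commit off):

try (java.sql.Statement stmt = conn.createStatement()) {
    stmt.executeUpdate("insert into order_hdr values (1, 'main part')");
    java.sql.Savepoint sp = conn.setSavepoint("optional_part");   // purely a logical marker, no disk sync
    try {
        stmt.executeUpdate("insert into order_audit values (1, 'optional part')");
    } catch (java.sql.SQLException e) {
        conn.rollback(sp);   // undo only the work since the savepoint; the transaction stays open
    }
    conn.commit();           // the commit is the only call here that waits on lgwr
}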

Very interesting and useful ...

Helena Marková, July 31, 2003 - 3:44 am UTC


commits

Duncan, November 28, 2003 - 7:17 am UTC

If using JDBC and setting autocommit off, then running a PL/SQL procedure that passes success or failure back to the app, the app has transaction control over the commit/rollback.

What happens, though, if the transaction fails and sends failure back to the app, which is then supposed to send the rollback command to the db - but the connection is severed before this rollback command can get to the db? Does the db do a commit? If so, is this not a bad thing?

Tom Kyte
November 28, 2003 - 10:34 am UTC

the db will rollback uncommitted work in reaction to a failure.

committing someone else' transaction?

Gus Spier, October 22, 2004 - 3:16 pm UTC

Tom,
A user went to asktom and found out how to delete the 68 duplicate rows out of the 47million row partitioned table. He managed to "lose" his connection to the database while the delete was running. Now he wants to know if he can issue a commit to that SID/SERIAL# from another session. Is that possible?

Tom Kyte
October 23, 2004 - 9:15 am UTC

it is (fortunately) quite impossible.

HTML DB autocommits?

Mark, February 09, 2005 - 7:05 pm UTC

What do you think of this, Tom? http://forums.oracle.com/forums/thread.jsp?forum=137&thread=288319&tstart=0&trange=15

Apparently we can't have full control over commits in HTML DB. Is there something I am missing - something fundamentally wrong with spanning a transaction over multiple pages? Is doing this outside the scope of HTML DB's intended purpose? Frankly, I was rather disappointed to read about this.

Interested in your opinion.

Thank you!
Mark


Tom Kyte
February 10, 2005 - 1:01 am UTC

on the web -- in a stateless environment, I would never let a transaction span pages (if you do, then the transaction must maintain state and resources on my server, meaning you would be taking resources on my server even when you were not really "using it")


So, on asktom, I have a multi-page "transaction" called "ask a question". I gather information on two screens (two pages), we let you review, manipulate, change the supplied information many times if you want - and then we 'commit it'

how?

by using the session state (saved in a table, persisted on the server) as a "transaction cache", you put your data in there, work on it and when you get to "save the question", we take all of this stuff and put it into the database in a single transaction (putting it finally into the right tables).

Sort of like a shopping cart. You put things in, you take them out and finally you go to the front of the store and "checkout". Then and only then is the real transaction "committed"

Also, html db provides for really robust lost update detection transparently -- if you query a row out, it'll make sure when you go to update it that it was not modified by some other transaction (when using their update methods -- if you write your own update in your own API, you have to use owa_opt_lock or some other method to provide the same)

RE: HTML DB autocommits?

Mark, February 10, 2005 - 11:35 am UTC

Thanks for the response. If I may play devil's advocate...

I feel it would be worth the overhead of having a persistent session. If I'm not mistaken, the web server would simply keep track of the user via a cookie, and map that cookie ID to the corresponding Oracle session ID. Once that's taken care of, each user would interact with the database just as if they each had SQL Plus open, only with a pretty GUI.

Having a session per page precludes:

- splitting up of really long forms into several pages without doing tricks - e.g. my form takes up 4 pages - I would like to have page 1 do an insert, then pages 2 and 3 do an update on the newly inserted row, and finally page 4 does an update and a commit (or a rollback if the user so chooses). Maybe they can even go back to page x and edit the previous info (another update). I just feel this would be more natural and elegant than something like asktom's transaction cache.

- use of neat Oracle features like on commit temporary tables, deferred constraints, MVs that refresh on commit - all are essentially confined to transactions on a single page

- It seems I have to use owa_opt_lock to control concurrency (thanks for pointing that out - I like to have most functionality in packages, so I'm not using HTML DB update methods)

So basically I feel like the user interface is imposing limitations on what I can and cannot do in the database, and that avoiding these limitations would be worth the session state overhead. Obviously others - who are probably more knowledgeable - disagree, but maybe it would be worth having the option, at least.

Thanks for hearing me out.
Mark


Tom Kyte
February 11, 2005 - 3:23 am UTC

o html db keeps a session state for you. splitting that form up into 4 pages is *trivial* - I do it here. I stated above "I collect information from you, it's in your session state". you store the form values in the session state (managed entirely by htmldb) and then you "post it" when they say "done"

it is easy, hugely easy. no tricks, nothing up my sleeve.

o same answer -- you process the transaction AS A TRANSACTION -- give me inputs, do transaction. if it took 15 pages to collect information, so be it, the TRANSACTION is at the end.

o html db does this for you if you want -- but I don't see this as an issue



My site would not be able to service the user community it does if I had persistent connections. people exit the browser, leave their session on my database. now I have to time it out. but what if I time you out simply because you waited 5 minutes (not because you exited the browser). now the app says "sorry -- you lose". My server would have to be huge to support hundreds of persistent sessions (instead of the 2-20 concurrent sessions I have right now) since at any point in time, there are hundreds of people that would be having a session. It would never work.

I guess we'll have to agree to disagree here. I don't believe in stateful connections. We called that client server in the past -- and the web made it so that systems can scale incredibly using stateless paradigms.

RE: HTML DB autocommits?

Mark, February 10, 2005 - 2:22 pm UTC

Well I just had a chat with a friend that made me change my mind. He isn't that experienced with Oracle, but he has a lot of web development experience, and just understands the concepts well. I'll post our log below, in case anyone is interested:

Friend: hmm..I like both options :-) but I think you are delving into features of HTMLDB that I don't know enough about to give a proper answer

Friend: though, I think I prefer his method..storing the data into a session state table, and when the user commits it takes the data out and moves it to the final destination

Friend: hmm..does the method he mention not work with packages?

Mark: the reason I like my method is you don't even need a session state table..you just update and insert into the "live" table, and until you commit it's like you have your own different version of the table - nobody else can see your changes until you commit

Mark: which is actually the standard way to do this stuff, but they imposed these limitations simply because they wanted to not keep track of a state, since they're using html.

Friend: true, but say the user jumps to a totally different area of the site, would that affect the transaction for that page? and how long would it take for the transaction to be cleaned up if the user never goes back to the page?

Mark: you could have it automatically rollback if you jump out of that section..like in my example of the four page form, you'd just do a rollback if you exited those four pages...rollback time depends on how many changes you make, but if you're just adding a few rows here and there it'll be close to instantaneous

Friend: but how would you signal the web server that you jumped out of that section if the user say hit the back button

Mark: hmm yeah that's true..would kind of depend on people using the links provided and not using the back button, or typing in random urls on their own..hmmmm

Friend: yeah

Mark: well then I'd say that anytime you start a new transaction, you do a rollback first...so if you hit the back button, you'd still have uncommitted changes, but if you ever go into another area where you are capable of committing (a separate 4-page form let's say) then your changes get rolled back...if you jump to page 3 in that second form, since you didn't insert the row in the first place (in form 2, page 1), you'd get an error, which could be handled gracefully - it would jump you to form 2 page 1, which would first do a rollback then begin that next transaction

Friend: then say you have two browser windows open, by say clicking a link while holding the shift down, at that point both windows would be using the same session..the second window could end up canceling the transaction of the first window

Mark: hmmm then maybe autonomous transactions would be a good solution, cause using those you can kinda define "sub" transactions which only get committed or rolled back from within that transaction...so each time the form was opened i'd start a separate autonomous transaction...only problem then is your uncommitted changes would hang around till you logged off or expired, which isn't a big deal as long they don't get huge

Friend: but how would you determine which form is actually being sent from the user? window 1 or window 2?

Friend: and if that is being used on the internet, you basically just gave people an easy way to take down your server :-)

Mark: i guess the name for the transaction would have to be generated and somehow associated with the page...maybe a hidden field...i see what you're sayin about people taking down the server...hmm

Friend: but even in a hidden field, you open a new window to the same page in the same session, that hidden field is the same too :-)

Friend: now granted that scenario might not be a big deal, and if you can recover easily from that no loss..but something to remember

Friend: which is kinda why I'm leaning towards the method asktom mentioned..as it will handle all these scenarios

Friend: including expiring the old data without you having to worry about it

Mark: well the hidden field would be the same in the new window if you already started the transaction, but if you opened page 1 in a new window - and so you had a new transaction begin there - you could have the two separate forms going without interfering with each other

Friend: true, that would at least take care of one possibility

Mark: but i was just thinking in the bathroom, I don't think you can have multiple autonomous transactions, nor can you give them names.

Friend: well thats a different problem :-)

Mark: if you can't do that then you can't do what I was saying, pretty much :-)

Friend: the other advantage to the method asktom mentioned is say you have 2 web servers, both hitting the same database, with no transactions spanning pages a load balancer could easily just direct the user to either server without worries

Friend: that probably doesn't affect you, but could later

Mark: ah well guess that's probably the way to go..not that I have a choice anyway. just didn't really understand why we were forced to do it that way :-)


Tom Kyte
February 11, 2005 - 7:32 pm UTC

thanks, that was very good


I think I like your friend, from the point in time they said:

...
but I think you are delving into features
of HTMLDB that I don't know enough about to give a proper answer
........


(meaning -- they will state what they know and say when they think they are getting out of their area of knowledge).

For Mark

Bob B, February 10, 2005 - 10:41 pm UTC

Let's say one day, you and 9 other coworkers decide to order out for lunch. You're nominated to get the food. There are 3 restaurants they want you to go to. Would you try to remember what each one wanted, or would you write it down and, when you got to each restaurant, recite what was wanted from that restaurant?

You and your coworkers are the people logged into the system doing things. The restaurants are the different actions each user wants to do and the orders are the information that each action needs to complete. The paper represents a database table and your head represents sessions/open transactions (i.e. RAM/CPU).

In both situations, it's easier to record the information and retrieve it when it's needed. It takes up fewer resources and can be even more flexible than storing all the information in memory. A person could log in, do pages 1 and 2, log off, the database could crash (and be recovered), and the user could come back and do pages 3 and 4 and then commit the transaction. With sessions, if the database crashed between pages 2 and 3, the user has to start all over.

Alrighty, I see the light.

Mark, February 11, 2005 - 10:09 am UTC

Thanks Tom, I am convinced this is the way to go now. About the first point, I had misunderstood what you were saying about the session state -- I thought we had to do some kind of trick using a table that is not a temporary table, but is essentially used as one (mapping session IDs to temporary data, deleting the right data at the end of transaction.) I get it now. Should have showed your post to the HTML DB guy here.

I was still confused about the second point for a moment, but then the light bulb went on... In pages 1 through n-1 you're just collecting info and putting in HTML DB's session state. Then the transaction simply begins and ends after the page n is submitted.

Thanks again for the awesome assistance.


Granting rights to a user.

Ajums TT, October 05, 2005 - 11:08 am UTC

Hi Tom,
This may sound like a stupid question, but I just want to know if it can be done. Can we have users who are allowed to change data in their particular session but not allowed to commit the changes?

Just Curious.

Tom Kyte
October 05, 2005 - 11:46 am UTC

no, not really

does not stress the system to have long or large transactions

Steve, November 17, 2005 - 4:41 pm UTC

Tom,
I have 2 questions after reading your article, "Locking and Concurrency - Part 1" on dbazine.com.

1. "You should defer committing as long as you have to. You should not do it quickly to avoid stressing the system, as it does not stress the system to have long or large transactions......"

I don't quite understand why long & large transactions
do not stress the system, but frequent committing will stress the system. What statistics tell me that?

Long & large transactions require huge rollback segments, which will stress the system. In my batch system, if a long & large transaction
fails (rollback seg is too small), I lose a couple of hours, say 3 hrs; if I split the long & large transaction into two (if I can), I would save 1.5 hrs if the second transaction failed. In a time-constrained batch system, should I avoid long and large transactions?


2. "If all of your applications use optimistic locking then using a straight UPDATE is generally OK, rows are locked for a very short duration as updates are applied and committed..... "

If doing so, I still have a chance to lose the update, right?
Why do you suggest it then?


Thank you for your advice!

Steve





Tom Kyte
November 18, 2005 - 10:15 am UTC

1) the entire quote should be:

<quote>
You should defer committing as long as you have to. You should not do it quickly to avoid stressing the system, as it does not stress the system to have long or large transactions. The rule is commit when you must, and not before. Your transactions should only be as small or large as your business logic dictates.
</quote>


log file sync - every time you commit, you generate more undo and redo (to close out the transaction) and you WAIT for lgwr to write the data to the redo logs.

If you load 10,000 records and commit each one - you will wait 10,000 times for log file sync.

If you load 10,000 records and commit once - at the end - you will wait one time for log file sync.

And the time to commit is NOT a function of the amount of data modified. I cover that in detail in the book.

So, by committing frequently you:

a) generate more total redo
b) generate more total undo
c) spend much of your time waiting for log file syncs, instead of loading more data
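
A tiny JDBC illustration of the difference (the table is made up; java.sql imports assumed, conn is an open connection with auto-commit off):

try (java.sql.PreparedStatement ps = conn.prepareStatement("insert into t (x) values (?)")) {
    for (int i = 1; i <= 10000; i++) {
        ps.setInt(1, i);
        ps.executeUpdate();
        // conn.commit();  here = 10,000 log file sync waits plus extra undo/redo per row
    }
    conn.commit();          // here = one log file sync wait for the whole load
}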


Now, everything boils down to common sense. Can you checkpoint your loads? Sure, no problem - as long as you make them RESTARTABLE (another concept I cover in detail in the book - many people do not make them restartable and end up with a mess when it fails 1/2 through - but after committing part of the stuff - they have no way to restart it in the middle).

My point is - don't over commit, commit when you have completed a logical unit of work. Do not commit every row, not even every 1,000 rows during a load. Do not write transactions that commit every statement (they are not transactions anymore). It is OK to have locks. It does not stress the system.


Oh, and if you have big transactions, size your rollback properly for them. And you can use resumable statements (so if you do run out of space, you do not lose the work, you fix the underlying space problem and continue)


2) no, because the update includes the lost update detection logic in it. (see the where clause - it made sure the columns were not changed since you read it out).

The point about the "you can just use the update" was about getting blocked.

IF everyone uses optimistic locking
THEN
locks are taken for very very short periods, your update will not block
for a long time
ELSE
some things might be using pessimistic concurrency control, your update
could block for a long time, therefore you might want to use the
select for update NOWAIT to avoid getting blocked
END IF


but - lost update is avoided because you are USING optimistic concurrency control (one of the various techniques I demonstrated in that chapter)
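
A sketch of what such an optimistic update can look like from JDBC (column names and the exact check are illustrative only):

import java.sql.*;

// returns true if the row was updated; false means someone changed it since we read it out
static boolean updateSalary(Connection conn, int empno, int oldSal, int newSal)
        throws SQLException {
    try (PreparedStatement ps = conn.prepareStatement(
             "update emp set sal = ? where empno = ? and sal = ?")) {
        ps.setInt(1, newSal);
        ps.setInt(2, empno);
        ps.setInt(3, oldSal);              // the value we originally read out
        int rows = ps.executeUpdate();     // 0 rows -> lost update detected, nothing overwritten
        conn.commit();                     // assumes auto-commit was turned off on conn
        return rows == 1;
    }
}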

Thank you! Tom

steve, November 18, 2005 - 2:30 pm UTC

"RESTARTABLE (another concept I cover in detail in the book.."

which chapter in which book? I have your "Expert One-on-One Oracle" and "Effective Oracle by Design"

Thanks!

Tom Kyte
November 18, 2005 - 3:54 pm UTC

chapter 8 "Transactions" Expert Oracle Database Architecture.


chapter 4 "Transactions" Expert one on one Oracle.

Stored procedure call for every 5 minutes using jdbc in java

srishailam, December 11, 2005 - 3:53 am UTC

hi
This is Sri. I have a query related to stored procedure calls in JDBC.
My application needs to retrieve data from the database every 5 minutes. How is that possible?

Tom Kyte
December 11, 2005 - 4:17 am UTC

run the stored procedure every 5 minutes? Not really sure what to say here - you just need to write the code that every 5 minutes calls the database stored procedure? Or am I missing something...
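
If it has to be driven from the Java side, one possible sketch (the connection details and the procedure name refresh_data are placeholders; a database job would be another way to do the same thing):

import java.sql.*;
import java.util.concurrent.*;

static void pollEveryFiveMinutes(String url, String user, String pass) {
    ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
    timer.scheduleAtFixedRate(() -> {
        try (Connection conn = DriverManager.getConnection(url, user, pass);
             CallableStatement cs = conn.prepareCall("{ call refresh_data }")) {
            cs.execute();                  // call whatever procedure produces the data
        } catch (SQLException e) {
            e.printStackTrace();           // real code would log and decide whether to keep going
        }
    }, 0, 5, TimeUnit.MINUTES);
}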

Commit taking some time

Yogesh Purabiya, February 17, 2006 - 9:49 am UTC

In your first book, you explained that the "commit" statement takes the least time to execute.
It does the least work; all our work is done prior to the commit.
Yes, I saw that hold true for a long time.
But recently, I saw a commit taking a few (5-10) seconds.

The client is Toad on win-2000 professional.
Database & instance ( Oracle 8.1.7 ) on win-NT-4.

There were 12 Legacy instances of various offices on different Win-NT machines each using oracle 8.0.5.

We are now putting all the Legacy data at one place.
I have decided to use FGAC-policies to replace those instances.
I have imported all of them into 12 dummy users of a single instance in Oracle 8.1.7.
I am copying data from those users to a real user with (RO_Name) (Reg-office-name) as an added column.

All the target tables have NOLOGGING enabled.
All the tables have only bitmap indexes (1 each) on the column RO_Name; i.e., no other index except these bitmap ones.

For each dummy user, I do this in a batch, before giving commit; like

INSERT /*+ APPEND */ select t1.*, 'RO1' from RO1.Table1 t1;
INSERT /*+ APPEND */ select t1.*, 'RO1' from RO1.Table2 t1;
...
INSERT /*+ APPEND */ select t1.*, 'RO1' from RO1.Table249 t1;
INSERT /*+ APPEND */ select t1.*, 'RO1' from RO1.Table250 t1;

COMMIT; -- This takes 5-10 seconds.

In each batch of inserts (each transaction),
the total no. of rows inserted is nearly 100,000 to 500,000.


Tom Kyte
February 17, 2006 - 2:53 pm UTC

don't use toad - then what happens? Toad does lots of "things" - use sqlplus.

But - commit will do the LEAST amount of work - it takes about the same amount of time to commit 1,000,000,000 records as it does 10 - because lgwr is CONSTANTLY writing out the redo data and all we have to wait for on the last one is the last flush of the redo log buffer to disk.

If the tables are non-logged, then I seriously doubt it is really the commit, but rather something in toad that is reporting things "strangely", since there is nothing to flush!



Commit taking some time

Yogesh Purabiya, February 18, 2006 - 11:39 pm UTC

(1) I have been using Toad for the last 3 years.
This happened only these times.
In all other cases, commit did not take
more than a few ms (say 10 to 100 ms).
There are no other users using this instance;
it is still being built up by only one person
(i.e., me of course).

(2) While refreshing "this" page from the history (IE-6),
I got the following error; just curious to know why :

ORA-01400: cannot insert NULL into ("ASK_TOM"."WWC_ASK_QUESTION_ACCESS_LOG$"."DISPLAYID")



Tom Kyte
February 19, 2006 - 9:04 am UTC

1) so, I don't care if you have been using toad since before dirt was discovered. What does the duration of time you have been using something have to do with anything?

If you are doing those inserts
AND those objects are nologging
then - there is nothing to be done during the commit really. So, something else is afoot.


2) you refreshed a posted page I presume, you ran the transaction twice.

... some correction on the previous review ...

Yogesh Purabiya, February 19, 2006 - 12:22 am UTC

(2) While refreshing "this" page from the history (IE-6),
I got the following error; just curious to know why :

Sorry, not IE-6; it is Mozilla FireFox 1.5.0.1

(3) One more point :

Do you accept (and reply to) queries via e-mail?
In the office, we do not have internet access nearby.

Tom Kyte
February 19, 2006 - 9:07 am UTC

3) hmmm, you can send email but don't have a network?

No, please don't email me - consider the implications. There is more than one of you out there and only one of me here.

Commit taking some time

Yogesh Purabiya, February 21, 2006 - 12:05 pm UTC

Yes, you are right.
SQL*Plus took quite a short time.
I wonder what Toad might be doing for such a simple thing as "commit".
I know Toad does take a lot of time while SELECTing - it must be formatting the output. (Whenever I check with Windows' Task Manager, it shows 99% CPU usage for SELECTing)

But what about commit? So I thought there might be something wrong in my tables, indexes or session environment. Since I had not seen such a thing for 3 years, I suspected other things - like the fact that I recently changed from Oracle 8.0.0 to 8.1.7, etc.

Thanks for the reply.

Tom Kyte
February 22, 2006 - 8:05 am UTC

ask the people that make toad what is up


second time this morning:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:7986513233505#57388102498203

Mail & Internet

Yogesh Purabiya, February 21, 2006 - 12:11 pm UTC

We have mail.
We have a network.
But internet access is allowed from only a few PCs, and they are not close by.

Tom Kyte
February 22, 2006 - 8:05 am UTC

"sorry" I guess is all I can say. I'd be giving serious consider to alternate employment personally.

Autocommit and impact on performance for selects

Allen Shatzer, May 17, 2006 - 6:29 pm UTC

I think I know the answer to this but want to confirm. Our application defaults to autocommit mode (some methods turn it off when there are multiple updates). My question is whether autocommit will result in a performance hit when only selects are being performed.

My guess is that it does not result in a performance hit since there are no updates to be written out to the redo logs.

Is my assumption here correct? What I am trying to determine is if we have a potential problem with performance due to log file syncs when only selects are being performed and autocommit is enabled.


Tom Kyte
May 18, 2006 - 10:32 am UTC

depends, if the driver actually calls "commit", there will be of course a round trip to the database server at the very least. The database is optimized to ignore a commit when it can (no transaction started) but if a client makes the call - you will have the call be made.

That said, running with autocommit on is horrible. I still cannot imagine why the inventors of JDBC did that (I understand ODBC, that was Microsoft after all, holding a lock in sqlserver is considered "bad" - but only if you don't care about your data ;)

Replacement of Incremental commit

Arindam Mukherjee, July 27, 2006 - 7:37 am UTC

Respected Mr. Tom,

Right now I am in deep trouble and I need your help. We are using Oracle 9i, Sun Solaris 5.8 and a Pro*C program. Through this program, we update and insert more than 2000 records. At the same time, many other Pro*C programs are also running, updating and inserting into the database concurrently.

So we implemented incremental commits. Now the current logic in the program regarding the incremental commit strategy does not provide a clean rollback solution in the event of an error. The logic needs to be modified to have either a single commit, or some other checkpointing strategy, such that in the event that an error occurs, the loader can issue a rollback command and be assured that all new and/or updated records have been rolled back.

1> What type of problem do you envisage if I use a single commit? I am afraid the rollback space may run out, given that my Pro*C program and other programs execute concurrently on the production server.

2> Can we use a savepoint in Pro*C? I don't believe in it, because it can't surmount the rollback space problem. Anyway, if it is the only viable solution in this case, please suggest it and write me two lines of Pro*C code using a savepoint. Otherwise, please suggest the best method.


Regards,
Arindam Mukherjee


Tom Kyte
July 27, 2006 - 12:09 pm UTC

2,000 records? that is tiny.

You commit when your transaction is complete. 2000 records is such a small tiny transaction - I cannot envision how you would run out of undo space.

Situations where commits/rollbacks in procedures are appropriate?

Salaam Yitbarek, August 11, 2006 - 11:22 am UTC

We've used commits and rollbacks in our stored procedures from the start, the reasoning being that they have been created as self-contained transactions.

Our applications have access to the database only via PL/SQL procedures, and the procedures/applications have been designed in such a way that the apps don't have 'transactions' that span multiple procedures.

Example procedures:

checkout_publication: Checks out (as in version control) a publication to a given user. (The CHECKEDOUT_BY column on the PUBLICATION table is set to the user name)

modify_publication: Modifies some columns of a given publication.

The worry has also been that if commits/rollbacks are left to the calling application (VB and other Windows apps), what happens if an application 'forgets', and open transactions and locks are left lingering?

For example, in the checkout_publication procedure, there is a SELECT FOR UPDATE. What happens if that transaction is not appropriately ended by the calling program?

So I am not clear on why the calling applications should do the commits/rollbacks if the stored procedures are built in the way I've described above.

Can you perhaps expand on the costs/benefits of letting the application do the commits/rollbacks so that I can make a case to my colleagues?

Thanks.

Tom Kyte
August 11, 2006 - 12:15 pm UTC

the top level client invoking is the only one that "knows" when the transaction is over.

Your top level client here is - your top level plsql routine.

But what if in the future the need to modify AND checkout a publication as a single transaction rears its head?


that worry should be extended to "we worry about bugs" - period. That would be a bug, would it not? And it seems that it would be a bug readily caught during testing.

the problem is - many people get into the habit of "EVERY SINGLE SILLY LITTLE PROCEDURE must commit or rollback". They break every single rule of transactional processing. They abuse it.


The client is the only one that knows and if the client forgets - well, then that client has an easily identifiable bug that needs to be corrected.

Rollback is not happening

Karthi, February 23, 2010 - 2:17 pm UTC

Hi tom,

I am working in a DW environment. We have a set of jobs which run every day and load the data. The jobs call SQL files. We are using Oracle 9i.

I have added "WHENEVER SQLERROR EXIT FAILURE;" to the SQL file to stop the execution if any one of the SQL scripts fails, so I can fix the error and rerun the same SQL file once again. (We issue the commit at the end.)

But I am facing one issue here. Script1 and script2 load the same t_hist table. I need the commit after the completion of both statements. If script2 fails, I fix the issue and rerun the same file.

The below scenario is not doing the rollback. I need your guidance here.

SQL> WHENEVER SQLERROR EXIT FAILURE;
SQL> create table t_hist ( x number(5), y number(5));

Table created.

SQL> create table t ( x number(5), y number(5));

Table created.

SQL> insert into t values ( 1,2);

1 row created.

SQL> insert into t values ( 1,4);

1 row created.

SQL> insert into t values ( 1,6);

1 row created.

SQL> insert into t values ( 1,8);

1 row created.

SQL> create table t1 ( x number(5), y number(5));

Table created.

SQL> insert into t1 values ( 2,2);

1 row created.

SQL> insert into t1 values ( 2,4);

1 row created.

SQL> insert into t1 values ( 2,6);

1 row created.

SQL> insert into t1 values ( 2,8);

1 row created.

SQL> commit;

Commit complete.

/* ---------------- Script1 --------------------- */

SQL> insert into t_hist
  2  select * from t;

4 rows created.

/* ---------------- Script2 --------------------- */

SQL> insert into t_hist
  2  select x from t1;
insert into t_hist
            *
ERROR at line 1:
ORA-00947: not enough values 


Script1 has loaded the 4 records and script2 has failed. So the session closes because of the "WHENEVER SQLERROR EXIT FAILURE;" command.

I have opened another session and executed "select * from t_hist". The first execution has loaded the data and committed it.

New session result:

SQL> select * from t_hist;

         X          Y
---------- ----------
         1          2
         1          4
         1          6
         1          8

I did not give any commit statement after the execution of the script1. Still the records got committed. 

My questions:

1. Is it because of the "WHENEVER SQLERROR EXIT FAILURE;" command? Is it auto committing the session after script2 fails?

2. If the answer is "yes" for question #1, then how can I overcome this? My scenario is: I need to commit the data after the completion of both statements, because I need to avoid duplication while rerunning the SQL file after a failure.

Regards,
Karthi

Tom Kyte
March 01, 2010 - 8:26 am UTC

well, sqlplus isn't going to be a very good way to run a production environment like this - you have almost NO control whatsoever. There is no replacement for a real language - sqlplus isn't even a scripting language, it is a very very very very simple, dumb command line interpreter.


You can fix this, but I hesitate to tell you how (sqlplus by default COMMITS on exit, you would change the default). I hesitate because this is just about the WRONGEST APPROACH EVER to take.

You should use plsql
You should write a stored procedure
They can have proper and real error detection and handling
It is a real language, with real ability
SQLPlus would be, is, will be - the wrong way to even consider approaching this.

Rollback is not working

KArthi, March 01, 2010 - 2:35 pm UTC

Thanks a lot tom for your reply.

You have suggested writing stored procedures instead of the SQL files. We are dealing with many jobs and each job has some set of files.

Do we need to convert all the SQL files to stored procedures? Do you have any other suggestion for me?

Could you tell me what the drawbacks are if we change the default value (sqlplus by default COMMITS on exit), and could you tell us how to change the default value? This is for my learning purposes.

Regards,
KArthi

Tom Kyte
March 02, 2010 - 7:00 am UTC

rollback is working - your code is broken.



In order to do professional work, real work, work that can be validated, trusted, relied on - you will need to use a language that supports error detection and error handling.

If you try to do things on the cheap, with minimal effort, you get what you pay for. Our industry has a saying "garbage in, garbage out", it applies to code as well as data.



If you want to write something that can be relied on, you will use a language that supports error detection and error handling, logging of errors, conditional branching upon exceptional conditions, etc.

Do it right, don't do it lazy.

I will not tell you how to change the default, that would - to be blunt - be irresponsible of me for I know you would just do that, assume "all is good" and then get seriously burned later. Just do it right please.

Created a SP

Karthi, March 02, 2010 - 11:45 am UTC

Hi Tom,

I have created the stored procedure to implement this. Thanks a lot for your guidance. I have learned one good practice from you and I will not change the default settings.

I found another two ways also. Those methods work fine, but I did not implement them.

Method 1:

Wrapping the SQL scripts within BEGIN and END. It works fine because it commits at the end, and it rolls back if script2 fails.

BEGIN
/* ---------------- Script1 --------------------- */

insert into t_hist
select * from t;


/* ---------------- Script2 --------------------- */

insert into t_hist
select x from t1;

END;
/


Method 2:

I found the FAILURE ROLLBACK option of the WHENEVER SQLERROR EXIT command.

I tried the below command before executing the scripts. This also does not commit the records if script2 fails.


<b>“WHENEVER SQLERROR EXIT FAILURE ROLLBACK “</b>


Please provide your comments on the above methods (I am interested in understanding all the concepts very well).

Once again Thanks a lot for your guidance.

Regards,
Karthi

Tom Kyte
March 02, 2010 - 12:52 pm UTC

BEGIN
/* ---------------- Script1 --------------------- */

insert into t_hist
select * from t;


/* ---------------- Script2 --------------------- */

insert into t_hist
select x from t1;

END;
/



there is no script1/script2 there - you took the contents of the scripts and put them into a SINGLE BLOCK of code - a stored procedure without a name, we call those anonymous blocks. It is just like a stored procedure - just not saved into the database (and the privileges are a bit different, but it is rather like a stored procedure - there is NOT ANY SCRIPT1/SCRIPT2, you removed those and created a program)



method2: wish you wouldn't have, now you'll be tempted. You shouldn't need to ask me what my comments on that method are - I was pretty clear how I felt above...



thanks tom

Karthi, March 02, 2010 - 1:50 pm UTC

Thanks a lot tom.

I have written a separate SP; I just did not post it. I asked for suggestions about the other two methods besides the stored procedure method.

I will follow stored procedure best practice in future also.

Regards,
Karthi

Another pitfall here

Michael Tefft, March 03, 2010 - 5:28 am UTC

Did you realize that in your script:
create table t_hist ( x number(5), y number(5));
create table t ( x number(5), y number(5));
insert into t values ( 1,2);
insert into t values ( 1,4);
insert into t values ( 1,6);
insert into t values ( 1,8);
create table t1 ( x number(5), y number(5));
insert into t1 values ( 2,2);
insert into t1 values ( 2,4);
insert into t1 values ( 2,6);
insert into t1 values ( 2,8);
commit;

the CREATE TABLE T1 is DDL; DDL implicitly commits. That is why your first 4 inserts were not rolled back.

It is certainly a good thing that you are considering restart and recoverability; unfortunately, many developers do not consider this until an incident actually occurs. But you need to re-think your process, and make sure you have a robust design.

If your process actually includes DDL such as you show here (even at the beginning of the script), you have probably gone down the wrong road.
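
A tiny JDBC illustration of that implicit commit (java.sql imports assumed; conn is an open connection with auto-commit turned off):

try (java.sql.Statement s = conn.createStatement()) {
    s.executeUpdate("insert into t values (1, 2)");
    s.execute("create table t1 ( x number(5), y number(5) )");  // DDL -> implicit commit
    conn.rollback();   // too late: the insert above is already permanent
}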

Thanks

Karthi, March 03, 2010 - 11:14 am UTC

Hi Michael Tefft,

Thanks for your comments.

I created the tables - t, t1 and t_hist - just to replicate the scenario. I do not have the DDL in my real scenario.

Now I have a robust design with the help of Tom.

Regards,
Karthi

Commit...

Jay, August 31, 2012 - 12:58 am UTC

I just wonder: if we insert 40M records into a table and it takes around 10 minutes to insert, what happens when we fire the commit? The commit finishes executing immediately - does that mean it has finished writing into the data blocks, or what happens internally when we fire the commit statement? Can you please brief me on it? I am very curious to know.
Tom Kyte
September 10, 2012 - 6:28 pm UTC

do you have access to any of the expert oracle database architecture editions? I go over this in an entire chapter - it is rather involved.

basically - as you are inserting, you are generating redo, and that redo is constantly being flushed to disk by lgwr. If the buffer cache fills up - some of your undo and data blocks might get written to disk too - or they might stay in the cache for the entire time.

when you finally go to commit - lgwr just has to write out the last tiny bit of redo and then you are done. The blocks in the cache may or may not have been written to disk. Usually - they are just in the cache still - but the redo is safely on disk.
