Clarification?
Anil Pant, September 29, 2003 - 12:42 am UTC
What do you mean by "RBS is used by selects -- RBS is never extended on behalf of a SELECT. Only inserts/updates/deletes cause rollback to grow."?
September 29, 2003 - 7:40 am UTC
hmm, three statements
a) RBS is used by selects (we use rollback data to construct read consistent views of data. if we hit data that was changed since the beginning of your query, we use rollback data to "unchange it")
b) RBS is never extended on behalf of a select (meaning, a rollback segment will not extend just because a read-only SELECT query is running and "might" need some of the existing undo data. the rollback segment will wrap around if it can, overwriting old rollback data that that query might need)
c) only inserts/updates/deletes cause rollback to grow (to grow = to extend. only a modification can cause rollback to extend and only then if it needs to extend. the modification will just wrap around and reuse space that a select might need if it can)
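A minimal sketch of points (b) and (c) in action, with two concurrent sessions (big_table is a hypothetical table name used only for illustration):

```sql
-- Session 1: a long-running, read-only query. It generates no undo
-- itself and will never extend a rollback segment, but it may need
-- undo generated by other sessions to reconstruct blocks as of the
-- moment it began.
SELECT /* long running */ COUNT(*) FROM big_table;

-- Session 2 (concurrently): modifications generate undo and may
-- extend a rollback segment -- and they will happily wrap around and
-- overwrite committed undo that session 1's query still needs.
UPDATE big_table SET x = x + 1;
COMMIT;

-- If the undo session 1 needs has been overwritten, session 1 fails:
-- ORA-01555: snapshot too old: rollback segment ... too small
```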
hope that helps.
Serialize a transaction and use a specific Rollback segment
Rateesh Mohan, March 08, 2004 - 4:05 am UTC
hi tom,
I have a procedure that populates two tables in a serialized fashion. The insert itself generates too much rollback and fails with "cannot extend rollback segment". For this purpose I have created an additional rollback segment with very large extents (as big as 100 MB) and MAXEXTENTS of 25, but the problem is that I am unable to both set the transaction as serializable and direct it to use a particular rollback segment. It says each has to be the first statement of the transaction.
So my questions are
1) Serializable inserts/updates DO NOT cause more rollback than normal inserts. Is this correct? Please confirm.
2) Is there any way for a serializable transaction to use a specific rollback segment
thanks ,
March 08, 2004 - 8:13 am UTC
I'm a big big big fan of equi-sized RBS. It sounds like you have not set up sufficient RBS for your system.
Serializable is going to be deadly for a long running transaction -- you would be best off simply locking the source tables. Otherwise you are going to get "cannot serialize access" after running for a while (or an ORA-01555, as the odds of a 1555 relate to the size of your smallest RBS, not your largest -- and you don't seem to have enough RBS in the first place).
1) correct
2) not that I'm aware of -- I've never tried because I've never used 'use rollback segment'
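To illustrate the conflict the questioner describes: both directives are SET TRANSACTION options, and a transaction has only one first statement. One possible workaround, sketched here without warranty (big_rbs is a hypothetical segment name), is to make the isolation level a session default so that SET TRANSACTION remains free to pick the segment:

```sql
-- Both of these must be the FIRST statement of a transaction,
-- so they cannot both be issued for the same transaction:
--   SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
--   SET TRANSACTION USE ROLLBACK SEGMENT big_rbs;

-- Possible workaround: make serializable the session default,
-- leaving the SET TRANSACTION slot free for the segment choice.
ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE;
SET TRANSACTION USE ROLLBACK SEGMENT big_rbs;
-- ... inserts ...
COMMIT;
```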
You might be able to use INSERT /*+ APPEND */ to minimize the amount of undo generated.
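A sketch of the direct-path insert Tom mentions (table names are hypothetical). Note that APPEND minimizes undo only for the table blocks -- index maintenance still generates undo -- and it applies to the INSERT ... SELECT form:

```sql
-- Direct-path insert: loads above the high-water mark and generates
-- minimal undo for the table itself (indexes still generate undo).
INSERT /*+ APPEND */ INTO target_table
SELECT * FROM source_table;
COMMIT;  -- required before this session can query target_table again
```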
Some stats
Alex, March 08, 2004 - 8:48 am UTC
I had some experience benchmarking TX_SERIALIZABLE via JDBC. In a moderately busy environment (~20 TPS, very rare clashes) the serializable level was 10-20% slower than READ_COMMITTED (Oracle 8.1.7).
Note: the 10-20% overhead was for simple JDBC calls. For calls with heavy result-set processing the overhead was only ~5%.
As described in the Oracle docs, while in TX_SERIALIZABLE, Oracle has to mark selected rows with a timestamp and transaction id (doing more work than in TX_READ_COMMITTED). So some overhead is expected, but it is "practically" small.
If a transaction runs for more than 60-120 seconds, then "ORA-08177: can't serialize access for this transaction" is a rather frequent guest here.
Thus, we did not accept "serializable" for long transactions (between HTML requests in our case) and settled for HTTPSession state and/or an additional "status" column in the database.
March 08, 2004 - 9:38 am UTC
serializable is appropriate only for very very short transactions (in any database -- in others, you would get deadlocks instead of 8177).
we run our TPC-C's with serializable.
In serializable, all you've done is frozen your SCN as of the start of the transaction -- Oracle does not mark records "specially" in this mode. Any overhead would be due to the extra work to provide the consistent read for data that has changed since your transaction began. Serializable by itself doesn't add any real additional work.
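Tom's point can be restated as: in serializable mode every statement sees the database as of the transaction's starting SCN. A minimal sketch (orders is a hypothetical table):

```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- Every query from here until COMMIT/ROLLBACK sees the data as of
-- the moment this transaction began, regardless of what other
-- sessions commit in the meantime.
SELECT COUNT(*) FROM orders;
-- ... other sessions insert into orders and commit here ...
SELECT COUNT(*) FROM orders;  -- returns the same answer as above
COMMIT;
```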
More on 8177 and long running transaction in serializable mode
Oleg Oleander, August 05, 2004 - 12:05 pm UTC
Dear Tom,
I have a routine that inserts thousands of rows into a table one by one in a loop. Every insert is preceded by multiple selects from the same table. The inserted values are calculated from the selected values (a spreadsheet, basically [yes, we are on 9iR2]). These selected values include many of the ones previously inserted.
In read committed mode it works fine and the results are fine as well, but phantom reads may result in errors in the calculation. In serializable mode I get ORA-08177. INITRANS is 2 on the table but there is only one transaction running, even though it could modify a block more than twice. There are no updates in the transaction, and no autonomous transactions are used. Moreover, the 8177 is raised by a SELECT statement. I'm missing something here; please give your valuable opinion on these:
- why am I getting the 8177?
- under exactly what circumstances is the 8177 raised?
- can a single transaction consume more than one INITRANS entry?
Best regards, Oleg
August 05, 2004 - 1:13 pm UTC
why this is more than one single "INSERT INTO ... SELECT" statement is my immediate question.
I would need more than one "insert into select"
Oleg, August 05, 2004 - 5:07 pm UTC
The order of calculating the values is decided by trial and error. So even if I implemented it using "insert into select" statements, I would need more than one of them, and would therefore face the same problem of phantom reads.
August 05, 2004 - 8:54 pm UTC
flashback query.
you can pick the point in time as of which the query(s) execute - you would query out the scn for the queries at the beginning of the process.
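A sketch of Tom's suggestion on 9iR2 (spreadsheet_cells is a hypothetical table): capture the SCN once at the start of the process, then run every lookup as of that fixed point in time, so later commits cannot introduce phantom rows into the calculation.

```sql
-- Capture the current SCN once, at the beginning of the process.
VARIABLE start_scn NUMBER
BEGIN
  :start_scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER;
END;
/
-- Each subsequent lookup reads the data as of that fixed SCN.
SELECT SUM(amount)
  FROM spreadsheet_cells AS OF SCN :start_scn
 WHERE sheet_row = 42;
```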
Thank you very much, I will test it
Oleg, August 06, 2004 - 5:08 am UTC
Serializable mode is as if every select in the transaction were a flashback query as of the time the transaction began. So I can get the same result by explicitly specifying the SCN. Am I right? Would you please answer my original question about the 8177, or send me a link where I can find more info about it? The Concepts Guide and the error messages manual are quite short on it. Thank you very much for your help.
August 06, 2004 - 8:13 am UTC
you are getting the 8177 because a block you are visiting cannot be made "consistent". I believe your process includes some updates as well.
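For reference, the classic two-session reproduction of ORA-08177 (accounts is a hypothetical table): a serializable transaction touches data that another session modified and committed after the serializable transaction began.

```sql
-- Session 1:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT * FROM accounts WHERE id = 1;

-- Session 2 (after session 1's transaction has begun):
UPDATE accounts SET balance = balance + 10 WHERE id = 1;
COMMIT;

-- Session 1 again: modifying the now-changed row fails with
-- ORA-08177: can't serialize access for this transaction
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
```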
can we use serializable in oltp
A reader, June 09, 2005 - 1:31 pm UTC
Based on your experience, what do you think about trading systems like the NYSE, LSE or Nasdaq? They usually have a high-volume transaction rate; the transactions are small and fast. Do you think that at that magnitude of transaction rate, serializable transactions are an option?
TIA
June 09, 2005 - 6:34 pm UTC
the first question is:
a) is it a need?
and if the answer is yes, then yes, it is an option.
error of serialize mode
Bhavesh Ghodasara, September 27, 2005 - 9:22 am UTC
hi tom,
I am having this problem:
I create one cursor and, based on this cursor, I want to insert values into tables.
There are almost 10,000 inserts in one procedure,
and I got "ORA-08177: can't serialize access for this transaction".
What is the cause?
Thanks in advance.
Bhavesh
September 27, 2005 - 11:39 am UTC
the cause is, that data you are attempting to modify has itself been modified since your transaction began.
running a really long transaction under serializable is not recommended, it is designed for short duration OLTP transactions.
what are you doing that requires serializable here?