Question and Answer

Connor McDonald

Thanks for the question, Subhakar.

Asked: November 29, 2000 - 3:42 pm UTC

Last updated: November 20, 2015 - 9:50 am UTC

Version: 8.1.7

Viewed 10K+ times

You Asked

Hi Tom,

Can you please explain to me, with an example, how to set/tune the parameter PCTVERSION?

Thanks,
Subhakar

and Tom said...

If there is a high amount of concurrent updates as well as reads of the LOB column, choose a high value for PCTVERSION.

PCTVERSION is the percent of LOB storage space kept for old versions of LOB pages to maintain read consistency for reads started before an update. The default value is 10 percent.

You use the "store as" clause to set this:

CREATE TABLE "OPS$TKYTE"."CLOB_TABLE" ("X" CLOB)
PCTFREE 10 PCTUSED 40
INITRANS 1 MAXTRANS 255
LOGGING STORAGE(INITIAL 524288)
TABLESPACE "USERS"
LOB ("X") STORE AS (TABLESPACE "USERS" ENABLE STORAGE IN ROW
CHUNK 8192
PCTVERSION 10
NOCACHE STORAGE(INITIAL 524288)) ;


PCTVERSION can be changed after creation using:
ALTER TABLE tabname MODIFY LOB (lobname) ( PCTVERSION n );


If a session is attempting to use an OLD version of a LOB
and that version gets overwritten (because PCTVERSION is too small) then the user will typically see the errors:

ORA-01555: snapshot too old:
rollback segment number with name "" too small
ORA-22924: snapshot too old

A higher PCTVERSION prevents OLD pages from being reused and forces the segment to extend instead.
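You can check the current setting for a LOB from the dictionary. A minimal sketch (USER_LOBS is the standard dictionary view; the table name is the one from the example above):

select table_name, column_name, chunk, pctversion, cache
  from user_lobs
 where table_name = 'CLOB_TABLE';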






Comments

snapshot too old

A reader, November 10, 2003 - 7:43 am UTC

Hi

We tried to export our table which contains BLOBs and it is impossible. We run our export at 3 o'clock in the morning, when there is absolutely no one in the database, and it always ends up with an ORA-01555 error (can't see why, as if there were DML going on). We even increased PCTVERSION to 60 and same...

Searched Metalink; it seems that this happens to many people but there is no solution at all... Do you have any suggestion?

Tom Kyte
November 10, 2003 - 12:01 pm UTC

your RBS's are too small -- does the error reference an RBS? you need to permanently increase the size of your RBS's.

but, if you are using export as a "backup" you probably want to rethink that. "backups" are "backups" -- export is a tool to get data out to put into another database. it is not a backup tool.

proper backups won't ever get a 1555.
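If the error does name a rollback segment, one way to see which segments are wrapping or extending under load is the classic pair of V$ views for manual undo management (a sketch, applicable to 8i-era databases like this one):

select n.name, s.rssize, s.optsize, s.wraps, s.shrinks, s.extends
  from v$rollname n, v$rollstat s
 where n.usn = s.usn;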

yes it does reference RBS

A reader, November 10, 2003 - 2:44 pm UTC

Hi

It does reference an RBS number, however we don't understand: there is no one except the export process in the database, so how come it gets ORA-01555?

Also, isn't PCTVERSION like RBS but for LOBs...? Or do LOBs still use RBS to give consistent reads?

Tom Kyte
November 10, 2003 - 3:08 pm UTC



lobs version themselves in the lob space -- not in the rbs. so, if you are getting "rbs such and such too small", it isn't a lob that is the issue.

there are others in the database (there are always others). Oracle backgrounds for example. and they need not be modifying anything you are touching.

but again -- are you using export as a backup? that, in itself, would be "wrong"

no we are not using export as backup

A reader, November 11, 2003 - 1:16 am UTC

No! Export is NOT our backup strategy! RMAN + Veritas Netbackup

The problem is we have been trying to export this table for over 4 months! We even set PCTVERSION to a very high value and no luck! Will try to increase RBS

Tom Kyte
November 11, 2003 - 7:15 am UTC

transport the tablespace.

(pctversion wasn't the right thing to modify in this case, if it was pointing to a RBS)
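For reference, a transportable tablespace move looks roughly like the sketch below (it needs 8i or later, and the tablespace/file names here are placeholders, not from the thread):

alter tablespace lob_data read only;

-- from the OS, export just the metadata:
--   exp userid=... transport_tablespace=y tablespaces=lob_data file=lob_data.dmp
-- copy the datafiles and dump file to the target database, then:
--   imp userid=... transport_tablespace=y datafiles='/u01/lob_data01.dbf' file=lob_data.dmp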

we also have same problem

A reader, November 11, 2003 - 10:27 am UTC

We also face export problems with BLOBs

EXP-00056: ORACLE error 1555 encountered
ORA-01555: snapshot too old: rollback segment number with name "" too small
ORA-22924: snapshot too old

So far there is no solution for us; setting PCTVERSION to 70 does not help.... same situation too, no one is in the database....

Tom Kyte
November 11, 2003 - 11:58 am UTC

use a transport then.



cannot

A reader, November 11, 2003 - 12:00 pm UTC

we are using oracle 8.0.6 :-\

pctfree

A reader, December 22, 2003 - 5:26 pm UTC

Hi

is pctversion like pctfree???


==========
PCTVERSION is the percent of LOB storage space kept for old versions of LOB pages to maintain read consistency for reads started before an update.

Do you mean if my LOB uses 1MB and PCTVERSION is 10%, my LOB will actually have a 1.1MB extent?

What is a page? Is its size the same as the chunk size? Is a page the same as a chunk?

Tom Kyte
December 22, 2003 - 6:43 pm UTC

pctversion is like "undo retention" - it specifies how much of the old stuff to keep around for subsequent reads of the data.

a page is a block.

Different CHUNK size

A reader, March 14, 2004 - 11:08 am UTC

Tom,

In our application we have some tables that utilize BLOB (LOB) columns to store data. I would like to know the following :
1. What is CHUNK size in a LOB declaration?
2. Our database has a CHUNK size set to 32K. Our DBAs mentioned that the table was growing very rapidly because of the 32K chunk size BLOB column in it, so they reorganized the table's BLOB column to an 8K CHUNK size and also stored the BLOB in a separate tablespace of its own. They also mentioned that the CHUNK size is a reason why the table could grow exponentially large. My understanding, however, is that when a BLOB column is stored, only a BLOB locator (which is a pointer to the BLOB) will be stored in the table. In that case how will the chunk size make any difference? Please clarify.
3. You mention that PCTVERSION is the percentage of space used for LOB read-consistency. Does that mean LOBs do not use RBS for read-consistency? Also, in one of your discussions you locked the table explicitly when a LOB column was updated. Any reason why the table was locked explicitly?

Tom Kyte
March 14, 2004 - 11:22 am UTC

have you read:

http://docs.oracle.com/docs/cd/B10501_01/appdev.920/a96591/toc.htm

1) http://docs.oracle.com/docs/cd/B10501_01/appdev.920/a96591/adl07mds.htm#117572

chunk is how the blob is stored as well -- if you use a 32k chunk -- the blob will allocate 32k of database blocks at a time. So, if you store 10k, you'll have 32k. if you store 15k, you'll have 32k. if you store 33k, you'll have 64k and so on.

2) the blob is stored (by default) INLINE up to about 4000 bytes. Then the lob locator would be used to point to space OUTSIDE of the table (so no, a lob won't cause a table to have "exponential" growth, whatever that is)...


3) correct, that is what it means.

do you have a pointer to that Q&A, if you do, I can tell you why I did it in that context. I searched for


"lock table" blob

"lock table" clob

and could not find it and it is not ringing any bells for me...
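The chunk rounding described in point 1 is easy to verify for yourself. A small sketch (the table and segment names are made up; it assumes an 8k block tablespace, so a 32k chunk is four blocks, and the segment may show more than one chunk's worth of space because of extent allocation):

create table chunk_demo ( x clob )
lob (x) store as chunk_demo_lob ( disable storage in row chunk 32768 );

declare
    l_data long := rpad( '*', 10000, '*' );   -- ~10k of data
begin
    insert into chunk_demo values ( l_data );
    commit;
end;
/

select dbms_lob.getlength(x) from chunk_demo;   -- ~10,000 bytes stored

select bytes from user_segments
 where segment_name = 'CHUNK_DEMO_LOB';         -- at least one full 32k chunk allocated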

Please clarify my understanding

A reader, March 14, 2004 - 12:24 pm UTC

Tom,

In response to question 1 above, you mention that CHUNK size specifies the way space is allocated for a LOB. Please confirm my understanding:

1. If my database block size is 8k and my chunk size is 32K, it will allocate 4 database blocks each time. Is that correct? It is the same as table storage parameter declarations, where a table will allocate INITIAL_EXTENT bytes even if the data stored in the table is less than INITIAL_EXTENT.
2. If my understanding of the above is correct, why do books insist on carefully declaring the CHUNK size to avoid wastage of space, and how is it different from the INITIAL_EXTENT/NEXT_EXTENT table storage parameters (because space will be wasted anyway when the data in the table occupies less than INITIAL_EXTENT/NEXT_EXTENT, and Oracle will allocate that many bytes anyway)?
3. In the above discussion you mention: "the blob is stored (by default) INLINE up to about 4000 bytes. Then the lob locator would be used to point to space OUTSIDE of the table (so no, a lob won't cause a table to have "exponential" growth, whatever that is)..." Does that mean that declaring a CHUNK size of 8k or 32K will not cause the table to grow exponentially, because only a LOB locator will be stored in the table segment and the actual LOB segment will be stored outside of the table? Is my understanding correct?
4. Please refer to the discussion at http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:378418239571 titled "lob tablespace -- moving a lob"

Thanks much

Tom Kyte
March 14, 2004 - 12:56 pm UTC

1) yes.

2) the chunk belongs to A LOB instance. not all lobs, a single lob. a 32k chunk means that the lob in a single row will take 32k all to itself, regardless of how big it is. chunks belong to a single lob

3) by default, the blob will be stored IN the table (enable storage in row) when the blob is less than 4000 bytes. When the blob exceeds 4000 bytes, it'll be moved out of line (disable storage in row) and pointed to.

Chunk size has nothing to do with this behaviour. It just means that for lobs <= 4000 bytes, they are IN the table, for lobs > 4000 bytes they are pointed to and stored outside of the table. In neither case would they cause the table to "explode"

4) uh huh. I was not updating a lob, we were MOVING EVERY SINGLE ONE of them. In order to ensure "fastest success", so as to not have to wait for individual row locks in a multi-user environment, I locked the table in anticipation of "having to lock every row in the table". I wanted to prevent INSERTS, DELETES and the like which would get in the way. This was a one time and one time only move. It made sense to lock the table.
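To make point 3 concrete, the in-row behaviour is controlled by the storage clause at create time. A minimal sketch of the two variants (the table names are made up):

-- lobs up to ~4000 bytes are kept in the row itself (the default):
create table t_inline ( x clob )
lob (x) store as ( enable storage in row );

-- every lob, however small, goes out of line; only the locator stays in the row:
create table t_outofline ( x clob )
lob (x) store as ( disable storage in row );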

LOB chunk size

Vikas Bagga, March 26, 2004 - 9:52 am UTC

I have a table with LOBs. DBMS_LOB.GETLENGTH tells me my LOBs are about 1175111 bytes in size. My CHUNK size is set to 8192.
What should I be setting my chunk size to? I will have 3 million plus records with 4 LOBs in the table.
Also, what queries/views will be helpful to track my LOBs?

Thanks


Tom Kyte
March 26, 2004 - 10:55 am UTC

have you read through the application developers guide for large objects?


the chunk size is the smallest unit of allocation. If you use an Nk chunk, each INSTANCE (row/column) of a lob will take at least Nk (when not null) and will take M*Nk always (multiples of Nk)....


the problem with averages is, it is hard to say. If 50% of your LOBS are really small and 50% are really big -- it could be that a large chunk size wastes tons of space 50% of the time.

for 3million records with 4 lobs and 1m per lob, that would be about 11terabytes of LOB data (3,000,000*4*1m = 12,000,000m = 11,718gig = 11.44 terabytes)

that'll be "large", you'll need to include partitioning for manageability reasons in this design from the get go I would think.

q on lobs

A reader, June 16, 2004 - 11:30 am UTC

From the oracle doc on lobs,
"Best performance for LOBs can be achieved by specifying storage for LOBs in a
tablespace different from the one used for the table that contains the LOB. If many
different LOBs will be accessed frequently, then it may also be useful to specify a
separate tablespace for each LOB column or attribute in order to reduce device
contention."
Is this true? Does having the table containing the lob and the lob itself in separate tablespaces help? Not sure how, since you need to get the lob locator from the table first to get at the lob - so isn't this argument fallacious in the same way as the argument that says put your indexes and tables in separate tablespaces?

Tom Kyte
June 16, 2004 - 1:20 pm UTC

it is not true unless you have a system set up like one might set up a system in 1990.

using raid stripes, using multiple files per tablespace -- IO spread out is achieved in that fashion. not by tablespaces.



thanx!

A reader, June 16, 2004 - 3:50 pm UTC

that is what I thought and learnt from your site elsewhere, in discussions related to indexes and tables....


Ora-01555 and local variables

Steve Kiteley, June 27, 2005 - 3:57 am UTC

Tom,

We are having a similar problem on a customer site (although we can't reproduce it locally). We have isolated it down to addressing a local XMLType variable...

We have a document table with several relational columns and an XMLType (stored as a clob) column called xml_text.

Within our PL/SQL code we have a local variable defined thus

lx_xml_text document.xml_text%TYPE;

We copy the table value into the local variable and then manipulate lx_xml_text according to various business rules and finally write it back to the table. However, at one point while addressing the LOCAL variable we get an ORA-01555 error.

My questions are

1) As we are using the %TYPE declaration method, does the local variable inherit the PCTVERSION/RETENTION values from the 'parent' column and hence become subject to those values?

2) If not, for what reason could we be getting an ORA-01555 when addressing a local variable?

3) As this should be a fairly transient operation (sub 1 second) and we are running in automatic undo mode with RETENTION set to 10800, again, why would this happen?

4) Would explicitly defining the XMLType column with an increased PCTVERSION, to override the default RETENTION setting, give us a workaround?

Thanks in anticipation...



Tom Kyte
June 27, 2005 - 7:15 am UTC

you will not get a 1555 snapshot too old from a "local variable". Sorry, but something is amiss here -- the only way to get a 1555 is to run a query, to read data from the database.

Now, if your local variable has a LOB in it and you populated the lob from the database and you wait "long time" to dereference the data in the lob - you can get a 1555 from the lob itself (the lob is read consistent with respect to the point in time you fetch it out).

so, there is something amiss here in the diagnosis, we'd need a little more information.

Steve Kiteley, June 27, 2005 - 8:04 am UTC

OK, some code snippets...

We copy our XMLType column into a local variable

SELECT .......
,xml_text
........
INTO ........
,lx_acd04_xml_text
........
FROM document
WHERE doc_id = p_doc_id;

We then process it, setting tags where necessary in lx_acd04_xml_text. To facilitate debugging we maintain a record of how far we have got through the code by setting a variable lc_error_pos.

So the process continues through some loops (x and y are loop variables) until...

lc_error_pos := '230';
--
lc_comm_text :=
lc_comm_text
|| '<number_other>'
|| dbpk_document.gettag
(lx_acd04_xml_text
, '*/request_details/request_detail['
|| x
|| ']/specific_details/specific_detail['
|| y
|| ']/number/text()')
|| '</number_other>';
--
lc_error_pos := '240';

Where dbpk_document.gettag is a wrapper around dbpk_xml.gettag which is a generic function we use which returns the value of a specific tag based on an xpath value using

XMLTYPE.EXTRACT (px_xml, pc_path)

Where px_xml is the XML local variable and pc_path is the XPath.

We have a WHEN OTHERS exception handler in the main process which catches unexpected errors and logs them. From the log we can see the code gets as far as lc_error_pos = '230' but not '240'

The error stack we get from our customer is

An error has occurred.Value for DOC_ID = 587.Module = DBPK_MODULE35.PROCESS_DOCUMENT. Error position = 230. - ORA-01555: snapshot too old: rollback segment number with name "" too small
ORA-22924: snapshot too old
ORA-06512: at "SYS.DBMS_LOB", line 739
ORA-06512: at "OMS.DBPK_XML", line 644
ORA-22924: snapshot too old





Tom Kyte
June 27, 2005 - 8:29 am UTC

well, you haven't really "copied" it, you fetched a lob locator -- later when you go to dereference it, it is saying "sorry, the read consistent image you fetched out a while ago cannot be reconstructed, this lob has been modified and the necessary 'old information' isn't there anymore".


So, are YOU modifying this lob in your routine (and committing intermittently) or is some other session modifying this lob?

(to truly "copy it", you would have to insert into another table or dbms_lob.copy it into a temporary table, you are just pointing to it in plsql)
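A sketch of what such a true copy could look like using a temporary LOB (the table/column names are the ones from this thread; lx_src/lx_copy are made-up variable names and error handling is omitted):

declare
    lx_src   clob;
    lx_copy  clob;
    p_doc_id constant number := 587;
begin
    select d.xml_text.getclobval() into lx_src
      from document d
     where doc_id = p_doc_id;

    -- materialize a private copy in the session's temporary space,
    -- right after the fetch, while the read consistent image still exists
    dbms_lob.createtemporary( lx_copy, cache => true );
    dbms_lob.copy( lx_copy, lx_src, dbms_lob.getlength(lx_src) );

    -- work against lx_copy from here on; it is no longer tied to the
    -- read consistent image of the row we fetched
end;
/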

Interesting

Jim Weatherly, December 18, 2005 - 9:00 pm UTC

The original post is several years old so please feel free to disregard this review if you don't think it is relevant. 

In the original post there is some disagreement over whether the LOB was causing the ORA-1555/ORA-22924 or something else. I also have a scenario which exhibits similar behavior. In this case I have identified a row from which I can select any and every column except a particular CLOB column. However, when I only select the column in question, I get the error. 


SQL>ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

Session altered.

SQL>SELECT desc_text
  2  FROM AW_ITEM_T
  3  WHERE item_uid = 2620517;
ERROR:
ORA-01555: snapshot too old: rollback segment number  with name "" too small
ORA-22924: snapshot too old

Elapsed: 00:00:00.01
SQL>ALTER SESSION SET EVENTS '10046 trace name context off'; 

Session altered.

I get the same error if I try to export this partition. I can sort of work around it if I update the column to something else (or null), but overwriting data is not my preferred method of dealing with what seems to be some kind of data corruption. 

Tom Kyte
December 19, 2005 - 7:22 am UTC

if this repeats - please contact support.

lobs outside rows

dan, December 27, 2006 - 1:54 pm UTC

Hi tom,

1. How can I find out, in a database which has say a large number of tables with data type either XMLType or BLOB, which tables have LOB columns stored in row and which have LOB columns stored out of row? I thought of selecting the bytes column from dba_segments where segment_type = 'LOBSEGMENT' and bytes > 4000, and then from dba_lobs finding the corresponding table name, and that's the table which has LOBs stored outside the table row?

2. If most of the tables have LOBs stored in row, the chunk size is 4k, the db block size is 10k, and there are many reads from LOBs, is it worth migrating these tables to a separate tablespace of, say, 32k block size?

3. If I have lots of direct path read/write waits in statspack, where can I find the exact table name (the LOB column's table) which undergoes direct path IO, so that I can think about caching the LOB? I have a large number of LOB tables and CACHE is not set; I don't want to set CACHE for all tables which have LOBs > 4000 bytes. I want to know which sets of tables are causing the direct path read waits in my statspack.

cheers


Tom Kyte
December 28, 2006 - 9:39 am UTC

1) query the data dictionary? all_tab_columns has a data_type column

to find them "in or out of row", you would have to

a) see if disable storage in row was set, if so, you know they are ALL out of line
b) query the table otherwise to see how many are less than 4000 bytes, that'll be the number of inline rows.

dba_segments would not be useful there.

2) if you have a 10k block size, you have a problem, that would be a nonsensical block size - it would work, but it would not be smart.

if your lobs are 4k, why would you want to store them on 32k blocks???!?!?! that doesn't make any sense.

3) query v$segstat
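A sketch of both checks (T and LOBCOL are placeholder names; v$segment_statistics is the named variant of v$segstat):

-- 1b) how many lobs would be in row (<= ~4000 bytes) versus out of row:
select count(case when dbms_lob.getlength(lobcol) <= 4000 then 1 end) in_row,
       count(case when dbms_lob.getlength(lobcol) >  4000 then 1 end) out_of_row
  from t
 where lobcol is not null;

-- 3) which segments actually incur the physical IO:
select object_name, statistic_name, value
  from v$segment_statistics
 where owner = user
   and statistic_name in ('physical reads', 'physical reads direct')
 order by value desc;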

dani, December 28, 2006 - 10:51 am UTC

Hi
When you mentioned the following line, what did you mean - which table to query?
"b) query the table otherwise to see how many are less than 4000 bytes, that'll be the number of inline rows."

1.
What I basically want to find out is: say there is a table with the definition
Table Jobstats (id varchar2(200), records xmltype)
Since I cannot use SQL*Plus to query the above table, is there a way to find, generally, how many records of the table Jobstats are stored in row and how many are stored out of row?
That would give me an idea about the nature of the table, so I can cache its LOB column to make it use the db buffer cache and not PGA, which causes physical IO.

2.
Please find below some important areas of the statspack report.
The db is 10g rel 2, block size = 4k, LOB chunk size 4k.
The database has 64,000 tables, all of them with LOBs of XMLType.

In your opinion, can we do something to optimize the batch which takes 6 hrs? This statspack is for those 6 hrs of batch.

Snap Id Snap Time Sessions Cursors/Session
Begin Snap: 10 27-Dec-06 19:47:18 23 61.4
End Snap: 22 28-Dec-06 01:30:44 25 134.0
Elapsed: 343.43 (mins)
DB Time: 679.07 (mins)

Cache Sizes

Begin End
Buffer Cache: 3,200M 3,200M Std Block Size: 4K
Shared Pool Size: 512M 512M Log Buffer: 14,356K

Load Profile

Per Second Per Transaction
Redo size: 948,012.42 6,074.94
Logical reads: 14,173.31 90.82
Block changes: 2,493.73 15.98
Physical reads: 217.70 1.40
Physical writes: 174.11 1.12
User calls: 3,279.55 21.02
Parses: 49.76 0.32
Hard parses: 1.41 0.01
Sorts: 8.04 0.05
Logons: 0.00 0.00
Executes: 2,235.28 14.32
Transactions: 156.05

% Blocks changed per Read: 17.59 Recursive Call %: 17.08
Rollback per transaction %: 0.00 Rows per Sort: 7.17

Instance Efficiency Percentages (Target 100%)

Buffer Nowait %: 99.99 Redo NoWait %: 100.00
Buffer Hit %: 100.13 In-memory Sort %: 100.00
Library Hit %: 99.90 Soft Parse %: 97.16
Execute to Parse %: 97.77 Latch Hit %: 99.51
Parse CPU to Parse Elapsd %: 21.53 % Non-Parse CPU: 99.54

Shared Pool Statistics

Begin End
Memory Usage %: 46.40 89.79
% SQL with executions>1: 38.55 86.07
% Memory for SQL w/exec>1: 47.99 84.18

Top 5 Timed Events

Event Waits Time(s) Avg Wait(ms) % Total Call Time Wait Class
CPU time 28,173 69.1
log file sync 3,232,568 15,282 5 37.5 Commit
log file parallel write 2,812,377 6,902 2 16.9 System I/O
db file sequential read 906,398 4,509 5 11.1 User I/O
direct path read 2,263,477 359 0 .9 User I/O

Main Report
Statistic Name Time (s) % of DB Time
DB CPU 28,172.52 69.14
sql execute elapsed time 21,276.62 52.22
PL/SQL execution elapsed time 2,124.10 5.21
Wait Class Waits %Time -outs Total Wait Time (s) Avg wait (ms) Waits /txn
Commit 3,232,568 0.01 15,282 5 1.01
System I/O 2,864,074 0.00 7,252 3 0.89
User I/O 6,759,532 0.00 5,272 1 2.10
Concurrency 50,049 1.39 388 8 0.02







Tom Kyte
December 29, 2006 - 9:07 am UTC

query the table with the lob of course - the only way to count how many rows in a given table have a lob column < 4000 would be to ...... query that table.


1) use any tool you want to query the table, you have to if you want to do what you say you want to do.

but you can query that v$segment-statistic view to see if you do lots of physical io against the lob segment. that you have large lobs is NOT RELEVANT if you do not actually read them very often; that you do lots of physical IO, relevant.

(eg: you are looking to measure the wrong metric perhaps here..... size isn't the metric you are interested in, work performed - that would be)

2) only thing i'll say about that is your soft parse % is low.

lobs performance

dani, January 01, 2007 - 1:43 am UTC

Thanks for your info. Our developers have been able to tell me the LOB rows and I calculated their size. However, since our db block size is also 4k, would the statement that LOB rows > 4000 bytes stay out of row remain valid in our case? Because we already have the smaller block size of 4k, perhaps in our case LOBs > 3600 bytes will be stored outside?

Another event happened which basically reduced our batch by 1.5 hours: we introduced the parameter commit_write=BATCH. Now the same db can perform at least 30 times faster (we calculated this by the avg number of trans/sec, which became higher).

Based on the new AWR reports I can make the following assessment - please see if I am right on this.

1. Move indexes to a separate, larger block size (say 32k) to help perform db file sequential reads faster, as now my db has that event as the top wait. Previously it was log file sync, but since we used commit_write, that's gone.

2. Use the keep pool to put the most-used smaller segments there, because in my statspack I see higher small table full scans?

3. Although the statspack report says I should have more db cache and shared pool, I think due to the nature of the db I can never have enough real memory to satisfy this request. If I add more, the next AWR report will again predict I should have more SGA. I think that's not the right path; my SGA is already too high.

4. I see high cache buffers chains, but I believe I cannot do much about it as a DBA.

5. I have lots of SQL*Net round trips. The thing is that there are no clients involved, in the sense that the batch is run by application code which is also present on the same AIX server. So would tuning the SQL*Net packet size help?

Lastly:
a) could you provide your suggestions on this statspack?
b) is there a way in Oracle that I can know how much savings I will get by tuning a certain area of the statspack?
c) The batch is standalone, meaning nothing else is online at that time. Since we introduced the BATCH parameter which makes lgwr perform asynchronously, in the event of a batch failure (app code side), will there be any extra thing a DBA has to perform before the batch is restarted from the failed point?
d) Likewise, if the instance crashes, I am proposing to restore the backup which was taken before the batch and re-run the batch from the beginning?
Cheers

AWR report in next thread as it is big
Tom Kyte
January 01, 2007 - 9:10 am UTC

row data that exceeds the size of a block is stored "chained" on multiple blocks.

I don't know what "parameter" that is - it must be part of a programming language API, I would presume - and if before it was auto-committing every statement (which would demonstrate a lack of transactional thought on your part), that would make sense, as you would have been waiting on log file sync waits with each statement.

1) won't do it.
2) so, are they actually INCURRING PHYSICAL IO????? why would "a lot of scans" lead you to believe "need to cache super special". The two are unrelated.

3) ok?

4) sure you can, indexing, physical structures (clustering, index organized tables and the like).... Tuning queries.

5) there are always clients, don't ever say there are no clients - the batch application is a client.

you want to reduce the number of round trips - move code into the database. tuning the sqlnet packet size - you are literally grasping at straws here.

Please never again post a six part statspack like that.

part 6

dani, January 01, 2007 - 1:51 am UTC

object queue header operation: 17,770,244 16,224 24 16,200 0 0 0
In memory undo latch: 19,728,262 8,188 43 8,149 0 0 0
messages: 1,316,130 7,308
Tom Kyte
January 01, 2007 - 9:05 am UTC

stop it - a six part review because we have a 32k limit!!!

I do not analyze statspack type reports like that, keep it to something relevant about the original question.

clarification on PCTVERSION

Sravan, August 28, 2008 - 6:11 pm UTC

Learnt much from your site. Thank you.

The following is an excerpt from Application developer's Guide:
===============
PCTVERSION is the percentage of all used LOB data space that can be occupied by old versions of LOB data pages. As soon as old versions of LOB data pages start to occupy more than the PCTVERSION amount of used LOB space, Oracle tries to reclaim the old versions and reuse them. In other words, PCTVERSION is the percent of used LOB data blocks that is available for versioning old LOB data.
===============

The word "used" in the 1st line caught my attention.

In trying to understand the actual meaning of PCTVERSION, it says "used LOB data space" - does it mean it's the space occupied by the current lob data which already exists in the tablespace?

In other words, let's say I have 25 rows in a table which has 1 LOB column, and each LOB column is 5K. PCTVERSION is 10.
The space occupied by all the LOBs would be 250K. So, for any new record insert, 25K (10%) would be kept aside for versioning?
Am I right in my understanding? If so, as the # of rows increases in the table, the actual space reserved for versioning would also increase, no? When the # of rows is 100, then the space reserved would be 125K. This doesn't sound right.

What am I missing ?

Appreciate your help.
Tom Kyte
August 29, 2008 - 11:07 pm UTC

yes, that is what used means, the measure of used changes over time.


and remember lobs are stored in chunks, your 5k lob would probably be in AT LEAST an 8k chunk - so 25 of them would be 25*8k.....


why doesn't this "sound right" - the more data you have, the more "version" space you need. If not, set pctversion low...

"used"

Sravan, September 03, 2008 - 5:44 pm UTC

Thanks for responding.

The thing about which I am still confused is:

1) Given that the LOB insert row size/rate is constant (one row at a time)
2) the table data size grows as more rows are inserted
3) the versioning space reserved for new rows inserted later is more

Example:
When trying to insert 26th row, the reserved space is 10% of (25*8k) = 20K

When trying to insert 100th row, the reserved space is 10% of (100*8k) = 80K

The question is though the actual LOB size for the 26th row and the 100th row is the same, but the reserved space for the 100th row is very high compared to the 26th row. Shouldn't the reserved size be the same irrespective of row #? Why should the reserved space be dependent on how much space is already used by the existing lob data in the table ?

Please tell me if I am not expressing my thoughts clearly.
Tom Kyte
September 04, 2008 - 8:29 am UTC

the reserved space is for the segment - not individual rows.

If you have a 100mb lob segment, you'll have less reserved space for that lob segment in total than if you have a 1000mb lob segment. Percentage wise - they'll be the same, but quantity wise each has a different amount.


reserved space is specified as a PERCENTAGE, this is what percentages do.

If you have a sales tax of 10% for example and you spend $100, you have $10 in tax. If you spend $1,000 - you have $100 in tax.


Santosh, December 05, 2008 - 8:41 am UTC

How are you?
Please advise
 
This is about an environment where a live table needs to be redefined.

==============================================================

The existing table BATCH_RESPONSE_MESSAGE is partitioned on BATCH_RESP_ID.

For example, it is as shown below:
 
CREATE TABLE BATCH_RESPONSE_MESSAGE
(
  BATCH_RESP_ID         NUMBER,
  BATCH_CONFIG_ID       NUMBER                  NOT NULL,
  BATCH_MESSAGE_ID      VARCHAR2(50 BYTE),
  ASID                  VARCHAR2(50 BYTE)       NOT NULL,
  TRANSPORT_CHANNEL     VARCHAR2(50 BYTE),
  CONVERSATION_ID       VARCHAR2(50 BYTE),
  SEQUENCE_NUMBER       NUMBER                  NOT NULL,
  BATCH_STATUS          VARCHAR2(1 BYTE)        NOT NULL,
  PROCESS_START_TIME    DATE,
  CREATION_TIME         DATE                    NOT NULL,
  RESPONSE_CONTENT      CLOB,
  RESPONSE_DATE         DATE,
  AGGREGATOR_SITE       CHAR(1 BYTE),
  SENDER_SITE           CHAR(1 BYTE),
  BATCH_SIZE            NUMBER,
  BATCH_INTERACTION_ID  VARCHAR2(100 BYTE),
  BATCH_COUNT           NUMBER,
  BATCH_TYPE            VARCHAR2(30 BYTE)       NOT NULL,
  REF_TO_MESSAGE_ID     VARCHAR2(50 BYTE)
)
TABLESPACE BATCH_SEND_DATA
PCTUSED    0
PCTFREE    10
INITRANS   1
MAXTRANS   255
PARTITION BY RANGE (BATCH_RESP_ID) 

 
===========================================================================
 
This table needs to have different partitioning, on CREATION_TIME, inclusive of subpartitioning on BATCH_TYPE.
For example, it should be as below:
 
CREATE TABLE BATCH_RESPONSE_MESSAGE
(
  BATCH_RESP_ID         NUMBER,
  BATCH_CONFIG_ID       NUMBER                  NOT NULL,
  BATCH_MESSAGE_ID      VARCHAR2(50 BYTE),
  ASID                  VARCHAR2(50 BYTE)       NOT NULL,
  TRANSPORT_CHANNEL     VARCHAR2(50 BYTE),
  CONVERSATION_ID       VARCHAR2(50 BYTE),
  SEQUENCE_NUMBER       NUMBER                  NOT NULL,
  BATCH_STATUS          VARCHAR2(1 BYTE)        NOT NULL,
  PROCESS_START_TIME    DATE,
  CREATION_TIME         DATE                    NOT NULL,
  RESPONSE_CONTENT      CLOB,
  RESPONSE_DATE         DATE,
  AGGREGATOR_SITE       CHAR(1 BYTE),
  SENDER_SITE           CHAR(1 BYTE),
  BATCH_SIZE            NUMBER,
  BATCH_INTERACTION_ID  VARCHAR2(100 BYTE),
  BATCH_COUNT           NUMBER,
  BATCH_TYPE            VARCHAR2(30 BYTE)       NOT NULL,
  REF_TO_MESSAGE_ID     VARCHAR2(50 BYTE)
)
TABLESPACE BATCH_SEND_DATA
PCTUSED    0
PCTFREE    10
INITRANS   1
MAXTRANS   255
PARTITION BY RANGE (CREATION_TIME) 
SUBPARTITION BY LIST (BATCH_TYPE)
SUBPARTITION TEMPLATE
  (SUBPARTITION P_BRM_L1 VALUES ('ONS_MDR', 'CHRIS_CRB', 'CHRIS_CRD', 'CHRIS_DGU', 'CHRIS_NNA'),
   SUBPARTITION P_BRM_L2 VALUES ('PPA'),
   SUBPARTITION P_BRM_L3 VALUES ('HL7')
  )
(  
  PARTITION P_BRM_20081201 VALUES LESS THAN (TO_DATE(' 2008-12-02 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    NOLOGGING
    NOCOMPRESS
    TABLESPACE BATCH_SEND_DATA
LOB (RESPONSE_CONTENT) STORE AS 
        (
          ENABLE        STORAGE IN ROW
          CHUNK       8192
          PCTVERSION  10
          NOCACHE
          STORAGE    (
                      BUFFER_POOL      DEFAULT
                     )
        )
    PCTFREE    10
    INITRANS   1
    MAXTRANS   255
    STORAGE    (
                BUFFER_POOL      DEFAULT
               )
  ( SUBPARTITION P_BRM_20081201_P_BRM_L1 VALUES ('ONS_MDR', 'CHRIS_CRB', 'CHRIS_CRD', 'CHRIS_DGU', 'CHRIS_NNA')
 
========================================================================================================
 
We have used the dbms_redefinition package for online redefinition of this table through the use of an interim table. This has been successfully tested on the OTR1 & OTR2 environments. But it fails for the below reason in the NIS5 environment.
SQL>

Redefining interim table for BATCH_RESPONSE_MESSAGE...

BEGIN dbms_redefinition.start_redef_table ('BATCHSEND06B', 'BATCH_RESPONSE_MESSAGE', 'BATCH_RESP_MESSAGE_INTERIM'); END;

*

ERROR at line 1:

ORA-12008: error in materialized view refresh path

ORA-01555: snapshot too old: rollback segment number with name "" too small

ORA-22924: snapshot too old

ORA-06512: at "SYS.DBMS_REDEFINITION", line 50

ORA-06512: at "SYS.DBMS_REDEFINITION", line 1343

ORA-06512: at line 1

Table redefined.

Synchronizing interim table for BATCH_RESPONSE_MESSAGE...

BEGIN dbms_redefinition.sync_interim_table ('BATCHSEND06B', 'BATCH_RESPONSE_MESSAGE', 'BATCH_RESP_MESSAGE_INTERIM'); END;

*

ERROR at line 1:

ORA-23540: Redefinition not defined or initiated

ORA-06512: at "SYS.DBMS_REDEFINITION", line 117

ORA-06512: at "SYS.DBMS_REDEFINITION", line 1448

ORA-06512: at line 1

Table synchronized.

 
We believe this to be an environmental issue due to the huge load of data in NIS5.
An SR was raised with Oracle, and Oracle says that the CLOB is creating the problem and that usage of dbms_redefinition is not supported for huge data migration. We have to opt for a new method.
==============================================================================================================
 
Can you please provide any alternate approach?

Tom Kyte
December 09, 2008 - 12:10 pm UTC

your undo tablespace is too small or your undo retention is too small. You need to make sure your undo is preserved for the duration of time it takes to do the initial start_redef call. That is all (very simple actually)


Please give me that SR number so that I can have a talk with the analyst that gave you really wrong information and educate them.
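If you want to sanity-check the undo setup before re-running the redefinition, a quick sketch:

select name, value
  from v$parameter
 where name in ('undo_management', 'undo_retention', 'undo_tablespace');

-- undo_retention (in seconds) needs to comfortably exceed the time
-- the start_redef_table call takes to run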



dbms_redefinition --- snaphot too old

Santosh Vijayan, December 12, 2008 - 11:19 am UTC

Thanks Tom for the reply

The undo tablespace is 16 GB, maxsize is unlimited, it is in autoextend mode, and there is plenty of free space on the database server. Undo retention is set to 14400.

The table segment is 20 GB and there are only 4000 records. This size is due to the CLOB segments in the table.

We checked with the maximum value for PCTVERSION and with RETENTION for the lob, but the process failed within 40 minutes of execution.

The SR number is 7227942.993

The partition change is on hold unless a solution is found.
Thanks a lot for your support.
Tom Kyte
December 12, 2008 - 12:49 pm UTC

you need to tell me how the lob was initially created - that tells us how the "undo" is managed for it. lobs manage their undo in the lob segment itself.

I don't know what you mean "with maximum value" - also, pctversion and retention are mutually exclusive.

<quote src=Expert Oracle Database Architecture>
...
RETENTION Clause
This is a mutually exclusive alternative to the PCTVERSION clause, and it is valid when you are using automatic undo management in the database. Rather than reserving some percentage of space in the LOBSEGMENT to version LOBs, the RETENTION clause uses the same time-based mechanism to retain data. The database would have the parameter UNDO_RETENTION set to specify how long to retain undo information for consistent reads. This parameter would also apply to LOB data in this case.
It should be noted that you cannot specify the retention time using this clause; rather, it is inherited from the database setting of UNDO_RETENTION itself.
......
</quote>


I read the SR, the analyst never said "...Oracle says that CLOB is creating problem and usage of Dbms_redefintion is not supported for huge data migration..." not in the SR anyway. What was said was:

"... Do the redefinition in a down time as dbms_redefinition cannot be used
under this amount of load ..."

meaning - your particular set of circumstances (hitting the 1555, due to not using retention I would surmise) is currently precluding you from doing this during peak load time. Not that it "cannot be used for huge data migration" - it was designed for huge data migration - think about it, if the data were small - why

a) reorg at all?
b) bother with online - it is small - it would be fast





http://docs.oracle.com/docs/cd/B19306_01/appdev.102/b14249/adlob_tables.htm#sthref190

alter the table and use retention this time.
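The switch can be done in place. A sketch using the column name from this thread (for a partitioned table like this one, the clause may need to be applied per partition or via the default attributes):

alter table batch_response_message
  modify lob (response_content) ( retention );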

"PCTVERSION", version 8.1.7

Santosh Vijayan, December 17, 2008 - 5:57 am UTC


Hi Tom,

Thanks a lot for the information provided.
And also many thanks for clearing the confusion on SR.
I understand that the LOB setting needs to be changed from PCTVERSION to RETENTION in order to use the UNDO retention features for LOB undo.
I have the new definition script as below.
Is it right?

===============================================================

CREATE TABLE BATCH_RESPONSE_MESSAGE
(
BATCH_RESP_ID NUMBER,
BATCH_CONFIG_ID NUMBER NOT NULL,
BATCH_MESSAGE_ID VARCHAR2(50 BYTE),
ASID VARCHAR2(50 BYTE) NOT NULL,
TRANSPORT_CHANNEL VARCHAR2(50 BYTE),
CONVERSATION_ID VARCHAR2(50 BYTE),
SEQUENCE_NUMBER NUMBER NOT NULL,
BATCH_STATUS VARCHAR2(1 BYTE) NOT NULL,
PROCESS_START_TIME DATE,
CREATION_TIME DATE NOT NULL,
RESPONSE_CONTENT CLOB,
RESPONSE_DATE DATE,
AGGREGATOR_SITE CHAR(1 BYTE),
SENDER_SITE CHAR(1 BYTE),
BATCH_SIZE NUMBER,
BATCH_INTERACTION_ID VARCHAR2(100 BYTE),
BATCH_COUNT NUMBER,
BATCH_TYPE VARCHAR2(30 BYTE) NOT NULL,
REF_TO_MESSAGE_ID VARCHAR2(50 BYTE)
)
TABLESPACE BATCH_SEND_DATA
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
PARTITION BY RANGE (CREATION_TIME)
SUBPARTITION BY LIST (BATCH_TYPE)
SUBPARTITION TEMPLATE
(SUBPARTITION P_BRM_L1 VALUES ('ONS_MDR', 'CHRIS_CRB', 'CHRIS_CRD', 'CHRIS_DGU', 'CHRIS_NNA'),
SUBPARTITION P_BRM_L2 VALUES ('PPA'),
SUBPARTITION P_BRM_L3 VALUES ('HL7')
)
(
PARTITION P_BRM_20081201 VALUES LESS THAN (TO_DATE(' 2008-12-02 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
NOLOGGING
NOCOMPRESS
TABLESPACE BATCH_SEND_DATA
LOB (RESPONSE_CONTENT) STORE AS
(
ENABLE STORAGE IN ROW
CHUNK 8192
---PCTVERSION 10
RETENTION
NOCACHE
STORAGE (
BUFFER_POOL DEFAULT
)
)
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
BUFFER_POOL DEFAULT
)
( SUBPARTITION P_BRM_20081201_P_BRM_L1 VALUES ('ONS_MDR', 'CHRIS_CRB', 'CHRIS_CRD', 'CHRIS_DGU', 'CHRIS_NNA')
LOB (RESPONSE_CONTENT) STORE AS ( TABLESPACE BATCH_SEND_DATA )
TABLESPACE BATCH_SEND_DATA,
SUBPARTITION P_BRM_20081201_P_BRM_L2 VALUES ('PPA')
LOB (RESPONSE_CONTENT) STORE AS ( TABLESPACE BATCH_SEND_DATA )
TABLESPACE BATCH_SEND_DATA,
SUBPARTITION P_BRM_20081201_P_BRM_L3 VALUES ('HL7')
LOB (RESPONSE_CONTENT) STORE AS ( TABLESPACE BATCH_SEND_DATA )
TABLESPACE BATCH_SEND_DATA ),
PARTITION P_BRM_20081202 VALUES LESS THAN (TO_DATE(' 2008-12-03 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
NOLOGGING
NOCOMPRESS
TABLESPACE BATCH_SEND_DATA
LOB (RESPONSE_CONTENT) STORE AS
(
ENABLE STORAGE IN ROW
CHUNK 8192
--PCTVERSION 10
RETENTION
NOCACHE
STORAGE (
BUFFER_POOL DEFAULT
)
)
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
BUFFER_POOL DEFAULT
)
( SUBPARTITION P_BRM_20081202_P_BRM_L1 VALUES ('ONS_MDR', 'CHRIS_CRB', 'CHRIS_CRD', 'CHRIS_DGU', 'CHRIS_NNA')
LOB (RESPONSE_CONTENT) STORE AS ( TABLESPACE BATCH_SEND_DATA )
TABLESPACE BATCH_SEND_DATA,
SUBPARTITION P_BRM_20081202_P_BRM_L2 VALUES ('PPA')
LOB (RESPONSE_CONTENT) STORE AS ( TABLESPACE BATCH_SEND_DATA )
TABLESPACE BATCH_SEND_DATA,
SUBPARTITION P_BRM_20081202_P_BRM_L3 VALUES ('HL7')
LOB (RESPONSE_CONTENT) STORE AS ( TABLESPACE BATCH_SEND_DATA )
TABLESPACE BATCH_SEND_DATA ),
PARTITION P_BRM_MAXVALUE VALUES LESS THAN (MAXVALUE)
NOLOGGING
NOCOMPRESS
TABLESPACE BATCH_SEND_DATA
LOB (RESPONSE_CONTENT) STORE AS
(
ENABLE STORAGE IN ROW
CHUNK 8192
--PCTVERSION 10
RETENTION
NOCACHE
STORAGE (
BUFFER_POOL DEFAULT
)
)
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
BUFFER_POOL DEFAULT
)
( SUBPARTITION P_BRM_MAXVALUE_P_BRM_L1 VALUES ('ONS_MDR', 'CHRIS_CRB', 'CHRIS_CRD', 'CHRIS_DGU', 'CHRIS_NNA')
LOB (RESPONSE_CONTENT) STORE AS ( TABLESPACE BATCH_SEND_DATA )
TABLESPACE BATCH_SEND_DATA,
SUBPARTITION P_BRM_MAXVALUE_P_BRM_L2 VALUES ('PPA')
LOB (RESPONSE_CONTENT) STORE AS ( TABLESPACE BATCH_SEND_DATA )
TABLESPACE BATCH_SEND_DATA,
SUBPARTITION P_BRM_MAXVALUE_P_BRM_L3 VALUES ('HL7')
LOB (RESPONSE_CONTENT) STORE AS ( TABLESPACE BATCH_SEND_DATA )
TABLESPACE BATCH_SEND_DATA )
)
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;

=============================================================

Also I understand the following.

1> During Peak load DBMS_REDEFINITION will not work as it is designed for huge data migration (& not for data migration at peak load time).

The database and table cannot be taken offline at any time.
We need to make the partition changes online.
Please let me know if you can hint at any approach.

I have an approach. Can you please comment on it?

It is a manual SQL/PLSQL approach clubbed with a Data Pump import.

The details are as below

========================================================================================

1> CREATE TABLE BATCH_RESPONSE_MESSAGE_TEMP with all columns the same as BATCH_RESPONSE_MESSAGE.

2> The partition structure for table BATCH_RESPONSE_MESSAGE_TEMP will be different; it will have the new partition & subpartition definition.

3> Only the primary key constraint will exist on BATCH_RESPONSE_MESSAGE_TEMP. This is to avoid duplication.

4> Foreign key constraints & other indexes will not be created initially on BATCH_RESPONSE_MESSAGE_TEMP.

5> Approaches to load data into TABLE BATCH_RESPONSE_MESSAGE_TEMP and keep it as up to date as TABLE BATCH_RESPONSE_MESSAGE:

====================================================================================================================================

Approach 1>

CREATE TRIGGER TRG_X ON BATCH_RESPONSE_MESSAGE & KEEP IT DISABLED.
The trigger will apply whatever data changes happen on BATCH_RESPONSE_MESSAGE to the BATCH_RESPONSE_MESSAGE_TEMP table also.

Data will be populated in BATCH_RESPONSE_MESSAGE_TEMP through a datapump import.
Can it be done through a network link pointing to the same database, where the source (BATCH_RESPONSE_MESSAGE) and target (BATCH_RESPONSE_MESSAGE_TEMP) tables are different?
I am not sure about it.

When the datapump import is invoked, the trigger TRG_X will be enabled.
There will be instances when an update is happening on BATCH_RESPONSE_MESSAGE but the row is not yet loaded in BATCH_RESPONSE_MESSAGE_TEMP; in such cases the trigger will be coded to insert the record in BATCH_RESPONSE_MESSAGE_TEMP and update it with the changes.

Once all the data is imported, create the foreign key constraints & other indexes on BATCH_RESPONSE_MESSAGE_TEMP which were skipped earlier.

All this time the trigger will keep BATCH_RESPONSE_MESSAGE_TEMP updated online.

Finally, take the table BATCH_RESPONSE_MESSAGE offline by renaming it, and rename BATCH_RESPONSE_MESSAGE_TEMP to BATCH_RESPONSE_MESSAGE.
(For this period the database will be offline for applications, and I believe that renaming tables should not take a long time.)

====================================================================================================================================

Approach 2>

CREATE TRIGGER TRG_X ON BATCH_RESPONSE_MESSAGE & KEEP IT DISABLED.
The trigger will apply whatever data changes happen on BATCH_RESPONSE_MESSAGE to the BATCH_RESPONSE_MESSAGE_TEMP table also.

Data will be populated on BATCH_RESPONSE_MESSAGE_TEMP through SQL script or PL/SQL script.

The SQL script could be: insert into BATCH_RESPONSE_MESSAGE_TEMP select * from BATCH_RESPONSE_MESSAGE.
But I am not sure whether this approach will succeed, given the amount of undo space required to keep a snapshot of the data being added to BATCH_RESPONSE_MESSAGE_TEMP.

The PL/SQL script could insert into BATCH_RESPONSE_MESSAGE_TEMP through a for loop over a cursor (select * from BATCH_RESPONSE_MESSAGE).
Commits will be done in batches of millions to keep failures due to undo space in check.

The trigger TRG_X will be enabled when the SQL script or PL/SQL script is run.

There will be instances when an update is happening on BATCH_RESPONSE_MESSAGE but the row is not yet loaded in BATCH_RESPONSE_MESSAGE_TEMP; in such cases the trigger will be coded to insert the record in BATCH_RESPONSE_MESSAGE_TEMP and update it with the changes.

Once all the data is imported, create the foreign key constraints & other indexes on BATCH_RESPONSE_MESSAGE_TEMP which were skipped earlier.

All this time the trigger will keep BATCH_RESPONSE_MESSAGE_TEMP updated online.

Finally, take the table BATCH_RESPONSE_MESSAGE offline by renaming it, and rename BATCH_RESPONSE_MESSAGE_TEMP to BATCH_RESPONSE_MESSAGE.
(For this period the database will be offline for applications, and I believe that renaming tables should not take a long time.)

====================================================================================================================================

Can you please comment on the above approaches? Is this the right approach? Will it work?
This will not be easy with the amount of data, partitions and CLOB segments.
Also, please advise on what I have missed in the above approaches.

Many thanks

Santosh
Tom Kyte
December 29, 2008 - 12:27 pm UTC

lots of stuff here.

Yes, you put retention in instead of pctversion - you accomplished that.

As for:

1> During Peak load DBMS_REDEFINITION will not work as it is designed for huge
data migration (& not for data migration at peak load time).

I have tried to say over and over again - that is entirely utterly completely FALSE. FALSE. Untrue. Not True. Incorrect. Wrong. Not accurate.

If you have sufficient undo retention, you can use dbms_redefinition whenever. We use it on some of our largest databases - which are always "at peak" (and some of the redefs have taken longer than a day and were done during the week....)




Santosh Vijayan, January 05, 2009 - 7:11 am UTC

Thanks a lot TOM

A reader, April 11, 2011 - 7:34 am UTC

Tom,

we have a schema with a lot of LOBs and UNDO_RETENTION set to 14400 (database version is 10.2, LOBs were created using the RETENTION option). The LOBs are updated very often by concurrent sessions.

Is there any way to see how much space in the schema is currently used for before-images of LOBs (both those still needed for undo and those that could be overwritten)? I assume dba_extents shows all LOB data (current data and before-images) and cannot tell them apart.

Will "alter table move" get rid of all before-images of LOBs (again, both obsolete/overwritable and still needed for undo)? As a DDL operation it should not generate any undo itself.
Tom Kyte
April 13, 2011 - 8:46 am UTC

There is no method I'm aware of to see how many blocks are being retained for undo purposes. We don't move them to the freelist so something like dbms_space won't work.

Moving them will release the space, yes.

ops$tkyte%ORA11GR2> create table t ( x int , y clob )
  2  LOB (y) STORE AS Y_LOB (RETENTION)
  3  /

Table created.

ops$tkyte%ORA11GR2> select dbms_metadata.get_ddl( 'TABLE', 'T' ) from dual;

DBMS_METADATA.GET_DDL('TABLE','T')
-------------------------------------------------------------------------------

  CREATE TABLE "OPS$TKYTE"."T"
   (    "X" NUMBER(*,0),
        "Y" CLOB
   ) SEGMENT CREATION DEFERRED
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
  TABLESPACE "USERS"
 LOB ("Y") STORE AS BASICFILE "Y_LOB"(
  TABLESPACE "USERS" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION
  NOCACHE LOGGING )


ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> declare
  2          l_data long := rpad( '*', 32000 );
  3  begin
  4          for i in 1 .. 100
  5          loop
  6                  insert into t (x,y) values ( i, l_data );
  7          end loop;
  8          commit;
  9  end;
 10  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> select segment_name, segment_type from user_segments;

SEGMENT_NAME                   SEGMENT_TYPE
------------------------------ ------------------
T                              TABLE
SYS_IL0000088538C00002$$       LOBINDEX
Y_LOB                          LOBSEGMENT

ops$tkyte%ORA11GR2> exec show_space( 'Y_LOB', user, 'LOB' );
Unformatted Blocks .....................              88
FS1 Blocks (0-25)  .....................               0
FS2 Blocks (25-50) .....................               0
FS3 Blocks (50-75) .....................               0
FS4 Blocks (75-100).....................               0
Full Blocks        .....................             400
Total Blocks............................             512
Total Bytes.............................       4,194,304
Total MBytes............................               4
Unused Blocks...........................               0
Unused Bytes............................               0
Last Used Ext FileId....................               4
Last Used Ext BlockId...................          18,816
Last Used Block.........................             128

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> delete from t;

100 rows deleted.

ops$tkyte%ORA11GR2> commit;

Commit complete.

ops$tkyte%ORA11GR2> exec show_space( 'Y_LOB', user, 'LOB' );
Unformatted Blocks .....................              88
FS1 Blocks (0-25)  .....................               0
FS2 Blocks (25-50) .....................               0
FS3 Blocks (50-75) .....................               0
FS4 Blocks (75-100).....................               0
Full Blocks        .....................             400
Total Blocks............................             512
Total Bytes.............................       4,194,304
Total MBytes............................               4
Unused Blocks...........................               0
Unused Bytes............................               0
Last Used Ext FileId....................               4
Last Used Ext BlockId...................          18,816
Last Used Block.........................             128

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> alter table t move lob (y) store as y_lob;

Table altered.

ops$tkyte%ORA11GR2> exec show_space( 'Y_LOB', user, 'LOB' );
Unformatted Blocks .....................               0
FS1 Blocks (0-25)  .....................               0
FS2 Blocks (25-50) .....................               0
FS3 Blocks (50-75) .....................               0
FS4 Blocks (75-100).....................               0
Full Blocks        .....................               0
Total Blocks............................               8
Total Bytes.............................          65,536
Total MBytes............................               0
Unused Blocks...........................               5
Unused Bytes............................          40,960
Last Used Ext FileId....................               4
Last Used Ext BlockId...................          60,200
Last Used Block.........................               3

PL/SQL procedure successfully completed.



Undo and REDO for LOBs

A reader, April 28, 2011 - 11:09 am UTC

Tom,

If I am updating LOBs and Oracle saves the UNDO data for the lobs in the data segment, can I say that if I change 1 bit of a LOB then at least one chunk of UNDO data and two chunks of REDO data will be generated?
Tom Kyte
April 28, 2011 - 12:04 pm UTC

undo for lobs is managed in the lob segment itself. We do not do the undo for the lob segment in the undo tablespace.


A reader, April 28, 2011 - 12:43 pm UTC

Tom,

My question was whether REDO will be generated for this undo managed in the segment, and whether this generated REDO will be managed chunk by chunk (so at least one chunk of REDO will be generated if 1 bit is changed).
Tom Kyte
April 28, 2011 - 12:50 pm UTC

No, it doesn't work that way. The lob chunks are managed by the pointers in the lob index. We update a lob index to remove a chunk, we don't have to actually 'touch' the chunk.

How to calculate the ratio of "active LOB data" to UNDO

A reader, May 16, 2011 - 9:13 am UTC

Tom,

According to the Metalink Notes [ID 386341.1] we can use the query:

select sum(dbms_lob.getlength (<lob column name>)) from <table_name>;

to calculate the active LOB data. The rest of the allocated space is undo space. This works very well for BLOBs.

I have a problem calculating such a ratio for tables with CLOBs/XMLTYPE:

select sum(dbms_lob.getlength(t.<lob column name>.getClobVal())) from t t

In this case the query will return the number of chars and not bytes. My question is how I can get bytes instead of the number of chars here, for a database with character set AL16UTF16 (e.g. the function lengthb will not work in this case).
Tom Kyte
May 18, 2011 - 3:28 am UTC

it works "ok" for lobs - because lobs are managed in CHUNKS. getlength returns the length of the lob which is always going to be SMALLER than the chunk allocated to the lob.



ops$tkyte%ORA11GR2> create table t ( x xmltype );

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert into t values ( xmltype( '<a>hello world</a>' ) );

1 row created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select lengthb( t.x.getclobval() ) from t t;

LENGTHB(T.X.GETCLOBVAL())
-------------------------
                       19




why can't you use lengthb?

A reader, May 25, 2011 - 12:38 pm UTC

Tom,

What character set does the database have where you ran your query? My database has AL16UTF16 and getlenthb does not work.

So I am still looking for a solution...
Tom Kyte
May 25, 2011 - 2:21 pm UTC

I'm WE8MSWIN1252

but you have yet to define what "does not work" means.

Can you run my example and cut and paste it just like I did so we can see what you see?

what is getlengthb?


A reader, May 26, 2011 - 6:23 am UTC

Tom,

Sorry, I cannot run your code on my system.
Sorry, getlenthb was a typo.

The error I get is (and it is exactly the expected behavior according to the Oracle 10.2 documentation of the lengthb function in a multibyte character set):

desc T

RECID VARCHAR2 NOT NULL
XMLRECORD SYS.XMLTYPE

select sum(lengthb(t.XMLRECORD.getClobVal())) from t t

ORA-22998: CLOB or NCLOB in multibyte character set not supported.

So how can I find (estimate) how much data (besides UNDO data for LOBs) I have in this table? It is the biggest (200 GB) table in the database. The second biggest table is 70 GB, consists of BLOBs, and currently has only about 6 MB of real data (the rest is UNDO for LOBs). As for BLOBs, dbms_lob.getlength works, showing exactly the size of the "real data" (alter table move will reduce the table size from 70 GB to 6 MB). What I need is a query which will give me the estimates for CLOBs/XMLTYPEs.
Tom Kyte
May 26, 2011 - 8:36 am UTC

ah, you just tickled my memory.

In a multibyte database using utf - we store clob data in UCS2 - a 2 byte fixed width character set.

Just multiply by 2.


A reader, May 26, 2011 - 8:47 am UTC

So I should simply multiply by 2 the result returned by:

select sum(dbms_lob.getlength(t.XMLRECORD.getClobVal())) from t t

and I will get bytes instead of number of chars?

In my case dbms_lob.getlength and length return the same.
Tom Kyte
May 26, 2011 - 9:28 am UTC

getlength/length return the number of characters.

since the clob data is encoded using a fixed width UCS-2 character set - every character = 2 bytes
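Putting that together, an estimate of the active CLOB data in bytes for an AL16UTF16 database could look like this (a sketch, assuming the table T and XMLTYPE column XMLRECORD discussed above):

-- every character is stored in 2 bytes (fixed width UCS-2)
select 2 * sum( dbms_lob.getlength( t.xmlrecord.getClobVal() ) ) as approx_bytes
from t t;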

A reader, June 10, 2011 - 6:42 am UTC

Tom,

I see different values for PCTVERSION / RETENTION for the LOBs of the same table from dbms_metadata.get_ddl and from dba_lobs:

dbms_metadata.get_ddl:

XMLTYPE COLUMN "XMLRECORD" STORE AS CLOB "LOB_T1"(
TABLESPACE "XML" ENABLE STORAGE IN ROW CHUNK 8192 PCTVERSION 100
CACHE
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT))
PARALLEL 12

dba_lobs shows RETENTION of 10800 for the same table.

The db version is 10.2.0.4.

What is the reason for this difference? Is it possible that the part of the LOB stored in the table itself (the in-row part) has different retention/pctversion settings than the part stored in the LOB segment? How does Oracle manage undo for the part of the LOB stored in the table itself? Does Oracle use the undo tablespace for this part?

Tom Kyte
June 10, 2011 - 8:08 am UTC

data stored in the table is versioned in the undo tablespace - as part of the database block. Only data in the lob segment is versioned using the lob segment itself.

If you have a complete example - from a create table, to both queries you are running to review what you have, I'll take a look.

A reader, June 10, 2011 - 8:25 am UTC

select dbms_metadata.get_ddl('TABLE', 'T' , 'TOWNER') from dual

CREATE TABLE "TOWNER"."T"
( "RECID" VARCHAR2(255),
"XMLRECORD" "SYS"."XMLTYPE" ,
CONSTRAINT "T_ENTRY_PK" PRIMARY KEY ("RECID")
USING INDEX PCTFREE 10 INITRANS 12 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "INDXXML" ENABLE
) PCTFREE 10 PCTUSED 40 INITRANS 12 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "DATAXML"
XMLTYPE COLUMN "XMLRECORD" STORE AS CLOB "LOB_T"(
TABLESPACE "DATAXML" ENABLE STORAGE IN ROW CHUNK 8192 PCTVERSION 100
CACHE
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT))
PARALLEL 12

select * from dba_lobs where segment_name like 'LOB_T'

OWNER            TOWNER
TABLE_NAME       T
COLUMN_NAME      SYS_NC3$
SEGMENT_NAME     LOB_T
TABLESPACE_NAME  DATAXML
INDEX_NAME       LOBI_T
CHUNK            8192
PCTVERSION       (null)
RETENTION        10800
FREEPOOLS        (null)
CACHE            YES
LOGGING          YES
IN_ROW           YES
FORMAT           ENDIAN NEUTRAL
PARTITIONED      NO
Tom Kyte
June 10, 2011 - 2:43 pm UTC

now, where is the original create table?

and can you just show *relevant* stuff from dba_lobs - you know, so we can see what you mean...



A reader, June 12, 2011 - 12:49 pm UTC

The relevant stuff from dba_lobs is retention=10800.....

I never had the "original" create table statement... (the application was created some 7000 km and 5 years away from here and now). There is no chance to get this statement.

But the question (why dbms_metadata and dba_lobs report different values) is legitimate, isn't it?


Tom Kyte
June 17, 2011 - 10:15 am UTC

In order to diagnose this, I'd need to know how the table was created.
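For anyone wanting to reproduce the comparison, a minimal self-contained sketch (hypothetical names; the LOB parameters are only illustrative) would run the create table and then both queries side by side:

create table t ( recid varchar2(255), xmlrecord xmltype )
xmltype column xmlrecord store as clob lob_t
( tablespace users enable storage in row chunk 8192 retention );

select dbms_metadata.get_ddl( 'TABLE', 'T' ) from dual;

select pctversion, retention from user_lobs where table_name = 'T';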

Switching from pctversion to retention

Sam, October 18, 2011 - 1:28 pm UTC

I wanted to switch the lobs in my database from pctversion to retention in order to free up some space from deleted lobs.

Table lobtest was created using pctversion.

SQL> desc lobtest
Name Null? Type
----------------------------------------- -------- ----------------------------
C CLOB

I am using the following statement.

alter table lobtest modify lob (c) (retention);

Is this sufficient to switch the lob to retention?
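A quick way to verify the switch (a sketch; it assumes the LOBTEST table above, and note that RETENTION is only honored under automatic undo management) is to check USER_LOBS - after the change, PCTVERSION should be null and RETENTION populated:

select pctversion, retention
from user_lobs
where table_name = 'LOBTEST';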
PCTVERSION and updates

A Reader, November 17, 2015 - 12:02 pm UTC

Hi,

We had an issue with enq: HW contention on a table with a clob column.

DB version : 10.2.0.5
Tablespace management : Manual Segment Space Management

we had the below waits topping the AWR

Event                 Waits      Time(s)    Avg Wait(ms)  % Total Call Time  Wait Class
enq: HW - contention  1,379,941  4,012,251  2,908         95.9               Configuration


and

Elapsed Time (s)  CPU Time (s)  Executions  Elap per Exec (s)  % Total DB Time  SQL Id        SQL Module        SQL Text
3,575,065         3,921         323,340     11.06              85.44            xxxxxxxxxx    JDBC Thin Client  update t ...
244,967           1,167         46,612      5.26               5.85             xxxxxxxxxxxx  JDBC Thin Client  insert into t...


We have close to 500k updates on the CLOB column and 50k row inserts every hour.

As the updates were waiting the most, we recommended increasing PCTVERSION from 10 to 50.

Pre-allocation of extents to the LOB segment was Oracle's recommendation when we raised an SR. The issue was fixed by increasing PCTVERSION; however, pre-allocation of extents to the CLOB segment was also done.

My questions

q1) With PCTVERSION set low (say zero), would updates on the CLOB that overwrite the existing "old image" blocks cause HW enqueue waits? How does the space management within the CLOB actually work? Is the HW enqueue acquired because a session wants to overwrite a block - i.e. to protect the block structure (the way lightweight latches protect memory structures) so that no two processes/programs can reuse the same block? If all of this is true, then why an HW enqueue lock? When the HW enqueue is acquired for space reuse by UPDATEs, does the high water mark actually move?

q2) I feel there should be a segregation of the HW enqueue between the waits caused by UPDATEs (reuse of the "old image") and by INSERTs on the clob. I understand that an UPDATE of a clob is a kind of INSERT followed by an update, but there should be a distinction between HW enqueue contention due to frequent extent allocation and the enqueue acquired during reuse of "previous image" blocks [with PCTVERSION set to a very low value here].

q3) A few years back, on a database with a similar workload and application, we experienced UPDATEs waiting on HW enqueue contention; pre-allocation of extents to the clob segments was done but it didn't solve the contention, while increasing PCTVERSION to 50 did resolve the issue. So is it fair to say that in a database with massive UPDATEs on a clob column that is facing HW enqueue contention, increasing PCTVERSION is the first thing to do if it is set to a low value (default 10)?

q4) PCTVERSION is a percentage of what? As per my understanding it is the percentage of blocks reserved for consistent reads. Say we have PCTVERSION set to 10, the CLOB segment size is 700 GB, and session #1 has updated 100 MB of the CLOB, so those 100 MB will be kept aside ("before image"). Will only 10% of the 100 MB, i.e. 10 MB, be left untouched, while the remaining 90 MB of "previous image" blocks can be overwritten by session #2 while it does UPDATEs on the clob? I strongly feel the "pct" in PCTVERSION is not the percentage of the actual clob segment size (700 GB x 10% = 70 GB) that is reserved for consistent read purposes.

q5) Is SECUREFILE in 11g the way to have Oracle manage the CLOB efficiently? In 10g MSSM tablespaces, is keeping the clob segment size small (using partitioning or purging inactive data) the only way out? We were told that no bug exists for MSSM that would explain the HW enqueue contention we see.

thanks
Connor McDonald
November 20, 2015 - 9:50 am UTC

Some information here might be useful

https://orainternals.wordpress.com/2008/05/16/resolving-hw-enqueue-contention/

but to your *list* of questions

1) HW enqueue will be needed if your segment needs to grow (because the high water mark needs to move), especially if multiple people are all wanting to move the HWM. The less frequently the segment grows, the less contention you get. (bugs etc can obviously play a part here)

2) it doesn't matter what kind of activity you have - what matters is whether the segment needs to grow. If pctversion is very high, then you could see faster segment growth, because space is not reclaimable as soon.

3) that strikes me as counter-intuitive. With a higher pctversion you should reasonably expect to see faster segment growth, because we don't reclaim old chunks.

4) From the docs

"Specify the maximum percentage of overall LOB storage space used for maintaining old versions of the LOB"

Key word there is *overall*. If your lob segment occupies 20Gb (and pctversion is 10), then we will allow the segment to grow by up to 2G of old data before we start reclaiming that old space for subsequent updates.

5) SECUREFILE is definitely the way to go.
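For reference, minimal sketches of the two remedies discussed above (hypothetical table and column names; SECUREFILE requires 11g and an ASSM tablespace):

-- pre-allocate space in the lob segment so the HWM has to move less often
alter table t modify lob (c) ( allocate extent ( size 100m ) );

-- 11g: let Oracle manage old LOB versions automatically with SECUREFILE
create table t2 ( c clob )
lob (c) store as securefile ( retention auto );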
