
Tom Kyte

Thanks for the question, T.S.V.S..

Asked: April 08, 2001 - 11:37 pm UTC

Last updated: November 30, 2012 - 3:58 am UTC

Version: 8.1.5


You Asked


Hi Tom,

I am porting data from old structures to new structures. Sometimes I port more than 100,000 records. At that time it gives the following error:
ORA-01562: failed to extend rollback segment number 2
ORA-01628: max # extents (249) reached for rollback segment block.

I know how to increase the max extents and the size of extents for tables. But how can I increase them for rollback segments?

Thanking You,
Kishore

and Tom said...

You can change the maxextents via:

alter rollback segment rbs2 storage ( maxextents N );


if you are going to change next, you could:

alter rollback segment rbs2 storage ( next 524288 );

but I would recommend DROPPING and recreating them so they have uniformly sized extents.
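
A drop-and-recreate might look like the following sketch. The segment name rbs2 comes from the answer above, but the tablespace name and storage sizes here are illustrative assumptions, not prescriptions:

```sql
-- Sketch only: tablespace name and sizes are assumptions for illustration.
ALTER ROLLBACK SEGMENT rbs2 OFFLINE;
DROP ROLLBACK SEGMENT rbs2;

-- Recreate with uniformly sized extents (INITIAL = NEXT).
CREATE ROLLBACK SEGMENT rbs2
  TABLESPACE rbs
  STORAGE ( INITIAL 1M NEXT 1M MINEXTENTS 20 MAXEXTENTS 500 OPTIMAL 20M );

ALTER ROLLBACK SEGMENT rbs2 ONLINE;
```

Setting INITIAL equal to NEXT is what gives the uniformly sized extents recommended above.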


Comments

Strange

Invisible, February 16, 2004 - 11:25 am UTC

I just did this:

SQL> declare
  2    n integer;
  3  begin
  4    for n in 0..65535 loop
  5      insert into cipher_rc6_sbox (input, output) values (n, (2*n+1)*n);
  6    end loop;
  7    commit;
  8  end;
  9  /
declare
*
ERROR at line 1:
ORA-01562: failed to extend rollback segment number 1
ORA-01628: max # extents (121) reached for rollback segment ROLLBACKSEG1
ORA-6512: at line 5

Erm... how much rollback do you NEED? :-| I wouldn't have thought this tiny thing would need very much...

It's only a test database, so I haven't allocated much rollback. (And I am the *only* person using this DB.)

I was under the impression that using AUTOEXTEND means that the thing... well... automatically extends to be whatever size it needs to be! Clearly I was catastrophically mistaken... :-S

Anyway, the rollback segments:

Create Tablespace RollbackData
DataFile: '...'
Size 20M  AutoExtend ON;

Create Public Rollback Segment RollbackSeg1  Tablespace RollbackData  Storage (Optimal 4M);

[same for RollbackSeg2..4]

I have no idea what Oracle's problem is. I suspect this means I fundamentally don't understand how this stuff works.

This is not good.

[Guess who operates the company's Oracle databases?]

PS. Is there a Better Way(tm) to create the data I'm trying to create?
 

Tom Kyte
February 16, 2004 - 11:46 am UTC

might not be your transaction that is causing this.

were there other transactions happening here?  look in v$transaction (perhaps you left a window open last week sometime).  We use RBS space in a circular fashion -- we cannot reuse RBS data until the transactions that were using it have committed.  It only takes a tiny -- but longer lived -- transaction to cause this.  It just means you tried to wrap back around in the RBS and couldn't -- but could not grow the segment either.

If you have my book "expert one on one Oracle" -- i describe how RBS is used.

The concepts guide (a MUST MUST read for any Oracle professional) covers it as well.

The ADMIN guide would be a second "must read" for you followed by the backup and recovery guide (at least 2 times).

You do not say what version of Oracle you are using either -- in 9i, you should use AUM (automatic undo mgmt) instead of manually creating your own RBS's.

Autoextend lets a file grow as segments extend.

Problem here is you said "max extents is 121 for this RBS", so -- didn't matter that the file could grow, you did not let the RBS grow.  You also did not set up an INITIAL, NEXT (should probably be 1m and 1m in many generic cases to start with).
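
Applying that "1m and 1m" suggestion to the rollback segment creation shown earlier might look like this sketch (the names come from the comment above; UNLIMITED maxextents is an assumption, not a recommendation from the answer):

```sql
-- Sketch only: applies the INITIAL/NEXT = 1M suggestion to the poster's setup.
CREATE PUBLIC ROLLBACK SEGMENT rollbackseg1
  TABLESPACE rollbackdata
  STORAGE ( INITIAL 1M NEXT 1M MAXEXTENTS UNLIMITED OPTIMAL 4M );
```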



But, yes, your approach generates THE MOST undo possible.  Consider the following 4 approaches:

ops$tkyte@ORA920PC> create table cipher_rc6_sbox ( input number, output number );
 
Table created.
 
Elapsed: 00:00:00.04
ops$tkyte@ORA920PC> create index t1_idx on cipher_rc6_sbox(input);
 
Index created.
 
Elapsed: 00:00:00.05
ops$tkyte@ORA920PC> create index t2_idx on cipher_rc6_sbox(output);
 
Index created.
 
Elapsed: 00:00:00.05
ops$tkyte@ORA920PC> create index t3_idx on cipher_rc6_sbox(input,output);
 
Index created.
 
Elapsed: 00:00:00.04
ops$tkyte@ORA920PC> create index t4_idx on cipher_rc6_sbox(output,input);
 
Index created.
 
Elapsed: 00:00:00.02

<b>Just assuming you are loading into heavily indexed tables....</b>


ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> declare
  2    n integer;
  3  begin
  4    for n in 0..65535 loop
  5      insert into cipher_rc6_sbox (input, output) values (n, (2*n+1)*n);
  6    end loop;
  7  end;
  8  /
 
PL/SQL procedure successfully completed.
 
Elapsed: 00:00:47.64
ops$tkyte@ORA920PC> select used_ublk*8/1024 "RBS used in MEG" from v$transaction;
 
RBS used in MEG
---------------
     27.3984375
 
Elapsed: 00:00:00.05

<b>now, same table -- no indexes:</b>
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> drop table cipher_rc6_sbox;
 
Table dropped.
 
Elapsed: 00:00:00.62
ops$tkyte@ORA920PC> create table cipher_rc6_sbox ( input number, output number );
 
Table created.
 
Elapsed: 00:00:00.07
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> declare
  2    n integer;
  3  begin
  4    for n in 0..65535 loop
  5      insert into cipher_rc6_sbox (input, output) values (n, (2*n+1)*n);
  6    end loop;
  7  end;
  8  /
 
PL/SQL procedure successfully completed.
 
Elapsed: 00:00:18.35
ops$tkyte@ORA920PC> select used_ublk*8/1024 "RBS used in MEG" from v$transaction;
 
RBS used in MEG
---------------
       4.421875
 
Elapsed: 00:00:00.03
ops$tkyte@ORA920PC>


<b>Now, same table -- BULK load:</b>

ops$tkyte@ORA920PC> drop table cipher_rc6_sbox;
 
Table dropped.
 
Elapsed: 00:00:00.20
ops$tkyte@ORA920PC> create table cipher_rc6_sbox ( input number, output number );
 
Table created.
 
Elapsed: 00:00:00.05
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> insert into cipher_rc6_sbox
  2  select n, (2*n+1)*n
  3    from ( select rownum-1 n
  4             from all_objects, all_objects
  5                    where rownum <= 65536 );
 
65536 rows created.
 
Elapsed: 00:00:06.97
ops$tkyte@ORA920PC> select used_ublk*8/1024 "RBS used in MEG" from v$transaction;
 
RBS used in MEG
---------------
         .15625
 
Elapsed: 00:00:00.00

<b>that is quite the difference!  but wait, it gets better:</b>

ops$tkyte@ORA920PC> create table cipher_rc6_sbox ( input number, output number );
 
Table created.
 
Elapsed: 00:00:00.05
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> insert /*+ APPEND */ into cipher_rc6_sbox
  2  select n, (2*n+1)*n
  3    from ( select rownum-1 n
  4             from all_objects, all_objects
  5                    where rownum <= 65536 );
 
65536 rows created.
 
Elapsed: 00:00:03.72
ops$tkyte@ORA920PC> select used_ublk*8/1024 "RBS used in MEG" from v$transaction;
 
RBS used in MEG
---------------
       .0078125
 
Elapsed: 00:00:00.00


 

Illuminating...

Invisible, February 16, 2004 - 12:02 pm UTC

Well, it's not another transaction; I only just switched on the machine with the DB on it (which isn't networked).

I generally stay away from the Oracle manuals, simply because they are SO difficult to use. (For example, there are several hundred pages describing in minute detail the exact syntax diagrams... but it would be more useful if they said what these things actually DO. :-S)

Anyways, I'm using Oracle 8i. (8.1.6 IIRC...)

<quote>
Autoextend lets a file grow as segments extend.
</quote>

I get it...

<quote>
Problem here is you said "max extents is 121 for this RBS", so -- didn't matter
that the file could grow, you did not let the RBS grow. You also did not set up
an INITIAL, NEXT (should probably be 1m and 1m in many generic cases to start
with).
</quote>

I don't remember saying that... is it a default if not explicitly specified or something?

Anyway, I'll go have a look at the syntax diagrams again.

Thanks for your time.


Tom Kyte
February 16, 2004 - 12:48 pm UTC

you just described the concepts guide!


it is a default,


select * from dba_segments where segment_name = 'YOUR ROLLBACK SEGMENT NAMES';

will show you. could be 40k/40k meaning about 4.7m max with 121 extents.


Neither the concepts nor the admin nor the backup guide are full of train tracks -- just the SQL reference (one guide, one guide out of all of them!) has that.


I've learned most everything I know about Oracle from those documents you avoid. They are nowhere near what you describe. One document with wire diagrams. The rest -- well, not so.

clarification of amount of undo generated

amit poddar, February 16, 2004 - 1:25 pm UTC

Hi,
In your examples above you showed how Oracle generates different amounts of undo in different scenarios.

Inserting into a table without indexes would generate less undo. That makes sense.
But could you please clarify why the bulk insert generates less undo than the row by row plsql insert?



Tom Kyte
February 16, 2004 - 1:35 pm UTC

because each insert statement has some amount of "header/trailer" overhead stuffed on the undo record generated -- each statement is just that, a statement with fixed overhead.

by doing in bulk, we generate 1/65536th of that overhead. Doing things in bulk is always a "good thing (tm)"

I always say "do it in a single statement if you can".

about append hint

amit poddar, February 16, 2004 - 1:38 pm UTC

Hi,
I missed this in previous question

What about the append hint?
It seems to generate much less undo -- almost negligible.
Could you shed some light on why?

Tom Kyte
February 16, 2004 - 1:48 pm UTC

it is in its very nature.

append writes above the high water mark for the table.
the only undo necessary is therefore the undo to undo the "get me space" request. The actual insert generates no real undo. In order to rollback that insert -- all we need do is "un-get your space" and poof -- data is gone.

If you have indexes, they will generate UNDO regardless but even that is minimized with append as the indexes are maintained in BULK at the end of the insert -- minimizing the work done to the index itself.

Once again...

Invisible, February 17, 2004 - 5:32 am UTC

Well, I can never find anything in the Oracle docs. I can find masses of highly technical writing which seems to assume you already know it all, and I can find introductory material which assumes you barely know what a computer is, but I can find almost none of the material I want - the bit in the middle! It's really very difficult indeed to find any useful info at all - just finding out the exact command to cause a log switch took me ages. (Is it ALTER DATABASE something? No, wait, maybe it's ALTER SYSTEM... or is it ALTER LOG...)

If it makes any odds, my lecturers at university had the same trouble. But anyway, moaning about it isn't going to make the documentation magically get better, so let's do something more positive with this thread...

<quote>
But, yes, your approach generates THE MOST undo possible.
</quote>

Thought so!

I figured more frequent commits might change that (it's not like there are real "transactions" here, I'm just loading data into a table, and I'm the only database user). But you seem to recommend infrequent commits. (Certainly the extra checkpoints would [presumably] slow things down.)

I don't have ANY indexes; I figured it would be faster to build those AFTER I fill up the table (if I choose to do them at all).

<quote>
ops$tkyte@ORA920PC> insert into cipher_rc6_sbox
2 select n, (2*n+1)*n
3 from ( select rownum-1 n
4 from all_objects, all_objects
5 where rownum <= 65536 );

65536 rows created.
</quote>

Huh?!

Well, clearly a single SQL statement is better than my loops approach - but I couldn't think of any other way to do it.

I have no idea how that example works! :-/

<quote>
ops$tkyte@ORA920PC> insert /*+ APPEND */ into cipher_rc6_sbox
2 select n, (2*n+1)*n
3 from ( select rownum-1 n
4 from all_objects, all_objects
5 where rownum <= 65536 );

65536 rows created.
</quote>

Ditto.

(Besides, isn't /* */ just a comment?)

Again, thanks for your time.


Tom Kyte
February 17, 2004 - 8:53 am UTC

sorry if this sounds overbearing, that is not my intention. I'm just on a crusade to get people to *read the stuff*.

also, one should keep in mind that this is a continual learning process. I learn something new about Oracle every single day -- and I've been using it for 16 years.

question for you tho

why would you want to know how to switch logs? before knowing what logs are, do, etc? I mean -- switching logs manually just isn't "something normally done" -- ever? I would hope it would be something you would think about after reading something like the admin guide (as it is sort of an admin task)


but that aside, lets follow my line of thinking here.


a) I want to switch logs (why, no idea, just do)
b) log mgmt is an admin task.
c) admin tasks are in the admin guide
d) admin guide has chapter 7 "managing the online redo log"
e) i goto that chapter and the first page has list of topics
f) read list and see "forcing log switches"

hmmm. Perhaps you are just only using the sql ---->>> REFERENCE <<<---- (would hope academics at a school would understand the use of a reference manual vs the others -- well, actually, I would hope a lecturer at a university to which I'm paying big bucks would have long ago -- i don't know -- sort of MASTERED the topic they are lecturing on?)


The sql reference -- very dry, gives you syntax and prereqs.

Admin Guide -- well, does what it says, tells you how to admin, discusses in a conversational tone the topics. Take that log switch topic -- here is what it had to say:

<quote>
Forcing Log Switches

A log switch occurs when LGWR stops writing to one online redo log group and starts writing to another. By default, a log switch occurs automatically when the current online redo log file group fills.

You can force a log switch to make the currently active group inactive and available for online redo log maintenance operations. For example, you want to drop the currently active group, but are not able to do so until the group is inactive. You may also wish to force a log switch if the currently active group needs to be archived at a specific time before the members of the group are completely filled. This option is useful in configurations with large online redo log files that take a long time to fill.

To force a log switch, you must have the ALTER SYSTEM privilege. Use the ALTER SYSTEM statement with the SWITCH LOGFILE clause.

The following statement forces a log switch:

ALTER SYSTEM SWITCH LOGFILE;
</quote>


You see -- I violently disagree that the documentation "needs to get better".

I think it just needs to be *read*, or at least *searched*.

Let's take that example again. I want to find out how to switch logs. Ok, I goto

</code> http://otn.oracle.com/pls/db92/db92.homepage

(which is the online docs, freely available to anyone on the planet).  I plug in 

log switch

in the "enter word or phrase" dialog and hit search....

I get "99 topics in 19 books".  the admin guide had 12 matching topics and is listed first (it does not have the most hits, it had the most relevant ones).  I click on Admin Guide and get a VTOC (virtual table of contents...) for my search:

Administrator's Guide (12 matching topics)
 
    
    
      
About Oracle Background Processes
 Dropping Log Groups
 Dropping Online Redo Log Members
 Factors Affecting the Setting of ARCHIVE_LAG_TARGET<b>
 Forcing Log Switches</b>
 Log Switches and Log Sequence Numbers
 Managing the Online Redo Log
 Responding to Online Redo Log Failure
 Running a Database in ARCHIVELOG Mode
 Running a Database in NOARCHIVELOG Mode
 Setting the ARCHIVE_LAG_TARGET Initialization Parameter
 Setting the Size of Online Redo Log Members 


Ok, so -- if you ignore the reference (except to get explicit help on a specific command -- using the reference would be like using a Spanish dictionary to try and phrase a question in Spanish when you only speak English.  Not very useful.  I'd rather have a PHRASE book at that point.  But, in order to find out what each word in a phrase meant, I'd want that dictionary, no?)  and use the other guides, you'd find it is actually "pretty darn good"


In my last book -- I even set up a roadmap of required reading:


                  Concepts Guide (all)
                         |
                         v
                  New Features Guide (all)
                         |
 dev track +------------------------------+  dba track
           |                              |
    app. dev. guide                 backup/recovery concepts
           |                              |
    pl/sql guide                    rman guide
           |                              |
    performance planning            admin guide
           +------------------------------+
                        |
                   performance tuning 
                   guide and reference





And I wouldn't expect what I did above with the INSERT as SELECT to be "documented" as it is one of those things that "if you know sql, you should just 'know'".  It is "just sql".


You wanted 64k rows.  Well, we can generate as many rows as we want by selecting from some tables.


select null from all_objects, all_objects

just generates a result set that is the cartesian product of all_objects with all_objects.  The first row in all_objects is joined to every row in all_objects, then the second row and so on.  So, at the end of the day, we get cardinality(all_objects) * cardinality(all_objects) rows out of that.  Given that all_objects has 10's of thousands of rows in 8i and up -- that is "lots of rows".  We needed 64k so we just:


select null from all_objects, all_objects where rownum <= 65536

to stop it (rownum being a built-in, very important pseudo-column in Oracle -- if you use the above link to search for rownum, you can read about it if you don't know what it is).


Now, you needed a number n looping from 0 on up -- well, we just 

select rownum-1 n from all_objects, all_objects where rownum <= 65536

and we get that.  Then, we just apply your formula.  



/* */ is a comment

/*+ .... */ is a HINT


http://docs.oracle.com/docs/cd/B10501_01/server.920/a96533/hintsref.htm#7834

/*+  APPEND */ specifically is a direct path insert, used in bulk loads for high speed loading.  You'll have to see:

http://docs.oracle.com/docs/cd/B10501_01/server.920/a96524/c21dlins.htm#11358 <code>

to fully appreciate it



Oops

Invisible, February 17, 2004 - 7:30 am UTC

<quote>
I figured more frequent commits might change that (it's not like there are real "transactions" here, I'm just loading data into a table, and I'm the only database user). But you seem to recommend infrequent commits. (Certainly the extra checkpoints would [presumably] slow things down.)
</quote>

Erm... what on Earth am I chatting about?!

It's LOG SWITCHES that cause a checkpoint, not commits...

Feeling stupid now. :-/


Tom Kyte
February 17, 2004 - 9:56 am UTC

well, frequent commits as well as single row inserts do generate gobs more redo -- so actually there is a correlation between the two!!!

commit more often = more redo generated = more frequent log switching = more checkpoint activity.
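
One way to see that correlation for yourself is to read your session's "redo size" statistic before and after each variation of the load. This is a sketch, not from the original answer; the views and statistic name are standard:

```sql
-- Sketch: snapshot this session's redo generated so far.
SELECT b.value "redo size"
  FROM v$mystat b, v$statname a
 WHERE a.statistic# = b.statistic#
   AND a.name = 'redo size';
-- Run the load committing inside the loop, re-query and note the delta;
-- then repeat with a single COMMIT at the end and compare the two deltas.
```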

You'll love those books....

Franco, February 17, 2004 - 7:31 am UTC

.... I can't force you to read the Oracle manuals, but I can suggest Tom's books; then you'll feel the need to read the related Oracle manuals! Anyway, /* */ = Oracle9i SQL Reference ==> Chapter 2 'Basic Elements of Oracle SQL' pag. 2-92 ;-)

Orange

Invisible, February 17, 2004 - 10:39 am UTC

<quote>
sorry if this sounds overbearing, that is not my intention. I'm just on a crusade to get people to *read the stuff*.
</quote>

...in other words, READ THE MANUAL ;-)

I know, a lot of people don't - and there can be loads of useful information in manuals.

I did actually TRY to read Oracle's various manuals, but in the end I just gave up. It was vastly too hard to find the information I wanted. The documentation just seems to be a HUGE mass of pages, and I really struggled to work out where to look for various things.

...so I went out and bought a book about Oracle. Suddenly everything made much more sense. (For example, as a result our database is now *regularly backed up*, which is always a Good Thing[tm]...)

<quote>
also, one should keep in mind that this is a continual learning process. I learn something new about Oracle every single day -- and I've been using it for 16 years.
</quote>

Surely this is true of just about any nontrivial area of knowledge! ;-)

<quote>
question for you tho

why would you want to know how to switch logs? before knowing what logs are, do, etc? I mean -- switching logs manually just isn't "something normally done" -- ever? I would hope it would be something you would think about after reading something like the admin guide (as it is sort of an admin task)
</quote>

Oh, totally - if I hadn't already worked out what a redo log is, I wouldn't be trying to do a log switch. The fact is, I was working on a test database, and I wanted to change the redo log size. To do that, you create new logs and drop the old ones. But you can't do that while one of them is active...

As I say, I had no idea where to look in the documentation to find the command... I eventually found it (I now have the printout for ALTER SYSTEM and ALTER DATABASE on my desk, since these are the two most used commands ever!)

<quote>
a) I want to switch logs (why, no idea, just do)
b) log mgmt is an admin task.
c) admin tasks are in the admin guide
d) admin guide has chapter 7 "managing the online redo log"
e) i goto that chapter and the first page has list of topics
f) read list and see "forcing log switches"
</quote>

I got lost at B... I wouldn't know what to classify the task under! :-S

C... I didn't realise there even WAS an "admin guide"... I'll have to see if I can find that... wonder what else is in it?

D, E, F - C kinda makes these moot.

<quote>
hmmm. Perhaps you are just only using the sql ---->>> REFERENCE <<<---- (would hope academics at a school would understand the use of a reference manual vs the others -- well, actually, I would hope a lecturer at a university to which I'm paying big bucks would have long ago -- i don't know -- sort of MASTERED the topic they are lecturing on?)
</quote>

Well *I* was only reading the SQL Reference (and it wasn't easy to find - there's SQL, there's SQL*Plus, there's PL/SQL [no idea what the difference is or which one I need to read about])

/* Dear me - too many nested comments! :-S */

When at Uni, I didn't have access to *any* documentation at all. Just had to take very good notes. ;-)

As for the lecturers... well yes, you would have thought your money would buy you something. To be fair, they knew just about everything there is to know about writing SQL queries and designing schemas. Just not about the exact implementation of how Oracle does other stuff. (Concurrency, backups, etc.) Wasn't really part of the course anyway...

...

Mmm... now that quote from the Admin Guide sounds a lot more helpful than anything I managed to find before! I'll have to see if it's on my CD...

<quote>
You see -- I violently disagree that the documentation "needs to get better".
</quote>

I couldn't tell. ;-)

No, seriously... what good is having all these great explanations if I can't find them? They may as well not exist if I don't know where they are. Fact is, all the documentation I could find was terse or just confusing. [Yeah, I know - you're going to tell me I'm looking in the wrong places.]

<quote>
Let's take that example again. I want to find out how to switch logs. Ok, I goto

</code> http://otn.oracle.com/pls/db92/db92.homepage

(which is the online docs, freely available to anyone on the planet).
</quote>

Well, it keeps asking me for a username and password. (Just like the discussion forums did. When I tried to apply for said account, the website demanded all sorts of details I'm not willing to give out... Presumably this is a similar thing.)

But hey, the documentation is all on my CD - somewhere.

<quote>
In my last book -- I even set up a roadmap of required reading:
</quote>

Yeah - I saw a preview of Chapter 1 online somewhere. (Well, can't remember exactly WHICH of your books, but one of them.) The whole chapter was in fact rather good if I may say so.

<quote>
You wanted 64k rows.  Well, we can generate as many rows as we want by selecting from some tables.
</quote>

Didn't know there was a table called ALL_OBJECTS... Given that there is, and that it's big, this now makes more sense. (There's still that RowNum part though - if I could figure out which manual to look in, it's probably there somewhere. Another one of those Oracle mysteries - like that DUAL thing... [I'll save that for another thread])

<quote>
/* */ is a comment

/*+ .... */ is a HINT
</quote>

OK... (I think!)

<quote>
http://d

/*+  APPEND */ specifically is a direct path insert, used in bulk loads for high speed loading.  You'll have to see:

http://d

to fully appreciate it
</quote>

Also asks me for passwords... lol. But again, I suppose it's in the documents somewhere.

Anyway, I've hopefully solved the present problem. I don't really want to get into a big argument about how good Oracle's documentation is or isn't. I shall go forth and make a second attempt at reading it, and see if I can find anything useful. (Like why my test database throws strange and obscure errors at me, etc.) If I have any more specific questions, I'll come back and ask.

Thanks for your time.


Tom Kyte
February 17, 2004 - 10:48 am UTC

if you are not willing to make up fake information and supply just a real email (at a yahoo account) to get access to the best material around -- well, sorry -- not much I can do.

I'm signed up at IBM and MS -- do you think they "really know me"?


once you do that, rownum will be revealed.


get an otn account, you aren't into Oracle until you have one. Make up stuff (i use barney fife, bfife@mayberry.rfd, policeman -- a lot myself)

cartesian product of all_objects with all_objects

Kunjan Mehta, July 12, 2004 - 3:27 am UTC

Hi Tom,

I find many of the examples on your site where you use all_objects - a simple yet effective way to create lots of rows or to create load. Being a frequent visitor of your site, I also tend to use it a lot.
Recently the application developed by us was giving quite a few problems on the client site, and hence we were running statspack to find the cause. In an attempt to simulate the problems offshore, we wanted to load the application, and I fired the query

12:40:54 SQL> select count(*) from all_objects;

  COUNT(*)
----------
     47264

Elapsed: 00:00:16.00

it took 16 seconds and CPU utilization shot to approx 50%. So far so good. (The database was on a Sun Solaris machine with dual CPUs.) Next, I fired the query 

12:41:22 SQL> select 47264 * 47264 from dual;

47264*47264
-----------
 2233885696

Elapsed: 00:00:00.10

and then I fired ...

12:41:50 SQL> select count(*) from all_objects, all_objects;

I was assuming that I would probably get the result in an hour or so, but ... OOPS, it has since been more than 64 and a half hours, the query has not given any results, and my CPU utilization is still going strong (!!) at 49%. 

below is the output of 'PRSTAT'.
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
  5418 oracle    186M  159M run     40    0  64:35:51  50% oracle/11

What I want to know is:
(a) is the query going into an infinite loop? Is that ok/normal/predictable for this query? Or would there be some problem in the OS / database?
(b) as usual, I tried to check your site again, and I found that the examples you have given related to all_objects all had one or another where clause to restrict the output, e.g. where rownum <= 65536 or where 1=0 etc. So is the query 'select count(*) from all_objects, all_objects' a 'holy sin'? Does Oracle create some sort of temp objects to execute this query which results in some sort of unending loop? 

I was using 
SQL*Plus: Release 9.0.1.3.0 - Production 
and 
Oracle9i Enterprise Edition Release 9.0.1.3.0 - Production
With the Partitioning option
JServer Release 9.0.1.3.0 - Production

 

Tom Kyte
July 12, 2004 - 11:19 am UTC

even using just straightforward "multiply and divide" logic, you are talking about:

ops$tkyte@ORA9IR2> select 47264*16/60/60 from dual;
 
47264*16/60/60
--------------
    210.062222


210 hours.  

that is 2 BILLION rows that will be output -- every row in all_objects joined with every other row in all_objects! 

all_objects, all_objects

Kunjan Mehta, July 13, 2004 - 2:13 am UTC

Hi tom,

fine & thanks. I miscalculated: I assumed that if 47264 rows take 16 seconds then 47264 * 47264 rows should take 16 * 16 sec - that was the mistake. As you wrote, it would take 47264 * 16 secs.

another doubt: since a simple count(*) of 47264 rows takes 16 seconds (2954 rows per second) in our setup (whatever hardware, DB parameters, network we have), can we deduce the following from the above?
(1) we would get approx 3000 rows per second.
(2) we cannot process more than 3000 rows per second in our existing setup, considering that in a live run more complex queries would be there and Oracle would have to do lots of other things as well.

I think the 1st deduction would be wrong. Can you please guide?



Tom Kyte
July 13, 2004 - 11:37 am UTC

getting 3,000 rows per second is not something you can really measure, unless you add a ton of other assumptions.

3,000 rows via a full scan of a simple table in cache?
3,000 rows via a full scan of a simple table not in cache?
3,000 rows from a complex 5 table join with many predicates?
?????

and so on.  On my laptop - i get 10's of thousands of rows per second from all_objects (depending on the number of physical IO's i must perform to do it)

ops$tkyte@ORA9IR2> set timing on
ops$tkyte@ORA9IR2> set autotrace traceonly statistics;
ops$tkyte@ORA9IR2> set arraysize 500
ops$tkyte@ORA9IR2> select * from all_objects;
 
30654 rows selected.
 
Elapsed: 00:00:01.46
 
Statistics
----------------------------------------------------------
         25  recursive calls
          0  db block gets
     173015  consistent gets
          0  physical reads
          0  redo size
    1935866  bytes sent via SQL*Net to client
       1170  bytes received via SQL*Net from client
         63  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      30654  rows processed
 

but like I said "rows per second" is not really a metric one typically measures.

 

What is "Invisible" on?

Phil W, September 01, 2004 - 10:21 am UTC

<quote>When at Uni, I didn't have access to *any* documentation at all. Just had to
take very good notes. ;-)
</quote>

Sounds like school mate not a degree course. Tom, I honestly do not know why you have the patience to answer questions like this.

A reader, August 18, 2005 - 10:51 pm UTC

Tom,
I have got the following in the alert.log.

ORA-01562: failed to extend rollback segment number 7
ORA-01628: max # extents (3000) reached for rollback segment RBS07

it is a 3GB rollback segment, and I had not seen the error till now. I have asked around about what has been changed lately, but all said no change. So it's my job to investigate what happened.

Since it has happened, how can we find out what caused it? I have statspack set up for every hour, and the audit trail is enabled.

Thanks


Tom Kyte
August 18, 2005 - 11:30 pm UTC

you either had a really large transaction run or.....

someone did a transaction and left for the week. We cannot reuse the extents they are in until they come back and commit.

Look in v$transaction and join to v$session (session.taddr = transaction.addr) and using last_call_et in v$session, see if you have any inactive, but really long transactions out there.

(if you use Automatic undo management, we can do better, we cannot reuse their extents but we can steal extents on either side of them)
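
The join described above might be sketched like this (the column list here is a minimal illustration):

```sql
-- Sketch of the v$session / v$transaction join described above.
SELECT s.sid, s.serial#, s.username, s.status,
       s.last_call_et,   -- seconds since the session last made a call
       t.used_ublk       -- undo blocks held by its open transaction
  FROM v$session s, v$transaction t
 WHERE s.taddr = t.addr
 ORDER BY s.last_call_et DESC;
```

A session showing a large last_call_et with an open transaction is the "left for the week" candidate.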

How to find "bad" codes for error 1628

A reader, November 07, 2005 - 10:02 am UTC

Tom,
Thanks for the comment on my question.
We have a very large installation with a lot of politics involved. They do not want to tell me what applications have been or will be loaded into production. I serve them as a consultant, and they do not want me to test their applications for performance prior to implementation. But they want me to troubleshoot problems when they occur. A very difficult work environment.

Basically, I can only react to problems on the system. It's 24x7, but we are not working during the night. ORA-1628 (3 GB segments) occurs a few times at night. Since they do not want to tell me what applications run at night, how can I track down the offenders? There were no trace files generated by ORA-1628. How can I implement a method to capture the code that caused ORA-1628? We are on 8.1.7.4.



Tom Kyte
November 07, 2005 - 11:59 am UTC

... they do not want me to test their applications for performance prior to implementation. But they want me to troubleshoot problems when they occur. A very difficult work environment...

difficult? I have other terms for such a situation - many of them are politically incorrect so I shall refrain from using them here.


You don't need to track down the offenders, you only need to fix the rollback segments to be permanently larger.

You can use a SERVERERROR trigger to capture them, however. See the Application Developer's Guide:

http://docs.oracle.com/docs/cd/A87860_01/doc/appdev.817/a76939/adg14evt.htm#999369

for details. you can capture the v$session information.
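A SERVERERROR trigger along those lines might look like this. A hedged sketch only: the log table and its columns are hypothetical names for illustration, and the trigger fires after the error is raised, so the transaction's statement has already failed:

```sql
-- Hypothetical log table for captured errors.
create table error_log (
  logged_at  date,
  err_code   number,
  username   varchar2(30),
  machine    varchar2(64),
  program    varchar2(64)
);

-- Fire after any server error; record session details for ORA-1628.
create or replace trigger log_server_errors
after servererror on database
begin
  if ora_is_servererror(1628) then
    insert into error_log
      select sysdate, 1628, username, machine, program
        from v$session
       where audsid = userenv('sessionid');
  end if;
end;
/
```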

server error trigger

A reader, November 07, 2005 - 7:35 pm UTC

"" you can capture the v$session information. ""

if assumed to join v$sql to get the queies, then there would be too many in the sharedpool at ORA 1628 occurrs, because it's a large applications. we may still not be able to nail it.

thanks.

Tom Kyte
November 08, 2005 - 9:44 pm UTC

all I said was you can capture the v$session information? not able to nail "what" exactly?



extending rollback segments

Tony, January 23, 2006 - 5:40 am UTC

<quote>
You can change the maxextents via:

alter rollback segment rbs2 storage ( maxextents N );


if you are going to change next, you could:

alter rollback segment rbs2 storage ( next 524288 );

but I would recommend DROPPING and recreating them so they have uniformly sized
extents.
</quote>

Which of the above would you recommend? It seems to me that increasing the max extents would be the easiest thing to do.

How many rollback segments do you recommend for an online transactional database? Currently we use 4; is this number too small?

Is there a limit to the max extents in a rollback segment?


Tom Kyte
January 23, 2006 - 10:27 am UTC

I would recommend using automatic undo management in the database (AUM), a feature that was not available way back when.


That is what I would recommend today.
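Switching to AUM (9i and later) is a matter of a few init parameters. A minimal sketch; the tablespace name, file path, and retention value are illustrative assumptions:

```sql
-- Create an undo tablespace to hold the system-managed undo segments.
create undo tablespace undotbs1
  datafile '/u01/oradata/undotbs1.dbf' size 2g autoextend on;

-- Point the instance at it; undo_management is not dynamic,
-- so this takes effect on the next restart.
alter system set undo_management = auto scope=spfile;
alter system set undo_tablespace = undotbs1;
alter system set undo_retention  = 900;   -- seconds of undo to keep
```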

extending rollback segments in 8i

Tony, January 23, 2006 - 10:46 am UTC

Sorry, I forgot to include the database version. This database still runs on 8i.
Can you answer the above questions relating to an 8i database?

Tom Kyte
January 23, 2006 - 10:55 am UTC

my recommendation back then was and still is to use equi-sized rollback segments.

create new ones, drop old ones after you don't need them for read consistency anymore.


ORA-01628: Max # Extents (32765) in 10.2.0.5/Auto undo

Sita, June 11, 2012 - 9:17 pm UTC

Hi Tom:

Oracle Version: 10.2.0.5 and DB_BLOCK_SIZE: 8K

ORA-01628: Max # Extents (32765) For Rollback Segment _SYSSMUx$

We are using AUM and are constantly hitting this error. I searched Metalink and other notes, and they say to break the transaction into smaller pieces and rerun it to avoid this error.


At the time of the error, the session was running a GRANT statement, and we noticed LOCK_MODE=3 on the OBJAUTH$ table. But no rows were found in v$transaction either, so I am not sure why ORA-01628 was raised here.

How do we fix this error? MAXEXTENTS on undo segments (AUM) cannot be increased either.

One of the notes says this error is not seen in 11.2.0.3.
Tom Kyte
June 12, 2012 - 4:35 am UTC

please open a service request for something like this.

ORA-1628

A reader, November 28, 2012 - 12:19 am UTC

On Sunday we found many unexpired extents (count > 32K) for a few segments. Yesterday and today we hit ORA-01628: max # extents (32765) reached for rollback segment.
Is it an error or a warning? What is the intent?
Will Oracle use the other free segments available (75% is used now) after throwing this error?



Tom Kyte
November 29, 2012 - 7:59 am UTC

sounds like you might have an outstanding transaction that has never committed - for days and days.


look in v$transaction


if a transaction begins - it will be assigned to an undo segment (rollback segment). it will put something into an extent in that undo segment.

that extent cannot be reused until that transaction commits. In fact, every extent AFTER that extent in that undo segment cannot be reused until that transaction commits. We'll just have to keep adding extents - until we cannot, then failure sets in.


so, it sounds like you have a transaction that has been outstanding for days and days. look in v$transaction for it.

A reader, November 29, 2012 - 10:45 am UTC


As far as I know, different transactions can share the same rollback segment - maybe I am wrong.

My understanding is:

say transaction T1 is assigned extent E1; then another transaction T2 can be assigned the next extent E2 in the same segment.

Your comment above:

"In fact, every extent AFTER that extent in that undo segment cannot be reused until that transaction commits"

Can you please explain the reason for not assigning it to another transaction?


Thanks in Advance

Tom Kyte
November 30, 2012 - 3:58 am UTC

As far as I know, different transactions can share the same rollback segment - maybe I am wrong.


absolutely they can, nothing on this page says otherwise...


say transaction T1 is assigned extent E1; then another transaction T2 can be assigned the next extent E2 in the same segment.


actually, other transactions such as T2 can use E1 as well - it isn't allocated extent by extent to transactions - it is block by block.


Can you please explain the reason for not assigning to other transaction?


they are assigned to other transactions - it is because UNDO segments are managed in a circular buffer.

Your transaction T1 might have undo in E1, E100, E1000, E2000 and so on. Unless and until it commits or rolls back, extent E1 cannot be reused - which means that E2 cannot be reused, because E2 comes *after* E1 in the circle. Which means E3 cannot be reused, and so on. As soon as ALL transactions that used E1 have committed, we can reuse E1 and not before. Then E2 can be considered, and then E3, and so on.

Read the Database Concepts, it describes this in detail:
http://docs.oracle.com/cd/E11882_01/server.112/e25789/logical.htm#CHDGJJEJ
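Under AUM, the circular-buffer state Tom describes can be observed directly. A hedged sketch: ACTIVE extents belong to open transactions, UNEXPIRED ones are being kept to honor undo_retention, and EXPIRED ones are free for reuse:

```sql
-- Break down each undo segment's extents by status, to see how much
-- of the circular buffer is pinned by active or unexpired undo.
select segment_name, status, count(*) as extents,
       round(sum(bytes)/1024/1024) as mb
  from dba_undo_extents
 group by segment_name, status
 order by segment_name, status;
```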

A reader, November 30, 2012 - 4:29 am UTC

Thanks for clarification!!